\section{Introduction}
Let $K$ be a field, $m$ a positive integer and $n$ an integer prime to the characteristic of $K$. The Milnor-Bloch-Kato conjecture asserts that the Galois symbol
\begin{equation}
\label{eqn:galoisclass}
K^M_m(K)/n K^M_m(K) \quad \longrightarrow \quad H^m(K, {\mathbb Z}/n{\mathbb Z}(m))
\end{equation}
from Milnor $K$-groups to Galois cohomology is bijective. Recently, Rost and Voevodsky have announced a proof (special cases were obtained earlier by Merkurjev-Suslin, Rost and Voevodsky).
In \cite{somekawa}, Somekawa has introduced certain {\it generalized Milnor $K$-groups} $K(K; A_1, \ldots, A_m)$ attached to semi-abelian varieties $A_1, \ldots, A_m$. If $A_1 = \ldots = A_m = {\mathbb G}_m$ is the one-dimensional split torus they agree with the usual $K^M_m(K)$. If $m=2$, $A_1 = \Jac_X$ and $A_2 = \Jac_Y$ are the Jacobians of smooth, projective and connected curves $X$ and $Y$ over $K$ having a $K$-rational point, then $K(K; A_1, A_2)$ is the kernel of the Albanese map $\CH_0(X\times Y)_{\deg = 0} \to \Alb_{X\times Y}(K)$.
Somekawa has defined a Galois symbol
\begin{equation}
\label{eqn:generalgaloisclass}
K(K; A_1, \ldots, A_m)/n K(K; A_1, \ldots, A_m) \longrightarrow H^m(K,A_1[n]\otimes\ldots\otimes A_m[n])
\end{equation}
and conjectured that it is always injective. In this note we present a counterexample (see section \ref{section:somekawa}). Let us describe it briefly. Let $L/K$ be a cyclic extension of degree $n$ and $\sigma$ a generator of the Galois group $\Gal(L/K)$. Let $T$ be the kernel of the norm map $\Res_{L/K} {\mathbb G}_m \to {\mathbb G}_m$. We show that the norm $K(L; T,T) \to K(K; T,T)$ induces an isomorphism $K(L; T,T)/(1-\sigma) \to K(K; T,T)$. On the other hand, the corresponding map of Galois cohomology groups $H^2(L,T[n]\otimes T[n])/(1-\sigma) \to H^2(K,T[n]\otimes T[n])$ is neither injective nor surjective (for a suitable choice of $L/K$). Note that, since $T$ is split over $L$, the Galois symbol $K(L; T,T)/n \to H^2(L,T[n]\otimes T[n])$ is bijective. Consequently, $K(K; T,T)/n \to H^2(K,T[n]\otimes T[n])$ is in general not injective.
In section \ref{section:beilinson} we show that the motive $M(T\times T)$ gives a counterexample to another generalization of the Milnor-Bloch-Kato conjecture (proposed by Beilinson).
We would like to thank Bruno Kahn for his comments on the first version of this note. In particular he pointed out to us that our counterexample to Somekawa's conjecture should also provide a counterexample to Beilinson's conjecture.
\section{Counterexample to Somekawa's conjecture}
\label{section:somekawa}
\paragraph{Algebraic groups as Mackey-functors} Let $K$ be a field. For a finite field extension $L/K$ and commutative algebraic groups $G$ over $K$ and $H$ over $L$ we denote by $G_L$ the base change of $G$ to $L$ and by $\Res_{L/K} H$ the Weil restriction of $H$. The functor $G \mapsto G_L$ is left and right adjoint to $H \mapsto \Res_{L/K} H$. In particular there are adjunction homomorphisms $\iota_{L/K}: G \to \Res_{L/K} G_L$ and $N_{L/K}:\Res_{L/K} G_L \to G$. When $L/K$ is a Galois extension, the Galois group $\Gal(L/K)$ acts canonically on $\Res_{L/K} G_L$. The following simple result, whose proof will be left to the reader, will be used later.
\begin{lemma}
\label{lemma:norm}
Let $L/K$ be a cyclic Galois extension of degree $n$, $\sigma$ a generator of $\Gal(L/K)$ and let $G$ be a commutative algebraic group over $K$. Let $G'$ be the kernel of $N_{L/K}:\Res_{L/K} G_L \to G$ so that $G'_L \cong G_L^{n-1}$. Then the map
\[
\Res_{L/K} (G_L)^{n-1} \cong\Res_{L/K}G'_L\stackrel{N_{L/K}}{\longrightarrow} G' \hookrightarrow \Res_{L/K} G_L
\]
is given on the $i$-th summand by $1-\sigma^i$.
\end{lemma}
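For instance, if $n=2$ there is a single summand and the map in question is $1-\sigma: \Res_{L/K}G_L \to \Res_{L/K}G_L$. That it takes values in $G'$ can be checked directly: writing the group law additively, $N_{L/K}\circ\sigma = N_{L/K}$, hence $N_{L/K}(x-\sigma x)=0$.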
We denote by ${\cal C}_K$ the category of finite reduced $K$-schemes. Thus each object of ${\cal C}_K$ is isomorphic to $\Spec( E_1 \times \ldots \times E_r)$ where $E_1, \ldots, E_r/K$ are finite field extensions. A commutative algebraic group $G$ over $K$ defines a {\it Mackey-functor}, i.e.\ a co- and contravariant functor $G: {\cal C}_K \to \ab$ satisfying (i), (ii) below. If $f:X \to Y$ is a morphism we denote by $f_*: G(X)\to G(Y)$ and $f^*: G(Y)\to G(X)$ the homomorphisms induced by co- and contravariant functoriality respectively.
\begin{itemize}
\item[(i)] If $X = X_1 \coprod X_2 \in \Obj({\cal C}_K)$ then $G(X) = G(X_1) \oplus G(X_2)$.
\item[(ii)]
If
\[
\begin{CD}
X' @>> f' > Y'\\
@VV g' V @VV g V\\
X @>> f > Y
\end{CD}
\]
is a cartesian square in ${\cal C}_K$ then $g^* f_* = (f')_* (g')^*$.
\end{itemize}
If $K \subseteq E_1 \subseteq E_2$ are two finite field extensions and $f: \Spec E_2 \to \Spec E_1$ the corresponding map in ${\cal C}_K$ then $f^*$ (resp.\ $f_*$) is given by $\iota_{E_2/E_1}: G(E_1) \to G(E_2)$ (resp.\ $N_{E_2/E_1}: G(E_2)\to G(E_1)$).
\paragraph{Local symbols} We also recall the notion of a {\it local symbol} (\cite{serre} and \cite{somekawa}) for $G$. Let $X\to \Spec K$ be a proper non-singular algebraic curve (note that we do not assume that $X$ is connected). Let $K(X)$ denote the ring of rational functions on $X$ and $|X|$ the set of closed points of $X$. For $P \in |X|$ we denote by $K_P$ the quotient field of the completion ${\widehat{\cO}}_{X,P}$ of ${\cal O}_{X,P}$, by $v_P: K_P\to {\mathbb Z}\cup \{\infty\}$ the normalized valuation and by $K(P)$ the residue field of $K_P$. The local symbol at $P$ is a homomorphism $\partial_P: (K_P)^* \otimes G(K_P) \to G(K(P))$. It is characterized by the following properties:
\begin{itemize}
\item[(i)] If $f\in (K_P)^*$ and $g\in G({\widehat{\cO}}_{X,P})$ then $\partial_P(f\otimes g) = v_P(f) g(P)$. Here $g(P)$ is the image of $g$ under the canonical map $G({\widehat{\cO}}_{X,P}) \to G(K(P))$.
\item[(ii)] For $f\in K(X)^*$ and $g\in G(K(X))$ we have $\sum_{P \in |X|} \, N_{K(P)/K}(\partial_P(f\otimes g)) = 0$.
\end{itemize}
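For instance, for $G={\mathbb G}_m$ the local symbol is (with these normalizations) the tame symbol
\[
\partial_P(f\otimes g) = \left((-1)^{v_P(f)v_P(g)}\,\frac{g^{v_P(f)}}{f^{v_P(g)}}\right)(P)
\]
and property (ii) specializes to the Weil reciprocity law.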
\paragraph{Milnor $K$-groups attached to commutative algebraic groups} Let $G_1, \ldots, G_m$ be commutative algebraic groups over $K$. In \cite{somekawa} Somekawa has introduced the Milnor $K$-group $K(K;G_1,\ldots, G_m)$ (actually Somekawa considered only the case of semiabelian varieties though his construction works for arbitrary commutative algebraic groups). It is given as
\[
K(K;G_1,\ldots, G_m) = \left(\bigoplus_{X} \, G_1(X) \otimes \ldots \otimes G_m(X)\right)/R
\]
where $X$ runs through all objects of ${\cal C}_K$ and where the subgroup $R$ is generated by the following elements:
\noindent (R1) If $f:X \to Y$ is a morphism in ${\cal C}_K$ and if $x_{i_0}\in G_{i_0}(X)$ for some $i_0$ and $x_i\in G_i(Y)$ for $i\ne i_0$, then
\[
x_1\otimes \ldots\otimes f_*(x_{i_0}) \otimes\ldots \otimes x_m - f^*(x_1)\otimes \ldots\otimes x_{i_0} \otimes\ldots \otimes f^*(x_m)\in R.
\]
\noindent (R2) Let $X\to \Spec K$ be a proper non-singular curve, $f\in K(X)^*$ and $g_i\in G_i(K(X))$. Assume that for each $P\in |X|$ there exists $i(P)$ such that $g_i\in G_i({\widehat{\cO}}_{X,P})$ for all $i\ne i(P)$. Then
\[
\sum_{P \in |X|} \, g_1(P) \otimes \ldots \otimes \partial_P(f\otimes g_{i(P)}) \otimes \ldots \otimes g_m(P)\in R.
\]
For $X\in {\cal C}_K$ and $x_i\in G_i(X)$ for $i=1, \ldots , m$ we write $\{x_1, \ldots, x_m\}_{X/K}$ for the image of $x_1\otimes \ldots \otimes x_m$ in $K(K;G_1,\ldots, G_m)$ (elements of this form will be referred to as {\it symbols}).
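For instance, taking $X = \Spec E$ for a finite extension $E/K$, $Y = \Spec K$ and $i_0 = 1$ in (R1) yields the projection formula
\[
\{N_{E/K}(x_1), x_2, \ldots, x_m\}_{K/K} = \{x_1, \iota_{E/K}(x_2), \ldots, \iota_{E/K}(x_m)\}_{E/K}
\]
for $x_1\in G_1(E)$ and $x_i\in G_i(K)$, $i\ne 1$.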
A sequence of algebraic groups $G' \to G \to G''$ over $K$ will be called {\it Zariski exact} if $G'(E) \to G(E) \to G''(E)$ is exact for every field extension $E/K$. The proof of the following result is straightforward and will therefore be omitted.
\begin{lemma}
\label{lemma:rightexact}
Let $m$ be a positive integer and let $i\in \{1, \ldots, m\}$. Let $G_1, \ldots, G_m$ be commutative algebraic groups over $K$ and let $G_i' \to G_i \to G_i''\to 1$ be a Zariski exact sequence of commutative algebraic groups over $K$. Then the sequence
\[
K(K;G_1,\ldots, G_i',\ldots )\to K(K;G_1,\ldots, G_i,\ldots )\to K(K;G_1,\ldots, G_i'',\ldots)\to 0
\]
is exact as well.
\end{lemma}
\paragraph{The norm map}
Let $G_1, \ldots, G_m$ be commutative algebraic groups over $K$ and let $L/K$ be a finite extension. Set $K(L;G_1,\ldots, G_m) \colon = K(L;(G_1)_{L}, \ldots, (G_m)_{L})$. Then we have the norm map \cite{somekawa}
\begin{equation}
\label{eqn:norm}
N_{L/K}: K(L;G_1,\ldots, G_m) \longrightarrow K(K;G_1,\ldots, G_m)
\end{equation}
defined on symbols by $N_{L/K}(\{x_1, \ldots, x_m\}_{X/L}) = \{x_1, \ldots, x_m\}_{X/K}$ for any $X\in {\cal C}_L$ and $x_i \in G_i(X) ~(i=1, \ldots, m).$
We give another interpretation of (\ref{eqn:norm}) below when $L/K$ is separable. It is based on the following result.
\begin{lemma}
\label{lemma:weilres}
Let $L/K$ be a finite separable extension and let $i, m$ be positive integers with $i\le m$. Let $G_1, \ldots, G_{i-1}, G_{i+1}, \ldots, G_m$ be commutative algebraic groups over $K$ and let $G_i$ be a commutative algebraic group over $L$. Then, we have an isomorphism
\[
K(K;G_1,\ldots, \Res_{L/K} G_i,\ldots, G_m ) \cong K(L;(G_1)_{L},\ldots, G_i , \ldots, (G_m)_{L}).
\]
\end{lemma}
{\em Proof.} To simplify the notation we assume that $i=m$. We denote by $\pi^{-1}: {\cal C}_K \to {\cal C}_L$ and $\pi: {\cal C}_L \to {\cal C}_K$ the functors
\[
\pi^{-1}(X \to \Spec K) \colon = (X\otimes_K L \to \Spec L),
\]
\[
\pi(Y \to \Spec L) \colon = ( Y \to \Spec L \to \Spec K).
\]
The functor $\pi$ is left adjoint to $\pi^{-1}$. For $X\in {\cal C}_K$ and $Y\in {\cal C}_L$ let
\[
p_X: X\otimes_K L \longrightarrow X, \quad {\iota}_Y: Y \longrightarrow Y\otimes_K L
\]
be the adjunction morphisms. We define homomorphisms
\[
\phi: K(K;G_1,\ldots, G_{m-1}, \Res_{L/K} G_m) \longrightarrow K(L;(G_1)_{L},\ldots, (G_{m-1})_{L}, G_m),
\]
\[
\psi: K(L;(G_1)_{L},\ldots, (G_{m-1})_{L}, G_m) \longrightarrow K(K;G_1,\ldots, G_{m-1}, \Res_{L/K} G_m).
\]
For $X\in {\cal C}_K$, $x_1\in G_1(X), \ldots, x_{m-1}\in G_{m-1}(X)$ and $x_m\in G_m(X\otimes_K L)$ we put
\[
\phi(\{x_1, \ldots, x_m\}_{X/K}) = \{p_X^*(x_1), \ldots, p_X^*(x_{m-1}), x_m\}_{(X\otimes_K L)/L}.
\]
Conversely, for $Y\in {\cal C}_L$ and $y_1\in G_1(Y), \ldots, y_m \in G_m(Y)$ let
\[
\psi(\{y_1, \ldots, y_{m-1}, y_m\}_{Y/L}) = \{y_1, \ldots, y_{m-1}, {\iota}_{Y*}(y_m)\}_{Y/K}.
\]
One can easily verify that these maps are well-defined and mutually inverse to each other.
\ifmmode\eqno\Box\else\noproof\vskip0.8truecm\fi
Let $G_1, \ldots, G_m$ be commutative algebraic groups over $K$ and let $L/K$ be a finite separable extension. Take any $i\in \{1,\ldots, m\}$. The map $N_{L/K}: \Res_{L/K}(G_{i})_{L}\to G_i$ induces a map $K(K;G_1,\ldots, \Res_{L/K} (G_{i})_{L},\ldots, G_m) \longrightarrow K(K;G_1,\ldots, G_m)$, and the composition of it with the isomorphism $\psi$ above coincides with the norm map \eqref{eqn:norm}. When $L/K$ is a Galois extension, the action of $\Gal(L/K)$ on $\Res_{L/K} (G_{i})_{L}$ induces its action on
\[
K(L;G_1,\ldots, G_m)\cong K(K;G_1,\ldots, \Res_{L/K} (G_{i})_{L},\ldots, G_m)
\]
and we have $N_{L/K} \circ \sigma = N_{L/K}$ for all $\sigma\in \Gal(L/K)$. This action does not depend on the choice of $i$.
\begin{lemma}
\label{lemma:hilbert90}
Let $L/K$ be a cyclic Galois extension and let $\sigma\in \Gal(L/K)$ be a generator. Suppose that for at least two distinct indices $i\in \{1, \ldots, m\}$ the sequence
\begin{equation}
\label{eqn:normsurj}
\Res_{L/K}(G_{i})_{L}\stackrel{N_{L/K}}{\longrightarrow} G_i\longrightarrow 1
\end{equation}
is Zariski exact. Then the sequence of abelian groups
\[
K(L;G_1,\ldots, G_m)\stackrel{1-\sigma}{\longrightarrow} K(L;G_1,\ldots, G_m)\stackrel{N_{L/K}}{\longrightarrow} K(K;G_1,\ldots, G_m)\longrightarrow 0
\]
is exact.
\end{lemma}
{\em Proof.} Suppose that (\ref{eqn:normsurj}) is exact for $i=m-1, m$. Let $G_m' \colon = \Ker(N_{L/K}:\Res_{L/K}(G_m)_L\to G_m)$. By Lemmas \ref{lemma:rightexact} and \ref{lemma:weilres} there are exact sequences
\begin{align}
\label{align:exakt1}
& K(K;G_1,\ldots, G_m')\to K(L;G_1,\ldots, G_m)\stackrel{N_{L/K}}{\longrightarrow} K(K;G_1,\ldots, G_m)\to 0\\
\label{align:exakt2}
& K(L;G_1,\ldots, G_{m-1}, G_m')\stackrel{N_{L/K}}{\longrightarrow} K(K;G_1,\ldots, G_{m-1}, G_m')\longrightarrow 0.
\end{align}
Since $(G_m')_L\cong (G_m)_L^{n-1}$ ($n\colon =[L:K]$) we can replace the first group of (\ref{align:exakt2}) by $K(L;G_1,\ldots, G_m)^{n-1}$. By Lemma \ref{lemma:norm} the composite
\[
K(L;G_1,\ldots, G_m)^{n-1} \to K(K;G_1,\ldots, G_{m-1}, G_m')\to K(L;G_1,\ldots, G_m)
\]
is given on the $i$-th summand by $1-\sigma^i$. The assertion follows. \ifmmode\eqno\Box\else\noproof\vskip0.8truecm\fi
\paragraph{Galois symbol}
Let $G_1, \ldots, G_m$ be connected commutative algebraic groups over $K$, and let $n$ be an integer prime to the characteristic of $K$.
For any finite extension $L/K$, we have a homomorphism \cite{somekawa}
\begin{equation}
\label{eqn:galoissymbol}
h_L: K(L; G_1, \ldots, G_m)/n
\longrightarrow H^m(L,G_1[n]\otimes\ldots\otimes G_m[n])
\end{equation}
called the {\it Galois symbol}.
This is characterized by the following properties.
\begin{itemize}
\item[(i)]
If $x_i \in G_i(L)$ for $i=1, \ldots, m$,
then
$h_L( \{x_1, \ldots, x_m \}_{L/L} ) = (x_1) \cup \ldots \cup (x_m)$.
Here we denote by $(x_i)$ the image of
$x_i$ in $H^1(L, G_i[n])$ under the connecting homomorphism
associated to the exact sequence
$1 \to G_i[n] \to G_i \overset{n}{\to} G_i \to 1.$
\item[(ii)]
If $M/L/K$ is a tower of finite extensions and if $M/L$ is
separable (resp. purely inseparable), then the diagram
\[
\begin{CD}
K(M; G_1, \ldots, G_m)/n @>> h_M >
H^m(M,G_1[n]\otimes\ldots\otimes G_m[n])\\
@VV N_{M/L} V
@VVV\\
K(L; G_1, \ldots, G_m)/n @>> h_L >
H^m(L, G_1[n]\otimes\ldots\otimes G_m[n])\\
\end{CD}
\]
is commutative, where the right vertical map is the corestriction
(resp. the multiplication by $[M:L]$
under the identification
$H^m(M,G_1[n]\otimes\ldots\otimes G_m[n]) \cong
H^m(L, G_1[n]\otimes\ldots\otimes G_m[n])$ ).
\end{itemize}
Property (i) implies in particular that (\ref{eqn:galoissymbol}) coincides with the usual Galois symbol (\ref{eqn:galoisclass}) in the case $G_1= \ldots = G_m = {\mathbb G}_m$. In \cite{somekawa} Remark 1.7, Somekawa conjectured that the Galois symbol associated to semiabelian varieties should be injective.
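Note also that for $m=1$ and $G_1 = {\mathbb G}_m$ the relation (R1) identifies $K(K;{\mathbb G}_m)$ with $K^*$, and property (i) shows that $h_K$ is the Kummer isomorphism
\[
K^*/(K^*)^n \stackrel{\sim}{\longrightarrow} H^1(K, \mu_n).
\]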
\paragraph{Galois cohomology of cyclic extensions}
Let $L/K$ be a cyclic Galois extension of degree $n$ and let $\sigma$ be a generator of $G \colon = \Gal(L/K)$. For a discrete $G_K$-module $M$, tensoring the short exact sequence of $G$-modules
\begin{equation}
\label{eqn:char}
0\longrightarrow {\mathbb Z} \longrightarrow {\mathbb Z}[G] \stackrel{1-\sigma}{\longrightarrow} {\mathbb Z}[G] \longrightarrow {\mathbb Z}\longrightarrow 0
\end{equation}
with $M$ yields a distinguished triangle
\begin{equation}
\label{eqn:triangle}
M[1] \stackrel{\alpha}{\longrightarrow} C^{\punkt}(M) \stackrel{\beta}{\longrightarrow} M \stackrel{\gamma}{\longrightarrow} M[2]
\end{equation}
in the derived category $D(G_K)$. Here we denote by $C^{\punkt}(M)$ the complex
\[
\Res_{L/K} M \stackrel{1-\sigma}{\longrightarrow}\Res_{L/K} M
\]
concentrated in degree $-1$ and $0$. The spectral sequence
\[
E^{p,q}_1 = H^q(K, C^p(M))\Longrightarrow E^{p+q} = H^{p+q}(K,C^{\punkt}(M))
\]
induces short exact sequences
\begin{equation}
\label{eqn:edge}
0 \to H^q(L, M)_G \to H^q(K,C^{\punkt}(M)) \to H^{q+1}(L, M)^G\to 0.
\end{equation}
It is easy to see that the composite
\[
H^{q+1}(K, M)\stackrel{\alpha}{\longrightarrow}H^q(K,C^{\punkt}(M))\to H^{q+1}(L, M)^G
\]
is the restriction and
\[
H^q(L, M)_G \to H^q(K,C^{\punkt}(M)) \stackrel{\beta}{\longrightarrow}H^q(K,M)
\]
is induced by the corestriction. In particular we have $\gamma(H^q(K,M)) \subseteq \break \Ker(\res: H^{q+2}(K, M) \to H^{q+2}(L, M))$ hence
\begin{equation}
\label{eqn:killn}
n \gamma(H^q(K,M)) = 0.
\end{equation}
For an integer $m$ prime to $\charac K$ and $r\in {\mathbb N}$ we write ${\mathbb Z}/m{\mathbb Z}(r) \colon = \mu_m^{\otimes^r}$ and
\[
H^3(L/K,{\mathbb Z}/m{\mathbb Z}(2))\colon = \Ker(H^3(K, {\mathbb Z}/m{\mathbb Z}(2))\stackrel{\res}{\longrightarrow}H^3(L, {\mathbb Z}/m{\mathbb Z}(2))).
\]
By restricting $\alpha: H^3(K, {\mathbb Z}/m{\mathbb Z}(2)) \to H^2(K,C^{\punkt}({\mathbb Z}/m{\mathbb Z}(2)))$ to the subgroup $H^3(L/K,{\mathbb Z}/m{\mathbb Z}(2))$ and composing it with the inverse of the first map in (\ref{eqn:edge}) we obtain a map
\begin{equation}
\label{eqn:keymap}
H^3(L/K,{\mathbb Z}/m{\mathbb Z}(2)) \to \Ker(H^2(L, {\mathbb Z}/m{\mathbb Z}(2))_G\stackrel{\cor}{\to} H^2(K, {\mathbb Z}/m{\mathbb Z}(2))).
\end{equation}
\begin{lemma}
\label{lemma:keylemma}
Assume that $n$ is prime to $\charac K$ and $\mu_{n^2}(\overline{K}) \subset K$. Then the homomorphism (\ref{eqn:keymap}) is injective for $m=n$.
\end{lemma}
{\em Proof.} It is enough to show that $\gamma: H^1(K, {\mathbb Z}/n{\mathbb Z}(2)) \to H^3(K, {\mathbb Z}/n{\mathbb Z}(2))$ is zero. Consider the commutative diagram
\[
\begin{CD}
H^1(K, {\mathbb Z}/n{\mathbb Z}(2)) @>>>H^1(K, {\mathbb Z}/n^2{\mathbb Z}(2))\\
@VV \gamma V@VV \gamma V\\
H^3(K, {\mathbb Z}/n{\mathbb Z}(2)) @>>>H^3(K, {\mathbb Z}/n^2{\mathbb Z}(2))
\end{CD}
\]
induced by the canonical injection ${\mathbb Z}/n{\mathbb Z}(2)\to {\mathbb Z}/n^2{\mathbb Z}(2)$. The assumption $\mu_{n^2}(\overline{K}) \subset K$ implies that the upper horizontal map can be identified with
\[
K^*/(K^*)^n \longrightarrow K^*/(K^*)^{n^2}, \quad x (K^*)^n \mapsto x^n (K^*)^{n^2}.
\]
In particular the image is contained in $n H^1(K, {\mathbb Z}/n^2{\mathbb Z}(2))$. By (\ref{eqn:killn}) it is mapped under $\gamma$ to $n\gamma(H^1(K, {\mathbb Z}/n^2{\mathbb Z}(2)))= 0$. On the other hand it is a simple consequence of the Merkurjev-Suslin theorem \cite{ms} that the lower horizontal map is injective. Hence $\gamma(H^1(K, {\mathbb Z}/n{\mathbb Z}(2))) = 0$. \ifmmode\eqno\Box\else\noproof\vskip0.8truecm\fi
\paragraph{The counterexample}
Let $L/K$ be as in the last section and let $T \colon = \Ker(N_{L/K}:\Res_{L/K} {\mathbb G}_m \to {\mathbb G}_m)$. We make the following assumptions
\begin{align}
\label{align:torus1}
& \mbox{$n$ is prime to $\charac K$ and $\mu_{n^2}(\overline{K}) \subset K$,}\hspace{3cm}\\
\label{align:torus2}
& \mbox{$H^3(L/K,{\mathbb Z}/n{\mathbb Z}(2))\ne 0$.}
\end{align}
\begin{proposition}
\label{proposition:counterex}
The Galois symbol $K(K; T,T)/n \to H^2(K, T[n]\otimes T[n])$ is not injective.
\end{proposition}
{\em Proof.} Let $\sigma$ be a generator of $G \colon = \Gal(L/K)$. The exact sequence
\[
1\longrightarrow {\mathbb G}_m \longrightarrow\Res_{L/K} {\mathbb G}_m \stackrel{1-\sigma}{\longrightarrow}\Res_{L/K} {\mathbb G}_m\stackrel{N_{L/K}}{\longrightarrow} {\mathbb G}_m \longrightarrow 1
\]
yields two short exact sequences
\begin{align}
\label{align:torus3}
& 1\longrightarrow {\mathbb G}_m \longrightarrow \Res_{L/K} {\mathbb G}_m \longrightarrow T \longrightarrow 1,\hspace{2cm} \\
\label{align:torus4}
& 1 \longrightarrow T \longrightarrow \Res_{L/K} {\mathbb G}_m\longrightarrow {\mathbb G}_m\longrightarrow 1.
\end{align}
Correspondingly, (\ref{eqn:char}) induces two short exact sequences
\begin{equation}
\label{eqn:cochar}
0 \to {\mathbb Z} \to {\mathbb Z}[G] \to X \to 0, \qquad 0 \to X \to{\mathbb Z}[G] \to{\mathbb Z} \to 0
\end{equation}
where $X$ denotes the cocharacter group of $T$. Note that the sequence (\ref{align:torus3}) is Zariski exact by Hilbert 90. Since the map $\Res_{L/K} {\mathbb G}_m \to T$ factors through $\Res_{L/K} {\mathbb G}_m \to \Res_{L/K} T \to T$ the sequence $\Res_{L/K} T \to T \to 1$ is Zariski exact as well. By Lemma \ref{lemma:hilbert90} the upper horizontal map in the diagram
\[
\begin{CD}
(K(L; T, T)/n)_G @> N_{L/K} >> K(K; T,T)/n\\
@VVV@VVV\\
H^2(L, T[n]\otimes T[n])_G @> \cor >>H^2(K, T[n]\otimes T[n])
\end{CD}
\]
is an isomorphism. The vertical maps are Galois symbols. Since $T_L$ is a split torus the left vertical map is an isomorphism by the Merkurjev-Suslin theorem \cite{ms}. Thus to finish the proof it remains to show that the lower horizontal arrow is not injective. Note that $T[n] \cong {\mathbb Z}/n{\mathbb Z}(1)\otimes X$. Hence the assertion follows from Lemma \ref{lemma:keylemma} and Lemma \ref{lemma:directsum} below. \ifmmode\eqno\Box\else\noproof\vskip0.8truecm\fi
\begin{lemma}
\label{lemma:directsum}
There exist homomorphisms of $G$-modules $e: {\mathbb Z} \to X\otimes_{{\mathbb Z}} X$ and $f: X\otimes_{{\mathbb Z}} X\to {\mathbb Z}$ such that $f\circ e: {\mathbb Z}\to {\mathbb Z}$ is multiplication by $n-1$.
\end{lemma}
{\em Proof.} For a $G$-module $M$ we write $M^{\vee}$ for the $G$-module $\Hom(M, {\mathbb Z})$. Let $(\,\,,\,\,):{\mathbb Z}[G] \otimes_{{\mathbb Z}} {\mathbb Z}[G] \longrightarrow {\mathbb Z}$
be the symmetric pairing given by
\begin{equation}
\label{eqn:pair}
(g,g') \quad = \quad \left\{ \begin{array}{ll}
1 & \mbox{if $g=g'$,}\\
0 & \mbox{if $g\ne g'$.}\\
\end{array} \right.
\end{equation}
It yields an isomorphism ${\mathbb Z}[G]\to {\mathbb Z}[G]^{\vee}$. For a submodule $M \subseteq {\mathbb Z}[G]$ let
\[
M^{\perp} = \{x\in {\mathbb Z}[G]\mid \, (x,m) = 0 \,\,\,\forall\, m\in M\}.
\]
Then we have $X^{\perp} = {\mathbb Z} S$ and $({\mathbb Z} S)^{\perp} = X$ where $S = \sum_{i=0}^{n-1} \, \sigma^i$. Thus (\ref{eqn:pair}) yields an isomorphism
$X \cong ({\mathbb Z}[G]/{\mathbb Z} S)^{\vee}$. By (\ref{eqn:cochar}) we have ${\mathbb Z}[G]/{\mathbb Z} S\cong X$, hence
\[
X\otimes_{{\mathbb Z}} X\quad \cong\quad X\otimes_{{\mathbb Z}} X^{\vee} \quad \cong\quad \Hom(X,X).
\]
Thus it suffices to prove the assertion for $\Hom(X,X)$. Obviously, for the two maps $e:{\mathbb Z} \to \Hom(X,X), m\mapsto m\id_X$ and $f: \Hom(X,X) \to {\mathbb Z}, \tau \mapsto \Tr(\tau)$, the composite $f\circ e$ is multiplication by $\rank(X) = n-1$. \ifmmode\eqno\Box\else\noproof\vskip0.8truecm\fi
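For instance, if $n=2$ then $X = {\mathbb Z}(1-\sigma)\subset {\mathbb Z}[G]$ with $\sigma$ acting by $-1$, so $X\otimes_{{\mathbb Z}} X\cong {\mathbb Z}$ as $G$-modules; one may take $e$ and $f$ to be the identity, and indeed $f\circ e = 1 = n-1$.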
\begin{remark}
\label{remark:nlocal} \rm It is easy to construct examples where the assumptions (\ref{align:torus1}) and (\ref{align:torus2}) above are satisfied. For instance if $K$ is a $2$-local field satisfying property (\ref{align:torus1}) and $L/K$ is any cyclic extension of degree $n$ then (\ref{align:torus2}) holds by \cite{kato}.
\end{remark}
\section{Counterexample to a conjecture of Beilinson}
\label{section:beilinson}
We first introduce some notation and recall a few facts from \cite{voevodsky} and \cite{mvw}. Let $K$ be a field of characteristic zero. Let $Cor_K$ denote the additive category of finite correspondences (\cite{mvw}, 1.1). The objects of $Cor_K$ are smooth separated $K$-schemes of finite type and for $X,Y\in \Obj(Cor_K)$ the group of morphisms $Cor_K(X,Y)$ is the free abelian group generated by integral closed subschemes $W$ of $X\times Y$ which are finite and surjective over $X$. Let $\Dnis{K}$ (resp.\ $\Det{K}$) denote the derived category of complexes of Nisnevich (resp.\ {\'e}tale) sheaves with transfer bounded from above.
The category of effective motivic complexes $\DMn(K)$ (resp.\ {\'e}tale effective motivic complexes $\DMe(K)$) is the full subcategory of $\Dnis{K}$ (resp.\ $\Det{K}$) which consists of complexes $C^\star$ with homotopy invariant cohomology sheaves $H^i(C^\star)$ for all $i$ (see \cite{voevodsky}, 3.1 or \cite{mvw}, 14.1, resp.\ 9.2). $\DMn(K)$ and $\DMe(K)$ are triangulated tensor categories. They are equipped with the t-structure induced from the standard t-structure on $\Dnis{K}$ (resp.\ $\Det{K}$). There is a covariant functor $M: Cor_K\to \DMn(K), X \mapsto M(X)$ and we have $M(X\times Y) = M(X)\otimes M(Y)$. There is also the ``change of topology'' functor $\alpha^*: \DMn(K)\to \DMe(K)$. It is a tensor functor which admits a right adjoint $R\alpha_*: \DMe(K)\to \DMn(K)$.
Beilinson \cite{beilinson} has proposed the following generalization of the Milnor-Bloch-Kato conjecture: For any smooth affine $K$-scheme $X$ the adjunction morphism $M(X)\to R\alpha_*\alpha^* M(X)$ induces an isomorphism on cohomology in degrees $\le 0$, i.e.\ the map
\begin{equation}
\label{eqn:beilinsonconj}
a_X: M(X)\longrightarrow t_{\le 0} R\alpha_*\alpha^* M(X)
\end{equation}
is an isomorphism in $\DMn(K)$.
If $X = ({\mathbb G}_m)^d= {\mathbb G}_m\times \ldots \times {\mathbb G}_m$ ($d$-fold product of ${\mathbb G}_m$) we have $M(X) \cong ({\mathbb Z} \oplus {\mathbb Z}(1)[1])^{\otimes d}$. Thus $a_X$ is an isomorphism if and only if
\begin{equation}
\label{eqn:mbkconj}
{\mathbb Z}(n) \longrightarrow t_{\le n} R\alpha_*\alpha^* {\mathbb Z}(n)
\end{equation}
is an isomorphism for all $n\le d$. It is known (compare \cite{sv}) that the Milnor-Bloch-Kato conjecture is equivalent to the assertion that (\ref{eqn:mbkconj}) is an isomorphism for all $n\ge 0$.
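For instance, for $n=1$ we have ${\mathbb Z}(1)\cong {\mathbb G}_m[-1]$ (\cite{mvw}, 4.1), and the assertion that (\ref{eqn:mbkconj}) is an isomorphism amounts to Hilbert's Theorem 90 in Grothendieck's form, i.e.\ the vanishing of the {\'e}tale cohomology group $H^1({\cal O}, {\mathbb G}_m)$ for local rings ${\cal O}$.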
Let $L/K$ be a separable quadratic extension and let $T \colon = \Ker(N_{L/K}:\Res_{L/K} {\mathbb G}_m \to {\mathbb G}_m)$. We shall show that (\ref{eqn:beilinsonconj}) is in general not an isomorphism for $X = T^n$ for $n\ge 2$.
By (\cite{hk}, 7.3) there exists a canonical decomposition $M(T) = {\mathbb Z} \oplus {\mathbb Z}(L/K, 1)[1]$
where ${\mathbb Z}(L/K, 1)$ is the cone of the morphism ${\mathbb Z}(1)\to \Res_{L/K} {\mathbb Z}(1)$.
\begin{remarks}
\rm (a) Here is a more explicit description of the motive ${\mathbb Z}(L/K, 1)$. The torus $T$ defines a homotopy invariant {\'e}tale (hence Nisnevich) sheaf with transfer and therefore an element of $\DMn(K)$. We have
\[
{\mathbb Z}(L/K, 1) \quad \cong \quad T[-1].
\]
This can be deduced from the corresponding statement for ${\mathbb G}_m$ (\cite{mvw}, 4.1) and the exactness of (\ref{align:torus3}) (as a sequence in $Shv_{\Nis}(Cor_K)$).
\noindent (b)\footnote{This remark has been communicated to us by B.\ Kahn.} Let $A_1, \ldots , A_n$ be semi-abelian varieties over $K$. It should be possible to identify the generalized Milnor $K$-group $K(K;A_1,\ldots, A_n)$ with a $\Hom$-group in $\DMn(K)$. For that we view $A_1, \ldots , A_n$ again as elements in $Shv_{\Nis}(Cor_K)$. Then we expect that
\[
K(K;A_1,\ldots, A_n) \cong \Hom_{\DMn(K)}({\mathbb Z}, A_1\otimes \ldots \otimes A_n).
\]
If $A_1= \ldots = A_n = {\mathbb G}_m$ this is proved in (\cite{mvw}, lecture 5) and it is likely that the proof given there can be adapted to the case of arbitrary semi-abelian varieties.
\end{remarks}
For $p,q \ge 0$ and $n= p+q$ we define
\[
{\mathbb Z}(L/K, p, q) \colon = {\mathbb Z}(L/K, 1)^{\otimes^p}\otimes {\mathbb Z}(q)
\]
and denote by $C(p,q)$ the cone of ${\mathbb Z}(L/K, p, q) \longrightarrow t_{\le n} R\alpha_*\alpha^* {\mathbb Z}(L/K, p, q)$.
Note that ${\mathbb Z}(L/K, p, q)[n]$ is a direct summand of $M(T^p\times ({\mathbb G}_m)^q)$. We also put $C(n) \colon = C(0,n)$. We have
\begin{equation}
\label{eqn:mbk2}
C(n)\cong (t_{\ge n+1} R\alpha_* {\mathbb Q}/{\mathbb Z}(n))[-1]
\end{equation}
This follows from the Milnor-Bloch-Kato conjecture (in fact for our purpose we need (\ref{eqn:mbk2}) only after localization at the prime $2$ where it follows from the Milnor conjecture \cite{voevodsky2}).
Tensoring ${\mathbb Z}(1)\to \Res_{L/K} {\mathbb Z}(1)\to {\mathbb Z}(L/K,1) \to {\mathbb Z}(1)[1]$ with ${\mathbb Z}(L/K, p-1, q)$ (for $p\ge 1, q\ge 0$) yields a distinguished triangle
\[
{\mathbb Z}(L/K, p-1, q+1) \to \Res_{L/K} {\mathbb Z}(n) \to {\mathbb Z}(L/K, p, q)\to {\mathbb Z}(L/K, p-1, q+1)[1]
\]
hence also a triangle
\begin{equation}
\label{eqn:basictri}
C(p-1, q+1)\to \Res_{L/K} C(n) \to C(p,q)\to C(p-1, q+1)[1].
\end{equation}
The following Lemma follows easily by induction on $q$ using (\ref{eqn:mbk2}) and (\ref{eqn:basictri}).
\begin{lemma}
\label{lemma:counterexample}
Let $p\ge 1,q\ge 0$ and $n= p+q$. Then we have $H^{k}(C(p,q)) = 0$ for $k<q+2$ and
\[
H^{q+2}(C(p,q))(K) \cong H^{n+1}(L/K, {\mathbb Q}/{\mathbb Z}(n))
\]
where $H^{n+1}(L/K, {\mathbb Q}/{\mathbb Z}(n))\colon = \Ker(H^{n+1}(K, {\mathbb Q}/{\mathbb Z}(n))\stackrel{\res}{\longrightarrow}H^{n+1}(L, {\mathbb Q}/{\mathbb Z}(n)))$.
\end{lemma}
Since $[L:K]=2$ we have
\begin{align*}
H^{n+1}(L/K, {\mathbb Q}/{\mathbb Z}(n)) & \cong H^{n+1}(L/K, {\mathbb Q}_2/{\mathbb Z}_2(n)) \cong H^{n+1}(L/K, {\mathbb Z}/2{\mathbb Z}(n))\\
& \cong H^{n+1}(L/K, {\mathbb Z}/2{\mathbb Z})
\end{align*}
(the second isomorphism is a consequence of the Milnor conjecture).
Now the following Proposition follows by
applying Lemma \ref{lemma:counterexample} for $(p,q) = (2,0)$ and $(n, 0)$.
\begin{proposition}
\label{proposition:beilcounterexample} (a) There exists a short exact sequence
\[
0\longrightarrow H^0(M(T\times T))(K)\longrightarrow R^0\alpha_*\alpha^* M(T\times T)(K)\longrightarrow H^3(L/K, {\mathbb Z}/2{\mathbb Z})\longrightarrow 0
\]
In particular if $H^3(L/K, {\mathbb Z}/2{\mathbb Z})\ne 0$ then (\ref{eqn:beilinsonconj}) is not an isomorphism for $X = T\times T$.
\noindent (b) More generally let $n$ be an integer $\ge 2$ and assume that $H^{n+1}(L/K, {\mathbb Z}/2{\mathbb Z})\ne 0$. Then the map (\ref{eqn:beilinsonconj}) is not an isomorphism for $X = T^n$. More precisely either the map
\[
H^{2-n}(M(X))\longrightarrow R^{2-n}\alpha_*\alpha^* M(X)
\]
is not surjective or
\[
H^{3-n}(M(X))\longrightarrow R^{3-n}\alpha_*\alpha^* M(X)
\]
is not injective.
\end{proposition}
An $n$-local field $K$ of characteristic $0$ provides an example where the above assumption holds. In fact by \cite{kato} we have $H^{n+1}(L/K, {\mathbb Z}/2{\mathbb Z})\cong {\mathbb Z}/2{\mathbb Z}$ for such fields.
\section{Introduction}
Irrotational dust spacetimes have been widely studied, in
particular as models for the late universe, and as arenas for
the evolution of density perturbations and gravity wave
perturbations. In linearised theory, i.e. where the irrotational
dust spacetime is close to a Friedmann--Robertson--Walker dust
spacetime, gravity wave perturbations are usually
characterised by
transverse traceless tensor modes.
In terms of the covariant and gauge--invariant
perturbation formalism initiated by Hawking \cite{h}
and developed by Ellis and Bruni \cite{eb},
these perturbations are described by the
electric and magnetic Weyl tensors, given respectively by
\begin{equation}
E_{ab}=C_{acbd}u^c u^d\,,\quad H_{ab}={\textstyle{1\over2}}\eta_{acde}u^e
C^{cd}{}{}_{bf}u^f
\label{eh}
\end{equation}
where $C_{abcd}$ is the Weyl tensor, $\eta_{abcd}$ is the
spacetime permutation tensor, and $u^a$ is the dust four--velocity.
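Both tensors are symmetric, trace--free and orthogonal to $u^a$,
as follows from the symmetries of the Weyl tensor:
$$
E_{ab}=E_{(ab)}\,,\quad E^a{}_a=0\,,\quad E_{ab}u^b=0
$$
and similarly for $H_{ab}$.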
In the so--called `silent universe' case
$H_{ab}=0$, no information is exchanged between neighbouring
particles, even in the exact nonlinear case. Gravity wave
perturbations require nonzero $H_{ab}$, which is
divergence--free in the linearised case \cite{led}, \cite{he},
\cite{b}.
A crucial question for the analysis of gravity waves
interacting with matter is whether
the properties of the linearised perturbations are
in line with those of the exact nonlinear theory.
Lesame et al. \cite{led} used the covariant formalism
and then specialised to a shear tetrad, in order to
study this question. They concluded that in the nonlinear case,
the only solutions with $\mbox{div}\,H=0$ are those with $H_{ab}=0$
--- thus indicating a linearisation instability, with potentially
serious implications for standard analyses of gravity waves, as
pointed out in \cite{m}, \cite{ma}.
It is shown here that the argument of \cite{led}
does not in fact prove that
$\mbox{div}\,H=0$ implies
$H_{ab}=0$.
The error in \cite{led} is traced to an incorrect sign
in the Weyl tensor decomposition (see below).\footnote{The authors of
\cite{led} are in agreement about the error and its implication
(private communication).}
The same covariant formalism is used here, but with modifications
that
lead to simplification and greater clarity. This improved
covariant formalism renders the equations more transparent, and
together with the new identities derived via the formalism,
it facilitates a fully covariant analysis,
not requiring
lengthy tetrad calculations such as those
used in \cite{led}.
The improved formalism
is presented in Section II, and the identities that are crucial for
covariant analysis are given in the appendix.
In Section III, a covariant derivation is given to show
that {\em in the generic case of irrotational dust
spacetimes, the constraint equations are
preserved under evolution.}
A by--product of the
argument is the identification of the
error in \cite{led}.
In a companion paper \cite{mel},
we use the covariant formalism of Section III
to show that when $\mbox{div}\,H=0$,
no further conditions are generated. In particular, $H_{ab}$ {\em is
not forced to vanish, and there is no linearisation instability.}
A specific example is presented in
Section IV, where
it is shown that Bianchi type V spacetimes
include cases in which
$\mbox{div}\,H=0$ but $H_{ab}\neq0$.
\section{The covariant formalism for propagation and
constraint equations}
The notation and conventions are based on
those of \cite{led}, \cite{e1};
in particular $8\pi G=1=c$, round brackets enclosing indices
denote symmetrisation and square brackets denote
anti--symmetrisation. Curvature tensor conventions are given in
the appendix.
Considerable simplification and streamlining result from
the following definitions: the projected permutation tensor
(compare \cite{e3}, \cite{mes}),
\begin{equation}
\varepsilon_{abc}=\eta_{abcd}u^d
\label{d1}
\end{equation}
the projected, symmetric and trace--free part of a tensor,
\begin{equation}
S_{<ab>}=h_a{}^c h_b{}^d S_{(cd)}-
{\textstyle{1\over3}}S_{cd}h^{cd} h_{ab}
\label{d2}
\end{equation}
where $h_{ab}=g_{ab}+u_au_b$ is the spatial projector
and $g_{ab}$ is the metric,
the projected spatial covariant derivative (compare \cite{e2},
\cite{eb}, \cite{mes}),
\begin{equation}
\mbox{D}_a S^{c\cdots d}{}{}{}{}_{e\cdots f}=h_a{}^b h^c{}_p \cdots
h^d{}_q h_e{}^r \cdots h_f{}^s \nabla_b
S^{p\cdots q}{}{}{}_{r\cdots s}
\label{d3}
\end{equation}
and the covariant spatial curl of a tensor,
\begin{equation}
\mbox{curl}\, S_{ab}=\varepsilon_{cd(a}\mbox{D}^c S_{b)}{}^d
\label{d4}
\end{equation}
Note that
$$
S_{ab}=S_{(ab)}\quad\Rightarrow\quad\mbox{curl}\, S_{ab}=\mbox{curl}\, S_{<ab>}
$$
since $\mbox{curl}\,(fh_{ab})=0$ for any $f$.
The covariant spatial divergence of $S_{ab}$ is
$$(\mbox{div}\,S)_a=\mbox{D}^b S_{ab}$$
The covariant spatial curl of a vector is
$$
\mbox{curl}\, S_a=\varepsilon_{abc}\mbox{D}^bS^c
$$
Covariant analysis of propagation and constraint equations
involves frequent use of a number of algebraic and differential
identities governing the above quantities. In particular, one
requires commutation rules for spatial and time derivatives.
The necessary identities are collected for convenience in the
appendix, which includes a simplification of
known results and a number of new results.
The Einstein, Ricci and Bianchi equations may be covariantly split
into propagation and constraint equations \cite{e1}.
The propagation equations given in
\cite{led} for irrotational dust are simplified by the present
notation, and become
\begin{eqnarray}
\dot{\rho}+\Theta\rho &=& 0
\label{p1}\\
\dot{\Theta}+{\textstyle{1\over3}}\Theta^2 &=& -{\textstyle{1\over2}}\rho
-\sigma_{ab}\sigma^{ab}
\label{p2}\\
\dot{\sigma}_{ab}+{\textstyle{2\over3}}\Theta\sigma_{ab}+\sigma_{c<a}
\sigma_{b>}{}^c &=& -E_{ab}
\label{p3}\\
\dot{E}_{ab}+\Theta E_{ab}-3\sigma_{c<a}E_{b>}{}^c &=&
\mbox{curl}\, H_{ab}-{\textstyle{1\over2}}\rho\sigma_{ab}
\label{p4}\\
\dot{H}_{ab}+\Theta H_{ab}-3\sigma_{c<a}H_{b>}{}^c &=& -\mbox{curl}\, E_{ab}
\label{p5}
\end{eqnarray}
while the constraint equations become
\begin{eqnarray}
\mbox{D}^b\sigma_{ab} &=& {\textstyle{2\over3}}\mbox{D}_a \Theta
\label{c1}\\
\mbox{curl}\, \sigma_{ab}&=& H_{ab}
\label{c2}\\
\mbox{D}^b E_{ab} &=& {\textstyle{1\over3}}\mbox{D}_a \rho +
\varepsilon_{abc}\sigma^b{}_d H^{cd}
\label{c3}\\
\mbox{D}^b H_{ab} &=& -\varepsilon_{abc}\sigma^b{}_d E^{cd}
\label{c4}
\end{eqnarray}
A dot denotes a covariant derivative along $u^a$, $\rho$ is the
dust energy density, $\Theta$ its rate of
expansion, and $\sigma_{ab}$ its shear. Equations (\ref{p4}),
(\ref{p5}), (\ref{c3}) and (\ref{c4}) display the analogy with
Maxwell's theory. The FRW case is covariantly characterised by
$$
\mbox{D}_a\rho=0=\mbox{D}_a\Theta\,,\quad\sigma_{ab}=E_{ab}=H_{ab}=0
$$
and in the linearised case of an almost FRW spacetime, these gradients
and tensors are first order of smallness.
The dynamical fields in these equations are the scalars $\rho$ and
$\Theta$, and the
tensors $\sigma_{ab}$,
$E_{ab}$ and $H_{ab}$, which all satisfy $S_{ab}=S_{<ab>}$. The
metric $h_{ab}$ of the spatial
surfaces orthogonal to $u^a$ is implicitly
also involved in the equations as a dynamical field. Its propagation
equation is simply the identity $\dot{h}_{ab}=0$,
and its constraint equation is the identity $\mbox{D}_a h_{bc}=0$ --
see (\ref{a4}). The Gauss--Codacci equations for the Ricci curvature
of the spatial surfaces \cite{e1}
\begin{eqnarray}
R^*_{ab}-{\textstyle{1\over3}}R^*h_{ab} &=& -\dot{\sigma}_{ab}-\Theta
\sigma_{ab} \nonumber\\
R^* &=&-{\textstyle{2\over3}}\Theta^2+\sigma_{ab}\sigma^{ab}+2\rho \label{r1}
\end{eqnarray}
have not been included, since the curvature is algebraically
determined by the other fields,
as follows from (\ref{p3}):
\begin{equation}
R^*_{ab}=E_{ab}-{\textstyle{1\over3}}\Theta\sigma_{ab}+\sigma_{ca}
\sigma_b{}^c+{\textstyle{2\over3}}\left(\rho-{\textstyle{1\over3}}\Theta^2\right)
h_{ab}
\label{r2}\end{equation}
The contracted Bianchi identities for the 3--surfaces \cite{e1}
$$
\mbox{D}^b R^*_{ab}={\textstyle{1\over2}}\mbox{D}_a R^*
$$
reduce to the Bianchi constraint (\ref{c3}) on using (\ref{c1}),
(\ref{c2}) and the identity (\ref{a13}) in (\ref{r1}) and
(\ref{r2}). Consequently, these identities do not impose any new
constraints.
By the constraint (\ref{c2}), one can in principle eliminate $H_{ab}$.
However, this leads to second--order derivatives in the propagation
equations (\ref{p4}) and (\ref{p5}). It seems preferable to maintain
$H_{ab}$ as a basic field.
One interesting use of (\ref{c2}) is in
decoupling the shear from the Weyl tensor.
Taking the time derivative of
the shear propagation equation (\ref{p3}), using the propagation
equation (\ref{p4}) and the constraint (\ref{c2}), together with
the identity (\ref{a16}), one gets
\begin{eqnarray}
&&-\mbox{D}^2\sigma_{ab}+\ddot{\sigma}_{ab}+{\textstyle{5\over3}}\Theta
\dot{\sigma}_{ab}-{\textstyle{1\over3}}\dot{\Theta}\sigma_{ab}+
{\textstyle{3\over2}}\mbox{D}_{<a}\mbox{D}^c\sigma_{b>c} \nonumber\\
&&{}=4\Theta\sigma_{c<a}\sigma_{b>}{}^c+6\sigma^{cd}\sigma_{c<a}
\sigma_{b>d}-2\sigma^{de}\sigma_{de}h_{c<a}\sigma_{b>}{}^c+
4\sigma_{c<a}\dot{\sigma}_{b>}{}^c
\label{s}\end{eqnarray}
where $\mbox{D}^2=\mbox{D}^a \mbox{D}_a$ is the covariant Laplacian.
This is {\em the exact nonlinear generalisation of the linearised
wave equation for shear perturbations} derived in \cite{he}.
In the linearised
case, the right hand side of (\ref{s}) vanishes, leading to a
wave equation governing the propagation of shear perturbations in
an almost FRW dust spacetime:
$$
-\mbox{D}^2\sigma_{ab}+\ddot{\sigma}_{ab}+{\textstyle{5\over3}}\Theta
\dot{\sigma}_{ab}-{\textstyle{1\over3}}\dot{\Theta}\sigma_{ab}+
{\textstyle{3\over2}}\mbox{D}_{<a}\mbox{D}^c\sigma_{b>c} \approx 0
$$
As suggested by comparison of (\ref{c2}) and (\ref{c4}), and
confirmed by the identity (\ref{a14}), div~curl is {\em not} zero,
unlike its Euclidean vector counterpart. Indeed, the divergence of
(\ref{c2}) reproduces (\ref{c4}), on using the (vector) curl
of (\ref{c1}) and
the identities
(\ref{a2}), (\ref{a8}) and (\ref{a14}):
\begin{equation}
\mbox{div (\ref{c2}) and curl (\ref{c1})}\quad\rightarrow\quad
\mbox{(\ref{c4})}
\label{i1}\end{equation}
Further
differential relations amongst the propagation and constraint
equations are
\begin{eqnarray}
\mbox{curl (\ref{p3}) and (\ref{c1}) and (\ref{c2}) and
(\ref{c2})$^{\displaystyle{\cdot}}$}\quad
& \rightarrow & \quad\mbox{(\ref{p5})} \label{i2}\\
\mbox{grad (\ref{p2}) and div (\ref{p3}) and (\ref{c1}) and
(\ref{c1})$^{\displaystyle{\cdot}}$ and
(\ref{c2})}\quad & \rightarrow & \quad \mbox{(\ref{c3})} \label{i3}
\end{eqnarray}
where the identities (\ref{a7}), (\ref{a11.}), (\ref{a13}),
(\ref{a13.}) and (\ref{a15}) have been used.
Consistency
conditions may arise
to preserve the constraint equations under
propagation along $u^a$ \cite{led}, \cite{he}.
In the general
case, i.e. without imposing any assumptions about
$H_{ab}$ or other quantities, the constraints are
preserved under evolution.
This is shown in the next section, and forms the
basis for analysing special cases, such as
$\mbox{div}\,H=0$.
\section{Evolving the constraints: general case}
Denote the constraint equations (\ref{c1})--(\ref{c4}) by
${\cal C}^A=0$, where
$$
{\cal C}^A=\left(\mbox{D}^b\sigma_{ab}-{\textstyle{2\over3}}\mbox{D}_a\Theta\,,\,
\mbox{curl}\,\sigma_{ab}-H_{ab}\,,\,\cdots\right)
$$
and $A={\bf 1},\cdots, {\bf 4}$.
The evolution of ${\cal C}^A$ along $u^a$ leads to a
system of equations $\dot{{\cal C}}^A={\cal F}^A
({\cal C}^B)$, where ${\cal F}^A$ do not contain
time derivatives, since these are eliminated via the propagation
equations and suitable identities. Explicitly, one obtains after
lengthy calculations the following:
\begin{eqnarray}
\dot{{\cal C}}^{\bf 1}{}_a&=&-\Theta{\cal C}^{\bf 1}{}_a+2\varepsilon_a{}^{bc}
\sigma_b{}^d{\cal C}^{\bf 2}{}_{cd}-{\cal C}^{\bf 3}{}_a
\label{pc1}\\
\dot{{\cal C}}^{\bf 2}{}_{ab}&=&-\Theta{\cal C}^{\bf 2}{}_{ab}
-\varepsilon^{cd}{}{}_{(a}\sigma_{b)c}{\cal C}^{\bf 1}{}_d
\label{pc2}\\
\dot{{\cal C}}^{\bf 3}{}_a&=&-{\textstyle{4\over3}}\Theta{\cal C}^{\bf 3}{}_a
+{\textstyle{1\over2}}\sigma_a{}^b{\cal C}^{\bf 3}{}_b-{\textstyle{1\over2}}\rho
{\cal C}^{\bf 1}{}_a \nonumber\\
&&{}+{\textstyle{3\over2}}E_a{}^b{\cal C}^{\bf 1}{}_b
-\varepsilon_a{}^{bc}E_b{}^d{\cal C}^{\bf 2}
{}_{cd}+{\textstyle{1\over2}}\mbox{curl}\,{\cal C}^{\bf 4}{}_a
\label{pc3}\\
\dot{{\cal C}}^{\bf 4}{}_a&=&-{\textstyle{4\over3}}\Theta{\cal C}^{\bf 4}{}_a
+{\textstyle{1\over2}}\sigma_a{}^b{\cal C}^{\bf 4}{}_b
\nonumber\\
&&{}+{\textstyle{3\over2}}H_a{}^b{\cal C}^{\bf 1}{}_b
-\varepsilon_a{}^{bc}H_b{}^d{\cal C}^{\bf 2}
{}_{cd}-{\textstyle{1\over2}}\mbox{curl}\,{\cal C}^{\bf 3}{}_a
\label{pc4}
\end{eqnarray}
For completeness, the following list of equations used in the
derivation is given:\\
Equation
(\ref{pc1}) requires (\ref{a7}), (\ref{a11.}), (\ref{p2}), (\ref{p3}),
(\ref{c1}), (\ref{c2}), (\ref{c3}), (\ref{a13}) -- where (\ref{a13})
is needed to eliminate the following term from the right hand side
of (\ref{pc1}):
\begin{eqnarray*}
&&\varepsilon_{abc}\sigma^b{}_d\,\mbox{curl}\,\sigma^{cd}
-\sigma^{bc}\mbox{D}_a \sigma_{bc}\\
&&{}+\sigma^{bc}
\mbox{D}_c \sigma_{ab}+{\textstyle{1\over2}}\sigma_{ac}\mbox{D}_b\sigma^{bc} \equiv0
\end{eqnarray*}
Equation
(\ref{pc2}) requires (\ref{a15}), (\ref{p3}), (\ref{p5}), (\ref{c1}),
(\ref{c2}), (\ref{a3.}) -- where (\ref{a3.}) is needed to eliminate
the following term from the right hand side of (\ref{pc2}):
$$
\varepsilon_{cd(a}\left\{\mbox{D}^c\left[\sigma_{b)}{}^e\sigma^d{}_e\right]+
\mbox{D}^e\left[\sigma_{b)}{}^d\sigma^c{}_e\right]\right\}\equiv0
$$
Equation
(\ref{pc3}) requires (\ref{a11.}), (\ref{p1}), (\ref{p4}), (\ref{p5}),
(\ref{a14}), (\ref{a3}), (\ref{c1}), (\ref{c3}), (\ref{c4}),
(\ref{a13}) -- where (\ref{a13}) is needed to eliminate the
following term from the right hand side of (\ref{pc3}):
\begin{eqnarray*}
&& {\textstyle{1\over2}}\sigma_{ab}\mbox{D}_c E^{bc}
+\varepsilon_{abc}E^b{}_d\, \mbox{curl}\,\sigma^{cd}\\
& &{}+\varepsilon_{abc}\sigma^b{}_d
\,\mbox{curl}\, E^{cd}
+{\textstyle{1\over2}}E_{ab}\mbox{D}_c\sigma^{bc}+E^{bc}\mbox{D}_b\sigma_{ac}\\
& &{}+\sigma^{bc}\mbox{D}_b E_{ac}-
\mbox{D}_a\left(\sigma^{bc}E_{bc}\right)\equiv 0
\end{eqnarray*}
Equation
(\ref{pc4}) requires (\ref{a11.}), (\ref{p3}), (\ref{p4}), (\ref{p5}),
(\ref{a14}), (\ref{a13}), (\ref{c1}), (\ref{c2}), (\ref{c3}),
(\ref{c4}).
In \cite{led}, a sign error in the Weyl tensor decomposition
(\ref{a5}) led to spurious consistency conditions arising from
the evolution of (\ref{c1}), (\ref{c2}). The evolution
of the Bianchi constraints (\ref{c3}), (\ref{c4})
was not considered in \cite{led}.
Now suppose that the constraints
are satisfied on an initial spatial surface $\{t=t_0\}$, i.e.
\begin{equation}
{\cal C}^A\Big|_{t_0}=0
\label{i}\end{equation}
where
$t$ is proper time along the dust worldlines. Then by
(\ref{pc1}) -- (\ref{pc4}), it follows that the
constraints are satisfied for all time, since ${\cal C}^A=0$ is
a solution for the given initial data. Since the system is linear,
this solution is unique.
This establishes that the constraint equations are preserved under
evolution. However, it does not prove existence of solutions to
the constraints in the generic case
--- only that if solutions exist, then they evolve
consistently. The question of existence is currently under
investigation. One would like to show explicitly how a metric
is constructed from given initial data in the covariant formalism.
This involves in particular considering whether the
constraints generate new constraints, i.e. whether they are
integrable as they stand, or whether there are implicit
integrability conditions. The relation (\ref{i1}) is part of the
answer to this question, in that it shows how, within any
$\{t=\mbox{ const}\}$ surface, the constraint ${\cal C}^{\bf 4}$
is satisfied if ${\cal C}^{\bf 1}$ and ${\cal C}^{\bf 2}$ are
satisfied. Specifically, (\ref{i1}) shows that
\begin{equation}
{\cal C}^{\bf 4}{}_a={\textstyle{1\over2}}\mbox{curl}\,{\cal C}^{\bf 1}
{}_a-\mbox{D}^b{\cal C}^{\bf 2}{}_{ab}
\label{i4}\end{equation}
Hence, if one takes ${\cal C}^{\bf 1}$ as determining
$\mbox{grad}\,\Theta$,
${\cal C}^{\bf 2}$ as defining $H$ and ${\cal C}^{\bf 3}$ as
determining $\mbox{grad}\,\rho$, the constraint equations are
consistent with each other because ${\cal C}^{\bf 4}$ then follows.
Thus if there exists a solution to the constraints on
$\{t=t_0\}$, then it is consistent and it evolves consistently.
In the next section, Bianchi type V spacetimes are shown to provide
a concrete example of existence and consistency in the case
$$
\mbox{div}\,E\neq 0\neq\mbox{curl}\, E\,,\quad\mbox{div}\,H=0\neq\mbox{curl}\, H\,,\quad
\mbox{grad}\,\rho=0=\mbox{grad}\,\Theta
$$
\section{Spacetimes with $\mbox{div}\,H=0\neq H$}
Suppose now that the magnetic Weyl tensor is divergence--free, a
necessary condition for gravity waves:
\begin{equation}
\mbox{div}\,H=0\quad\Leftrightarrow\quad [\sigma,E]=0
\label{dh}\end{equation}
where $[S,V]$ is the index--free notation for the covariant commutator
of tensors [see (\ref{a2})], and the equivalence follows from
the constraint (\ref{c4}).
Using the covariant
formalism of Section III, it can be shown \cite{mel} that (\ref{dh})
is preserved under evolution without generating further conditions.
In particular, (\ref{dh}) does not force $H_{ab}=0$ -- as shown by
the following explicit example.
First note that by (\ref{r2}) and (\ref{dh}):
$$
R^*_{ab}={\textstyle{1\over3}}R^*h_{ab}\quad\Rightarrow\quad
[\sigma,R^*]=0\quad\Rightarrow\quad\mbox{div}\,H=0
$$
i.e., {\em irrotational dust spacetimes
have $\mbox{div}\,H=0$ if $R^*_{ab}$ is isotropic.}
Now the example arises from the class of irrotational spatially
homogeneous spacetimes,
comprehensively analysed and classified by Ellis and MacCallum
\cite{em}.
According to Theorem 7.1 of \cite{em}, the only non--FRW
spatially homogeneous spacetimes
with $R^*_{ab}$ isotropic are Bianchi type I and
(non--axisymmetric) Bianchi type V. The former have $H_{ab}=0$.
For the latter, using
the shear eigenframe $\{{\bf e}_a\}$ of \cite{em}
\begin{equation}
\sigma_{ab} = \sigma_{22}\,\mbox{diag}(0,0,1,-1) \label{b0}
\end{equation}
Using (\ref{r1}) and (\ref{r2}) with (\ref{b0}), one
obtains
\begin{eqnarray}
E_{ab} &=& {\textstyle{1\over3}}\Theta\sigma_{ab}-\sigma_{c<a}
\sigma_{b>}{}^c \nonumber\\
&=&{\textstyle{1\over3}}
\sigma_{22}\,\mbox{diag}\left(0,2\sigma_{22},\Theta-\sigma_{22},
-\Theta-\sigma_{22}\right) \label{b0'}
\end{eqnarray}
in agreement with \cite{em}.\footnote{Note that
$E_{ab}$ in \cite{em} is the negative of $E_{ab}$ defined
in (\ref{eh}).}
The tetrad forms of div and curl
for type V are (compare \cite{vu}):
\begin{eqnarray}
\mbox{D}^b S_{ab}&=&\partial_b S_a{}^b-
3a^b S_{ab} \label{b2}\\
\mbox{curl}\, S_{ab} &=& \varepsilon_{cd(a}\partial^c
S_{b)}{}^d+\varepsilon_{cd(a}S_{b)}{}^c a^d \label{b3}
\end{eqnarray}
where $S_{ab}=S_{<ab>}$, $a_b=a\delta_b{}^1$
($a$ is the type V Lie algebra parameter) and
$\partial_a f$ is the directional derivative of $f$
along ${\bf e}_a$. Using (\ref{b3}) and (\ref{c2}):
\begin{eqnarray}
H_{ab} &=& \mbox{curl}\,\sigma_{ab}\nonumber\\
&=&-2a\sigma_{22}\delta_{(a}{}^2\delta_{b)}{}^3
\label{b1}\end{eqnarray}
Hence:\\ {\em Irrotational Bianchi V dust spacetimes in general
satisfy} $\mbox{div}\,H=0\neq H$.
Using (\ref{b0})--(\ref{b1}), one obtains
\begin{eqnarray}
\mbox{D}^bH_{ab}&=&0 \label{v1}\\
\mbox{curl}\, H_{ab}&=& -a^2\sigma_{ab} \label{v2}\\
\mbox{curl}\,\mbox{curl}\, H_{ab}&=& -a^2H_{ab} \label{v3}\\
\mbox{D}^bE_{ab} &=& -\sigma_{bc}\sigma^{bc}a_a \label{v4}\\
\mbox{curl}\, E_{ab} &=&{\textstyle{1\over3}}\Theta H_{ab} \label{v5}
\end{eqnarray}
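Relations (\ref{v1})--(\ref{v5}) reduce to pure algebra once the directional derivatives in (\ref{b2}) and (\ref{b3}) are dropped (the frame components of the homogeneous fields are spatially constant in this tetrad), so they can be cross-checked numerically. The following sketch uses assumed numeric values for $a$, $\sigma_{22}$ and $\Theta$ and is only a consistency check of the algebra, not part of the derivation:

```python
import numpy as np

# Spatial permutation tensor eps_{abc} (tetrad indices 1,2,3 -> 0,1,2).
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

a, s22, Theta = 0.7, 1.3, 2.1        # assumed numeric values
a_vec = np.array([a, 0.0, 0.0])      # a_b = a delta_b^1

def div(S):
    # (b2) with vanishing directional derivatives: D^b S_ab = -3 a^b S_ab
    return -3.0 * S @ a_vec

def curl(S):
    # (b3) with vanishing directional derivatives:
    # curl S_ab = eps_{cd(a} S_{b)}^c a^d
    T = np.einsum('cda,bc,d->ab', eps, S, a_vec)
    return 0.5 * (T + T.T)

sigma = np.diag([0.0, s22, -s22])                       # spatial part of (b0)
sig2 = sigma @ sigma
E = Theta / 3.0 * sigma - (sig2 - np.trace(sig2) / 3.0 * np.eye(3))  # (b0')
H = curl(sigma)                                         # constraint (c2)

assert np.allclose(E, s22 / 3.0 *
                   np.diag([2 * s22, Theta - s22, -Theta - s22]))  # (b0')
assert np.isclose(H[1, 2], -a * s22)                    # (b1)
assert np.allclose(sigma @ E - E @ sigma, 0)            # [sigma,E] = 0, cf. (dh)
assert np.allclose(div(sigma), 0)                       # (c1) with grad Theta = 0
assert np.allclose(div(H), 0)                           # (v1)
assert np.allclose(curl(H), -a**2 * sigma)              # (v2)
assert np.allclose(curl(curl(H)), -a**2 * H)            # (v3)
assert np.allclose(div(E), -2 * s22**2 * a_vec)         # (v4)
assert np.allclose(curl(E), Theta / 3.0 * H)            # (v5)
```

All assertions pass for any choice of the three parameters, reflecting that (\ref{v1})--(\ref{v5}) hold identically for the type V configuration.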
Although (\ref{v1}) is a necessary condition for gravity waves,
it is not sufficient, and (\ref{b0'}) and (\ref{b1}) show that
$E_{ab}$ and $H_{ab}$ decay with the shear, so that
the type V solutions cannot be interpreted as gravity waves.
Nevertheless, these solutions do establish the existence of
spacetimes with $\mbox{div}\,H=0\neq H$.
This supplements the known result that the only spatially homogeneous
irrotational dust spacetimes with $H_{ab}=0$ are FRW, Bianchi types
I and VI$_{-1}$ $(n^a{}_a=0)$, and Kantowski--Sachs \cite{bmp}.
When $H_{ab}=0$, (\ref{b0}) and (\ref{b1}) show that $\sigma_{ab}=0$,
in which case the type V solution reduces to FRW.\\
A final remark concerns the special case $H_{ab}=0$, i.e. the
silent universes. The considerations of this paper show that the
consistency analysis of silent universes undertaken in \cite{lde}
needs to be re--examined. This is a further topic currently under
investigation. It seems likely that the silent universes, in the
full nonlinear theory, are {\em not} in general consistent.
\acknowledgements
Thanks to the referee for very helpful comments, and
to George Ellis, William Lesame and Henk van Elst
for very useful discussions.
This research was supported by grants from Portsmouth, Natal and
Cape Town Universities. Natal University, and especially Sunil
Maharaj, provided warm hospitality while part of this research
was done.
The proliferation of smartphones and tablets equipped with 3G and 4G connectivity
and the fast growing demand for downloading multimedia files
have resulted in severe overload in the internet backhaul,
and this is expected to worsen with the advent of 5G in the near future.
The recent idea of densifying
cellular networks will improve wireless throughput, but this will
eventually push the backhaul bandwidth to its limit. In order to alleviate
this problem, the idea of caching popular multimedia contents has recently been proposed.
Given the fact that the popular contents
are requested many times, which results in network congestion, one way to
reduce the congestion is to cache the popular contents
at various intermediate nodes in the network. In the case of a cellular network,
this requires adding physical memory to base stations (BSs):
macro, micro, nano and pico. This has several advantages:
(i) Caching contents at base stations reduces the backhaul load.
(ii) Caching
reduces delay in fetching the content, thereby reducing the multimedia playback time.
(iii) Caching will allow the end user to
download a lower quality content in case his channel quality is bad
or in case he wants to limit his total amount of download.
Under dense placement of base stations, it is often the case that the {\em cells}
(a cell is defined to be a region around a BS where
the user is able to get sufficient downlink data rate from the BS) of different
BSs might overlap with each other in an arbitrary manner
(see \cite{bartek-keeler15sinr-process-poisson-networks-factorial-moment-measures}).
Hence, if a user is covered by multiple BSs, he has the option to download a content
from any one of the serving BSs. This gives rise to the
problem of optimal content placement in the caches of cellular BSs (see \cite{bartek14caching}, \cite{femtocaching});
the trade-off is that ideally the caching strategy should avoid placing the same
content (even if it has very high popularity) in two BSs whose cells have a significant overlap,
whereas such duplication is harmless in regions where the cells do not overlap.
Optimal content placement in such a situation requires global knowledge of base station locations and cell topologies,
and solving the optimization problem requires intensive computation. In order to tackle these problems, we develop
sequential cache update algorithms (motivated by Gibbs sampling) that asymptotically lead to optimal content placement, where each base station updates
its contents only when a new content is downloaded from the backhaul to meet a user request, and this update is
made solely based on the knowledge of the neighbouring BSs whose cells have nonzero intersection with the
cell of the BS under consideration. The results are also extended to the case where the content popularities and cell topology
are unknown initially and
are learnt over time as new content requests arrive to the base stations.
Simple numerical computations demonstrate the improvement in cache hit rate obtained by the Gibbs sampling technique for cache update,
compared to the most popular content placement and independent content placement strategies in the caches.
\subsection{Related Work}\label{subsection:related_work}
There has been a considerable amount of work in the literature dedicated to cellular caching. Benefits and challenges
for caching in 5G networks have been described in \cite{wang14caching}. The authors of \cite{martina-14caching}
have developed a method to analyze the performance of caches (isolated or networked), and shown that placing the most
popular subset of contents in each cache is not optimal in case of interconnected caches. The paper
\cite{femtocaching} deals with optimal content placement in wireless caches given BS-user association.
The authors of \cite{poularakis13exploiting} have addressed the problem of optimal content placement under user mobility.
The authors of \cite{bartek14caching} have proposed a randomized content placement scheme in cellular BS caches
in order to maximize cache hit rate, but their
scheme assumes that the contents are placed independently across the caches, which is obviously suboptimal. This work was later extended
to the case of heterogeneous networks in \cite{serbetci16caching}. The authors of \cite{liu16caching}
have again considered independent probabilistic caching in a random heterogeneous network.
The authors of \cite{avrachenkov16caching} have addressed the problem of cache miss minimization in a random network setting.
The authors of \cite{debbah2016caching}
have studied the problem of distributed caching in
ultra-dense wireless small cell networks using mean field games; however,
this formulation requires the base station density to grow to infinity (which may not be
realistic in practice), and it does not provide any guarantee on the optimality of this caching strategy.
The paper \cite{naveen-postdoc-paper} proposes a pricing based scheme for jointly assigning content requests to cellular BSs and
updating the cellular caches; but this paper focuses on certain cost minimization instead of hit rate maximization,
and it is optimal only when the data can be represented by a very large number of chunks, permitting the use of rateless codes.
The authors of \cite{bharath16learning-caching} and \cite{leconte16placing-dynamic-content-caches}
propose learning schemes for unknown time-varying
popularity of contents, but their scheme does not have theoretical guarantee of convergence
to the optimal content placement {\em across the network when cells of different BSs overlap with each other}.
The paper \cite{moharir14high-dimensional} establishes that, when popularity is dynamic, any scheme that separates
content popularity estimation and cache update (i.e., control) phases is strictly order-wise suboptimal in terms of hit rate.
Contrary to the prior literature, our current paper provides a theoretical guarantee of convergence of the
cache update scheme in a cellular network to the optimal placement (for the first time to the best of our knowledge),
without the help of any centralized
decision-maker. A weaker version of the results also holds when popularities are unknown initially and are learnt over time using
the information of request arrivals in the base stations.
\subsection{Organization and Our Contribution}\label{subsection:our_contribution}
The rest of the paper is organized as follows.
\begin{itemize}
\item The system model has been defined in Section~\ref{section:system-model}.
\item In Section~\ref{section:given-temperature}, we propose an update scheme for the caches based on the knowledge of the
contents cached in neighbouring BSs. The update scheme is based on Gibbs sampling techniques, and cache updates are
made only when new content requests arrive. The scheme asymptotically converges to a near-optimal content placement
in the network, since the scheme is proposed for a finite ``inverse temperature'' (to be defined later). We prove convergence
of the proposed scheme. To the best of our knowledge, such a scheme has never been
used in the context of caching in cellular network.
\item In Section~\ref{section:varying-inverse-temperature}, we discuss how to slowly increase the inverse temperature to
$\infty$ so that the near-optimal limiting solution in Section~\ref{section:given-temperature} actually
converges to the globally optimal solution. We provide rigorous proof for the convergence of this scheme.
\item In Section~\ref{section:learning-popularities}, we discuss how to adapt the update schemes to the situation
when unknown content popularities and cell topology are learnt over time as new content requests arrive to the BSs over time.
\item In Section~\ref{section:numerical}, we numerically demonstrate that the proposed Gibbs sampling approach has the potential
to significantly improve the cache hit rate in cellular networks.
\item Finally, we conclude in Section~\ref{section:conclusion}.
\end{itemize}
\section{System Model and Notation}\label{section:system-model}
\subsection{Network Model}\label{subsection:network-model}
We consider a set $\mathcal{N}:=\{1,2,\cdots,N \}$ of base stations (BSs) on the two-dimensional Euclidean space.
The set of points covered (downlink) by a BS
constitutes the {\em cell} of the corresponding BS.\footnote{This coverage could be signal to noise ratio (SNR) based coverage
where a point is covered
by a BS if and only if the SNR at that point from the BS exceeds some threshold.}
We denote
the cell of BS~$i$ ($1 \leq i \leq N$) by $\mathcal{C}_i$. Let us define $\mathcal{C}:=\cup_{i=1}^N \mathcal{C}_i$.
The area of any subset $\mathcal{A}$ of $\mathbb{R}^2$ is denoted
by $|\mathcal{A}|$. We allow the cells of various BSs to have arbitrary and different {\em finite} areas.
The cells of two BSs might have a nonzero intersection; any downlink
mobile user located at such an intersection is covered by more than one BS.
Let us denote by $2^{\mathcal{N}}$ the collection of all subsets of $\mathcal{N}$, and let $s$ denote one such generic subset.
Let us denote by
$\mathcal{C}(s):=(\cap_{i \in s} \mathcal{C}_i ) \cap ( \cup_{i \notin s} \mathcal{C}_i )^{c}$ the region in $\mathcal{C}$
which is covered only by the BSs from the subset $s$.
See figure~\ref{fig:caching-cell-diagram} for a better understanding of the cell model.
\begin{figure}[!t]
\begin{center}
\includegraphics[height=3.5cm, width=6cm]{caching-cell-diagram.pdf}
\end{center}
\caption{A pictorial description of the base station (BS) coverage model. In the diagram, four BSs are shown with numbers
$1,2,3,4$. The circles correspond to the cells of the BSs.
The region marked as $\{1,2,3\}$ has $s=\{1,2,3\}$; i.e., this region is called $\mathcal{C}(\{1,2,3\})$
and it is covered only by BSs $\{1,2,3\}$ and no other BS. Similar meaning
applies to other regions.}
\label{fig:caching-cell-diagram}
\end{figure}
\subsection{Content Request Process}\label{subsection:content-request}
Contents from a set $\mathcal{M}:=\{1,2,\cdots,M\}$ are requested by users located inside $\mathcal{C}$. We assume that each of these
contents has the same size.
Content~$i$ ($1 \leq i \leq M$) is
requested by users according to a homogeneous Poisson point process
in space (inside $\mathcal{C}$) and time with intensity $\lambda_i$; this is the expected
number of requests for content~$i$ per second per square meter inside $\mathcal{C}$. Let $\lambda:=\sum_{i=1}^M \lambda_i$. Note that,
$\frac{\lambda_i}{\lambda}$ denotes the probability that a content request is for content~$i$; in other words,
$\frac{\lambda_i}{\lambda}$ is the popularity of content~$i$.
\subsection{Content Caching at BSs}\label{subsection:content-caching}
We assume that each BS can store $K$ contents (where $K < M$). Let $B$ denote a generic configuration of content placement in caches of the
network. $B$ is defined as an $M \times N$ matrix with $B_{i,j}=1$ if content~$i$ is stored at the cache of BS~$j$, and
$B_{i,j}=0$ otherwise. Note that, any feasible $B$ must satisfy $\sum_{i=1}^M B_{i,j}=K$ for all $j \in \{1,2,\cdots,N \}$; we rule out the possibility
of $\sum_{i=1}^M B_{i,j}<K$ since that will be a waste of cache memory resources in BSs.
Let us denote the set of all feasible configurations by $\mathcal{B}$.
Clearly, the cardinality of $\mathcal{B}$ is ${{M}\choose{K}}^N$.
Apart from $B$, we will also use the symbol $A$ for a generic configuration belonging to set $\mathcal{B}$.
\subsection{Cache Hit Rate Maximization Problem}\label{subsection:content-requests-hit-rate}
We assume that, whenever a new request for a content arrives, it is served
by one BS covering that point and having the content in its cache; if a content request is served from the cache, we call
the event a {\em cache hit}. In case no covering BS has the content (i.e., no cache hit),
the content needs to be downloaded by one of the covering BSs and served to the user
(this will be explained later). The requests do not tolerate any delay; i.e., we do not consider
the possibility of holding the requests in a queue and serving the content to users in batch
once the content becomes available in a BS. Also, we assume infinite bandwidth available for all downlink transmissions; i.e.,
each content is assumed to be served instantaneously.\footnote{This is a valid assumption when the downlink traffic in the network is light.}
Let the random variable $H_B$ denote the number of cache hits in the entire network in unit time, under configuration $B$.
We define the cache hit rate
$h(B)=\mathbf{E}(H_B)$ where the expectation is over the randomness in the content request arrival process.
Clearly,
\begin{eqnarray}
h(B)=\sum_{s \in 2^{\mathcal{N}}}
|\mathcal{C}(s)| \sum_{i=1}^M \lambda_i \mathbf{1}\{\sum_{j \in s} B_{i,j} \geq 1\}
\label{eqn:expanded-expression-of-hit-rate}
\end{eqnarray}
In this paper, we are interested in finding an optimal configuration which achieves:
\begin{equation}
\sup_{B \in \mathcal{B}}h(B) \label{eqn:objective-function}
\end{equation}
Note that \eqref{eqn:objective-function} is an optimization problem with $0-1$ integer variables,
a nonlinear objective function
and linear constraints. This class of problems has been shown to be NP-complete (see \cite{karp-complexity-paper}),
and hence, we cannot expect any
polynomial time algorithm to solve \eqref{eqn:objective-function}. Hence, in this paper, we provide an iterative, distributed
cache update scheme that asymptotically solves the problem. However, since the algorithm is iterative, we cannot use the optimal
configuration over infinite time horizon. Hence, we seek to design a randomized iterative cache update scheme which yields
\begin{equation}\label{eqn:asymptotic-target}
\liminf_{T \rightarrow \infty} \frac{\int_0^T \mathbf{E}(h(R(\tau))) d \tau}{T} = \sup_{B \in \mathcal{B}}h(B)
\end{equation}
where $R(\tau) \in \mathcal{B}$ is the configuration of all caches in the network at time $\tau$. Our iterative scheme is randomized, which renders
$R(\tau)$ a random variable; hence, we work with its expectation $\mathbf{E}$.
\section{Cache Update via Basic Gibbs Sampling}\label{section:given-temperature}
Let us rewrite \eqref{eqn:expanded-expression-of-hit-rate} as $h(B)=\sum_{j=1}^N h_j(B)$ where
\begin{eqnarray}
h_j(B)= \sum_{i=1}^M \lambda_i \sum_{s \in 2^{\mathcal{N}} }
\frac{ |\mathcal{C}(s)| B_{i,j} \mathbf{1}\{j \in s\} }{ \max\{1, \sum_{k \in s} B_{i,k} \} }
\label{eqn:expanded-expression-of-per-node-hit-rate}
\end{eqnarray}
We call $h_j(B)$ the cache hit rate seen by BS~$j$ under configuration~$B$. This will be the true cache
hit rate seen by BS~$j$ under configuration~$B$ if a new content request is served by one covering BS chosen uniformly at random from the set
of covering BSs having that content. Note that, if more than one covering BS has that content, the choice of the serving BS does not affect
the hit rate; hence, we can safely assume a uniform random choice of the serving BS.
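As a sanity check of the decomposition $h(B)=\sum_{j=1}^N h_j(B)$, the following sketch evaluates \eqref{eqn:expanded-expression-of-hit-rate} and \eqref{eqn:expanded-expression-of-per-node-hit-rate} on a small instance; all numbers (rates, segment areas, configuration) are hypothetical and chosen only for illustration.

```python
# Hypothetical toy instance: N = 2 BSs, M = 2 contents, cache size K = 1.
# area[s] is |C(s)|, the area covered by exactly the set of BSs s.
N, M = 2, 2
lam = [0.055, 0.045]                  # per-unit-area request rates lambda_i
area = {frozenset([0]): 1.0,          # covered only by BS 0
        frozenset([0, 1]): 5.0,       # covered by both BSs
        frozenset([1]): 4.0}          # covered only by BS 1

def h(B):
    """Total hit rate: a request from C(s) for content i is a hit iff
    some BS in s caches content i."""
    return sum(a * lam[i]
               for s, a in area.items()
               for i in range(M)
               if any(B[i][j] for j in s))

def h_j(B, j):
    """Per-BS hit rate: hits are split uniformly among the covering BSs
    that cache the requested content."""
    total = 0.0
    for s, a in area.items():
        if j not in s:
            continue
        for i in range(M):
            cov = sum(B[i][k] for k in s)
            total += lam[i] * a * B[i][j] / max(1, cov)
    return total

B = [[1, 0],   # content 0 is cached at BS 0 only
     [0, 1]]   # content 1 is cached at BS 1 only
assert abs(h(B) - sum(h_j(B, j) for j in range(N))) < 1e-12
```

The decomposition holds for any configuration, since each hit is attributed to exactly one serving BS.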
In order to solve $\sup_{B \in \mathcal{B}} \sum_{j=1}^N h_j(B)$, we propose to employ Gibbs sampling techniques
(see \cite[Chapter~$7$]{breamud99gibbs-sampling}). Let us assume that each BS maintains a {\em virtual} cache capable of
storing $K$ contents. The broad idea is that one can update the virtual cache contents in an iterative fashion
using Gibbs sampling. Whenever a content is requested from a BS not having the content in its
physical (real) cache, the BS will download it from the backhaul and, at
the same time, will decide to store it in the real cache depending on whether it is stored in its virtual cache or not.
We will update the virtual caches according to a stochastic iterative algorithm so that the steady state probability
of configuration $B$ becomes:
\begin{equation}\label{eqn:expression-for-gibbs-stationary-probability}
\pi_{\beta}(B):=\frac{e^{\beta h(B)}}{ \sum_{B^{'} \in \mathcal{B}} e^{\beta h(B^{'})} }:= \frac{e^{\beta h(B)}}{Z_{\beta}}
\end{equation}
Here $\beta$ is called the ``inverse temperature'' (the terminology is motivated by statistical physics), and
$Z_{\beta}$ is called the {\em partition function}.
Note that, $\lim_{\beta \rightarrow \infty} \sum_{B \in \arg \max_{B^{'} \in \mathcal{B}} h(B^{'})} \frac{e^{\beta h(B)}}{ \sum_{B^{'} \in \mathcal{B}} e^{\beta h(B^{'})} }=1$.
Hence, if we choose a configuration $B$ for all virtual caches with probability
$\pi_{\beta}(B)$, then, for sufficiently large $\beta$, the chosen configuration will belong to
$\arg \max_{B \in \mathcal{B}} h(B)$ with probability close to $1$. If the real cache configuration closely follows the virtual cache configuration,
we can achieve a near-optimal cache hit rate for the real caching system.
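This concentration of $\pi_{\beta}(\cdot)$ on the maximizer can be illustrated numerically; the three hit-rate values below are made up for a tiny configuration space.

```python
import math

# Hypothetical hit-rate values over a three-element configuration space.
h_vals = {"B1": 0.55, "B2": 0.735, "B3": 0.765}   # "B3" is the maximizer

def gibbs(beta):
    """The Gibbs distribution pi_beta(B) = exp(beta * h(B)) / Z_beta."""
    Z = sum(math.exp(beta * v) for v in h_vals.values())
    return {b: math.exp(beta * v) / Z for b, v in h_vals.items()}

# The mass on the maximizer grows towards 1 as beta increases.
assert gibbs(10.0)["B3"] < gibbs(50.0)["B3"] < gibbs(200.0)["B3"]
assert gibbs(200.0)["B3"] > 0.99
```

At $\beta=0$ the distribution is uniform; increasing $\beta$ trades exploration for concentration on the optimum.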
\subsection{Gibbs sampling approach for ``virtual'' cache update}
\label{subsection:modified-Gibbs-given-temperature}
Let us consider discrete time instants $t=0, 1,2 , \cdots$ when virtual cache contents are updated; this is different from the continuous
time $\tau$ used before. Let us denote the
configuration in all virtual caches in the network after the $t$-th decision instant by $V(t)$, where $V(t) \in \mathcal{B}$.
The Gibbs sampling algorithm simulates a discrete-time Markov chain $V(t)$ on state space
$\mathcal{B}$, whose stationary probability
distribution is given by
$\pi_{\beta}(B)= \frac{e^{\beta h(B)}}{Z_{\beta}}$.
Let us define the set of {\em neighbours} of BS~$j$ ({\bf including BS~$j$}) as
$\Psi(j):=\{n: n \in \mathcal{N}, \mathcal{C}_j \cap \mathcal{C}_n \neq \emptyset \}$. Let us denote by $B_{\cdot,-j}$ the restriction
of configuration $B$ to all BSs except BS~$j$, i.e., $B_{\cdot,-j}$ is obtained by deleting the $j$-th column of $B$. Let
$\pi_{\beta}(\cdot | B_{\cdot,-j})$ denote the conditional distribution of the network-wide configuration conditioned on $B_{\cdot,-j}$,
under the joint distribution $\pi_{\beta}(\cdot)$. Clearly, $\pi_{\beta}(A | B_{\cdot,-j})=0$ if $A_{\cdot,-j} \neq B_{\cdot,-j}$.
If $A_{\cdot,-j} = B_{\cdot,-j}$, then
\begin{equation}\label{eqn:first-expression-for-conditional-probability}
\pi_{\beta}(A | B_{\cdot,-j})=\frac{e^{\beta h( A )}}{\sum_{v_j \in \{0,1\}^M, ||v_j||_1=K} e^{\beta h( v_j,B_{\cdot,-j} )}}
\end{equation}
where $||v_j||_1$ is the sum of all components of the vector $v_j$.
Note that there is a common factor $ e^{\beta \sum_{n \notin \Psi(j)} h_n (B) }$ in both the numerator and the denominator
of the expression in \eqref{eqn:first-expression-for-conditional-probability}, since this term does
not depend on the contents in the virtual cache of BS~$j$. Hence, \eqref{eqn:first-expression-for-conditional-probability}
can be further simplified as:
\begin{equation}\label{eqn:second-expression-for-conditional-probability}
\pi_{\beta}(A | B_{\cdot,-j})=\frac{e^{\beta \sum_{n \in \Psi(j)}h_n(A)}}{\sum_{v_j \in \{0,1\}^M, ||v_j||_1=K} e^{\beta \sum_{n \in \Psi(j)}h_n(v_j,B_{\cdot,-j} ) }}
\end{equation}
Now, let us define $h_n(A,s)$ to be the hit rate seen by BS~$n$
under configuration $A$ due to the content requests generated from the region $\mathcal{C}(s)$. Clearly,
$h_n(A)=\sum_{s \in 2^{\mathcal{N}}} h_n(A,s)$, since the hit rate at BS~$n$ under configuration $A$ is equal to the sum of hit rates
due to requests generated from all possible segments $\{\mathcal{C}(s)\}_{s \in 2^{\mathcal{N}}}$. Now, note that the term
$e^{\beta \sum_{n \in \Psi(j)} \sum_{s: j \notin s} h_n(A,s)}$ is a common factor in the numerator and denominator of the expression
in \eqref{eqn:second-expression-for-conditional-probability}, since
this factor does not depend on the contents in the virtual cache of BS~$j$. Hence, when
$A_{\cdot,-j} = B_{\cdot,-j}$, we can simplify \eqref{eqn:second-expression-for-conditional-probability} further as follows:
\small
\begin{equation}\label{eqn:third-expression-for-conditional-probability}
\pi_{\beta}(A | B_{\cdot,-j})=\frac{e^{\beta \sum_{n \in \Psi(j), s \ni j }h_n(A,s)}}{\sum_{v_j \in \{0,1\}^M, ||v_j||_1=K} e^{\beta \sum_{n \in \Psi(j), s \ni j}h_n(v_j,B_{\cdot,-j}, s ) }}
\end{equation}
\normalsize
where
\begin{equation}\label{eqn:hnA_definition}
h_n( A,s )= \sum_{i=1}^M \lambda_i
\frac{ |\mathcal{C}(s)| A_{i,n} \mathbf{1}\{n \in s\} }{ \max\{1, \sum_{k \in s} A_{i,k} \} }
\end{equation}
We now describe an algorithm for sequentially updating the network-wide virtual cache configuration $V(t)$.
\begin{algorithm}\label{algorithm:virtual-cache-update-basic-gibbs-sampling}
Start with an arbitrary $V(0) \in \mathcal{B}$.
At discrete time $t$, pick a node $j_t$ uniformly at random from $\mathcal{N}$.
Then, update the contents in the virtual cache of BS~$j_t$ by picking up a network-wide virtual cache configuration $A \in \mathcal{B}$
with probability $\pi_{\beta}(A | V_{\cdot,-j_t}(t-1))$. Note that, only contents in the virtual cache of BS~$j_t$ are modified
by this operation.
\hfill \ensuremath{\Box}
\end{algorithm}
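A minimal sketch of Algorithm~\ref{algorithm:virtual-cache-update-basic-gibbs-sampling} is given below on a hypothetical toy instance (all parameter values are made up); the local term from \eqref{eqn:third-expression-for-conditional-probability} is computed by brute force over the coverage segments.

```python
import math
import random
from itertools import combinations

# Hypothetical toy instance: N = 2 BSs, M = 2 contents, cache size K = 1.
N, M, K, beta = 2, 2, 1, 2.0
lam = [0.055, 0.045]
area = {frozenset([0]): 1.0, frozenset([0, 1]): 5.0, frozenset([1]): 4.0}

def local_term(B, j):
    """sum over n in Psi(j) and segments s containing j of h_n(B, s):
    the only part of h(B) that depends on the virtual cache of BS j."""
    total = 0.0
    for s, a in area.items():
        if j not in s:
            continue
        for n in s:                          # h_n(B, s) = 0 unless n in s
            for i in range(M):
                if B[i][n]:
                    cov = sum(B[i][k] for k in s)
                    total += lam[i] * a / max(1, cov)
    return total

def gibbs_step(B):
    """One step of Algorithm 1: resample the cache of a uniformly chosen BS."""
    j = random.randrange(N)
    options, weights = [], []
    for cache in combinations(range(M), K):  # feasible K-subsets of contents
        cand = [row[:] for row in B]
        for i in range(M):
            cand[i][j] = 1 if i in cache else 0
        options.append(cand)
        weights.append(math.exp(beta * local_term(cand, j)))
    return random.choices(options, weights=weights)[0]

random.seed(0)
V = [[1, 1], [0, 0]]                         # both BSs start with content 0
for _ in range(200):
    V = gibbs_step(V)
# every column still holds exactly K contents
assert all(sum(V[i][j] for i in range(M)) == K for j in range(N))
```

Note that the unnormalized weights require only the local term, so the partition function $Z_{\beta}$ is never computed.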
\begin{theorem}
Under Algorithm~\ref{algorithm:virtual-cache-update-basic-gibbs-sampling},
$\{V(t)\}_{t \geq 0}$ is a reversible Markov chain, and it achieves the steady-state
probability distribution $\pi_{\beta}(B)= \frac{e^{\beta h(B)}}{Z_{\beta}}$.
\end{theorem}
\begin{proof}
The proof is standard, and it follows from the theory in \cite[Chapter~$7$]{breamud99gibbs-sampling}.
\end{proof}
\begin{remark}
In Algorithm~\ref{algorithm:virtual-cache-update-basic-gibbs-sampling}, in
order to make an update at time $t$, node $j_t$ needs to know the contents of the virtual caches only
from $\Psi(j_t)$. This requires information exchange in each slot. Such information exchange
can use the backhaul network, and this does not exert much load on the backhaul since the actual contents
are not exchanged via the backhaul in this process.
\end{remark}
\begin{remark}
The denominator in the simplified sampling probability expression in \eqref{eqn:third-expression-for-conditional-probability}
requires a summation over only the ${{M}\choose{K}}$ possible virtual cache configurations of BS~$j_t$ (evaluated using
the cache contents of the BSs in $\Psi(j_t)$). This allows the system
to avoid the huge combinatorial problem of calculating $Z_{\beta}$, which requires
${{M}\choose{K}}^N$ addition operations. The advantage will be even more visible if we consider
the possibility of varying $\beta$ with time or learning $\{\lambda_i\}_{1 \leq i \leq M}$ over time if they are not
known; the optimization problem $\sup_{B \in \mathcal{B}} h(B)$ will change over time in this case, and it will require
calculation of the partition function in each slot.
\end{remark}
\subsection{The real cache update scheme for fixed $\beta$}
\label{subsection:real-cache-update}
Now we propose a cache update scheme for the real caches present in the BSs. {\em Our scheme decides to store
a content in the cache of a BS only when the content is requested from that BS}. This eliminates any
unnecessary download from the backhaul.
Let us consider content request arrivals at continuous time (denoted by $\tau$ again) to the BS. Let us recall that the
virtual caches are updated only at discrete times $t=0,1,2,\cdots$. We assume that these discrete time instants
$t=0,1,2,\cdots$ units are superimposed on the continuous time axis $\tau \geq 0$. Hence, $V(\tau)$ is defined to be equal to
$V(t)$ for $\tau \in [t,t+1)$, where $t \in \mathbb{Z}_{+}$.
Let us consider an increasing sequence of positive real numbers (viewed as time durations)
$T_1,T_2, T_3,\cdots$ such that $T_k \uparrow \infty$ as $k \rightarrow \infty$. Let $S_l:=T_1+T_2+\cdots+T_l$.
Let $\kappa(\tau):=\sup \{l \in \mathbb{Z}_{+}: S_l \leq \tau \} $ and
$\zeta(\tau):=S_{\kappa(\tau)}$.
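This timing bookkeeping can be sketched as follows; the epoch lengths are made up, and we take $S_0=0$ so that $\kappa(\tau)=0$ before the first boundary.

```python
# Hypothetical increasing epoch lengths T_1, T_2, ...
T = [2.0, 4.0, 8.0, 16.0]

S = []                              # partial sums S_l = T_1 + ... + T_l
acc = 0.0
for t in T:
    acc += t
    S.append(acc)                   # S = [2.0, 6.0, 14.0, 30.0]

def kappa(tau):
    """kappa(tau) = sup { l : S_l <= tau }, with kappa(tau) = 0 below S_1."""
    return max([0] + [l + 1 for l, s in enumerate(S) if s <= tau])

def zeta(tau):
    """zeta(tau) = S_kappa(tau): the most recent epoch boundary."""
    k = kappa(tau)
    return S[k - 1] if k >= 1 else 0.0

assert (kappa(7.5), zeta(7.5)) == (2, 6.0)
assert (kappa(1.0), zeta(1.0)) == (0, 0.0)
```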
The real cache update scheme is given as follows:
\begin{algorithm}\label{algorithm:real-cache-update-algorithm}
Start with some arbitrary $R(0) \in \mathcal{B}$.
At time $\tau$, if the
request for content~$i$ arrives at BS~$j$ (either because no other covering BS has this content or because
BS~$j$ has been chosen from among the covering BSs having content~$i$), then BS~$j$ does the following:
If BS~$j$ has content~$i$, it will serve that.
If BS~$j$ does not have content~$i$, it serves the content by downloading from the backhaul. Then content~$i$
is stored in the real cache of BS~$j$ if and only if $V_{i,j}(\zeta(\tau)-)=1$ (i.e., if content~$i$ was stored in the virtual cache
of BS~$j$ at time $\zeta(\tau)-$). If the BS~$j$
decides to store content~$i$ then,
in order to make room for the newly stored content~$i$, any content~$k \neq i$
such that $V_{k,j}(\zeta(\tau)-)=0$ and $R_{k,j}(\tau)=1$, is removed
from the real cache of BS~$j$. \hfill \ensuremath{\Box}
\end{algorithm}
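The per-request logic of Algorithm~\ref{algorithm:real-cache-update-algorithm} can be sketched as follows; the names \texttt{serve\_request} and \texttt{V\_snapshot} and the toy matrices are hypothetical, with \texttt{V\_snapshot} standing for $V(\zeta(\tau)-)$.

```python
def serve_request(R, V_snapshot, i, j):
    """Serve a request for content i at BS j and update BS j's real cache.

    R and V_snapshot are M x N 0/1 matrices; R is modified in place."""
    if R[i][j] == 1:
        return "hit"
    # Cache miss: the content is downloaded from the backhaul; it is then
    # stored iff the virtual cache held content i at time zeta(tau)-.
    if V_snapshot[i][j] == 1:
        R[i][j] = 1
        # Evict any cached content the virtual snapshot no longer holds.
        for k in range(len(R)):
            if k != i and R[k][j] == 1 and V_snapshot[k][j] == 0:
                R[k][j] = 0
    return "miss"

R = [[1, 0], [0, 1]]            # real caches (content x BS)
V = [[0, 0], [1, 1]]            # virtual snapshot V(zeta(tau)-)
assert serve_request(R, V, 1, 0) == "miss"
assert R == [[0, 0], [1, 1]]    # BS 0 stored content 1, evicted content 0
```

In this sketch the real cache drifts towards the virtual snapshot one requested content at a time, so no download ever happens for a content that was not requested.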
\begin{remark}
The idea behind taking $T_k \rightarrow \infty$ as $k \rightarrow \infty$ in Algorithm~\ref{algorithm:real-cache-update-algorithm}
is as follows. We know that $V(t)$ reaches the distribution
$\pi_{\beta}(\cdot)$ as $t \rightarrow \infty$. As $k \rightarrow \infty$, the fraction of time spent during $\tau \in [S_k, S_{k+1})$
in copying the contents present in $V(S_k-)$ to real
caches becomes negligible, and the real caches are allowed to operate for a larger and larger fraction of time under a
content distribution close to $\pi_{\beta}(\cdot)$.
\end{remark}
Now we make the following assumption:
\begin{assumption}\label{assumption:each-cell-has-a-region-covered-only-by-itself}
$| \mathcal{C}_i \cap ( \cup_{j \neq i} \mathcal{C}_j )^{c}|>0$ for all $i \in \{1,2,\cdots,N\}$.
\end{assumption}
\begin{theorem}\label{theorem:asymptotic-optimality-of-real-cache-update}
Under Assumption~\ref{assumption:each-cell-has-a-region-covered-only-by-itself},
Algorithm~\ref{algorithm:virtual-cache-update-basic-gibbs-sampling} and
Algorithm~\ref{algorithm:real-cache-update-algorithm}, we have (for the real caches in all BSs):
$$\lim_{T \rightarrow \infty}\frac{ \int_0^T \mathbf{P}(R(\tau)=B) d \tau }{T}=
\frac{e^{\beta h(B)}}{ \sum_{B^{'} \in \mathcal{B}} e^{\beta h(B^{'})} }$$
\end{theorem}
\begin{proof}
Fix a small $\epsilon>0$. Under configuration $B$ of the {\em virtual caches},
let $T_B$ (a generic random variable) denote the total time
taken by the arrival process until, for every pair $(i,j) \in \{1,2,\cdots,M\} \times \{1,2,\cdots,N\}$
for which virtual configuration~$B$ suggests placing content~$i$ at BS~$j$,
there is at least one arrival of a request for content~$i$ at BS~$j$; clearly $\mathbf{E}(T_B)<\infty$, since we have made
Assumption~\ref{assumption:each-cell-has-a-region-covered-only-by-itself}.
Let us consider $l \in \mathbb{Z}_{+}$ large enough such that:
(i) $\sum_{B \in \mathcal{B}} |\mathbf{P}(V(t)=B)-\pi_{\beta}(B)|<\epsilon$ for all integer $t \geq S_{l-1}$,
(ii) $\mathbf{P}(T_B>\epsilon T_{l+1}) < \epsilon$.
Now,
\footnotesize
\begin{eqnarray}
&& \frac{ \int_{S_l}^{S_{l+1}} \mathbf{P}(R(\tau)=B) d \tau }{T_{l+1}} \nonumber\\
& \geq & \frac{ \mathbf{E} \int_{\min\{S_l+T_B^{'},S_{l+1} \} }^{S_{l+1}} \mathbf{P}(R(\tau)=B) d \tau }{T_{l+1}} \nonumber\\
& \geq & \frac{ (1-\epsilon) \int_{S_l+\epsilon T_{l+1} }^{S_{l+1}} \mathbf{P}(R(\tau)=B| T_B^{'} \leq \epsilon T_{l+1}) d \tau }{T_{l+1}} \nonumber\\
& = & \frac{ (1-\epsilon) (T_{l+1}-\epsilon T_{l+1}) \mathbf{P}(V(S_l-)=B) }{T_{l+1}} \nonumber\\
& \geq & (1-\epsilon)^2 (\pi_{\beta}(B)-\epsilon) \nonumber\\
\end{eqnarray}
\normalsize
where $T_B^{'}$ has the same distribution as $T_B$.
Hence,
\begin{eqnarray*}
&& \liminf_{T \rightarrow \infty} \frac{ \int_0^T \mathbf{P}(R(\tau)=B) d \tau }{T} \\
&=& \liminf_{T \rightarrow \infty} \frac{ \int_{S_l}^T \mathbf{P}(R(\tau)=B) d \tau }{T-S_l} \\
&\geq & (1-\epsilon)^2 (\pi_{\beta}(B)-\epsilon)
\end{eqnarray*}
Since $\epsilon>0$ is arbitrarily small, we have:
$$\liminf_{T \rightarrow \infty} \frac{ \int_0^T \mathbf{P}(R(\tau)=B) d \tau }{T} \geq \pi_{\beta}(B).$$
But, by Fatou's lemma,
\begin{eqnarray*}
&& \sum_{B \in \mathcal{B}}\liminf_{T \rightarrow \infty} \frac{ \int_{0}^{T} \mathbf{P}(R(\tau)=B) d \tau }{T} \\
& \leq & \liminf_{T \rightarrow \infty} \sum_{B \in \mathcal{B}} \frac{ \int_{0}^{T} \mathbf{P}(R(\tau)=B) d \tau }{T} \\
&=& 1
\end{eqnarray*}
and
$\sum_{B \in \mathcal{B}} \pi_{\beta}(B)=1$. Hence, we must have
$\liminf_{T \rightarrow \infty} \frac{ \int_{0}^{T} \mathbf{P}(R(\tau)=B) d \tau }{T} = \pi_{\beta}(B)$ for all $B \in \mathcal{B}$.
On the other hand,
$$ \limsup_{T \rightarrow \infty} \frac{ \int_0^T \mathbf{P}(R(\tau)=B) d \tau }{T}
= \limsup_{l \rightarrow \infty} \frac{ \int_{S_l}^{S_{l+1}} \mathbf{P}(R(\tau)=B) d \tau }{T_{l+1}} $$
Now,
\small
\begin{eqnarray*}
&& \frac{ \int_{S_l}^{S_{l+1}} \mathbf{P}(R(\tau)=B) d \tau }{T_{l+1}} \\
& \leq & \frac{ \int_{ S_l+ \epsilon T_{l+1} }^{S_{l+1}} \mathbf{P}(R(\tau)=B ) d \tau + \epsilon T_{l+1} }{T_{l+1}} \\
& \leq & \frac{ \int_{ S_l+ \epsilon T_{l+1} }^{S_{l+1}} \mathbf{P}(R(\tau)=B | T_B^{'} \leq \epsilon T_{l+1}) d \tau + 2 \epsilon T_{l+1} }{T_{l+1}} \\
& \leq & \frac{ \int_{ S_l+ \epsilon T_{l+1} }^{S_{l+1}} \mathbf{P}(V(S_l-)=B) d \tau + 2 \epsilon T_{l+1} }{T_{l+1}} \\
&\leq & \pi_{\beta} (B) + 3 \epsilon
\end{eqnarray*}
\normalsize
where the second inequality follows from the fact that for $\tau \in [S_l+ \epsilon T_{l+1}, S_{l+1})$, the following holds:
\begin{eqnarray*}
&& \mathbf{P}(R(\tau)=B ) \\
&\leq & \mathbf{P}(R(\tau)=B | T_B^{'} \leq \epsilon T_{l+1}) \\
&& + \mathbf{P}(T_B^{'} > \epsilon T_{l+1}) \mathbf{P}(R(\tau)=B | T_B^{'} > \epsilon T_{l+1}) \\
& \leq & \mathbf{P}(R(\tau)=B | T_B^{'} \leq \epsilon T_{l+1}) + \epsilon
\end{eqnarray*}
Since $\epsilon>0$ is arbitrarily small, we can say that
$$\limsup_{T \rightarrow \infty} \frac{ \int_0^T \mathbf{P}(R(\tau)=B) d \tau }{T} \leq \pi_{\beta}(B)$$
Hence, $\lim_{T \rightarrow \infty} \frac{ \int_0^T \mathbf{P}(R(\tau)=B) d \tau }{T} = \pi_{\beta}(B)$. \end{proof}
\begin{remark}
Note that, Assumption~\ref{assumption:each-cell-has-a-region-covered-only-by-itself} is very crucial in the proof of
Theorem~\ref{theorem:asymptotic-optimality-of-real-cache-update}, because this assumption ensures that every BS
gets content requests at some nonzero arrival rate, and hence can update its real cache at strictly positive rate.\footnote{If
Assumption~\ref{assumption:each-cell-has-a-region-covered-only-by-itself} is not satisfied, then one can still achieve near optimal hit rate
in real caches. It is achieved under a scheme where a new content request is sent to any of its covering BSs with very small probability $\eta>0$,
and
otherwise the request is sent to a covering BS having that content. Similar analysis as in this paper can show that the time-average
expected hit rate under this scheme differs from the optimal hit rate $\max_{B \in \mathcal{B}} h(B)$ only by a small margin which goes to $0$ as
$\eta \downarrow 0$.}
\end{remark}
\section{Varying $\beta$ to Reach Optimality}\label{section:varying-inverse-temperature}
In this section, we discuss how to vary the inverse temperature $\beta$ to infinity with time so that the
Gibbs sampling algorithm (used to
update virtual caches)
converges to the optimizer of \eqref{eqn:objective-function}. The intuition is that this, combined with
Algorithm~\ref{algorithm:real-cache-update-algorithm} used for real cache update, will achieve
optimal time-average expected cache hit rate for problem \eqref{eqn:asymptotic-target}.
Let us define $$\Delta:=\max_{B \in \mathcal{B}}h(B)-\min_{B^{'} \in \mathcal{B}}h(B^{'})>0$$
\begin{algorithm}\label{algorithm:virtual-cache-update-varying-inverse-temperature}
This algorithm is analogous to Algorithm~\ref{algorithm:virtual-cache-update-basic-gibbs-sampling} except that,
at discrete time
instant $t=lN,lN+1,\cdots, lN+N-1$, we use $\beta_t:=\beta_0 \log (1+l)$ instead of a fixed $\beta$, where
$0< \beta_0 < \infty$ is the initial inverse temperature satisfying $\beta_0 N \Delta<1$ and
$\beta_0 \max_{B \in \mathcal{B}}h(B)<1$.\hfill \ensuremath{\Box}
\end{algorithm}
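The cooling schedule of Algorithm~\ref{algorithm:virtual-cache-update-varying-inverse-temperature} can be sketched as follows; the values of $\Delta$ and $\max_{B}h(B)$ are made up, and any $\beta_0$ satisfying the two stated conditions works.

```python
import math

N = 2
Delta = 0.215                        # max h - min h (hypothetical value)
h_max = 0.765                        # max_B h(B) (hypothetical value)
# beta_0 must satisfy both beta_0 * N * Delta < 1 and beta_0 * h_max < 1.
beta_0 = 0.9 * min(1 / (N * Delta), 1 / h_max)

def beta_at(t):
    """Inverse temperature at slot t: beta_0 * log(1 + l) throughout the
    l-th period of N slots."""
    l = t // N
    return beta_0 * math.log(1 + l)

assert beta_at(0) == 0.0             # period l = 0
assert beta_at(N) < beta_at(2 * N)   # the schedule increases with l
assert beta_0 * N * Delta < 1 and beta_0 * h_max < 1
```

The logarithmic growth is exactly what makes $\sum_l (1+l)^{-\beta_0 N \Delta}$ diverge, which is what the weak-ergodicity argument below needs.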
\begin{theorem}\label{theorem:strong-ergodicity-varying-inverse-temperature}
Under Algorithm~\ref{algorithm:virtual-cache-update-varying-inverse-temperature} for virtual cache update,
the discrete time non-homogeneous Markov chain $\{V(t)\}_{t \geq 0}$
is strongly ergodic, and the limiting distribution $\pi_{v,\infty}$ satisfies $\pi_{v,\infty}(\arg \max_{B \in \mathcal{B}} h(B))=1$.
\end{theorem}
\begin{proof}
In this proof, we will use the notion of weak and strong ergodicity of time-inhomogeneous Markov chains from
\cite[Chapter~$6$, Section~$8$]{breamud99gibbs-sampling}, which is provided in the appendix.
We will first show that the Markov chain $\{V(t)\}_{t \geq 0}$ is weakly ergodic.
Let us consider the transition probability matrix (t.p.m.) $Q_l$ for the inhomogeneous Markov
chain $\{Y(l)\}_{l \geq 0}$ (where $Y(l):=\{V(lN), V(lN+1),\cdots,V(lN+N-1)\}$) indexed by the period index
$l$ (when $\beta_l=\beta_0 \log(1+l)$).
Then, Dobrushin's ergodic coefficient $\delta(Q_l)$ is given by
(see \cite[Chapter~$6$, Section~$7$]{breamud99gibbs-sampling} for definition)
$\delta(Q_l)=1- \inf_{\mathbf{B^{'}},\mathbf{B^{''}} \in \mathcal{B}^N} \sum_{\mathbf{B} \in \mathcal{B}^N} \min \{Q_l(\mathbf{B^{'}},\mathbf{B}),Q_l(\mathbf{B^{''}},\mathbf{B}) \}$.
The Markov chain is weakly ergodic if $\sum_{l=1}^{\infty}(1-\delta(Q_l))=\infty$ (by
\cite[Chapter~$6$, Theorem~$8.2$]{breamud99gibbs-sampling}).
Now, with positive probability, virtual caches in all nodes are updated over a period of $N$ slots. Hence, any $\mathbf{B} \in \mathcal{B}^N$
can be reached over a period of
$N$ slots, starting from any other $\mathbf{B^{'}} \in \mathcal{B}^N$. Note that, once a node $j_t$
is chosen in Algorithm~\ref{algorithm:virtual-cache-update-basic-gibbs-sampling}, the sampling probability for any set of contents in
its virtual cache in a slot
is lower bounded by $\frac{e^{-\beta_l \Delta}}{{{M}\choose{K}}}$.
Hence, accounting for the uniform node choice and the sampling in each of the $N$ slots, we will always have
$Q_l(\mathbf{B^{'}},\mathbf{B}) > \bigg( \frac{e^{-\beta_l \Delta}}{N{{M}\choose{K}}} \bigg)^N >0$ for all pairs $\mathbf{B^{'}},\mathbf{B}$.
Hence,
\small
\begin{eqnarray}
&&\sum_{l=0}^{\infty}(1-\delta(Q_l)) \nonumber\\
&=& \sum_{l=0}^{\infty} \inf_{\mathbf{B^{'}},\mathbf{B^{''}} \in \mathcal{B}^N} \sum_{\mathbf{B} \in \mathcal{B}^N} \min \{Q_l(\mathbf{B^{'}},\mathbf{B}),Q_l(\mathbf{B^{''}},\mathbf{B}) \} \nonumber\\
& \geq & \sum_{l=0}^{\infty} \bigg( \frac{e^{-\beta_0 \log(1+l) \times \Delta}}{N{ {M}\choose{K} }} \bigg)^N \nonumber\\
& = & \frac{1}{(N{ {M}\choose{K} })^N} \sum_{l=0}^{\infty} e^{- N \Delta \beta_0 \log(1+l)} \nonumber\\
& = & \frac{1}{(N{ {M}\choose{K} })^N} \sum_{l=0}^{\infty} \frac{1}{ (1+l)^{\beta_0 N \Delta}} \nonumber\\
& = & \infty
\end{eqnarray}
\normalsize
Here the last step follows from the fact that $\sum_{l=1}^{\infty} \frac{1}{l^a}$ diverges for $0 <a<1$.
Hence, the Markov chain $\{Y(l)\}_{l \geq 0}$
is weakly ergodic. Hence, $\{V(t)\}_{t \geq 0}$ is also a weakly ergodic Markov chain.
Now we will use \cite[Chapter~$6$, Theorem~$8.3$]{breamud99gibbs-sampling} to prove strong ergodicity of
$\{V(t)\}_{t \geq 0}$.
Let us denote the t.p.m. of $\{V(t)\}_{t \geq 0}$ at a specific time $t=T$
by $Q^{(T)}$ (a specific matrix). If the Markov chain $\{V(t)\}_{t \geq 0}$ is allowed to evolve up to infinite time
with {\em fixed} t.p.m. $Q^{(T)}$, then we will get stationary distribution $\pi_{\beta_T}(B)= \frac{e^{\beta_T h(B)}}{Z_{\beta_T}}$.
This satisfies Condition~$8.9$ of \cite[Chapter~$6$, Theorem~$8.3$]{breamud99gibbs-sampling}.
Now we will check Condition~$8.10$ of \cite[Chapter~$6$, Theorem~$8.3$]{breamud99gibbs-sampling}.
For any $B \in \arg \max_{B^{'} \in \mathcal{B}} h(B^{'})$, it is easy to see that $\pi_{\beta_T}(B)$ increases with $T$ for
large $T$ (as can be seen by considering the derivative of $\pi_{\beta}(B)$ with respect to $\beta$). For all other configurations $B$,
$\pi_{\beta_T}(B)$ decreases with $T$ for large $T$.
Hence, $\sum_{T=0}^{\infty} \sum_{B \in \mathcal{B}} |\pi_{\beta_{T+1}}(B)-\pi_{\beta_T}(B)| < \infty$.
Hence, by \cite[Chapter~$6$, Theorem~$8.3$]{breamud99gibbs-sampling}, $\{V(t)\}_{t \geq 0}$ is strongly ergodic.
The expression for the limiting distribution is straightforward to derive.
\end{proof}
\begin{theorem}\label{theorem:asymptotic-optimal-real-cache-update-varying-inverse-temperature}
Under Assumption~\ref{assumption:each-cell-has-a-region-covered-only-by-itself},
Algorithm~\ref{algorithm:virtual-cache-update-varying-inverse-temperature} for virtual cache update and
Algorithm~\ref{algorithm:real-cache-update-algorithm} for real cache update, we have:
$$\lim_{T \rightarrow \infty}\frac{ \int_0^T \mathbf{P}(R(\tau) \in \arg \max_{B \in \mathcal{B}} h(B)) d \tau }{T}=1$$
and hence,
$$\lim_{T \rightarrow \infty}\frac{ \int_0^T \mathbf{E}(h(R(\tau))) d \tau }{T}= \max_{B \in \mathcal{B}} h(B)$$
\end{theorem}
\begin{proof}
The first part of the proof follows using similar arguments as in the proof of Theorem~\ref{theorem:asymptotic-optimality-of-real-cache-update}.
The second part follows from the first part using the fact that $\mathbf{E}(h(R(\tau)))=\sum_{B \in \mathcal{B}} \mathbf{P}(R(\tau)=B) h(B)$.
\end{proof}
\begin{remark}
From \cite[Figure~$3$]{bartek14caching}, we notice that independent placement of contents across BSs can significantly
outperform the placement of $K$ most popular contents in each BS cache (for a Poisson distributed network). However, our proposed scheme yields the optimal
hit rate for every realization of the location of BSs, so long as the number of BSs is finite. Hence, we can safely claim that our
proposed scheme significantly outperforms the placement of $K$ most popular contents in each BS cache.
\end{remark}
\subsection{Convergence rate of the virtual cache update scheme}\label{subsection:convergence-speed}
While we are not aware of any closed-form bound on the convergence rate
for Algorithm~\ref{algorithm:virtual-cache-update-varying-inverse-temperature}, by using
\cite[Chapter~$6$, Theorem~$7.2$]{breamud99gibbs-sampling}, we can provide a convergence rate guarantee for
Algorithm~\ref{algorithm:virtual-cache-update-basic-gibbs-sampling}. Let us consider
the Markov
chain $Y(l):=\{V(lN), V(lN+1),\cdots,V(lN+N-1)\}, l \geq 0$ (indexed by the period index
$l$) evolving under
Algorithm~\ref{algorithm:virtual-cache-update-basic-gibbs-sampling}. Then, by \cite[Chapter~$6$, Theorem~$7.2$]{breamud99gibbs-sampling},
the total variation distance between $\mu_l$ (i.e., the probability distribution of $Y(l)$) and the steady state distribution $\mu$
(with marginal distribution equal to $\pi_{\beta}(\cdot)$) is upper bounded as:
\small
$$ d_V(\mu_l,\mu) \leq d_V(\mu_0,\mu) (\delta(Q))^l \leq d_V(\mu_0,\mu) \bigg( 1-\bigg(\frac{e^{-\beta \Delta}}{N{ {M}\choose{K} }}\bigg)^N \bigg)^l $$
where $Q$ is the transition probability matrix (t.p.m.)
of the homogeneous Markov chain $Y(l)$ under Algorithm~\ref{algorithm:virtual-cache-update-basic-gibbs-sampling}.
\normalsize
Clearly, the R.H.S. of the above bound increases with $\beta$. Hence, under
Algorithm~\ref{algorithm:virtual-cache-update-varying-inverse-temperature},
we can expect a slower convergence rate as time increases. It has to be noted that there is a trade-off
between the convergence rate and the accuracy
of the virtual cache update scheme using Gibbs sampling; higher accuracy (obtained by taking a very large $\beta$)
requires a longer running time because of the slower convergence rate. The bound also suggests that the rate of convergence decreases
with $N$ (provided that other parameters such as $\beta$ and $\Delta$ are fixed).
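These trade-offs can be checked numerically against the bound above; all parameter values in this sketch are hypothetical.

```python
import math

# delta(Q) <= 1 - (exp(-beta * Delta) / (N * C(M, K)))**N, and
# d_V(mu_l, mu) <= d_V(mu_0, mu) * delta**l.
def contraction(N, M, K, beta, Delta):
    return 1 - (math.exp(-beta * Delta) / (N * math.comb(M, K))) ** N

N, M, K, Delta = 2, 2, 1, 0.215      # hypothetical network parameters
d0 = 1.0                             # d_V(mu_0, mu) <= 1 always

delta = contraction(N, M, K, beta=2.0, Delta=Delta)
assert 0 < delta < 1                          # geometric contraction
assert d0 * delta ** 100 < d0 * delta ** 10   # the bound shrinks with l
# A larger beta gives a contraction factor closer to 1 (slower guarantee):
assert contraction(N, M, K, beta=10.0, Delta=Delta) > delta
```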
\section{Learning Content Popularities and Cell Topology}\label{section:learning-popularities}
In previous sections, we assumed that the content request arrival rates per unit area, $\lambda_1,\lambda_2,\cdots,\lambda_M$, and the
areas $|\mathcal{C}(s)|, s \in 2^{\mathcal{N}}$ are known
to all BSs. But, in practice, these quantities may not be known apriori, and one has to estimate these quantities
over time as new content requests arrive to the system. In this section, we will extend
Algorithm~\ref{algorithm:virtual-cache-update-varying-inverse-temperature} to adapt to the learning of these quantities.
At time slot $t$, the BS $j_t$ (uniformly chosen from the set of BSs) chooses its virtual cache contents in such a way that
the probability of choosing network-wide configuration $A$ at time $t$, $lN \leq t \leq lN+N-1$, is
$\pi_{\beta_t}(A | V_{\cdot,-j_t}(t-1))$.
Let us recall the expression for $h_n(A,s)$ from \eqref{eqn:hnA_definition}.
Clearly, if one can estimate $\lambda_i |\mathcal{C}(s)|$ for all possible $(i,s) \in \mathcal{M} \times 2^{\mathcal{N}}$,
then one can have an estimate of $h_n(A,s)$. This can be done by estimating the request arrival rate for content~$i$ from
the region $\mathcal{C}(s)$; this is easy to do because this is a time-homogeneous Poisson process with rate
$\lambda_i |\mathcal{C}(s)|$ requests per unit time.
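For instance, the empirical rate (number of requests for content~$i$ from $\mathcal{C}(s)$ in $[0,t]$ divided by $t$) converges almost surely to $\lambda_i |\mathcal{C}(s)|$. A simulated sanity check of such an estimator is sketched below; the true rate is made up.

```python
import random

random.seed(1)
rate = 0.3                           # true lambda_i * |C(s)| (hypothetical)

def count_arrivals(rate, horizon):
    """Count the points of a simulated Poisson process in [0, horizon]."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate)    # i.i.d. Exp(rate) inter-arrivals
        if t > horizon:
            return n
        n += 1

horizon = 20000.0
theta_hat = count_arrivals(rate, horizon) / horizon
assert abs(theta_hat - rate) < 0.03      # empirical rate is close to the truth
```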
Let us assume that each BS~$k$ has an estimate $\hat{\theta}(k,i,s,t)$ for $\lambda_i | \mathcal{C}(s) |$ in slot $t$.
This can be done through continuous
message exchange among the BSs which observe the content request arrival process over time.
Now we present the virtual cache update algorithm.
\begin{algorithm}\label{algorithm:virtual-cache-update-learning}
This algorithm is the same as Algorithm~\ref{algorithm:virtual-cache-update-varying-inverse-temperature} except that
the estimate $\hat{\theta}(k,i,s,t)$ is used at slot $t$ by BS~$k$, instead of the actual value of $\lambda_i | \mathcal{C}(s) |$.\hfill \ensuremath{\Box}
\end{algorithm}
\begin{assumption}\label{assumption:estimates-converge}
$\lim_{t \rightarrow \infty} \hat{\theta}(k,i,s,t)=\lambda_i | \mathcal{C}(s) |$ almost surely
for all $k \in \mathcal{N}, i \in \mathcal{M}, s \in 2^{\mathcal{N}}$.
\end{assumption}
\begin{assumption}\label{assumption:uniqueness-of-maximizer}
$\arg \max_{B \in \mathcal{B}}h(B)$ is unique.
\end{assumption}
\begin{theorem}\label{theorem:convergence-virtual-cache-update-learning}
Under Assumption~\ref{assumption:estimates-converge}, Assumption~\ref{assumption:uniqueness-of-maximizer} and
Algorithm~\ref{algorithm:virtual-cache-update-learning} for virtual cache update,
the discrete time non-homogeneous Markov chain $\{V(t)\}_{t \geq 0}$
is strongly ergodic, and the limiting distribution $\pi_{v,\infty}(\cdot)$ satisfies $\pi_{v,\infty}(\arg \max_{B \in \mathcal{B}} h(B))=1$.
\end{theorem}
\begin{proof}
Note that, at a given fixed time $t=T$, given the instantaneous values of the estimates, the instantaneous transition probability matrix
$Q^{(T)}$ of the Markov chain $\{V(t)\}_{t \geq 0}$
will have a stationary probability distribution. Also, since there exists exactly one configuration in the set
$\arg \max_{B \in \mathcal{B}}h(B)$ (Assumption~\ref{assumption:uniqueness-of-maximizer}), we can say that $\lim_{T \rightarrow \infty} |Q^{(T)}-Q^*|=0$, where $Q^*$ has a stationary
distribution which assigns probability $1$ to $\arg \max_{B \in \mathcal{B}}h(B)$, and $Q^*$ is strongly ergodic (by
Theorem~\ref{theorem:strong-ergodicity-varying-inverse-temperature}). Hence, by
\cite[Chapter~$6$, Theorem~$8.5$]{breamud99gibbs-sampling}, the Markov chain of virtual cache configuration $\{V(t)\}_{t \geq 0}$ is
strongly ergodic.
\end{proof}
\begin{theorem}\label{theorem:asymptotic-optimal-real-cache-update-varying-inverse-temperature-and-learning}
Under Assumption~\ref{assumption:each-cell-has-a-region-covered-only-by-itself}, Assumption~\ref{assumption:estimates-converge},
Assumption~\ref{assumption:uniqueness-of-maximizer},
Algorithm~\ref{algorithm:virtual-cache-update-learning} for virtual cache update and
Algorithm~\ref{algorithm:real-cache-update-algorithm} for real cache update, the conclusions of
Theorem~\ref{theorem:asymptotic-optimal-real-cache-update-varying-inverse-temperature} hold.
\end{theorem}
\begin{proof}
The proof is similar to that of Theorem~\ref{theorem:asymptotic-optimal-real-cache-update-varying-inverse-temperature}.
\end{proof}
\begin{figure}[!t]
\begin{center}
\includegraphics[height=5.5cm, width=8cm]{caching-plot.pdf}
\end{center}
\caption{Comparison of Gibbs sampling based caching strategy, independent content placement strategy, and most popular content
placement strategy, for
a network with two BSs, two possible contents, and storage capacity for exactly one content in each BS cache. Detailed system parameters
can be found in Section~\ref{section:numerical}.}
\label{fig:caching-plot}
\end{figure}
\section{Performance Improvement Using Gibbs Sampling: A Numerical Example}\label{section:numerical}
We consider two BSs placed along a line (say, the $x$~axis). The regions covered by BS~$1$ and BS~$2$ are $[0,6]$ and
$[1,10]$ respectively (note that our theory allows the cells to be asymmetric around the BSs). There are two contents
$\mathcal{M}=\{1,2\}$ with popularities
$0.55$ and $0.45$ respectively. Each BS can store at most one content (i.e., $K=1$).
Content requests are being generated over the region $[0,10]$ according to a time and space homogeneous
Poisson point process with intensity $\frac{1}{10}$ requests per unit time per unit length, and any new request is for content~$1$ or
content~$2$ with probabilities $0.55$ and $0.45$, respectively. The cache hit rate over the region $[0,10]$ is $0.55$ per unit time
if the most popular
content is placed in both BSs. The cache hit rate is maximized if BS~$1$ contains content~$2$ and BS~$2$ contains content~$1$, and
the cache hit rate is $\frac{1 \times 0.45 + 5 \times 1 + 4 \times 0.55}{10}=0.765$ hits per unit time in this case.
If each BS independently caches content~$1$ with probability $r$ and content~$2$ with probability $(1-r)$, the cache hit rate
is $0.55 r^2+0.45 (1-r)^2 + r(1-r) \frac{1 \times 0.55 + 5 \times 1 + 4 \times 0.45}{10}+ (1-r) r \frac{1 \times 0.45 + 5 \times 1 + 4 \times 0.55}{10}$,
which, upon maximization over $r$, becomes $0.63$ hits per unit time.
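The hit-rate values quoted above ($0.55$, $0.765$, and $0.63$ hits per unit time) can be reproduced with a short computation; the helper names below are hypothetical.

```python
# Segments of [0,10]: [0,1] covered only by BS 0, [1,6] by both BSs,
# [6,10] only by BS 1. Total request rate is 1 per unit time.
p = [0.55, 0.45]                       # content popularities
seg = {(0,): 1, (0, 1): 5, (1,): 4}    # covering BSs -> segment length

def hit_rate(cache):
    """cache[j] = the single content stored at BS j (K = 1)."""
    total = 0.0
    for bss, length in seg.items():
        stored = {cache[j] for j in bss}
        total += length * sum(p[i] for i in stored)
    return total / 10.0                # intensity 1/10 per unit length

assert abs(hit_rate((0, 0)) - 0.55) < 1e-12    # most popular content at both
assert abs(hit_rate((1, 0)) - 0.765) < 1e-12   # the optimal placement

# Best hit rate under independent placement (content 1 w.p. r at each BS):
best = max(r * r * hit_rate((0, 0)) + (1 - r) ** 2 * hit_rate((1, 1))
           + r * (1 - r) * (hit_rate((0, 1)) + hit_rate((1, 0)))
           for r in (k / 1000 for k in range(1001)))
assert abs(best - 0.63) < 1e-3
```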
If the
contents in both caches are chosen probabilistically according to the steady state Gibbs distribution $\pi_{\beta}(\cdot)$,
one can expect that the expected cache hit rate improves as $\beta$ increases, and converges to the maximum possible cache
hit rate ($0.765$ hits per unit time in our example) as $\beta \uparrow \infty$. We do not consider $\beta=0$ since
it chooses a configuration $B$ uniformly from $\mathcal{B}$.
These phenomena are captured in
Figure~\ref{fig:caching-plot}. The figure also shows that, even with a finite but large $\beta$, a significantly higher cache hit rate can be achieved
asymptotically than under the most popular content placement strategy for all BSs, and even than under independent placement of contents
in the BSs. However, if the cells of the two BSs do not overlap, then placing the most popular content in each BS is optimal, and
independent placement is worse than that.
It is to be noted that this example is provided only to demonstrate the potential for performance improvement via the Gibbs sampling approach.
Providing guarantees for
the actual margin of performance improvement for a more realistic network topology (such as Poisson Boolean model for cells) is
left for future research endeavours on this topic.
\section{Conclusion}\label{section:conclusion}
In this paper, we have provided
algorithms for cache content update in a cellular network, motivated by Gibbs sampling techniques.
The algorithms were shown to converge asymptotically to the optimal content placement in the caches, and the associated computation and
communication costs are affordable for practical cellular network base stations.
While the current paper solves an important problem, there are still possibilities for numerous interesting extensions:
(i) We assumed uniform download cost from the backhaul network for all base stations. However, this is not in general true. Depending on the backhaul
architecture, backhaul link capacities and congestion scenario, it might be more desirable to avoid download from some specific base stations.
Moreover, different base stations might have different link capacities, and in practice this will result in queueing delay for the download process.
Contents might belong to various classes, and hence may not have a fixed size.
Hence, a combined formulation of cache update and backhaul network state evolution will be necessary. (ii) Different cells might witness different
content popularities, but this has not been addressed in the current paper. (iii) Once a content becomes irrelevant (e.g., a news video), it has to be removed
completely from all caches; one needs to develop techniques to detect when to remove a content from all caches.
(iv) Providing convergence rate guarantees when the inverse temperature is increasing and when arrival rates and cell topology are learnt over time,
is a very challenging problem.
We leave these issues for future research endeavours on this topic.
\appendices
\section{Definition of weak and strong ergodicity}
\label{subsection:weak-and-strong-ergodicity}
Let us consider a discrete-time inhomogeneous Markov chain $\{X(t)\}_{t \geq 0}$ whose transition probability matrix (t.p.m.) between
$t=m$ and $t=m+n$ is given by $P(m;n)$. Let $\mathcal{D}$ be the collection of all possible distributions
(each element in $\mathcal{D}$ is assumed to be a row vector) on the state space.
Then $\{X(t)\}_{t \geq 0}$ is called weakly ergodic if, for all $m \geq 0$,
$$\lim_{n \uparrow \infty} \sup_{\mu,\nu \in \mathcal{D}} d_V (\mu P(m;n) , \nu P(m;n) ) =0 $$
where $d_V(\cdot,\cdot)$ is the total variation distance between two distributions.
$\{X(t)\}_{t \geq 0}$ is called strongly ergodic if there exists $\pi \in \mathcal{D}$ such that, for all $m \geq 0$,
$$\lim_{n \uparrow \infty} \sup_{\mu \in \mathcal{D}} d_V (\mu P(m;n) , \pi ) =0.$$
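As a toy illustration of these definitions (ours, not part of the paper), consider a two-state inhomogeneous chain whose transition matrices stay uniformly mixing; the total variation distance between the rows of $P(m;n)$ then vanishes as $n$ grows, which is exactly the weak ergodicity property above:

```python
# Toy two-state inhomogeneous chain illustrating weak ergodicity.
# The transition matrices P(t) below are hypothetical examples.
def P(t):
    eps = 0.25 + 0.1 / (t + 1)          # time-varying, uniformly mixing
    return [[1 - eps, eps], [eps, 1 - eps]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def P_mn(m, n):
    """Transition matrix between times t = m and t = m + n."""
    M = [[1.0, 0.0], [0.0, 1.0]]
    for t in range(m, m + n):
        M = matmul(M, P(t))
    return M

def tv_rows(M):
    """Total variation distance between the two rows of M, i.e. the
    sup over point masses mu, nu of d_V(mu P(m;n), nu P(m;n))."""
    return 0.5 * sum(abs(M[0][j] - M[1][j]) for j in range(2))

d5, d20 = tv_rows(P_mn(0, 5)), tv_rows(P_mn(0, 20))
# d20 << d5: the chain forgets its initial distribution.
```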
{\small
\bibliographystyle{unsrt}
\section{Introduction}
It is well known that the radiative transition processes of emitters in
media differ from those in
vacuum\cite{Blo1965,Top2003}. Because
of its fundamental importance and relevance to various
applications in low-dimensional optical materials and photonic
crystals, this issue continues to attract both theoretical and
experimental attention\cite{Luk2002,Kum2003,Ber2004}. Various
macroscopic (see Ref. \onlinecite{Top2003} for a recent review) and
microscopic \cite{Bor1999,Cre2000,Ber2004} theoretical models have
been developed to predict, among other optical properties, the
dependence of spontaneous emission rates and lifetimes on refractive index.
However, different models predict substantially different
dependences of radiative lifetime on refractive index.
The macroscopic model based on the Lorentz local field, usually
referred to as the virtual-cavity model\cite{Top2003,Ber2004}, has
appeared in most textbooks and been used in calculations.
Only limited experimental studies have aimed specifically at discriminating
between the different models\cite{Top2003,Kum2003}, with results appearing
to support the real-cavity model\cite{Yab1988,Gla1991}. It has also
been pointed out that different models should apply under different
circumstances\cite{Top2003}.
The underlying assumption of all those models and experimental
studies is that the only contribution to the spontaneous
radiative lifetime is from the electric dipole moment, whose strength
does not vary (or changes in a predictable way) when the surrounding medium
varies. We notice that the experimental results that have been claimed to
support the real-cavity model\cite{Rik1995,Sch1998,Kum2003}
are all lifetimes of the $^5D_0$ level of Eu$^{3+}$ in different hosts
with varying refractive index. It is well known that
part of the radiative relaxation of $^5D_0$ (to $^7F_1$) is due to
magnetic dipole moment, which has a different dependence on
refractive index, and the electric dipole strength of $^5D_0$ to
$^7F_2$ transition is hypersensitive to environment and may not
be treated as a constant. In general, $4f\rightarrow 4f$ electric
dipole radiative relaxation
in rare-earth ions is due to the mixing of $4f^N$ states with states
of opposite parity, which depends strongly on the environment. Since
this dependence is usually very difficult to take into account,
lifetimes of $4f\rightarrow 4f$ radiative relaxation do not serve as
a good test of the different models. In contrast, $5d\rightarrow 4f$
radiative transitions of rare-earth ions are dominated by
allowed electric-dipole moment contributions, whose strengths are
less perturbed by the environment and the line strengths for the radiative
relaxation can be reliably predicted. Hence the lifetimes of
$5d\rightarrow 4f$ radiative transitions give a better test of
different models.
In this paper we analyze the lifetimes of $5d\rightarrow 4f$ transition
of Ce$^{3+}$ ions in hosts of different refractive indices and make
a comparison between different models. In Sec. II we derive the basic
formula to calculate the line strength and lifetime of the $d$ levels
of Ce$^{3+}$. The lifetimes and energies of $d$ levels of Ce$^{3+}$
ions and the refractive indices are summarized and analyzed with
different models in Sec. III.
\section{$d\rightarrow f$ transition rates of Ce$^{3+}$ in
hosts}
The general spontaneous radiative emission rate of an
electric dipole transition from a localized initial state $I$ to
a localized final state $F$ can be written as \cite{Top2003}
\begin{equation}
\Gamma_{IF}= \frac{64\pi^4}{3h} \chi \nu_{IF}^3 |\vec {\mu}_{IF}|^2,
\label{ind_rate}
\end{equation}
where $I$ and $F$ are the transition initial and final states,
respectively, $\nu_{IF}$ is the emission wavenumber,
$\vec{\mu}_{IF}$ is the electric dipole moment $-e\vec{r}$
between states $I$ and
$F$, and $\chi$ is an enhancement factor due to the dielectric medium,
which equals $n[(n^2+2)/3]^2$ for the virtual- and $n[3n^2/(2n^2+1)]^2$
for the real-cavity model. The lifetime of level $I$ can be calculated
as the inverse of the total emission rate of $I$.
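For orientation, the two enhancement factors can be compared numerically (our sketch, not part of the derivation):

```python
# Local-field enhancement factors chi(n) of the two cavity models.
def chi_virtual(n):
    """Virtual-cavity (Lorentz local field) factor n*[(n^2+2)/3]^2."""
    return n * ((n ** 2 + 2) / 3.0) ** 2

def chi_real(n):
    """Real-cavity factor n*[3n^2/(2n^2+1)]^2."""
    return n * (3 * n ** 2 / (2 * n ** 2 + 1.0)) ** 2

# Both factors reduce to 1 in vacuum; over the range n = 1.4 .. 2.2
# studied below, the virtual-cavity factor grows considerably faster.
ratios = [chi_virtual(n) / chi_real(n) for n in (1.4, 1.8, 2.2)]
```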
For the $5d\rightarrow 4f$ emission of Ce$^{3+}$ ions,
the eigenvectors of the transition initial states are dominated by
bases with only one electron in the open shell $5d$, and the transition
final states are dominated by bases with only one electron in the
open shell $4f$. It is therefore tempting
to approximate the electric dipole moment between a $5d$ state and
a $4f$ state by the straightforward matrix element of the electric
dipole $-e\vec{r}$ between the one-particle orbitals $4f$ and
$5d$. Such an approximation overestimates the radiative lifetime of
the Ce$^{3+}$ free ion by a factor of about 3.
Since the transition initial and final states are actually
many-particle states, calculation \cite{Zha2001}
showed that configuration mixing needs to be taken into account
to obtain the correct radiative lifetime of the Ce$^{3+}$ free ion.
For rare-earth ions in hosts, ligand polarization could also contribute
to the radiative transition rate. Theoretical treatment of
$f-d$ electric dipole moment of rare-earth ions taking all those
corrections into account is not trivial, which can be found
in Ref.\onlinecite{Dua2005a}. For Ce$^{3+}$ ions in
hosts, since there is only one electron in the open shell,
neglecting the small ligand polarization contribution,
the correction due to configuration mixing is equivalent to
reducing the radial integral $\ME {5d} r {4f}$. For the Ce$^{3+}$ free ion,
the effective radial integral is $\ME {5d} r {4f}_{\rm eff} = 0.025$ nm.
For Ce$^{3+}$ ions, since the splitting between different transition
final states is much smaller than the average energy difference
between the lowest $5d$ and $4f$ states, we can approximate
the summation of Eq.\ (\ref{ind_rate}) over the final state $F$
by replacing the wavenumbers
with the average value $\bar{\nu}$. Under this approximation, the total
spontaneous emission rate turns out to be independent of
the wavefunction of the initial $5d$ state, and can be written as
\begin{eqnarray}
\label{theory}
\frac{1}{\tau_r} &=& \frac{64 \pi^4 e^2 \chi |\ME {5d} {r}
{4f}_{\rm eff}|^2\bar{\nu}^3}{5h}
\\
&=& 4.34 \times 10^{-4} |\ME {5d} {r} {4f}_{\rm eff}|^2
\chi \bar{\nu}^3 (s^{-1}),
\end{eqnarray}
where the units of the radial integral, $\bar{\nu}$ and $\tau_r$
are nm, cm$^{-1}$ and s, respectively. With measured $\tau_r$ and
$\bar{\nu}$ values, we can derive measured values of
$\ME {5d} r {4f}^2 \chi$ ($\sim \tau_r^{-1}$) and compare them
with the predictions of the different models.
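The numerical constant $4.34\times 10^{-4}$ above can be reproduced from CGS values of $e$ and $h$ (our check; the CODATA constants below are assumptions, not from the text):

```python
import math

# Reproduce the prefactor of Eq. (3): 64*pi^4*e^2/(5h) in CGS units,
# with the radial integral in nm (1 nm = 1e-7 cm), the wavenumber in
# cm^-1 and the resulting rate in s^-1.
E_ESU = 4.80320425e-10     # elementary charge in esu
H_ERG_S = 6.62607015e-27   # Planck constant in erg*s
NM_TO_CM = 1.0e-7

prefactor = 64 * math.pi ** 4 * E_ESU ** 2 * NM_TO_CM ** 2 / (5 * H_ERG_S)
# prefactor comes out as about 4.34e-4, matching the quoted constant.
```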
\section{Analysis of radiative relaxation lifetimes of Ce$^{3+}$
in different hosts}
The $5d\rightarrow 4f$ transitions of Ce$^{3+}$ in various hosts
have been widely studied due to applications as scintillators, tunable
UV lasers and phosphors. The lifetimes, peak wavelengths of emission
spectra and refractive indices of Ce$^{3+}$ in different hosts are
summarized in Table \ref{table1}. Some of the data were measured at
room temperature and some at low temperature.
Ideally, we would need to work with the lifetimes for all hosts
at the same low temperature, preferably at 0 K. Fortunately,
due to the large separation between the $5d$ and $4f$ states and the strong
electric dipole $5d-4f$ radiative relaxation, nonradiative
relaxations are negligible and the lifetimes at room temperature
only change (decrease) slightly from the low-temperature ones.
In some experiments, the observed lifetimes
at room temperature are even slightly longer than the low-temperature
lifetimes due to reabsorption. We neglect all these small corrections
and assign an uncertainty of about $10\%$ to the spontaneous emission
lifetimes in the figure to guide the eye.
Since the transition rates depend not only on the refractive-index factor but
also on the emission energy, we cannot follow Refs.~\onlinecite{Top2003,Kum2003}
in comparing experimental and theoretical lifetime-refractive index curves.
Instead, the measured $\ME {5d} r {4f} ^2 \chi$ values are plotted as a
function of the measured refractive index in Fig.\ \ref{figure}, together
with curves calculated using the two models, with
$\ME {5d} r {4f}_{\rm eff}^2$ values obtained by
experimental-value-weighted least-squares
fitting. It can be seen that the virtual-cavity model fits the measured
data much better than the real-cavity model: the real-cavity
model gives an almost linear dependence of the emission rates on refractive
index, which cannot fit the measured data at all. This is
contrary to the conclusion drawn from the $f-f$ transitions of
Eu$^{3+}$ in various
hosts. The best-fit value of the effective radial integral is
$\ME {5d} r {4f}_{\rm eff} = 0.0281$. This value is actually bigger
than the free-ion value $0.025$, contrary to expectations
that it should be smaller than the free-ion
value\cite{Kru1966,Lyu1991}.
Using the virtual-cavity model, the $\ME {5d} r {4f}_{\rm eff}$ value for
each host has been calculated; the results are given in Table \ref{table1}.
It can be seen that most of the values are quite consistent.
\section{Conclusion}
In conclusion, we have analyzed the spontaneous emission rates of the $5d\rightarrow 4f$
transition of Ce$^{3+}$ in hosts with refractive indices between 1.4 and 2.2 using
the two major models. The dependence of the rates on refractive index
favors the macroscopic virtual-cavity model based on the Lorentz local field
\cite{Top2003}. We also conclude that the
values of the Ce$^{3+}$ effective radial integral
$\ME {5d} {r} {4f}_{\rm eff}$ are larger in crystals than in vacuum.
\section*{Acknowledgment}
C.K.D. acknowledges support of this work by the National Natural Science
Foundation of China, Grant No. 10404040 and 10474092.
\section{\setcounter{equation}{0}\oldsection}
\renewcommand\thesection{\arabic{section}}
\renewcommand\theequation{\thesection.\arabic{equation}}
\allowdisplaybreaks
\def\pf{\it{Proof.}\rm\quad}
\DeclareMathOperator*{\Cat}{\mathbf{Cat}}
\DeclareMathOperator*{\sgn}{sgn}
\DeclareMathOperator*{\dep}{dep}
\DeclareMathOperator{\Li}{Li}
\DeclareMathOperator{\Mi}{Mi}
\DeclareMathOperator{\ti}{ti}
\newcommand\UU{\mbox{\bfseries U}}
\newcommand\FF{\mbox{\bfseries \itshape F}}
\newcommand\h{\mbox{\bfseries \itshape h}}\newcommand\dd{\mbox{d}}
\newcommand\g{\mbox{\bfseries \itshape g}}
\newcommand\xx{\mbox{\bfseries \itshape x}}
\def\R{\mathbb{R}}
\def\pa{\partial}
\def\N{\mathbb{N}}
\def\Z{\mathbb{Z}}
\def\Q{\mathbb{Q}}
\def\CC{\mathbb{C}}
\def\sn{\sum\limits_{k=1}^n}
\def\cn{{\rm cn}}
\def\dn{{\rm dn}}
\def\su{\sum\limits_{n=1}^\infty}
\def\sk{\sum\limits_{k=1}^\infty}
\def\sj{\sum\limits_{j=1}^\infty}
\def\t{\widetilde{t}}
\def\S{\widetilde{S}}
\def\ZZ{\mathcal{Z}}
\def\ze{\zeta}
\def\M{\bar M}
\def\tt{\left(\frac{1-t}{1+t} \right)}
\def\xx{\left(\frac{1-x}{1+x} \right)}
\def\a{^{(A)}}
\def\B{^{(B)}}
\def\C{^{(C)}}
\def\ab{^{(AB)}}
\def\abc{^{(ABC)}}
\def\ol{\overline}
\newcommand\divg{{\text{div}}}
\theoremstyle{plain}
\newtheorem{thm}{Theorem}[section]
\newtheorem{lem}[thm]{Lemma}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{con}[thm]{Conjecture}
\newtheorem{pro}[thm]{Proposition}
\newtheorem{prob}[thm]{Problem}
\theoremstyle{definition}
\newtheorem{defn}{Definition}[section]
\newtheorem{re}[thm]{Remark}
\newtheorem{exa}[thm]{Example}
\setlength{\arraycolsep}{0.5mm}
\newcommand{\myone}{{1}}
\begin{document}
\title{\bf Parametric Euler Sums of Harmonic Numbers}
\author{
{Junjie Quan${}^{a,}$\thanks{Email: as6836039@163.com} \quad Xiyu Wang${}^{b,}$ \thanks{Email: xiyuwang2021@outlook.com.} \quad Xiaoxue Wei${}^{c,}$\thanks{Email: xiaoxueweidufe@163.com.}\quad Ce Xu${}^{b,}$\thanks{Email: cexu2020@ahnu.edu.cn}}\\[1mm]
\small a. School of Information Science and Technology, Xiamen University Tan Kah Kee College\\
\small Xiamen Fujian 363105, P.R. China\\
\small b. School of Mathematics and Statistics, Anhui Normal University,\\ \small Wuhu 241002, P.R. China\\
\small c. School of Economics and Management, Anhui Normal University,\\ \small Wuhu 241002, P.R. China
}
\date{}
\maketitle
\noindent{\bf Abstract.} We define a parametric variant of generalized Euler sums and construct contour integration to give some explicit evaluations of these parametric Euler sums. In particular, we establish several explicit formulas of (Hurwitz) zeta functions, linear and quadratic parametric Euler sums. Furthermore, we also give an explicit evaluation of alternating double zeta values $\ze(\overline{2j},2m+1)$ in terms of a combination of alternating Riemann zeta values by using the parametric Euler sums.\\
\medskip
\noindent{\bf Keywords}: Parametric Euler sums, harmonic numbers, contour integral, residue theorem.
\noindent{\bf AMS Subject Classifications (2020):} 11M32, 11M99
\medskip
\noindent{\bf Declaration of interest.} None.
\section{Introduction}
We begin with some basic notations. Let $\Z,\N$ and $\N^-$ be the set of integers, positive integers and negative integers, respectively, $\N_0:=\N\cup \{0\}$ and $\N^-_0:=\N^-\cup \{0\}$. For $p\in\N$ and $n\in\N_0$, the \emph{generalized harmonic number} $H_n^{(p)}$ is defined by
\begin{align}
H_n^{(p)}:=\sum_{k=1}^n \frac{1}{k^p}\quad \text{and}\quad H_0^{(p)}:=0,
\end{align}
where if $p=1$ then $H_n\equiv H_n^{(1)}$ is the \emph{classical harmonic number}.
Between late 1742 and early 1743, Euler first touched on the subject of the \emph{linear Euler sums} in a series of correspondence with Goldbach. In modern notation, these are defined
as follows:
\begin{align}
S_{p;q}:=\sum_{n\geq k\geq 1}^\infty \frac{1}{k^pn^q}=\sum_{n=1}^\infty \frac{H_n^{(p)}}{n^q},
\end{align}
where $p,q\in \N$, with $q\geq 2$ to ensure convergence of the series.
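For a concrete instance, Euler's classical evaluation $S_{1;2}=\sum_{n\geq 1}H_n/n^2=2\zeta(3)$ can be checked by direct summation (our sketch; the truncation level is arbitrary):

```python
# Check Euler's linear sum S_{1;2} = sum_{n>=1} H_n/n^2 = 2*zeta(3).
N = 200000
zeta3 = sum(1.0 / n ** 3 for n in range(1, N + 1))   # ~ zeta(3)
H = 0.0       # running harmonic number H_n
S12 = 0.0
for n in range(1, N + 1):
    H += 1.0 / n
    S12 += H / n ** 2
# The truncation error of S12 is O(log(N)/N), tiny at N = 2e5.
```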
Euler returned to the same subject after about 30 years and discovered his now famous decomposition formula in \cite{Euler1776}. More than two hundred years later, these objects were generalized to the so-called \emph{generalized Euler sums} by Flajolet and Salvy \cite{FS1998}:
\begin{align}\label{Defn-Gen-Euler-Sums}
{S_{{\bf p};q}} := \sum\limits_{n = 1}^\infty {\frac{{H_n^{\left( {{p_1}} \right)}H_n^{\left( {{p_2}} \right)} \cdots H_n^{\left( {{p_r}} \right)}}}
{{{n^q}}}},
\end{align}
where ${\bf p}=(p_1,p_2,\ldots,p_r)\in \N^r$ with $p_1\leq p_2\leq \cdots\leq p_r$ and $q\in \N\setminus \{1\}$. The quantity $w:={p _1} + \cdots + {p _r} + q$ is called the weight and the quantity $r$ is called the degree (order). Moreover, if $r>1$ in \eqref{Defn-Gen-Euler-Sums}, they were called \emph{nonlinear Euler sums}. As usual, repeated summands in partitions are indicated by powers, so that for instance
\[{S_{{1^3}{2^2}5;q}} = {S_{111225;q}} = \sum\limits_{n = 1}^\infty {\frac{{H_n^3[H^{(2)} _n]^2{H^{(5)} _n}}}{{{n^q}}}}. \]
Euler sums are closely related to \emph{multiple zeta values} (abbr. MZVs), defined in \cite{H1992,DZ1994} as follows:
\begin{align}\label{Defn-MZV}
\zeta(\bfk)\equiv \zeta(k_1,\ldots,k_r):=\sum\limits_{n_1>\cdots>n_r>0 } \frac{1}{n_1^{k_1}\cdots n_r^{k_r}},
\end{align}
where $\bfk=(k_1,k_2,\ldots,k_r)\in \N^r$ and $k_1>1$ ensures convergence of the series. Here $|\bfk|:=k_1+\cdots+k_r$ and $\dep(\bfk):=r$ are called the weight and the depth of MZVs, respectively. Clearly, if $r=1$ and $k_1=k$ then it becomes the Riemann zeta value $\zeta(k)\ (k\in \N\setminus \{1\})$. This theory indeed dates back to Hoffman \cite{H1992} and Zagier \cite{DZ1994} (independently at almost the same time), while recent research on this topic
has been quite active. For instance, these quantities have appeared in several areas of mathematics and physics, and have a remarkable
depth of algebraic structure in the past three decades (for example, see the book by Zhao \cite{Z2016}). Recently, several variants of classical multiple zeta values of level 2 called multiple $t$-values (abbr. MtVs), multiple $T$-values (abbr. MTVs) and multiple mixed values (abbr. MMVs) were introduced and studied in Hoffman \cite{H2019}, Kaneko-Tsumura \cite{KTA2019} and Xu-Zhao \cite{XZ2020}. It is clear that every MMV (MtV or MTV) can be written as a $\Q$-linear combination of colored MZVs of level two. The colored MZV (abbr. CMZV) of level $N$ is defined for any $(k_1,\dotsc, k_r)\in\N^r$ and $N$th roots of unity $\eta_1,\dotsc,\eta_r$ by (see Yuan-Zhao \cite{YuanZh2014a})
\begin{equation*}
Li_{k_1,\dotsc, k_r}(\eta_1,\dotsc,\eta_r):=\sum\limits_{n_1>\cdots>n_r>0}
\frac{\eta_1^{n_1}\dots \eta_r^{n_r}}{n_1^{k_1} \dots n_r^{k_r}}
\end{equation*}
which converges if $(k_1,\eta_1)\ne (1,1)$ (see \cite[Ch. 15]{Z2016}), in which case we call $(\bfk,\bfeta)$ \emph{admissible}.
Previous studies have revealed rich connections between Euler sums and multiple zeta values. However, we do not pursue the MZV aspects in this paper.
For an early introduction to and study of the evaluations of Euler sums, the readers may consult Bailey-Borwein-Girgensohn's paper \cite{BBG1994} and Flajolet-Salvy's paper \cite{FS1998}, in which an experimental method and a contour integral representation approach to the evaluation of Euler sums were developed, respectively. For
some recent progress, the readers are referred to \cite{M2014,W2017,XW2018} and references therein. Recently, some parametric Euler sums were introduced and studied, see \cite{BBD2008,Xu2017-JMAA,XC2019} and references therein. For example, in \cite[Thm. 1]{BBD2008}, the Borweins and Bradley proved that
\begin{align*}
\sum_{n=1}^\infty \frac{1}{n(n-a)}\sum_{k=1}^{n-1} \frac{1}{k-a}=\sum_{n=1}^\infty \frac{1}{n^2(n-a)}\quad (a\notin \N^-).
\end{align*}
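This identity can be checked numerically (our sketch), with the right-hand side read as $\sum_{n\geq 1}1/(n^2(n-a))$ so that both sides have weight three and reduce to $\zeta(2,1)=\zeta(3)$ at $a=0$; here we take $a=0.3$:

```python
# Numerical check of the Borwein-Bradley identity at a = 0.3:
#   sum_n 1/(n(n-a)) * sum_{k<n} 1/(k-a)  =  sum_n 1/(n^2*(n-a)).
a = 0.3
N = 200000
inner = 0.0   # running value of sum_{k=1}^{n-1} 1/(k-a)
lhs = rhs = 0.0
for n in range(1, N + 1):
    lhs += inner / (n * (n - a))
    rhs += 1.0 / (n ** 2 * (n - a))
    inner += 1.0 / (n - a)
# Both sides agree up to the O(log(N)/N) truncation error of lhs.
```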
Setting $a=0$, we obtain the well-known identity $\zeta(2,1)=\zeta(3)$. In \cite{Xu2017-JMAA}, the last author showed the formula ($a\notin \Z\setminus \{0\}$)
\begin{align*}
\frac{5}{2}\sum\limits_{n = 1}^\infty {\frac{1}{{{{\left( {{n^2} - {a^2}} \right)}^2}}}} - {\left( {\sum\limits_{n = 1}^\infty {\frac{1}{{{n^2} - {a^2}}}} } \right)^2} = 2{a^2}\left\{ {\sum\limits_{n = 1}^\infty {\frac{1}{{{n^2} - {a^2}}}} \sum\limits_{n = 1}^\infty {\frac{1}{{{{\left( {{n^2} - {a^2}} \right)}^2}}}} - \sum\limits_{n = 1}^\infty {\frac{1}{{{{\left( {{n^2} - {a^2}} \right)}^3}}}} } \right\}.
\end{align*}
Setting $a=0$ gives $\zeta(2)^2=\frac{5}{2}\zeta(4)$. Therefore, it is possible to recover many classical results on Euler sums and MZVs by studying parametric Euler sums. In this paper, we will use the approach of contour integral representation to study the following (alternating) parametric Euler sums involving generalized harmonic numbers
\begin{align}\label{Defn-para--Gen-Euler-Sums}
{S^{\sigma}_{{\bf p};{\bf q}}}(a_1,a_2,\ldots,a_m) := \sum\limits_{n = 1}^\infty {\frac{{H_n^{\left( {{p_1}} \right)}H_n^{\left( {{p_2}} \right)} \cdots H_n^{\left( {{p_r}} \right)}\sigma^n}}
{(n+a_1)^{q_1}(n+a_2)^{q_2}\cdots (n+a_m)^{q_m}}}\quad (a_1,\ldots,a_m\notin \N^-),
\end{align}
where $\sigma\in \{\pm 1\},{\bf p}=(p_1,\ldots,p_r)\in \N^r$ and ${\bf q}=(q_1,\ldots,q_m)\in \N^m_0$ with $q_1+\cdots+q_m\geq 2$. Obviously, if $\sigma=1,m=1$ and $a_1=0,q_1=q$ in \eqref{Defn-para--Gen-Euler-Sums}, then it becomes the classical Euler sums $S_{{\bf p};q}$ \eqref{Defn-Gen-Euler-Sums}.
\section{Main Results}
We define a complex kernel function $\xi(z)$ by two requirements: (i) $\xi(z)$ is meromorphic in the whole complex plane; (ii) $\xi(z)$ satisfies $\xi (z)=o(z)$ over an infinite collection of circles $|z|=\rho_k$ with $\rho_k\to \infty $. Using these two conditions on the kernel function $\xi(z)$, Flajolet and Salvy established the following residue lemma.
\begin{lem}\emph{(cf. \cite{FS1998})}\label{lem-residue}
Let $\xi(z)$ be a kernel function and let $r(z)$ be a rational function which is $O(z^{-2})$ at infinity. Then
\begin{align*}
\sum_{\alpha\in E} \Res(r(z)\xi(z),\alpha)+ \sum_{\beta\in S} \Res(r(z)\xi(z),\beta) = 0,
\end{align*}
where $S$ is the set of poles of $r(z)$ and $E$ is the set of poles of $\xi(z)$ that are not poles of $r(z)$. Here $\Res(r(z),\alpha)$ denotes the residue of $r(z)$ at $z=\alpha$.
\end{lem}
Furthermore, Flajolet and Salvy \cite[Eq. (2.4)]{FS1998} found the fact that any polynomial form in $\pi \cot\pi z,\ \frac{\pi}{\sin \pi z},\ \psi^{(j)}(\pm z)$ is itself a kernel function with poles at a subset of the integers. Here, ${\psi ^{\left( j \right)}}\left( z \right)$ stands for the polygamma function of order $j$ defined as the $(j+1)$-st derivative of the logarithm of the gamma function:
\[{\psi ^{\left( j \right)}}\left( z \right): = \frac{{{d^j}}}{{d{z^j}}}\psi \left( z \right) = \frac{{{d^{j+1}}}}{{d{z^{j+1}}}}\log \Gamma \left( z \right).\]
Thus
\[{\psi ^{\left( 0 \right)}}\left( z \right) = \psi \left( z \right) = \frac{{\Gamma '\left( z \right)}}{{\Gamma \left( z \right)}}\]
holds, where $\psi (z)$ is the digamma function and $\Gamma \left( z \right)$ is the usual gamma function. Observe that ${\psi ^{\left( j \right)}}\left( z \right)$ satisfies the following relations:
\[\psi \left( z \right) = - \gamma + \sum\limits_{n = 0}^\infty {\left( {\frac{1}{{n + 1}} - \frac{1}{{n + z}}} \right)} ,\;z\notin \N^-_0, \]
\[{\psi ^{\left( j \right)}}\left( z \right) = {\left( { - 1} \right)^{j+ 1}}j!\sum\limits_{k = 0}^\infty {1/{{\left( {z + k} \right)}^{j + 1}}},\ j\in \N,\]
\[\psi \left( {z + n} \right) = \frac{1}{z} + \frac{1}{{z + 1}} + \cdots + \frac{1}{{z + n - 1}} + \psi \left( z \right),\;n \in \N .\]
where $\gamma$ denotes the Euler-Mascheroni constant, defined by
\[\gamma := \mathop {\lim }\limits_{n \to \infty } \left( {\sum\limits_{k = 1}^n {\frac{1}{k}} - \log n} \right) = - \psi \left( 1 \right) \approx 0.577215664901532860606512\ldots.\]
Moreover, from classical expansions and the properties of $\psi$ function, they listed the following expressions of $\pi \cot\pi z$ and $\psi^{(j)}(- z)$ at an integer $n$.
\begin{lem}\emph{(cf. \cite{FS1998})}\label{lem-approach-psi} For a positive integer $p$,
\begin{align}
&\pi \cot(\pi z)\overset{z\rightarrow n}{=}\frac{1}{z-n}-2\sum\limits_{k=1}^{\infty}\zeta(2k)(z-n)^{2k-1}\quad (n\in\Z),\\
&\frac{\pi}{\sin(\pi z)}\overset{z\rightarrow n}{=}(-1)^n \left(\frac1{z-n}+2\sum_{k=1}^\infty {\bar \zeta}(2k)(z-n)^{2k-1}\right)\quad (n\in\Z),\\
&\frac{\psi^{(p-1)}(-z)}{(p-1)!}
\overset{z\rightarrow n}{=}\frac{1}{(z-n)^{p}}+\sum\limits_{k=1}^{\infty}\binom{k+p-2}{p-1}
\left[(-1)^{p}\zeta(k+p-1)-(-1)^{k}H^{(k+p-1)}_{n}\right]\left(z-n\right)^{k-1}\quad (n\in\N_0),\\
&\frac{\psi^{(p-1)}(-z)}{(p-1)!}\overset{z\rightarrow -n}{=}(-1)^{p}\sum\limits_{k=1}^{\infty}\binom{p+k-2}{p-1}
\left[\zeta(p+k-1)-H^{(p+k-1)}_{n-1}\right](z+n)^{k-1}\quad (n\in \N),
\end{align}
where $\ze(1)$ should be interpreted as $0$ and if $p=1$, replace $\psi(-z)$ by $\psi(-z)+\gamma$. ${\bar \zeta}(s)$ denotes the \emph{alternating Riemann zeta function} which is defined by
\begin{align}
{\bar \zeta}(s):=\sum_{n=1}^\infty \frac{(-1)^{n-1}}{n^s}\quad (\Re(s)\geq 1).
\end{align}
\end{lem}
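The first expansion can be sanity-checked numerically near $n=0$ (our sketch; the truncation levels are arbitrary):

```python
import math

# Check pi*cot(pi z) = 1/z - 2*sum_{k>=1} zeta(2k) z^(2k-1) near n = 0.
def zeta(s, N=100000):
    """Truncated Riemann zeta value, accurate enough for this check."""
    return sum(1.0 / m ** s for m in range(1, N + 1))

z = 0.1
lhs = math.pi / math.tan(math.pi * z)
rhs = 1.0 / z - 2 * sum(zeta(2 * k) * z ** (2 * k - 1) for k in range(1, 10))
```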
In \cite{FS1998}, Flajolet and Salvy used residue computations on large circular contours and specific kernel functions to obtain many independent relations among classical Euler sums. These functions are of the form $\xi(z)r(z)$, where $r(z):=1/{z^q}$ and $\xi(z)$ is a product of cotangent (or cosecant) and polygamma functions. In \cite{Xu2017-JMAA}, the last author used the method of Flajolet and Salvy to obtain some explicit evaluations of parametric Euler sums. Next, we will use a similar method, with the help of Lemmas \ref{lem-residue} and \ref{lem-approach-psi}, to give some explicit evaluations of the parametric Euler sums \eqref{Defn-para--Gen-Euler-Sums}.
\subsection{Linear Parametric Euler Sums}
In this subsection, we apply the contour integral representation approach to consider the linear parametric Euler sums
\[\sum_{n=1}^\infty \frac{H_n^{(p)}}{(n+a)(n+b)}\ \text{and}\ \sum_{n=1}^\infty \frac{H_n^{(p)}}{(n+a)(n+b)}(-1)^n.\]
\begin{thm}\label{thm-para2-harmonic-double-sum} For a positive integer $p$ and complex numbers $a,b$ with $a\neq b$ and $a,b\notin \Z$, we have
\begin{align}\label{para2-harmonic-double-sum}
&\sum_{n=1}^\infty \frac{H_n^{(p)}}{(n+a)(n+b)}-(-1)^p \sum_{n=1}^\infty \frac{H_{n-1}^{(p)}}{(n-a)(n-b)}\nonumber\\
&=2\frac{(-1)^p}{b-a} \sum_{k=0}^{[p/2]} \ze(2k)\Big(\ze(p-2k+1;a)-\ze(p-2k+1;b)\Big)\nonumber\\
&\quad+\frac{(-1)^p}{b-a} \Big(\pi \cot(\pi a)\big(\ze(p;a)-\ze(p)\big)-\pi \cot(\pi b) \big(\ze(p;b)-\ze(p)\big)\Big),
\end{align}
where $\ze(0):=-1/2$ and $\ze(1;a):=-(\psi(a)+\gamma)$, and $\ze(s;a)$ is the \emph{Hurwitz zeta function} defined by ($a\neq 0,-1,-2,-3,\ldots$)
\[\ze(s;a):=\sum_{n=0}^\infty \frac{1}{(n+a)^s}\quad (\Re(s)>1).\]
\end{thm}
\begin{proof}
Applying the method of contour integration in \cite{Xu2017-JMAA} (similar to the proof in \cite[Thm. 2.4]{Xu2017-JMAA}), we need to consider the contour integral
\begin{align*}
\oint\limits_{\left( \infty \right)}F(z)dz:=\oint\limits_{\left( \infty \right)}\frac{\pi \cot(\pi z) \psi^{(p-1)}(-z)}{(p-1)!(z+a)(z+b)}dz,
\end{align*}
where $\oint_{\left( \infty \right)}$ denotes integration along large circles, that is, the limit of integrals $\oint_{\left| z \right| = R}$ as $R\to \infty$.
Clearly, ${\pi \cot(\pi z)\psi^{(p-1)}(-z)}/(p-1)!$ is a kernel function. Hence, $\oint_{\left( \infty \right)} F(z)dz=0$.
Note that the integrand only has poles at the integers and at $-a,-b$. Applying Lemma \ref{lem-approach-psi}, we find that, at a nonnegative integer $n$, the pole has order $p+1$. Moreover, by a direct calculation, for $n\in \N_0$, we have
\begin{align*}
F(z)\overset{z\rightarrow n}{=}\frac{1}{(z-n)^{p+1}}\frac{1-2\sum\limits_{1\leq k\leq [p/2]}
\zeta(2k)(z-n)^{2k}+\left((-1)^{p}\zeta(p)+H_{n}^{(p)}\right)(z-n)^{p}+o((z-n)^{p})}{(z+a)(z+b)}
\end{align*}
and the residue is
\begin{align*}
\text{Res}(F(z),n)&=\lim_{z\to n}\frac{1}{p!}\frac{d^{p}}{dz^{p}}\left\{(z-n)^{p+1} F(z)\right\}\\
&=2\frac{(-1)^p}{a-b}\sum_{k=0}^{[p/2]}\ze(2k)\left(\frac{1}{(n+a)^{p-2k+1}}-\frac{1}{(n+b)^{p-2k+1}}\right)+\frac{(-1)^p\ze(p)+H_n^{(p)}}{(n+a)(n+b)}.
\end{align*}
At a negative integer $-n$ and at the points $-a,-b$, the poles are simple and the residues are
\begin{align*}
&\text{Res}(F(z),-n)=(-1)^p \frac{\ze(p)-H_{n-1}^{(p)}}{(n-a)(n-b)},\\
&\text{Res}(F(z),-a)=(-1)^p \frac{\pi \cot(\pi a)}{a-b} \ze(p;a),\\
&\text{Res}(F(z),-b)=(-1)^p \frac{\pi \cot(\pi b)}{b-a} \ze(p;b).
\end{align*}
Hence, using Lemma \ref{lem-residue}, we have
\begin{align*}
\sum_{n=0}^\infty \text{Res}(F(z),n)+\sum_{n=1}^\infty \text{Res}(F(z),-n)+\text{Res}(F(z),-a)+\text{Res}(F(z),-b)=0.
\end{align*}
Summing these four contributions yields the statement of the theorem.
\end{proof}
\begin{cor}\emph{(cf. \cite{BBD2008,Xu2017-JMAA})} For $a\notin \Z$ and $m\in \N_0$,
\begin{align}\label{for-para-harm-num-hur-ze}
\sum\limits_{n=1}^{\infty}\frac{H_{n}^{(2m+1)}}{n^{2}-a^{2}}
&=\frac{1}{2}\sum\limits_{n=1}^{\infty}\frac{1}{n^{2m+1}(n^{2}-a^{2})}
+\frac{1}{2a}\sum\limits_{k=0}^{m}\zeta(2k)\left(\zeta(2m-2k+2;a)-\zeta(2m-2k+2;-a)\right)\nonumber\\
&\quad+\frac{1}{4a}\pi \cot(\pi a)\Big(\zeta(2m+1;a)+\zeta(2m+1;-a)-2\zeta(2m+1)\Big).
\end{align}
\end{cor}
\begin{proof}
Setting $b=-a$ and $p=2m+1$ in \eqref{para2-harmonic-double-sum} yields the desired result.
\end{proof}
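Identity \eqref{for-para-harm-num-hur-ze} can be verified numerically, e.g. for $m=0$ and $a=0.3$ (our sketch; $\psi$ and $\zeta(s;a)$ are computed by truncated sums with elementary tail corrections):

```python
import math

def hurwitz(s, q, N=100000):
    """zeta(s; q): truncated sum plus leading Euler-Maclaurin tail."""
    total = sum((n + q) ** (-s) for n in range(N))
    x = N + q
    return total + x ** (1 - s) / (s - 1) + 0.5 * x ** (-s) + s * x ** (-s - 1) / 12.0

def digamma(x):
    """psi(x) via the recurrence psi(x) = psi(x+1) - 1/x and the
    asymptotic expansion at large argument."""
    val = 0.0
    while x < 20:
        val -= 1.0 / x
        x += 1.0
    return val + math.log(x) - 1 / (2 * x) - 1 / (12 * x ** 2) + 1 / (120 * x ** 4)

GAMMA = -digamma(1.0)   # Euler-Mascheroni constant

a = 0.3
N = 200000
H = lhs = aux = 0.0     # aux accumulates sum_n 1/(n*(n^2 - a^2))
for n in range(1, N + 1):
    H += 1.0 / n
    lhs += H / (n * n - a * a)
    aux += 1.0 / (n * (n * n - a * a))

cot = math.pi / math.tan(math.pi * a)
# Conventions: zeta(0) = -1/2, zeta(1) = 0, zeta(1; a) = -(psi(a)+gamma).
rhs = (0.5 * aux
       + (1.0 / (2 * a)) * (-0.5) * (hurwitz(2, a) - hurwitz(2, -a))
       + (cot / (4 * a)) * (-(digamma(a) + GAMMA) - (digamma(-a) + GAMMA)))
```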
\begin{cor} For integer $m\geq 0$ and $a\notin\Z$ with $a\neq 1/2$,
\begin{align}\label{para1-harmonic-double-sum}
\sum_{n=1}^\infty \frac{H_n^{(2m+1)}}{(n+a)(n+1-a)}&=\frac{1}{2a-1} \sum_{k=0}^m \ze(2k)\Big(\ze(2m-2k+2;a)-\ze(2m-2k+2;1-a)\Big)\nonumber\\
&\quad+\frac{\pi \cot(\pi a)}{2(2a-1)} \Big(\ze(2m+1;a)+\ze(2m+1;1-a)-2\ze(2m+1)\Big).
\end{align}
\end{cor}
\begin{proof}
Setting $b=1-a$ and $p=2m+1$ in \eqref{para2-harmonic-double-sum} yields the desired result.
\end{proof}
It is obvious that upon differentiating both members of \eqref{para2-harmonic-double-sum} $k-1$ times with
respect to $a$ and $l-1$ times with respect to $b$, we obtain an explicit evaluation of the combined series
\[\sum_{n=1}^\infty \frac{H_n^{(p)}}{(n+a)^k(n+b)^{l}}-(-1)^{p+k+l}\sum_{n=1}^\infty \frac{H_{n-1}^{(p)}}{(n-a)^k(n-b)^l}\quad (k,l\geq 1).\]
For example, upon differentiating both members of \eqref{para2-harmonic-double-sum} once with
respect to $a$ and once with respect to $b$, and noting the facts that $\psi^{(j)}(a)=(-1)^{j+1}j!\zeta(j+1;a)$ and $\ze(1;a):=-(\psi(a)+\gamma)$, by a direct calculation
we deduce
\begin{align}
&\sum_{n=1}^\infty \frac{H_n}{(n+a)^2(n+b)^2}+\sum_{n=1}^\infty \frac{H_{n-1}}{(n-a)^2(n-b)^2}\nonumber\\
&=-\frac{2 (-\psi ^{(1)}(a)-\pi \cot (\pi a) (\psi(a)+\gamma )+\psi ^{(1)}(b)+\pi \cot (\pi b) (\psi(b)+\gamma ))}{(a-b)^3}\nonumber\\&\quad-\frac{\psi ^{(2)}(a)+\pi \cot (\pi a) \psi ^{(1)}(a)-\pi ^2 \csc ^2(\pi a) (\psi (a)+\gamma )}{(a-b)^2}\nonumber\\&\quad-\frac{\psi ^{(2)}(b)+\pi \cot (\pi b) \psi ^{(1)}(b)-\pi ^2 \csc ^2(\pi b) (\psi(b)+\gamma )}{(a-b)^2}.
\end{align}
Then setting $b=1-a$ yields the following evaluation
\begin{align}
&\sum_{n=1}^\infty \frac{H_n}{(n+a)^2(n+1-a)^2}\nonumber\\
&=\frac{-\psi ^{(1)}(1-a)+\psi ^{(1)}(a)+\pi \cot (\pi a) (\psi(1-a)+\gamma)+\pi \cot (\pi a) (\psi (a)+\gamma)}{(2 a-1)^3}\nonumber\\
&\quad-\frac{\psi ^{(2)}(1-a)-\pi \cot (\pi a) \psi ^{(1)}(1-a)-\pi ^2 \csc ^2(\pi a) (\psi (1-a)+\gamma )}{2(2 a-1)^2}\nonumber\\
&\quad-\frac{\psi ^{(2)}(a)+\pi \cot (\pi a) \psi ^{(1)}(a)-\pi ^2 \csc ^2(\pi a) (\psi (a)+\gamma )}{2(2 a-1)^2}.
\end{align}
More generally, we can obtain the following general theorem.
\begin{thm}\label{thm-para1-ration-funct-residue} For $p\in \N$,
\begin{align}\label{para1-ration-funct-residue}
&\sum_{n=0}^\infty \Big((-1)^p\zeta(p)+H_n^{(p)}\Big)r(n)+(-1)^p\sum_{n=1}^\infty \Big(\zeta(p)-H_{n-1}^{(p)}\Big)r(-n)\nonumber\\
&-2\sum_{k=0}^{[p/2]} \frac1{(p-2k)!} \ze(2k) \sum_{n=0}^\infty r^{(p-2k)}(n)+\sum_{\beta\in S}{\rm Res}(f(z),\beta)=0,
\end{align}
where $\ze(0)$ and $\ze(1)$ should be interpreted as $-1/2$ and $0$, respectively, wherever they
occur, $r^{(p)}(n)$ denotes the $p$-th derivative of $r(z)$ evaluated at $z=n$, $r(z)$ is a rational function which is $O(z^{-2})$ at infinity, $S$ is the set of poles of $r(z)$, and no integer is a pole of $r(z)$. The function $f(z)$ is defined by
\[f(z):=\frac{\pi \cot(\pi z) \psi^{(p-1)}(-z)}{(p-1)!}r(z).\]
\end{thm}
\begin{proof} We consider the contour integral
\begin{align*}
\oint\limits_{\left( \infty \right)}f(z)dz=0.
\end{align*}
By a similar argument as in the proof of Theorem \ref{thm-para2-harmonic-double-sum}, we obtain
\begin{align*}
&\text{Res}(f(z),n)=-2\sum_{k=0}^{[p/2]}\ze(2k)\frac{r^{(p-2k)}(n)}{(p-2k)!}+\Big((-1)^p\ze(p)+H_n^{(p)}\Big)r(n)\quad (n\in \N_0),\\
&\text{Res}(f(z),-n)=(-1)^p\Big(\zeta(p)-H_{n-1}^{(p)}\Big)r(-n)\quad (n\in \N).
\end{align*}
Hence, applying Lemma \ref{lem-residue}, we have
\begin{align*}
\sum_{n=0}^\infty \text{Res}(f(z),n)+\sum_{n=1}^\infty \text{Res}(f(z),-n)+\sum_{\beta\in S}{\rm Res}(f(z),\beta)=0.
\end{align*}
Thus, combining related identities yields the desired result.
\end{proof}
It is clear that Theorem \ref{thm-para2-harmonic-double-sum} follows immediately from Theorem \ref{thm-para1-ration-funct-residue} if we set $r(z)=1/((z+a)(z+b))$.
\begin{thm}\label{thm-para2-alter-harmonic-double-sum} For positive integer $p$ and complex numbers $a,b$ with $a\neq b$ and $a,b\notin \Z$, we have
\begin{align}\label{para2-alter-harmonic-double-sum}
&\sum_{n=1}^\infty \frac{H_n^{(p)}}{(n+a)(n+b)}(-1)^n-(-1)^p \sum_{n=1}^\infty \frac{H_{n-1}^{(p)}}{(n-a)(n-b)}(-1)^n\nonumber\\
&=2\frac{(-1)^p}{b-a} \sum_{k=0}^{[p/2]} {\bar \ze}(2k)\Big({\bar \ze}(p-2k+1;b)-{\bar \ze}(p-2k+1;a)\Big)\nonumber\\
&\quad+\frac{(-1)^p}{b-a}\pi \left( \frac{\ze(p;a)-\ze(p)}{\sin(\pi a)}-\frac{\ze(p;b)-\ze(p)}{\sin(\pi b)}\right),
\end{align}
where ${\bar \ze}(0):=1/2$ and ${\bar \ze}(s;a)$ is the \emph{alternating Hurwitz zeta function}, defined by ($a\notin \N_0^-$)
\[{\bar \ze}(s;a):=\sum_{n=0}^\infty \frac{(-1)^n}{(n+a)^s}\quad (\Re(s)\geq 1).\]
\end{thm}
\begin{proof}
The proof of Theorem \ref{thm-para2-alter-harmonic-double-sum} is similar to that of Theorem \ref{thm-para2-harmonic-double-sum}; here we consider the contour integral
\begin{align*}
\oint\limits_{\left( \infty \right)}G(z)dz:=\oint\limits_{\left( \infty \right)}\frac{\pi \psi^{(p-1)}(-z)}{(p-1)!\sin(\pi z)(z+a)(z+b)}dz.
\end{align*}
Clearly, ${\pi \psi^{(p-1)}(-z)}/((p-1)!\sin(\pi z))$ is a kernel function. Hence, $\oint_{\left( \infty \right)} G(z)dz=0$.
Note that the integrand has poles only at the integers $z=n$ and at $-a,-b$. Applying Lemma \ref{lem-approach-psi}, we can find that, at a nonnegative integer $n$, the pole has order $p+1$. Moreover, by a direct calculation, for $n\in \N_0$, we have
\begin{align*}
G(z)\overset{z\rightarrow n}{=}\frac{(-1)^n}{(z-n)^{p+1}}\frac{1+2\sum\limits_{1\leq k\leq [p/2]}
{\bar \zeta}(2k)(z-n)^{2k}+\left((-1)^{p}\zeta(p)+H_{n}^{(p)}\right)(z-n)^{p}+o((z-n)^{p})}{(z+a)(z+b)}
\end{align*}
and the residue is
\begin{align*}
\text{Res}(G(z),n)&=\lim_{z\to n}\frac{1}{p!}\frac{d^{p}}{dz^{p}}\left\{(z-n)^{p+1} G(z)\right\}\\
&=2\frac{(-1)^p}{b-a}\sum_{k=0}^{[p/2]}{\bar \ze}(2k)\left(\frac{(-1)^n}{(n+a)^{p-2k+1}}-\frac{(-1)^n}{(n+b)^{p-2k+1}}\right)+\frac{(-1)^p\ze(p)+H_n^{(p)}}{(n+a)(n+b)}(-1)^n.
\end{align*}
At a negative integer $-n$ and at the points $-a,-b$, the poles are simple and the residues are
\begin{align*}
&\text{Res}(G(z),-n)=(-1)^p \frac{\ze(p)-H_{n-1}^{(p)}}{(n-a)(n-b)}(-1)^n,\\
&\text{Res}(G(z),-a)=(-1)^p \frac{\pi \ze(p;a)}{\sin(\pi a)(a-b)} ,\\
&\text{Res}(G(z),-b)=(-1)^p \frac{\pi \ze(p;b)}{\sin(\pi b)(b-a)}.
\end{align*}
Hence, using Lemma \ref{lem-residue}, we have
\begin{align*}
\sum_{n=0}^\infty \text{Res}(G(z),n)+\sum_{n=1}^\infty \text{Res}(G(z),-n)+\text{Res}(G(z),-a)+\text{Res}(G(z),-b)=0.
\end{align*}
We also note that
\begin{align*}
\sum_{n=0}^\infty \frac{(-1)^n}{(n+a)(n+b)}+\sum_{n=1}^\infty \frac{(-1)^n}{(n-a)(n-b)}=\frac{\pi}{b-a} \left(\frac1{\sin (\pi a)}-\frac1{\sin (\pi b)}\right).
\end{align*}
Summing these four contributions yields the statement of the theorem.
\end{proof}
\begin{cor} For $a\notin \Z$ and $m\in \N_0$,
\begin{align}\label{for-para-alter-harm-num-hur-ze}
\sum\limits_{n=1}^{\infty}\frac{H_{n}^{(2m+1)}}{n^{2}-a^{2}}(-1)^n
&=\frac{1}{2}\sum\limits_{n=1}^{\infty}\frac{(-1)^n}{n^{2m+1}(n^{2}-a^{2})}\nonumber\\
&\quad+\frac{1}{2a}\sum\limits_{k=0}^{m}{\bar \zeta}(2k)\left({\bar \zeta}(2m-2k+2;-a)-{\bar \zeta}(2m-2k+2;a)\right)\nonumber\\
&\quad+\frac{\pi}{4a \sin(\pi a)}\Big(\zeta(2m+1;a)+\zeta(2m+1;-a)-2\zeta(2m+1)\Big).
\end{align}
\end{cor}
\begin{proof}
Setting $b=-a$ and $p=2m+1$ in \eqref{para2-alter-harmonic-double-sum} yields the desired result.
\end{proof}
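The alternating evaluation \eqref{for-para-alter-harm-num-hur-ze} can be checked in the same brute-force way; again $a=0.3$, $m=1$ and the truncation level are arbitrary illustrative choices.

```python
import math

a, m = 0.3, 1              # arbitrary non-integer a; m = 1, so p = 2m+1 = 3
N = 200_000

def hurwitz(s, q):
    """Hurwitz zeta zeta(s; q) by direct truncation."""
    return sum(1.0 / (n + q) ** s for n in range(N))

def alt_hurwitz(s, q):
    """Alternating Hurwitz zeta bar-zeta(s; q) = sum_{n>=0} (-1)^n/(n+q)^s."""
    return sum((-1) ** n / (n + q) ** s for n in range(N))

zeta3 = hurwitz(3, 1.0)
altz = {0: 0.5, 2: math.pi ** 2 / 12}          # bar-zeta(0) := 1/2, bar-zeta(2) = pi^2/12

# left-hand side: sum_{n>=1} (-1)^n H_n^{(3)} / (n^2 - a^2)
lhs, H = 0.0, 0.0
for n in range(1, N):
    H += 1.0 / n ** 3                          # H_n^{(3)}
    lhs += (-1) ** n * H / (n * n - a * a)

# right-hand side of the corollary with m = 1
rhs = 0.5 * sum((-1) ** n / (n ** 3 * (n * n - a * a)) for n in range(1, N))
rhs += (1 / (2 * a)) * sum(
    altz[2 * k] * (alt_hurwitz(2 * m - 2 * k + 2, -a) - alt_hurwitz(2 * m - 2 * k + 2, a))
    for k in range(m + 1))
rhs += math.pi / (4 * a * math.sin(math.pi * a)) * (
    hurwitz(3, a) + hurwitz(3, -a) - 2 * zeta3)

print(abs(lhs - rhs))      # far smaller than in the non-alternating case
```

Because the alternating series are truncated at the same level, their tails nearly cancel and the printed discrepancy is much smaller than the $O(1/N)$ level of the non-alternating check.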
\begin{cor} For integer $m\geq 1$ and $a\notin\Z$ with $a\neq 1/2$,
\begin{align}\label{para1-alter-harmonic-double-sum}
\sum_{n=1}^\infty \frac{H_n^{(2m)}}{(n+a)(n+1-a)}(-1)^n&=\frac{1}{1-2a} \sum_{k=0}^m {\bar \ze}(2k)\Big({\bar \ze}(2m-2k+1;1-a)-{\bar \ze}(2m-2k+1;a)\Big)\nonumber\\
&\quad+\frac{\pi}{2(1-2a)} \frac{\ze(2m;a)-\ze(2m;1-a)}{\sin(\pi a)}.
\end{align}
\end{cor}
\begin{proof}
Setting $b=1-a$ and $p=2m$ in \eqref{para2-alter-harmonic-double-sum} yields the desired result.
\end{proof}
In analogy with Theorem \ref{thm-para1-ration-funct-residue}, we can establish the following general theorem.
\begin{thm}\label{thm-alter-para1-ration-funct-residue} For $p\in \N$,
\begin{align}\label{a;lter-para1-ration-funct-residue}
&\sum_{n=0}^\infty \Big((-1)^p\zeta(p)+H_n^{(p)}\Big)(-1)^nr(n)+(-1)^p\sum_{n=1}^\infty \Big(\zeta(p)-H_{n-1}^{(p)}\Big)(-1)^nr(-n)\nonumber\\
&+2\sum_{k=0}^{[p/2]} \frac1{(p-2k)!} {\bar \ze}(2k) \sum_{n=0}^\infty (-1)^nr^{(p-2k)}(n)+\sum_{\beta\in T}{\rm Res}(g(z),\beta)=0,
\end{align}
where $\ze(1)$ and ${\bar \ze}(0)$ should be interpreted as $0$ and $1/2$, respectively, wherever they
occur, $r^{(p)}(n)$ denotes the $p$-th derivative of $r(z)$ evaluated at $z=n$, $r(z)$ is a rational function which is $O(z^{-2})$ at infinity, $T$ is the set of poles of $r(z)$, and no integer is a pole of $r(z)$. The function $g(z)$ is defined by
\[g(z):=\frac{\pi \psi^{(p-1)}(-z)}{(p-1)!\sin(\pi z)}r(z).\]
\end{thm}
\begin{proof} We consider the contour integral
\begin{align*}
\oint\limits_{\left( \infty \right)}g(z)dz=0.
\end{align*}
By a similar argument as in the proof of Theorem \ref{thm-para2-alter-harmonic-double-sum}, we obtain
\begin{align*}
&\text{Res}(g(z),n)=2\sum_{k=0}^{[p/2]}{\bar \ze}(2k)\frac{r^{(p-2k)}(n)}{(p-2k)!}(-1)^n+\Big((-1)^p\ze(p)+H_n^{(p)}\Big)(-1)^nr(n)\quad (n\in \N_0),\\
&\text{Res}(g(z),-n)=(-1)^p\Big(\zeta(p)-H_{n-1}^{(p)}\Big)(-1)^nr(-n)\quad (n\in \N).
\end{align*}
Hence, applying Lemma \ref{lem-residue}, we have
\begin{align*}
\sum_{n=0}^\infty \text{Res}(g(z),n)+\sum_{n=1}^\infty \text{Res}(g(z),-n)+\sum_{\beta\in T}{\rm Res}(g(z),\beta)=0.
\end{align*}
Thus, combining related identities yields the desired result.
\end{proof}
Hence, Theorem \ref{thm-para2-alter-harmonic-double-sum} follows immediately from Theorem \ref{thm-alter-para1-ration-funct-residue} if we set $r(z)=1/((z+a)(z+b))$.
In \cite{BBD2008}, the evaluation \eqref{for-para-harm-num-hur-ze} was used to obtain an explicit formula for the double zeta values $\zeta(2j,2m+1)\ (j\in \N,m\in \N_0)$ by expanding both sides in power series and comparing coefficients. Similarly, applying \eqref{for-para-alter-harm-num-hur-ze}, we obtain the following corollary.
\begin{cor} For $j\in \N$ and $m\in \N_0$,
\begin{align}\label{double-alter-zeta-values}
\zeta({\overline{2j}},2m+1)&=\frac1{2}{\bar \ze}(2m+2j+1)-\sum_{k=0}^m \binom{2j+2m-2k}{2j-1}{\bar \ze}(2k){\bar \ze}(2m+2j+1-2k)\nonumber\\
&\quad+\sum_{l=0}^{j-1} \binom{2j+2m-2l}{2m}{\bar \ze}(2l)\ze(2m+2j+1-2l),
\end{align}
where $\zeta({\overline{2j}},2m+1)$ is an alternating double zeta value defined by
\begin{align}
\zeta({\overline{2j}},2m+1):=\sum_{n>k\geq 1} \frac{(-1)^n}{n^{2j}k^{2m+1}}=\sum_{n=1}^\infty \frac{H_{n-1}^{(2m+1)}}{n^{2j}}(-1)^n.
\end{align}
\end{cor}
\begin{proof}
By direct calculations, for $|a|<1$, we have
\begin{align*}
&\sum_{n=1}^\infty \frac{H_{n-1}^{(2m+1)}}{n^2-a^2}(-1)^n=\sum_{j=1}^\infty \zeta(\overline{2j},2m+1)a^{2j-2},\\
&\sum_{n=1}^\infty \frac{(-1)^n}{n^{2m+1}(n^2-a^2)}=-\sum_{j=1}^\infty {\bar \ze}(2m+2j+1)a^{2j-2},\\
&\frac{\pi}{\sin(\pi a)}=\frac1{a}+2a\sum_{j=1}^\infty {\bar \ze}(2j)a^{2j-2},\\
&\ze(2m+1;a)+\ze(2m+1;-a)-2\ze(2m+1)\\&\quad=2\sum_{j=1}^\infty \binom{2j+2m}{2j}\ze(2m+2j+1)a^{2j},\\
&{\bar \ze}(2m-2k+2;-a)-{\bar \ze}(2m-2k+2;a)\\&\quad=-2\sum_{j=1}^\infty \binom{2j+2m-2k}{2j-1}{\bar \ze}(2m+2j+1-2k)a^{2j-1}.
\end{align*}
Then, using \eqref{for-para-alter-harm-num-hur-ze} gives
\begin{align*}
&\sum_{j=1}^\infty \ze(\overline{2j},2m+1)a^{2j-2}\\&=\frac1{2} \sum_{j=1}^\infty {\bar \ze}(2m+2j+1)a^{2j-2}-\sum_{j=1}^\infty \left\{\sum_{k=0}^m \binom{2j+2m-2k}{2j-1}{\bar \ze}(2k){\bar \ze}(2m+2j+1-2k)\right\}a^{2j-2}\\
&\quad+\frac1{2} \sum_{j=1}^\infty \binom{2j+2m}{2j} \ze(2m+2j+1)a^{2j-2}+\sum_{j=1}^\infty \sum_{j_1+j_2=j,\atop j_1,j_2\geq 1} \binom{2j_2+2m}{2j_2}{\bar \ze}(2j_1)\ze(2m+2j_2+1)a^{2j-2}.
\end{align*}
Thus, comparing the coefficients of $a^{2j-2}$ in the above identity yields the desired evaluation.
\end{proof}
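For $j=1$, $m=0$, formula \eqref{double-alter-zeta-values} collapses to $\ze(\overline{2},1)=\tfrac12{\bar\ze}(3)-\binom{2}{1}{\bar\ze}(0){\bar\ze}(3)+\binom{2}{0}{\bar\ze}(0)\ze(3)=\tfrac18\ze(3)$, which is easily confirmed numerically (the truncation level below is an arbitrary choice):

```python
import math

N = 100_000
zeta3 = sum(1.0 / n ** 3 for n in range(1, N))                  # zeta(3)
eta3 = sum((-1) ** (n - 1) / n ** 3 for n in range(1, N))       # bar-zeta(3)

# left-hand side: zeta(bar 2, 1) = sum_{n>k>=1} (-1)^n / (n^2 k)
lhs, H = 0.0, 0.0
for n in range(1, N):
    lhs += (-1) ** n * H / n ** 2      # H holds H_{n-1} at this point
    H += 1.0 / n

# right-hand side of the corollary for j = 1, m = 0
rhs = 0.5 * eta3 - math.comb(2, 1) * 0.5 * eta3 + math.comb(2, 0) * 0.5 * zeta3

print(lhs, rhs, zeta3 / 8)             # all three values agree
```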
\subsection{Quadratic Parametric Euler Sums}
Now, we give some evaluations of quadratic parametric Euler sums by using the method of contour integral.
\begin{thm}\label{thm-quadratic-para-ES} For $p,m\in \N$ and $a,b\notin \Z$ with $a\neq b$,
\begin{align}\label{Formu-quadratic-para-ES}
&\sum\limits_{n=1}^{\infty}\frac{H_{n}^{(p)}H_{n}^{(m)}}{(n+a)(n+b)}
+(-1)^{p+m}\sum\limits_{n=1}^{\infty}\frac{H_{n-1}^{(p)}H_{n-1}^{(m)}}{(n-a)(n-b)}\nonumber\\
&+(-1)^{m}\zeta(m)\sum\limits_{n=1}^{\infty}\frac{H_{n}^{(p)}}{(n+a)(n+b)}
+(-1)^{p}\zeta(p)\sum\limits_{n=1}^{\infty}\frac{H_{n}^{(m)}}{(n+a)(n+b)}\nonumber\\
&-(-1)^{p+m}\zeta(m)\sum\limits_{n=1}^{\infty}\frac{H_{n-1}^{(p)}}{(n-a)(n-b)}
-(-1)^{p+m}\zeta(p)\sum\limits_{n=1}^{\infty}\frac{H_{n-1}^{(m)}}{(n-a)(n-b)}\nonumber\\
&+(-1)^{p+m}\frac{\pi \cot(\pi b)}{b-a}\left\{\zeta(p;b)\zeta(m;b)-\zeta(p)\zeta(m)\right\}\nonumber\\
&-(-1)^{p+m}\frac{\pi \cot(\pi a)}{b-a}\left\{\zeta(p;a)\zeta(m;a)-\zeta(p)\zeta(m)\right\}\nonumber\\
&-2\frac{(-1)^{p+m}}{b-a}\sum\limits_{k=0}^{[(p+m)/{2}]}\zeta(2k)
\left\{\zeta(p+m-2k+1;a)-\zeta(p+m-2k+1;b)\right\}\nonumber\\
&+2\frac{(-1)^{p+m}}{b-a}
\sum\limits_{2k_{1}+k_{2}\leq m+1,\atop k_{1}\geq 0,k_{2}\geq 1}
(-1)^{k_{2}}\binom{k_{2}+p-2}{p-1}\zeta(2k_{1})\nonumber\\&\quad\quad\quad\quad\quad\quad\quad\times
\left\{ \begin{array}{l}
\zeta(k_{2}+p-1)\left[\zeta(m-2k_{1}-k_{2}+2;a)-\zeta(m-2k_{1}-k_{2}+2;b)\right]\nonumber\\
-(-1)^{p+k_{2}}\sum\limits_{n=0}^{\infty}
\left[\frac{H_{n}^{(k_{2}+p-1)}}{(n+a)^{m-2k_{1}-k_{2}+2}}-\frac{H_{n}^{(k_{2}+p-1)}}{(n+b)^{m-2k_{1}-k_{2}+2}}\right]\nonumber\\
\end{array} \right\}\\
&+2\frac{(-1)^{p+m}}{b-a}\sum\limits_{2k_{1}+k_{2}\leq p+1,\atop k_{1}\geq 0,k_{2}\geq 1}
(-1)^{k_{2}}\binom{k_{2}+m-2}{m-1}\zeta(2k_{1})\nonumber\\&\quad\quad\quad\quad\quad\quad\quad\times
\left\{ \begin{array}{l}
\zeta(k_{2}+m-1)\left[\zeta(p-2k_{1}-k_{2}+2;a)-\zeta(p-2k_{1}-k_{2}+2;b)\right]\nonumber\\
-(-1)^{m+k_{2}}\sum\limits_{n=0}^{\infty}
\left[\frac{H_{n}^{(k_{2}+m-1)}}{(n+a)^{p-2k_{1}-k_{2}+2}}-\frac{H_{n}^{(k_{2}+m-1)}}{(n+b)^{p-2k_{1}-k_{2}+2}}\right]\end{array} \right\}\nonumber\\
&=0,
\end{align}
where $\zeta \left(1\right)$ and $\zeta(0)$ should be interpreted as $0$ and $-1/2$, respectively, wherever they occur. $\ze(1;a):=-(\psi(a)+\gamma)$.
\end{thm}
\begin{proof}
Similarly to the proof of Theorem \ref{thm-para2-harmonic-double-sum}, we consider the contour integral
\begin{align*}
\oint\limits_{\left( \infty \right)}H(z)dz:=\oint\limits_{\left( \infty \right)}\frac{\pi \cot(\pi z)\psi^{(p-1)}(-z)\psi^{(m-1)}(-z)}{(z+a)(z+b)(p-1)!(m-1)!}dz=0.
\end{align*}
Observe that $H(z)$ has poles only at $-a,-b$ and the integers. Applying Lemma \ref{lem-approach-psi}, we can find that, at a nonnegative integer $n$, the pole has order $p+m+1$. Moreover, by a straightforward calculation, for $n\in \N_0$, we have
\begin{align*}
&H(z)\overset{z\rightarrow n}{=}\frac{1}{(z-n)^{p+m+1}}\frac{1}{(z+a)(z+b)}\\
&\quad\quad\times\left\{\begin{array}{l}
1-2\sum\limits_{k=1}^{[(p+m)/{2}]}\zeta(2k)(z-n)^{2k}\\+\sum\limits_{k=1}^{m+1}\binom{k+p-2}{p-1}\left[(-1)^{p}\zeta(k+p-1)-(-1)^{k}H_{n}^{(k+p-1)}\right](z-n)^{k+p-1}\\
-2\sum\limits_{2k_{1}+k_{2}\leq m+1,\atop k_{1},k_{2}\geq 1}
\binom{k_{2}+p-2}{p-1}\zeta(2k_{1})\left[(-1)^{p}\zeta(k_{2}+p-1)-(-1)^{k_{2}}H_{n}^{(k_{2}+p-1)}\right]\\ \quad\quad\quad\quad\quad\quad\quad\quad\quad\times(z-n)^{2k_{1}+k_{2}+p-1}\\
+\sum\limits_{k=1}^{p+1}\binom{k+m-2}{m-1}
\left[(-1)^{m}\zeta(k+m-1)-(-1)^{k}H_{n}^{(k+m-1)}\right](z-n)^{k+m-1}\\
-2\sum\limits_{2k_{1}+k_{2}\leq p+1,\atop k_{1},k_{2}\geq 1}
\binom{k_{2}+m-2}{m-1}\zeta(2k_{1})\left[(-1)^{m}\zeta(k_{2}+m-1)-(-1)^{k_{2}}H_{n}^{(k_{2}+m-1)}\right]\\ \quad\quad\quad\quad\quad\quad\quad\quad\quad\times(z-n)^{2k_{1}+k_{2}+m-1}\\
+\left[(-1)^{p}\zeta(p)+H_{n}^{(p)}\right]\left[(-1)^{m}\zeta(m)+H_{n}^{(m)}\right]
(z-n)^{p+m}+o((z-n)^{p+m})\end{array}
\right\}
\end{align*}
and the residue is
\begin{align*}
&\text{Res}[H(z),n]=\frac{1}{(p+m)!}\lim_{z\to n}\frac{d^{p+m}}{dz^{p+m}}\left\{(z-n)^{p+m+1}H(z)\right\}\\
&=\frac{(-1)^{p+m}}{b-a}\left\{\frac{1}{(n+a)^{p+m+1}}-\frac{1}{(n+b)^{p+m+1}}\right\}\\
&\quad-2\frac{(-1)^{p+m}}{b-a}\sum\limits_{k=1}^{[(p+m)/{2}]}\zeta(2k)\left\{\frac{1}{(n+a)^{p+m-2k+1}}-\frac{1}{(n+b)^{p+m-2k+1}}\right\}\\
&\quad-\frac{(-1)^{p+m}}{b-a}\sum\limits_{k=1}^{m+1}
\binom{k+p-2}{p-1}\left[(-1)^{k}\zeta(k+p-1)-(-1)^{p}H_{n}^{(k+p-1)}\right]\\
&\quad\quad\quad\quad\quad\quad\quad\times\left[\frac{1}{(n+a)^{m-k+2}}-\frac{1}{(n+b)^{m-k+2}}\right]\\
&\quad+2\frac{(-1)^{p+m}}{b-a}
\sum\limits_{2k_{1}+k_{2}\leq m+1,\atop k_{1},k_{2}\geq 1}
\binom{k_{2}+p-2}{p-1}(-1)^{2k_{1}+k_{2}}\zeta(2k_{1})\\
&\quad\quad\quad\times
\left[\zeta(k_{2}+p-1)-(-1)^{p+k_{2}}H_{n}^{(k_{2}+p-1)}\right]\left[\frac{1}{(n+a)^{m-2k_{1}-k_{2}+2}}-\frac{1}{(n+b)^{m-2k_{1}-k_{2}+2}}\right]\\
&\quad-\frac{(-1)^{p+m}}{b-a}\sum\limits_{k=1}^{p+1}\binom{k+m-2}{m-1}\left[(-1)^{k}\zeta(k+m-1)-(-1)^{m}H_n^{(k+m-1)}\right]
\\&\quad\quad\quad\quad\quad\quad\quad\times\left[\frac{1}{(n+a)^{p-k+2}}-\frac{1}{(n+b)^{p-k+2}}\right]\\
&\quad+2\frac{(-1)^{p+m}}{b-a}
\sum\limits_{2k_{1}+k_{2}\leq p+1,\atop k_{1},k_{2}\geq 1}
\binom{k_{2}+m-2}{m-1}(-1)^{2k_{1}+k_{2}}\zeta(2k_{1})\\
&\quad\quad\quad\times
\left[\zeta(k_{2}+m-1)-(-1)^{m+k_{2}}H_{n}^{(k_{2}+m-1)}\right]\left[\frac{1}{(n+a)^{p-2k_{1}-k_{2}+2}}-\frac{1}{(n+b)^{p-2k_{1}-k_{2}+2}}\right]\\
&\quad+\frac{(-1)^{p+m}\zeta(p)\zeta(m)+(-1)^{p}\zeta(p)H_{n}^{(m)}+(-1)^{m}\zeta(m)H_{n}^{(p)}+H_{n}^{(p)}H_{n}^{(m)}}{(n+a)(n+b)}.
\end{align*}
At a negative integer $-n$ and at the points $-a,-b$, the poles are simple and the residues are
\begin{align*}
&\text{Res}[H(z),-n]=(-1)^{p+m}\frac{\zeta(p)\zeta(m)-\zeta(p)H_{n-1}^{(m)}-\zeta(m)H_{n-1}^{(p)}+H_{n-1}^{(p)}H_{n-1}^{(m)}}{(n-a)(n-b)},\\
&\text{Res}[H(z),-a]=-(-1)^{p+m}\frac{\pi \cot(\pi a)}{b-a}\zeta(p;a)\zeta(m;a),\\
&\text{Res}[H(z),-b]=(-1)^{p+m}\frac{\pi \cot(\pi b)}{b-a}\zeta(p;b)\zeta(m;b).
\end{align*}
Hence, using Lemma \ref{lem-residue}, we have
\begin{align*}
\sum_{n=0}^\infty \text{Res}(H(z),n)+\sum_{n=1}^\infty \text{Res}(H(z),-n)+\text{Res}(H(z),-a)+\text{Res}(H(z),-b)=0.
\end{align*}
Summing these four contributions yields the statement of the theorem.
\end{proof}
Letting $(p,m)=(1,1)$ and $(2,2)$ in Theorem \ref{thm-quadratic-para-ES}, we obtain the following corollaries.
\begin{cor} For $a,b\notin \Z$ with $a\neq b$,
\begin{align}
&\sum\limits_{n=1}^{\infty}\frac{H_{n}^{2}}{(n+a)(n+b)}+\sum\limits_{n=1}^{\infty}\frac{H_{n-1}^{2}}{(n-a)(n-b)}
+\frac{\pi \cot(\pi b)}{b-a}\left(\psi(b)+\gamma\right)^{2}
-\frac{\pi \cot(\pi a)}{b-a}\left(\psi(a)+\gamma\right)^{2}\nonumber\\
&\quad-\frac{2}{b-a}
\left\{-\frac{\zeta(3;a)-\zeta(3;b)}{2}+\zeta(2)\left(\psi(b)-\psi(a)\right)\right\}\nonumber\\
&\quad-\frac{2}{b-a}\left\{\sum\limits_{n=1}^{\infty}\left[\frac{H_{n}}{(n+a)^{2}}-\frac{H_{n}}{(n+b)^{2}}\right]
+\zeta(2)\left(\psi(b)-\psi(a)\right)+\sum\limits_{n=0}^{\infty}\frac{H_{n}^{(2)}}{(n+a)(n+b)}(b-a)\right\}\nonumber\\
&=0.
\end{align}
\end{cor}
\begin{cor} For $a,b\notin \Z$ with $a\neq b$,
\begin{align}
&\sum\limits_{n=1}^{\infty}\frac{[H_{n}^{(2)}]^{2}}{(n+a)(n+b)}+\sum\limits_{n=1}^{\infty}\frac{[H_{n-1}^{(2)}]^2}{(n-a)(n-b)}\nonumber\\
&\quad+2\zeta(2)\sum\limits_{n=1}^{\infty}\frac{H_{n}^{(2)}}{(n+a)(n+b)}-2\zeta(2)\sum\limits_{n=1}^{\infty}\frac{H_{n-1}^{(2)}}{(n-a)(n-b)}\nonumber\\
&\quad+\frac{\pi \cot(\pi b)}{b-a}\left\{\zeta(2;b)^{2}-\zeta(2)^{2}\right\}
-\frac{\pi \cot(\pi a)}{b-a}\left\{\zeta(2;a)^{2}-\zeta(2)^{2}\right\}\nonumber\\
&\quad-\frac{2}{b-a}\left\{-\frac{\zeta(5;a)-\zeta(5;b)}{2}+\zeta(2)\left(\zeta(3;a)-\zeta(3;b)\right)+\zeta(4)\left(\psi(b)-\psi(a)\right)\right\}\nonumber\\
&\quad-\frac{2}{b-a}
\left\{ \begin{array}{l}
-\zeta(2)\left[\zeta(3;a)-\zeta(3;b)\right]-\sum\limits_{n=0}^{\infty}\left[\frac{H_{n}^{(2)}}{(n+a)^{3}}-\frac{H_{n}^{(2)}}{(n+b)^{3}}\right]\\
+2\zeta(3)\left[\zeta(2;a)-\zeta(2;b)\right]-2\sum\limits_{n=0}^{\infty}\left[\frac{H_{n}^{(3)}}{(n+a)^{2}}-\frac{H_{n}^{(3)}}{(n+b)^{2}}\right]\\
-3\zeta(4)\left(\psi(b)-\psi(a)\right)-3(b-a)
\sum\limits_{n=0}^{\infty}\frac{H_{n}^{(4)}}{(n+a)(n+b)}
\end{array} \right\}\nonumber\\
&\quad-\frac{4}{b-a}\zeta(2)\left\{\zeta(2)\left(\psi(b)-\psi(a)\right)+(b-a)\sum\limits_{n=0}^{\infty}\frac{H_{n}^{(2)}}{(n+a)(n+b)}\right\}\nonumber\\
&=0.
\end{align}
\end{cor}
\begin{re} From \cite[Eq. (2.24)]{Xu2017-JMAA} and \cite[Thm. 3.2]{XC2019}, we know that for integer $p\geq 2$, the parametric Euler sums
\begin{align*}
\sum_{n=1}^\infty \frac{H_n}{(n+a)^p} \quad (a\notin \N^-)
\end{align*}
can be expressed as a combination of products of (Hurwitz) zeta functions and the digamma function. Moreover, considering the contour integral
\begin{align*}
\oint\limits_{\left( \infty \right)}\frac{\pi \cot(\pi z)\psi^{(p-1)}(-z)\psi^{(m-1)}(-z)}{(p-1)!(m-1)!}r(z)dz=0,
\end{align*}
by a similar argument as in the proof of Theorem \ref{thm-quadratic-para-ES},
we can obtain an evaluation analogous to Theorem \ref{thm-para1-ration-funct-residue}. More generally, we can consider the contour integrals
\begin{align*}
\oint\limits_{\left( \infty \right)}\frac{\pi \cot(\pi z)\psi^{(p_1-1)}(-z)\cdots\psi^{(p_m-1)}(-z)}{(p_1-1)!\cdots(p_m-1)!}r(z)dz=0
\end{align*}
and
\begin{align*}
\oint\limits_{\left( \infty \right)}\frac{\pi \psi^{(p_1-1)}(-z)\cdots\psi^{(p_m-1)}(-z)}{\sin(\pi z)(p_1-1)!\cdots(p_m-1)!}r(z)dz=0
\end{align*}
to establish quite general evaluations of (alternating) parametric Euler sums. We leave the details to the interested reader.
\end{re}
\medskip
{\bf Funding.} The authors are supported by the National Natural Science Foundation of China (Grant No. 12101008), the Natural Science Foundation of Anhui Province (Grant Nos. 2108085QA01 and 2108085QG304) and the University Natural Science Research Project of Anhui Province (Grant No. KJ2020A0057).
\section{Introduction}
SU UMa-type dwarf novae, a subclass of dwarf novae, exhibit two types
of outbursts: normal outbursts and superoutbursts (for a review, see
\cite{war95book}; \cite{osa96review}). During superoutburst,
hump-like modulations called superhumps are visible. Basically, the
light source of superhumps is understood as phase-dependent tidal
dissipation in an eccentric accretion disk (\cite{whi88tidal};
\cite{hir90SHexcess}). The general consensus is that an eccentricity of
the accretion disk is excited by a 3:1 orbital resonance
(\cite{osa89suuma}). According to the
original thermal-tidal instability model (TTI model, \cite{osa89suuma}),
the radius of the accretion disk monotonically increases with each
normal outburst. When the disk finally reaches the 3:1 resonance radius,
the accretion disk is tidally deformed and triggers a superoutburst. The
TTI model reproduces well basic behavior of SU UMa-type dwarf
novae. However, a reform of the TTI model may be required, particularly in
systems with unusual recurrence times of superoutbursts
(\cite{hel01eruma}; \cite{pat02wzsge}; \cite{osa03DNoutburst}).
Over the past few years, research activity concerning SU UMa-type dwarf
novae has been significantly improved. One of the most important research
is that unprecedented photometric surveys during superoutbursts have
been carried out by T. Kato and his colleagues (\cite{pdot}; \cite{pdot2};
\cite{pdot3}). They collected all of available data and
analyzed the light curves during superoutbursts, from which they have
established the basic picture of the superhump period changes (see
figure 3 of \citet{pdot}). Another important development to be noted is
that the $Kepler$ satellite has provided us with unprecedentedly precise
light curves at one-minute cadence (\cite{kepler1};
\cite{kepler2}). This allows us to investigate detailed light curves
that cannot be achieved under ground-based observations
(\cite{2010ApJ...717L.113S}; \cite{2010ApJ...725.1393C};
\cite{2011ApJ...741..105W}; \cite{pdot3};
\cite{2012ApJ...747..117C}). Although these surveys improve our
understanding of SU UMa-type dwarf novae, the diversity of the
observations calls for further modification of the TTI model.
In order to decipher and understand the observed diversity of SU
UMa-type dwarf novae, we started a new approach: simultaneous multicolor
photometry of dwarf novae not only during outburst but also during
quiescence. As a first step, we performed multicolor photometry of SU
UMa itself from 2011 December to 2012 February. SU UMa is a prototype of
SU UMa-type dwarf novae ranging $V$=11.3-15.7 \citep{RKcat08} and its
orbital period is determined to be $P_{\rm orb}$=0.07635 d
\citep{tho86suuma}. However, anomalous behavior with short and long time
scales were reported in the previous studies (\cite{ros00suuma};
\cite{kat02suuma}). In this letter, we report on detection of superhumps
during a 2012 January normal outburst. This is the first recorded
superhumps that emerged in the middle of a supercycle. Results of the
whole observations will be discussed in a forthcoming paper.
\section{Observation and Result}
\begin{figure*}
\begin{center}
\FigureFile(160mm,80mm){figure1.eps}
\end{center}
\caption{$R_{\rm c}$ band light curve of SU UMa. The abscissa and
ordinate denote HJD$-$2455900 and $R_{\rm c}$ magnitude,
respectively. The normal outburst in which superhumps are detected is
marked with an arrow. Note that this normal outburst is held between
two normal outbursts.}
\label{lc}
\end{figure*}
Time-resolved CCD photometry was performed from 2011 December 1 to 2012
February 20 at Okayama Astrophysical Observatory using the 50-cm MITSuME
telescope, which is able to obtain $g'$, $R_{\rm c}$, and $I_{\rm c}$
bands simultaneously \citep{3me}. In this letter, we extracted data from
2012 January 4 to 7, during which SU UMa experienced a normal
outburst. The exposure time was 30 s with a read-out time as short as 1
s. The data were processed in the standard manner using
IRAF/daophot\footnote{IRAF (Image Reduction and Analysis Facility) is
distributed by the National Optical Astronomy Observatories, which is
operated by the Association of Universities for Research in Astronomy,
Inc., under cooperative agreement with National Science
Foundation.}. After removing bad images, we acquired
1577 usable images in the $g'$ band, 1588 in the $R_{\rm c}$ band,
and 1587 in the $I_{\rm c}$ band. Differential
photometry was carried out using Tycho-2 4126-00036-1 (RA:
08:12:45.104, Dec: +62:26:17.57), whose constancy was checked by nearby
stars in the same image. Heliocentric correction was made before the
following analyses.
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure2.eps}
\end{center}
\caption{PDM analysis during HJD 2455932-34, corresponding to the
declining stage of the normal outburst. A weak signal can be seen at
$P$=0.07903(11) d, almost identical to the mean superhump period during
the 1989 April superoutburst of SU UMa. Also seen is the periodicity at
$P$=0.07616(11) d, slightly shorter than the orbital period of SU
UMa.}
\label{pdm}
\end{figure}
Figure \ref{lc} shows $R_{\rm c}$ band light curve of SU UMa between
2011 December 3 and 2012 February 2. On 2012 January 4 (HJD 2455931),
the magnitude monotonically brightened at a rate of -1.41(4) mag/d,
indicating the initiation of the outburst. The magnitude reached
$R_{\rm c}$ ${\sim}$ 12.3 on 2012 January 5 (HJD 2455932), after which
the light curve decayed at a rate of 0.64(1) mag/d. After the end of the
normal outburst, surprisingly, SU UMa entered an anomalous state in
which the magnitude was ${\sim}$ 0.5 mag brighter than that in usual
quiescence. This $bright$ $quiescence$ lasted until the next normal
outburst. More details are beyond the scope of this letter and will be
discussed in a forthcoming paper. Taking this observation into
consideration, we infer that the normal outburst ended by 2012
January 8 (HJD 2455935).
We applied the phase dispersion minimization method (PDM,
\cite{ste78pdm}) to estimate the periods during the normal
outburst. After removing the declining trends, we combined light curves
on 2012 January 5, 6, and 7. The resultant theta diagram on $R_{\rm c}$
band is displayed in figure \ref{pdm}. As can be seen in this figure,
two strong signals appear at $P$=0.07616(11) d and $P$=0.07903(11) d,
respectively. The former period is very close to the orbital period of
the system but slightly shorter. The latter period, on the other hand,
is in excellent agreement with the mean superhump period during
the 1989 April superoutburst of SU UMa reported by \citet{uda90suuma}.
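To sketch how the PDM statistic of \citet{ste78pdm} works, the following toy example folds synthetic data on a grid of trial periods and minimizes the ratio $\Theta$ of the mean intra-bin variance to the total variance; the signal period, noise level, bin number, and period grid are arbitrary choices and do not reproduce the actual analysis.

```python
import math
import random

def pdm_theta(times, mags, period, nbins=10):
    """PDM statistic: pooled variance of the phase-folded light curve
    within phase bins, divided by the overall variance. A deep minimum
    of theta over trial periods marks a candidate period."""
    mean = sum(mags) / len(mags)
    total_var = sum((m - mean) ** 2 for m in mags) / (len(mags) - 1)
    bins = [[] for _ in range(nbins)]
    for t, m in zip(times, mags):
        phase = (t / period) % 1.0
        bins[min(int(phase * nbins), nbins - 1)].append(m)
    num = den = 0.0
    for b in bins:
        if len(b) > 1:
            bm = sum(b) / len(b)
            num += sum((m - bm) ** 2 for m in b)
            den += len(b) - 1
    return (num / den) / total_var

# synthetic light curve: a 0.079-d sinusoid plus Gaussian noise
random.seed(1)
times = [0.02 * i for i in range(500)]
mags = [math.sin(2 * math.pi * t / 0.079) + 0.1 * random.gauss(0, 1) for t in times]

trial = [0.070 + 0.0001 * k for k in range(200)]
thetas = [pdm_theta(times, mags, p) for p in trial]
best = trial[thetas.index(min(thetas))]
print(best)                            # recovers a period close to 0.079 d
```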
In order to clarify the nature of this periodicity, we obtained
phase-averaged $R_{\rm c}$ band light curve and $g'-I_{\rm c}$ color
folded with 0.07903 d, which are given in figure \ref{shvar}. Although
the data contain a secondary peak around phase 0.4, a rapid rise and
slow decline, characteristic of superhumps are visible. Furthermore,
this profile bears significant resemblance to that obtained in the
previous study \citep{kat02suuma}. As for $g'-I_{\rm c}$ color, the
bluest peak precedes the maximum of the $R_{\rm c}$
light curve by ${\phi}{\sim}$0.2. It should be noted that such a
phase discordance also occurred during the 2007 superoutbursts of
V455 And \citep{mat09v455and}.\footnote{See also \citet{uem08j1021} in
which phase discordance between $V$ and $J$ was reported during the 2006
superoutburst of SDSS J102146.44+234926.3.} We also folded the
$R_{\rm c}$ light curve and $g'-I_{\rm c}$ with 0.07616 d, which are
given in figure 4. In this figure, a significant difference compared
with figure 3 is that the peak $R_{\rm c}$ magnitudes coincide with the
bluest peaks in $g'-I_{\rm c}$.
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure3.eps}
\end{center}
\caption{Phase averaged $R_{\rm c}$ (filled circle) and $g'-I_{\rm c}$ color
(filled square) after folding with
$P$=0.07903 d. Although the $R_{\rm c}$ light curve shows a secondary peak, a
rapid rise and slow decline, characteristic of superhumps, are
visible. Note that the bluest peak in $g'-I_{\rm c}$ precedes that in
$R_{\rm c}$ by phase ${\sim}$ 0.2. Datapoints are vertically shifted
for display purpose.}
\label{shvar}
\end{figure}
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){figure4.eps}
\end{center}
\caption{Same as figure 3 but folded with $P$=0.07616 d. Double-peaked
modulations, reminiscent of orbital humps, are visible. Note that the
bluest peak in $g'-I_{\rm c}$ is in accordance with the magnitude peak
in $R_{\rm c}$. Datapoints are vertically shifted for display purpose.}
\label{orbvar}
\end{figure}
\section{Discussion}
In the previous studies, many authors have reported the presence of
(late) superhumps after the end of the superoutburst (\cite{pat02wzsge};
\cite{pat98egcnc}; \cite{uem02j2329}; \cite{kat04egcnc};
\cite{kat08lsh}; \cite{pdot}). This phenomenon can be understood if the
accretion disk retains its eccentricity after the end of the
superoutburst. Recently, \citet{pdot3} detected superhumps during a
normal outburst of V1504 Cyg. After this normal outburst, V1504 Cyg
returned to quiescence and the subsequent outburst erupted as a
superoutburst. In this case, full development of superhumps may have
been prevented despite the radius of the accretion disk exceeding the 3:1
resonance radius. In any case, superhumps are observed in the vicinity of a
superoutburst.
According to the AAVSO light curve generator, SU UMa experienced
superoutbursts on 2011 July and 2012 May. This means that the 2012
January normal outburst is independent of the superoutbursts. As for
the magnitude and color behavior, \citet{mat09v455and} suggested that
superhump phase discordance between them is associated with the heating
or cooling process in the accretion disk. In combination with the
observed periodicity, profile of the light curve, and behavior of color
index in $g'-I_{\rm c}$, we can conclude that this is the first example
that superhumps are observed during an $isolated$ normal outburst of SU
UMa-type dwarf novae. Our present result further indicates that the
radius of the accretion disk exceeds the 3:1 resonance radius even in the
middle of the supercycle. Recently, \citet{2012ApJ...747..117C} studied
Kepler data of V344 Lyr, in which the radius at which the
thermal instability sets in ($r_{\rm trig}$) may be the largest roughly
in the midst of a supercycle. Although $r_{\rm trig}$ may not be
necessarily linked with the radius of the accretion disk, this result
raises the possibility that the accretion disk exceeds the
3:1 resonance radius even in the midst of quiescence. In order to test
our hypothesis, quiescent spectroscopy should be performed, which
enables us to measure the radius of the accretion disk.
Finally, we briefly discuss figure 4. As already described above, the
most important point is that the peak magnitude coincides with color
index in $g'-I_{\rm c}$. This implies that the main light source is
influenced by a geometric effect of the accretion disk rather than the
disk process itself. If this is the case, then negative superhumps in ER
UMa stars, possibly originating from a tilted disk, should show phase accordance
between magnitude and color \citep{oht12eruma}. This should be
clarified in future observations.
\vskip 5mm
We express our gratitude to Daisaku Nogami for his constructive comments
on the manuscript of the letter. We acknowledge with thanks the variable
star observations from the AAVSO International Database contributed by
observers worldwide and used in this research. A.I. and H.I. are
supported by Grant-In-Aid for Scientific Research (A) 23244038 from the
Japan Society for the Promotion of Science (JSPS). This work is partly
supported by Optical \& Near-infrared Astronomy Inter-University
Cooperation Program, supported by the MEXT of Japan.
\section{Introduction and main results}\label{sec:inro}
Let $\Omega \subset {\mathbb R}^N$, $N \geq 2$, be a bounded, open domain, and let $p>1$. We say that $u \in W^{1,p}_0(\Omega) \setminus \{0\}$ is an eigenfunction of the $p$-Laplacian associated to the eigenvalue $\lambda \in \mathbb{R}$ if it is a weak solution of
\begin{equation}\label{eq:D}
\left\{
\begin{array}{r c l l} -\Delta_p u & = & \lambda |u|^{p-2}u & \text{in }\Omega, \\ u & = & 0 & \text{on }\partial \Omega,
\end{array}
\right.
\end{equation}
where $\Delta_p u = \text{div}(|\nabla u|^{p-2}\nabla u)$. If $p=2$, \eqref{eq:D} is the well-known eigenvalue problem for the Laplace operator. The first eigenvalue $\lambda_1(p;\Omega)$ of the $p$-Laplacian is defined as
\begin{equation}\label{firsteigenvalue}
\lambda_1(p;\Omega) = \min_{u \in \mathcal{S}_p} \int_\Omega |\nabla u|^p \, dx,
\end{equation}
where
$$
\mathcal{S}_p := \{u \in W^{1,p}_0(\Omega)\,\big|\, \|u\|_{L^p(\Omega)} = 1\}.
$$
Besides the first eigenvalue, in the linear case $p=2$, the standard Courant-Fischer minimax formula
\begin{equation}\label{eq:eigenvalue_linear}
\lambda_{k}(2;\Omega) = \min_{X_k} \max_{u \in X_k \cap \mathcal{S}_2} \int_\Omega |\nabla u|^2 \, dx,
\quad k \in \mathbb{N},
\end{equation}
provides a sequence of eigenvalues which exhausts the spectrum of the Laplacian, cf.\ \cite[Theorem 8.4.2]{attouchbuttazzo}.
In \eqref{eq:eigenvalue_linear}, the minimum is taken over subspaces $X_k \subset W_0^{1,2}(\Omega)$ of dimension $k$.
However, for $p\neq 2$ the problem is nonlinear, and it is necessary to make use of a different method. A sequence of \textit{variational eigenvalues} can be obtained by means of the following minimax variational principle.
Let $\mathcal{A} \subset W^{1,p}_0(\Omega)$ be a \textit{symmetric set}, i.e., if $u \in \mathcal{A}$, then $-u \in \mathcal{A}$. Define the {\it Krasnosel'ski\u{\i} genus} of $\mathcal{A}$ as
$$
\gamma(\mathcal{A}):=\inf\{k\in\mathbb{N} \,\big|\, \exists \mbox{ a continuous odd map } f:\mathcal{A} \to S^{k-1}\}
$$
with the convention $\gamma(\mathcal{A}) = +\infty$ if, for every $k \in \mathbb{N}$, no continuous odd map $f:\mathcal{A} \to S^{k-1}$ exists.
Here $S^{k-1}$ is a $(k-1)$-dimensional sphere.
For $k \in \mathbb{N}$ we define
$$
\Gamma_k(p) := \left\{\mathcal{A} \subset \mathcal{S}_p \,\big|\, \mathcal{A} \mbox{ symmetric and compact},\, \gamma(\mathcal{A})\geq k\right\}
$$
and
\begin{equation}\label{highereigenvalues}
\lambda_k(p;\Omega) := \inf_{\mathcal{A} \in \Gamma_k(p)} \max_{u \in \mathcal{A}} \int_\Omega |\nabla u|^p \, dx.
\end{equation}
It is known that each $\lambda_k(p;\Omega)$ is an eigenvalue and
\begin{equation*}
0 < \lambda_1(p;\Omega) < \lambda_2(p;\Omega) \leq \dots \leq \lambda_k(p;\Omega) \to +\infty
\quad \text{as } k \to +\infty,
\end{equation*}
see \cite[\S 5]{garciaazoreroperal}.
However, it is not known if the sequence $\{\lambda_k(p;\Omega)\}_{k=1}^{+\infty}$ exhausts all possible eigenvalues, except for the case $p=2$, where the eigenvalues in \eqref{highereigenvalues} coincide with the eigenvalues in \eqref{eq:eigenvalue_linear}, see, e.g., \cite[Proposition 4.7]{cuesta} or \cite[Appendix~A]{brascoparinisquassina}.
It has to be observed that the definitions of $\lambda_1(p;\Omega)$ by \eqref{firsteigenvalue} and \eqref{highereigenvalues} are consistent.
The associated first eigenfunction is unique modulo scaling and has a strict sign in $\Omega$ (cf.~\cite{bellonikawohl,vazquez}), while eigenfunctions associated to any other eigenvalue must necessarily be sign-changing (see, e.g., \cite[Lemma~2.1]{kawohllindqvist}).
Therefore, it makes sense to define the \textit{nodal domains} of an eigenfunction $u$ as the connected components of the set $\{x \in \Omega: u(x) \neq 0\}$, and the \textit{nodal set} of $u$ as $\{x \in \Omega: u(x) = 0\}$.
The version of the Courant nodal domain theorem for the $p$-Laplacian obtained in \cite{drabekrobinson} states that any eigenfunction associated to $\lambda_k(p;\Omega)$ with $k \geq 2$ has at most $2k-2$ nodal domains.
In particular, any eigenfunction associated to $\lambda_2(p;\Omega)$ has exactly two nodal domains.
Moreover, since there are no eigenvalues between $\lambda_1(p;\Omega)$ and $\lambda_2(p;\Omega)$ \cite{anane}, the latter is indeed the second eigenvalue.
\medskip
For the sake of simplicity, in the following we will restrict our attention mainly to the case where $\Omega = B^N$ is an open $N$-ball centred at the origin. In the linear case $p=2$, the eigenfunctions of the Laplace operator on $B^N$ are given explicitly by means of Bessel functions and spherical harmonics, and therefore it can be seen that the first eigenfunction is radially symmetric, while the nodal set of any second eigenfunction is an equatorial section of the ball; moreover, the following multiplicity result holds true:
\begin{equation}\label{eq:chain_for_linear_case}
\lambda_1(2;B^N) < \lambda_2(2;B^N) = \dots = \lambda_{N+1}(2;B^N) < \lambda_{N+2}(2;B^N),
\end{equation}
see, for instance, the discussion in \cite{helffer}.
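As a quick numerical illustration of \eqref{eq:chain_for_linear_case} in the case $N=2$ (not part of the argument), one can use the classical fact that the Dirichlet eigenvalues of the Laplacian on the unit disc are the squares of the zeros $j_{m,k}$ of the Bessel functions $J_m$, each eigenvalue with $m \geq 1$ counted with multiplicity $2$. A minimal sketch, assuming SciPy is available:

```python
from scipy.special import jn_zeros

# Dirichlet eigenvalues of the Laplacian on the unit disc are j_{m,k}^2,
# where j_{m,k} is the k-th positive zero of the Bessel function J_m.
# Angular modes cos(m*theta), sin(m*theta) give multiplicity 2 for m >= 1.
eigs = []
for m in range(5):
    for z in jn_zeros(m, 5):
        eigs += [z**2] * (1 if m == 0 else 2)
eigs.sort()

# eigs[k-1] plays the role of lambda_k(2; B^2):
# the chain lambda_1 < lambda_2 = lambda_3 < lambda_4 holds.
assert eigs[0] < eigs[1] == eigs[2] < eigs[3]
```

The first four values are approximately $5.78 < 14.68 = 14.68 < 26.37$, in agreement with \eqref{eq:chain_for_linear_case} for $N=2$.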
In contrast, in the nonlinear case $p \neq 2$ much less is known. While it is relatively easy to show that the first eigenfunction is still radially symmetric by means of Schwarz symmetrization, symmetry properties of second eigenfunctions, as well as the multiplicity of the second eigenvalue, are not yet completely understood. For instance, it is only known that second eigenfunctions cannot be radially symmetric; this was shown in the planar case in \cite{parini} for $p$ close to $1$, and later in \cite{benediktdrabekgirg} for general $p > 1$. The result was finally generalized to any dimension in \cite{anoopdrabeksasi}. The notion of multiplicity itself needs to be clarified in the nonlinear case. We say that the variational eigenvalue $\lambda_{k}(p;\Omega)$ has multiplicity $m$ if there exist $m$ variational eigenvalues $\lambda_{l}, \dots, \lambda_{l+m-1}$ with $l \leq k \leq l+m-1$ such that
\begin{equation}\label{eq:multiplicity}
\lambda_{l-1}(p;\Omega) < \lambda_{l}(p;\Omega) = \dots = \lambda_{k}(p;\Omega) = \dots =
\lambda_{l+m-1}(p;\Omega) < \lambda_{l+m}(p;\Omega).
\end{equation}
We point out that we are not aware of any multiplicity results for higher eigenvalues of the $p$-Laplacian.
\medskip
Despite the deficit of information about symmetry properties of variational eigenfunctions, it is possible to consider eigenvalues (possibly non-variational) with associated eigenfunctions which respect certain symmetries of $B^N$. For instance, the existence of a sequence of eigenvalues
$$
0 < \mu_1(p;B^N) < \mu_2(p;B^N) < \dots < \mu_k(p;B^N) \to +\infty
\quad \text{as } k \to +\infty,
$$
corresponding to \emph{radial eigenfunctions} has been shown, for instance, in \cite{delpinomanasevich}. Each radial eigenfunction associated to $\mu_k(p;B^N)$ is unique modulo scaling and possesses exactly $k$ nodal domains. The latter implies that $\lambda_k(p;B^N) \leq \mu_k(p;B^N)$ for any $k \in \mathbb{N}$ and $p>1$ (see Lemma~\ref{lem:Krasnoselskii_by_scaling} below). The above-mentioned results about radial properties of first and second eigenfunctions, together with \cite[Theorem~1.1]{bobkovdrabek}, can therefore be stated as
$$
\lambda_1(p;B^N) = \mu_1(p;B^N)
\quad
\text{and}
\quad \lambda_k(p;B^N)<\mu_k(p;B^N)
\quad \text{for all } p>1 \text{ and } k \geq 2.
$$
Another sequence of eigenvalues
$$
0 < \tau_1(p;B^N) < \tau_2(p;B^N) < \dots < \tau_k(p;B^N) \to +\infty
\quad \text{as } k \to +\infty,
$$
was considered in \cite[Theorem~1.2]{anoopdrabeksasi}. Here $\tau_k(p;B^N)$ is constructed in such a way that it has an associated \textit{symmetric eigenfunction}\footnote{\label{footref1}
We use the adjective ``symmetric'' to distinguish this eigenfunction from the radial one, since $\mu_k(p;B^N)$ and $\tau_k(p;B^N)$ can be equal to each other and hence might have associated eigenfunctions with inappropriate nodal structures, see \cite[Corollary~1.3 and Theorem~1.4]{bobkovdrabek}.} whose nodal domains are spherical wedges of angle $\frac{\pi}{k}$; see also Section~\ref{sec:Eigenvalue_auxiliary_facts} below, where a generalization of this sequence to other symmetric domains is given.
In particular, the nodal set of any symmetric eigenfunction associated to $\tau_1(p;B^N)$ is an equatorial section of $B^N$.
By construction, a symmetric eigenfunction associated to $\tau_k(p;B^N)$ has $2k$ nodal domains, which implies that
$$
\lambda_{2k}(p;B^N) \leq \tau_k(p;B^N)
\quad
\text{for any } k \in \mathbb{N} \text{ and } p > 1.
$$
At the same time, in the linear case, one can easily use the Courant-Fischer variational principle \eqref{eq:eigenvalue_linear} to show (see Remark~\ref{rem:2k+1_linear} below) that at least
\begin{equation}\label{eq:l2k<tk}
\lambda_{2k}(2;B^N) \leq \lambda_{2k+1}(2;B^N) \leq \tau_k(2;B^N)
\quad
\text{for any } k \in \mathbb{N}.
\end{equation}
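For $p=2$ and $\Omega = B^2$, the inequality \eqref{eq:l2k<tk} can be checked against the explicit spectrum, using the classical fact that a first Dirichlet eigenfunction of the plane wedge of angle $\frac{\pi}{k}$ in the unit disc is $J_k(j_{k,1}r)\sin(k\theta)$, so that $\tau_k(2;B^2) = j_{k,1}^2$. An illustrative numerical sketch (assuming SciPy; a small tolerance guards against round-off when equality holds):

```python
from scipy.special import jn_zeros

# Sorted Dirichlet eigenvalues of the unit disc, with multiplicities.
eigs = []
for m in range(10):
    for z in jn_zeros(m, 10):
        eigs += [z**2] * (1 if m == 0 else 2)
eigs.sort()

def tau(k):
    # tau_k(2; B^2) = j_{k,1}^2: first eigenvalue of the wedge of angle pi/k
    return jn_zeros(k, 1)[0]**2

for k in (1, 2, 3, 4):
    lam_2k, lam_2k_plus_1 = eigs[2*k - 1], eigs[2*k]
    # lambda_{2k} <= lambda_{2k+1} <= tau_k
    assert lam_2k <= lam_2k_plus_1 <= tau(k) + 1e-9
```

For $k=1,2$ the chain is saturated ($\lambda_{2k}=\lambda_{2k+1}=\tau_k$), while for $k=3$ one finds $j_{0,2}^2 \approx 30.47 < j_{3,1}^2 \approx 40.71$.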
The generalization of even such simple facts as \eqref{eq:chain_for_linear_case} and \eqref{eq:l2k<tk} to the nonlinear case $p \neq 2$ meets certain difficulties.
The main obstruction consists in the following fairly common problem:
\medskip
\begin{center}
\textit{How to obtain a symmetric compact set $A \subset \mathcal{S}_p$ with suitably high Krasnosel'ski\u{\i} genus, and, at the same time, with suitably low value $\max\limits_{u \in A} \int_\Omega |\nabla u|^p \, dx$?}
\end{center}
\medskip
In the linear case, the consideration of subspaces spanned by the first $k$ eigenfunctions $\varphi_1, \dots, \varphi_k$ directly solves this problem.
Let us briefly sketch the approach, supposing that we want to prove the multiplicity in \eqref{eq:chain_for_linear_case} using only the definition \eqref{highereigenvalues}.
Let $\varphi_1$ and $\varphi_2$ be a first and a second eigenfunction of the Laplacian on $B^N$, respectively, such that $\|\varphi_i\|_{L^2(B^N)} = 1$ for $i=1,2$. Since $B^N$ and the Laplace operator are rotation invariant, we see that $\varphi_2$ generates $N$ linearly independent second eigenfunctions $\varphi_2, \dots, \varphi_{N+1}$ whose nodal sets are equatorial sections of $B^N$ orthogonal to each other.
Consider the set
\begin{equation}\label{eq:A_k}
\mathcal{B}_2 :=
\bigg\{
\sum_{i=1}^{N+1} \alpha_i \varphi_i\,\big|\, \sum_{i=1}^{N+1} |\alpha_i|^2 = 1
\biggr\}.
\end{equation}
Evidently, $\mathcal{B}_2$ is symmetric and compact, and it is not hard to show that $\gamma(\mathcal{B}_2) = N+1$. Moreover, since all $\varphi_1, \dots, \varphi_{N+1}$ are mutually orthogonal with respect to the $L^2$-inner product, we get $\mathcal{B}_2 \subset \mathcal{S}_2$. Indeed,
\begin{equation}\label{eq:orthogonaliry}
\|u\|_{L^2(B^N)}^2 = \sum_{i=1}^{N+1} \alpha_i^2 \, \|\varphi_i\|_{L^2(B^N)}^2 = 1
\quad \text{for any } u \in \mathcal{B}_2.
\end{equation}
Therefore, $\mathcal{B}_2 \in \Gamma_{N+1}(2)$, and, using again the orthogonality, we obtain
\begin{equation*}
\lambda_{N+1}(2;B^N) \leq \max_{u \in \mathcal{B}_2} \int_{B^N} |\nabla u|^2 \, dx \leq
\max_{\alpha_1^2 + \dots + \alpha_{N+1}^2 = 1}\sum_{i=1}^{N+1} \alpha_i^2 \, \lambda_2(2;B^N) \|\varphi_i\|_{L^2(B^N)}^2 = \lambda_2(2;B^N),
\end{equation*}
which leads to the desired chain of equalities in \eqref{eq:chain_for_linear_case}.
However, this approach does not work well enough in the nonlinear case $p \neq 2$.
First of all, we do not know whether a second eigenfunction has an equatorial section of $B^N$ as its nodal set. This can be overcome by considering a symmetric eigenfunction $\Psi_1$ associated to $\tau_1(p;B^N)$. Using the first eigenfunction $\varphi_1$, the symmetric eigenfunction $\Psi_1$, and noting that the $p$-Laplacian is rotation invariant for any $p>1$, we can produce $N+1$ linearly independent eigenfunctions as above and define a symmetric compact set $\mathcal{B}_p$ analogously to \eqref{eq:A_k}.
Moreover, similarly to \cite[Lemma~2.1]{huang} it can be shown that $\gamma(\mathcal{B}_p) = N+1$. However, the lack of $L^2$-orthogonality prevents us from achieving $\mathcal{B}_p \subset \mathcal{S}_p$ as in \eqref{eq:orthogonaliry}, and further normalization of $\mathcal{B}_p$ increases the value $\max\limits_{u \in \mathcal{B}_p} \int_{B^N} |\nabla u|^p \, dx$.\footnote{A similar approach was used in \cite[Section~2]{huang}. However, that approach also does not necessarily give a small upper bound for $\max\limits_{u \in A_{k}(p)} \int_\Omega |\nabla u|^p \, dx$ due to a gap in the proof of \cite[Lemma~2.3]{huang}. Namely, it is assumed there that $\|u\|_{L^p(\Omega)}=1$ for any $u \in A_k(p)$, which might not be correct.}
\medskip
Another usual approach to obtain sets of higher Krasnosel'ski\u{\i} genus for general $p>1$ is based on the independent \textit{scaling} of nodal components of a function, cf.\ Lemma~\ref{lem:Krasnoselskii_by_scaling} below. Assume that some $w \in W_0^{1,p}(\Omega)$ can be represented as $w = w_1 +\dots+ w_k$, where all $w_i \in \mathcal{S}_p$ and they are disjointly supported.
Considering the set
$$
\mathcal{C}_k =
\biggl\{
\sum_{i=1}^{k} \alpha_i w_i\,\big|\, \sum_{i=1}^{k} |\alpha_i|^p = 1
\biggr\},
$$
we easily achieve that $\mathcal{C}_k \in \Gamma_k(p)$.
However, as before, the disadvantage of this approach is that $\max\limits_{u \in \mathcal{C}_k} \int_{\Omega} |\nabla u|^p \, dx$ cannot be made, in general, appropriately small.
\medskip
In this article, we present a variation of the above-mentioned approaches. Namely, using the symmetries of $\Omega$, we combine the scaling of nodal components of an eigenfunction with its rotations, which allows us to find a set $\mathcal{A} \in \Gamma_k(p)$ for appropriately big $k \in \mathbb{N}$, while keeping control of the value $\max\limits_{u \in \mathcal{A}} \int_{\Omega} |\nabla u|^p \, dx$.
By virtue of this fact, we obtain the following generalizations of \eqref{eq:chain_for_linear_case} and \eqref{eq:l2k<tk}, which can be seen as a step towards exact multiplicity results for nonlinear variational higher eigenvalues.
\medskip
\begin{thm}\label{thm:main1}
Let $\Omega \subset {\mathbb R}^N$ be a radially symmetric bounded domain, $N \geq 2$. Let $p>1$, $k \geq 1$ and let $\tau_k(p;\Omega)$ be defined as in \eqref{definitiontau}. Then the following inequalities are satisfied:
\begin{align}
\label{eq:ln+1<t2}
\lambda_2(p;\Omega) \leq \dots \leq \lambda_{N+1}(p;\Omega) &\leq \tau_1(p;\Omega);\\
\label{eq:l2k+1tk1}
\lambda_{2k}(p;\Omega) \leq \lambda_{2k + 1}(p;\Omega) &\leq \tau_k(p;\Omega).
\end{align}
\end{thm}
\medskip
Theorem \ref{thm:main1} implies that, if $\lambda_2(p;\Omega)=\tau_1(p;\Omega)$, then the second eigenvalue has multiplicity at least $N$. It is also worth emphasizing that the inequalities \eqref{eq:ln+1<t2} \textit{do not} imply that eigenfunctions associated to $\lambda_3(p;B^N), \dots, \lambda_{N+1}(p;B^N)$ are nonradial. Indeed, to the best of our knowledge, the inequality $\tau_1(p;B^N) < \mu_2(p;B^N)$ has not yet been proved for general $p > 1$ and $N \geq 3$. Nevertheless, in the planar case, the results of \cite{benediktdrabekgirg} and \cite{bobkovdrabek} allow us to characterize Theorem~\ref{thm:main1} in a more precise way.
For visual simplicity we denote
$$
\lambda_\ominus(p):= \tau_1(p;B^2),
\quad
\lambda_\oplus(p):= \tau_2(p;B^2),
\quad
\lambda_\circledcirc(p) := \mu_2(p;B^2).
$$
Recall that if $p=2$, then
\begin{equation*}
\lambda_2(2;B^2)=\lambda_3(2;B^2)=\lambda_\ominus(2)
~<~
\lambda_4(2;B^2)=\lambda_5(2;B^2)=\lambda_\oplus(2)
~<~
\lambda_6(2;B^2)=\lambda_\circledcirc(2).
\end{equation*}
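In terms of Bessel zeros, $\lambda_\ominus(2) = j_{1,1}^2 \approx 14.68$, $\lambda_\oplus(2) = j_{2,1}^2 \approx 26.37$ and $\lambda_\circledcirc(2) = j_{0,2}^2 \approx 30.47$; the chain above is easily verified numerically (an illustrative check, assuming SciPy):

```python
from scipy.special import jn_zeros

lam_half    = jn_zeros(1, 1)[0]**2   # lambda_ominus(2)     = j_{1,1}^2 ~ 14.68
lam_quarter = jn_zeros(2, 1)[0]**2   # lambda_oplus(2)      = j_{2,1}^2 ~ 26.37
lam_radial  = jn_zeros(0, 2)[1]**2   # lambda_circledcirc(2) = j_{0,2}^2 ~ 30.47
assert lam_half < lam_quarter < lam_radial
```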
For $p>1$ we have the following result.
\medskip
\begin{prop}\label{prof:detalization_of_main_theorem_in_2D_case}
Let $N=2$. Then for every $p>1$ it holds
\begin{equation}\label{eq:detalization_of_l2k+1tk_in_2D_case}
\lambda_2(p;B^2) \leq \lambda_3(p;B^2) \leq \lambda_\ominus(p) < \lambda_\circledcirc(p),
\end{equation}
that is, any third eigenfunction on the disc is not radially symmetric.
Moreover, there exists $p_1>1$ such that
\begin{equation}\label{eq:l4<l5<lrad}
\lambda_4(p;B^2) \leq \lambda_5(p;B^2) \leq \lambda_\oplus(p) < \lambda_\circledcirc(p)
\quad \text{ for all } p > p_1,
\end{equation}
that is, fourth and fifth eigenfunctions on the disc are also not radially symmetric for $p>p_1$.
\end{prop}
Note that the last inequality in \eqref{eq:l4<l5<lrad} is reversed for $p$ close to $1$, see~\cite[Theorem~1.3]{bobkovdrabek}.
\medskip
Consider now a bounded domain $\Omega \subset \mathbb{R}^N$ which is invariant under rotation of $N-l$ variables for some $l \in \{1,\dots,N-1\}$, see the definition \eqref{eq:domain_of_revolution} below. Analogously to the case of the $N$-ball, it is possible to define symmetric eigenvalues $\tau_k(p;\Omega)$ of the $p$-Laplacian on $\Omega$ for any $k \in \mathbb{N}$, see Section~\ref{sec:Eigenvalue_auxiliary_facts} below. Similarly to Theorem~\ref{thm:main1}, we have the following facts.
\medskip
\begin{prop}\label{prop:main2}
Let $\Omega \subset {\mathbb R}^N$ be a bounded domain of $N-l$ revolutions defined by \eqref{eq:domain_of_revolution}, where $N \geq 2$ and $l \in \{1,\dots,N-1\}$. Let $p>1$ and $k \geq 1$.
Then the following inequalities are satisfied:
\begin{align}
\label{eq:ln+1<t2_excentric}
\lambda_2(p;\Omega) \leq \dots \leq \lambda_{N-l+2}(p;\Omega) &\leq \tau_1(p;\Omega);\\
\label{eq:l2k+1tk1_excentric}
\lambda_{2k}(p;\Omega) \leq \lambda_{2k + 1}(p;\Omega) &\leq \tau_k(p;\Omega).
\end{align}
\end{prop}
\medskip
The article is organized as follows.
In Section~\ref{sec:HomAlg}, we recall some facts from Algebraic Topology and prove necessary technical statements.
Section~\ref{sec:Eigenvalue_auxiliary_facts} is mainly devoted to the construction of symmetric eigenvalues on domains of revolution.
Section~\ref{sec:proofs} contains the proofs of the main results.
Finally, in Section~\ref{sec:open_problems}, we discuss the limit cases $p=1$ and $p=\infty$ and some naturally arising open problems.
\section{Preliminaries}\label{sec:preliminaries}
\subsection{Some algebraic topological results}\label{sec:HomAlg}
Recall first that a subset $X$ of a topological vector space is \textit{symmetric} if it is invariant under the central symmetry map $\iota$ defined as $\iota(x)=-x$. A map $f$ between symmetric sets is called \emph{odd} if $f \circ \iota = \iota \circ f$, and it will be called \emph{even} if $f\circ\iota=f$.
In the following, we assume all maps to be continuous.
Let us denote by $H_k(X)$ the $k^\textrm{th}$ homology group (over ${\mathbb Z}$) of a manifold $X$ (cf.~\cite[Chapter~2]{hatcher}).
We say that a manifold is an \emph{$n$-manifold} (with $n \in \mathbb{N}$) if it is an oriented closed $n$-dimensional manifold.
If $X$ is an $n$-manifold, then it can be shown that $H_n(X) \cong {\mathbb Z}$ \cite[Theorem~3.26]{hatcher} with a preferred generator given by the orientation of $X$. Moreover, by functoriality, any map $f:X\to Y$ induces linear maps $f_k:H_k(X)\to H_k(Y)$ for each $k\in{\mathbb N}$. When both $X$ and $Y$ are $n$-manifolds, the \emph{degree of the map $f$} is defined as the image under $f_n$ of the preferred generator of $H_n(X)$ in $H_n(Y)\cong{\mathbb Z}$, and it is denoted by ${\textrm{deg}}(f)$. It follows directly from the definitions that if $f:X\to Y$ and $g:Y \to Z$ are two continuous maps between $n$-manifolds, then ${\textrm{deg}}(g \circ f) = {\textrm{deg}}(g) {\textrm{deg}}(f)$. Moreover, two \emph{homotopic} maps, that is, two maps connected by a continuous path of maps, have the same degree since they induce the same map on homology; see \cite[Theorem 2.10]{hatcher} and point (c) in \cite[p.134]{hatcher}.
The following result is known as \textit{Borsuk's Theorem}; it was proved in \cite[Hilfssatz~6]{borsuk}.
An English-language proof can be found in \cite[Proposition~2B.6]{hatcher}.
\begin{thm}\label{thm:Borsuk}
Any odd map $f:S^n\to S^n$ has an odd degree.
\end{thm}
\begin{rem}\label{rem:Borsuk_Ulam_classic}
Borsuk's Theorem implies the classical Borsuk-Ulam Theorem which states that there
is no odd map from a sphere into a sphere of strictly lower dimension.
\end{rem}
The following proposition is considered well-known in the literature, see, e.g., \cite[Exercise 14, p.~156]{hatcher}.
\begin{prop}\label{prop:EvenBorsuk}
Any even map $f:S^n\to S^n$ has an even degree.
\end{prop}
The following lemma, which will be crucial for our arguments, is a consequence of Borsuk's Theorem.
\begin{lem}\label{lem:ObstructionLemma}
Let $X$ be a symmetric subset of a topological space. Suppose that there is a map $f:S^n\times [0,1]\to X$
such that $f_{|S^n\times\{0\}}$ is odd, and either of the following conditions is satisfied:
\begin{enumerate}
\item[(a)] $f_{|S^n\times\{1\}}$ is even;
\item[(b)] $f_{|S^n\times\{1\}}$ is equal to $f_{|S^n\times\{0\}}\circ g$, where $g:S^n\to S^n$ is a map such that ${\textrm{deg}}(g)\neq 1$.
\end{enumerate}
Then there is no odd map from $X$ to $S^k$ for $k\leq n$.
\end{lem}
\begin{proof}
Assume, by contradiction, that there exists an odd map $h:X\to S^k$ for some $k\leq n$.
By considering $S^k$ as an iterated equator of
$S^n$, $h$ can be promoted to an odd map $h:X\to S^n$.
Since $\big(t\mapsto h\circ f_{|S^n\times\{t\}}\big)$ is a continuous path of maps from $h\circ f_{|S^n\times\{0\}}$ to $h\circ f_{|S^n\times\{1\}}$, these two maps are homotopic and hence have the same degree $d$.
Moreover, since $h\circ f_{|S^n\times\{0\}}:S^n\to S^n$ is an odd map, it follows from
Theorem \ref{thm:Borsuk} that $d$ is odd. Now we distinguish the two cases:
\begin{enumerate}
\item[(i)] Under assumption $(a)$, if $f_{|S^n\times\{1\}}$ is even, then so
is $h\circ f_{|S^n\times\{1\}}:S^n\to S^n$ and hence $d$ is even by Proposition \ref{prop:EvenBorsuk}.
\item[(ii)] Under assumption $(b)$, we use the multiplicativity of the degree to get
$$
d={\textrm{deg}}(h\circ f_{|S^n\times\{1\}})={\textrm{deg}}(h\circ f_{|S^n\times\{0\}}\circ g)={\textrm{deg}}(h\circ
f_{|S^n\times\{0\}}){\textrm{deg}}(g)=d \cdot {\textrm{deg}}(g) \neq d,
$$
since ${\textrm{deg}}(g)\neq 1$ by assumption, and $d\neq 0$ since it is odd.
\end{enumerate}
In both cases, we get a contradiction, and hence the lemma follows.
\end{proof}
\begin{rem}
It is possible to obtain a weaker result by using the classical Borsuk-Ulam Theorem, without any assumptions on $f_{|S^n\times\{1\}}$. In this case, one can only prove nonexistence of odd maps from $X$ to $S^k$ for $k \leq n-1$.
\end{rem}
To be applied, Lemma \ref{lem:ObstructionLemma} requires an evaluation of the degree of the map $g$. We now address a very elementary example that will be useful in the proof of Proposition \ref{prop:first_inequality} below.
For that purpose, we consider the permutation map $\tau:S^n\to S^n$ defined by
$\tau(x_1,x_2,\ldots,x_{n+1})=(x_{n+1},x_1,\ldots,x_n)$.
\begin{lem}\label{lem:Degrees}
The map $\tau$ has degree $(-1)^n$.
\end{lem}
\begin{proof}
As auxiliary maps, we define $\rho_1$ as the reflection in the first coordinate, and $\theta_i$ as the rotation by the angle $\frac\pi2$ in the oriented plane generated by the $i^\textrm{th}$ and the $(i+1)^\textrm{th}$ coordinates. More explicitly, we have $\rho_1(x_1,x_2,\ldots,x_{n+1}) = (-x_1,x_2,\ldots,x_{n+1})$ and
$$
\theta_i(x_1,\ldots,x_{i-1},x_i,x_{i+1},x_{i+2}\ldots,x_{n+1})=(x_1,\ldots,x_{i-1},-x_{i+1},x_i,x_{i+2},\ldots,x_{n+1}).
$$
It is then directly computed that
$$
\tau=\left\{
\begin{array}{ll}
\theta_1\circ\cdots\circ\theta_{n}&\textrm{for $n$ even},\\
\rho_1\circ\theta_1\circ\cdots\circ\theta_{n}&\textrm{for $n$ odd}.
\end{array}\right.
$$
It is easily seen that ${\textrm{deg}}(\rho_1)=-1$, cf.\ \cite[Section 2.2, Property (e), p.~134]{hatcher}. Moreover, every rotation is path-connected to the identity map and hence has degree $1$, since homotopic maps have the same degree. Combined with the multiplicativity of the degree, this proves the statement.
\end{proof}
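Since $\tau$ is the restriction to $S^n$ of an orthogonal linear map, its degree coincides with the sign of the determinant of the corresponding permutation matrix. A quick numerical cross-check of Lemma~\ref{lem:Degrees} along these lines (illustrative only, not part of the proof; assuming NumPy):

```python
import numpy as np

# tau is the restriction to S^n of the cyclic-shift matrix A with
# (Ax)_1 = x_{n+1} and (Ax)_i = x_{i-1}; for orthogonal A, the degree of
# A|_{S^n} equals sign(det A), and det A = (-1)^n for the cyclic shift.
for n in range(1, 8):
    A = np.roll(np.eye(n + 1), 1, axis=0)   # rows e_{n+1}, e_1, ..., e_n
    assert round(np.linalg.det(A)) == (-1) ** n
```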
\subsection{The eigenvalue problem}\label{sec:Eigenvalue_auxiliary_facts}
First we give the following well-known fact.
\begin{lem}\label{lem:Krasnoselskii_by_scaling}
Let $w \in W_0^{1,p}(\Omega)$ be such that $w = w_1 +\dots+ w_k$, where $w_i$ and $w_j$ have disjoint supports for $i \neq j$ and each $w_i \in \mathcal{S}_p$.
Then
$$
\mathcal{C}_k :=
\biggl\{
\sum_{i=1}^{k} \alpha_i w_i\,\big|\, \sum_{i=1}^{k} |\alpha_i|^p = 1
\biggr\} \subset \mathcal{S}_p,
$$
$\mathcal{C}_k$ is symmetric and compact, and $\gamma(\mathcal{C}_k) = k$.
Moreover,
$$
\max_{u \in \mathcal{C}_k} \int_{\Omega} |\nabla u|^p \, dx
\leq
\max\Bigl\{\int_{\Omega} |\nabla w_1|^p \, dx, \dots, \int_{\Omega} |\nabla w_k|^p \, dx\Bigr\}.
$$
In particular, if $w$ is an eigenfunction of the $p$-Laplacian on $\Omega$ associated to an eigenvalue $\lambda$, and $w$ has at least $k$ nodal domains, then
$$
\lambda_k(p;\Omega) \leq \max\limits_{u \in \mathcal{C}_k} \int_{\Omega} |\nabla u|^p \, dx \leq \lambda.
$$
\end{lem}
\begin{proof}
Since all the statements are trivial, we will prove, for the sake of completeness, only that $\gamma(\mathcal{C}_k) = k$; see~\cite[Proposition~7.7]{rabinowitz}. Note first that there exists an odd homeomorphism $f$ between $\mathcal{C}_k$ and $S^{k-1}$ given by
$$
f\biggl(\sum_{i=1}^{k} \alpha_i w_i\biggr) = \left(|\alpha_1|^{\frac{p}{2}-1}\alpha_1,\dots,|\alpha_k|^{\frac{p}{2}-1}\alpha_k\right).
$$
This implies that $\gamma(\mathcal{C}_k) \leq k$. If we suppose that $\gamma(\mathcal{C}_k) = n < k$, then there exists a continuous odd map $g:\mathcal{C}_k \to S^{n-1}$. However, the composition $g \circ f^{-1}$ is odd and maps $S^{k-1}$ into $S^{n-1}$ which contradicts the classical Borsuk-Ulam Theorem, cf.\ Remark~\ref{rem:Borsuk_Ulam_classic}.
Thus, $\gamma(\mathcal{C}_k) = k$.
\end{proof}
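The energy identity behind Lemma~\ref{lem:Krasnoselskii_by_scaling}, namely $\int_\Omega |\nabla(\sum_i \alpha_i w_i)|^p\,dx = \sum_i |\alpha_i|^p \int_\Omega |\nabla w_i|^p\,dx$ for disjointly supported $w_i \in \mathcal{S}_p$, can be sketched numerically in one dimension; the bump functions below are hypothetical illustrations (assuming NumPy):

```python
import numpy as np

p = 3.0
x = np.linspace(0.0, 1.0, 2001)
h = x[1] - x[0]

def energy(u):          # approximates int_0^1 |u'|^p dx
    return np.sum(np.abs(np.diff(u) / h) ** p) * h

def lp_norm(u):         # approximates the L^p norm of u
    return (np.sum(np.abs(u) ** p) * h) ** (1.0 / p)

# two bumps with disjoint supports, normalized in L^p
w1 = np.where((x > 0.1) & (x < 0.4), np.sin(np.pi * (x - 0.1) / 0.3), 0.0)
w2 = np.where((x > 0.6) & (x < 0.9), np.sin(np.pi * (x - 0.6) / 0.3), 0.0)
w1, w2 = w1 / lp_norm(w1), w2 / lp_norm(w2)

a1 = 0.7
a2 = (1.0 - a1 ** p) ** (1.0 / p)           # so that |a1|^p + |a2|^p = 1
u = a1 * w1 + a2 * w2

# the p-energy splits across the disjoint supports ...
assert np.isclose(energy(u), a1**p * energy(w1) + a2**p * energy(w2))
# ... and hence is bounded by the largest individual energy
assert energy(u) <= max(energy(w1), energy(w2)) + 1e-9
```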
Now we generalize the construction of eigenvalues $\tau_k(p;B^N)$ and corresponding symmetric eigenfunctions given in \cite{anoopdrabeksasi} to domains of revolution.
Let us introduce the usual spherical coordinates in $\mathbb{R}^N$:
\begin{align*}
x_1 &= r \cos \theta_1,\\
x_2 &= r \sin \theta_1 \cos \theta_2,\\
&\cdots \\
x_{N-1} &= r \sin \theta_1 \sin \theta_2 \dots \sin \theta_{N-2} \cos \theta_{N-1},\\
x_{N} &= r \sin \theta_1 \sin \theta_2 \dots \sin \theta_{N-2} \sin \theta_{N-1},
\end{align*}
where $r \in [0,+\infty)$, $(\theta_1, \dots, \theta_{N-2}) \in [0, \pi]^{N-2}$ and $\theta_{N-1} \in [0, 2\pi)$.
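For concreteness, the coordinate map above can be written down directly; the following minimal sketch (assuming NumPy; the function name is ours) also checks the identity $|x| = r$:

```python
import numpy as np

def spherical_to_cartesian(r, thetas):
    # thetas = (theta_1, ..., theta_{N-1}); implements the formulas above:
    # x_i = r sin(theta_1) ... sin(theta_{i-1}) cos(theta_i),
    # with the last coordinate ending in sin(theta_{N-1}).
    N = len(thetas) + 1
    x = np.empty(N)
    s = float(r)
    for i, th in enumerate(thetas):
        x[i] = s * np.cos(th)
        s *= np.sin(th)
    x[N - 1] = s
    return x

rng = np.random.default_rng(1)
thetas = np.append(rng.uniform(0, np.pi, 3), rng.uniform(0, 2 * np.pi))  # N = 5
x = spherical_to_cartesian(2.5, thetas)
assert np.isclose(np.linalg.norm(x), 2.5)
```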
We say that $\Omega \subset \mathbb{R}^N$, $N \geq 2$, is a bounded \textit{domain of $N-l$ revolutions}, if $\Omega$ is a bounded domain and there exists a set $\mathcal{O} \subset [0,+\infty) \times [0, \pi]^{l-1}$ with $l \in \{1, \dots, N-1\}$ such that
\begin{equation}\label{eq:domain_of_revolution}
\Omega =
\Bigl\{
x \in \mathbb{R}^N \,\big|\, (r,\theta_1,\dots,\theta_{l-1}) \in \mathcal{O},~ (\theta_{l},\dots,\theta_{N-2}) \in [0,\pi]^{N-l-1}, ~ \theta_{N-1} \in [0, 2\pi)
\Bigr\}.
\end{equation}
Note that the latter two constraints describe a unit sphere $S^{N-l}$.
Moreover, if $l=1$, then $\Omega$ is radially symmetric.
For any $k \in \mathbb{N}$ consider $2k$ wedges of $\Omega$ defined as (cf.\ Figure~\ref{Fig})
\begin{equation}\label{eq:spherical_wedge}
\mathcal{W}_i(k) :=
\Bigl\{
x \in \Omega \,\big|\, \frac{(i-1) \pi}{k} < \theta_{N-1} < \frac{i \pi}{k}
\Bigr\},
\quad
i \in \{1, \dots, 2k\}.
\end{equation}
\begin{figure}[!h]
\centering
\includegraphics[width=0.4\linewidth]{mandarin4}
\caption{Partitioning of an ellipsoid $\Omega \subset \mathbb{R}^3$ into eight wedges $\mathcal{W}_1(8), \dots, \mathcal{W}_8(8)$.
(The drawing is based on \cite{trzeciak}.)}
\label{Fig}
\end{figure}
Let $v \in W_0^{1,p}(\mathcal{W}_1(k))$ be a first eigenfunction of the $p$-Laplacian on $\mathcal{W}_1(k)$ and $\lambda_1(p;\mathcal{W}_1(k))$ be the associated first eigenvalue.
Hereinafter, we assume that $v$ is extended by zero outside of its support.
We define
\begin{equation}\label{definitiontau}
\tau_k(p;\Omega) := \lambda_1(p;\mathcal{W}_1(k)).
\end{equation}
Let $R_\omega(x)$ be the rotation of $x \in \mathbb{R}^N$ by the angle $\omega \in {\mathbb R}$ with respect to $\theta_{N-1}$, that is,
$$
R_\omega(x) = (x_1,\dots,x_{N-2}, r \sin \theta_1 \dots \sin \theta_{N-2} \cos (\theta_{N-1}+\omega), r \sin \theta_1 \dots \sin \theta_{N-2} \sin (\theta_{N-1}+\omega)).
$$
Denote by $v_\omega \in W_0^{1,p}(R_\omega(\mathcal{W}_1(k)))$ the corresponding rotation of $v$, that is,
\begin{equation}\label{eq:rotation}
v_\omega(x) = v(R_{-\omega}(x))
\quad \text{for all } x \in R_\omega(\mathcal{W}_1(k)).
\end{equation}
Consider the function $\Psi_k \in W_0^{1,p}(\Omega)$ given by
\begin{equation}\label{eq:Psi_k}
\Psi_k = v - v_{\frac{\pi}{k}} + v_{\frac{2\pi}{k}} - \dots - v_{\frac{(2k-1)\pi}{k}} \equiv \sum\limits_{i=1}^{2k} (-1)^{i-1} v_{\frac{(i-1) \pi}{k}}.
\end{equation}
\begin{lem}\label{lem:symmetric_eigenvalue}
$\Psi_k$ is an eigenfunction of the $p$-Laplacian on $\Omega$ associated to the eigenvalue $\tau_k(p;\Omega)$.
\end{lem}
\begin{proof}
Note that $R_{\frac{i\pi}{k}}(\mathcal{W}_j(k))=\mathcal{W}_{m}(k)$, where $i \in \mathbb{N}$, $j, m \in \{1,\dots,2k\}$ and $m \equiv j+i \ \ (\text{mod } 2k)$.
Moreover, if we denote by $\sigma_{H_i}(\mathcal{W}_j(k))$ the reflection of $\mathcal{W}_j(k)$ with respect to the hyperplane $H_i := \{x \in \mathbb{R}^N\,\big|\, \theta_{N-1} = \frac{i\pi}{k}\}$, then it is not hard to see that $\sigma_{H_i}(\mathcal{W}_j(k)) = \mathcal{W}_{s}(k)$, where $i \in \mathbb{N}$, $j, s \in \{1,\dots,2k\}$ and $s \equiv 2i-j+1 \ \ (\text{mod } 2k)$.
At the same time, since the $p$-Laplacian is invariant under orthogonal changes of variables, we obtain that the rotation $v_{\frac{\pi}{k}}$ of $v$ is a first eigenfunction of the $p$-Laplacian on $\mathcal{W}_2(k)$. Analogously, if $w$ is a reflection of $v$ with respect to the hyperplane $H_1$, then $w$ is also a first eigenfunction on $\mathcal{W}_2(k)$. Since the first eigenvalue is simple, we conclude that $w \equiv v_{\frac{\pi}{k}}$. Now, the proof of \cite[Theorem~1.2]{anoopdrabeksasi} based on reflection arguments can be applied with no changes to conclude the desired fact.
\end{proof}
\begin{rem}
Let $(\Psi_k)_\omega$ be obtained by rotating $\Psi_k$ by the angle $\omega \in {\mathbb R}$ with respect to $\theta_{N-1}$, see~\eqref{eq:rotation}. Since the $p$-Laplacian and $\Omega$ are invariant under such rotations, we see that $(\Psi_k)_\omega$ is also an eigenfunction associated to $\tau_k(p;\Omega)$.
\end{rem}
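For $p=2$ and $\Omega = B^2$, the construction \eqref{eq:Psi_k} can be made fully explicit: the wedge eigenfunction is $v(r,\theta) = J_k(j_{k,1}r)\sin(k\theta)$ on $\mathcal{W}_1(k)$, and the alternating sum of its rotations recovers the global eigenfunction $J_k(j_{k,1}r)\sin(k\theta)$. A numerical sanity check of this identity (illustrative only, assuming SciPy and NumPy):

```python
import numpy as np
from scipy.special import jv, jn_zeros

k = 3
jk1 = jn_zeros(k, 1)[0]          # first positive zero of J_k

def v(r, th):
    # first eigenfunction of the wedge 0 < theta < pi/k, extended by zero
    th = np.mod(th, 2 * np.pi)
    return np.where((0 < th) & (th < np.pi / k),
                    jv(k, jk1 * r) * np.sin(k * th), 0.0)

rng = np.random.default_rng(0)
r = rng.uniform(0, 1, 200)
th = rng.uniform(0, 2 * np.pi, 200)

# Psi_k = sum_i (-1)^(i-1) v_{(i-1)pi/k}, where v_w(r, theta) = v(r, theta - w)
Psi = sum((-1) ** i * v(r, th - i * np.pi / k) for i in range(2 * k))
assert np.allclose(Psi, jv(k, jk1 * r) * np.sin(k * th))
```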
\section{Proofs of the main results}\label{sec:proofs}
The proofs of Theorem~\ref{thm:main1} and Propositions \ref{prof:detalization_of_main_theorem_in_2D_case} and \ref{prop:main2} will be achieved in several steps.
First, in Proposition~\ref{prop:first_inequality}, we prove the inequalities \eqref{eq:l2k+1tk1_excentric} of Proposition~\ref{prop:main2}.
The inequalities \eqref{eq:l2k+1tk1} of Theorem~\ref{thm:main1}, being a particular case of \eqref{eq:l2k+1tk1_excentric}, will hence be covered.
Second, in Proposition~\ref{prop:eq:ln+1<t2}, we prove the inequalities \eqref{eq:ln+1<t2} of Theorem~\ref{thm:main1}. The method of proof carries over to the inequalities \eqref{eq:ln+1<t2_excentric} of Proposition~\ref{prop:main2}, see Proposition~\ref{prop:ln+1<t2_excentric}.
Finally, we give the proof of Proposition \ref{prof:detalization_of_main_theorem_in_2D_case}.
\begin{prop}\label{prop:first_inequality}
Let $\Omega \subset \mathbb{R}^N$ be a bounded domain of $N-l$ revolutions defined by \eqref{eq:domain_of_revolution}, where $N \geq 2$ and $l \in \{1,\dots,N-1\}$.
For any $p>1$ and $k \in \mathbb{N}$ it holds
\begin{equation}\label{eq:l2k+1tk}
\lambda_{2k+1}(p;\Omega) \leq \tau_k(p;\Omega).
\end{equation}
\end{prop}
\begin{proof}
Denote by $v$ a first eigenfunction of the $p$-Laplacian on the wedge $\mathcal{W}_1(k)$ defined by~\eqref{eq:spherical_wedge} and assume that $v$ is normalized such that $\|v\|_{L^p(\mathcal{W}_1(k))} = 1$.
Then $v$ generates the eigenfunction $\Psi_k$ of the $p$-Laplacian on $\Omega$, as defined by~\eqref{eq:Psi_k}, associated to the eigenvalue $\tau_k(p;\Omega)$, see Lemma~\ref{lem:symmetric_eigenvalue}. Note that $\Psi_k$ has exactly $2k$ nodal domains.
Consider the set
$$
\mathcal{A} :=
\biggl\{
\sum_{i=1}^{2k} \alpha_i\,v_{\gamma+\frac{(i-1)\pi}{k}}\,\big|\, \sum_{i=1}^{2k} |\alpha_i|^p = 1,\, \gamma \in {\mathbb R}
\biggr\},
$$
where $v_\varphi$ is obtained by rotating $v$ by the angle $\varphi \in {\mathbb R}$ with respect to $\theta_{N-1}$, see~\eqref{eq:rotation}.
It is not hard to see that $\mathcal{A}$ is symmetric, compact and $\mathcal{A} \subset \mathcal{S}_p$.
Consider the continuous map $f:S^{2k-1} \times [0,1] \to \mathcal{A}$ defined by
$$
f\left(\left(|\alpha_1|^{\frac{p}{2}-1}\alpha_1,\ldots,|\alpha_{2k}|^{\frac{p}{2}-1}\alpha_{2k}\right),t\right)= \sum_{i=1}^{2k} \alpha_i \, v_{\frac{t \pi}{k}+ \frac{(i-1)\pi}{k}},
\quad \text{where } \sum_{i=1}^{2k} |\alpha_i|^p = 1.
$$
Then, $f$ clearly satisfies $f_{|S^{2k-1}\times\{0\}}\circ\iota=\iota\circ f_{|S^{2k-1}\times\{0\}}$ and, in view of~\eqref{eq:Psi_k}, $f_{|S^{2k-1}\times\{1\}}=f_{|S^{2k-1}\times\{0\}}\circ \tau$, where $\iota$ and $\tau$ are defined in Section \ref{sec:HomAlg}. Therefore, it follows from assertion $(b)$ of Lemma \ref{lem:ObstructionLemma} and Lemma \ref{lem:Degrees} that there is no odd map from $\mathcal{A}$ to $S^{n}$ for any $n \leq 2k-1$, which implies that $\gamma(\mathcal{A}) \geq 2k+1$.
Thus, $\mathcal{A} \in \Gamma_{2k+1}(p)$.
Noting now that for any $u \in \mathcal{A}$ it holds
$$
\int_{\Omega} |\nabla u|^p \, dx = \sum_{i=1}^{2k}
|\alpha_i|^p \int_{\Omega} \left|\nabla v_{\gamma+\frac{(i-1)\pi}{k}}\right|^p dx =
\sum_{i=1}^{2k}
|\alpha_i|^p \, \tau_k(p;\Omega) = \tau_k(p;\Omega),
$$
we conclude the desired inequality:
$$
\lambda_{2k+1}(p;\Omega) \leq \max_{u \in \mathcal{A}} \int_{\Omega} |\nabla u|^p \, dx =
\tau_k(p;\Omega).
$$
\end{proof}
\begin{rem}\label{rem:2k+1_linear}
In the linear case $p=2$, the inequality \eqref{eq:l2k+1tk} can be easily obtained using the Courant--Fischer variational principle \eqref{eq:eigenvalue_linear}.
Indeed, since the Laplacian is rotation invariant and $\Omega$ is a domain of revolution, for any $i \geq 1$ we can find at least two linearly independent symmetric eigenfunctions associated to $\tau_i(2;\Omega)$, one being a rotation of the other. Therefore, taking a first eigenfunction together with two linearly independent eigenfunctions for every $i \in \{1,\dots,k\}$, we produce a $(2k+1)$-dimensional subspace of $W_0^{1,2}(\Omega)$ which leads to the desired inequality via \eqref{eq:eigenvalue_linear}.
Let us also remark that, in view of Pleijel's Theorem, the inequality \eqref{eq:l2k+1tk} is strict for sufficiently large $k \in \mathbb{N}$, see, e.g., \cite{helffer}.
\end{rem}
\begin{rem}\label{remark:connection_between_multiplicity_and_nodal_set}
Let, for simplicity, $N=2$, $\Omega = B^2$ and $k=1$. Assume that there exists a second eigenfunction $\phi$ of the $p$-Laplacian on $\Omega$ which is antisymmetric with respect to the rotation by the angle $\pi$, that is, $\phi_{\pi} = -\phi$. (This happens, for instance, when the nodal set is a diameter or a ``yin-yang''-type curve.) Then the proof of Proposition~\ref{prop:first_inequality} works without changes with $\phi^+$ or $\phi^-$ in place of $v$, which yields $\lambda_2(p;B^2) = \lambda_3(p;B^2)$. Therefore, knowledge of the structure of the nodal sets of higher eigenfunctions plays an important role in our arguments.
\end{rem}
It is of independent interest to prove the inequalities~\eqref{eq:ln+1<t2} of Theorem~\ref{thm:main1} first only up to $\lambda_N(p;\Omega)$, since the proof uses only rotations of $\Psi_1$ to increase the Krasnosel'ski\u{\i} genus.
\begin{prop}\label{prop:ln<t2}
Let $\Omega \subset \mathbb{R}^N$ be a bounded radially symmetric domain, $N \geq 2$.
Then for any $p>1$ it holds
\begin{equation*}\label{eq:lNleqt1}
\lambda_{N}(p;\Omega) \leq \tau_1(p;\Omega).
\end{equation*}
\end{prop}
\begin{proof}
For any $x \in S^{N-1}$ we define
$$
\Omega_x := \{z \in \Omega\,\big|\, \langle z, x \rangle > 0\}.
$$
Denote by $v_x$ the first eigenfunction on $\Omega_x$ such that $v_x > 0$ in $\Omega_x$ and $\|v_x\|_{L^p(\Omega_x)} = 1$, and extend it by zero outside of $\Omega_x$.
Arguing as in Lemma~\ref{lem:symmetric_eigenvalue}, it can be deduced that $\frac{v_x - v_{-x}}{\sqrt[p]{2}}$ is an eigenfunction associated to $\tau_1(p;\Omega)$ for any $x \in S^{N-1}$.
Consider the set
$$
\mathcal{A} := \Bigl\{\frac{v_x - v_{-x}}{\sqrt[p]{2}}\,\big|\, x \in S^{N-1} \Bigr\}.
$$
It is not hard to see that $\mathcal{A}$ is compact. Moreover, $\mathcal{A}$ is evidently symmetric and $\mathcal{A} \subset \mathcal{S}_p$.
Note that $x$ is uniquely determined by the choice of $\frac{v_x-v_{-x}}{\sqrt[p]{2}}$ since $x$ corresponds to the unique unit normal vector of the nodal set which points to the nodal domain $\Omega_x$.
Therefore, taking $h:\mathcal{A} \to S^{N-1}$ defined by $h\left(\frac{v_x - v_{-x}}{\sqrt[p]{2}}\right) = x$, we deduce that $h$ is an odd homeomorphism, and hence $\gamma(\mathcal{A}) \leq N$. If we suppose that $\gamma(\mathcal{A}) < N$, then we get a contradiction as in the proof of Lemma~\ref{lem:Krasnoselskii_by_scaling}. Therefore, $\gamma(\mathcal{A}) = N$ and $\mathcal{A} \in \Gamma_N(p)$, and we conclude as in the proof of Proposition~\ref{prop:first_inequality}.
\end{proof}
To prove the whole chain of inequalities \eqref{eq:ln+1<t2} of Theorem~\ref{thm:main1}, we combine rotations of $\Psi_1$ with the scaling of its nodal components.
\begin{prop}\label{prop:eq:ln+1<t2}
Let $\Omega \subset \mathbb{R}^N$ be a bounded radially symmetric domain, $N \geq 2$.
Then for any $p>1$ it holds
\begin{equation*}
\lambda_{N+1}(p;\Omega) \leq \tau_1(p;\Omega).
\end{equation*}
\end{prop}
\begin{proof}
Using the notation $v_x$ from Proposition~\ref{prop:ln<t2}, we define the set
$$
\mathcal{A} := \left\{ \alpha_1\,v_x+\alpha_2\,v_{-x}\,\big|\, |\alpha_1|^p + |\alpha_2|^p = 1,\,x \in S^{N-1} \right\}.
$$
As before, $\mathcal{A} \subset \mathcal{S}_p$ and $\mathcal{A}$ is symmetric and compact.
Let $\gamma: [0,1] \to \{z \in \mathbb{R}^2: |z_1|^p + |z_2|^p = 1\}$ be a path from $\Big(\frac{1}{\sqrt[p]{2}},-\frac{1}{\sqrt[p]{2}}\Big)$ to $\Big(\frac{1}{\sqrt[p]{2}},\frac{1}{\sqrt[p]{2}}\Big)$ and denote by $\gamma_1(t)$ and $\gamma_2(t)$ the first and the second component of $\gamma(t)$, respectively.
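For concreteness (this explicit choice is our own; any continuous path on the $p$-circle works equally well), one may take

```latex
\gamma(t) = \frac{(1,\,2t-1)}{\left(1+|2t-1|^p\right)^{1/p}}, \qquad t \in [0,1],
```

which indeed satisfies $\gamma(0)=\left(\frac{1}{\sqrt[p]{2}},-\frac{1}{\sqrt[p]{2}}\right)$ and $\gamma(1)=\left(\frac{1}{\sqrt[p]{2}},\frac{1}{\sqrt[p]{2}}\right)$.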
The continuous map $f:S^{N-1} \times [0,1] \to \mathcal{A}$ defined by $f(x,t)=\gamma_1(t)v_x+\gamma_2(t)v_{-x}$ clearly satisfies $f_{|S^{N-1}\times\{0\}}\circ\iota=\iota\circ f_{|S^{N-1}\times\{0\}}$ and $f_{|S^{N-1}\times\{1\}}\circ\iota=f_{|S^{N-1}\times\{1\}}$, where $\iota$ is defined in Section~\ref{sec:HomAlg}. Then, it follows from assertion $(a)$ of Lemma \ref{lem:ObstructionLemma} that there is no odd map from $\mathcal{A}$ to $S^{n-1}$ for any $n \leq N$, and hence $\gamma(\mathcal{A})\geq N+1$.
Thus $\mathcal{A} \in \Gamma_{N+1}(p)$, and we conclude as in the proof of Proposition~\ref{prop:first_inequality}.
\end{proof}
\begin{cor}
If $\lambda_2(p;\Omega)=\tau_1(p;\Omega)$, then the second eigenvalue has multiplicity at least $N$.
\end{cor}
The inequalities \eqref{eq:ln+1<t2_excentric} of Proposition~\ref{prop:main2} can be proved in much the same way as Proposition~\ref{prop:eq:ln+1<t2}. Let us briefly sketch the proof.
\begin{prop}\label{prop:ln+1<t2_excentric}
Let $\Omega \subset \mathbb{R}^N$ be a bounded domain of $N-l$ revolutions, where $N \geq 2$ and $l \in \{1,\dots,N-1\}$.
Then for any $p>1$ it holds
$$
\lambda_{N-l+2}(p;\Omega) \leq \tau_1(p;\Omega).
$$
\end{prop}
\begin{proof}
Take any $x \in S^{N-l}$ and define a hemisphere
$$
S^{N-l}_x := \{y \in S^{N-l}\,\big|\, \langle x,y \rangle > 0\}.
$$
We parametrize $S^{N-l}_x$ in spherical coordinates by angles $(\theta_l, \dots, \theta_{N-1})$ and define
$$
\Omega_x := \{z \in \Omega\,\big|\, (\theta_l, \dots, \theta_{N-1}) \in S^{N-l}_x\}.
$$
Denote by $v_x$ the first eigenfunction on $\Omega_x$ such that $v_x > 0$ in $\Omega_x$ and $\|v_x\|_{L^p(\Omega_x)} = 1$.
In view of the symmetries of $\Omega$ (see \eqref{eq:domain_of_revolution}), it is not hard to show that $v_x$ is associated to the eigenvalue $\lambda = \tau_1(p;\Omega)$ for any $x \in S^{N-l}$. Consider the set
$$
\mathcal{A} := \{ \alpha_1\,v_x+\alpha_2\,v_{-x}\,\big|\, |\alpha_1|^p + |\alpha_2|^p =
1,\,x \in S^{N-l} \}.
$$
The rest of the proof goes along the same lines as in Proposition~\ref{prop:eq:ln+1<t2}.
\end{proof}
\bigskip
\noindent
\textbf{Proof of Proposition~\ref{prof:detalization_of_main_theorem_in_2D_case}.}
1) In view of \eqref{eq:ln+1<t2} with $N=2$, to justify \eqref{eq:detalization_of_l2k+1tk_in_2D_case} it is sufficient to show that
$$
\lambda_\ominus(p) < \lambda_\circledcirc(p) \quad \text{for any } p>1.
$$
This fact was fully proved in \cite{benediktdrabekgirg}, although the case $p \in (1,1.01)$ is not explicitly stated in the text.
For the sake of completeness, we collect the arguments from \cite{benediktdrabekgirg} to explain the proof.
Denote by $B^+$ a half-disc of the unit disc $B^2$. By definition we have $\lambda_\ominus(p) = \lambda_1(p;B^+)$. Translation invariance of the $p$-Laplacian and the strict domain monotonicity of its first eigenvalue (cf.\ \cite[Proposition~4]{benediktdrabekgirg}) imply that $\lambda_\ominus(p) < \lambda_1(p;B^2_{1/2})$, where $B^2_{1/2}$ is a disc of radius $1/2$.
On the other hand, it is known that $\lambda_\circledcirc(p) = \lambda_1\bigl(p;B^2_{\nu_1(p)/\nu_2(p)}\bigr)$, where $B^2_{\nu_1(p)/\nu_2(p)}$ is a disc of radius $\nu_1(p)/\nu_2(p)$, and $\nu_1(p)$, $\nu_2(p)$ are the first two positive roots of a (unique) solution of the Cauchy problem
\begin{equation}\label{eq:radial}
\left\{
\begin{aligned}
&-(r|u'|^{p-2}u')' = r |u|^{p-2}u \quad \text{in } (0, +\infty),\\
&u(0) = 1, \quad u'(0)=0,
\end{aligned}
\right.
\end{equation}
see \cite[Lemmas~5.2 and 5.3]{delpinomanasevich}.
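For $p=2$ the problem \eqref{eq:radial} becomes $u''+u'/r+u=0$, $u(0)=1$, $u'(0)=0$, whose solution is the Bessel function $J_0$, so $\nu_1(2)$, $\nu_2(2)$ are the first two positive zeros of $J_0$. The following self-contained sketch (our own numerical illustration, not part of the proof; the integration window and step size are ad hoc choices) checks the inequality $2\nu_1<\nu_2$ in this linear case.

```python
# Numerical sanity check of 2*nu_1 < nu_2 in the linear case p = 2,
# where the Cauchy problem reduces to u'' + u'/r + u = 0, u(0) = 1,
# u'(0) = 0, solved by the Bessel function J_0.  We integrate with a
# classical RK4 scheme and locate the first two sign changes of u.

def first_two_zeros(r_max=7.0, h=1e-3):
    # Start slightly off r = 0 via the Taylor expansion of J_0
    # to avoid the singular coefficient u'/r at the origin.
    r = 1e-4
    u = 1.0 - r * r / 4.0   # J_0(r) ~ 1 - r^2/4
    v = -r / 2.0            # J_0'(r) ~ -r/2

    def rhs(r, u, v):
        return v, -v / r - u  # (u', u'')

    zeros = []
    while r < r_max and len(zeros) < 2:
        k1u, k1v = rhs(r, u, v)
        k2u, k2v = rhs(r + h / 2, u + h / 2 * k1u, v + h / 2 * k1v)
        k3u, k3v = rhs(r + h / 2, u + h / 2 * k2u, v + h / 2 * k2v)
        k4u, k4v = rhs(r + h, u + h * k3u, v + h * k3v)
        u_new = u + h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v_new = v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        if u * u_new < 0:  # sign change: interpolate the zero linearly
            zeros.append(r + h * u / (u - u_new))
        r, u, v = r + h, u_new, v_new
    return zeros

nu1, nu2 = first_two_zeros()
print(nu1, nu2)      # close to the Bessel zeros 2.40483 and 5.52008
assert 2 * nu1 < nu2
```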
Therefore, if the inequality
\begin{equation}\label{eq:2nu1<nu2}
2\nu_1(p)<\nu_2(p)
\end{equation}
holds for all $p>1$, then the strict domain monotonicity yields the desired conclusion:
$$
\lambda_\ominus(p) < \lambda_1(p;B^2_{1/2}) <
\lambda_1\bigl(p;B^2_{\nu_1(p)/\nu_2(p)}\bigr)
= \lambda_\circledcirc(p).
$$
The inequality \eqref{eq:2nu1<nu2} is, in fact, the main objective of \cite{benediktdrabekgirg}.
In the interval $p \in [1.01, 226]$, \eqref{eq:2nu1<nu2} was proved in \cite[Proposition~7]{benediktdrabekgirg} via a self-validated numerical integration of \eqref{eq:radial}. For $p > 226$, \eqref{eq:2nu1<nu2} was proved in \cite[Proposition~13]{benediktdrabekgirg} by obtaining analytical bounds for $\nu_1(p)$ and $\nu_2(p)$. In the remaining case $p \in (1, 1.01)$, it was shown that $\lambda_\ominus(p) \leq 3.5$, see the proof of \cite[Proposition~6]{benediktdrabekgirg}. This fact was enough to apply the proof of \cite[Theorem~6.1]{parini} and to obtain the nonradiality of the second eigenfunction.
However, as a byproduct of the proof of \cite[Theorem~6.1]{parini}, we know also that $\lambda_\circledcirc(p) > 3.5$ for $p \in (1, 1.1)$, which yields $\lambda_\ominus(p) < \lambda_\circledcirc(p)$ for $p \in (1, 1.01)$.
Thus, summarizing the above facts, we conclude that $\lambda_\ominus(p) < \lambda_\circledcirc(p)$ for all $p>1$.
2) The first two inequalities in \eqref{eq:l4<l5<lrad} follow from \eqref{eq:l2k+1tk1} by taking $k=2$. The last inequality in \eqref{eq:l4<l5<lrad} was proved in \cite[Theorem~1.2]{bobkovdrabek}.
\qed
\section{Final remarks and open questions}\label{sec:open_problems}
The results of this paper can also be applied to the singular case $p=1$, which must be treated separately. In \cite{littig} the authors defined a sequence of variational eigenvalues and proved that they can be approximated by the corresponding eigenvalues of the $p$-Laplacian as $p \to 1$. The second variational eigenvalue of the $1$-Laplacian can be characterized geometrically, as a consequence of \cite[Theorem 2.4]{littig} and \cite[Theorem 5.5]{parini} (see also \cite{bobkovparini}). In particular, if $\Omega = B^2$ is a disc, it holds $\lambda_2(1;B^2)=\lambda_\ominus(1;B^2)$, and therefore
\[ \lambda_2(1;B^2)=\lambda_3(1;B^2)=\lambda_\ominus(1;B^2)\]
by reasoning as in Proposition \ref{prop:first_inequality}. That is, the second eigenvalue of the $1$-Laplacian on a disc has multiplicity (in the sense of \eqref{eq:multiplicity}) at least $2$.
The limit case $p = \infty$ can also be considered in terms of a geometric characterization of the corresponding first and second eigenvalues.
It is known from \cite{julindman1999} and \cite{julind2005} that
\[
\lim\limits_{p \to \infty} \lambda_1(p;\Omega)^{\frac{1}{p}} = \frac{1}{R_1}
\quad \text{and} \quad
\lim\limits_{p \to \infty} \lambda_2(p;\Omega)^{\frac{1}{p}} = \frac{1}{R_2},
\]
where $R_1$ is the radius of a maximal ball inscribed in $\Omega$, and $R_2$ is the maximal radius of two equiradial disjoint balls inscribed in $\Omega$.
Let $B^N$ be a ball of radius $R$. Then we deduce from \eqref{eq:ln+1<t2} that
$$
\lim_{p \to \infty} \lambda_2(p;B^N)^{\frac{1}{p}} = \dots =
\lim_{p \to \infty} \lambda_{N+1}(p;B^N)^{\frac{1}{p}} =
\lim_{p \to \infty} \tau_1(p;B^N)^{\frac{1}{p}}
\equiv
\lim_{p \to \infty} \lambda_1(p;\mathcal{W}_1(2))^{\frac{1}{p}} = \frac{2}{R}.
$$
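The common value $2/R$ can also be read off directly from the geometric characterization of $\lambda_2$ recalled above: the maximal radius of two disjoint equiradial balls inscribed in $B^N$ is $R_2 = R/2$, whence

```latex
\lim_{p \to \infty} \lambda_2(p;B^N)^{\frac{1}{p}} = \frac{1}{R_2} = \frac{2}{R}.
```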
\medskip
We are left with several open problems.
\begin{enumerate}
\item By analogy with the linear case, it would be interesting to show the optimality of \eqref{eq:ln+1<t2}, namely to decide whether the inequality \[\tau_1(p;\Omega)<\lambda_{N+2}(p;\Omega),\] where $\Omega$ is a radially symmetric bounded domain, holds true.
\item To prove \eqref{eq:l2k+1tk1} we used the scaling of nodal components of symmetric eigenfunctions corresponding to $\tau_k(p;\Omega)$ together with their rotation with respect to the angle $\theta_{N-1}$. However, it is not hard to see that for $N \geq 3$, symmetric eigenfunctions can also be rotated with respect to all the angles $\theta_i$, where $i \in \{1,\dots,N-1\}$ if $\Omega$ is radial, and $i \in \{2,\dots,N-1\}$ if $\Omega$ is a general domain of revolution. This observation leads to the conjecture that for every $k \geq 1$ there exists $j \geq 2$ such that
$$
\lambda_{2k}(p;\Omega) \leq \dots \leq \lambda_{2k+j}(p;\Omega) \leq \tau_k(p;\Omega).
$$
The proof might be achieved by showing the nonexistence of maps $S^{n_1} \times S^{n_2} \to S^{m}$, for suitable $n_1$, $n_2$, $m \in {\mathbb N}$, which are odd in the first variable (corresponding to the normalization constraint) and satisfy some additional conditions given by symmetries of eigenfunctions.
\item In the spirit of the previous question, it is natural to study a generalization of \eqref{eq:l2k+1tk1} where the upper bound is given by eigenvalues whose associated eigenfunctions are invariant under the action of other symmetry groups.
\item Is it possible to obtain multiplicity results for domains $\Omega$ which satisfy different symmetry properties, for instance if $\Omega$ is a square? In this case, on the one hand, numerical evidence \cite{yaozhou} supports the conjecture that $\lambda_2(p;\Omega)<\lambda_3(p;\Omega)$ if $p\neq 2$, unlike the linear case where equality trivially holds.
On the other hand, if the nodal set of a second eigenfunction $\varphi_{2,p}$ is a middle line or a diagonal of the square, as indicated again in \cite{yaozhou}, then there is another second eigenfunction linearly independent with $\varphi_{2,p}$ obtained by rotating $\varphi_{2,p}$ by an angle of $\frac{\pi}{2}$.
\end{enumerate}
\bigskip
\noindent
{\bf Acknowledgments.}
The article was started during a visit of E.P. at the University of West Bohemia and was finished during a visit of V.B. at Aix-Marseille University. The authors wish to thank the hosting institutions for the invitation and the kind hospitality.
V.B. was supported by the project LO1506 of the Czech Ministry of Education, Youth and Sports.
\section{Introduction}
Let $X$ be a smooth projective variety defined over an algebraically closed field $k$
and let $H$ be an ample divisor on $X$. A family ${\mathcal S}$ of isomorphism classes of coherent sheaves on $X$ is called bounded if there exists a scheme $S$ of finite type over $k$ and an $S$-flat family ${\mathcal E}$ of coherent sheaves on the fibers of $X\times S\to S$, which contains isomorphism classes of all sheaves from ${\mathcal S}$. The following theorem is a crucial step in the construction of a projective moduli scheme of semistable sheaves with fixed numerical invariants:
\begin{theorem}\label{main}
The family of isomorphism classes of slope $H$-semistable torsion free coherent sheaves on $X$ with fixed numerical invariants (e.g., with fixed rank and Chern classes) is bounded.
\end{theorem}
Many people contributed to the proof of this result, including D. Gieseker, S. Kleiman, M. Maruyama, F. Takemoto and the author.
The final version in characteristic zero was proven by M. Maruyama using the Grauert--M\"ulich type restriction theorem.
In general, the above theorem was conjectured by M. Maruyama in the late seventies and finally proven in \cite{La1}. It made it possible to finish the construction of the moduli scheme of semistable sheaves on $X$ in case $k$ has positive characteristic $p$. We refer to \cite{HL}, \cite{La1} and \cite{Ma} for more on the history of the problem and earlier results.
The proof of this result in \cite{La1} is rather complicated and uses some purely characteristic $p$ methods through the study of Frobenius pullbacks of torsion free sheaves. Since its appearance, there have been no simplifications or alternative proofs of this result. In this note we give a new simple proof of Theorem \ref{main} that avoids any characteristic $p$ methods. In fact, as in \cite{La1}, we prove a more general result that works in mixed characteristic (see Theorem \ref{boundedness}).
Our proof proceeds by reducing the statement to the case of projective spaces and proving Bogomolov's inequality by induction on the dimension using changes of polarization. The main novelty of this approach is that one can prove Bogomolov's inequality on projective spaces without using any restriction theorems. This in turn allows one to prove restriction theorems and to reduce the problem to lower dimensions. This method is new even in the characteristic zero case. A small variant of this approach allows us to give a new simple proof of Bogomolov's inequality on higher dimensional varieties in characteristic zero without using any restriction theorems (see Theorem \ref{Bog-0}). This inequality implies effective restriction theorems for slope stability and slope semistability, reversing the usual logic and allowing one to dispense with proving the Mehta--Ramanathan restriction theorems.
\section{Preliminaries}
In this section $X$ is a smooth projective $n$-dimensional variety defined over an algebraically closed field $k$. We assume that $n\ge 1$.
\subsection{Semistability}
Let $(L_1,..., L_{n-1})$ be a collection of nef divisors on $X$. Let us assume that the $1$-cycle $L_1...L_{n-1}$ is numerically nontrivial, i.e., there exists some divisor $D$ such that $DL_1...L_{n-1}\ne 0$.
Let $E$ be a rank $r$ torsion free sheaf on $X$. Then we define the \emph{slope of $E$ with respect to} $(L_1,..., L_{n-1})$ as
$$\mu _{L_1...L_{n-1}} (E)= \frac{c_1(E) L_1...L_{n-1}}{r}. $$
We define $\mu _{\max, L_1...L_{n-1}} (E)$ as the maximum of $\mu _{L_1...L_{n-1}} (F)$ for all subsheaves $F\subset E$. Similarly, we define $\mu _{\min, L_1...L_{n-1}} (E)$ as the minimum of $\mu _{L_1...L_{n-1}} (F)$ for all torsion free quotients $F$ of $E$. These are well defined rational numbers.
We say that $E$ is \emph{slope $(L_1,..., L_{n-1})$-semistable} if
$\mu _{L_1...L_{n-1}} (F)\le \mu _{L_1...L_{n-1}} (E)$ for all subsheaves $F\subset E$ of rank less than $r$.
Similarly, we define \emph{slope $(L_1,..., L_{n-1})$-stable} sheaves using the strict inequality
$\mu _{L_1...L_{n-1}} (F)<\mu _{L_1...L_{n-1}} (E)$.
In case $H$ is an ample divisor, we define $\mu_H(E)$, $\mu_{\max , H} (E)$, $\mu_{\min ,H}(E)$ and slope $H$-(semi)stability using the collection of $(n-1)$-divisors $(H, ..., H)$.
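For orientation, here is a standard example (not taken from the text above): on $X={\mathbb P}^n$ with $H$ a hyperplane, the rank two sheaf ${\mathcal O}(a)\oplus{\mathcal O}(b)$ with $a<b$ satisfies

```latex
\mu_H\bigl({\mathcal O}(a)\oplus{\mathcal O}(b)\bigr)=\frac{a+b}{2} < b = \mu_H\bigl({\mathcal O}(b)\bigr),
```

so the subsheaf ${\mathcal O}(b)$ destabilizes and ${\mathcal O}(a)\oplus{\mathcal O}(b)$ is not slope $H$-semistable; for $a=b$ it is slope $H$-semistable but not slope $H$-stable.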
\medskip
If $E$ is a rank $r$ torsion free sheaf on $X$ we define the \emph{discriminant} of $E$ as $\Delta (E)=2rc_2(E)-(r-1)c_1^2(E)$. We say that \emph{Bogomolov's inequality holds for the collection of nef divisors} $(L_1,...,L_{n-1})$ if for every slope $(L_1,..., L_{n-1})$-semistable sheaf $E$ we have $\Delta (E)L_2...L_{n-1}\ge 0$. Note that this notion depends on the order of nef divisors in the collection.
\subsection{The Hodge index theorem}
The Hodge index theorem implies that if $D_1$ and $D_2$ are divisors on a smooth projective surface
and $(a_1D_1+a_2D_2)^2>0$ for some $a_1,a_2\in {\mathbb R}$ then $D_1^2\cdot D_2^2\le (D_1D_2)^2$.
Now if $X$ is a smooth projective variety of dimension $n\ge 2$ we see that if $H_1,..., H_{n-2}$ are ample divisors and $D_1, D_2$ are such that $(a_1D_1+a_2D_2)^2H_1...H_{n-2}>0$ for some $a_1,a_2\in {\mathbb R}$ then $D_1^2H_1...H_{n-2}\cdot D_2^2H_1...H_{n-2}\le (D_1D_2H_1...H_{n-2})^2$. If $H_1,...,H_{n-2}$
are only nef, we can replace $H_i$ by $H_i+tH$ for some ample $H$ and positive $t$. Letting $t$ tend to $0$,
we get the same result as above assuming only that $H_1,...,H_{n-2}$ are nef.
This allows us to prove the following version of the Hodge index theorem.
\begin{lemma}\label{HIT}
Let $(L_1,...,L_{n-1})$ be a collection of nef divisors such that the $1$-cycle $L_1...L_{n-1}$ is numerically nontrivial. Then for any ample divisor $H$ we have $HL_1...L_{n-1}>0$. Moreover, if $DL_1...L_{n-1}=0$
for some divisor $D$ then $D^2L_2...L_{n-1}\le 0$.
\end{lemma}
\begin{proof}
If $L_1...L_{n-1}$ is numerically nontrivial then there exists some divisor $M$ such that $ML_1...L_{n-1}> 0$.
But then there exists some $m>0$ such that $H^0(X, {\mathcal O}_{X}(mH-M ) )\ne 0$. Therefore $mH L_1...L_{n-1}\ge ML_1...L_{n-1}> 0$.
The above version of the Hodge index theorem implies that for any $t>0$ and any ample $H$ we have
$$D^2L_2...L_{n-1}\cdot (L_1+tH)^2L_2...L_{n-1} \le (D(L_1+tH)L_2...L_{n-1} )^2=t^2 (DHL_2...L_{n-1})^2. $$
Letting $t$ tend to $0$, we see that if $L_1^2L_2...L_{n-1}>0$ then $D^2L_2...L_{n-1}\le 0$.
If $L_1^2L_2...L_{n-1}=0$, then dividing the above inequality by $t$ and letting $t$ tend to $0$
again gives $D^2L_2...L_{n-1}\le 0$.
\end{proof}
\section{Boundedness of semistable sheaves}
\subsection{Change of polarization}
The following proposition has a proof similar to that of \cite[Proposition 6.2]{La3}. Since it is crucial
for the arguments that follow, we give full details of the proof for the convenience of the reader.
\begin{proposition}\label{pol-change}
Let $X$ be a smooth projective variety of dimension $n$ and let $(L_1,...,L_{n-1})$ be a collection of nef divisors such that the $1$-cycle $L_1...L_{n-1}$ is numerically nontrivial. If Bogomolov's inequality holds for $(L_1,...,L_{n-1})$ then it holds for any $(M,L_2,...,L_{n-1})$ such that $M$ is nef and $ML_2...L_{n-1}$ is numerically nontrivial.
\end{proposition}
\begin{proof}
The proof is by induction on the rank of $E$ with rank $1$ being left to the reader. Let us assume that Bogomolov's inequality holds for all sheaves of rank less than $r$ which are slope semistable with respect to some $ML_2...L_{n-1}$, where $M$ is nef and $ML_2...L_{n-1}$ is numerically nontrivial. Let us now fix some nef $M$ such that
$ML_2...L_{n-1}$ is numerically nontrivial and let $E$ be a slope $ML_2...L_{n-1}$-semistable torsion free sheaf of rank $r$.
If $E$ is slope $L_1L_2...L_{n-1}$-semistable then
$\Delta (E)L_2...L_{n-1}\ge 0$ by the assumption that Bogomolov's inequality holds for $L_1...L_{n-1}$. So we can assume that $E$ is not slope $L_1L_2...L_{n-1}$-semistable. In this case we consider $M_t=(1-t)M+tL_1$ for $t\in [0,1]$.
Let us note that $M_{t}L_2...L_{n-1}$ is numerically nontrivial, as otherwise for any ample $H$ we get $(1-t)HML_2...L_{n-1}+tHL_1...L_{n-1}=0$, which contradicts Lemma \ref{HIT}. Now we have the following lemma.
\begin{lemma}\label{change-of-polarization}
There exist some $t_0\in (0, 1]$ and slope $M_{t_0}L_2...L_{n-1}$-semistable torsion free sheaves $E'$ and $E''$ of ranks $r'$, $r''$ less than $r$, such that the sequence
$$0\to E'\to E\to E''\to 0$$
is exact and $\mu _{M_{t_0}L_2...L_{n-1}}(E')=\mu _{M_{t_0}L_2...L_{n-1}}(E'')=\mu _{M_{t_0}L_2...L_{n-1}}(E)$.
\end{lemma}
\begin{proof}
Let ${\mathcal S}$ be the set of all saturated subsheaves $F\subset E$ of rank less than $r$ such that $\mu _{L_1L_2...L_{n-1}}(F)>\mu _{L_1L_2...L_{n-1}}(E)$. Note that for any $F\in {\mathcal S}$ we have $r! \mu _{L_1L_2...L_{n-1}}(F)\in {\mathbb Z}$ and $\mu _{L_1L_2...L_{n-1}}(F)\le \mu _{\max, L_1L_2...L_{n-1}}(E)$, so the set $\{\mu _{L_1L_2...L_{n-1}}(F): F\in {\mathcal S}\}$
is finite. Let us take $E'\in {\mathcal S}$ such that the quotient
$$s(F):= \frac{\mu _{ML_2...L_{n-1}}(E)-\mu _{ML_2...L_{n-1}}(F)} {\mu _{L_1L_2...L_{n-1}}(E)-\mu _{L_1L_2...L_{n-1}}(F)}$$
attains the minimum among all $F\in {\mathcal S}$. Such $E'$ exists since
$r!(\mu _{ML_2...L_{n-1}}(E)-\mu _{ML_2...L_{n-1}}(F))$ is a non-negative integer and the denominator takes only a finite number of positive values.
Let us set $t_0=\frac{s(E')}{1+s(E')}$ so that $s(E')=t_0/(1-t_0)$.
For any $F\subset E$ of rank less than $r$ we have
{ \small
$$ \mu _{M_{t_0}L_2...L_{n-1}}(E)-\mu _{M_{t_0}L_2...L_{n-1}}(F)=(1-t_0) (\mu _{ML_2...L_{n-1}}(E)-\mu _{ML_2...L_{n-1}}(F))
-t_0 (\mu _{L_1L_2...L_{n-1}}(E)-\mu _{L_1L_2...L_{n-1}}(F)).
$$}
This difference is clearly positive if $F\not \in {\mathcal S}$ and $\le 0$ if $F\in {\mathcal S}$ with equality for $F=E'$.
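The equality for $F=E'$ can be checked directly: by the definition of $s(E')$ we have $\mu _{ML_2...L_{n-1}}(E)-\mu _{ML_2...L_{n-1}}(E')=s(E')\,(\mu _{L_1L_2...L_{n-1}}(E)-\mu _{L_1L_2...L_{n-1}}(E'))$, so the above difference equals

```latex
\bigl((1-t_0)s(E')-t_0\bigr)\left(\mu _{L_1L_2...L_{n-1}}(E)-\mu _{L_1L_2...L_{n-1}}(E')\right) = 0,
```

since $t_0=\frac{s(E')}{1+s(E')}$.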
Therefore $E'$ and $E''=E/E'$ satisfy the required assertions.
\end{proof}
Now, to finish the proof of the proposition, note that by the induction assumption we have $\Delta (E') L_2...L_{n-1}\ge 0$ and $\Delta (E'') L_2...L_{n-1}\ge 0$.
Therefore by the Hodge index theorem (see Lemma \ref{HIT}) we get
$$\frac{\Delta (E) L_2...L_{n-1}}{r}=\frac{\Delta (E') L_2...L_{n-1}}{r'}+\frac{\Delta (E'') L_2...L_{n-1}}{r''}-\frac{r'r''}{r}\left(\frac{c_1(E')}{r'}-\frac{c_1(E'')}{r''}
\right) ^2L_2...L_{n-1}\ge 0.$$
\end{proof}
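The identity for $\Delta$ used in the last display is a purely formal consequence of Whitney's formula $c(E)=c(E')c(E'')$. The following sketch (our own sanity check, not part of the argument; Chern classes are treated as scalar sample values, which suffices since the identity is polynomial in the relevant intersection numbers) verifies it exactly at random rational points.

```python
# Exact check, at random rational sample values, of the identity
#   Delta(E)/r = Delta(E')/r' + Delta(E'')/r''
#                - (r'*r''/r) * (c1(E')/r' - c1(E'')/r'')^2
# for an extension 0 -> E' -> E -> E'' -> 0, where r = r' + r'',
# c1(E) = c1(E') + c1(E'') and c2(E) = c2(E') + c2(E'') + c1(E')c1(E'')
# (Whitney's formula), with Delta = 2*rank*c2 - (rank-1)*c1^2.
from fractions import Fraction
from random import randint

def delta(rank, c1, c2):
    return 2 * rank * c2 - (rank - 1) * c1 ** 2

for _ in range(200):
    r1, r2 = randint(1, 8), randint(1, 8)
    a, b = Fraction(randint(-9, 9)), Fraction(randint(-9, 9))      # c1(E'), c1(E'')
    c2a, c2b = Fraction(randint(-9, 9)), Fraction(randint(-9, 9))  # c2(E'), c2(E'')
    r = r1 + r2
    lhs = delta(r, a + b, c2a + c2b + a * b) / r
    rhs = (delta(r1, a, c2a) / r1 + delta(r2, b, c2b) / r2
           - Fraction(r1 * r2, r) * (a / r1 - b / r2) ** 2)
    assert lhs == rhs
print("identity verified")
```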
\begin{remark}
Lemma \ref{change-of-polarization} is rather standard (see, e.g., \cite[Lemma 4.C.5]{HL}) but we give all the details, as the proof in \cite{HL} uses Grothendieck's lemma, which fails in our case (it fails even in the surface case when both $L_1$ and $M$ are not ample).
\end{remark}
\subsection{Bogomolov's inequality and restriction theorem on projective spaces}
\begin{theorem}\label{Bogomolov-proj-sp}
Let $H$ be a hyperplane on ${\mathbb P} ^n$ and let $E$ be a slope $H$-semistable torsion free coherent sheaf on ${\mathbb P} ^n$. Then $\Delta (E)H^{n-2}\ge 0.$
\end{theorem}
\begin{proof}
The proof is by induction on the dimension $n$ starting with $n=2$. In this case the proof is classical and the result follows from the Riemann--Roch theorem. More precisely, using Jordan--H\"older's filtration one can reduce proving the inequality for slope $H$-semistable sheaves to the case of slope $H$-stable sheaves. If $E$ is slope $H$-stable then $h^0({\mathop{\mathcal{E}nd}\,} E)=1$ and $h^2({\mathop{\mathcal{E}nd}\,} E)=h^0({\mathop{\mathcal{E}nd}\,} E(-3))=0$, so $\chi ({\mathop{\mathcal{E}nd}\,} E)= -\Delta (E)+r^2\chi ({\mathcal O}_{{\mathbb P} ^2})\le 1$, i.e., $\Delta (E)\ge r^2-1\ge 0$.
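The equality $c_2({\mathop{\mathcal{E}nd}\,} E)=\Delta(E)$ implicit in the last computation can be checked by multiplying truncated Chern characters. The following sketch (our own verification, with Chern classes treated as scalar sample values, which suffices for a polynomial identity) confirms it exactly at random rational points.

```python
# Check that for a rank r sheaf E with Chern classes c1, c2 one has
#   c1(End E) = 0  and  c2(End E) = Delta(E) = 2*r*c2 - (r-1)*c1^2,
# computed from ch(End E) = ch(E) * ch(E^dual) truncated in degree <= 2.
from fractions import Fraction
from random import randint

for _ in range(200):
    r = randint(1, 10)
    c1, c2 = Fraction(randint(-9, 9)), Fraction(randint(-9, 9))
    ch = (Fraction(r), c1, (c1 ** 2 - 2 * c2) / 2)    # ch(E): (ch0, ch1, ch2)
    chd = (Fraction(r), -c1, (c1 ** 2 - 2 * c2) / 2)  # ch(E^dual)
    ch1_end = ch[0] * chd[1] + ch[1] * chd[0]
    ch2_end = ch[0] * chd[2] + ch[1] * chd[1] + ch[2] * chd[0]
    assert ch1_end == 0                               # c1(End E) = ch1 = 0
    c2_end = (ch1_end ** 2 - 2 * ch2_end) / 2         # c2 = (ch1^2 - 2 ch2)/2
    assert c2_end == 2 * r * c2 - (r - 1) * c1 ** 2
print("c2(End E) = Delta(E) verified")
```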
Now let us assume that $n\ge 3$.
Let $E$ be a slope $H$-semistable torsion free coherent sheaf on ${\mathbb P} ^n$.
Let $\Lambda \subset |{\mathcal O}_{{\mathbb P} ^n} (1)|$ be a general pencil of hyperplanes. Let $q: Y\to {\mathbb P}^n$
be the blow up of ${\mathbb P} ^n$ in the base locus of $\Lambda$ and let $p: Y\to \Lambda={\mathbb P}^1$ be the canonical
projection.
We claim that Bogomolov's inequality holds for the collection $(p^*{\mathcal O} _{\Lambda }(1), q^*(H)^{n-2})$.
Namely, let $F$ be a torsion free sheaf on $Y$ which is slope $p^*{\mathcal O} _{\Lambda }(1)q^*(H^{n-2})$-semistable.
Existence of the flattening stratification (see \cite[Theorem 2.1.5]{HL}) implies that there exists a non-empty open subset $U\subset {\mathbb P}^1$ such that $F$ is flat over $U$. Moreover, we can assume that for every $s\in U$ the restriction $F_s
$ to the fiber of $p$ over $s$ is torsion-free (cf.~\cite[Lemma 3.1.1]{HL}).
Since $F$ is slope semistable on the generic fiber of $p$, by openness of slope semistability for flat families,
$F_s$ is slope semistable on some fiber $Y_s={\mathbb P}^{n-1}$ over a geometric point $s$ of $U\subset \Lambda$. Therefore by the induction assumption $\Delta (F)q^*(H^{n-2})=\Delta (F_s)H_s^{n-3}\ge 0$, where $H_s$ is a hyperplane in $Y_s$. This proves our claim.
Now Proposition \ref{pol-change} implies that Bogomolov's inequality holds also for $q^*(H)^{n-1}$. But $q^*E$ is torsion free and it is slope $q^*(H)^{n-1}$-semistable, so $\Delta (E)H^{n-2} =\Delta (q^*E)(q^*H)^{n-2}\ge 0.$
\end{proof}
As in \cite[Theorem 5.1]{La1} the above theorem implies the following corollary:
\begin{corollary}\label{unstable-Bogomolov}
Let $E$ be a torsion free rank $r$ sheaf on ${\mathbb P} ^n$.
Then we have
$$\Delta (E)H^{n-2}+ r^2(\mu_{\max , H}(E) -\mu _H (E))(\mu _H (E) -\mu_{\min , H} (E) )\ge 0.$$
\end{corollary}
As in \cite[Corollary 5.4]{La1} the above corollary implies the following restriction theorem (we state only a simplified version for slope semistability).
\begin{theorem}\label{restriction}
Let $E$ be a torsion free rank $r$ sheaf on ${\mathbb P} ^n$. Let $D\in |mH|$ be a general hypersurface of degree
$$m>\frac {(r-1)^2\Delta (E) H^{n-2} +1}{r(r-1)} .$$
If $E$ is slope $H$-semistable then the restriction $E_D$ is slope $H_D$-semistable.
\end{theorem}
\subsection{Proof of boundedness of semistable sheaves}
Let $X_k$ be an $n$-dimensional projective scheme over
an algebraically closed field $k$ and let $H={\mathcal O} _{X_k}(1)$ be an ample divisor on $X_k$.
If $E$ is a coherent sheaf of pure dimension $d$ on $X_k$ then there exist integers $a_0(E),\dots ,a_d(E)$
and $\alpha_0 (E), \dots ,\alpha_d(E)$ such that
$$\chi (X_k, E(m))=\sum _{i=0}^d a_i (E) {m+d-i \choose d-i}=\sum _{i=0}^d \alpha_i (E)\frac{m^i}{i!}.$$
One defines the \emph{generalized slope} of $E$ by $\hat\mu (E)=\frac{\alpha_{d-1}(E)}{\alpha_d (E)}=\frac{a_1(E)}{a_0(E)}+\frac{d+1}{2}$. It is used
to define $\hat \mu _{\max} (E)$ in the same way as the usual slope is used to define $\mu_{\max}$, i.e.,
$\hat \mu _{\max} (E)$ is the maximum of $\hat \mu (F)$ for all subsheaves $F\subset E$.
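The last equality $\hat\mu (E)=\frac{a_1(E)}{a_0(E)}+\frac{d+1}{2}$ is an elementary identity relating the two expansions of the Hilbert polynomial above. A quick exact verification (our own sketch, with random integer sample values for the $a_i$):

```python
# Exact check of  mu_hat = alpha_{d-1}/alpha_d = a_1/a_0 + (d+1)/2
# by expanding  chi(m) = sum_i a_i * C(m+d-i, d-i)  as a polynomial in m
# with rational coefficients and reading off alpha_d, alpha_{d-1}.
from fractions import Fraction
from math import factorial
from random import randint

def binom_poly(k):
    # coefficients in m of C(m+k, k) = (m+1)(m+2)...(m+k)/k!
    coeffs = [Fraction(1)]
    for j in range(1, k + 1):
        new = [Fraction(0)] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):   # multiply by (m + j)
            new[i + 1] += c
            new[i] += j * c
        coeffs = new
    return [c / factorial(k) for c in coeffs]

for d in range(1, 6):
    a = [randint(1, 9)] + [randint(-9, 9) for _ in range(d)]  # a_0 > 0
    chi = [Fraction(0)] * (d + 1)                             # chi(m) in basis m^j
    for i in range(d + 1):
        for j, c in enumerate(binom_poly(d - i)):
            chi[j] += a[i] * c
    alpha = [factorial(j) * chi[j] for j in range(d + 1)]     # chi = sum alpha_j m^j/j!
    assert alpha[d] == a[0]
    assert alpha[d - 1] / alpha[d] == Fraction(a[1], a[0]) + Fraction(d + 1, 2)
```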
\medskip
Let $f: X \to S$ be a projective morphism of noetherian schemes of relative dimension $n$ and let ${\mathcal O} _{X/S}(1)$ be an $f$-very ample line bundle on $X$. Let us consider the following families of sheaves.
\begin{enumerate}
\item Let ${\mathcal S} _{X/S} (d; r,a_1,\dots ,a_d, \mu _{\max})$ be
the family of isomorphism classes of
coherent sheaves on the fibres of $f$ such that $E$ on a geometric fibre $X_s$
is a member of the family if $E$ is of pure dimension $d$,
$\hat \mu _{\max} (E)\le \mu_{\max}$, $a_0(E)=r$, $a_1(E)=a_1$ and $a_i(E)\ge a_i$
for $i\ge 2$.
\item Let ${\mathcal S} '_{X/S} (d; r,a_1,a_2, \mu _{\max})$
be the family of isomorphism classes of
coherent sheaves on the fibres of $f$ such that $E$ on a geometric fibre $X_s$
is a member of the family if $E$ is of pure dimension $d$ and it satisfies Serre's condition $S_2$,
$\hat\mu _{\max} (E)\le \mu_{\max}$, $a_0(E)=r$, $a_1(E)=a_1$ and $a_2(E)\ge a_2$.
\end{enumerate}
The following theorem was first proven in \cite[Theorem 4.4]{La1}. Here we sketch a simple proof of this theorem based on the results of the previous subsection.
\begin{theorem} \label{boundedness}
The families ${\mathcal S} _{X/S} (d; r,a_1,\dots ,a_d, \mu _{\max})$
and ${\mathcal S} '_{X/S} (d; r,a_1,a_2, \mu _{\max})$ are bounded.
\end{theorem}
\begin{proof}
For simplicity we consider only the family ${\mathcal S}= {\mathcal S} _{X/S} (d; r,a_1,\dots ,a_d, \mu _{\max})$ but a similar proof works also for the other family. We proceed by induction on $d$ with $d=0$ being trivial. Let us assume that
the family ${\mathcal S} _{X/S} (d-1; r,a_1,\dots ,a_{d-1}, \mu _{\max})$ is bounded for all possible $X/S$ and all $r,a_1,\dots ,a_{d-1}, \mu _{\max}$.
Theorem \ref{restriction} and our induction assumption imply that the family
${\mathcal S} _{{\mathbb P} ^d_S/S} (d; r,a_1,\dots ,a_d, \mu _{\max})$ is bounded for all $r,a_1,\dots ,a_{d}, \mu _{\max}$
(this reduction step is a classical result of M. Maruyama; see \cite[Proposition 3.4]{Ma}). The remaining part of the proof relies on the projection method, originally due to C. Simpson and J. Le Potier (see the proof of \cite[Theorem 7.9]{Ma}).
Without loss of generality we can shrink $S$ so that $X/S$ embeds into ${\mathbb P} ^N_S$ for some fixed $N$.
Let us fix a linear subspace $T={\mathbb P} ^{N-d-1}_S\subset {\mathbb P} ^N_S$.
It is sufficient to bound the subfamily $\tilde {\mathcal S}$ of ${\mathcal S}$ consisting of the classes of all sheaves $E$ on a geometric fibre $X_s$ whose support does not intersect $T_s$. Let us fix such an $E$.
Taking the linear projection ${\mathbb P} ^N_S\dasharrow {\mathbb P}^d_S$ from $T$ and restricting it to the scheme-theoretic support $Z$ of $E$ we get a well-defined finite morphism $\pi : Z\to {\mathbb P} ^d_s$. Then $\pi_*E$ is torsion free and we have
$$H^i(X_s, E(k))= H^i(Z, E(k))=H^i ({\mathbb P} ^d_s, \pi_*E\otimes {\mathcal O}_{{\mathbb P}^d_s}(k)).\leqno{(*)}$$
Moreover, one can prove that
$$\mu _{\max} (\pi_*E)- \mu (\pi _*E)\le \hat \mu _{\max} (E)- \hat\mu (E)+(a_0(E))^2$$
(see \cite[Lemmas 6.2.1 and 6.2.2]{La4} or proof of \cite[Theorem 7.9]{Ma} for a slightly weaker estimate). This implies that there exist some constants
$r',a_1',\dots ,a_d', \mu _{\max} '$ such that for all $E$ in $\tilde {\mathcal S}$, the pushforward $\pi_*E$ is in the family $ {\mathcal S}_{{\mathbb P}}:={\mathcal S} _{{\mathbb P}^d_S/S} (d; r',a_1',\dots ,a_d', \mu _{\max} ')$. We claim that boundedness of the family $ {\mathcal S} _{{\mathbb P}}$
implies boundedness of $\tilde {\mathcal S}$. This follows from the fact that boundedness of a family yields a common bound on the Castelnuovo--Mumford regularity of the sheaves in this family. Equality $(*)$ then gives a common bound on the Castelnuovo--Mumford regularity of the sheaves in the family $\tilde {\mathcal S}$. By the Castelnuovo--Mumford criterion and Grothendieck's lemma \cite[Lemma 1.7.9]{HL} we get boundedness of the family $\tilde {\mathcal S}$, and hence of ${\mathcal S}$.
\end{proof}
The above result has many applications. Let us just recall that by the proof of \cite[Theorem 1]{Mo1} it implies the following Bogomolov type inequality for strongly semistable sheaves.
\begin{corollary}
Let $X$ be a smooth projective variety of dimension $n\ge 2$ defined over an algebraically closed field $k$ of characteristic $p>0$. Let $H$ be an ample divisor and let
$E$ be a strongly $H$-semistable sheaf on $X$. Then we have $\Delta(E)H^{n-2}\ge 0$.
\end{corollary}
A generalization of the above result was first proven in \cite[Theorem 3.2]{La1} as part of the proof of Theorem \ref{boundedness}.
\subsection{Bogomolov's inequality in characteristic zero}
In characteristic zero a proof similar to that of Theorem \ref{Bogomolov-proj-sp} allows us to reduce Bogomolov's theorem in higher dimensions to the surface case:
\begin{theorem}\label{Bog-0}
Let $X$ be a smooth projective variety defined over an algebraically closed field $k$ of characteristic $0$.
Let $(D_1, ..., D_{n-1})$ be a collection of nef divisors on $X$ such that the $1$-cycle $D_1...D_{n-1}$ is numerically nontrivial. If $E$ is a slope $(D_1, ..., D_{n-1})$-semistable torsion free coherent sheaf on $X$
then $$\Delta (E)D_2...D_{n-1}\ge 0.$$
\end{theorem}
\begin{proof}
The proof is by induction on the dimension $n$ assuming the inequality in dimension $n=2$.
Let $E$ be a slope $(D_1, ..., D_{n-1})$-semistable torsion free coherent sheaf on $X$, where $n=\dim X>2$. By Proposition \ref{pol-change} we can assume that $D_1$ is very ample. Let $\Lambda \subset |{\mathcal O}_X(D_1)|$ be a general pencil of hyperplanes. Let $q: Y\to X$ be the blow up of $X$ in the base locus of $\Lambda$ and let $p: Y\to \Lambda={\mathbb P}^1$ be the canonical projection. As in the proof of Theorem \ref{Bogomolov-proj-sp}, the induction assumption implies that Bogomolov's inequality holds for $(p^*{\mathcal O} _{\Lambda }(1),q^*D_2,...,q^*D_{n-1})$. Therefore by Proposition \ref{pol-change} it also holds for $(q^*D_1,... , q^*D_{n-1})$.
But $q^*E$ is torsion free and it is slope $(q^*D_1, ..., q^*D_{n-1})$-semistable, so
$$\Delta (E)D_2...D_{n-1} =\Delta (q^*E)q^*(D_2...D_{n-1})\ge 0.$$
\end{proof}
\begin{remark}
For $n>2$ Bogomolov's inequality is usually stated for collections of ample divisors and obtained by restricting to surfaces using Mehta--Ramanathan's theorem (see \cite[Theorem 7.3.1]{HL}).
This approach works well if $D_1,...,D_{n-1}$ are multiples of the same ample divisor $H$, giving $\Delta(E) H^{n-2}\ge 0$. However, to the author's knowledge there is no written account of Mehta--Ramanathan's restriction theorem for non-proportional ample divisors. The only other approach to Theorem \ref{Bog-0} is that from \cite[Theorem 3.2]{La1}, where it is part of a complicated induction procedure.
\end{remark}
\begin{remark}
As in the case of Theorem \ref{restriction}, Theorem \ref{Bog-0} implies effective restriction theorems for slope stability and slope semistability (see \cite[Theorem 5.2 and Corollary 5.4]{La1}).
This approach makes proving the Mehta--Ramanathan restriction theorems \cite[Theorems 7.2.1 and 7.2.8]{HL} obsolete as we recover much stronger results.
\end{remark}
\section{Introduction}
Let $\Omega\subset\subset\mathbb{C}^{2}$ be a smoothly bounded domain. Throughout, we suppose that $
\Omega$ admits a $\mathcal{C}^\infty$-smooth defining function $\rho$ which is plurisubharmonic on
the
boundary, $b\Omega$,
of $\Omega$,
i.e.,
\begin{align*}
H_{\rho}(\xi,\xi)(z):=
\sum_{j,k=1}^{2}\frac{\partial^{2}\rho}{\partial z_{j}\partial\bar{z}_{k}}(z)\xi_{j}\bar{\xi}_{k}\geq 0
\end{align*}
for all $z\in b\Omega$ and $\xi=(\xi_{1},\xi_{2})\in\mathbb{C}^{2}$. This property comes
up naturally as a sufficient condition for global regularity of the Bergman projection; see
\cite{BoaStr91,HerMcN06}.
The purpose of this paper is to investigate how the plurisubharmonicity of $\rho$ influences the
behaviour of the complex Hessian of $\rho$ (or of the complex Hessians of some other defining
functions of $\Omega$) away from the boundary of $\Omega$.
Suppose $D=\{z\in\mathbb{C}^{2}\;|\;r(z)<0\;\}$ is a smoothly bounded, pseudoconvex domain. Then it
follows by standard arguments that
there exists a neighborhood $W$ of the boundary of $D$ such
that the following lower estimate for the complex Hessian of $r$ holds:
\begin{align}\label{E:StandardEst}
H_{r}(\xi,\xi)(q)\geq\mathcal{O}(r(q))|\xi|^{2}
+\mathcal{O}\left(|\xi|\cdot\left|\langle\partial r(q),\xi\rangle\right|\right)
\end{align}
for $q\in W$ and $\xi\in\mathbb{C}^{2}$ (see for instance \cite{R1981} for
details).
Our main result shows how to improve the estimate \eqref{E:StandardEst} under the additional
condition that there is some smooth defining function of $D$ which is plurisubharmonic on the
boundary of $D$.
\begin{theorem}\label{T:MainTheorem}
Let $\Omega\subset\subset\mathbb{C}^{2}$ be a smoothly bounded domain, and suppose $\Omega$
admits a smooth defining function which is plurisubharmonic on the boundary, $b\Omega$, of $
\Omega$.
Then the following holds:
for each $\epsilon>0$ and $K>0$, there exist a neighborhood $V$
of $b\Omega$ and defining functions $r_{1}$ and $r_{2}$ such that for all
$\xi\in\mathbb{C}^{2}$
\begin{align}\label{E:MainEst}
H_{r_{1}}(\xi,\xi)(q)\geq\epsilon r_{1}(q)|\xi|^{2}+K|\langle\partial r_{1}(q),\xi\rangle|^{2}
\;\;\;\text{for}\;\;\; q\in V\cap\overline{\Omega}
\end{align}
and
\begin{align}\label{E:MainEst2}
H_{r_{2}}(\xi,\xi)(q)\geq -\epsilon r_{2}(q)|\xi|^{2}+K|\langle\partial r_{2}(q),\xi\rangle|^{2}
\;\;\;\text{for}\;\;\; q\in V\cap\overline{\Omega^{C}}.
\end{align}
\end{theorem}
An immediate consequence of Theorem \ref{T:MainTheorem} is the existence of strictly
plurisubharmonic exhaustion functions of $\Omega$ and of the complement of $\overline{\Omega}$:
\begin{corollary}\label{C:DFexponent}
Suppose the hypotheses of Theorem \ref{T:MainTheorem} hold. Then
\begin{enumerate}
\item[(i)] for any $\eta\in(0,1)$ there exists a smooth defining function $\widetilde{r}_{1}$ such that
$-(-\widetilde{r}_{1})^{\eta}$ is strictly plurisubharmonic on $\Omega$,
\item[(ii)] for any $\eta>1$ there exist a neighborhood $V$ of $b\Omega$ and a smooth defining
function $\widetilde{r}_{2}$ such that
$\widetilde{r}_{2}^{\eta}$ is strictly plurisubharmonic on $V\setminus \overline{\Omega}$.
\end{enumerate}
\end{corollary}
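The link between \eqref{E:MainEst} and part (i) of the corollary is the following standard identity for the complex Hessian of $-(-r)^{\eta}$, valid for any smooth defining function $r$; we record this routine computation here for the reader's convenience.

```latex
\begin{align*}
H_{-(-r)^{\eta}}(\xi,\xi)
=\eta(-r)^{\eta-1}
\left[H_{r}(\xi,\xi)
+(1-\eta)\frac{\left|\langle\partial r,\xi\rangle\right|^{2}}{-r}\right]
\qquad\text{on}\;\{r<0\}.
\end{align*}
```

It follows from $\partial_{j}\bigl(-(-r)^{\eta}\bigr)=\eta(-r)^{\eta-1}\partial_{j}r$ by one further differentiation.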
A Diederich-Forn\ae ss exponent of a domain $D\subset\subset\mathbb{C}^{n}$ is a number
$\tau\in(0,1]$ for which there exists a smooth defining function $s$ of $D$ so that $-(-s)^{\tau}$ is strictly
plurisubharmonic on $D$. That all
smoothly bounded pseudoconvex domains in $\mathbb{C}^{n}$ have a
Diederich-Forn\ae ss exponent $\tau$ was shown in \cite{DF1977A} (see also \cite{R1981}). It is also
known that there are pseudoconvex domains for which the largest possible $\tau$
might be arbitrarily close to 0 (see \cite{DF1977B}).
However, part (i) of Corollary \ref{C:DFexponent} says that
the Diederich-Forn\ae ss exponent
can be chosen arbitrarily close to 1 on domains which admit a smooth
defining function that is plurisubharmonic on the boundary.
Part (ii) of Corollary \ref{C:DFexponent} is of interest, since it implies that the closure of
$\Omega$ has a Stein neighborhood basis. In particular, it follows that $b\Omega$ is uniformly
H-convex. We remark that partial results regarding the existence of a Stein neighborhood basis for
the closure of a domain, which satisfies the hypotheses of Theorem \ref{T:MainTheorem}, have been
obtained
in \cite{Sah06}.
\medskip
The paper is structured as follows. In Section \ref{S:Obstruction}, we identify the obstruction to \eqref{E:MainEst}
holding for the given defining function $\rho$. We then give an example showing
that this obstruction does actually occur.
In Section \ref{S:Modification}, we prove Theorem \ref{T:MainTheorem}, and we conclude this paper by
proving Corollary \ref{C:DFexponent} in Section
\ref{S:DFexponent}.
We would like to thank J.D. McNeal for stimulating discussions about this project.
\section{The obstruction}\label{S:Obstruction}
Throughout, $(z_{1},z_{2})$ will denote the coordinates of $\mathbb{C}^2$. We shall identify
the vector $\langle\xi_1,\xi_2\rangle$ in $\mathbb{C}^{2}$ with $\xi_1\frac{\partial}{\partial z_1}+
\xi_2\frac{\partial}{\partial z_2}$ in the $(1,0)$-tangent bundle of $\mathbb{C}^2$ at any given point.
We use the
pointwise hermitian inner product $\langle .,.\rangle$ defined by
$\langle\frac{\partial}{\partial z_j},\frac{\partial}{\partial z_k}\rangle=\delta^j_k$. We also shall use
$\langle.,.\rangle$ to denote contractions of
vector fields and forms. We hope this abuse of notation will not confuse the reader as it should be clear
from the context what is meant.
\medskip
Let us first see which quantities the right hand side of \eqref{E:StandardEst} depends on. To do so,
we need to use Taylor's formula:
\subsection{Taylor's formula in our context}\label{SS:Taylor}
Since $b\Omega$ is smooth, there exist a neighborhood $U$ of $b\Omega$ and a smooth map
\begin{align*}
\pi:\overline{\Omega}\cap U&\longrightarrow b\Omega\\
q&\longmapsto \pi(q)=p
\end{align*}
such that $p\in b\Omega$ lies on the line normal to $b\Omega$ passing through
$q$, and $|p-q|$ is equal to the complex Euclidean distance, $d_{b\Omega}(q)$, of $q$ to
$b\Omega$.
Denote by $\vec{n}_{p}$ the unit outward normal to $b\Omega$ at $p$. Then
$q=p-d_{b\Omega}(q)\vec{n}_{p}$. Note that in complex
notation
\begin{align*}
\vec{n}_{p}=\frac{\left\langle \frac{\partial\rho}{\partial\overline{z}_{1}},
\frac{\partial\rho}{\partial\overline{z}_{2}}\right\rangle}
{|\partial\rho|}(p),\;\;\text{which implies}\;\;
q=p-\frac{d_{b\Omega}(q)}{|\partial\rho|}\left\langle\frac{\partial\rho}{\partial \overline{z}_{1}},
\frac{\partial\rho}{\partial\overline{z}_{2}}\right\rangle(p).
\end{align*}
Let $f\in C^{2}(\overline{\Omega})$, $q\in\overline{\Omega}\cap U$ and $p=\pi(q)$. Then Taylor's formula
in complex notation says
\begin{align*}
f(q)&=f(p)+\sum_{j=1}^{2}\left[ \frac{\partial f}{\partial z_{j}}(p)(q_{j}-p_{j})
+\frac{\partial f}{\partial\overline{z}_{j}}(p)(\overline{q}_{j}-\overline{p}_{j})\right]
+\mathcal{O}(|q-p|^{2})\\
&=
f(p)-\frac{d_{b\Omega}(q)}{|\partial\rho(p)|}
\sum_{j=1}^{2}\left[\frac{\partial\rho}{\partial\overline{z}_{j}}(p)\frac{\partial f}{\partial z_{j}}(p)
+\frac{\partial\rho}{\partial z_{j}}(p)\frac{\partial f}{\partial\overline{z}_{j}}(p)
\right]+\mathcal{O}(d^{2}_{b\Omega}(q)).
\end{align*}
Define the vector field $N(z)=\frac{1}{|\partial\rho(z)|}\sum_{j=1}^{2}
\frac{\partial\rho}{\partial\overline{z}_{j}}(z)
\frac{\partial}{\partial z_{j}}$. Then
\begin{align}\label{E:Taylor}
f(q)=f(p)-2d_{b\Omega}(q)\left[(ReN)(f)\right](p)+\mathcal{O}(d_{b\Omega}^{2}(q)).
\end{align}
\medskip
\subsection{Partial Taylor analysis of the complex Hessian of $\rho$}\label{SS:Tayloronrho}
After possibly shrinking the neighborhood $U$ of $b\Omega$, the smooth vector fields
\begin{align*}
L=\frac{\frac{\partial\rho}{\partial z_{2}}\frac{\partial}{\partial z_{1}}
-\frac{\partial\rho}{\partial z_{1}}\frac{\partial}{\partial z_{2}}}
{|\partial\rho|}\;\;\text{and}\;\;
N=\frac{\frac{\partial\rho}{\partial\overline{z}_{1}}\frac{\partial}{\partial z_{1}}
+\frac{\partial\rho}{\partial\overline{z}_{2}}\frac{\partial}{\partial z_{2}}}
{|\partial\rho|}
\end{align*}
are defined on $\overline{\Omega}\cap U$, and it holds that
\begin{align*}
L(\rho)=\langle L,N\rangle=0\;\;\text{and}\;\;|L|=1=|N|\;\;\text{on}\;\;\overline{\Omega}\cap U.
\end{align*}
Before we get down to business, we need some more notation:
for vector fields $X(z)=\sum_{i=1}^{2}X_{i}(z)\frac{\partial}{\partial z_{i}}$ and
$Y(z)=\sum_{i=1}^{2}Y_{i}(z)\frac{\partial}{\partial z_{i}}$, we shall write
\begin{align*}
H_{\rho}(X,Y)(z)&=\sum_{j,k=1}^{2}\frac{\partial^{2}\rho}{\partial z_{j}\partial\bar{z}_{k}}(z)
X_{j}(z)\overline{Y}_{k}(z).
\end{align*}
We denote by $\Omega_{W}$ the set of all points $q\in\Omega\cap U$ for which $p=\pi(q)$ is a weakly
pseudoconvex boundary point.
Let $\epsilon>0$ be fixed. For each fixed $q\in\Omega_{W}\cap U$ and $\xi\in\mathbb{C}^{2}$ there
exist constants $a_{q,\xi}$ and $b_{q,\xi}$ such that
$\xi=a_{q,\xi}L(q)+b_{q,\xi}N(q)$. Note that then $|\xi|^{2}=|a_{q,\xi}|^{2}+|b_{q,\xi}|^{2}$. For now, we
only consider $q\in\Omega_{W}\cap U$, and for notational ease, we shall drop the subscripts $q,\xi$.
We first note that
\begin{align}\label{E:xiLN}
H_{\rho}(\xi,\xi)(q)=|a|^{2}H_{\rho}(L,L)(q)+2Re\left(a\overline{b}H_{\rho}(L,N)(q)\right)
+|b|^{2}H_{\rho}(N,N)(q).
\end{align}
We apply \eqref{E:Taylor} to $H_{\rho}(L,L)(q)$, i.e.,
\begin{align*}
H_{\rho}(L,L)(q)
=H_{\rho}(L,L)(p)-2d_{b\Omega}(q)\left(ReN\right)\left(H_{\rho}(L,L)\right)(p)
+\mathcal{O}(d^{2}_{b\Omega}(q)).
\end{align*}
Since $H_{\rho}(L,L)$ is real valued and $H_{\rho}(L,L)(p)=0$, it follows that
\begin{align*}
H_{\rho}(L,L)(q)=-2d_{b\Omega}(q)Re(NH_{\rho}(L,L))(p)+\mathcal{O}(d^{2}_{b\Omega}(q)).
\end{align*}
Notice that $NH_{\rho}(L,L)(p)$ is real. The last equation combined with \eqref{E:xiLN} gives us then
\begin{align*}
H_{\rho}(\xi,\xi)(q)\geq&
|a|^{2}\left[-2d_{b\Omega}(q)\left(NH_{\rho}(L,L)\right)(p)+\mathcal{O}(d^{2}_{b\Omega}(q))\right]\\
&-2|a||b||H_{\rho}(L,N)(q)|+|b|^{2}H_{\rho}(N,N)(q).
\end{align*}
The Cauchy-Schwarz inequality implies
\begin{align*}
H_{\rho}(\xi,\xi)(q)\geq&
|a|^{2}\left[-2d_{b\Omega}(q)\left(NH_{\rho}(L,L)\right)(p)-\rho^{2}(q)
+\mathcal{O}(d^{2}_{b\Omega}(q))\right]\\
&+|b|^{2}\left[\frac{-1}{\rho^{2}(q)}|H_{\rho}(L,N)(q)|^{2}+H_{\rho}(N,N)(q)\right].
\end{align*}
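For the reader's convenience: the Cauchy--Schwarz step used here is the elementary inequality $2st\le s^{2}+t^{2}$, applied with $s=|a|\,|\rho(q)|$ and $t=|b|\,|H_{\rho}(L,N)(q)|/|\rho(q)|$, i.e.,

```latex
\begin{align*}
2|a|\,|b|\,\left|H_{\rho}(L,N)(q)\right|
\leq |a|^{2}\rho^{2}(q)
+\frac{|b|^{2}}{\rho^{2}(q)}\left|H_{\rho}(L,N)(q)\right|^{2}.
\end{align*}
```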
Notice that, after possibly shrinking the neighborhood $U$ of $b\Omega$, we can assume that
\begin{align*}
-\rho^{2}(q)+\mathcal{O}(d^{2}_{b\Omega}(q))\geq\frac{\epsilon}{4}\rho(q)
\end{align*}
for all $q\in\Omega_{W}\cap U$. Therefore,
\begin{align*}
H_{\rho}(\xi,\xi)(q)\geq&
|a|^{2}\left[-2d_{b\Omega}(q)\left(NH_{\rho}(L,L)\right)(p)+\frac{\epsilon}{4}\rho(q)\right]\\
&+|b|^{2}\left[\frac{-1}{\rho^{2}(q)}|H_{\rho}(L,N)(q)|^{2}+H_{\rho}(N,N)(q)\right]
\end{align*}
for all $q\in\Omega_{W}\cap U$.
Because of the plurisubharmonicity of $\rho$ on $\overline{\Omega}_{W}\cap b\Omega$, it follows that
\begin{align*}
|H_{\rho}(L,N)|\leq\left(H_{\rho}(L,L)\right)^{\frac{1}{2}}\left(H_{\rho}(N,N)\right)^{\frac{1}{2}}
\end{align*}
holds on $\overline{\Omega}_{W}\cap b\Omega$. Since $q\in\Omega_{W}\cap U$, i.e., since $\pi(q)=p$ is a weakly pseudoconvex boundary point, we get that $H_{\rho}(L,N)(p)=0$. Therefore, there exists a constant $c_{1}>0$, depending on $\rho$, such that
\begin{align*}
|H_{\rho}(L,N)(q)|^{2}\leq c_{1}|\rho(q)|^{2}\;\;\text{for all}\;\;q\in\Omega_{W}\cap U.
\end{align*}
This gives us the following lower bound on $H_{\rho}(\xi,\xi)(q)$:
\begin{align*}
H_{\rho}(\xi,\xi)(q)\geq& |a|^{2}\left[-2d_{b\Omega}(q)\left(NH_{\rho}(L,L)\right)(p)
+\frac{\epsilon}{4}\rho(q)\right]\\
&-|b|^{2}\left[c_{1}-H_{\rho}(N,N)(q)\right],
\end{align*}
which implies that for some constant $c_{2}>0$ depending on $\rho$
\begin{align}\label{E:TayloronHrho}
H_{\rho}(\xi,\xi)(q)\geq |a|^{2}\left[-2d_{b\Omega}(q)\left(NH_{\rho}(L,L)\right)(p)
+\frac{\epsilon}{4}\rho(q)\right]-c_{2}|b|^{2}
\end{align}
holds for $q\in\Omega_{W}\cap U$.
Note that \eqref{E:TayloronHrho} is a more detailed version of \eqref{E:StandardEst} for those points
$q\in\Omega$ near $b\Omega$ whose projections $\pi(q)$ are weakly pseudoconvex boundary points.
Moreover,
inequality \eqref{E:MainEst} is within reach if $NH_{\rho}(L,L)$ is non-positive at all weakly
pseudoconvex boundary points. The term $NH_{\rho}(L,L)$ being positive at some weakly pseudoconvex boundary point $p_{0}$ means that the function $H_{\rho}(L,L)$ decreases when one moves from $p_{0}$ into the domain along the line normal to $b\Omega$ at $p_{0}$. This, of course, means that $H_{\rho}(L,L)$ becomes negative there, which destroys any hope of $\rho$ being plurisubharmonic in some neighborhood of $p_{0}$. Clearly, $NH_{\rho}(L,L)(p_{0})> 0$ obstructs
inequality \eqref{E:MainEst} from holding for all $\epsilon>0$.
\medskip
\subsection{Example \& idea of modification of $\rho$} We shall first give an example of a domain where $NH_{\rho}(L,L)$ is positive at a weakly pseudoconvex boundary point. Consider the domain $D=\{(z,w)\in\mathbb{C}^{2}\;|\;\rho(z,w)<0\}$ near the origin, where
\begin{align*}
\rho(z,w)=Re(w)+|w|^{2}+Re(w)|z|^{2}+|z|^{2}|w|^{2}+|z|^{4}+|z|^{6}.
\end{align*}
One can easily show that $\rho$ is plurisubharmonic on $bD$ near the origin. In fact, $\rho$ is strictly plurisubharmonic on $bD$ near the origin except when $z=0$. Let $\xi=(\xi_{1},\xi_{2})\in\mathbb{C}^{2}$, and let $q=(0,w)$ be a point in $D$ lying on the line normal to $bD$ through the origin. Then
\begin{align*}
H_{\rho}(\xi,\xi)(q)=(Re(w)+|w|^{2})|\xi_{1}|^{2}+|\xi_{2}|^{2}.
\end{align*}
Thus $\rho$ cannot be plurisubharmonic in any neighborhood of the origin. Note that
this is caused by the term $Re(w)|z|^{2}$ contained in the definition of $\rho$. Indeed, this is our old enemy, the obstruction term:
\begin{align*}
(NH_{\rho}(L,L))(0)=\frac{\partial}{\partial w}\left(
\frac{\partial^{2}\rho}{\partial z\partial\bar{z}}
\right)(0)=\frac{1}{2}>0!
\end{align*}
Now the question is whether we can manipulate $\rho$ such that the obstruction vanishes. Notice that
in the above example the answer is yes: let $r(z,w)=\rho(z,w)/(1+|z|^{2})$. Then
\begin{align*}
r(z,w)=Re(w)+|w|^{2}+|z|^{4},
\end{align*}
which is plurisubharmonic everywhere.
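The computations in this example are easy to verify symbolically, treating $z$, $\bar z$, $w$, $\bar w$ as independent variables (Wirtinger calculus). The following sketch uses the Python library sympy; the choice of tool is ours, not the paper's.

```python
import sympy as sp

# Wirtinger calculus: treat z, zb (the conjugate of z) and w, wb as
# independent variables.
z, zb, w, wb = sp.symbols('z zb w wb')
Rew = (w + wb) / 2

rho = Rew + w*wb + Rew*z*zb + z*zb*w*wb + (z*zb)**2 + (z*zb)**3

# rho factors as (1 + |z|^2)(Re(w) + |w|^2 + |z|^4), so r = rho/(1 + |z|^2)
# is the plurisubharmonic function Re(w) + |w|^2 + |z|^4.
assert sp.expand(rho - (1 + z*zb) * (Rew + w*wb + (z*zb)**2)) == 0

# Complex Hessian of rho at q = (0, w): diagonal with entries
# Re(w) + |w|^2 and 1, matching the displayed formula for H_rho(xi, xi)(q).
H_zz = sp.diff(rho, z, zb)
assert sp.expand(H_zz.subs({z: 0, zb: 0}) - (Rew + w*wb)) == 0
assert sp.diff(rho, z, wb).subs({z: 0, zb: 0}) == 0
assert sp.diff(rho, w, wb).subs({z: 0, zb: 0}) == 1

# The obstruction term (N H_rho(L,L))(0) = (d/dw) H_zz at the origin:
obstruction = sp.diff(H_zz, w).subs({z: 0, zb: 0, w: 0, wb: 0})
assert obstruction == sp.Rational(1, 2)
print("example verified; obstruction =", obstruction)
```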
\medskip
Recall that we actually want to show an estimate like
\begin{align*}
-2d_{b\Omega}(q)(NH_{\rho}(L,L))(p)\geq\epsilon\rho(q).
\end{align*}
Obviously, just multiplying $\rho$ by a small positive number is not going to remove the obstruction.
So, we consider another defining function $\rho\cdot h$ of $\Omega$, where $h$ is some smooth,
positive function. We shall now list a few characteristics of $h$ which should give us some control on
the obstruction term:
\begin{enumerate}
\item In order to use the basic estimate \eqref{E:TayloronHrho} for $\rho\cdot h$, we would need that $
\rho\cdot h$ is still plurisubharmonic at weakly pseudoconvex boundary points. This can be achieved,
if we choose $h$ such that all its first order derivatives vanish at all weakly pseudoconvex boundary
points.
\item We need to consider those third order derivatives of $\rho\cdot h$, which are forced upon us by
the
obstruction term, at weakly pseudoconvex points. If we assume that all first order derivatives of $h$
vanish at weakly pseudoconvex
points (and if we ignore, at least temporarily, that in the obstruction term $N$ does not only act on the
Levi form of $\rho\cdot h$ but also on $L$ and $\overline{L}$), then there are only two terms to be
considered:
\begin{enumerate}
\item There is the product of the original obstruction term of $\rho$ and $h$, which tells us that $h$
itself should not be large at the weakly pseudoconvex points.
\item There are the terms which involve one derivative of $\rho$ and two derivatives of $h$. Since we
are on $b\Omega$, the only such term which can appear is $N$ acting on $\rho$ multiplied
by the Levi form of $h$. This seems to say that we need the Levi form of $h$ to be negative definite
at the weakly pseudoconvex points. One can show that
$NH_{\rho}(L,L)$ equals $LH_{\rho}(N,L)$ at weakly pseudoconvex points
(see \eqref{E:something1} and the following lemma). Thus the obstruction term itself gives us a
function, $-|H_{\rho}(N,L)|^{2}$, whose Levi form is strictly negative definite at those points where
it is needed.
\end{enumerate}
\end{enumerate}
Clearly, we cannot choose $h$ to be $-|H_{\rho}(N,L)|^{2}$, since the latter function vanishes at weakly pseudoconvex points, and hence $\rho\cdot h$ would not be a defining function of $\Omega$. However, taking (1) and (2) into account, $e^{-|H_{\rho}(N,L)|^{2}}$ seems like a suitable candidate for $h$.
\section{Proof of Theorem \ref{T:MainTheorem}}\label{S:Modification}
Let $C>0$ be a large constant, which will be chosen later. We will consider the smooth defining
function
\begin{align*}
r_{C}=r=\rho e^{-C\sigma},\;\;\text{where}\;\;\sigma=|H_{\rho}(N,L)|^{2},
\end{align*}
and we shall work with the vector fields
\begin{align*}
L^{r}=\frac{\frac{\partial r}{\partial z_{2}}\frac{\partial}{\partial z_{1}}
-\frac{\partial r}{\partial z_{1}}\frac{\partial}{\partial z_{2}}}
{|\partial r|}\;\;\text{and}\;\;
N^{r}=\frac{\frac{\partial r}{\partial\overline{z}_{1}}\frac{\partial}{\partial z_{1}}
+\frac{\partial r}{\partial\overline{z}_{2}}\frac{\partial}{\partial z_{2}}}
{|\partial r|},
\end{align*}
which are defined on $\overline{\Omega}\cap U$ (after possibly shrinking $U$). As before, we note that $L^{r}(r)=\langle L^{r},N^{r}\rangle=0$ and $|L^{r}|=|N^{r}|=1$. Moreover, on $b\Omega$ we have
$L^{r}=L$ and $N^{r}=N$.
As before, we suppose that $q\in\Omega_{W}\cap U$. Here, the decomposition of a vector
$\xi\in\mathbb{C}^{2}$ with respect to the vector fields $L^{r}$ and $N^{r}$ at $q$ is different than
before. Clearly, we can write $\xi=a_{q,\xi}L^{r}(q)+b_{q,\xi}N^{r}(q)$ again; however, the constants
$a_{q,\xi}$ and $b_{q,\xi}$ are different from before. Again, for notational convenience, we shall drop
those subscripts $q,\xi$.
Let us first see whether the basic estimate \eqref{E:TayloronHrho} holds for $r$.
The only special property of $\rho$, which we used to derive \eqref{E:TayloronHrho}, is that
$H_{\rho}(L,N)(p)=0$, where $p$ is a weakly pseudoconvex boundary point. Thus, to see whether
\eqref{E:TayloronHrho} holds for $r$ we shall compute $H_{r}(L^{r},N^{r})(p)$. A straightforward computation gives
\begin{align*}
\frac{\partial ^{2}r}{\partial z_{j}\partial\overline{z}_{k}}
=
e^{-C\sigma}\left[-C\frac{\partial\sigma}{\partial\overline{z}_{k}}
\left(\frac{\partial \rho}{\partial z_{j}}
-C\rho\frac{\partial\sigma}{\partial z_{j}}\right)\right.
&+\frac{\partial^{2}\rho}{\partial z_{j}\partial\overline{z}_{k}}\\
&\left.-C\frac{\partial\rho}{\partial\overline{z}_{k}}
\frac{\partial\sigma}{\partial z_{j}}-C\rho\frac{\partial^{2}\sigma}{\partial z_{j}\partial\overline{z}_{k}}
\right].
\end{align*}
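Before specializing to the point $p$, we note that the displayed formula for the mixed second derivatives of $r$ can be checked symbolically with a one-variable stand-in for the indices $j$ and $k$; the following sketch (using sympy, which is our own assumption of tooling) lets $z$ play the role of $z_{j}$ and $\bar z$ that of $\bar z_{k}$.

```python
import sympy as sp

# One-variable stand-in for the indices j, k: z plays the role of z_j and
# zb the role of zbar_k; rho and sigma are undefined functions of (z, zb).
z, zb, C = sp.symbols('z zb C')
rho = sp.Function('rho')(z, zb)
sigma = sp.Function('sigma')(z, zb)

r = rho * sp.exp(-C * sigma)

# The right hand side displayed in the text:
claimed = sp.exp(-C * sigma) * (
    -C * sp.diff(sigma, zb) * (sp.diff(rho, z) - C * rho * sp.diff(sigma, z))
    + sp.diff(rho, z, zb)
    - C * sp.diff(rho, zb) * sp.diff(sigma, z)
    - C * rho * sp.diff(sigma, z, zb)
)

assert sp.expand(sp.diff(r, z, zb) - claimed) == 0
print("formula for the mixed second derivatives of r verified")
```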
Since $H_{\rho}(L,N)(p)=0$, it follows that not only $\sigma$ but also any derivative of $\sigma$ at $p$
vanishes, and thus we obtain
\begin{align*}
\frac{\partial ^{2}r}{\partial z_{j}\partial\overline{z}_{k}}(p)=
\frac{\partial ^{2}\rho}{\partial z_{j}\partial\overline{z}_{k}}(p).
\end{align*}
In particular, $r$ is plurisubharmonic at $p$ and $H_{r}(L^{r},N^{r})(p)=0$. Thus \eqref{E:TayloronHrho}
holds for $r$. That is: there exists a constant $c_{2}>0$ (depending on $r$) such that
\begin{align}\label{E:TayloronHr}
H_{r}(\xi,\xi)(q)\geq |a|^{2}\left[-2d_{b\Omega}(q) \left(N^{r}H_{r}(L^{r},L^{r})\right)(p)+
\frac{\epsilon}{4}r(q)\right]-c_{2}|b|^{2}
\end{align}
holds for all $q\in\Omega_{W}\cap U$ after possibly shrinking $U$.
\medskip
To see whether we truly gain anything by using $r$ instead of $\rho$, we have to figure out how
$(N^{r}H_{r}(L^{r},L^{r}))(p)$ is related to $(NH_{\rho}(L,L))(p)$. We shall prove the following
\begin{align}\label{Claim}
\text{\underline{Claim}:}\;\;
N^{r}H_{r}(L^{r},L^{r})(p)\leq \left[NH_{\rho}(L,L)-C|\partial\rho|\cdot(NH_{\rho}(L,L))^{2}\right](p).
\end{align}
Note that $N^{r}=N$ on $b\Omega$, which implies on $b\Omega$
\begin{align*}
N^{r}H_{r}(L^{r},L^{r})
=
NH_{r}(L^{r},L^{r})
=
\sum_{\ell=1}^{2}N_{\ell}\frac{\partial}{\partial z_{\ell}}
\left(
\sum_{j,k=1}^{2}\frac{\partial ^{2}r}{\partial z_{j}\partial\overline{z}_{k}}L^{r}_{j}
\overline{L}^{r}_{k}
\right).
\end{align*}
Since $L^{r}$ is a weak complex tangential direction at $p$ and $r$ is plurisubharmonic at $p$,
we have
\begin{align*}
\sum_{j,k=1}^{2}\frac{\partial^{2} r}{\partial z_{j}\partial\overline{z}_{k}}
\left(\sum_{\ell=1}^{2}N_{\ell}\frac{\partial L^{r}_{j}}{\partial z_{\ell}}\right)\overline{L}^{r}_{k}(p)
=0=
\sum_{j,k=1}^{2}\frac{\partial^{2} r}{\partial z_{j}\partial\overline{z}_{k}} L_{j}^{r}
\left(\sum_{\ell=1}^{2}N_{\ell}\frac{\partial \overline{L}^{r}_{k}}{\partial z_{\ell}}\right)(p).
\end{align*}
Moreover, we have $L^{r}(p)=L(p)$, which gives us that
\begin{align*}
\left(N^{r}H_{r}(L^{r},L^{r})\right)(p)
=\left(\sum_{j,k,\ell=1}^{2}\frac{\partial^{3}r}{\partial z_{j}\partial\overline{z}_{k}\partial z_{\ell}}
L_{j}\overline{L}_{k}N_{\ell}\right)(p).
\end{align*}
Let us now compute those third derivatives of $r$:
\begin{align*}
&\frac{\partial^{3} r}{\partial z_{j}\partial\overline{z}_{k}\partial z_{\ell}}\\
=&
e^{-C\sigma}\left[-C\frac{\partial\sigma}{\partial z_{\ell}}
\left\{
-C\frac{\partial\sigma}{\partial\overline{z}_{k}}
\left(
\frac{\partial\rho}{\partial z_{j}}-C\rho\frac{\partial\sigma}{\partial z_{j}}
\right)
+\frac{\partial^{2}\rho}{\partial z_{j}\partial\overline{z}_{k}}
\right.
-C\frac{\partial\rho}{\partial\overline{z}_{k}}\frac{\partial\sigma}{\partial z_{j}}
-C\rho\frac{\partial^{2}\sigma}{\partial z_{j}\partial\overline{z}_{k}}\right\}\\
&\hspace{1cm}-C\frac{\partial^{2}\sigma}{\partial\overline{z}_{k}\partial z_{\ell}}
\left(\frac{\partial\rho}{\partial z_{j}}-C\rho\frac{\partial\sigma}{\partial z_{j}}\right)
-C\frac{\partial\sigma}{\partial\overline{z}_{k}}
\left(\frac{\partial^{2}\rho}{\partial z_{j}\partial z_{\ell}}
-C\frac{\partial\rho}{\partial z_{\ell}}\frac{\partial\sigma}{\partial z_{j}}-C\rho
\frac{\partial^{2}\sigma}{\partial z_{j}\partial z_{\ell}}
\right)\\
&\hspace{0.8cm}\left.+\frac{\partial^{3}\rho}{\partial z_{j}\partial\overline{z}_{k}\partial z_{\ell}}
-C\left(\frac{\partial^{2}\rho}{\partial\overline{z}_{k}\partial z_{\ell}}
\frac{\partial\sigma}{\partial z_{j}}+\frac{\partial\rho}{\partial\overline{z}_{k}}
\frac{\partial^{2}\sigma}{\partial z_{j}\partial z_{\ell}}
+\frac{\partial\rho}{\partial z_{\ell}}\frac{\partial^{2}\sigma}{\partial z_{j}\partial\overline{z}_{k}}
+\rho\frac{\partial^{3}\sigma}{\partial z_{j}\partial\overline{z}_{k}\partial z_{\ell}}
\right)\right].
\end{align*}
First note that $\rho$, $\sigma$ and all first order derivatives of $\sigma$ vanish at $p$. Also, since $L$ is complex tangential to $b\Omega$, all the terms involving $\frac{\partial\rho}{\partial z_{j}}$ or
$\frac{\partial\rho}{\partial\overline{z}_{k}}$ vanish after contraction with $L_{j}\overline{L}_{k}$. Thus we get
\begin{align*}
\left(N^{r}H_{r}(L^{r},L^{r})\right)(p)
=
\left(NH_{\rho}(L,L)-C\langle\partial\rho,N\rangle H_{\sigma}(L,L)
\right)(p).
\end{align*}
Since $\langle\partial\rho, N\rangle(p)=|\partial\rho(p)|$, it follows that
\begin{align*}
\left(N^{r}H_{r}(L^{r},L^{r})\right)(p)
=
\left(NH_{\rho}(L,L)-C|\partial\rho|H_{\sigma}(L,L)
\right)(p).
\end{align*}
Recall that $\sigma=|H_{\rho}(N,L)|^{2}$. Using that $H_{\rho}(N,L)(p)=0$, a direct computation gives
us
\begin{align*}
H_{\sigma}(L,L)(p)&=
\left|\langle \partial H_{\rho}(N,L),L\rangle(p)\right|^{2}
+
\left|\langle \overline{\partial} H_{\rho}(N,L),\overline{L}\rangle(p)\right|^{2}\\
&\geq
\left|\langle \partial H_{\rho}(N,L),L\rangle(p)\right|^{2}.
\end{align*}
We compute further
\begin{align*}
\langle\partial H_{\rho}(N,L),L\rangle
&=
\sum_{j=1}^{2}L_{j}\frac{\partial}{\partial z_{j}}
\left(
\sum_{k,\ell=1}^{2}\frac{\partial^{2}\rho}{\partial z_{\ell}\partial\overline{z}_{k}}
N_{\ell}\overline{L}_{k}
\right)\\
&=
\sum_{j,k,\ell=1}^{2}
\frac{\partial^{3}\rho}{\partial z_{j}\partial\overline{z}_{k}\partial z_{\ell}}
L_{j}\overline{L}_{k}N_{\ell}
+\sum_{k,\ell=1}^{2}\frac{\partial^{2}\rho}{\partial z_{\ell}\partial\overline{z}_{k}}
\left(\sum_{j=1}^{2}L_{j}\frac{\partial}{\partial z_{j}}
\left(\overline{L}_{k}N_{\ell}
\right)
\right).
\end{align*}
Since $L$ is a weak complex tangential direction at $p$ and $\rho$ is plurisubharmonic at $p$, it follows that
\begin{align}\label{E:something1}
\langle\partial H_{\rho}(N,L),L\rangle(p)
=
NH_{\rho}(L,L)(p)+
\sum_{k,\ell=1}^{2}
\frac{\partial^{2}\rho}{\partial z_{\ell}\partial\overline{z}_{k}}N_{\ell}
\left(
\sum_{j=1}^{2}L_{j}\frac{\partial \overline{L}_{k}}{\partial z_{j}}
\right)(p).
\end{align}
We claim that the last term on the right hand side vanishes:
\begin{lemma}\label{L:Zisweak}
Suppose $X$ is a smooth vector field which is complex tangential to $b\Omega$.
Furthermore, suppose that $b\Omega$ is weakly pseudoconvex at some boundary point $p$. Define
$Y=\sum_{j,k=1}^{2}\overline{X}_{j}\frac{\partial X_{k}}{\partial \overline{z}_{j}}
\frac{\partial}{\partial z_{k}}$.
Then $Y$ is weak complex tangential to $b\Omega$ at $p$.
\end{lemma}
\begin{proof}
Since $X$ is tangential, $X(\rho)=0$ holds on $b\Omega$. Moreover, we
have $\overline{X}(X(\rho))=0$ on $b\Omega$. Therefore
\begin{align*}
0&=\overline{X}(X(\rho))(p)
=\sum_{j,k=1}^{2}\overline{X}_{j}\frac{\partial}{\partial\overline{z}_{j}}
\left(
X_{k}\frac{\partial \rho}{\partial z_{k}}
\right)(p)\\
&=\sum_{j,k=1}^{2}\overline{X}_{j}\frac{\partial X_{k}}{\partial \overline{z}_{j}}
\frac{\partial \rho}{\partial z_{k}}(p)
+
\sum_{j,k=1}^{2}\frac{\partial^{2}\rho}{\partial z_{k}\partial\overline{z}_{j}}
X_{k}\overline{X}_{j}(p)=Y(\rho)(p),
\end{align*}
where the last step holds since $H_{\rho}(X,X)(p)=0$ by our hypothesis. Thus, $Y$ is a complex
tangential direction at $p$. In particular, $H_{\rho}(Y,Y)(p)=0$: the complex tangent space to $b\Omega$ at $p$ is one-dimensional, so $Y(p)$ is a multiple of $L(p)$, and $H_{\rho}(L,L)(p)=0$ since $p$ is a weakly pseudoconvex boundary point.
\end{proof}
If we set $X=L$, Lemma \ref{L:Zisweak} implies that the last term in
\eqref{E:something1} vanishes.
Thus, we obtain
\begin{align*}
H_{\sigma}(L,L)(p)
\geq
\left|
\langle
\partial H_{\rho}(N,L),L
\rangle(p)
\right|^{2}=
\left|
NH_{\rho}(L,L)(p)
\right|^{2},
\end{align*}
which proves Claim \eqref{Claim}, that is,
\begin{align*}
N^{r}H_{r}(L^{r},L^{r})(p)\leq\left[NH_{\rho}(L,L)-C|\partial\rho|\cdot(NH_{\rho}(L,L))^{2}\right](p).
\end{align*}
Hence, the lower estimate \eqref{E:TayloronHr} on the complex Hessian of $r$ now becomes
\begin{align}
H_{r}(\xi,\xi)(q)
\geq&
|a|^{2}
\left[
2d_{b\Omega}(q)\left\{Cc_{3}\left(NH_{\rho}(L,L)\right)^{2}
-NH_{\rho}(L,L)\right\}(p)+
\frac{\epsilon}{4}r(q)
\right]\notag\\
&-c_{2}|b|^{2}
\end{align}
for $q\in\Omega_{W}\cap U$, where $c_{3}>0$ is chosen such that $|\partial\rho|\geq c_{3}$ on $b\Omega$.
\medskip
We are now set to show that there exist a $C>0$ and a neighborhood $U_{C}$ of $b\Omega$ such that
\begin{align}\label{E:toshow}
2d_{b\Omega}(q)\left[
C c_{3}\left( NH_{\rho}(L,L)\right)^{2}
- NH_{\rho}(L,L)
\right](p)\geq\frac{\epsilon}{4}r(q)
\end{align}
holds for $q\in\Omega_{W}\cap U_{C}$, which would imply that \eqref{E:MainEst} holds for these points.
To make our life easier, let us write
$A_{p}$ for $NH_{\rho}(L,L)(p)$, i.e., \eqref{E:toshow} becomes
\begin{align*}
2d_{b\Omega}(q)\left[C c_{3}A_{p}^{2}-A_{p}\right]\geq\frac{\epsilon}{4}r(q).
\end{align*}
If $C c_{3}A_{p}^{2}-A_{p}\geq 0$, then \eqref{E:toshow} holds trivially. Moreover, increasing $C$ does not destroy this non-negativity.
Suppose that $C c_{3}A_{p}^{2}-A_{p}<0$. First notice that there exists a constant $c_{4}>0$ such that
$d_{b\Omega}(q)\leq c_{4}|\rho(q)|$
for all $q\in\Omega\cap U$. Since $\rho=re^{C\sigma}$, it follows that
$d_{b\Omega}(q)\leq c_{4}e^{C\sigma(q)}|r(q)|$.
Thus, to prove \eqref{E:toshow} it is sufficient to show
\begin{align*}
2c_{4}e^{C\sigma(q)}|r(q)|\left[C c_{3}A_{p}^{2}-A_{p}\right]&\geq \frac{\epsilon}{4}r(q),\;\;
\text{which is equivalent to}\\
e^{C\sigma(q)}\left[C c_{3}A_{p}^{2}-A_{p}\right]&\geq-\frac{\epsilon}{8c_{4}}.
\end{align*}
Let $U_{C}\subset U$ be a neighborhood of $b\Omega$ such that
$z\in\Omega\cap U_{C}$ implies that $e^{C\sigma(z)}\leq 2e^{C\sigma(\pi(z))}$. Notice that $U_{C}$ is a true neighborhood of $b\Omega$, since $\sigma$ is smooth near $b\Omega$. Moreover, in the situation which we are considering, i.e., where $\pi(q)$ is a weakly pseudoconvex boundary point, we then have that $q\in\Omega_{W}\cap
U_{C}$ implies $e^{C\sigma(q)}\leq 2$.
Therefore, to obtain \eqref{E:toshow} it is sufficient that
\begin{align*}
Cc_{3}A_{p}^{2}-A_{p}\geq-\frac{\epsilon}{16 c_{4}}
\end{align*}
holds on $\Omega_{W}\cap U_{C}$. We remark that neither $c_{3}, c_{4}$ nor $A_{p}$ depend on the
choice of $C$. Thus, choosing
\begin{align*}
C=\max\left\{0,\max_{p\in b\Omega\;\text{weak}}\frac{\frac{-\epsilon}{16c_{4}}+A_{p}}{c_{3}A_{p}^{2}}
\right\}
\end{align*}
proves \eqref{E:toshow} on $\Omega\cap U_{C}$, which implies that
\begin{align}\label{E:Estonweak}
H_{r}(\xi,\xi)(q)\geq \frac{\epsilon}{2}r(q)|\xi|^{2}-c_{2}|\langle\partial r(q),\xi\rangle|^{2}
\end{align}
holds on $\Omega_{W}\cap U_{C}$.
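Indeed, this choice of $C$ yields \eqref{E:toshow} at every weakly pseudoconvex point: the case $A_{p}=0$ is trivial, since then $Cc_{3}A_{p}^{2}-A_{p}=0\geq-\frac{\epsilon}{16c_{4}}$, while for $A_{p}\neq 0$ we have $c_{3}A_{p}^{2}>0$, so that by the definition of $C$
\begin{align*}
Cc_{3}A_{p}^{2}
\geq
c_{3}A_{p}^{2}\cdot\frac{-\frac{\epsilon}{16c_{4}}+A_{p}}{c_{3}A_{p}^{2}}
=-\frac{\epsilon}{16c_{4}}+A_{p},
\quad\text{i.e.,}\quad
Cc_{3}A_{p}^{2}-A_{p}\geq-\frac{\epsilon}{16c_{4}}.
\end{align*}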
\medskip
Let us now show that an estimate similar to \eqref{E:Estonweak} holds near $\Omega_W\cap U_C$.
Note first that our computations leading up to
\eqref{E:Estonweak} imply that $N^r H_r(L^r,L^r)\leq\frac{\epsilon}{16}$ holds on the set of the weakly
pseudoconvex boundary points of $\Omega$. Hence by continuity, there exists a neighborhood
$W$ of the set of weakly pseudoconvex boundary points such that
$N^rH_r(L^r,L^r)\leq\frac{\epsilon}{8}$ on $W\cap b\Omega$. We may assume
that $W\subset U_C$ and that $q\in W\cap\Omega$ implies $\pi(q)\in W\cap b\Omega$.
Using Taylor's formula, it follows
for $q\in W\cap\Omega$ that
\begin{align*}
H_r(L^r,L^r)(q)&\geq
H_r(L^r,L^r)(\pi(q))+
\frac{\epsilon}{4}r(q)+\mathcal{O}(r^2(q))\\
&\geq H_r(L^r,L^r)(\pi(q))+\frac{\epsilon}{2}r(q)
\end{align*}
after possibly shrinking $W$.
Another application of Taylor's formula gives us for $q\in W\cap\Omega$
\begin{align*}
H_r(\xi,\xi)(q)\geq&
|a|^2\left[H_r(L^r,L^r)(\pi(q))+\frac{\epsilon}{2}r(q)\right]
+
|b|^2H_r(N^r,N^r)\\
&\hspace{3.35cm}+2|a||b|\left[|H_r(L^r,N^r)(\pi(q))|+\mathcal{O}(r(q))\right]\\
\geq&
|a|^2\left[H_r(L^r,L^r)(\pi(q))+\epsilon r(q)\right]-c_5|\langle\partial r(q),\xi\rangle|^2\\
&\hspace{3.3cm}+2|a||b||H_r(L^r,N^r)(\pi(q))|,
\end{align*}
where the last step follows by the Cauchy-Schwarz inequality for some constant
$c_5>0$ sufficiently large. Since $H_r(L^r,L^r)(\pi(q))$ is positive, we only need to estimate the term
$|H_r(L^r,N^r)(\pi(q))|$.
Note first that $r$ is not plurisubharmonic on $b\Omega$ at strictly
pseudoconvex boundary points, though $\rho$ is. However, since any derivative of $\sigma$
is $\mathcal{O}(H_\rho(N^r,L^r))$ on $b\Omega$ and since $\rho$ is plurisubharmonic on $b\Omega$,
it follows that there exists a constant $c_6>0$ such that
\begin{align*}
|H_r(L^r,N^r)|^2\leq c_6H_r(L^r,L^r)\left[H_r(N^r,N^r)+c_6\right]\;\;\text{on}\;\;b\Omega.
\end{align*}
The Cauchy-Schwarz inequality now implies that for some constant $c_7>0$ we have
\begin{align}\label{E:EstonnbhdW}
H_r(\xi,\xi)(q)\geq
\epsilon r(q)|\xi|^2-c_7\left|\langle\partial r(q),\xi\rangle\right|^2
\end{align}
for $q\in W\cap \Omega$.
We define $r_{1}=r+Kr^{2}$ for some $K>2c_{7}$. Note that
\begin{align*}
H_{r_{1}}(\xi,\xi)(q)=(1+2Kr)H_{r}(\xi,\xi)(q)+2K|\langle\partial r,\xi\rangle|^{2}.
\end{align*}
Let $U_{K}=\{\;z\in W\;|\;1+2Kr(z)\geq\frac{1}{2}\;\}$, then \eqref{E:EstonnbhdW} implies
for $q\in\Omega\cap U_{K}$
\begin{align*}
H_{r_{1}}(\xi,\xi)(q)&\geq\frac{1}{2}\epsilon r(q)|\xi|^{2}
+(2K-c_{7})|\langle\partial r(q),\xi\rangle|^{2}\\
&\geq \epsilon r_{1}(q)|\xi|^{2}+K|\langle\partial r_{1}(q),\xi\rangle|^{2}.
\end{align*}
That is, we have shown that \eqref{E:MainEst} holds on $\Omega\cap U_K$.
Next we show that \eqref{E:MainEst} also holds near the remaining strictly pseudoconvex boundary
points. We note that $S=b\Omega\setminus(W \cap b\Omega)$ is a closed
subset of the set of the strictly pseudoconvex boundary points. This implies, as long as $K>0$ is
chosen
sufficiently large, that there exists a neighborhood $U_{S}$ of $S$ such that $r_{1}$ is strictly
plurisubharmonic on $\Omega\cap U_{S}$. In particular, there exists a neighborhood $V$ of
$b\Omega$ such that
\begin{align*}
H_{r_{1}}(\xi,\xi)(q)\geq\epsilon r_{1}(q)|\xi|^{2}+K|\langle\partial r_{1}(q),\xi\rangle|^{2}
\end{align*}
for all $q\in\Omega\cap V$ and $\xi\in\mathbb{C}^{2}$. This proves \eqref{E:MainEst}.
\medskip
The proof of \eqref{E:MainEst2} is very similar to the above proof of \eqref{E:MainEst}. In fact, we only
need to change a few signs to derive \eqref{E:MainEst2}. First, one realizes that the basic estimate
\eqref{E:TayloronHrho} becomes: there exist a neighborhood $U$ of $b\Omega$ and a constant
$c_{2}>0$ such that
\begin{align*}
H_{\rho}(\xi,\xi)(q)\geq |a|^{2}\left[ 2d_{b\Omega}(q)NH_{\rho}(L,L)(p)-\frac{\epsilon}{4}\rho(q)\right]
-c_{2}|b|^{2}
\end{align*}
for $q\in\Omega^{C}\cap U$. Here, as before, the points $q$ in consideration are such that their
orthogonal projections $\pi(q)=p$
onto $b\Omega$ are weakly pseudoconvex boundary points. As one would expect, we have an
obstruction to plurisubharmonicity of $\rho$ outside of $\overline{\Omega}$ at those weakly
pseudoconvex boundary points where $H_{\rho}(L,L)$ decreases along the outward normal. Thus,
we should multiply $\rho$ by a smooth, positive function which is strictly plurisubharmonic at those
boundary points where $NH_{\rho}(L,L)$ is negative, i.e., we work with the function
$r=\rho e^{C|H_{\rho}(N,L)|^{2}}$ for $C>0$. Using arguments analogous to the ones in the proof of
\eqref{E:MainEst}, one can then show that for any $\epsilon>0$ and $K>0$, there exist a
neighborhood $V$ of $b\Omega$ and a constant $C>0$ such that the complex Hessian of
$r_{2}=r+Kr^{2}$ satisfies \eqref{E:MainEst2} on $\Omega^{C}\cap V$.
\medskip
\section{Proof of Corollary \ref{C:DFexponent}}\label{S:DFexponent}
In the following section, we give the proof of Corollary \ref{C:DFexponent}. We start out with part (i) by
showing first that for
any $\eta\in(0,1)$ there exist a $\delta>0$, a smooth defining function $r$ of $\Omega$ and a
neighborhood $W$ of $b\Omega$ such that
$g_{1}=-(-re^{-\delta|z|^{2}})^{\eta}$ is strictly plurisubharmonic on $\Omega\cap W$.
Let $\eta\in(0,1)$ be fixed, and let $r$ be a smooth defining function of $\Omega$. For notational ease
we write $\phi=\delta|z|^{2}$ for $\delta>0$. Here, $r$ and $\delta$ are to be chosen later.
Let us compute the complex Hessian of $g_{1}$ on $\Omega\cap W$:
\begin{align*}
H_{g_{1}}(\xi,\xi)=&\eta(-r)^{\eta-2}e^{-\phi\eta}
\left[
(1-\eta)\right.\left|\langle\partial r,\xi\rangle\right|^{2}-rH_{r}(\xi,\xi)\\
&+2r\eta Re\left(\langle\partial r,\xi\rangle\overline{\langle\partial\phi,\xi\rangle}\right)
\left.
-r^{2}\eta\left|\langle\partial\phi,\xi\rangle\right|^{2}
+r^{2}H_{\phi}(\xi,\xi)\right].
\end{align*}
An application of the Cauchy-Schwarz inequality gives
\begin{align*}
2r\eta Re\left(\langle\partial r,\xi\rangle\overline{\langle\partial\phi,\xi\rangle}\right)
\geq -(1-\eta)\left|\langle\partial r,\xi\rangle\right|^{2}
-\frac{r^{2}\eta^{2}}{1-\eta}\left|\langle\partial\phi,\xi\rangle\right|^{2}.
\end{align*}
Therefore, we obtain for the complex Hessian of $g_{1}$ on $\Omega$ the following:
\begin{align*}
H_{g_{1}}(\xi,\xi)
\geq
\eta(-g_{1})(-r)^{-1}\left[H_{r}(\xi,\xi)
+
(-r)\left\{H_{\phi}(\xi,\xi)-\frac{\eta}{1-\eta}\left|\langle\partial\phi,\xi\rangle\right|^{2}
\right\}
\right].
\end{align*}
Notice that
\begin{align*}
H_{\phi}(\xi,\xi)-\frac{\eta}{1-\eta}\left|\langle\partial\phi,\xi\rangle\right|^{2}
&=
\delta\left(H_{|z|^{2}}(\xi,\xi)-\frac{\eta}{1-\eta}\delta\left|\langle\overline{z},
\xi\rangle\right|^{2}
\right)\\
&\geq
\delta|\xi|^{2}\left(
1-\frac{\eta D}{1-\eta}\delta
\right),
\end{align*}
where $D:=\max_{z\in\overline{\Omega}}|z|^{2}$. Now set
$\delta=\frac{1-\eta}{2\eta D}$; it is noteworthy that $\delta$ goes to $0$ as $\eta$ approaches
$1^{-}$. We now have
\begin{align*}
H_{\phi}(\xi,\xi)-\frac{\eta}{1-\eta}\left|\langle\partial\phi,\xi\rangle\right|^{2}
\geq\frac{\delta}{2}|\xi|^{2},
\end{align*}
which implies that
\begin{align}\label{E:generalDFest}
H_{g_{1}}(\xi,\xi)\geq \eta(-g_{1})(-r)^{-1}
\left[
H_{r}(\xi,\xi)+\frac{\delta}{2}(-r)|\xi|^{2}
\right]
\end{align}
holds on $\Omega$.
By \eqref{E:MainEst}
there exist a neighborhood $W$ of $b\Omega$ and a smooth defining function $r_{1}$
of $\Omega$ such that
\begin{align*}
H_{r_{1}}(\xi,\xi)(q)
\geq
\frac{\delta}{4} r_{1}(q)|\xi|^{2}
\end{align*}
for all $q\in\Omega\cap W$. Setting $r=r_{1}$ and using \eqref{E:generalDFest}, we obtain
\begin{align*}
H_{g_{1}}(\xi,\xi)(q)\geq
\eta(-g_{1}(q))\cdot\frac{\delta}{4}|\xi|^{2}\;\;\text{for}\;\;q\in\Omega\cap W.
\end{align*}
It follows by standard arguments that there exists a defining function $\widetilde{r}_{1}$ such that
$-(-\widetilde{r}_{1})^{\eta}$ is strictly
plurisubharmonic on $\Omega$; for details see p.~133 in \cite{DF1977A}.
This proves part (i) of Corollary \ref{C:DFexponent}.
\medskip
The proof of part (ii) is similar to the proof of part (i). Let
$\eta>1$ be fixed. We would like to show that there exists a neighborhood $V$ of $b\Omega$ such
that $g_{2}=(re^{\delta|z|^{2}})^{\eta}$ is strictly
plurisubharmonic on
$\overline{\Omega}^{C}\cap V$ for some smooth defining function $r$ and some constant
$\delta>0$. Let $W$ be a neighborhood of $b\Omega$.
Choose $\delta=\frac{\eta-1}{2\eta D}$, where $D=\max_{z\in\overline{W}}|z|^{2}$. Then calculations
similar to the ones in the proof of part (i) yield
\begin{align*}
H_{g_{2}}(\xi,\xi)\geq \eta g_{2}r^{-1}\left[
H_{r}(\xi,\xi)+\frac{\delta}{2}r|\xi|^{2}
\right]\;\;\text{on}\;\;\overline{\Omega}^{C}\cap W.
\end{align*}
By \eqref{E:MainEst2} there exist a neighborhood $V$ of $b\Omega$ and a smooth defining function
$r_{2}$ of $\Omega$ such that
\begin{align*}
H_{r_{2}}(\xi,\xi)(q)\geq-\frac{\delta}{4}r_{2}(q)|\xi|^{2}
\end{align*}
for all $q\in\overline{\Omega}^{C}\cap V$. Since we may assume that $V\subset W$, it follows that
\begin{align*}
H_{g_{2}}(\xi,\xi)(q)\geq\eta g_{2}(q)\cdot\frac{\delta}{4}|\xi|^{2}\;\;\text{for}\;\;
q\in\overline{\Omega}^{C}\cap V,
\end{align*}
which proves part (ii) of Corollary \ref{C:DFexponent}.
\section{Introduction}
\figconflict
\IEEEPARstart{S}{emantic} segmentation is a fundamental research field in computer vision. It aims to predict the class of each pixel in a given image. The availability of large labeled datasets~\cite{everingham2011pascal,zhou2017scene} and the development of deep neural networks~\cite{minaee2020image} have resulted in significant advancements. The vast majority of semantic segmentation research focuses on the scenario where all data is jointly available for training.
However, for many real-world applications, this might not be the case, and it would be necessary to incrementally learn the semantic segmentation algorithm. Examples include scenarios where the learner has only limited memory and cannot store all data (common in robotics), or where privacy policies might prevent the sharing of data (common in health care)~\cite{de2019continual,parisi2019continual}.
Incremental learning aims to mitigate catastrophic forgetting~\cite{mccloskey1989catastrophic} which occurs when neural networks update their parameters for new tasks. Most work has focused on image classification~\cite{de2019continual,parisi2019continual,kemker2018measuring}, while less attention has been dedicated to other applications, such as object detection~\cite{shmelkov2017incremental} and semantic segmentation~\cite{tasar2019incremental,michieli2019incremental,cermelli2020modeling}. Recently MiB~\cite{cermelli2020modeling} achieved state-of-the-art results on incremental semantic segmentation. Its main novelty is to model the background distribution shift during each incremental training session by reformulating the conventional distillation loss. However, the forgetting of learned knowledge is still severe due to the lack of previous labeled data.
The main challenge of incremental semantic segmentation is the inaccessibility of previous data. We here consider the scenario where no data of previous tasks can be stored, as is common in incremental semantic segmentation~\cite{michieli2019incremental,cermelli2020modeling}. Storing of data could be prohibited due to privacy concerns or government regulations, and is one of the settings studied in class-incremental learning~\cite{parisi2019continual}. To address this problem, we propose to use self-training to exploit unlabeled auxiliary data. Self-training~\cite{nigam2000analyzing,zou2018unsupervised,zou2019confidence} has been successfully applied to semi-supervised learning and domain adaptation. It aims to first predict pseudo labels of the unlabeled data and then learn from them iteratively. To the best of our knowledge, self-training has not yet been explored for incremental learning. One of the obstacles to self-training for class-incremental semantic segmentation is the fusion of the pseudo labels from the previous and current models. We show some challenges in Figure~\ref{fig:v-conflict}. Note that the `new model' is trained to be optimal for the labels of the current task and, while learning these, forgets the previous classes. To successfully join the knowledge of the old and new models, we aim to apply self-training on auxiliary data. However, as seen in Figure~\ref{fig:v-conflict}, joining the label information from both models is not straightforward.
Therefore, we propose self-training for incremental semantic segmentation. The idea is to introduce self-training on unlabeled data to aid the incremental semantic segmentation task. Specifically, we first train a new model with the new data; then pseudo labels are predicted on auxiliary unlabeled data by both the old and new models and fused. We retrain the model on the auxiliary data to mitigate catastrophic forgetting. Simply fusing the pseudo labels from the two models causes problems due to conflicting predictions. Thus, we propose a conflict reduction mechanism. Additionally, pseudo labels predicted by neural networks are often over-confident~\cite{pereyra2017regularizing, zou2019confidence}, which might mislead the training process. We therefore propose to maximize the self-entropy to smooth the predicted distribution and reduce the confidence of predictions. Our method outperforms state-of-the-art methods by a significant relative gain of up to 114\% on Pascal-VOC 2012 and 8.5\% on the challenging ADE20K.
Our main contributions are:
\begin{itemize}
\item We are the first to apply self-training for class-incremental semantic segmentation to mitigate forgetting by rehearsal of previous knowledge using auxiliary unlabeled data.
\item We propose a conflict reduction mechanism to tackle the conflict problem when fusing the pseudo labels from old and new models for auxiliary data.
\item We show that maximizing the self-entropy loss can smooth the overconfident predictions and further improve performance.
\item We demonstrate state-of-the-art results, obtaining up to 114\% relative gain on the Pascal-VOC 2012 dataset and 8.5\% on the challenging ADE20K dataset compared to the MiB method.
\end{itemize}
\section{Related Work}
\subsubsection{Semantic Segmentation.}
Image segmentation has achieved significant improvements with the advance of deep neural networks~\cite{minaee2020image}. Fully Convolutional Networks (FCNs)~\cite{long2015fully} are among the first works on semantic segmentation to use only convolutional layers; they can take input images of arbitrary size and output segmentation maps. Encoder-decoder based architectures are popular for semantic segmentation. The deconvolutional (transposed convolutional) layer~\cite{noh2015learning} is proposed to generate accurate segmentation maps. SegNet~\cite{badrinarayanan2017segnet} proposes to use the indices of the encoder max-pooling to upsample the corresponding decoder. Additionally, multi-resolution information~\cite{zhao2017pyramid,he2019adaptive}, attention mechanisms~\cite{chen2016attention,fu2019dual} and dilated convolution (atrous convolution)~\cite{chen2017deeplab,chen2017rethinking,chen2018encoder} are further developed to improve performance. Mask R-CNN~\cite{he2017mask} obtains promising performance on instance segmentation for each object.
\subsubsection{Incremental Learning.}
The problem of catastrophic forgetting~\cite{mccloskey1989catastrophic} has been studied extensively in recent years, as neural networks are required to adapt to new tasks. Most work has focused on image classification. It can be roughly divided into three categories according to~\cite{de2019continual,parisi2019continual,kemker2018measuring}: regularization-based~\cite{li2017learning,kirkpatrick2017overcoming,zenke2017continual,jung2018less, aljundi2018memory}, rehearsal-based~\cite{rebuffi2017icarl,chaudhry2018riemannian} and architecture-based methods~\cite{rusu2016progressive,mallya2018piggyback,mallya2018packnet}. Regularization-based methods alleviate forgetting of previously learned knowledge by introducing an additional regularization term to constrain the output embeddings or parameters while training the current task. Knowledge distillation has been very popular for several methods~\cite{li2017learning,castro2018end, hou2019learning}.
Rehearsal-based methods usually need to store exemplars (small amounts of data) from previous tasks, which are then replayed. Some approaches propose alternative ways of replay to avoid storing exemplars, including generative models for rehearsal. Architecture-based methods dynamically grow the network to increase the capacity to learn new tasks, while the old part of the network can be protected from forgetting.
Due to privacy issues or memory limits, it is not always possible to access data from previous tasks, which causes catastrophic forgetting of previous knowledge. In this paper, we consider this more difficult setting of \emph{exemplar-free class-IL}, in which the storing of previous task data is prohibited. Unlabeled data is seen as an alternative to secure privacy and mitigate forgetting. Some works apply this idea to image classification. Zhang et al.~\cite{zhang2020class} propose to train a separate model with only new data and then use auxiliary data to train a student model using a distillation loss with both the new and old models. Lee et al.~\cite{lee2019overcoming} propose confidence-based sampling to build an external dataset.
Recently, the attention of continual learning has also moved to other applications, such as object detection~\cite{shmelkov2017incremental} and semantic segmentation~\cite{michieli2019incremental,cermelli2020modeling}. Shmelkov et al.~\cite{shmelkov2017incremental} propose to use a distillation loss on both the bounding box regression and classification outputs for object detection. An incremental few-shot detection (iFSD) setting is proposed in~\cite{perez2020incremental}, where new classes must be learned incrementally (without revisiting base classes) and with few examples. Michieli et al.~\cite{michieli2019incremental} propose to use distillation both on the output logits and on intermediate features for incremental semantic segmentation. Recently, MiB (Cermelli et al.~\cite{cermelli2020modeling}) achieves state-of-the-art performance by considering previous classes as background for the current task and current classes as background for distillation. Incremental semantic segmentation has also been investigated for remote sensing~\cite{tasar2019incremental} and medical data~\cite{ozdemir2018learn}.
\begin{figure*}[tb]
\begin{center}
\includegraphics[width=1\textwidth]{./figure/framework_main.pdf}
\caption{Overview of our method. It includes two models, one learned from previous sessions $\mathcal{T}^{t-1}$ (upper left) and one from the current session $\mathcal{T}^t$ (bottom left). Pseudo labels are generated and fused by leveraging unlabeled data (right). The conflict reduction mechanism is proposed to generate more accurate pseudo labels for further training. }
\label{fig:framework}
\end{center}
\end{figure*}
\subsubsection{Self-Training.}
Self-training~\cite{nigam2000analyzing,grandvalet2005semi,lee2013pseudo} is a simple but powerful strategy, which aims to leverage unlabeled data as ``pseudo-labeled'' training samples using a teacher model trained on existing labeled data. Self-training iteratively generates one-hot pseudo-labels according to the prediction confidence of a teacher model, and then retrains the network on these pseudo-labels with the unlabeled data. Recently, it has achieved significant success in semi-supervised learning~\cite{grandvalet2005semi,triguero2015self} and domain adaptation~\cite{zou2018unsupervised, chen2011co}. However, the predicted pseudo labels tend to be over-confident, which might mislead the training process and hurt the learning behaviour~\cite{bagherinezhad2018label}. Different methods for noisy label learning, such as label smoothing~\cite{pereyra2017regularizing} and confidence regularization~\cite{zou2018unsupervised,zou2019confidence}, are proposed to mitigate this phenomenon. In this work, we explore self-training with confidence regularization for learning semantic segmentation sequentially.
\section{Proposed Method}
\subsection{Class-Incremental Semantic Segmentation}
Semantic segmentation aims to assign each pixel $(x_i,x_j)$ $(0 < i\leq h, 0 <j \leq w)$ of an image $x$ a label
$y_{i,j} \in \mathcal{Y}$, $\mathcal{Y} = \{0, 1, \ldots, N-1\}$, representing the semantic class. Here,
$h$ and $w$ are the height and width of the input image $x$,
$N$ is the number of classes, and we define class $0$ to be the background.
The setting for class incremental learning (CIL) for semantic segmentation was first defined by
\cite{michieli2019incremental,cermelli2020modeling}. Training for CIL is conducted over $T$ different training sessions. During each training session, we only have the training data of the newly available classes, while the training data of the previously learned classes is no longer accessible. Each session introduces \emph{novel categories} to be learned.
Specifically, the $t$-th training session contains data $\mathcal{T}^t= (X^t, Y^t)$, where $X^t$ contains the input images for the current task and $Y^t$ is the corresponding ground truth. The current label set $\mathcal{Y}^{t}$ is the combination of the previous label set $\mathcal{Y}^{t-1}$ and a set of new classes $C^{t}$, such that $\mathcal{Y}^{t}=\mathcal{Y}^{t-1} \cup C^t$. Only pixels of new classes are annotated in $X^t$; the remaining pixels are assigned to the background. The loss function for the current training session is defined as follows:
\begin{equation}
\label{eq:obj-general}
\mathcal{L}_{ce}(\theta^t)= \frac{1}{|\mathcal{T}^t|
}\sum_{(x,y)\in\mathcal{T}^t}
\ell_{ce}({\theta^t};x,y)
\end{equation}
where $\ell_{ce}$ is the standard cross-entropy loss used for supervised semantic segmentation and $|\mathcal{T}^t|$ is the total number of samples in the current training session $\mathcal{T}^t$.
\subsection{Self-Training for Incremental Learning}
A naive approach to address the class-incremental learning (CIL) problem is to train a model $f_{\theta^t}$ on each set $X^{t}$ sequentially by simply fine-tuning $f_{\theta^t}$ from the previous model $f_{\theta^{t-1}}$. This approach would suffer from catastrophic forgetting, as the parameters of the model are biased to the current categories because no samples from previous data $X^{t-1}$ are replayed. As discussed in the related work, various approaches to prevent forgetting could be considered, like regularization~\cite{li2017learning,kirkpatrick2017overcoming,zenke2017continual,jung2018less, aljundi2018memory}, or rehearsal methods~\cite{rebuffi2017icarl,chaudhry2018riemannian}. Instead, we propose to use self-training. To the best of our knowledge, we are the first to investigate self-training for incremental learning. Our goal is to use unlabeled data to generate \emph{pseudo labels} from the previous model and current model. In this way, the models are able to 'revisit' the previous knowledge and avoid catastrophic forgetting of previously learned categories.
We present an overview of our framework for incremental semantic segmentation with a self-training mechanism in Figure~\ref{fig:framework}. We first initialize the current model $f_{\theta^t}$ from $f_{\theta^{t-1}}$ (trained on the input data set $X^{t-1}$ in training session $\mathcal{T}^{t-1}$) and then update the parameters using $X^{t}$ in training session $\mathcal{T}^t$, the same procedure as used for the fine-tuning method for CIL. This new model is trained to be optimal for the new classes $C^t$; however, it forgets the previous classes $\mathcal{Y}^{t-1}$. To overcome this problem, we combine the knowledge from both the old and new models in order to predict all categories we have seen.
We generate pseudo label $\hat{P}^{t-1}$ and $\hat{P}^{t}$ by feeding the unlabeled data $A$ to the previous model $f_{\theta^{t-1}}$ and the current model $f_{\theta^t}$, respectively. The generated pseudo labels of the auxiliary data from both models have the potential to represent all categories the models have encountered.
\begin{figure*}[tb]
\begin{center}
\includegraphics[width=0.95\textwidth]{./figure/train-2.pdf}
\caption{Illustration of fusing pseudo labels by conflict reduction mechanism. The category 'train' is not learned in previous training sessions, therefore it is most likely to be predicted as the similar category 'bus' but not well segmented (top image). After learning 'train' on current task, it is predicted as 'train' correctly (bottom image). To generate more accurate pseudo labels, conflict reduction is proposed to fuse the two predictions (right image).}
\label{fig:conflict}
\end{center}
\end{figure*}
We require a strategy to fuse the pseudo-labels of both models for each image in the auxiliary dataset into a single pseudo-labeled image. We first propose a fusion based on the idea that the predictions from the previous model $\hat{P}^{t-1}$ should be trusted and we only change those background pixels that are considered foreground in the current model $\hat{P}^t$. In the next section, we improve this fusion of pseudo labels by considering a conflict reduction mechanism.
When $\hat{P}^{t-1}$ and $\hat{P}^{t}$ both consider a pixel as background, we directly assign the background label 0 to the corresponding pixel of the fused pseudo label $\hat{P}^{t}_A$; when $\hat{P}^{t-1}$ considers the pixel as background (label 0) while $\hat{P}^{t}$ classifies it as foreground (label larger than 0), $\hat{P}^{t}_{A}$ is set to $\hat{P}^t$, since the pixel is likely to belong to a category learned by the current model $f_{\theta^t}$; if $f_{\theta^{t-1}}$ considers the pixel as foreground, $\hat{P}^{t}_{A}$ is set to $\hat{P}^{t-1}$: we give the previous model higher priority than the current model, since the previous model accumulates more and more knowledge. The final fused pseudo label for the auxiliary data, $\hat{P}^{t}_{A}$, can be written as follows:
\begin{equation}
\hat{P}^{t}_{A}=
\begin{cases}
0 & \text{ if } \hat{P}^t =0 \text{ and } \hat{P}^{t-1}=0 \\
\hat{P}^t & \text{ if } \hat{P}^t>0 \text{ and } \hat{P}^{t-1}=0 \\
\hat{P}^{t-1}& \text{ if } \hat{P}^{t-1} >0
\end{cases}
\label{eq:pseudo1}
\end{equation}
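The fusion rule of Eq.~\ref{eq:pseudo1} can be sketched in plain Python, with pseudo labels stored as nested lists of per-pixel class indices (the function and variable names are illustrative, not taken from any released code):

```python
def fuse_naive(p_old, p_new):
    """Fuse per-pixel pseudo labels following Eq. (2): the previous model's
    foreground predictions take priority; the new model's labels are used
    only where the previous model predicts background (label 0)."""
    fused = []
    for row_old, row_new in zip(p_old, p_new):
        fused_row = []
        for lbl_old, lbl_new in zip(row_old, row_new):
            if lbl_old > 0:           # previous model sees foreground: trust it
                fused_row.append(lbl_old)
            else:                     # previous model sees background:
                fused_row.append(lbl_new)   # take 0 or a newly learned class
        fused.append(fused_row)
    return fused
```

For example, a pixel labeled 3 by the old model keeps label 3 even if the new model disagrees, while a pixel that only the new model marks as foreground receives the new label.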
Finally, we update the joint model (initialized from $f_{\theta^{t-1}}$) using the auxiliary data $A$ and the pseudo label $\hat{P}^{t}_{A}$. We repeat the above procedure until all tasks are learned. As mentioned above, the auxiliary unlabeled dataset is expected to be related to the labeled dataset, in the sense that they share some similar or overlapping categories. For many practical applications, the assumption of an available auxiliary dataset is realistic. For instance, in the application of autonomous driving, there is an abundant amount of unlabeled data available for training. In our experiments in Section~\ref{sec:exp}, we use the COCO dataset as the auxiliary dataset for Pascal-VOC 2012, and Places365 for ADE20K. We will also investigate how unrelated data affects the self-training process.
\subsection{Conflict Reduction}
In the section above, we introduced how self-training is adapted in our framework for incremental semantic segmentation to help mitigate catastrophic forgetting of old categories. The pseudo labels of the auxiliary data are generated by the previous and current models, respectively. The two pseudo labels are then fused directly into the final pseudo label; however, there might be wrong fusions due to similar categories that are often mis-classified. Therefore, we propose conflict reduction to further improve the accuracy of the pseudo label fusion.
Assume `bus' is a category added in training session $\mathcal{T}^{t-s}$ $(s\geq 1)$ and `train' is learned in the current training session $\mathcal{T}^t$ (see Figure~\ref{fig:conflict}). When an image from the auxiliary data containing `train' is fed into the model $f_{\theta^{t-1}}$, as `train' has never been learned in the previous training sessions, the model assigns the maximum probability to the most similar category label `bus' due to the usage of the cross-entropy loss. Following Eq.~\ref{eq:pseudo1}, the fused pseudo labels are automatically taken from $\hat{P}^{t-1}$ if the previous model regards the pixels as foreground, with no need to check the pseudo label $\hat{P}^t$ obtained from the current model. This results in mis-classification and a drop in performance of the overall semantic segmentation system. In fact, in this case the current model $f_{\theta^t}$ labels the train as `train' very confidently, since it just learned to recognize trains from the data $X^t$. Conflicts frequently occur between similar categories, such as `sheep' and `cow', `sofa' and `chair', or `bus' and `train'. Therefore, a conflict reduction module is proposed to obtain a better fusion when both the previous and current models consider the pixel as foreground ($\hat{P}^t>0$ and $\hat{P}^{t-1}>0$). We update Eq.~\ref{eq:pseudo1} as follows:
\begin{equation}
\label{eq:conflict}
\hat{P}^{t}_{A}=
\begin{cases}
0 & \text{ if } \hat{P}^t =0 \text{ and } \hat{P}^{t-1}=0 \\
\hat{P}^t & \text{ if } \hat{P}^t >0 \text{ and } \hat{P}^{t-1}=0 \\
\hat{P}^{t-1} & \text{ if } \hat{P}^t=0 \text{ and } \hat{P}^{t-1}>0 \\
\hat{P}^t & \text{ if } \hat{P}^t, \hat{P}^{t-1} >0 \text{ and } \max (\hat{q}^{t}) > \max (\hat{q}^{t-1} ) \\
\hat{P}^{t-1} & \text{ if } \hat{P}^t, \hat{P}^{t-1} >0 \text{ and } \max (\hat{q}^{t}) < \max (\hat{q}^{t-1}) \\
\end{cases}
\end{equation}
where $\hat{q}^{t-1}$ and $\hat{q}^{t}$ are output probabilities from previous and current models, respectively.
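Per-pixel, the fusion rule of Eq.~\ref{eq:conflict} can be written vectorially. The sketch below is an illustrative numpy implementation (not the authors' code); the tie case $\max(\hat{q}^t) = \max(\hat{q}^{t-1})$, which the equation leaves unspecified, is resolved here in favour of the previous model.

```python
import numpy as np

def fuse_pseudo_labels(p_prev, p_curr, q_prev, q_curr):
    """Fuse per-pixel pseudo labels from the previous and current models.

    p_prev, p_curr : integer label maps (0 = background).
    q_prev, q_curr : max softmax confidences of those predictions.
    Where both models predict foreground, the more confident one wins
    (ties go to the previous model by assumption).
    """
    fused = np.zeros_like(p_curr)
    only_curr = (p_curr > 0) & (p_prev == 0)
    only_prev = (p_curr == 0) & (p_prev > 0)
    both = (p_curr > 0) & (p_prev > 0)
    fused[only_curr] = p_curr[only_curr]
    fused[only_prev] = p_prev[only_prev]
    curr_wins = both & (q_curr > q_prev)
    fused[curr_wins] = p_curr[curr_wins]
    prev_wins = both & ~curr_wins
    fused[prev_wins] = p_prev[prev_wins]
    return fused
```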
\subsection{Maximizing Self-Entropy}
Pseudo-labeling is a simple yet effective technique commonly used in self-training. One major concern when training with pseudo labels is the lack of any guarantee of label correctness~\cite{zou2019confidence}. To address this problem, we maximize a self-entropy loss to relax the pseudo labels and redistribute a certain amount of confidence to other classes. We soften the pseudo label by maximizing the self-entropy $\mathcal{L}_{se}(\theta^t)$ in the total loss:
\begin{equation}
\label{eq:total}
\mathcal{L}(\theta^t) = \mathcal{L}_{ce}(\theta^t) - \lambda * \mathcal{L}_{se}(\theta^t)
\end{equation}
where
\begin{equation}
\mathcal{L}_{se}(\theta^t) = - \frac{1}{|\mathcal{T}^t|} \sum \hat{q} \log \hat{q}
\end{equation}
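The two losses combine as in Eq.~\ref{eq:total}. A minimal numpy sketch (not the authors' implementation; a real training loop would use framework tensors and autograd) makes the sign convention explicit: $\mathcal{L}_{se}$ is the mean entropy of the predictions, and subtracting $\lambda \mathcal{L}_{se}$ rewards less confident, softer outputs.

```python
import numpy as np

def self_entropy(probs):
    # Mean per-sample entropy H(q) = -sum_c q log q for softmax outputs
    # of shape (N, C); eps guards against log(0).
    eps = 1e-12
    return -np.mean(np.sum(probs * np.log(probs + eps), axis=-1))

def total_loss(ce_loss, probs, lam=1.0):
    # L = L_ce - lambda * L_se : maximizing the self-entropy softens
    # over-confident pseudo labels.
    return ce_loss - lam * self_entropy(probs)
```

A uniform prediction over $C$ classes attains the maximum entropy $\log C$, while a one-hot prediction has entropy zero, so the term pulls predictions away from over-confidence.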
\subsection{Algorithm}
We provide a detailed algorithm of our incremental semantic segmentation procedure in Algorithm~\ref{table:algorithm}. For the first task (line 5), it is similar to standard incremental learning methods. For the remaining tasks, our method can be divided into two main parts: pseudo label fusion (lines 7-9) and model retraining (line 10).
\tablealgorithm
\section{Experiments}
\label{sec:exp}
\tablepascal
\subsection{Experimental setups}
In this section, we provide details for the datasets, evaluation metrics, implementations and compared methods. Code will be made available upon acceptance of this manuscript.
\subsubsection{Datasets} We evaluate all methods using Pascal-VOC 2012 and ADE20K. Pascal-VOC 2012~\cite{everingham2011pascal} has 10,582 images for training, 1,449 images for validation and 1,456 images for testing. Images contain 20 foreground object classes and one background class. ADE20K~\cite{zhou2017scene} is a large scale dataset containing 20,210 images in the training set, 2,000 images in the validation set, and 3,000 images in the testing set. It contains 150 classes of both stuff and objects. For auxiliary datasets, we choose COCO 2017 train set~\cite{lin2014microsoft} with 80 classes and 118K images for Pascal-VOC 2012 and Places365~\cite{zhou2017places} with 365 classes and 1.8M images for ADE20K.
\begin{figure*}[tb]
\begin{center}
\includegraphics[width=0.32\textwidth]{./figure/19-1_s_new.pdf}
\includegraphics[width=0.32\textwidth]{./figure/15-5_s_new.pdf}
\includegraphics[width=0.32\textwidth]{./figure/15-1_s_new.pdf}
\caption{Mean IoU on Pascal VOC 2012 for the 19-1, 15-5 and 15-1 scenarios as a function of the proportion of unlabeled data. The curve starts at 1\% (about 1K images) and ends at 100\%. The horizontal axis is in logarithmic scale. }
\label{fig:datasize}
\end{center}
\end{figure*}
\subsubsection{Implementation Details} We follow the same implementation as proposed in~\cite{cermelli2020modeling}. All methods are implemented in the same PyTorch framework, using the Deeplab-v3 architecture~\cite{chen2017rethinking} with a ResNet-101~\cite{he2016deep} backbone. In-place activated batch normalization is used and the backbone network is initialized with an ImageNet pretrained model~\cite{rota2018place}. We train the network with SGD and an initial learning rate of $10^{-2}$ for the first task and $10^{-3}$ for the remaining tasks, as in~\cite{cermelli2020modeling}. We train the current model on Pascal-VOC 2012 for 30 epochs and on ADE20K for 60 epochs. The self-training procedure runs for one pass over the unlabeled data, and the trade-off $\lambda$ between cross entropy and self-entropy is set to 1. 20\% of the training data is used as a validation set and the final results are reported on the standard test set.
\subsubsection{Compared Methods} We consider the Fine-tuning (FT) baseline and the Joint Training (Joint) upper bound. Additionally, we compare with several regularization-based methods adapted to incremental semantic segmentation including ILT~\cite{michieli2019incremental}, LwF~\cite{li2017learning}, LwF-MC~\cite{rebuffi2017icarl},
RW~\cite{chaudhry2018riemannian},
EWC~\cite{kirkpatrick2017overcoming} and
PI~\cite{zenke2017continual}. We also compare with the state-of-the-art method MiB~\cite{cermelli2020modeling}. Additionally, MiB can be further trained with unlabeled data by generating pseudo labels with the learned current model; we denote this variant as MiB + Aux. All results are reported as mean Intersection-over-Union (mIoU) in percentage, averaged over all classes of each task after learning all tasks. Note that none of the methods has access to exemplars from previous tasks.
\subsection{On Pascal-VOC 2012}
We compare different methods in three different scenarios as in~\cite{cermelli2020modeling}. In the 19-1 scenario, we first learn a model on the first 19 classes and then learn the remaining class as the second task. In the 15-5 scenario, there are 15 classes in the first task and the remaining 5 classes form the second task. In 15-1, the first task is the same as in the 15-5 scenario, but the remaining 5 classes are learned one-by-one, resulting in a total of six tasks. For all scenarios, we consider two different settings. The \textit{Disjoint} setting assumes images of the current task only contain current or previous classes, while the \textit{Overlapped} setting assumes that each training session contains all images with at least one pixel of a novel class. Previous classes are labeled as background in both settings.
\subsubsection{Addition of one class (19-1)} As shown in Table~\ref{tab:pascal}, in this scenario FT and PI obtain the worst results: they forget almost all of the first task and perform poorly on the new task, with 6.2\% and 5.9\% mIoU in the \textit{Disjoint} setting, respectively. EWC and RW, both weight-based regularization methods, improve considerably over PI. Interestingly, the activation-based regularization methods LwF, LwF-MC and ILT perform significantly better in both the \textit{Disjoint} and \textit{Overlapped} settings. MiB is superior to all previous methods but inferior to our method by a large margin: our method surpasses MiB in overall mIoU by 8.0\% in the \textit{Disjoint} setting and 6.7\% in the \textit{Overlapped} setting. When MiB is further trained with unlabeled data (MiB + Aux) using generated pseudo labels, it improves by 2.2\% in the \textit{Disjoint} setting and 11.3\% in the \textit{Overlapped} setting on class 20, while keeping similar performance on the first 19 classes. Note that our method is very close to Joint training performance in both settings (75.4\% vs. 77.4\% and 74.5\% vs. 77.4\%).
\subsubsection{Single-step addition of five classes (15-5)} Similar conclusions can be drawn in this scenario. Our method outperforms MiB by 6.6\% in the \textit{Disjoint} setting and 2.1\% in the \textit{Overlapped} setting.
The performance gain due to additional pseudo labels for MiB+Aux is more obvious in this scenario (when compared to the 19-1 setting).
However, there is still a big gap compared to our method, showing that our proposed techniques are more effective at leveraging knowledge from unlabeled data. Our method achieves similar overall results (71.3\% and 71.1\%) in both settings, demonstrating its robustness.
We report some qualitative results for different incremental methods (Ours, MiB and Fine-tuning) on Pascal-VOC 2012 with 15-5 scenario in Fig.~\ref{fig:voc}. The results demonstrate the superiority of our approach. FT totally forgets previously learned classes (first row and third row) but correctly predicts new classes (second row), while our approach obtains sharper (e.g. person, bike), more coherent (e.g. potted plant) and finer-border (e.g. cow) predictions compared to the state-of-the-art method MiB.
\subsubsection{Multi-step addition of five classes (15-1)} This is the most challenging of the three scenarios. There are five more tasks after the first one, so it is more difficult to prevent forgetting. From Table~\ref{tab:pascal}, we observe that in general all methods forget more in this scenario. MiB only achieves 46.2\% and 35.1\% on the first task in the two settings after learning all tasks, while our method achieves 70.1\% and 71.4\% mIoU. Our method also outperforms MiB on the new tasks by a large margin (21.4\% and 26.5\%, respectively). Overall, we gain 23.3\% in the \textit{Disjoint} setting and 33.9\% in the \textit{Overlapped} setting. We also see a significant improvement for MiB + Aux compared to the original MiB, from 37.9\% to 39.9\% for \textit{Disjoint} and from 29.7\% to 37.3\% for the \textit{Overlapped} setting. Again, without any additional expensive annotation process, unlabeled data is beneficial not only for our proposed method but also for the state-of-the-art method MiB, further boosting its performance.
\figvoc
\subsection{Ablation study}
In this section, we perform an ablation study on several different aspects including the proposed techniques, the relationship between performance and the amount of unlabeled data used, epochs for self-training and hyper-parameter $\lambda$.
\subsubsection{Impact of different proposed components} In Table~\ref{tab:ablation} we conduct an ablation study in the Pascal-VOC 2012 \textit{Disjoint} setting. The first row shows the performance of FT on the three scenarios, and the second row shows MiB, the current state-of-the-art method. When we use self-training (ST) with unlabeled data, performance improves substantially in all three scenarios, which shows the effectiveness of self-training on unlabeled data for incremental semantic segmentation. Conflict reduction (+CR) further improves the results on the three scenarios by 0.6\%, 4.2\% and 3.2\%, respectively. The maximizing self-entropy (+MS) strategy yields a further 0.4\% on the 19-1 scenario and 0.8\% on the 15-5 scenario, and boosts performance significantly, by 2.4\%, on the more challenging 15-1 scenario.
Note that the gain in the 19-1 scenario is small compared to the other two scenarios because there is only one category ('TV') in the second task, which contributes little to the overall performance even when it is not learned at all. For incremental learning, therefore, the other two scenarios are more relevant.
\tableablation
\begin{table}[tb]
\begin{center}
\caption{Mean IoU on the Pascal VOC 2012 by using 1\% of unlabeled data (COCO) with different number of epochs. \textit{Disjoint} setting is used in this experiment.}
\resizebox{0.37\textwidth}{!}{%
\begin{tabular}{c||cccc}
Epochs & 1 & 5 & 10 & 20 \\ \hline
19-1 & 70.1 & 72.8 & 73.2 & 73.1 \\
15-5 & 46.4 & 66.9 & 68.3 & 68.8 \\
15-1 & 50.5 & 54.1 & 56.9 & 58.1
\label{table:epoch}
\end{tabular}}
\end{center}
\end{table}
\subsubsection{The amount of unlabeled data} We also evaluate the relationship between mean IoU and the amount of unlabeled data by randomly selecting a portion of the unlabeled data (see Figure~\ref{fig:datasize}). We experiment with the \textit{Disjoint} setup for the 19-1, 15-5 and 15-1 scenarios. Notably, in the 19-1 scenario our method beats MiB using only 1\% of the unlabeled data, and the mIoU continues to increase as more unlabeled data is used. In the 15-5 scenario, our method matches MiB using only 2\% of the unlabeled data; it keeps improving with more unlabeled data and peaks when 70\% is used. A similar trend holds for the 15-1 scenario: our method outperforms MiB by a large margin with only 1\% of the unlabeled data, and the curve rises consistently as more is added until it reaches the best performance, dropping slightly at the end.
\subsubsection{Number of self-training epochs} Unless otherwise specified, we pass the unlabeled data through the network only once, for efficiency, throughout the experiments in this paper. In this section, we consider multiple passes when only 1\% of the unlabeled data from the COCO dataset is available, as shown in Table~\ref{table:epoch}. As expected, compared to a single pass, training for more epochs yields significant gains in all three scenarios. Specifically, when we increase the number of self-training epochs from 1 to 20, performance improves from 70.1\% to 73.1\% for the 19-1 scenario, from 46.4\% to 68.8\% for 15-5 and from 50.5\% to 58.1\% for 15-1. The gain is largest for the 15-5 scenario, whose difficulty lies between that of the other two. This shows that multiple passes over the pseudo labels can further boost performance even in the low-data regime, when little unlabeled data is available.
\subsubsection{Impact of trade-off $\lambda$} This parameter controls the balance between the cross-entropy and self-entropy losses.
In this section, we report results for various values of $\lambda$ on the 15-5 scenario of the \textit{Disjoint} setup. As shown in Table~\ref{tab:lambda}, as $\lambda$ increases from 0.1 to 5 the overall performance first rises and then falls, with the best results obtained at 0.5 and 1. Since $\lambda=1$ performs similarly to $\lambda=0.5$ and is slightly better on the new task, we use $\lambda=1$ throughout the paper unless otherwise specified.
\begin{table}[tb]
\begin{center}
\caption{The trade-off $\lambda$ between cross entropy and self-entropy on the 15-5 scenario. Mean IoU is reported for both tasks; overall performance is shown in the last row.}
\resizebox{0.4\textwidth}{!}{%
\begin{tabular}{c||cccc}
Trade-off $\lambda$ & 0.1 & 0.5 & 1 & 5 \\ \hline
1-15 & 76.8 & 77.4 & 77.1 & 72.4 \\
16-20 & 53.1 & 55.4 & 55.6 & 49.7 \\
all & 70.9 & 71.9 & 71.7 & 66.7 \\
\end{tabular}}
\label{tab:lambda}
\end{center}
\end{table}
\subsection{On different self-training datasets}
In this section, we compare several datasets used as self-training datasets for Pascal-VOC to show how sensitive our method is with respect to the unlabeled data.
We choose the general-purpose ImageNet validation set, two fine-grained datasets, CUB\_200\_2011 (Birds) and Flowers, and the COCO dataset as different sources of unlabeled data, and evaluate our method on the Pascal-VOC 15-5 scenario (see Table~\ref{tab:fid}). Surprisingly, using the ImageNet validation set provides performance similar to using the COCO dataset (mIoU: 70.0 vs. 71.3) over all classes, suggesting that datasets with diverse categories can be a good option. As expected, the method fails on the CUB (mIoU: 10.7) and Flowers (mIoU: 5.1) datasets, since fine-grained classes do not contain diverse objects.
We also ran a preliminary experiment on measuring the relatedness of datasets mathematically with the Fréchet Inception Distance (FID) score~\cite{heusel2017gans}. FID was originally proposed to measure the similarity of two distributions, such as generated fake images and real images, and is popular for assessing the quality of generative models. We use FID here to measure the closeness between the labeled and unlabeled datasets. Computing FID between Pascal-VOC and each unlabeled dataset, we obtain 25.8 for COCO, 43.8 for ImageNet, 153.5 for CUB and 222.3 for Flowers. A smaller FID score indicates that the distributions of the two datasets are closer, which is consistent with the self-training performance we obtained. Such a measure could therefore be used to select good auxiliary data.
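For reference, FID between two sets of Inception features with means $\mu_a, \mu_b$ and covariances $C_a, C_b$ is $\|\mu_a-\mu_b\|^2 + \mathrm{Tr}\big(C_a + C_b - 2(C_a C_b)^{1/2}\big)$. The following is an illustrative numpy/scipy implementation of this formula (a sketch, not the exact pipeline used for the numbers above, which relies on Inception activations):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feat_a, feat_b):
    """Frechet Inception Distance between two feature sets of shape (n, d),
    e.g. Inception activations of a labeled and an auxiliary dataset."""
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):      # drop tiny imaginary numerical residue
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

Identical feature distributions give an FID of zero, and the score grows as the two distributions drift apart.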
\begin{table}[tb]
\begin{center}
\caption{Different auxiliary datasets to generate pseudo labels. Mean IoU is reported for both tasks on 15-5 scenario.}
\resizebox{0.45\textwidth}{!}{%
\begin{tabular}{c||c|cc|c}
& & \multicolumn{3}{c}{\textbf{15-5}} \\
Candidate Dataset & FID Score & 1-15 & 16-20 & all \\ \hline
COCO & \textbf{25.8} & \textbf{76.9} & \textbf{54.3} & \textbf{71.3} \\
ImageNet (Val) & 43.8 & 75.5 & 53.5 & 70.0 \\
CUB & 153.5 & 4.0 & 30.9 & 10.7 \\
Flowers & 222.3 & 2.6 & 12.8 & 5.1\\
\end{tabular}}
\label{tab:fid}
\end{center}
\end{table}
\tableade
\figade
\subsection{On ADE20K}
Following~\cite{cermelli2020modeling}, we report the average mIoU over two different class orders on the ADE20K dataset. In this experiment, we only compare with the activation-based regularization methods LwF, LwF-MC and ILT (which are much better than PI, EWC and RW), the state-of-the-art method MiB and MiB with auxiliary data (MiB + Aux). Here we consider three scenarios: 100-50 uses 100 classes for the first task and 50 classes for the second task; 100-10 uses 100 classes for the first task and divides the remaining classes into 5 tasks; and in the 50-50 scenario, the 150 classes are distributed equally over three tasks of 50 classes each.
\subsubsection{Single-step addition of 50 classes (100-50).} As shown in Table~\ref{tab:ade}, FT completely forgets the first task because of the large number of classes. The overall mIoU of Joint is 38.9\%, much lower than on Pascal-VOC 2012, indicating that this is a more challenging dataset. MiB achieves relatively robust results in this scenario, while our method outperforms it by 0.5\% on average over the 150 classes. Notably, our method prevents forgetting of the previous task effectively but performs worse on the second task compared to MiB. Interestingly, while MiB + Aux outperforms the baseline MiB in most cases on Pascal-VOC, it fails to further improve performance here. The reason could be that ADE20K is more challenging and the accuracy itself is much lower than on Pascal-VOC, which may introduce more noise during training.
It further shows the importance of our proposed framework to leverage unlabeled data, which leads to superior performance.
We report some qualitative results for the different incremental methods (Ours, MiB and Fine-tuning) in this scenario. In this challenging setting with more categories, our approach is capable of segmenting more objects correctly (e.g. the wall, apparel, floor, painting, box, computer) than MiB.
\subsubsection{Multi-step addition of 50 classes (100-10).} This is a more challenging scenario with six tasks in total. Most methods fail here, achieving only about 1.0\% mIoU (FT, LwF and ILT). ILT performs better than LwF-MC in most scenarios on Pascal-VOC but worse on ADE20K. Our method still outperforms MiB overall, by 2.2\%. More specifically, the gain is significant for the first four tasks, ranging from 1.8\% (first task) to 11.3\% (third task); performance on the fifth task is comparable, while ours obtains a worse result on the last task. Again, using auxiliary data for MiB (MiB + Aux) leads to worse results than the original MiB, which shows that, without a specific design, unlabeled data can be harmful.
\subsubsection{Three steps of 50 classes (50-50).} This is a more balanced scenario in which each training session has the same number of classes. Similar to the previous scenarios, we improve the overall mIoU of MiB from 27.0\% to 29.0\%; specifically, the gain is 4.5\% for the first task and 3.9\% for the second task. One general observation across all scenarios is that there remains a large gap between incremental learning and Joint training on ADE20K, suggesting that incremental segmentation learning needs to be further developed and improved.
\section{Conclusions}
In this work, we improve incremental semantic segmentation with self-training, leveraging unlabeled data to combine the previous and current models. Importantly, conflict reduction provides more accurate pseudo labels for self-training. We achieve state-of-the-art performance on two large datasets, Pascal-VOC 2012 and ADE20K, and show that our method obtains superior results on Pascal-VOC 2012 using only 1\% of the unlabeled data. Qualitative results show that significantly more accurate segmentation maps are generated compared to the other methods. As future work, it would be interesting to choose unlabeled data selectively, efficiently and actively. Although we have obtained promising gains for incremental semantic segmentation, the gap to Joint Training on the challenging ADE20K dataset indicates that class-incremental learning is still an open research topic.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{#1}\setcounter{equation}{0}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\nonumber}{\nonumber}
\newcommand{\begin{array}}{\begin{array}}
\newcommand{\end{array}}{\end{array}}
\newcommand{\underline{1}}{\underline{1}}
\newcommand{\underline{2}}{\underline{2}}
\newcommand{\underline{a}}{\underline{a}}
\newcommand{\underline{b}}{\underline{b}}
\newcommand{\underline{c}}{\underline{c}}
\newcommand{\underline{d}}{\underline{d}}
\newcommand{\underline{i}}{\underline{i}}
\newcommand{\underline{j}}{\underline{j}}
\newcommand{\underline{k}}{\underline{k}}
\newcommand{\underline{l}}{\underline{l}}
\newcommand{\underline{I}}{\underline{I}}
\newcommand{\underline{J}}{\underline{J}}
\newcommand{\underline{K}}{\underline{K}}
\newcommand{\underline{m}}{\underline{m}}
\newcommand{\underline{g}}{\underline{g}}
\newcommand{\underline{T}}{\underline{T}}
\newcommand{\underline{S}}{\underline{S}}
\newcommand{\underline{Z}}{\underline{Z}}
\newcommand{\underline{\cP}}{\underline{{\cal P}}}
\newcommand{\underline{\cF}}{\underline{{\cal F}}}
\newcommand{\underline{F}}{\underline{F}}
\newcommand{\underline{G}}{\underline{G}}
\newcommand{\underline{\Lambda}}{\underline{\Lambda}}
\newcommand{\underline{\Sigma}}{\underline{\Sigma}}
\newcommand{{\mathbb R}}{{\mathbb R}}
\newcommand{{\mathbb C}}{{\mathbb C}}
\newcommand{{\mathbb Q}}{{\mathbb Q}}
\newcommand{{\mathbb Z}}{{\mathbb Z}}
\newcommand{{\mathbb N}}{{\mathbb N}}
\def\dt#1{{\buildrel {\hbox{\LARGE .}} \over {#1}}}
\newcommand{\bm}[1]{\mbox{\boldmath$#1$}}
\newcommand{\overbar}[1]{\mkern 1.5mu\overline{\mkern-1.5mu#1\mkern-1.5mu}\mkern 1.5mu}
\def\double #1{#1{\hbox{\kern-2pt $#1$}}}
\newcommand{{\hat{m}}}{{\hat{m}}}
\newcommand{{\hat{n}}}{{\hat{n}}}
\newcommand{{\hat{p}}}{{\hat{p}}}
\newcommand{{\hat{q}}}{{\hat{q}}}
\newcommand{{\hat{a}}}{{\hat{a}}}
\newcommand{{\hat{b}}}{{\hat{b}}}
\newcommand{{\hat{c}}}{{\hat{c}}}
\newcommand{{\hat{d}}}{{\hat{d}}}
\newcommand{{\hat{e}}}{{\hat{e}}}
\newcommand{{\hat{M}}}{{\hat{M}}}
\newcommand{{\hat{N}}}{{\hat{N}}}
\newcommand{{\hat{A}}}{{\hat{A}}}
\newcommand{{\hat{B}}}{{\hat{B}}}
\newcommand{{\hat{C}}}{{\hat{C}}}
\newcommand{{\hat{\alpha}}}{{\hat{\alpha}}}
\newcommand{{\hat{\beta}}}{{\hat{\beta}}}
\newcommand{{\hat{\gamma}}}{{\hat{\gamma}}}
\newcommand{{\hat{\delta}}}{{\hat{\delta}}}
\newcommand{{\hat{\rho}}}{{\hat{\rho}}}
\newcommand{{\hat{\tau}}}{{\hat{\tau}}}
\newcommand{{\dot\gamma}}{{\dot\gamma}}
\newcommand{{\dot\delta}}{{\dot\delta}}
\newcommand{{\tilde{\sigma}}}{{\tilde{\sigma}}}
\newcommand{{\tilde{\omega}}}{{\tilde{\omega}}}
\renewcommand{\Bar}{\overline}
\newcommand{{\underline{\alpha}}}{{\underline{\alpha}}}
\newcommand{{\underline{\beta}}}{{\underline{\beta}}}
\newcommand{{\underline{\gamma}}}{{\underline{\gamma}}}
\newcommand{{\underline{\delta}}}{{\underline{\delta}}}
\newcommand{{\underline{\hal}}}{{\underline{{\hat{\alpha}}}}}
\newcommand{{\underline{\hbe}}}{{\underline{{\hat{\beta}}}}}
\newcommand{{\underline{\hga}}}{{\underline{{\hat{\gamma}}}}}
\newcommand{{\underline{\hde}}}{{\underline{{\hat{\delta}}}}}
\newcommand{{\underline{\hrh}}}{{\underline{{\hat{\rho}}}}}
\newcommand{{\nabla}}{{\nabla}}
\newcommand{{\bar{\sigma}}}{{\bar{\sigma}}}
\newcommand{{\theta}}{{\theta}}
\newcommand{{\bar{\theta}}}{{\bar{\theta}}}
\newcommand{\mathsf{Sp}}{\mathsf{Sp}}
\newcommand{\mathsf{SU}}{\mathsf{SU}}
\newcommand{\mathsf{SL}}{\mathsf{SL}}
\newcommand{\mathsf{GL}}{\mathsf{GL}}
\newcommand{\mathsf{SO}}{\mathsf{SO}}
\newcommand{\mathsf{O}}{\mathsf{O}}
\newcommand{\mathsf{U}}{\mathsf{U}}
\newcommand{\mathsf{S}}{\mathsf{S}}
\newcommand{\mathsf{PSU}}{\mathsf{PSU}}
\newcommand{\mathsf{PSL}}{\mathsf{PSL}}
\newcommand{\mathsf{OSp}}{\mathsf{OSp}}
\newcommand{\mathsf{Spin}}{\mathsf{Spin}}
\newcommand{\mathsf{Mat}}{\mathsf{Mat}}
\newcommand{\begin{subequations}}{\begin{subequations}}
\newcommand{\end{subequations}}{\end{subequations}}
\newcommand{{\oplus}}{{\oplus}}
\newcommand{{\ominus}}{{\ominus}}
\newcommand{{\bar{\theta}}}{{\bar{\theta}}}
\newcommand{{\overline{1}}}{{\overline{1}}}
\newcommand{{\overline{2}}}{{\overline{2}}}
\newcommand{{\bar{\Delta}}}{{\bar{\Delta}}}
\newcommand{{\bar{A}}}{{\bar{A}}}
\newcommand{{\bar{B}}}{{\bar{B}}}
\newcommand{{\bar{\Phi}}}{{\bar{\Phi}}}
\newcommand{{\bar{\chi}}}{{\bar{\chi}}}
\newcommand{{\mathbb D}}{{\mathbb D}}
\newcommand{{\mathbb \DB}}{{\mathbb \bar{D}}}
\numberwithin{equation}{section}
\begin{document}
\begin{titlepage}
\begin{flushright}
Feb, 2023
\end{flushright}
\vspace{2mm}
\begin{center}
\Large \bf Three-point functions of conserved supercurrents \\ in 3D ${\cal N}=1$ SCFT: general formalism \\ for arbitrary superspins
\end{center}
\begin{center}
{\bf
Evgeny I. Buchbinder and Benjamin J. Stone}
{\footnotesize{
{\it Department of Physics M013, The University of Western Australia\\
35 Stirling Highway, Crawley W.A. 6009, Australia}} ~\\
}
\end{center}
\begin{center}
\texttt{Email: evgeny.buchbinder@uwa.edu.au, \\ benjamin.stone@research.uwa.edu.au}
\end{center}
\vspace{4mm}
\begin{abstract}
\baselineskip=14pt
\noindent
We analyse the general structure of the three-point functions of conserved higher-spin supercurrents in 3D, ${\cal N}=1$ superconformal field theory.
It is shown that supersymmetry imposes additional restrictions on correlation functions of conserved higher-spin currents. We develop a manifestly supersymmetric formalism
to compute the three-point function $\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \mathbf{J}''_{s_{3}} \rangle$, where $\mathbf{J}^{}_{s_{1}}$, $\mathbf{J}'_{s_{2}}$ and $\mathbf{J}''_{s_{3}}$
are conserved higher-spin supercurrents with superspins $s_{1}$, $s_{2}$ and $s_{3}$ respectively (integer or half-integer).
Using a computational approach limited only by computer power, we analytically impose the constraints arising from the superfield conservation equations and symmetries under permutations of superspace points. Explicit solutions for three-point functions are presented and we provide a complete classification of the results for $s_{i} \leq 20 $; the pattern is very clear, and we propose that our classification holds for arbitrary superspins. We demonstrate that Grassmann-even three-point functions are fixed up to one parity-even structure and one parity-odd structure,
while Grassmann-odd three-point functions are fixed up to a single parity-even structure. The existence of the parity-odd structure in the Grassmann-even correlation functions is subject to a set of triangle inequalities in the superspins. For completeness, we also analyse the structure of three-point functions involving conserved higher-spin supercurrents and scalar superfields.
\end{abstract}
\end{titlepage}
\newpage
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\tableofcontents
\vspace{1cm}
\bigskip\hrule
\section{Introduction}\label{section1}
A well known implication of conformal
symmetry~\cite{Polyakov:1970xd, Schreier:1971um, Migdal:1971xh, Migdal:1971fof,Ferrara:1972cq,Ferrara:1973yt, Koller:1974ut, Mack:1976pa, Fradkin:1978pp, Stanev:1988ft,Osborn:1993cr}
is that the general form of two- and three-point correlation
functions of primary operators is fixed up to finitely many parameters. However, constructing explicit solutions for three-point functions of conserved current operators such as the
energy-momentum tensor, vector currents, and more generally, higher-spin currents, remains an open problem. An interesting feature of three-dimensional conformal field theories is the existence of
parity-odd structures in the three-point functions of conserved currents. These structures were overlooked in the seminal work by
Osborn \& Petkou \cite{Osborn:1993cr} (see also~\cite{Erdmenger:1996yc}), which
introduced the group-theoretic formalism to study the three-point functions of the energy-momentum tensor and vector currents. The parity-odd structures were discovered later using a
polarisation spinor approach in \cite{Giombi:2011rz}, where results for three-point functions of conserved (bosonic) higher-spin currents were obtained.
Soon after, it was proven by Maldacena and Zhiboedov in~\cite{Maldacena:2011jn} that correlation functions involving the energy-momentum tensor and higher-spin currents are equal to those
of free field theories.\footnote{An assumption of the Maldacena-Zhiboedov theorem is that the conformal theory under consideration possesses a unique spin-2 conserved current -- the
energy-momentum tensor. This assumption, however, does not hold in the presence of fermionic higher-spin currents. Hence, it also does not hold in superconformal theories
possessing conserved higher-spin supercurrents.}
This can be viewed as an extension of the Coleman-Mandula theorem \cite{Coleman:1967ad} to conformal field theories;
it was originally proven in three dimensions and was generalised to four- and higher-dimensional cases in~\cite{Zhiboedov:2012bm, Stanev:2012nq, Stanev:2013qra, Alba:2013yda, Alba:2015upa}.
In three dimensional theories the general structure of the three-point function $\langle J^{}_{s_{1}} J'_{s_{2}} J''_{s_{3}} \rangle$,
where $J^{}_{s}$ denotes a conserved current of arbitrary spin-$s$, is fixed up to the following form~\cite{Giombi:2011rz, Maldacena:2011jn}:\footnote{Recall: in a $d$-dimensional CFT,
a conserved current of spin-$s$ is a totally
symmetric and traceless tensor $J_{m_{1} ... m_{s}}$ of scale dimension $\Delta_{J} = s+d-2$, satisfying the conservation equation $\partial^{m_{1}} J_{m_{1} ... m_{s}} = 0$.}
\begin{equation}
\langle J^{}_{s_{1}} J'_{s_{2}} J''_{s_{3}} \rangle = a_{1} \langle J^{}_{s_{1}} J'_{s_{2}} J''_{s_{3}} \rangle_{E_{1}} + a_{2} \langle J^{}_{s_{1}} J'_{s_{2}} J''_{s_{3}} \rangle_{E_{2}} + b \langle J^{}_{s_{1}} J'_{s_{2}} J''_{s_{3}} \rangle_{O} \, ,
\end{equation}
where $\langle J^{}_{s_{1}} J'_{s_{2}} J''_{s_{3}} \rangle_{E_{1}}$, $\langle J^{}_{s_{1}} J'_{s_{2}} J''_{s_{3}} \rangle_{E_{2}}$ are parity-even solutions corresponding to free field theories, and $\langle J^{}_{s_{1}} J'_{s_{2}} J''_{s_{3}} \rangle_{O}$ is a parity-violating, or parity-odd solution which is not generated by a free CFT. The existence of the parity-odd solution is subject to the following triangle inequalities on the spins:
\begin{align}
s_{1} \leq s_{2} + s_{3} \, , && s_{2} \leq s_{1} + s_{3} \, , && s_{3} \leq s_{1} + s_{2} \, .
\end{align}
If any of the above inequalities are not satisfied, then the odd solution is incompatible with current conservation. Parity-odd solutions are unique to three dimensions, and have been shown to arise in Chern-Simons theories interacting with parity-violating matter \cite{Aharony:2011jz, Giombi:2011kc, Maldacena:2012sf, Jain:2012qi, GurAri:2012is, Aharony:2012nh, Giombi:2016zwa, Chowdhury:2017vel, Sezgin:2017jgm, Skvortsov:2018uru, Inbasekar:2019wdw}. Existence and uniqueness of the odd solution has been proven in \cite{Giombi:2016zwa}, while methods to obtain explicit solutions for arbitrary spin are contained in \cite{Zhiboedov:2012bm,Stanev:2012nq,Stanev:2013eha,Buchbinder:2022mys}.
A natural follow-up question arises: in conformal field theories, what are the implications of supersymmetry for the general structure of three-point correlation functions? The study of correlation functions in superconformal theories has been carried out in diverse dimensions using the group-theoretic approach developed in
\cite{Osborn:1998qu, Park:1998nra, Park:1999pd, Park:1999cw, Kuzenko:1999pi, Nizami:2013tpa, Buchbinder:2015qsa, Buchbinder:2015wia,
Kuzenko:2016cmf, Buchbinder:2021gwu, Buchbinder:2021izb, Buchbinder:2021kjk, Buchbinder:2021qlb, Jain:2022izp, Buchbinder:2022cqp, Buchbinder:2022kmj, Buchbinder:2022mys}. It has been shown that
superconformal symmetry imposes additional restrictions on the three-point functions of conserved currents compared to non-supersymmetric theories. For example, it was pointed out
in~\cite{Buchbinder:2021gwu} that there is an apparent tension between supersymmetry and the existence of parity-violating structures.
In contrast with the non-supersymmetric case, parity-odd structures are not found in the three-point functions of the energy-momentum tensor and conserved
vector currents \cite{Buchbinder:2015qsa,Buchbinder:2015wia,Kuzenko:2016cmf,Buchbinder:2021gwu}.
For three-point functions of higher-spin currents the situation is less clear; however, it was shown in \cite{Buchbinder:2021qlb} that parity-odd structures can appear in the
three-point functions of currents belonging to a superspin-$2$ current multiplet. Such a multiplet contains independent conserved currents of spin-$2$ and
spin-$\tfrac{5}{2}$ (the spin-$2$ current is distinct from the energy-momentum tensor, but possesses the same properties). In general, for three-point functions
involving conserved higher-spin currents, the conditions under which parity-violating structures can arise in supersymmetric theories are not well understood.
The aim of this paper is to address these issues and to provide a complete classification of three-point functions of conserved currents in 3D ${\cal N}=1$ superconformal field theory. To do this we develop a general formalism to study the three-point function
\begin{equation} \label{3D N=1 three-point function}
\langle \mathbf{J}^{}_{s_{1}}(z_{1}) \, \mathbf{J}'_{s_{2}}(z_{2}) \, \mathbf{J}''_{s_{3}}(z_{3}) \rangle \, ,
\end{equation}
where $z_{1}, z_{2}, z_{3}$ are points in 3D ${\cal N}=1$ Minkowski superspace, and the superfield $\mathbf{J}_{s}(z)$ is a conserved higher-spin supercurrent of superspin-$s$ (integer or half-integer). These currents are primary superfields transforming in an irreducible representation of the 3D ${\cal N}=1$ superconformal algebra, $\mathfrak{so}(3,2|1) \cong \mathfrak{osp}(1|2;\mathbb{R})$. They are described by totally symmetric spin-tensors of rank $2s$, $\mathbf{J}_{\alpha_{1} ... \alpha_{2s}}(z) = \mathbf{J}_{(\alpha_{1} ... \alpha_{2s})}(z)$, and satisfy the following superfield conservation equation:
\begin{equation} \label{Conserved supercurrent}
D^{\alpha_{1}} \mathbf{J}_{\alpha_{1} \alpha_{2} ... \alpha_{2s}}(z) = 0\, ,
\end{equation}
where $D^{\alpha}$ is the conventional covariant spinor derivative in ${\cal N}=1$ superspace. As a result of the superfield conservation equation \eqref{Conserved supercurrent}, conserved supercurrents have scale dimension $\Delta_{\mathbf{J}} = s + 1$ (saturating the unitarity bound), and at the component level contain independent conserved currents of spin-$s$ and spin-$(s+\tfrac{1}{2})$. The most important examples of conserved supercurrents in superconformal field theory are the supercurrent and flavour current multiplets, corresponding to the cases $s=\tfrac{3}{2}$ and $s = \tfrac{1}{2}$ respectively (for a review of the properties of supercurrent and flavour current multiplets in 3D theories, see \cite{Buchbinder:2015qsa,Korovin:2016tsq} and the references therein). The supercurrent multiplet contains the energy-momentum tensor and the supersymmetry current.\footnote{In ${\cal N}$-extended superconformal theories, the supercurrent multiplet also contains the $R$-symmetry currents.} Likewise, the flavour current multiplet contains a conserved vector current. Three-point correlation functions of these currents contain important physical information about a given superconformal field theory and are highly constrained by superconformal symmetry.
The general structure of three-point functions of conserved (higher-spin) currents in 3D ${\cal N}=1$ superconformal field theory was proposed in~\cite{Nizami:2013tpa} to be fixed up to the
following form:
\begin{equation}
\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \mathbf{J}''_{s_{3}} \rangle = a \, \langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \mathbf{J}''_{s_{3}} \rangle_{E} + b \, \langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \mathbf{J}''_{s_{3}} \rangle_{O} \, ,
\label{zh1}
\end{equation}
where $\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \mathbf{J}''_{s_{3}} \rangle_{E}$ is a parity-even solution, and $\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \mathbf{J}''_{s_{3}} \rangle_{O}$
is a parity-odd solution. However, as was pointed out above, there is a tension between supersymmetry and the existence of parity-odd structures, which means that the coefficient $b$ in~\eqref{zh1}
vanishes in many correlators. In this paper we provide a complete classification for when the parity-odd structures are allowed and when they are not. In particular, we show that
the odd solution does not appear in correlation functions that are overall Grassmann-odd (or fermionic).
In the Grassmann-even (bosonic) three-point functions the existence of the parity-odd solution is subject to the following superspin triangle inequalities:
\begin{align}
s_{1} \leq s_{2} + s_{3} \, , && s_{2} \leq s_{1} + s_{3} \, , && s_{3} \leq s_{1} + s_{2} \, .
\label{zh2}
\end{align}
When the triangle inequalities are simultaneously satisfied there is one even solution and one odd solution; however, if any of the above inequalities
is not satisfied, then the odd solution is incompatible with the superfield conservation equations. Our classification is in perfect agreement with our previous results
in~\cite{Buchbinder:2015qsa, Buchbinder:2021gwu} for the three-point functions of the energy-momentum tensor and conserved vector currents. They belong to the supermultiplets
of superspins $s=\tfrac{3}{2}$ and $s = \tfrac{1}{2}$ respectively and, hence, their three-point functions in superspace are Grassmann-odd.
Our classification then implies that they do not possess parity-odd
contributions, in agreement with the earlier results. It is also consistent with our previous result in~\cite{Buchbinder:2021qlb} for the three-point function
of the conserved supercurrent of superspin-2. This three-point function is Grassmann-even in superspace and, since the triangle inequalities~\eqref{zh2} are satisfied, a parity-odd contribution is allowed.
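The classification described above reduces to a simple decision rule. The following sketch encodes it; the function name and interface are ours (an illustration, not part of the formalism), with half-integer superspins represented as exact fractions:

```python
from fractions import Fraction

def parity_odd_allowed(s1, s2, s3):
    """Decide whether a parity-odd structure can appear in the three-point
    function of conserved supercurrents of superspins s1, s2, s3."""
    s1, s2, s3 = (Fraction(s) for s in (s1, s2, s3))
    # A supercurrent of superspin s carries 2s spinor indices, so the total
    # tensor rank of the correlator is 2(s1 + s2 + s3); when this is odd
    # the correlator is Grassmann-odd and the odd structure is forbidden.
    if 2 * (s1 + s2 + s3) % 2 == 1:
        return False
    # Grassmann-even case: the odd structure exists precisely when the
    # superspin triangle inequalities are all satisfied.
    return s1 <= s2 + s3 and s2 <= s1 + s3 and s3 <= s1 + s2
```

For example, the supercurrent case $s_{1}=s_{2}=s_{3}=\tfrac{3}{2}$ is Grassmann-odd and gives \texttt{False}, while $(s_{1},s_{2},s_{3})=(\tfrac{1}{2},\tfrac{1}{2},2)$ is Grassmann-even but violates the third triangle inequality, again giving \texttt{False}.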
Our method assumes only the constraints imposed by superconformal symmetry and superfield conservation equations; within the framework of our formalism we reproduce all known
results concerning the structure of three-point functions of conserved supercurrents in 3D ${\cal N}=1$ SCFT. We present new results for three-point functions involving higher-spin supercurrents,
obtaining explicit and completely analytic results. We also analyse three-point functions involving scalar superfields, thus covering essentially all possible three-point functions in 3D ${\cal N}=1$
superconformal field theory. Our method is based on a computational approach (by means of analytic/symbolic computer algebra in \textit{Mathematica}) which constructs all possible structures for
the correlation function for a given set of superspins $s_1, s_2$ and $s_3$, consistent with its superconformal properties. Next, we extract the linearly independent structures by systematic
application of linear dependence relations and then impose the superfield conservation equations and symmetries under permutations of superspace points. As a result we obtain the three-point
function in a very explicit form which can be presented for relatively high superspins.
The method can be applied for arbitrary superspins and is limited only by computer power. Due to these limitations we were able to
carry out computations up to $s_{i} = 20$ (a ``soft'' limit, after which the calculations take many hours); however, with a sufficiently powerful computer one could extend this bound even further.
The computational approach we have developed (based on the same method as in~\cite{Buchbinder:2022mys}) is completely algorithmic; one simply chooses the superspins of the fields and the
solution for the three-point function consistent with conservation and point-switch symmetries is generated.
The analysis is computationally intensive for higher spins; to streamline the calculations we develop a hybrid, index-free formalism which combines the group-theoretic superspace formalism
introduced by Osborn~\cite{Osborn:1998qu}
and Park~\cite{Park:1999cw,Park:1999pd} with a method based on contraction of tensor indices with auxiliary spinors.
The latter method is widely used throughout the literature to construct correlation functions of higher-spin currents (see e.g.~\cite{Giombi:2011rz, Costa:2011mg,
Stanev:2012nq, Zhiboedov:2012bm, Nizami:2013tpa, Elkhidir:2014woa}). In our approach the correlation function is described completely in terms of a polynomial, ${\cal H}(\boldsymbol{X},\Theta; u,v,w)$, which is a function of
two superconformally covariant three-point building blocks, $\boldsymbol{X}$ and $\Theta$, and the auxiliary spinor variables $u$, $v$, and $w$. As a result one does not have
to work with the superspace points explicitly when imposing the superfield conservation equations.
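To illustrate the degree-matching step of such a construction, the toy sketch below enumerates products of hypothetical basis monomials with prescribed homogeneity degrees in the auxiliary spinors $(u,v,w)$; the basis and its degrees are loosely modeled on the bosonic case and are not the actual building blocks used in this paper:

```python
from itertools import product

# Hypothetical basis monomials with homogeneity degrees in (u, v, w):
# three "P"-type blocks of degree one in two of the spinors and three
# "Q"-type blocks of degree two in a single spinor.
BASIS = {
    "P1": (0, 1, 1), "P2": (1, 0, 1), "P3": (1, 1, 0),
    "Q1": (2, 0, 0), "Q2": (0, 2, 0), "Q3": (0, 0, 2),
}

def count_structures(s1, s2, s3):
    """Count products of basis monomials that are homogeneous of degree
    2*s_i in the i-th auxiliary spinor (integer spins in this toy basis)."""
    target = (2 * s1, 2 * s2, 2 * s3)
    names = list(BASIS)
    count = 0
    for powers in product(range(max(target) + 1), repeat=len(names)):
        degrees = tuple(sum(p * BASIS[n][i] for p, n in zip(powers, names))
                        for i in range(3))
        if degrees == target:
            count += 1
    return count
```

For instance, for three spin-$1$ insertions this brute-force count gives five candidate monomials; in the actual formalism one must then extract the linearly independent structures and impose the conservation equations, exactly as described above.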
This paper is organised as follows. In section \ref{section2} we review the essentials of the group-theoretic formalism used to construct correlation functions of
primary superfields in 3D ${\cal N}=1$ SCFT. In section \ref{section3} we outline a method to impose all constraints arising from superfield conservation equations and point-switch symmetries on three-point functions of conserved higher-spin supercurrents.
In particular, we introduce an index-free, auxiliary spinor formalism which allows us to construct a generating function for the three-point functions and we outline the important aspects of our computational approach. Section \ref{section4} is then devoted to the analysis of three-point functions involving conserved supercurrents. As a test of our approach, we present an explicit analysis for three-point correlation functions involving
combinations of supercurrent and flavour current multiplets, reproducing the known results \cite{Buchbinder:2015qsa, Buchbinder:2021gwu}.
The results are then expanded to include conserved higher-spin supercurrents, for which we provide many examples and confirm the results of \cite{Buchbinder:2021qlb}. Here we also
resolve a contradiction in the literature concerning the structure of the three-point function $\langle \mathbf{J}^{}_{1/2} \mathbf{J}'_{1/2} \mathbf{J}''_{2} \rangle$: it was found
in~\cite{Nizami:2013tpa} that this three-point function contains a parity-odd solution; however, it was shown later in~\cite{Buchbinder:2021qlb} that parity-odd structures are inconsistent
with the conservation equations. In this paper we re-examine this three-point function and provide a straightforward explanation, based on the triangle
inequalities~\eqref{zh2}, for why this structure cannot appear. In section \ref{section5},
for completeness, we perform the analysis of correlation functions involving
combinations of scalar superfields and conserved higher-spin supercurrents. Finally, in section \ref{section6} we comment on the general results in the context of superconformal field theories.
The appendices are devoted to mathematical conventions and various useful identities.
\section{Superconformal symmetry in three dimensions}\label{section2}
In this section we will review the pertinent aspects of the group-theoretic formalism used to compute three-point correlation functions of primary superfields in 3D ${\cal N}=1$ superconformal field theories. For a more detailed review of the formalism the reader may consult \cite{Park:1999cw,Buchbinder:2015qsa}.
\subsection{Superconformal transformations and primary superfields}
Let us begin by reviewing infinitesimal superconformal transformations and the transformation laws of primary superfields. This section closely follows the notation of \cite{Kuzenko:2006mv,Kuzenko:2010rp,Kuzenko:2010bd}. Consider 3D ${\cal N}=1$ Minkowski superspace $\mathbb{M}^{3 | 2}$, parameterised by coordinates $z^{A} = (x^{a} , \theta^{\alpha})$, where $a = 0,1,2$ and $\alpha = 1,2$ are Lorentz and spinor indices respectively.
We consider infinitesimal superconformal transformations
\begin{equation}
\delta z^{A} = \xi z^{A} \hspace{3mm} \Longleftrightarrow \hspace{3mm} \delta x^{a} = \xi^{a}(z) + \text{i} (\gamma^{a})_{\alpha \beta} \, \xi^{\alpha}(z) \, \theta^{\beta} \, ,
\hspace{8mm} \delta \theta^{\alpha} = \xi^{\alpha}(z) \, ,
\label{new1}
\end{equation}
which are associated with the real first-order differential operator
\begin{equation}
\xi = \xi^{A}(z) \, \partial_{A} = \xi^{a}(z) \, \partial_{a} + \xi^{\alpha}(z) D_{\alpha} \, . \label{Superconformal Killing vector field}
\end{equation}
This operator satisfies the master equation $[\xi , D_{\alpha} ] \propto D_{\beta}$, from which we obtain
\begin{equation}
\xi^{\alpha} = \frac{\text{i}}{6} D_{\beta} \xi^{\alpha \beta} \, .
\end{equation}
As a consequence, the vector component $\xi_{a}$ satisfies the conformal Killing equation
\begin{equation}
\partial_{a} \xi_{b} + \partial_{b} \xi_{a} = \frac{2}{3} \eta_{a b} \partial_{c} \xi^{c} \, .
\label{new2}
\end{equation}
The solutions to the master equation are called the superconformal Killing vector fields of Minkowski superspace \cite{Buchbinder:1998qv,Kuzenko:2010rp}. They span a Lie algebra isomorphic to the superconformal algebra $\mathfrak{osp}(1 | 2 ; \mathbb{R})$.
The components of the operator $\xi$ were calculated explicitly in \cite{Park:1999cw,Buchbinder:2015qsa}, and are found to be
\begin{subequations}
\begin{align}
\begin{split}
\xi^{\alpha \beta} &= a^{\alpha \beta} - \lambda^{\alpha}{}_{\gamma} x^{\gamma \beta} - x^{\alpha \gamma} \lambda_{\gamma}{}^{\beta} + \sigma x^{\alpha \beta} + 4 \text{i} \epsilon^{(\alpha} \theta^{\beta)} \\
& \hspace{20mm} + x^{\alpha \gamma} x^{\beta \delta} b_{\gamma \delta} + \text{i} b_{\delta}^{(\alpha } x^{\beta) \delta} \theta^{2} - 4 \text{i} \eta_{\gamma} x^{\gamma(\alpha} \theta^{\beta)} \, , \label{Superconformal killing vector - component 1}
\end{split}
\end{align}
\vspace{-5mm}
\begin{align}
\xi^{\alpha} &= \epsilon^{\alpha} - \lambda^{\alpha}{}_{\beta} \theta^{\beta} + \frac{1}{2} \sigma \theta^{\alpha} + b_{\beta \gamma} \boldsymbol{x}^{\beta \alpha} \theta^{\gamma} + \eta_{\beta} ( 2 \text{i} \theta^{\beta} \theta^{\alpha} - \boldsymbol{x}^{\beta \alpha} ) \, , \label{Superconformal killing vector - component 2}
\end{align}
%
\begin{equation}
a_{\alpha \beta} = a_{\beta \alpha} \, , \hspace{5mm} \lambda_{\alpha \beta} = \lambda_{\beta \alpha} \, , \hspace{2mm} \lambda^{\alpha}{}_{\alpha} = 0 \, , \hspace{5mm} b_{\alpha \beta} = b_{\beta \alpha} \, .
\end{equation}
\end{subequations}
The bosonic parameters $a_{\alpha \beta}$, $\lambda_{\alpha \beta}$, $\sigma$, $b_{\alpha \beta}$ correspond to infinitesimal translations,
Lorentz transformations, scale transformations and special conformal transformations respectively, while the
fermionic parameters $\epsilon^{\alpha}$ and $\eta^{\alpha}$ correspond to $Q$-supersymmetry and $S$-su\-per\-sym\-met\-ry transformations.
Furthermore, the identity $D_{[\alpha} \xi_{\beta]} \propto \ve_{\alpha \beta} $ implies that
\begin{equation}
[\xi , D_{\alpha} ] = - ( D_{\alpha} \xi^{\beta}) D_{\beta} = \lambda_{\alpha}{}^{\beta}(z) D_{\beta} - \frac{1}{2} \sigma(z) D_{\alpha} \, ,
\end{equation}
\begin{equation}
\lambda_{\alpha \beta}(z) = - D_{(\alpha} \xi_{\beta)} \, , \hspace{5mm} \sigma(z) = D_{\alpha} \xi^{\alpha} \, .
\label{new4}
\end{equation}
The local parameters $\lambda^{\alpha \beta}(z)$, $\sigma(z)$ are interpreted as being associated with combined special-conformal/Lorentz and scale transformations respectively, and appear in the transformation laws for primary tensor superfields. For later use let us also introduce the $z$-dependent $S$-supersymmetry parameter
\begin{equation}
\eta_{\alpha}(z) = -\frac{\text{i}}{2} D_{\alpha} \sigma(z) \,.
\label{new5}
\end{equation}
Explicit calculations of the local parameters give \cite{Park:1999cw,Buchbinder:2015qsa}
\begin{subequations}
\begin{align}
\lambda^{\alpha \beta}(z) &= \lambda^{\alpha \beta} - x^{\gamma (\alpha} b^{\beta)}_{\gamma} + 2 \text{i} \eta^{(\alpha} \theta^{\beta)} - \frac{\text{i}}{2} b^{\alpha \beta} \theta^{2} \, , \label{Local parameter 1} \\
\sigma(z) &= \sigma + b_{\alpha \beta} x^{\alpha \beta} + 2 \text{i} \theta^{\alpha} \eta_{\alpha} \, , \label{Local parameter 3} \\[2mm]
\eta_{\alpha}(z) &= \eta_{\alpha} - b_{\alpha \beta} \theta^{\beta} \, . \label{Local parameter 4}
\end{align}
\end{subequations}
Now consider a tensor superfield $\Phi_{{\cal A}}(z)$ transforming in an irreducible representation of the Lorentz group with respect to the index ${\cal A}$. Such a superfield is called primary with dimension $\Delta$ if it possesses the following superconformal transformation properties
\begin{equation}
\delta \Phi_{{\cal A}} = - \xi \Phi_{{\cal A}} - \Delta \sigma(z) \Phi_{{\cal A}} + \lambda^{\alpha \beta}(z) (M_{\alpha \beta})_{{\cal A}}{}^{{\cal B}} \Phi_{{\cal B}} \,,
\label{new6}
\end{equation}
where $\xi$ is the superconformal Killing vector, $\sigma(z)$, $\lambda^{\alpha \beta}(z)$ are $z$-dependent parameters associated with $\xi$, and the matrix $M_{\alpha \beta}$ is a Lorentz generator.
\subsubsection{Conserved supercurrents}\label{subsection2.3}
In this paper we are primarily interested in the structure of three-point correlation functions involving conserved higher-spin supercurrents. In 3D, ${\cal N}=1$ theories, a conserved higher-spin supercurrent of superspin-$s$ (integer or half-integer), is defined as a totally symmetric spin-tensor of rank $2s$, $\mathbf{J}_{\alpha_{1} \dots \alpha_{2s} }(z) = \mathbf{J}_{(\alpha_{1} \dots \alpha_{2s}) }(z) = \mathbf{J}_{\alpha(2s) }(z)$, satisfying a conservation equation of the form:
\begin{equation} \label{Conserved current}
D^{\alpha_{1}} \mathbf{J}_{\alpha_{1} \alpha_{2} \dots \alpha_{2s}}(z) = 0 \, ,
\end{equation}
where $D^{\alpha}$ is the conventional covariant spinor derivative \eqref{Covariant spinor derivatives}. Conserved currents are primary superfields, as they possess the following infinitesimal superconformal transformation properties \cite{Buchbinder:1998qv,Park:1999cw,Buchbinder:2015qsa}:
\begin{equation}
\delta \mathbf{J}_{\alpha_{1} \dots \alpha_{2s}}(z) = - \xi \mathbf{J}_{\alpha_{1} \dots \alpha_{2s}}(z) - \Delta_{\mathbf{J}} \, \sigma(z) \, \mathbf{J}_{\alpha_{1} \dots \alpha_{2s}}(z) + 2s \, \lambda_{( \alpha_{1} }{}^{\delta}(z) \, \mathbf{J}_{\alpha_{2} \dots \alpha_{2s}) \delta}(z) \, .
\end{equation}
The dimension $\Delta_{\mathbf{J}}$ is constrained by the conservation condition \eqref{Conserved current} to $\Delta_{\mathbf{J}} = s+1$. Higher-spin supercurrents possess the following component structure:
\begin{equation}
\mathbf{J}_{\alpha(2s)}(z) = J^{(0)}_{\alpha(2s)}(x) + J^{(1)}_{\alpha(2s+1)}(x) \, \theta^{\alpha_{2s+1}} + \tilde{J}^{(1)}_{(\alpha_{1} ... \alpha_{2s-1}}(x) \, \theta^{}_{\alpha_{2s})} + J^{(2)}_{\alpha(2s)}(x) \, \theta^{2} \, .
\end{equation}
After imposing \eqref{Conserved current}, a short calculation gives $\tilde{J}^{(1)} = 0$, while $J^{(2)}$ is a function of $J^{(0)}_{\alpha(2s)}$.
On the other hand, the components $J^{(0)}$, $J^{(1)}$ satisfy the following conservation equations:
\begin{equation}
\pa^{\alpha_{1} \alpha_{2}} J^{(0)}_{\alpha_{1} \alpha_{2} \alpha(2s-2)}(x) = 0 \, , \hspace{10mm} \pa^{\alpha_{1} \alpha_{2}} J^{(1)}_{\alpha_{1} \alpha_{2} \alpha(2s -1)}(x) = 0 \, .
\end{equation}
Hence, at the component level, a higher-spin supercurrent of superspin-$s$ contains conserved conformal currents of spin-$s$ and spin-$(s+\tfrac{1}{2})$.
\subsection{Two-point building blocks}
Given two superspace points $z_{1}$ and $z_{2}$, we define the two-point functions
\begin{equation}
\boldsymbol{x}_{12}^{\alpha \beta} = (x_{1} - x_{2})^{\alpha \beta} + 2 \text{i} \theta^{(\alpha}_{1} \theta^{\beta)}_{2} - \text{i} \theta^{\alpha}_{12} \theta^{\beta}_{12} \, , \hspace{10mm} \theta^{\alpha}_{12} = \theta_{1}^{\alpha} - \theta_{2}^{\alpha} \, , \label{Two-point building blocks 1}
\end{equation}
which transform under the superconformal group as follows
\begin{subequations}
\begin{align}
\tilde{\delta} \boldsymbol{x}_{12}^{\alpha \beta} &= - \bigg( \lambda^{\alpha}{}_{\gamma}(z_{1}) - \frac{1}{2} \delta^{\alpha}{}_{\gamma} \, \sigma(z_{1}) \bigg) \boldsymbol{x}_{12}^{\gamma \beta} - \boldsymbol{x}_{12}^{\alpha \gamma} \bigg( \lambda_{\gamma}{}^{\beta}(z_{2}) - \frac{1}{2} \delta_{\gamma}{}^{\beta} \sigma(z_{2}) \bigg) \, , \label{Two-point building blocks 1 - transformation law 1} \\[2mm]
\tilde{\delta} \theta_{12 }^{\alpha} &= - \bigg( \lambda^{\alpha}{}_{\beta}(z_{1}) - \frac{1}{2} \delta^{\alpha}{}_{\beta} \, \sigma(z_{1}) \bigg) \theta_{12}^{\beta} - \boldsymbol{x}_{12}^{\alpha \beta} \, \eta_{\beta}(z_{2}) \,. \label{Two-point building blocks 1 - transformation law 2}
\end{align}
\end{subequations}
Here the total variation $\tilde{\delta}$ is defined by its action on an $n$-point function $\Phi(z_{1},...,z_{n})$ as
\begin{equation}
\tilde{\delta} \Phi(z_{1},...,z_{n}) = \sum_{i=1}^{n} \xi_{z_{i}} \Phi(z_{1},...,z_{n}) \, . \label{Total variation}
\end{equation}
Only \eqref{Two-point building blocks 1 - transformation law 1} transforms covariantly under superconformal transformations, as \eqref{Two-point building blocks 1 - transformation law 2}
contains an inhomogeneous piece in its transformation law. Therefore the latter will not appear as a building block in two- or three-point correlation functions.
Due to the useful property $\boldsymbol{x}_{21}^{\alpha \beta} = - \boldsymbol{x}_{12}^{\beta \alpha}$, the two-point function \eqref{Two-point building blocks 1} can be split into symmetric and antisymmetric parts as follows:
\begin{equation}
\boldsymbol{x}_{12}^{\alpha \beta} = x_{12}^{\alpha \beta} + \frac{\text{i}}{2} \ve^{\alpha \beta} \theta^{2}_{12} \, , \hspace{10mm} \theta_{12}^{2} = \theta_{12}^{\alpha} \theta^{}_{12 \, \alpha} \, . \label{Two-point building blocks 1 - properties 1}
\end{equation}
The symmetric component
\begin{equation}
x_{12}^{\alpha \beta} = (x_{1} - x_{2})^{\alpha \beta} + 2 \text{i} \theta^{(\alpha}_{1} \theta^{\beta)}_{2} \, , \label{Two-point building blocks 1 - properties 2}
\end{equation}
is recognised as the bosonic part of the standard two-point superspace interval. The two-point functions possess the property:
\begin{align} \label{Two-point building blocks - properties 1}
\boldsymbol{x}_{12}^{\alpha \sigma} \boldsymbol{x}^{}_{21 \, \sigma \beta} = \boldsymbol{x}_{12}^{2} \delta_{\beta}^{\alpha} \, , \hspace{5mm} \boldsymbol{x}_{12}^{2} = - \frac{1}{2} \boldsymbol{x}_{12}^{\alpha \beta} \boldsymbol{x}^{}_{12 \, \alpha \beta} \, .
\end{align}
Hence, we find
\begin{equation} \label{Two-point building blocks 4}
(\boldsymbol{x}_{12}^{-1})^{\alpha \beta} = - \frac{\boldsymbol{x}_{12}^{ \beta \alpha}}{\boldsymbol{x}_{12}^{2}} \, .
\end{equation}
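As a consistency check, the identities \eqref{Two-point building blocks - properties 1} hold for an arbitrary $2 \times 2$ matrix once the spinor indices are raised and lowered with $\ve$. The sketch below verifies this numerically, assuming the illustrative conventions $\ve^{12} = -\ve_{12} = 1$ (which may differ from this paper's conventions by overall signs):

```python
import numpy as np

eps_up = np.array([[0., 1.], [-1., 0.]])   # eps^{ab} with eps^{12} = +1
eps_dn = np.array([[0., -1.], [1., 0.]])   # eps_{ab} with eps_{12} = -1

def lower(m):
    # m_{ab} = eps_{ac} eps_{bd} m^{cd}
    return eps_dn @ m @ eps_dn.T

def square(m):
    # x^2 = -(1/2) x^{ab} x_{ab}
    return -0.5 * np.sum(m * lower(m))

rng = np.random.default_rng(0)
x12 = rng.normal(size=(2, 2))   # generic two-point function x_{12}^{ab}
x21 = -x12.T                    # x_{21}^{ab} = -x_{12}^{ba}

# x_{12}^{a s} x_{21 s b} = x_{12}^2 delta^a_b
lhs = x12 @ lower(x21)
assert np.allclose(lhs, square(x12) * np.eye(2))
# in these conventions x^2 coincides with -det(x^{ab})
assert np.isclose(square(x12), -np.linalg.det(x12))
```

The check passes for a generic (not necessarily symmetric) matrix, so it covers the full two-point function including its antisymmetric $\theta^{2}_{12}$ part.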
It is now useful to introduce the normalised two-point functions, denoted by $\hat{\boldsymbol{x}}_{12}$,
\begin{align} \label{Two-point building blocks 3}
\hat{\boldsymbol{x}}_{12 \, \alpha \beta} = \frac{\boldsymbol{x}_{12 \, \alpha \beta}}{( \boldsymbol{x}_{12}^{2})^{1/2}} \, , \hspace{10mm} \hat{\boldsymbol{x}}_{12}^{\alpha \sigma} \hat{\boldsymbol{x}}^{}_{21 \, \sigma \beta} = \delta_{\beta}^{\alpha} \, .
\end{align}
Under superconformal transformations, $\boldsymbol{x}_{12}^{2}$ transforms with local scale parameters, while \eqref{Two-point building blocks 3} transforms with local Lorentz parameters
\begin{subequations}
\begin{align}
\tilde{\delta} \boldsymbol{x}_{12}^{2} &= ( \sigma(z_{1}) + \sigma(z_{2}) ) \, \boldsymbol{x}_{12}^{2} \, , \label{Two-point building blocks 2 - transformation law 1} \\
\tilde{\delta} \hat{\boldsymbol{x}}_{12}^{\alpha \beta} &= - \lambda^{\alpha}{}_{\gamma}(z_{1}) \, \hat{\boldsymbol{x}}_{12}^{\gamma \beta} - \hat{\boldsymbol{x}}_{12}^{\alpha \gamma} \, \lambda_{\gamma}{}^{\beta}(z_{2}) \, . \label{Two-point building blocks 3 - transformation law 1}
\end{align}
\end{subequations}
There are also the following differential identities for the action of covariant spinor derivatives on the two-point functions:
\begin{equation}
D_{(1) \gamma} \boldsymbol{x}_{12}^{\alpha \beta} = - 2 \text{i} \theta^{\beta}_{12} \delta_{\gamma}^{\alpha} \, , \hspace{10mm} D_{(1) \alpha} \boldsymbol{x}_{12}^{\alpha \beta} = - 4 \text{i} \theta^{\beta}_{12} \, , \label{Two-point building blocks 1 - differential identities}
\end{equation}
where $D_{(i) \alpha}$ acts on the superspace point $z_{i}$. From here we can now construct an operator analogous to the conformal inversion tensor acting on the space of symmetric traceless spin-tensors of arbitrary rank. Given a two-point function $\boldsymbol{x}$, we define the operator
\begin{equation} \label{Higher-spin inversion operators a}
{\cal I}_{\alpha(k) \beta(k)}(\boldsymbol{x}) = \hat{\boldsymbol{x}}_{(\alpha_{1} (\beta_{1}} \dots \hat{\boldsymbol{x}}_{ \alpha_{k}) \beta_{k})} \, ,
\end{equation}
along with its inverse
\begin{equation} \label{Higher-spin inversion operators b}
{\cal I}^{\alpha(k) \beta(k)}(\boldsymbol{x}) = \hat{\boldsymbol{x}}^{(\alpha_{1} (\beta_{1}} \dots \hat{\boldsymbol{x}}^{ \alpha_{k}) \beta_{k})} \, .
\end{equation}
The spinor indices may be raised and lowered using the standard conventions as follows:
\begin{align}
{\cal I}_{\alpha(k)}{}^{\beta(k)}(\boldsymbol{x}) &= \ve^{\beta_{1} \gamma_{1}} \dots \ve^{\beta_{k} \gamma_{k}} \, {\cal I}_{\alpha(k) \gamma(k)}(\boldsymbol{x}) \, .
\end{align}
Now due to the property
\begin{equation}
{\cal I}_{\alpha(k) \beta(k)}(-\boldsymbol{x}) = (-1)^{k} {\cal I}_{\alpha(k) \beta(k)}(\boldsymbol{x}) \, ,
\end{equation}
the following identity holds for products of inversion tensors:
\begin{align} \label{Higher-spin inversion operators - properties}
{\cal I}_{\alpha(k) \sigma(k)}(\boldsymbol{x}_{12}) \, {\cal I}^{\sigma(k) \beta(k)}(\boldsymbol{x}_{21}) &= \delta_{(\alpha_{1}}^{(\beta_{1}} \dots \delta_{\alpha_{k})}^{\beta_{k})} \, .
\end{align}
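The identity \eqref{Higher-spin inversion operators - properties} can likewise be verified numerically. The sketch below checks the $k=2$ case, again assuming the illustrative conventions $\ve^{12} = -\ve_{12} = 1$ and a fixed matrix chosen so that $\boldsymbol{x}_{12}^{2} > 0$:

```python
import numpy as np

eps_up = np.array([[0., 1.], [-1., 0.]])   # eps^{ab}
eps_dn = np.array([[0., -1.], [1., 0.]])   # eps_{ab}

def lower(m):
    return eps_dn @ m @ eps_dn.T

def square(m):
    return -0.5 * np.sum(m * lower(m))

def sym_pairs(T):
    # symmetrise a rank-4 array in its first and in its last index pair
    return 0.25 * (T + T.transpose(1, 0, 2, 3)
                   + T.transpose(0, 1, 3, 2) + T.transpose(1, 0, 3, 2))

x12 = np.array([[1., 2.], [3., 4.]])       # chosen so that x12^2 = 2 > 0
x21 = -x12.T
xh12 = lower(x12) / np.sqrt(square(x12))   # hat-x_{12 ab}
xh21 = x21 / np.sqrt(square(x21))          # hat-x_{21}^{ab}

# I_{a(2) s(2)}(x12), indexed [a1, a2, s1, s2], and I^{s(2) b(2)}(x21)
I12 = sym_pairs(np.einsum('ab,cd->acbd', xh12, xh12))
I21 = sym_pairs(np.einsum('ab,cd->acbd', xh21, xh21))

# product over the symmetrised pair of s-indices gives the symmetrised
# product of Kronecker deltas
identity2 = sym_pairs(np.einsum('ab,cd->acbd', np.eye(2), np.eye(2)))
assert np.allclose(np.einsum('abcd,cdef->abef', I12, I21), identity2)
```

The same construction extends to any rank $k$ by symmetrising over longer index groups; the product identity then follows from functoriality of the symmetrised tensor power.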
The objects \eqref{Higher-spin inversion operators a}, \eqref{Higher-spin inversion operators b} prove to be essential in the construction of correlation functions of primary operators with arbitrary spin. Indeed, the vector representation of the inversion tensor may be recovered in terms of the spinor two-point functions as follows:
\begin{equation}
I_{m n}(x) = - \frac{1}{2} \, \text{Tr}( \gamma_{m} \, \hat{\boldsymbol{x}} \, \gamma_{n} \, \hat{\boldsymbol{x}} )|_{\theta = 0} \, .
\end{equation}
\subsection{Three-point building blocks}
Essential to the analysis of three-point correlation functions are the three-point covariants, or building blocks. Given three superspace points $z_{1}, z_{2}, z_{3}$, one can define the objects ${\cal Z}_{k} = ( \boldsymbol{X}_{ij} , \Theta_{ij} )$ as follows:
\begin{subequations} \label{Three-point building blocks 1}
\begin{align}
\boldsymbol{X}_{ij \, \alpha \beta} &= -(\boldsymbol{x}_{ik}^{-1})_{\alpha \gamma} \boldsymbol{x}_{ij}^{\gamma \delta} (\boldsymbol{x}_{kj}^{-1})_{\delta \beta} \, , \hspace{5mm} \Theta_{ij \, \alpha} = (\boldsymbol{x}_{ik}^{-1})_{\alpha \beta} \theta_{ki}^{\beta} - (\boldsymbol{x}_{jk}^{-1})_{\alpha \beta} \theta_{kj}^{\beta} \, , \label{Three-point building blocks}
\end{align}
\end{subequations}
where the labels $(i,j,k)$ are a cyclic permutation of $(1,2,3)$. These objects possess the important property $\boldsymbol{X}_{ij \, \alpha \beta} = - \boldsymbol{X}_{ji \, \beta \alpha}$. As a consequence, the three-point building blocks~\eqref{Three-point building blocks 1} possess many properties similar to those of the two-point building blocks:
\begin{align}
\boldsymbol{X}_{ij}^{\alpha \sigma} \boldsymbol{X}^{}_{ji \, \sigma \beta} = \boldsymbol{X}_{ij}^{2} \delta_{\beta}^{\alpha} \, , \hspace{5mm} \boldsymbol{X}_{ij}^{2} = - \frac{1}{2} \boldsymbol{X}_{ij}^{\alpha \beta} \boldsymbol{X}^{}_{ij \, \alpha \beta} \, .
\end{align}
Hence, we find
\begin{equation}
(\boldsymbol{X}_{ij}^{-1})^{\alpha \beta} = - \frac{\boldsymbol{X}_{ij}^{ \beta \alpha}}{\boldsymbol{X}_{ij}^{2}} \, .
\end{equation}
It is also useful to note that one may decompose $\boldsymbol{X}_{ij}$ into symmetric and anti-symmetric parts similar to \eqref{Two-point building blocks 1 - properties 1} as follows:
\begin{equation}
\boldsymbol{X}_{ij \, \alpha \beta} = X_{ij \, \alpha \beta} - \frac{\text{i}}{2} \ve_{\alpha \beta} \Theta_{ij}^{2} \, , \hspace{10mm} X_{ij \, \alpha \beta} = X_{ij \, \beta \alpha} \, , \label{Three-point building blocks 1a - properties 3}
\end{equation}
where the symmetric spin-tensor, $X_{ij \, \alpha \beta}$, can be equivalently represented by the three-vector $X_{ij \, m} = - \frac{1}{2} (\gamma_{m})^{\alpha \beta} X_{ij \, \alpha \beta}$. Since the building blocks possess the same properties up to cyclic permutations of the points, we will only examine the properties of $\boldsymbol{X}_{12}$ and $\Theta_{12}$, as these objects appear most frequently in our analysis of correlation functions. One can compute
\begin{equation}
\boldsymbol{X}_{12}^{2} = - \frac{1}{2} \boldsymbol{X}_{12}^{\alpha \beta} \boldsymbol{X}_{12 \, \alpha \beta}^{} = \frac{\boldsymbol{x}_{12}^{2}}{\boldsymbol{x}_{13}^{2} \boldsymbol{x}_{23}^{2}} \, , \hspace{10mm} \Theta_{12}^{2} = \Theta^{\alpha}_{12} \Theta^{}_{12 \, \alpha} \, . \label{Three-point building blocks 2}
\end{equation}
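In the bosonic limit $\theta_{i} = 0$, the first relation in \eqref{Three-point building blocks 2} follows from elementary $2 \times 2$ determinant algebra and can be checked numerically. The sketch below again assumes the illustrative conventions $\ve^{12} = -\ve_{12} = 1$:

```python
import numpy as np

eps_up = np.array([[0., 1.], [-1., 0.]])   # eps^{ab}
eps_dn = np.array([[0., -1.], [1., 0.]])   # eps_{ab}

def lower(m):
    return eps_dn @ m @ eps_dn.T

def raise_(m):
    return eps_up @ m @ eps_up.T

def square(m):
    # x^2 = -(1/2) x^{ab} x_{ab} for an upper-index array
    return -0.5 * np.sum(m * lower(m))

def inv_low(m):
    # (x^{-1})_{ab}, lowered from (x^{-1})^{ab} = -x^{ba} / x^2
    return lower(-m.T / square(m))

rng = np.random.default_rng(1)

def sym():
    # a generic bosonic point x_i^{ab} is symmetric at theta = 0
    a = rng.normal(size=(2, 2))
    return a + a.T

x1, x2, x3 = sym(), sym(), sym()
x12, x13 = x1 - x2, x1 - x3
x32, x23 = x3 - x2, x2 - x3

# X_{12 ab} = -(x_{13}^{-1})_{ac} x_{12}^{cd} (x_{32}^{-1})_{db}
X12 = -(inv_low(x13) @ x12 @ inv_low(x32))
X12_sq = -0.5 * np.sum(raise_(X12) * X12)

assert np.isclose(X12_sq, square(x12) / (square(x13) * square(x23)))
```

The result is insensitive to the index-placement conventions, since both sides reduce to ratios of $2 \times 2$ determinants.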
The building block $\boldsymbol{X}_{12}$ also possesses the following superconformal transformation properties:
\begin{subequations}
\begin{align}
\tilde{\delta} \boldsymbol{X}_{12 \, \alpha \beta} &= \lambda_{\alpha}{}^{\gamma}(z_{3}) \boldsymbol{X}_{12 \, \gamma \beta} + \boldsymbol{X}_{12 \, \alpha \gamma} \, \lambda^{\gamma}{}_{\beta}(z_{3}) - \sigma(z_{3}) \boldsymbol{X}_{12 \, \alpha \beta} \, , \label{Three-point building blocks 1a - transformation law 1} \\[2mm]
\tilde{\delta} \Theta_{12 \, \alpha} &= \Big( \lambda_{\alpha}{}^{\beta}(z_{3}) - \frac{1}{2} \, \delta_{\alpha}{}^{\beta} \sigma(z_{3}) \Big) \Theta_{12 \, \beta} \, , \label{Three-point building blocks 1a - transformation law 2}
\end{align}
\end{subequations}
and, therefore
\begin{equation}
\tilde{\delta} \boldsymbol{X}_{12}^{2} = - 2 \sigma(z_{3}) \boldsymbol{X}_{12}^{2} \, , \hspace{10mm} \tilde{\delta} \Theta_{12}^{2} = - \sigma(z_{3}) \, \Theta_{12}^{2} \, , \label{Three-point building blocks 2 - transformation law 1}
\end{equation}
i.e. the pair $(\boldsymbol{X}_{12}, \Theta_{12})$ is superconformally covariant at $z_{3}$. As a consequence, one can identify the three-point superconformal invariant
\begin{equation}
\boldsymbol{J} = \frac{\Theta_{12}^{2}}{\sqrt{\boldsymbol{X}_{12}^{2}}} \hspace{5mm} \Longrightarrow \hspace{5mm} \tilde{\delta} \boldsymbol{J} = 0 \, ,
\end{equation}
which proves to be invariant under permutations of the superspace points, i.e.
\begin{equation}
\boldsymbol{J} = \frac{\Theta_{12}^{2}}{\sqrt{\boldsymbol{X}_{12}^{2}}} = \frac{\Theta_{31}^{2}}{\sqrt{\boldsymbol{X}_{31}^{2}}} = \frac{\Theta_{23}^{2}}{\sqrt{\boldsymbol{X}_{23}^{2}}} \, . \label{Superconformal invariants}
\end{equation}
Analogous to the two-point functions, it is also useful to introduce the normalised three-point building blocks, denoted by $\hat{\boldsymbol{X}}_{ij}$, $\hat{\Theta}_{ij}$,
\begin{align} \label{Normalised three-point building blocks}
\hat{\boldsymbol{X}}_{ij \, \alpha \beta} = \frac{\boldsymbol{X}_{ij \, \alpha \beta}}{( \boldsymbol{X}_{ij}^{2})^{1/2}} \, , \hspace{10mm} \hat{\Theta}_{ij}^{\alpha} = \frac{ \Theta_{ij}^{\alpha} }{(\boldsymbol{X}_{ij}^{2})^{1/4}} \, ,
\end{align}
such that
\begin{align}
\hat{\boldsymbol{X}}_{ij}^{\alpha \sigma} \hat{\boldsymbol{X}}^{}_{ji \, \sigma \beta} = \delta_{\beta}^{\alpha} \, , \hspace{10mm} \boldsymbol{J} = \hat{\Theta}_{ij}^{2} \, .
\end{align}
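The degree-0 property of the normalised blocks under $(\boldsymbol{X}, \Theta) \mapsto (\lambda^{2} \boldsymbol{X}, \lambda \Theta)$, which is used repeatedly below, can be checked in a small numeric sketch. This is a bosonic toy: $\Theta$ is modelled by a commuting 2-vector (the scaling weights are insensitive to statistics), and $\boldsymbol{X}^{2}$ is modelled by a determinant-type quadratic scalar, which is an assumption about the conventions fixed earlier in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2x2 matrix standing in for X_{ab}, and a commuting stand-in for Theta^a
# (the genuine Theta is Grassmann-odd; only scaling weights are tested here).
X = rng.normal(size=(2, 2))
Theta = rng.normal(size=2)

# Assumption: X^2 denotes some quadratic scalar of X; any such choice (here
# |det X|) carries weight lam^4 under X -> lam^2 X, which is all the check uses.
X2 = lambda M: abs(np.linalg.det(M))

Xhat = lambda M: M / np.sqrt(X2(M))          # normalised block X-hat
Thhat = lambda M, th: th / X2(M) ** 0.25     # normalised block Theta-hat

lam = 1.7
Xs, Ths = lam**2 * X, lam * Theta            # (X, Theta) -> (lam^2 X, lam Theta)

# Both normalised objects are homogeneous of degree 0:
assert np.allclose(Xhat(Xs), Xhat(X))
assert np.allclose(Thhat(Xs, Ths), Thhat(X, Theta))
```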
Compared with the standard three-point building blocks, \eqref{Three-point building blocks 1}, the objects \eqref{Normalised three-point building blocks} transform only with local Lorentz parameters. Now given an arbitrary three-point building block, $\boldsymbol{X}$, let us construct the following higher-spin inversion operator:
\begin{equation}
{\cal I}_{\alpha(k) \beta(k)}(\boldsymbol{X}) = \hat{\boldsymbol{X}}_{ (\alpha_{1} (\beta_{1}} \dots \hat{\boldsymbol{X}}_{\alpha_{k}) \beta_{k})} \, , \label{Inversion tensor identities - three point functions a}
\end{equation}
along with its inverse
\begin{equation}
{\cal I}^{\alpha(k) \beta(k)}(\boldsymbol{X}) = \hat{\boldsymbol{X}}^{(\alpha_{1} (\beta_{1}} \dots \hat{\boldsymbol{X}}^{ \alpha_{k}) \beta_{k})} \, . \label{Inversion tensor identities - three point functions b}
\end{equation}
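The symmetrisation in \eqref{Inversion tensor identities - three point functions a} acts on the two index groups independently. A minimal sketch of the construction for $k=2$ (a generic $2 \times 2$ matrix stands in for $\hat{\boldsymbol{X}}$; no $\ve$-conventions from earlier sections are assumed):

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(1)
Xh = rng.normal(size=(2, 2))   # generic stand-in for the 2x2 block Xhat_{ab}
k = 2

# I_{a(k) b(k)} = Xhat_{(a1 (b1} ... Xhat_{ak) bk)}:
# average over permutations of each index group separately.
I = np.zeros((2,) * (2 * k))
for a in itertools.product(range(2), repeat=k):
    for b in itertools.product(range(2), repeat=k):
        total = 0.0
        for pa in itertools.permutations(a):
            for pb in itertools.permutations(b):
                total += math.prod(Xh[pa[i], pb[i]] for i in range(k))
        I[a + b] = total / math.factorial(k) ** 2

# By construction the operator is totally symmetric within each index group.
assert np.allclose(I, np.swapaxes(I, 0, 1))   # a1 <-> a2
assert np.allclose(I, np.swapaxes(I, 2, 3))   # b1 <-> b2
```

For example, the mixed component comes out as $\tfrac{1}{2}(\hat{X}_{00}\hat{X}_{11} + \hat{X}_{01}\hat{X}_{10})$, as expected from symmetrising both pairs.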
These operators possess properties similar to the two-point higher-spin inversion operators \eqref{Higher-spin inversion operators a}, \eqref{Higher-spin inversion operators b}, and are essential to the analysis of three-point correlation functions involving higher-spin primary superfields. In particular, one can prove the following useful identities involving $\boldsymbol{X}_{ij}$ and $\Theta_{ij}$ at different superspace points:
\begin{subequations}
\begin{align}
{\cal I}_{\alpha}{}^{\sigma}(\boldsymbol{x}_{13}) \, {\cal I}_{\beta}{}^{\gamma}(\boldsymbol{x}_{13}) \, {\cal I}_{\sigma \gamma}(\boldsymbol{X}_{12}) &= {\cal I}_{\alpha \beta}(\boldsymbol{X}_{23}) \, , \label{Three-point building blocks 1a - properties 1}\\[2mm]
{\cal I}_{\alpha}{}^{\gamma}(\boldsymbol{x}_{13}) \, \hat{\Theta}_{12 \, \gamma} &= \hat{\Theta}^{I}_{23 \, \alpha} \, , \label{Three-point building blocks 1a - properties 2}
\end{align}
\end{subequations}
where we have defined
\begin{equation}
\hat{\Theta}^{I}_{ij \alpha} = {\cal I}_{\alpha \beta}(-\boldsymbol{X}_{ij}) \, \hat{\Theta}^{\beta}_{ij}\,.
\label{zh3}
\end{equation}
Using the inversion operators above, the identity \eqref{Three-point building blocks 1a - properties 1} (and cyclic permutations) admits the following generalisation to higher-spins
\begin{equation}
{\cal I}_{\alpha(k)}{}^{\sigma(k)}(\boldsymbol{x}_{13}) \, {\cal I}_{\beta(k)}{}^{\gamma(k)}(\boldsymbol{x}_{13}) \, {\cal I}_{\sigma(k) \gamma(k)}(\boldsymbol{X}_{12}) = {\cal I}_{\alpha(k) \beta(k)}(\boldsymbol{X}_{23}) \, . \label{Inversion tensor identities - higher spin case}
\end{equation}
Due to the transformation properties \eqref{Three-point building blocks 1a - transformation law 1}, \eqref{Three-point building blocks 1a - transformation law 2} it is often useful to make the identifications $(\boldsymbol{X}_{1}, \Theta_{1}) := (\boldsymbol{X}_{23}, \Theta_{23})$, $(\boldsymbol{X}_{2}, \Theta_{2}) := (\boldsymbol{X}_{31}, \Theta_{31})$, $(\boldsymbol{X}_{3}, \Theta_{3}) := (\boldsymbol{X}_{12}, \Theta_{12})$, in which case we have e.g. $\boldsymbol{X}_{21} = - \boldsymbol{X}_{3}^{\text{T}}$; we will switch between these notations when convenient. Let us now introduce the following analogues of the covariant spinor derivative and supercharge operators involving the three-point objects:
\begin{equation}
{\cal D}_{(i) \alpha} = \frac{\partial}{\partial \Theta^{\alpha}_{i}} + \text{i} (\gamma^{m})_{\alpha \beta} \Theta^{\beta}_{i} \frac{\partial}{\partial X^{m}_{i}} \, , \hspace{5mm} {\cal Q}_{(i) \alpha} = \text{i} \frac{\partial}{\partial \Theta^{\alpha}_{i}} + (\gamma^{m})_{\alpha \beta} \Theta^{\beta}_{i} \frac{\partial}{\partial X^{m}_{i}} \, , \label{Supercharge and spinor derivative analogues}
\end{equation}
which obey the standard commutation relations
\begin{equation}
\big\{ {\cal D}_{(i) \alpha} , {\cal D}_{(i) \beta} \big\} = \big\{ {\cal Q}_{(i) \alpha} , {\cal Q}_{(i) \beta} \big\} = 2 \text{i} \, (\gamma^{m})_{\alpha \beta} \frac{\partial}{\partial X^{m}_{i}} \, .
\end{equation}
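This relation follows directly from the definitions \eqref{Supercharge and spinor derivative analogues}: the $\pa_{\Theta} \pa_{\Theta}$ and $\Theta \Theta$ terms vanish (the latter by Grassmann oddness), while the cross terms combine using the symmetry $(\gamma^{m})_{\alpha \beta} = (\gamma^{m})_{\beta \alpha}$ of the gamma-matrices with lower indices:

```latex
\begin{align}
\big\{ {\cal D}_{(i) \alpha} , {\cal D}_{(i) \beta} \big\}
&= \Big\{ \frac{\partial}{\partial \Theta^{\alpha}_{i}} \, , \,
\text{i} (\gamma^{m})_{\beta \delta} \Theta^{\delta}_{i} \frac{\partial}{\partial X^{m}_{i}} \Big\}
+ \Big\{ \text{i} (\gamma^{m})_{\alpha \delta} \Theta^{\delta}_{i} \frac{\partial}{\partial X^{m}_{i}} \, , \,
\frac{\partial}{\partial \Theta^{\beta}_{i}} \Big\} \nonumber \\
&= \text{i} (\gamma^{m})_{\beta \alpha} \frac{\partial}{\partial X^{m}_{i}}
+ \text{i} (\gamma^{m})_{\alpha \beta} \frac{\partial}{\partial X^{m}_{i}}
= 2 \text{i} \, (\gamma^{m})_{\alpha \beta} \frac{\partial}{\partial X^{m}_{i}} \, .
\end{align}
```

The computation for ${\cal Q}_{(i)}$ is identical.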
Some useful identities involving~\eqref{Supercharge and spinor derivative analogues} are, e.g.
\begin{equation}
{\cal D}_{(3) \gamma} \boldsymbol{X}_{3 \, \alpha \beta} = - 2 \text{i} \ve_{\gamma \beta} \Theta_{3 \, \alpha} \, , \hspace{5mm} {\cal Q}_{(3) \gamma} \boldsymbol{X}_{3 \, \alpha \beta} = - 2 \ve_{\gamma \alpha} \Theta_{3 \, \beta} \, . \label{Three-point building blocks 1a - differential identities 1}
\end{equation}
We must also account for the fact that correlation functions of primary superfields obey differential constraints as a result of superfield conservation equations. Using \eqref{Two-point building blocks 1 - differential identities} we obtain the following identities
\begin{subequations}
\begin{align}
D_{(1) \gamma} \boldsymbol{X}_{3 \, \alpha \beta} &= 2 \text{i} (\boldsymbol{x}^{-1}_{13})_{\alpha \gamma} \Theta_{3 \, \beta} \, , \hspace{5mm} D_{(1) \alpha} \Theta_{3 \, \beta} = - (\boldsymbol{x}_{13}^{-1})_{\beta \alpha} \, , \label{Three-point building blocks 1c - differential identities 1}\\[2mm]
D_{(2) \gamma} \boldsymbol{X}_{3 \, \alpha \beta} &= 2 \text{i} (\boldsymbol{x}^{-1}_{23})_{\beta \gamma} \Theta_{3 \, \alpha} \, , \hspace{5mm} D_{(2) \alpha} \Theta_{3 \, \beta} = (\boldsymbol{x}_{23}^{-1})_{\beta \alpha} \, . \label{Three-point building blocks 1c - differential identities 2}
\end{align}
\end{subequations}
Now given a function $f(\boldsymbol{X}_{3} , \Theta_{3})$, there are the following differential identities which arise as a consequence of \eqref{Three-point building blocks 1a - differential identities 1}, \eqref{Three-point building blocks 1c - differential identities 1} and \eqref{Three-point building blocks 1c - differential identities 2}:
\begin{subequations}
\begin{align}
D_{(1) \gamma} f(\boldsymbol{X}_{3} , \Theta_{3}) &= (\boldsymbol{x}_{13}^{-1})_{\alpha \gamma} {\cal D}_{(3)}^{\alpha} f(\boldsymbol{X}_{3} , \Theta_{3}) \, , \label{Three-point building blocks 1c - differential identities 3} \\[2mm]
D_{(2) \gamma} f(\boldsymbol{X}_{3} , \Theta_{3}) &= \text{i} (\boldsymbol{x}_{23}^{-1})_{\alpha \gamma} {\cal Q}_{(3)}^{\alpha} f(\boldsymbol{X}_{3} , \Theta_{3}) \, . \label{Three-point building blocks 1c - differential identities 4}
\end{align}
\end{subequations}
These will prove to be essential for imposing differential constraints on three-point correlation functions of primary superfields.
\section{General formalism for correlation functions of primary superfields}\label{section3}
In this section we develop a formalism to construct correlation functions of primary superfields in 3D superconformal field theories. We utilise a hybrid method which combines auxiliary spinors
with the approach of~\cite{Park:1999cw, Buchbinder:2015qsa}.
\subsection{Two-point functions}\label{subsection3.1}
Let $\Phi_{{\cal A}}$ be a primary superfield with dimension $\Delta$, where ${\cal A}$ denotes a collection of Lorentz spinor indices. The two-point correlation function of $\Phi_{{\cal A}}$ is fixed by superconformal symmetry to the form
\begin{equation} \label{Two-point correlation function}
\langle \Phi_{{\cal A}}(z_{1}) \, \Phi^{{\cal B}}(z_{2}) \rangle = c \, \frac{{\cal I}_{{\cal A}}{}^{{\cal B}}(\boldsymbol{x}_{12})}{(\boldsymbol{x}_{12}^{2})^{\Delta}} \, ,
\end{equation}
where ${\cal I}$ is an appropriate representation of the inversion tensor and $c$ is a constant real parameter. The denominator of the two-point function is determined by the conformal dimension of $\Phi_{{\cal A}}$, which guarantees that the correlation function transforms with the appropriate weight under scale transformations.
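The scaling behaviour of \eqref{Two-point correlation function} can be illustrated numerically in the scalar case, where the inversion-tensor factor is trivial. This is a bosonic toy sketch: $\boldsymbol{x}^{2}$ is modelled by a determinant-type quadratic scalar (an assumption about conventions), and only the weight under $x \mapsto \lambda^{-2} x$ is tested.

```python
import numpy as np

rng = np.random.default_rng(2)
x12 = rng.normal(size=(2, 2))           # toy 2x2 matrix standing in for x_{12}

# Assumption: x^2 is a quadratic scalar of x, so it has weight lam^-4 under
# x -> lam^-2 x; |det| is used purely as a stand-in with that property.
x2 = lambda M: abs(np.linalg.det(M))

Delta, c, lam = 1.5, 0.7, 2.3
F = lambda M: c / x2(M) ** Delta        # scalar version of the two-point function

# Under x -> lam^-2 x the correlator picks up (lam^2)^{2 Delta},
# i.e. the weight lam^{2 Delta} for each of the two fields.
assert np.isclose(F(lam**-2 * x12), lam ** (4 * Delta) * F(x12))
```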
\subsection{Three-point functions}\label{subsection3.2}
In this subsection we will review the various properties of three-point correlation functions in 3D ${\cal N}=1$ superconformal field theory. First we present the superfield ansatz
introduced by Park in \cite{Park:1999cw}. We then develop a new index-free formalism utilising auxiliary spinors to simplify the overall form of the three-point function, with the ultimate aim of constructing a generating function for arbitrary spins.
\subsubsection{Superfield ansatz}\label{subsubsection3.2.1}
Concerning three-point correlation functions, let $\Phi$, $\Psi$, $\Pi$ be primary superfields with scale dimensions $\Delta_{1}$, $\Delta_{2}$ and $\Delta_{3}$ respectively. The three-point function may be
constructed using the general ansatz
\begin{align}
\langle \Phi_{{\cal A}_{1}}(z_{1}) \, \Psi_{{\cal A}_{2}}(z_{2}) \, \Pi_{{\cal A}_{3}}(z_{3}) \rangle = \frac{ {\cal I}^{(1)}{}_{{\cal A}_{1}}{}^{{\cal A}'_{1}}(\boldsymbol{x}_{13}) \, {\cal I}^{(2)}{}_{{\cal A}_{2}}{}^{{\cal A}'_{2}}(\boldsymbol{x}_{23}) }{(\boldsymbol{x}_{13}^{2})^{\Delta_{1}} (\boldsymbol{x}_{23}^{2})^{\Delta_{2}}}
\; {\cal H}_{{\cal A}'_{1} {\cal A}'_{2} {\cal A}_{3}}(\boldsymbol{X}_{12}, \Theta_{12}) \, , \label{Three-point function - general ansatz}
\end{align}
where the tensor ${\cal H}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3}}$ encodes all information about the correlation function, and is related to the leading singular OPE coefficient \cite{Osborn:1993cr}. It is highly constrained by superconformal symmetry as follows:
\begin{enumerate}
\item[\textbf{(i)}] Under scale transformations of $\mathbb{M}^{3|2}$, $z = ( x, \theta ) \mapsto z' = ( \lambda^{-2} x, \lambda^{-1} \theta )$, the three-point covariants transform as $( \boldsymbol{X}, \Theta) \mapsto ( \boldsymbol{X}', \Theta') = ( \lambda^{2} \boldsymbol{X}, \lambda \Theta )$. As a consequence, the correlation function transforms as
%
\begin{equation}
\langle \Phi_{{\cal A}_{1}}(z_{1}') \, \Psi_{{\cal A}_{2}}(z_{2}') \, \Pi_{{\cal A}_{3}}(z_{3}') \rangle = (\lambda^{2})^{\Delta_{1} + \Delta_{2} + \Delta_{3}} \langle \Phi_{{\cal A}_{1}}(z_{1}) \, \Psi_{{\cal A}_{2}}(z_{2}) \, \Pi_{{\cal A}_{3}}(z_{3}) \rangle \, ,
\end{equation}
which implies that ${\cal H}$ obeys the scaling property
%
\begin{equation}
{\cal H}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3}}( \lambda^{2} \boldsymbol{X}, \lambda \Theta) = (\lambda^{2})^{\Delta_{3} - \Delta_{2} - \Delta_{1}} \, {\cal H}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3}}(\boldsymbol{X}, \Theta) \, , \hspace{5mm} \forall \lambda \in \mathbb{R} \, \backslash \, \{ 0 \} \, .
\end{equation}
This guarantees that the correlation function transforms correctly under scale transformations.
\item[\textbf{(ii)}] If any of the fields $\Phi$, $\Psi$, $\Pi$ obey differential equations, such as conservation laws in the case of conserved currents, then the tensor ${\cal H}$ is also constrained by differential equations which may be derived with the aid of identities \eqref{Three-point building blocks 1c - differential identities 3}, \eqref{Three-point building blocks 1c - differential identities 4}.
\item[\textbf{(iii)}] If any (or all) of the operators $\Phi$, $\Psi$, $\Pi$ coincide, the correlation function possesses symmetries under permutations of spacetime points, e.g.
%
\begin{equation}
\langle \Phi_{{\cal A}_{1}}(z_{1}) \, \Phi_{{\cal A}_{2}}(z_{2}) \, \Pi_{{\cal A}_{3}}(z_{3}) \rangle = (-1)^{\epsilon(\Phi)} \langle \Phi_{{\cal A}_{2}}(z_{2}) \, \Phi_{{\cal A}_{1}}(z_{1}) \, \Pi_{{\cal A}_{3}}(z_{3}) \rangle \, ,
\end{equation}
%
where $\epsilon(\Phi)$ is the Grassmann parity of $\Phi$. As a consequence, the tensor ${\cal H}$ obeys constraints which will be referred to as ``point-switch identities''.
\end{enumerate}
The constraints above fix the functional form of ${\cal H}$ (and therefore the correlation function) up to finitely many independent parameters. Hence, using the general ansatz \eqref{Three-point function - general ansatz}, the problem of computing three-point correlation functions is reduced to deriving the general structure of the tensor ${\cal H}$ subject to the above constraints.
\subsubsection{A note on conserved three-point functions}\label{subsubsection3.2.2}
An important aspect of this construction is that depending on the way in which one constructs the general ansatz \eqref{H ansatz}, it can be difficult to impose conservation equations on one of the three fields due to a lack of useful identities such as \eqref{Three-point building blocks 1c - differential identities 1}, \eqref{Three-point building blocks 1c - differential identities 2}. For this reason it is useful to switch between the various representations of the three-point function. To illustrate this process more clearly, consider the following example: suppose we have obtained a solution for the correlation function $\langle \Phi_{{\cal A}_{1}}(z_{1}) \, \Psi_{{\cal A}_{2}}(z_{2}) \, \Pi_{{\cal A}_{3}}(z_{3}) \rangle$, with the ansatz
\begin{equation} \label{H ansatz}
\langle \Phi_{{\cal A}_{1}}(z_{1}) \, \Psi_{{\cal A}_{2}}(z_{2}) \, \Pi_{{\cal A}_{3}}(z_{3}) \rangle = \frac{ {\cal I}^{(1)}{}_{{\cal A}_{1}}{}^{{\cal A}'_{1}}(\boldsymbol{x}_{13}) \, {\cal I}^{(2)}{}_{{\cal A}_{2}}{}^{{\cal A}'_{2}}(\boldsymbol{x}_{23}) }{(\boldsymbol{x}_{13}^{2})^{\Delta_{1}} (\boldsymbol{x}_{23}^{2})^{\Delta_{2}}}
\; {\cal H}_{{\cal A}'_{1} {\cal A}'_{2} {\cal A}_{3}}(\boldsymbol{X}_{12}, \Theta_{12}) \, .
\end{equation}
All information about this correlation function is encoded in the tensor ${\cal H}$, and one can impose conservation on $z_{1}$ and $z_{2}$ using the identities \eqref{Three-point building blocks 1c - differential identities 1}, \eqref{Three-point building blocks 1c - differential identities 2}, \eqref{Three-point building blocks 1c - differential identities 3}, \eqref{Three-point building blocks 1c - differential identities 4}. However, this particular formulation of the three-point function prevents us from imposing conservation on $z_{3}$ in a straightforward way. Let us now reformulate the ansatz with $\Pi$ at the front as follows:
\begin{equation} \label{Htilde ansatz}
\langle \Pi_{{\cal A}_{3}}(z_{3}) \, \Psi_{{\cal A}_{2}}(z_{2}) \, \Phi_{{\cal A}_{1}}(z_{1}) \rangle = \frac{ {\cal I}^{(3)}{}_{{\cal A}_{3}}{}^{{\cal A}'_{3}}(\boldsymbol{x}_{31}) \, {\cal I}^{(2)}{}_{{\cal A}_{2}}{}^{{\cal A}'_{2}}(\boldsymbol{x}_{21}) }{(\boldsymbol{x}_{31}^{2})^{\Delta_{3}} (\boldsymbol{x}_{21}^{2})^{\Delta_{2}}}
\; \tilde{{\cal H}}_{{\cal A}_{1} {\cal A}'_{2} {\cal A}'_{3} }(\boldsymbol{X}_{23}, \Theta_{23}) \, .
\end{equation}
In this case, all information about this correlation function is now encoded in the tensor $\tilde{{\cal H}}$, which has a completely different structure compared to ${\cal H}$. Conservation on $\Pi$ can now be imposed by treating $z_{3}$ as the first point with the aid of identities analogous to \eqref{Three-point building blocks 1c - differential identities 3}, \eqref{Three-point building blocks 1c - differential identities 4}. We now require an equation relating the tensors ${\cal H}$ and $\tilde{{\cal H}}$, which correspond to different representations of the same correlation function. Equating the two ans\"atze above, we obtain the following:
\begin{align} \label{Htilde and H relation}
\tilde{{\cal H}}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3} }(\boldsymbol{X}_{23}, \Theta_{23}) &= (\boldsymbol{x}_{13}^{2})^{\Delta_{3} - \Delta_{1}} \bigg(\frac{\boldsymbol{x}_{21}^{2}}{\boldsymbol{x}_{23}^{2}} \bigg)^{\hspace{-1mm} \Delta_{2}} \, {\cal I}^{(1)}{}_{{\cal A}_{1}}{}^{{\cal A}'_{1}}(\boldsymbol{x}_{13}) \, {\cal I}^{(2)}{}_{{\cal A}_{2}}{}^{{\cal B}_{2}}(\boldsymbol{x}_{12}) \, {\cal I}^{(2)}{}_{{\cal B}_{2}}{}^{{\cal A}'_{2}}(\boldsymbol{x}_{23}) \nonumber \\[-2mm]
& \hspace{50mm} \times {\cal I}^{(3)}{}_{{\cal A}_{3}}{}^{{\cal A}'_{3}}(\boldsymbol{x}_{13}) \, {\cal H}_{{\cal A}'_{1} {\cal A}'_{2} {\cal A}'_{3}}(\boldsymbol{X}_{12}, \Theta_{12}) \, ,
\end{align}
where we have ignored any signs due to Grassmann parity. Before we can simplify the above equation, we must understand how the inversion tensor acts on ${\cal H}(\boldsymbol{X},\Theta)$. Now let:
\begin{align}
{\cal H}_{ {\cal A}_{1} {\cal A}_{2} {\cal A}_{3} }(\boldsymbol{X},\Theta) &= \boldsymbol{X}^{\Delta_{3} - \Delta_{2}- \Delta_{1}} \hat{{\cal H}}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3}}(\boldsymbol{X}, \Theta) \, ,
\end{align}
where $\hat{{\cal H}}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3}}(\boldsymbol{X}, \Theta)$ is homogeneous degree 0 in $(\boldsymbol{X}, \Theta)$, i.e.
\begin{align}
\hat{{\cal H}}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3}}(\lambda^{2} \boldsymbol{X}, \lambda \Theta) &= \hat{{\cal H}}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3}}(\boldsymbol{X}, \Theta) \, .
\end{align}
The tensor $\hat{{\cal H}}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3}}(\boldsymbol{X}, \Theta)$ can be constructed from totally symmetric, homogeneous degree 0 combinations of $\ve$, $\boldsymbol{X}$ and $\Theta$, compatible with the set of indices ${\cal A}_{1}, {\cal A}_{2}, {\cal A}_{3}$, hence, we consider the following objects:
\begin{align}
\ve_{\alpha \beta} \, , \hspace{5mm} \hat{\boldsymbol{X}}_{\alpha \beta} \, , \hspace{5mm} \hat{\Theta}_{\alpha} \, , \hspace{5mm} (\hat{\boldsymbol{X}} \cdot \hat{\Theta})_{\alpha} = \hat{\boldsymbol{X}}_{\alpha \beta} \hat{\Theta}^{\beta} \, , \hspace{5mm} \boldsymbol{J} = \hat{\Theta}^{2} \, .
\end{align}
Now to simplify \eqref{Htilde and H relation}, consider
\begin{equation}
{\cal I}^{(1)}{}_{{\cal A}_{1}}{}^{{\cal A}'_{1}}(\boldsymbol{x}_{13}) \, {\cal I}^{(2)}{}_{{\cal A}_{2}}{}^{{\cal A}'_{2}}(\boldsymbol{x}_{13}) \, {\cal I}^{(3)}{}_{{\cal A}_{3}}{}^{{\cal A}'_{3}}(\boldsymbol{x}_{13}) \, \hat{{\cal H}}_{{\cal A}'_{1} {\cal A}'_{2} {\cal A}'_{3}}(\boldsymbol{X}_{12}, \Theta_{12}) \, .
\end{equation}
Only combinations of the following fundamental products may appear in the result:
\begin{subequations}
\begin{align}
{\cal I}_{\alpha}{}^{\alpha'}(\boldsymbol{x}_{13}) \, {\cal I}_{\beta}{}^{\beta'}(\boldsymbol{x}_{13}) \, \ve_{\alpha' \beta'} &= - \ve_{\alpha \beta} \, , \\
{\cal I}_{\alpha}{}^{\alpha'}(\boldsymbol{x}_{13}) \, {\cal I}_{\beta}{}^{\beta'}(\boldsymbol{x}_{13}) \, \hat{\boldsymbol{X}}_{12 \, \alpha' \beta'} &= \hat{\boldsymbol{X}}_{23 \, \alpha \beta} \, , \\
{\cal I}_{\alpha}{}^{\alpha'}(\boldsymbol{x}_{13}) \, \hat{\Theta}_{12 \, \alpha'} &= \hat{\Theta}^{I}_{23 \, \alpha} \, , \\
{\cal I}_{\alpha}{}^{\alpha'}(\boldsymbol{x}_{13}) \, (\hat{\boldsymbol{X}}_{12} \cdot \hat{\Theta}_{12})_{\alpha'} &= - (\hat{\boldsymbol{X}}_{23} \cdot \hat{\Theta}^{I}_{23})_{\alpha} \, ,
\end{align}
\end{subequations}
where $\hat{\Theta}^{I}_{ij}$ was defined in~\eqref{zh3}.
For correlation functions involving the superconformal invariant, $\boldsymbol{J}$, we must note that $\boldsymbol{J}^{I} = (\hat{\Theta}^{I})^2 = - \boldsymbol{J}$. These identities are consequences of \eqref{Three-point building blocks 1a - properties 1}, \eqref{Three-point building blocks 1a - properties 2}. If we now denote the above transformation by ${\cal I}_{13}$, then it acts on $\hat{{\cal H}}(\boldsymbol{X}_{12},\Theta_{12})$ as follows:
\begin{subequations}
\begin{align} \label{Inversion even objects}
\hat{\boldsymbol{X}}_{12} \xrightarrow[]{{\cal I}_{13}} \hat{\boldsymbol{X}}_{23} \, , \hspace{10mm} \hat{\Theta}_{12} \xrightarrow[]{{\cal I}_{13}} \hat{\Theta}^{I}_{23} \, ,
\end{align}
%
\vspace{-10mm}
%
\begin{align} \label{Inversion odd objects}
\ve \xrightarrow[]{{\cal I}_{13}} -\ve \, , \hspace{10mm} \hat{\boldsymbol{X}}_{12} \cdot \hat{\Theta}_{12} \xrightarrow[]{{\cal I}_{13}} - \hat{\boldsymbol{X}}_{23} \cdot \hat{\Theta}^{I}_{23} \, ,\hspace{10mm} \boldsymbol{J} \xrightarrow[]{{\cal I}_{13}} - \boldsymbol{J}^{I} \, .
\end{align}
\end{subequations}
Hence, due to their transformation properties under ${\cal I}$, the objects \eqref{Inversion even objects} are classified as ``parity-even'', as they are invariant under ${\cal I}$, while the objects \eqref{Inversion odd objects} are classified as ``parity-odd'', as they are pseudo-invariant under ${\cal I}$. At this point it is convenient to partition our solution into ``even'' and ``odd'' sectors as follows:
\begin{equation}
{\cal H}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3} }(\boldsymbol{X}, \Theta) = {\cal H}^{(+)}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3} }(\boldsymbol{X}, \Theta) + {\cal H}^{(-)}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3} }(\boldsymbol{X}, \Theta) \, ,
\end{equation}
where ${\cal H}^{(+)}$ contains all structures that are invariant under ${\cal I}$, and ${\cal H}^{(-)}$ contains all structures that are pseudo-invariant under ${\cal I}$. With this choice of convention, as a consequence of \eqref{Three-point building blocks 1a - properties 1}, \eqref{Three-point building blocks 1a - properties 2}, the following relation holds:
\begin{align} \label{Hc and H relation}
\hat{{\cal H}}^{I \, (\pm)}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3}}(\boldsymbol{X}_{23}, \Theta_{23}) &= \pm \, {\cal I}^{(1)}{}_{{\cal A}_{1}}{}^{{\cal A}'_{1}}(\boldsymbol{x}_{13}) \, {\cal I}^{(2)}{}_{{\cal A}_{2}}{}^{{\cal A}'_{2}}(\boldsymbol{x}_{13}) \nonumber \\
& \hspace{20mm} \times {\cal I}^{(3)}{}_{{\cal A}_{3}}{}^{{\cal A}'_{3}}(\boldsymbol{x}_{13}) \, \hat{{\cal H}}^{(\pm)}_{{\cal A}'_{1} {\cal A}'_{2} {\cal A}'_{3}}(\boldsymbol{X}_{12}, \Theta_{12}) \, ,
\end{align}
where $\hat{{\cal H}}^{I \, (\pm)}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3}}(\boldsymbol{X}, \Theta) = \hat{{\cal H}}^{(\pm)}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3}}(\boldsymbol{X}, \Theta^{I})$. A result analogous to \eqref{Inversion even objects}, \eqref{Inversion odd objects} that follows from the properties of the inversion tensor acting on $(\boldsymbol{X}, \Theta)$ is
\begin{subequations}
\begin{align} \label{Inverison even objects - X}
\hat{\boldsymbol{X}} \xrightarrow[]{{\cal I}_{\boldsymbol{X}}} - \hat{\boldsymbol{X}} \, , \hspace{10mm} \hat{\Theta} \xrightarrow[]{{\cal I}_{\boldsymbol{X}}} \hat{\Theta}^{I} \, ,
\end{align}
%
\vspace{-10mm}
%
\begin{align} \label{Inverison odd objects - X}
\ve \xrightarrow[]{{\cal I}_{\boldsymbol{X}}} -\ve \, , \hspace{10mm} \hat{\boldsymbol{X}} \cdot \hat{\Theta} \xrightarrow[]{{\cal I}_{\boldsymbol{X}}} \hat{\boldsymbol{X}} \cdot \hat{\Theta}^{I}\, ,\hspace{10mm} \boldsymbol{J} \xrightarrow[]{{\cal I}_{\boldsymbol{X}}} - \boldsymbol{J}^{I} \, .
\end{align}
\end{subequations}
Hence, to obtain the desired transformation properties as in \eqref{Inversion even objects}, \eqref{Inversion odd objects}, we consider ${\cal H}(-\boldsymbol{X}, \Theta)$ and obtain the formula
\begin{align} \label{H inversion}
{\cal H}^{I \, (\pm)}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3}}(\boldsymbol{X}, \Theta) = \pm \, {\cal I}^{(1)}{}_{{\cal A}_{1}}{}^{{\cal A}'_{1}}(\boldsymbol{X}) \, {\cal I}^{(2)}{}_{{\cal A}_{2}}{}^{{\cal A}'_{2}}(\boldsymbol{X}) \, {\cal I}^{(3)}{}_{{\cal A}_{3}}{}^{{\cal A}'_{3}}(\boldsymbol{X}) \, {\cal H}^{(\pm)}_{{\cal A}'_{1} {\cal A}'_{2} {\cal A}'_{3}}(-\boldsymbol{X}, \Theta) \, ,
\end{align}
which is generally simpler to compute. After substituting \eqref{Hc and H relation} into \eqref{Htilde and H relation}, we obtain the following relation between ${\cal H}$ and $\tilde{{\cal H}}$:
\begin{equation} \label{Htilde and Hc relation}
\tilde{{\cal H}}^{(\pm)}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3} }(\boldsymbol{X},\Theta) = \pm \, (\boldsymbol{X}^{2})^{\Delta_{1} - \Delta_{3}} \, {\cal I}^{(2)}{}_{{\cal A}_{2}}{}^{{\cal A}'_{2}}(\boldsymbol{X}) \, {\cal H}^{I \, (\pm)}_{{\cal A}_{1} {\cal A}'_{2} {\cal A}_{3}}(\boldsymbol{X}, \Theta) \, .
\end{equation}
It is now apparent that ${\cal I}$ acts as an intertwining operator between the various representations of the correlation function. Once $\tilde{{\cal H}}$ is obtained we can then impose conservation on $\Pi$ as if it were located at the ``first point'', using identities analogous to \eqref{Three-point building blocks 1c - differential identities 3}, \eqref{Three-point building blocks 1c - differential identities 4}.
If we now consider the correlation function of three conserved primary superfields $\mathbf{J}^{}_{\alpha(I)}$, $\mathbf{J}'_{\beta(J)}$, $\mathbf{J}''_{\gamma(K)}$, where $I=2s_{1}$, $J=2s_{2}$, $K=2s_{3}$, then the general ansatz is
\begin{align} \label{Conserved correlator ansatz}
\langle \, \mathbf{J}^{}_{\alpha(I)}(z_{1}) \, \mathbf{J}'_{\beta(J)}(z_{2}) \, \mathbf{J}''_{\gamma(K)}(z_{3}) \rangle = \frac{ {\cal I}_{\alpha(I)}{}^{\alpha'(I)}(\boldsymbol{x}_{13}) \, {\cal I}_{\beta(J)}{}^{\beta'(J)}(\boldsymbol{x}_{23}) }{(\boldsymbol{x}_{13}^{2})^{\Delta_{1}} (\boldsymbol{x}_{23}^{2})^{\Delta_{2}}}
\; {\cal H}_{\alpha'(I) \beta'(J) \gamma(K)}(\boldsymbol{X}_{12}, \Theta_{12}) \, ,
\end{align}
where $\Delta_{i} = s_{i} + 1$. The constraints on ${\cal H}$ are then as follows:
\begin{enumerate}
\item[\textbf{(i)}] {\bf Homogeneity:}
%
\begin{equation}
{\cal H}_{\alpha(I) \beta(J) \gamma(K)}(\lambda^{2} \boldsymbol{X}, \lambda \Theta) = (\lambda^{2})^{\Delta_{3} - \Delta_{2} - \Delta_{1}} \, {\cal H}_{\alpha(I) \beta(J) \gamma(K)}(\boldsymbol{X}, \Theta) \, .
\end{equation}
It is often convenient to introduce $\hat{{\cal H}}_{\alpha(I) \beta(J) \gamma(K)}(\boldsymbol{X}, \Theta)$, such that
%
\begin{align}
{\cal H}_{\alpha(I) \beta(J) \gamma(K)}(\boldsymbol{X},\Theta) &= \boldsymbol{X}^{\Delta_{3} - \Delta_{2}- \Delta_{1}} \hat{{\cal H}}_{\alpha(I) \beta(J) \gamma(K)}(\boldsymbol{X}, \Theta) \, ,
\end{align}
%
where $\hat{{\cal H}}_{\alpha(I) \beta(J) \gamma(K)}(\boldsymbol{X}, \Theta)$ is homogeneous degree 0 in $(\boldsymbol{X}, \Theta)$, i.e.
%
\begin{align}
\hat{{\cal H}}_{\alpha(I) \beta(J) \gamma(K)}(\lambda^{2} \boldsymbol{X}, \lambda \Theta) &= \hat{{\cal H}}_{\alpha(I) \beta(J) \gamma(K)}(\boldsymbol{X}, \Theta) \, .
\end{align}
\item[\textbf{(ii)}] {\bf Differential constraints:} \\
%
After application of the identities \eqref{Three-point building blocks 1c - differential identities 3}, \eqref{Three-point building blocks 1c - differential identities 4} we obtain the following constraints:
%
\begin{subequations}
\begin{align}
\text{Conservation at $z_{1}$:} && {\cal D}^{\alpha} {\cal H}_{\alpha \alpha(I - 1) \beta(J) \gamma(K)}(\boldsymbol{X}, \Theta) &= 0 \, , \\
\text{Conservation at $z_{2}$:} && {\cal Q}^{\beta} {\cal H}_{\alpha(I) \beta \beta(J-1) \gamma(K)}(\boldsymbol{X}, \Theta) &= 0 \, , \\
\text{Conservation at $z_{3}$:} && {\cal Q}^{\gamma} \tilde{{\cal H}}_{\alpha(I) \beta(J) \gamma \gamma(K-1) }(\boldsymbol{X}, \Theta) &= 0 \, ,
\end{align}
\end{subequations}
%
where
%
\begin{equation}
\tilde{{\cal H}}^{(\pm)}_{\alpha(I) \beta(J) \gamma(K) }(\boldsymbol{X}, \Theta) = \pm \, (\boldsymbol{X}^{2})^{\Delta_{1} - \Delta_{3}} \, {\cal I}_{\beta(J)}{}^{\beta'(J)}(\boldsymbol{X}) \, {\cal H}^{I \, (\pm)}_{\alpha(I) \beta'(J) \gamma(K)}(\boldsymbol{X}, \Theta) \, .
\end{equation}
%
\item[\textbf{(iii)}] {\bf Point-switch symmetries:} \\
%
If the fields $\mathbf{J}$ and $\mathbf{J}'$ coincide, then we obtain the following point-switch identity
%
\begin{equation}
{\cal H}_{\alpha(I) \beta(I) \gamma(K)}(\boldsymbol{X}, \Theta) = (-1)^{\epsilon(\mathbf{J})} {\cal H}_{\beta(I) \alpha(I) \gamma(K)}(-\boldsymbol{X}^{\text{T}}, -\Theta) \, ,
\end{equation}
%
where $\epsilon(\mathbf{J})$ is the Grassmann parity of $\mathbf{J}$. Likewise, if the fields $\mathbf{J}$ and $\mathbf{J}''$ coincide, then we obtain the constraint
%
\begin{equation}
\tilde{{\cal H}}_{\alpha(I) \beta(J) \gamma(I) }(\boldsymbol{X}, \Theta) = (-1)^{\epsilon(\mathbf{J})} {\cal H}_{\gamma(I) \beta(J) \alpha(I)}(-\boldsymbol{X}^{\text{T}}, -\Theta) \, .
\end{equation}
%
\end{enumerate}
In practice, imposing these constraints on correlation functions involving higher-spin supercurrents quickly becomes unwieldy using the tensor formalism, particularly due to the sheer number of possible tensor structures for a given set of superspins. Hence, in the next subsections we will develop an index-free formalism to handle the computations efficiently, using the same approach as \cite{Buchbinder:2022mys}.
\subsubsection{Auxiliary spinor formalism}\label{subsubsection3.2.3}
Suppose we must analyse the constraints on a general spin-tensor ${\cal H}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3}}(\boldsymbol{X}, \Theta)$, where ${\cal A}_{1} = \{ \alpha_{1}, ... , \alpha_{I} \}, {\cal A}_{2} = \{ \beta_{1}, ... , \beta_{J} \}, {\cal A}_{3} = \{ \gamma_{1}, ... , \gamma_{K} \}$ represent sets of totally symmetric spinor indices associated with the fields at points $z_{1}$, $z_{2}$ and $z_{3}$ respectively. We introduce sets of commuting auxiliary spinors for each point: $u$ at $z_{1}$, $v$ at $z_{2}$, and $w$ at $z_{3}$, where the spinors satisfy
\begin{align}
u^2 &= \varepsilon_{\alpha \beta} \, u^{\alpha} u^{\beta}=0\,, &
v^{2} &= \varepsilon_{\alpha \beta} \, v^{\alpha} v^{\beta}=0\,, & w^{2} &= \varepsilon_{\alpha \beta} \, w^{\alpha} w^{\beta}=0\,.
\label{extra1}
\end{align}
Now if we define the objects
\begin{subequations}
\begin{align}
\boldsymbol{u}^{{\cal A}_{1}} &\equiv \boldsymbol{u}^{\alpha(I)} = u^{\alpha_{1}} \dots u^{\alpha_{I}} \, , \\
\boldsymbol{v}^{{\cal A}_{2}} &\equiv \boldsymbol{v}^{\beta(J)} = v^{\beta_{1}} \dots v^{\beta_{J}} \, , \\
\boldsymbol{w}^{{\cal A}_{3}} &\equiv \boldsymbol{w}^{\gamma(K)} = w^{\gamma_{1}} \dots w^{\gamma_{K}} \, ,
\end{align}
\end{subequations}
then the generating polynomial for ${\cal H}$ is constructed as follows:
\begin{equation} \label{H - generating polynomial}
{\cal H}(\boldsymbol{X}, \Theta; u,v,w) = \,{\cal H}_{ {\cal A}_{1} {\cal A}_{2} {\cal A}_{3} }(\boldsymbol{X}, \Theta) \, \boldsymbol{u}^{{\cal A}_{1}} \boldsymbol{v}^{{\cal A}_{2}} \boldsymbol{w}^{{\cal A}_{3}} \, .
\end{equation}
There is a one-to-one mapping between the space of symmetric traceless spin tensors and the polynomials constructed using the above method. Indeed, the tensor ${\cal H}$ is extracted from the polynomial by acting on it with the following partial derivative operators:
\begin{subequations}
\begin{align}
\frac{\pa}{\pa \boldsymbol{u}^{{\cal A}_{1}} } &\equiv \frac{\pa}{\pa \boldsymbol{u}^{\alpha(I)}} = \frac{1}{I!} \frac{\pa}{\pa u^{\alpha_{1}} } \dots \frac{\pa}{\pa u^{\alpha_{I}}} \, , \\
\frac{\pa}{\pa \boldsymbol{v}^{{\cal A}_{2}} } &\equiv \frac{\pa}{\pa \boldsymbol{v}^{\beta(J)}} = \frac{1}{J!} \frac{\pa}{\pa v^{\beta_{1}} } \dots \frac{\pa}{\pa v^{\beta_{J}}} \, , \\
\frac{\pa}{\pa \boldsymbol{w}^{{\cal A}_{3}} } &\equiv \frac{\pa}{\pa \boldsymbol{w}^{\gamma(K)}} = \frac{1}{K!} \frac{\pa}{\pa w^{\gamma_{1}} } \dots \frac{\pa}{\pa w^{\gamma_{K}}} \, .
\end{align}
\end{subequations}
The tensor ${\cal H}$ is then extracted from the polynomial as follows:
\begin{equation}
{\cal H}_{{\cal A}_{1} {\cal A}_{2} {\cal A}_{3}}(\boldsymbol{X}, \Theta) = \frac{\pa}{ \pa \boldsymbol{u}^{{\cal A}_{1}} } \frac{\pa}{ \pa \boldsymbol{v}^{{\cal A}_{2}}} \frac{\pa}{ \pa \boldsymbol{w}^{{\cal A}_{3}} } \, {\cal H}(\boldsymbol{X}, \Theta; u, v, w) \, .
\end{equation}
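This one-to-one correspondence can be verified symbolically in a small sketch (hypothetical component values, low ranks $I=2$, $J=K=1$ for brevity): contracting a tensor symmetric in each index group with the auxiliary monomials and then applying the normalised derivative operators recovers every component.

```python
import itertools
import sympy as sp

u = sp.symbols('u0 u1'); v = sp.symbols('v0 v1'); w = sp.symbols('w0 w1')

# A sample tensor H_{a1 a2, b, c}, symmetric in the two u-type indices
# (I = 2, J = K = 1); the numeric values are arbitrary.
H = {idx: sp.Integer(10 * (min(idx[0], idx[1]) + 1) + 4 * max(idx[0], idx[1])
                     + 2 * idx[2] + idx[3])
     for idx in itertools.product(range(2), repeat=4)}

# Generating polynomial  H(u, v, w) = H_{a1 a2 b c} u^{a1} u^{a2} v^{b} w^{c}.
P = sum(H[a1, a2, b, c] * u[a1] * u[a2] * v[b] * w[c]
        for (a1, a2, b, c) in H)

# Extraction with the normalised derivatives (1/I!) d/du^{a1} d/du^{a2} d/dv^b d/dw^c.
def extract(a1, a2, b, c):
    return sp.expand(sp.diff(P, u[a1], u[a2], v[b], w[c]) / sp.factorial(2))

# The map is one-to-one on symmetric tensors: every component is recovered.
assert all(extract(*idx) == H[idx] for idx in H)
```

The $1/I!$ normalisation is what compensates for the $I!$ equal terms produced when the derivatives hit the symmetrised monomial $\boldsymbol{u}^{\alpha(I)}$.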
Auxiliary spinors are widely used
in the construction of correlation functions throughout the literature (see e.g.~\cite{Giombi:2011rz, Costa:2011mg, Stanev:2012nq, Zhiboedov:2012bm, Nizami:2013tpa, Elkhidir:2014woa}).
Usually, however, the entire correlator is contracted with auxiliary variables, producing a polynomial
that depends on all three superspace points and the auxiliary spinors. In contrast, our approach contracts the auxiliary spinors with the tensor ${\cal H}_{ {\cal A}_{1} {\cal A}_{2} {\cal A}_{3} }(\boldsymbol{X}, \Theta)$,
which depends only on $\boldsymbol{X}$ and $\Theta$. As a result, it is straightforward to impose constraints on the correlation function, since ${\cal H}$ does not depend explicitly on any of the superspace points.
The full three-point function may be translated into the auxiliary spinor formalism. Recalling that $I = 2s_{1}$, $J = 2s_{2}$, $K = 2s_{3}$, we first define:
\begin{subequations}
\begin{align}
\mathbf{J}^{}_{s_{1}}(z_{1}; u) & = \mathbf{J}_{\alpha(I)}(z_{1}) \, \boldsymbol{u}^{\alpha(I)} \, , & \mathbf{J}'_{s_{2}}(z_{2}; v) &= \mathbf{J}'_{\beta(J)}(z_{2}) \, \boldsymbol{v}^{\beta(J)} \, ,
\end{align}
\vspace{-10mm}
\begin{align}
\mathbf{J}''_{s_{3}}(z_{3}; w) &= \mathbf{J}''_{\gamma(K)}(z_{3}) \, \boldsymbol{w}^{\gamma(K)} \, .
\end{align}
\end{subequations}
The general ansatz for the three-point function is as follows:
\begin{align}
\langle \, \mathbf{J}^{}_{s_{1}}(z_{1}; u) \, \mathbf{J}'_{s_{2}}(z_{2}; v) \, \mathbf{J}''_{s_{3}}(z_{3}; w) \rangle = \frac{ {\cal I}^{(I)}(\boldsymbol{x}_{13}; u, \tilde{u}) \, {\cal I}^{(J)}(\boldsymbol{x}_{23}; v, \tilde{v}) }{(\boldsymbol{x}_{13}^{2})^{\Delta_{1}} (\boldsymbol{x}_{23}^{2})^{\Delta_{2}}}
\; {\cal H}(\boldsymbol{X}_{12}, \Theta_{12}; \tilde{u},\tilde{v},w) \, ,
\end{align}
where
\begin{equation}
{\cal I}^{(s)}(\boldsymbol{x}; u,\tilde{u}) \equiv {\cal I}^{(s)}_{\boldsymbol{x}}(u,\tilde{u}) = \boldsymbol{u}^{\alpha(s)} {\cal I}_{\alpha(s)}{}^{\alpha'(s)}(\boldsymbol{x}) \, \frac{\pa}{\pa \tilde{\boldsymbol{u}}^{\alpha'(s)}} \, ,
\end{equation}
is the inversion operator acting on polynomials of degree $s$ in $\tilde{u}$, and $\Delta_{i} = s_{i} + 1$. After converting the constraints summarised in the previous subsection into the auxiliary spinor formalism, we obtain:
\begin{enumerate}
\item[\textbf{(i)}] {\bf Homogeneity:}
%
\begin{equation}
{\cal H}(\lambda^{2} \boldsymbol{X}, \lambda \Theta ; u(I), v(J), w(K)) = (\lambda^{2})^{\Delta_{3} - \Delta_{2} - \Delta_{1}} \, {\cal H}(\boldsymbol{X}, \Theta; u(I), v(J), w(K)) \, ,
\end{equation}
%
where we have used the notation $u(I)$, $v(J)$, $w(K)$ to keep track of the homogeneity of the auxiliary spinors $u$, $v$ and $w$.
%
\item[\textbf{(ii)}] {\bf Differential constraints:} \\
%
First, define the following three differential operators:
%
\begin{align}
D_{1} = {\cal D}^{\alpha} \frac{\pa}{\pa u^{\alpha}} \, , && D_{2} = {\cal Q}^{\alpha} \frac{\pa}{\pa v^{\alpha}} \, , && D_{3} = {\cal Q}^{\alpha} \frac{\pa}{\pa w^{\alpha}} \, .
\end{align}
%
Conservation on all three points may be imposed using the following constraints:
%
\begin{subequations} \label{Conservation equations}
\begin{align}
\text{Conservation at $z_{1}$:} && D_{1} \, {\cal H}(\boldsymbol{X}, \Theta; u(I), v(J), w(K)) &= 0 \, , \\[1mm]
\text{Conservation at $z_{2}$:} && D_{2} \, {\cal H}(\boldsymbol{X}, \Theta; u(I), v(J), w(K)) &= 0 \, , \\[1mm]
\text{Conservation at $z_{3}$:} && D_{3} \, \tilde{{\cal H}}(\boldsymbol{X}, \Theta; u(I), v(J), w(K)) &= 0 \, ,
\end{align}
\end{subequations}
where, in the auxiliary spinor formalism, $\tilde{{\cal H}} = \tilde{{\cal H}}^{(+)} + \tilde{{\cal H}}^{(-)}$ is computed as follows:
%
\begin{equation}
\tilde{{\cal H}}^{(\pm)}(\boldsymbol{X}, \Theta; u(I), v(J), w(K) ) = \pm (\boldsymbol{X}^{2})^{\Delta_{1} - \Delta_{3}} {\cal I}^{(J)}_{\boldsymbol{X}}(v,\tilde{v}) \, {\cal H}^{I \, (\pm)}(\boldsymbol{X}, \Theta; u(I), \tilde{v}(J), w(K)) \, ,
\end{equation}
%
where ${\cal I}^{(s)}_{\boldsymbol{X}}(v,\tilde{v}) \equiv {\cal I}^{(s)}(\boldsymbol{X}; v,\tilde{v})$.
%
\item[\textbf{(iii)}] {\bf Point switch symmetries:} \\
%
If the fields $\mathbf{J}$ and $\mathbf{J}'$ coincide (hence $I = J$), then we obtain the following point-switch constraint
%
\begin{equation} \label{Point switch A}
{\cal H}(\boldsymbol{X}, \Theta; u(I), v(I), w(K)) = (-1)^{\epsilon(\mathbf{J})} {\cal H}(- \boldsymbol{X}^{\text{T}}, - \Theta; v(I), u(I), w(K)) \, ,
\end{equation}
%
where, again, $\epsilon(\mathbf{J})$ is the Grassmann parity of $\mathbf{J}$. Similarly, if the fields $\mathbf{J}$ and $\mathbf{J}''$ coincide (hence $I = K$) then we obtain the constraint
%
\begin{equation} \label{Point switch B}
\tilde{{\cal H}}(\boldsymbol{X}, \Theta; u(I), v(J), w(I)) = (-1)^{\epsilon(\mathbf{J})} {\cal H}(- \boldsymbol{X}^{\text{T}}, - \Theta; w(I), v(J), u(I)) \, .
\end{equation}
%
\end{enumerate}
To find an explicit solution for the polynomial \eqref{H - generating polynomial}, one must now consider all possible scalar combinations of $\boldsymbol{X}$, $\Theta$, $\ve$, $u$, $v$ and $w$ with the appropriate homogeneity. Hence, let us introduce the following structures: \\[2mm]
\textbf{Bosonic:}
\begin{subequations} \label{Basis scalar structures 1}
\begin{align}
P_{1} &= \ve_{\alpha \beta} v^{\alpha} w^{\beta} \, , & P_{2} &= \ve_{\alpha \beta} w^{\alpha} u^{\beta} \, , & P_{3} &= \ve_{\alpha \beta} u^{\alpha} v^{\beta} \, , \\
\mathbb{Q}_{1} &= \hat{\boldsymbol{X}}_{\alpha \beta} v^{\alpha} w^{\beta} \, , & \mathbb{Q}_{2} &= \hat{\boldsymbol{X}}_{\alpha \beta} w^{\alpha} u^{\beta} \, , & \mathbb{Q}_{3} &= \hat{\boldsymbol{X}}_{\alpha \beta} u^{\alpha} v^{\beta} \, , \\
\mathbb{Z}_{1} &= \hat{\boldsymbol{X}}_{\alpha \beta} u^{\alpha} u^{\beta} \, , & \mathbb{Z}_{2} &= \hat{\boldsymbol{X}}_{\alpha \beta} v^{\alpha} v^{\beta} \, , & \mathbb{Z}_{3} &= \hat{\boldsymbol{X}}_{\alpha \beta} w^{\alpha} w^{\beta} \, ,
\end{align}
\end{subequations}
\textbf{Fermionic:}
\begin{subequations} \label{Basis scalar structures 2}
\begin{align}
R_{1} &= \ve_{\alpha \beta} u^{\alpha} \hat{\Theta}^{\beta} \, , & R_{2} &= \ve_{\alpha \beta} v^{\alpha} \hat{\Theta}^{\beta} \, , & R_{3} &= \ve_{\alpha \beta} w^{\alpha} \hat{\Theta}^{\beta} \, , \\
\mathbb{S}_{1} &= \hat{\boldsymbol{X}}_{\alpha \beta} u^{\alpha} \hat{\Theta}^{\beta} \, , & \mathbb{S}_{2} &= \hat{\boldsymbol{X}}_{\alpha \beta} v^{\alpha} \hat{\Theta}^{\beta} \, , & \mathbb{S}_{3} &= \hat{\boldsymbol{X}}_{\alpha \beta} w^{\alpha} \hat{\Theta}^{\beta} \, .
\end{align}
\end{subequations}
A general solution for ${\cal H}(\boldsymbol{X}, \Theta)$ is composed of all possible combinations of $P_{i}, \mathbb{Q}_{i}, \mathbb{Z}_{i}, R_{i}, \mathbb{S}_{i}$ and $\boldsymbol{J}$ which possess the correct homogeneity in $u$, $v$ and $w$. Comparing with \eqref{Inversion even objects}, \eqref{Inversion odd objects}, we can identify the objects $P_{i}$, $\mathbb{S}_{i}$ and $\boldsymbol{J}$ as ``parity-odd" due to their transformation properties under inversions.
For the subsequent analysis of conserved three-point functions, due to the property \eqref{Three-point building blocks 1a - properties 3}, and the fact that in ${\cal N}=1$ theories $\Theta^{3} = 0 \implies \boldsymbol{X}^{2} = X^{2}$, it is generally more convenient to construct the polynomial in terms of the symmetric spin-tensor, $X_{\alpha \beta}$, rather than $\boldsymbol{X}_{\alpha \beta}$, resulting in the polynomial ${\cal H}(X, \Theta)$. Hence, we expand $\mathbb{Q}_{i}, \mathbb{Z}_{i}, \mathbb{S}_{i}$ as follows:
\begin{align}
\mathbb{Q}_{i} = Q_{i} - \frac{\text{i}}{2} \, P_{i} \, \boldsymbol{J} \, , && \mathbb{Z}_{i} = Z_{i} \, ,
&& \mathbb{S}_{i} = S_{i} \, ,
\end{align}
where we have defined
\begin{subequations}
\begin{align}
Q_{1} &= \hat{X}_{\alpha \beta} v^{\alpha} w^{\beta} \, , & Q_{2} &= \hat{X}_{\alpha \beta} w^{\alpha} u^{\beta} \, , & Q_{3} &= \hat{X}_{\alpha \beta} u^{\alpha} v^{\beta} \, , \\
Z_{1} &= \hat{X}_{\alpha \beta} u^{\alpha} u^{\beta} \, , & Z_{2} &= \hat{X}_{\alpha \beta} v^{\alpha} v^{\beta} \, , & Z_{3} &= \hat{X}_{\alpha \beta} w^{\alpha} w^{\beta} \, , \\
S_{1} &= \hat{X}_{\alpha \beta} u^{\alpha} \hat{\Theta}^{\beta} \, , & S_{2} &= \hat{X}_{\alpha \beta} v^{\alpha} \hat{\Theta}^{\beta} \, , & S_{3} &= \hat{X}_{\alpha \beta} w^{\alpha} \hat{\Theta}^{\beta} \, .
\end{align}
\end{subequations}
The polynomial ${\cal H}(X, \Theta)$ is now constructed from all possible combinations of $P_{i}$, $Q_{i}$, $Z_{i}$, $R_{i}$, $S_{i}$
and $\boldsymbol{J}$. Once a general solution for ${\cal H}(X, \Theta)$ is obtained, one can convert back to ``covariant form", ${\cal H}(\boldsymbol{X}, \Theta)$, by making the replacements
\begin{align}
Q_{i} \rightarrow \mathbb{Q}_{i} + \frac{\text{i}}{2} \, P_{i} \, \boldsymbol{J} \, , && Z_{i} \rightarrow \mathbb{Z}_{i} \, ,
&& S_{i} \rightarrow \mathbb{S}_{i} \, .
\end{align}
\subsubsection{Generating function method}\label{subsubsection3.2.4}
In general, it is a non-trivial technical problem to come up with an exhaustive list of possible solutions for ${\cal H}(X,\Theta;u,v,w)$ for a given set of superspins; however, this process can be simplified by introducing generating functions for the polynomial ${\cal H}(X,\Theta; u, v, w)$. First we introduce the function ${\cal F}(X)$, defined as follows:
\begin{align} \label{Generating function 1}
{\cal F}(X) &= X^{\delta} P_{1}^{k_{1}} P_{2}^{k_{2}} P_{3}^{k_{3}} Q_{1}^{l_{1}} Q_{2}^{l_{2}} Q_{3}^{l_{3}} Z_{1}^{m_{1}} Z_{2}^{m_{2}} Z_{3}^{m_{3}} \, ,
\end{align}
where, typically, $\delta = \Delta_{3} - \Delta_{2} - \Delta_{1}$. The generating functions for Grassmann-even and Grassmann-odd correlators in ${\cal N}=1$ theories are then defined as follows:
\begin{align} \label{Generating function 2}
{\cal G}(X,\Theta \, | \, \Gamma) &= \begin{cases}
{\cal F}(X) \, \boldsymbol{J}^{\sigma} \, , & \text{Bosonic} \\
{\cal F}(X) \, R_{1}^{p_{1}} R_{2}^{p_{2}} R_{3}^{p_{3}} S_{1}^{q_{1}} S_{2}^{q_{2}} S_{3}^{q_{3}} \, , & \text{Fermionic}
\end{cases}
\end{align}
Here the non-negative integers, $ \Gamma = \{ k_{i}, l_{i}, m_{i}, p_{i}, q_{i}, \sigma\}$, $i=1,2,3$, are constrained; for overall bosonic correlation functions they are solutions to the following linear system
\begin{subequations} \label{Diophantine equations 1}
\begin{align}
k_{2} + k_{3} + l_{2} + l_{3} + 2m_{1} &= I \, , \\
k_{1} + k_{3} + l_{1} + l_{3} + 2m_{2} &= J \, , \\
k_{1} + k_{2} + l_{1} + l_{2} + 2m_{3} &= K \, ,
\end{align}
\end{subequations}
with $\sigma = 0,1$. Likewise, for overall fermionic correlation functions, the integers $\Gamma$ are solutions to the following system
\begin{subequations} \label{Diophantine equations 2}
\begin{align}
k_{2} + k_{3} + l_{2} + l_{3} + 2m_{1} + p_{1} + q_{1} &= I \, , \\
k_{1} + k_{3} + l_{1} + l_{3} + 2m_{2} + p_{2} + q_{2} &= J \, , \\
k_{1} + k_{2} + l_{1} + l_{2} + 2m_{3} + p_{3} + q_{3} &= K \, , \\
p_{1} + p_{2} + p_{3} + q_{1} + q_{2} + q_{3} &= 1 \, ,
\end{align}
\end{subequations}
where $I = 2s_{1}$, $J = 2s_{2}$, $K = 2s_{3}$ specify the spin-structure of the correlation function. These equations are obtained by comparing the homogeneity of the auxiliary spinors $u$, $v$, $w$ in the generating functions \eqref{Generating function 2}, against the index structure of the tensor ${\cal H}$. The solutions correspond to a linearly dependent basis of structures in which the polynomial ${\cal H}$ can be decomposed. Using \textit{Mathematica} it is straightforward to generate all possible solutions to \eqref{Diophantine equations 1}, \eqref{Diophantine equations 2} for fixed values of the superspins.
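For instance, the bosonic system \eqref{Diophantine equations 1} can be enumerated by brute force. The following Python sketch is illustrative only (we use Mathematica in practice, and the function name is ours); it loops over the $k_{i}, l_{i}$ and solves for the $m_{i}$, with the free label $\sigma = 0, 1$ omitted:

```python
from itertools import product

def bosonic_solutions(I, J, K):
    """Non-negative integer solutions (k_i, l_i, m_i) of the linear system;
    the free parity label sigma = 0, 1 is omitted."""
    N = max(I, J, K)
    sols = []
    for k1, k2, k3, l1, l2, l3 in product(range(N + 1), repeat=6):
        # leftover homogeneity must be carried by Z_i^{m_i} (degree 2 each)
        r1 = I - (k2 + k3 + l2 + l3)
        r2 = J - (k1 + k3 + l1 + l3)
        r3 = K - (k1 + k2 + l1 + l2)
        if min(r1, r2, r3) >= 0 and r1 % 2 == r2 % 2 == r3 % 2 == 0:
            sols.append((k1, k2, k3, l1, l2, l3, r1 // 2, r2 // 2, r3 // 2))
    return sols
```

For $I=J=K=1$ the system has no solutions (the total degree $I+J+K$ is odd), consistent with such correlators being Grassmann-odd, while $I=J=K=2$ yields 18 linearly dependent monomials.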
Now let us assume there exists a finite number of solutions $\Gamma_{i}$, $i = 1, \dots, N$, to \eqref{Diophantine equations 1}, \eqref{Diophantine equations 2} for a given choice of $I,J,K$. The set of solutions $\Gamma = \{ \Gamma_{i} \}$ may be partitioned into ``even" and ``odd" sets $\Gamma^{+}$ and $\Gamma^{-}$ respectively by counting the number of pseudo-invariant basis structures present in a particular solution. Therefore we define:
\begin{align}
\Gamma^{+} = \Gamma|_{ \, k_{1} + k_{2} + k_{3} + q_{1} + q_{2} + q_{3} + \sigma \, ( \hspace{-1mm}\bmod 2 ) = 0} \, , && \Gamma^{-} = \Gamma|_{ \, k_{1} + k_{2} + k_{3} + q_{1} + q_{2} + q_{3} + \sigma\, ( \hspace{-1mm} \bmod 2 ) = 1} \, .
\end{align}
Hence, the even solutions are those for which $k_{1} + k_{2} + k_{3} + q_{1} + q_{2} + q_{3} + \sigma$ is even (i.e.\ they contain an even number of parity-odd building blocks), while the odd solutions are those for which $k_{1} + k_{2} + k_{3} + q_{1} + q_{2} + q_{3} + \sigma$ is odd (they contain an odd number of parity-odd building blocks). Let $|\Gamma^{+}| = N^{+}$ and $|\Gamma^{-}| = N^{-}$, with $N = N^{+} + N^{-}$; then the most general ansatz for the polynomial ${\cal H}$ in \eqref{H - generating polynomial} is as follows:
\begin{equation} \label{H decomposition}
{\cal H}(X, \Theta; u, v, w) = {\cal H}^{(+)}(X, \Theta; u, v, w) + {\cal H}^{(-)}(X, \Theta; u, v, w) \, ,
\end{equation}
where
\begin{subequations}
\begin{align}
{\cal H}^{(+)}(X, \Theta; u, v, w) &= \sum_{i=1}^{N^{+}} A_{i} \, {\cal G}(X, \Theta \, | \,\Gamma^{+}_{i}) \, , \\
{\cal H}^{(-)}(X, \Theta; u, v, w) &= \sum_{i=1}^{N^{-}} B_{i} \, {\cal G}(X, \Theta \, | \, \Gamma^{-}_{i}) \, ,
\end{align}
\end{subequations}
and $A_{i}$ and $B_{i}$ are real constants. Using this method one can generate all possible structures for a given set of superspins $(s_{1}, s_{2}, s_{3})$; however, at this stage we must recall that the solutions generated using this approach are linearly dependent. To form a linearly independent set of solutions we must systematically take into account the following non-linear relations between the primitive structures:
\begin{subequations}
\begin{align} \label{Linear dependence 1}
Z_{2} Z_{3} + P_{1}^{2} - Q_{1}^{2} &= 0 \, , \\
Z_{1} Z_{3} + P_{2}^{2} - Q_{2}^{2} &= 0 \, , \\
Z_{1} Z_{2} + P_{3}^{2} - Q_{3}^{2} &= 0 \, ,
\end{align}
\end{subequations}
\vspace{-8mm}
\begin{subequations}
\begin{align} \label{Linear dependence 2}
P_{1} Z_{1} + P_{2} Q_{3} + P_{3} Q_{2} &= 0 \, , & Q_{1} Z_{1} - Q_{2} Q_{3} - P_{2} P_{3} &= 0 \, , \\
P_{2} Z_{2} + P_{1} Q_{3} + P_{3} Q_{1} &= 0 \, , & Q_{2} Z_{2} - Q_{1} Q_{3} - P_{1} P_{3} &= 0 \, , \\
P_{3} Z_{3} + P_{1} Q_{2} + P_{2} Q_{1} &= 0 \, , & Q_{3} Z_{3} - Q_{1} Q_{2} - P_{1} P_{2} &= 0 \, .
\end{align}
\end{subequations}
These allow elimination of the combinations $Z_{i} Z_{j}$, $Z_{i} P_{i}$, $Z_{i} Q_{i}$. There is also another relation involving triple products:
\begin{align} \label{Linear dependence 3}
P_{1} P_{2} P_{3} + P_{1} Q_{2} Q_{3} + P_{2} Q_{1} Q_{3} + P_{3} Q_{1} Q_{2} &= 0 \, ,
\end{align}
which allows elimination of $P_{1} P_{2} P_{3}$. The relations above are identical to those appearing in the 3D CFT case \cite{Buchbinder:2022mys}, however, they must be supplemented by relations involving the fermionic structures:
\begin{subequations}
\begin{align} \label{Linear dependence 4}
P_{1} R_{1} - Q_{2} S_{2} + Q_{3} S_{3} &= 0 \, , & P_{1} S_{1} - Q_{2} R_{2} + Q_{3} R_{3} &= 0 \, , \\
P_{2} R_{2} - Q_{3} S_{3} + Q_{1} S_{1} &= 0 \, , & P_{2} S_{2} - Q_{3} R_{3} + Q_{1} R_{1} &= 0 \, , \\
P_{3} R_{3} - Q_{1} S_{1} + Q_{2} S_{2} &= 0 \, , & P_{3} S_{3} - Q_{1} R_{1} + Q_{2} R_{2} &= 0 \, ,
\end{align}
\end{subequations}
\vspace{-8mm}
\begin{subequations}
\begin{align} \label{Linear dependence 5}
Z_{1} R_{2} - Q_{3} R_{1} + P_{3} S_{1} &= 0 \, , & Z_{2} R_{1} - Q_{3} R_{2} - P_{3} S_{2} &= 0 \, , \\
Z_{2} R_{3} - Q_{1} R_{2} + P_{1} S_{2} &= 0 \, , & Z_{3} R_{2} - Q_{1} R_{3} - P_{1} S_{3} &= 0 \, , \\
Z_{3} R_{1} - Q_{2} R_{3} + P_{2} S_{3} &= 0 \, , & Z_{1} R_{3} - Q_{2} R_{1} - P_{2} S_{1} &= 0 \, ,
\end{align}
\end{subequations}
\vspace{-8mm}
\begin{subequations}
\begin{align} \label{Linear dependence 6}
Z_{1} S_{2} - Q_{3} S_{1} + P_{3} R_{1} &= 0 \, , & Z_{2} S_{1} - Q_{3} S_{2} - P_{3} R_{2} &= 0 \, , \\
Z_{2} S_{3} - Q_{1} S_{2} + P_{1} R_{2} &= 0 \, , & Z_{3} S_{2} - Q_{1} S_{3} - P_{1} R_{3} &= 0 \, , \\
Z_{3} S_{1} - Q_{2} S_{3} + P_{2} R_{3} &= 0 \, , & Z_{1} S_{3} - Q_{2} S_{1} - P_{2} R_{1} &= 0 \, .
\end{align}
\end{subequations}
These allow for elimination of the products $P_{i} R_{i}$, $P_{i} S_{i}$, $Z_{i} R_{j}$, $Z_{i} S_{j}$. As a consequence of \eqref{Linear dependence 4}, the following also hold:
\begin{subequations}
\begin{align}
P_{1} R_{1} + P_{2} R_{2} + P_{3} R_{3} &= 0 \, , \\
P_{1} S_{1} + P_{2} S_{2} + P_{3} S_{3} &= 0 \, .
\end{align}
\end{subequations}
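As a sanity check, the bosonic relations above can be verified numerically. The following Python sketch is illustrative (the paper's computations use Mathematica); it evaluates the structures on explicit integer spinors, assuming the conventions $\ve_{12} = 1$ and a symmetric $\hat{X}_{\alpha \beta}$ normalised so that $\det \hat{X} = -1$, which is the normalisation under which the identities hold exactly in this sketch. The fermionic relations involve Grassmann variables and are not checked here:

```python
# epsilon_{ab} with epsilon_{12} = +1, and a symmetric 2x2 "X-hat"
# with det = -1 (the unit-normalisation assumed by the identities)
eps = ((0, 1), (-1, 0))
X = ((2, 1), (1, 0))                 # det X = -1

u, v, w = (1, 2), (3, -1), (2, 5)    # arbitrary integer test spinors

def bil(M, s, t):
    """bilinear contraction M_{ab} s^a t^b"""
    return sum(M[a][b] * s[a] * t[b] for a in range(2) for b in range(2))

P1, P2, P3 = bil(eps, v, w), bil(eps, w, u), bil(eps, u, v)
Q1, Q2, Q3 = bil(X, v, w), bil(X, w, u), bil(X, u, v)
Z1, Z2, Z3 = bil(X, u, u), bil(X, v, v), bil(X, w, w)

identities = [
    Z2*Z3 + P1**2 - Q1**2, Z1*Z3 + P2**2 - Q2**2, Z1*Z2 + P3**2 - Q3**2,
    P1*Z1 + P2*Q3 + P3*Q2, Q1*Z1 - Q2*Q3 - P2*P3,
    P2*Z2 + P1*Q3 + P3*Q1, Q2*Z2 - Q1*Q3 - P1*P3,
    P3*Z3 + P1*Q2 + P2*Q1, Q3*Z3 - Q1*Q2 - P1*P2,
    P1*P2*P3 + P1*Q2*Q3 + P2*Q1*Q3 + P3*Q1*Q2,
]
assert all(i == 0 for i in identities)
```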
Applying the above relations to a set of linearly dependent polynomial structures significantly reduces the number of structures to consider for a given three-point function, since we are now restricted to linearly independent contributions. This process is relatively straightforward to implement using Mathematica's pattern matching functions.
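The elimination just described can be sketched as a rewriting system. Below is a hedged Python/sympy version (the paper uses Mathematica pattern matching; the function name \texttt{reduce\_structures} is ours) that repeatedly substitutes the bosonic relations to eliminate the products $Z_{i} Z_{j}$, $Z_{i} P_{i}$, $Z_{i} Q_{i}$ and $P_{1} P_{2} P_{3}$:

```python
import sympy as sp

P1, P2, P3, Q1, Q2, Q3, Z1, Z2, Z3 = sp.symbols(
    'P1 P2 P3 Q1 Q2 Q3 Z1 Z2 Z3')

# rewrite rules encoding the bosonic linear-dependence relations
rules = [
    (Z2*Z3, Q1**2 - P1**2), (Z1*Z3, Q2**2 - P2**2), (Z1*Z2, Q3**2 - P3**2),
    (P1*Z1, -P2*Q3 - P3*Q2), (Q1*Z1, Q2*Q3 + P2*P3),
    (P2*Z2, -P1*Q3 - P3*Q1), (Q2*Z2, Q1*Q3 + P1*P3),
    (P3*Z3, -P1*Q2 - P2*Q1), (Q3*Z3, Q1*Q2 + P1*P2),
    (P1*P2*P3, -P1*Q2*Q3 - P2*Q1*Q3 - P3*Q1*Q2),
]

def reduce_structures(expr):
    """Repeatedly apply the relations until no forbidden product remains."""
    expr = sp.expand(expr)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            new = sp.expand(expr.subs(lhs, rhs))
            if new != expr:
                expr, changed = new, True
    return expr
```

For example, \texttt{reduce\_structures} maps $Z_{2} Z_{3} + P_{1}^{2} - Q_{1}^{2}$ to zero, as it must by the first relation.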
Now that linear dependence has been taken care of, it remains to impose conservation on all three points in addition to the various point-switch symmetries; introducing the objects $P_{i}, Q_{i}, Z_{i}, R_{i}, S_{i}$ streamlines this analysis significantly. First let us consider conservation;
to impose conservation on $z_{1}$ (for either sector) we compute
\begin{align}
D_{1} {\cal H}(X, \Theta; u,v,w) &= D_{1} \Bigg\{ \sum_{i=1}^{N} c_{i} \, {\cal G}(X, \Theta \, | \, \Gamma_{i}) \Bigg\} \nonumber \\
&= \sum_{i=1}^{N} c_{i} \, D_{1} {\cal G}(X, \Theta \, | \, \Gamma_{i}) \, .
\end{align}
We then solve for the coefficients $c_{i}$ such that the result vanishes. To impose the superfield conservation equations, the identities \eqref{Derivative identities} are essential. The same approach applies for conservation on $z_{2}$.
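Schematically, this coefficient-solving step looks as follows in sympy, with $m_{1}, m_{2}, m_{3}$ standing in for linearly independent basis structures and an invented toy expression standing in for $D_{1} {\cal H}$:

```python
import sympy as sp

c1, c2, c3 = sp.symbols('c1 c2 c3')
m1, m2, m3 = sp.symbols('m1 m2 m3')  # stand-ins for independent structures

# toy stand-in for D_1 H after reduction to a linearly independent basis
expr = sp.expand((c1 - 2*c2)*m1 + (c2 + c3)*m2 + (c1 + c2 + 3*c3)*m3)

# conservation <=> the coefficient of every basis structure vanishes
constraints = [expr.coeff(m) for m in (m1, m2, m3)]
sol = sp.solve(constraints, [c1, c2, c3], dict=True)[0]
# here the system has rank 2, leaving a one-parameter family of solutions
```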
Next, to impose conservation on $z_{3}$ we must first obtain an explicit expression for $\tilde{{\cal H}}(\boldsymbol{X},\Theta)$ in terms of ${\cal H}(\boldsymbol{X},\Theta)$, that is, we must compute (e.g. for the even sector)
\begin{equation}
\tilde{{\cal H}}(\boldsymbol{X}, \Theta; u(I), v(J), w(K) ) = (\boldsymbol{X}^{2})^{\Delta_{1} - \Delta_{3}} {\cal I}^{(J)}_{\boldsymbol{X}}(v,\tilde{v}) \, {\cal H}^{I}(\boldsymbol{X}, \Theta; u(I), \tilde{v}(J), w(K)) \, .
\end{equation}
Recall that any solution for ${\cal H}(\boldsymbol{X}, \Theta)$ can be written in terms of the structures \eqref{Basis scalar structures 1}, \eqref{Basis scalar structures 2}; given the transformation properties \eqref{Hc and H relation}, and \eqref{Htilde and Hc relation}, the computation of ${\cal H}^{I}(\boldsymbol{X}, \Theta)$ from ${\cal H}(\boldsymbol{X}, \Theta)$ is equivalent to the following replacements:
\begin{subequations} \label{Inversion transformation 1}
\begin{align}
P_{1} &\rightarrow - P_{1} \, , & P_{2} &\rightarrow -P_{2} \, , & P_{3} &\rightarrow -P_{3} \, , \\
R_{1} &\rightarrow - \mathbb{S}_{1} \, , & R_{2} &\rightarrow - \mathbb{S}_{2} \, , & R_{3} &\rightarrow - \mathbb{S}_{3} \, , \\
\mathbb{S}_{1} &\rightarrow R_{1} \, , & \mathbb{S}_{2} &\rightarrow R_{2} \, , & \mathbb{S}_{3} &\rightarrow R_{3} \, .
\end{align}
\end{subequations}
Now to compute $\tilde{{\cal H}}(\boldsymbol{X},\Theta)$ from ${\cal H}^{I}(\boldsymbol{X}, \Theta)$, we make use of the fact that $P_{1}$, $P_{3}$, $\mathbb{Q}_{1}$, $\mathbb{Q}_{3}$, $\mathbb{Z}_{2}$, $R_{2}$, and $\mathbb{S}_{2}$ are the only objects with $\tilde{v}$ dependence, and apply the identities
\begin{subequations} \label{Inversion transformation 2}
\begin{align}
{\cal I}_{\boldsymbol{X}}(v,\tilde{v}) \, P_{1} &= - \mathbb{Q}_{1} \, , & {\cal I}_{\boldsymbol{X}}(v,\tilde{v}) \, P_{3} &= \mathbb{Q}_{3} + \text{i} P_{3} \boldsymbol{J} \, , \\
{\cal I}_{\boldsymbol{X}}(v,\tilde{v}) \, \mathbb{Q}_{1} &= - P_{1} + \text{i} \, \mathbb{Q}_{1} \boldsymbol{J} \, , & {\cal I}_{\boldsymbol{X}}(v,\tilde{v}) \, \mathbb{Q}_{3} &= P_{3} \, , \\
{\cal I}_{\boldsymbol{X}}(v,\tilde{v}) \, R_{2} &= -\mathbb{S}_{2} \, , & {\cal I}_{\boldsymbol{X}}(v,\tilde{v}) \, \mathbb{S}_{2} &= - R_{2} \, ,
\end{align}
\vspace{-8mm}
\begin{align}
{\cal I}^{(2)}_{\boldsymbol{X}}(v,\tilde{v}) \, \mathbb{Z}_{2} = - \mathbb{Z}_{2} \,.
\end{align}
\end{subequations}
Hence, given a solution for the polynomial ${\cal H}(\boldsymbol{X}, \Theta)$, the computation of $\tilde{{\cal H}}(\boldsymbol{X}, \Theta)$ is now equivalent to the following replacements of the basis structures \eqref{Basis scalar structures 1}, \eqref{Basis scalar structures 2}:
\begin{subequations} \label{Htilde structure replacements}
\begin{align}
P_{1} &\rightarrow \mathbb{Q}_{1} \, , & P_{2} &\rightarrow - P_{2} \, , & P_{3} &\rightarrow - \mathbb{Q}_{3} - \text{i} P_{3} \boldsymbol{J} \, , \\
\mathbb{Q}_{1} &\rightarrow - P_{1} + \text{i} \, \mathbb{Q}_{1} \boldsymbol{J} \, , & \mathbb{Q}_{2} &\rightarrow \mathbb{Q}_{2} \, , & \mathbb{Q}_{3} &\rightarrow P_{3} \, , \\
\mathbb{Z}_{1} &\rightarrow \mathbb{Z}_{1} \, , & \mathbb{Z}_{2} &\rightarrow - \mathbb{Z}_{2} \, , & \mathbb{Z}_{3} &\rightarrow \mathbb{Z}_{3} \, , \\
R_{1} &\rightarrow - \mathbb{S}_{1} \, , & R_{2} &\rightarrow R_{2} \, , & R_{3} &\rightarrow - \mathbb{S}_{3} \, , \\
\mathbb{S}_{1} &\rightarrow R_{1} \, , & \mathbb{S}_{2} &\rightarrow - \mathbb{S}_{2} \, , & \mathbb{S}_{3} &\rightarrow R_{3} \, .
\end{align}
\end{subequations}
These rules are obtained by combining \eqref{Inversion transformation 1}, \eqref{Inversion transformation 2}. Conservation on $z_{3}$ can now be imposed using the operator $D_{3}$.
It now remains to determine how the point-switch symmetries act on the basis structures; this analysis is simpler when working with ${\cal H}(X,\Theta)$ instead of ${\cal H}(\boldsymbol{X},\Theta)$. For permutation of the superspace points $z_{1}$ and $z_{2}$, we have $X \rightarrow - X$, $\Theta \rightarrow - \Theta$, $u \leftrightarrow v$. This results in the following replacement rules for the basis objects \eqref{Basis scalar structures 1}, \eqref{Basis scalar structures 2}:
\begin{subequations} \label{Point switch A - basis}
\begin{align}
P_{1} &\rightarrow - P_{2} \, , & P_{2} &\rightarrow - P_{1} \, , & P_{3} &\rightarrow - P_{3} \, , \\
Q_{1} &\rightarrow - Q_{2} \, , & Q_{2} &\rightarrow - Q_{1} \, , & Q_{3} &\rightarrow - Q_{3} \, , \\
Z_{1} &\rightarrow - Z_{2} \, , & Z_{2} &\rightarrow - Z_{1} \, , & Z_{3} &\rightarrow - Z_{3} \, , \\
R_{1} &\rightarrow - R_{2} \, , & R_{2} &\rightarrow - R_{1} \, , & R_{3} &\rightarrow - R_{3} \, , \\
S_{1} &\rightarrow S_{2} \, , & S_{2} &\rightarrow S_{1} \, , & S_{3} &\rightarrow S_{3} \, .
\end{align}
\end{subequations}
Likewise, for permutation of superspace points $z_{1}$ and $z_{3}$ we have $X \rightarrow - X$, $\Theta \rightarrow - \Theta$, $u \leftrightarrow w$, resulting in the following replacements:
\begin{subequations} \label{Point switch B - basis}
\begin{align}
P_{1} &\rightarrow - P_{3} \, , & P_{2} &\rightarrow - P_{2} \, , & P_{3} &\rightarrow - P_{1} \, , \\
Q_{1} &\rightarrow - Q_{3} \, , & Q_{2} &\rightarrow - Q_{2} \, , & Q_{3} &\rightarrow - Q_{1} \, , \\
Z_{1} &\rightarrow - Z_{3} \, , & Z_{2} &\rightarrow - Z_{2} \, , & Z_{3} &\rightarrow - Z_{1} \, , \\
R_{1} &\rightarrow - R_{3} \, , & R_{2} &\rightarrow - R_{2} \, , & R_{3} &\rightarrow - R_{1} \, , \\
S_{1} &\rightarrow S_{3} \, , & S_{2} &\rightarrow S_{2} \, , & S_{3} &\rightarrow S_{1} \, .
\end{align}
\end{subequations}
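Both replacement maps are straightforward to implement as simultaneous substitutions. The sketch below (Python/sympy; the paper's implementation is in Mathematica, and the names are ours) encodes the $z_{1} \leftrightarrow z_{3}$ rules \eqref{Point switch B - basis}, treating the Grassmann-odd $R_{i}$, $S_{i}$ as commuting placeholders (harmless here, since each Grassmann-odd monomial contains exactly one such factor), and checks that applying the map twice returns the original structure:

```python
import sympy as sp

(P1, P2, P3, Q1, Q2, Q3, Z1, Z2, Z3,
 R1, R2, R3, S1, S2, S3) = sp.symbols(
    'P1 P2 P3 Q1 Q2 Q3 Z1 Z2 Z3 R1 R2 R3 S1 S2 S3')

# z1 <-> z3 replacement rules for the basis structures
swap13 = {P1: -P3, P2: -P2, P3: -P1,
          Q1: -Q3, Q2: -Q2, Q3: -Q1,
          Z1: -Z3, Z2: -Z2, Z3: -Z1,
          R1: -R3, R2: -R2, R3: -R1,
          S1: S3, S2: S2, S3: S1}

def point_switch(expr, rules):
    # simultaneous=True avoids chained collisions such as P1 -> -P3 -> P1
    return sp.expand(expr.subs(rules, simultaneous=True))

sample = P1*Q3*S1 + Z2*R3
once = point_switch(sample, swap13)      # (-P3)(-Q1)S3 + (-Z2)(-R1)
assert once == P3*Q1*S3 + Z2*R1
# the map is an involution: applying it twice restores the structure
assert point_switch(once, swap13) == sample
```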
We have now developed all the formalism necessary to analyse the structure of three-point correlation functions in 3D ${\cal N}=1$ SCFT. To summarise, in the remaining sections of this paper we will analyse the three-point functions of conserved higher-spin supercurrents (for both integer and half-integer superspin) using the following method:
\begin{enumerate}
\item For a given set of superspins, we construct all possible (linearly dependent) structures for the polynomial ${\cal H}(X, \Theta; u,v,w)$, which is governed by the solutions to \eqref{Diophantine equations 1}, \eqref{Diophantine equations 2}. The solutions are sorted into even and odd sectors.
\item We systematically apply the linear dependence relations \eqref{Linear dependence 1}, \eqref{Linear dependence 2}, \eqref{Linear dependence 3}, \eqref{Linear dependence 4}, \eqref{Linear dependence 5}, \eqref{Linear dependence 6} to the set of all polynomial structures. This is sufficient to form the most general linearly independent ansatz for the correlation function.
\item Using the method outlined in subsection \ref{subsubsection3.2.3}, we impose the superfield conservation equations on the correlation function, resulting in the differential constraints \eqref{Conservation equations} on ${\cal H}$. The result of each computation is a large polynomial in the basis structures \eqref{Basis scalar structures 1}, \eqref{Basis scalar structures 2}. The linear dependence relations are systematically applied to this polynomial again to ensure that it is composed of only linearly independent terms. The coefficients are read off the structures, resulting in algebraic constraint relations on the coefficients $A_{i}, B_{i}$. This process significantly reduces the number of structures in the three-point function.
\item Once the general form of the polynomial ${\cal H}(X,\Theta; u,v,w)$ (associated with the conserved three-point function $\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \mathbf{J}''_{s_{3}} \rangle$) is obtained for a given set of superspins $(s_{1},s_{2}, s_{3})$, we then impose any symmetries under permutation of superspace points, that is, \eqref{Point switch A} and \eqref{Point switch B} (if applicable). In certain cases, imposing these constraints can eliminate the remaining structures. The solution is then converted into covariant form ${\cal H}(\boldsymbol{X},\Theta; u,v,w)$.
\end{enumerate}
The computations are done completely analytically with the use of Mathematica and the Grassmann package. By using pattern-matching functions, the calculations are carried out purely amongst the basis structures \eqref{Basis scalar structures 1}, \eqref{Basis scalar structures 2}; as a result, we do not have to fix the superspace points to certain values. The only chosen parameters are the spins. Due to computational limitations we could carry out the analysis up to $s_{i} = 20$ (some steps of the calculations involve millions of terms); however, with more optimisation and sufficient computational resources this approach should hold for arbitrary superspins.
Since there are an enormous number of possible three-point functions with $s_{i} \leq 20$, we present the final results (in the form of Mathematica outputs) for ${\cal H}(\boldsymbol{X},\Theta; u,v,w)$ for some particularly interesting examples, as the solutions and coefficient constraints become cumbersome to present beyond cases involving low superspins. We are primarily interested in counting the number of independent polynomial structures after imposing all the constraints.
\section{Three-point functions of conserved supercurrents}\label{section4}
In the next subsections we analyse the structure of three-point correlation functions involving conserved higher-spin supercurrents. As a test of our approach we begin with an analysis of three-point functions involving currents with low superspins, such as the supercurrent and flavour current multiplets.
\subsection{Supercurrent and flavour current correlators}\label{subsection4.1}
The most important examples of conserved supercurrents in 3D ${\cal N}=1$ superconformal field theories are the supercurrent and flavour current multiplets. The supercurrent multiplet is described by the spin-tensor superfield, $J_{\alpha(3)}(z)$, with scale dimension $\Delta_{J} = 5/2$. It satisfies $D^{\alpha_{1}} J_{\alpha_{1} \alpha_{2} \alpha_{3}}(z) = 0$ and contains the energy-momentum tensor, $T_{\alpha(4)}(x) = D_{(\alpha_{1}} J_{\alpha_{2} \alpha_{3} \alpha_{4})}(z) |_{\theta = 0} $, and the supersymmetry current, $Q_{\alpha(3)}(x) = J_{\alpha(3)}(z) |_{\theta = 0} $, as its independent component fields. Likewise, the flavour current multiplet is described by a spinor superfield, $L_{\alpha}(z)$, with scale dimension $\Delta_{L} = 3/2$. It satisfies the superfield conservation equation $D^{\alpha} L_{\alpha}(z) = 0$, and contains a conserved vector current $V_{\alpha(2)}(x) = D_{(\alpha_{1}} L_{\alpha_{2})}(z) |_{\theta = 0} $. Three-point functions of these supercurrents were originally studied in \cite{Buchbinder:2015qsa,Buchbinder:2021gwu} (for an analysis of three-point functions of the component currents in 3D/4D CFT see \cite{Buchbinder:2022cqp,Buchbinder:2022mys}); here we present the solutions for them using our formalism. The possible three-point functions involving the supercurrent and flavour current multiplets are:
\begin{align} \label{Low-superspin component correlators}
\langle L_{\alpha}(z_{1}) \, L_{\beta}(z_{2}) \, L_{\gamma}(z_{3}) \rangle \, , && \langle L_{\alpha}(z_{1}) \, L_{\beta}(z_{2}) \, J_{\gamma(3)}(z_{3}) \rangle \, , \\
\langle J_{\alpha(3)}(z_{1}) \, J_{\beta(3)}(z_{2}) \, L_{\gamma}(z_{3}) \rangle \, , && \langle J_{\alpha(3)}(z_{1}) \, J_{\beta(3)}(z_{2}) \, J_{\gamma(3)}(z_{3}) \rangle \, .
\end{align}
We note that in all cases the correlation functions are overall Grassmann-odd; hence, it is expected that each of them is fixed up to a single parity-even solution after imposing conservation on all three points. The analysis of these three-point functions is relatively straightforward using our computational approach.
\newpage
\noindent\textbf{Correlation function} $\langle L L L \rangle$\textbf{:}
Let us first consider $\langle L L L \rangle$; within the framework of our formalism we study the three-point function $\langle \mathbf{J}^{}_{1/2} \mathbf{J}'_{1/2} \mathbf{J}''_{1/2} \rangle$. The general ansatz for this correlation function, according to \eqref{Conserved correlator ansatz}, is
\begin{align}
\langle \mathbf{J}^{}_{\alpha}(z_{1}) \, \mathbf{J}'_{\beta}(z_{2}) \, \mathbf{J}''_{\gamma}(z_{3}) \rangle = \frac{ {\cal I}_{\alpha}{}^{\alpha'}(\boldsymbol{x}_{13}) \, {\cal I}_{\beta}{}^{\beta'}(\boldsymbol{x}_{23}) }{(\boldsymbol{x}_{13}^{2})^{3/2} (\boldsymbol{x}_{23}^{2})^{3/2}}
\; {\cal H}_{\alpha' \beta' \gamma}(\boldsymbol{X}_{12}, \Theta_{12}) \, .
\end{align}
Using the formalism outlined in subsection \ref{subsection3.2}, all information about this correlation function is encoded in the following polynomial:
\begin{align}
{\cal H}(\boldsymbol{X}, \Theta; u(1), v(1), w(1)) = {\cal H}_{ \alpha \beta \gamma }(\boldsymbol{X}, \Theta) \, \boldsymbol{u}^{\alpha} \boldsymbol{v}^{\beta} \boldsymbol{w}^{\gamma} \, .
\end{align}
Using Mathematica we solve \eqref{Diophantine equations 2} for the chosen spins and substitute each solution into the generating function \eqref{Generating function 2}. This provides us with the following list of linearly dependent structures for the polynomial ${\cal H}(X,\Theta;u,v,w)$ in the even and odd sectors respectively:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.95\textwidth]{LLLlist.pdf} &&
\end{flalign*}
After systematic application of the linear dependence relations \eqref{Linear dependence 1}-\eqref{Linear dependence 6} we obtain the following linearly independent sets:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.95\textwidth]{LLLlistI.pdf} &&
\end{flalign*}
Next, we impose conservation on all three points, obtaining the following constraints on the coefficients $A_{i}$ and $B_{i}$:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.95\textwidth]{LLLconstraints.pdf} &&
\end{flalign*}
and the explicit solution for ${\cal H}(\boldsymbol{X}, \Theta; u,v,w)$:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.95\textwidth]{1-1-1.pdf} &&
\end{flalign*}
Hence, the three-point function is fixed up to a single parity-even polynomial structure.
After imposing the symmetries under permutation of superspace points that arise when,
e.g., $\mathbf{J}=\mathbf{J}'=\mathbf{J}''$, the remaining structure vanishes.
This vanishing result is not surprising because it corresponds to the contribution proportional to the symmetric invariant tensor of the flavour symmetry group.
In four dimensions this contribution is related to the chiral anomaly which does not exist in three dimensions.
The correlator $\langle \mathbf{J}_{1/2} \, \mathbf{J}_{1/2} \, \mathbf{J}_{1/2} \rangle$ has, however, a non-vanishing contribution proportional to the totally antisymmetric structure constants.
In our analysis in this paper any possible ``antisymmetric" contributions are ignored when we impose the point-switch identities.
The most general form of three-point function of flavour current multiplets was found explicitly in~\cite{Buchbinder:2015qsa, Buchbinder:2021gwu} and we will not discuss it here.
\vspace{2mm}
\noindent
\textbf{Correlation function} $\langle L L J \rangle$\textbf{:}
The next example to consider is the mixed correlator $\langle L L J \rangle$; to study this case we may examine the correlation function $\langle \mathbf{J}^{}_{1/2} \mathbf{J}'_{1/2} \mathbf{J}''_{3/2} \rangle$. Using the general formula, the ansatz for this three-point function is
\begin{align}
\langle \mathbf{J}^{}_{\alpha}(z_{1}) \, \mathbf{J}'_{\beta}(z_{2}) \, \mathbf{J}''_{\gamma(3)}(z_{3}) \rangle = \frac{ {\cal I}_{\alpha}{}^{\alpha'}(\boldsymbol{x}_{13}) \, {\cal I}_{\beta}{}^{\beta'}(\boldsymbol{x}_{23}) }{(\boldsymbol{x}_{13}^{2})^{3/2} (\boldsymbol{x}_{23}^{2})^{3/2}}
\; {\cal H}_{\alpha' \beta' \gamma(3)}(\boldsymbol{X}_{12}, \Theta_{12}) \, .
\end{align}
Using the formalism outlined in subsection \ref{subsection3.2}, all information about this correlation function is encoded in the following polynomial:
\begin{align}
{\cal H}(\boldsymbol{X}, \Theta; u(1), v(1), w(3)) = {\cal H}_{ \alpha \beta \gamma(3) }(\boldsymbol{X}, \Theta) \, \boldsymbol{u}^{\alpha} \boldsymbol{v}^{\beta} \boldsymbol{w}^{\gamma(3)} \, .
\end{align}
After solving \eqref{Diophantine equations 2}, we obtain the following list of polynomial structures for ${\cal H}(X,\Theta;u,v,w)$ in the even and odd sectors respectively:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.95\textwidth]{LLJlist.pdf} &&
\end{flalign*}
After systematic application of the linear dependence relations \eqref{Linear dependence 1}-\eqref{Linear dependence 6} we obtain the following linearly independent sets:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.95\textwidth]{LLJlistI.pdf} &&
\end{flalign*}
Next, we impose conservation on all three points; we obtain the following constraints on the coefficients $A_{i}$ and $B_{i}$:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.95\textwidth]{LLJconstraints.pdf} &&
\end{flalign*}
and the explicit solution for ${\cal H}(\boldsymbol{X}, \Theta; u,v,w)$:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.95\textwidth]{1-1-3.pdf} &&
\end{flalign*}
Hence, after conservation, the three-point function is fixed up to a single even structure. This structure is also compatible with the symmetry $\mathbf{J}=\mathbf{J}'$; therefore, $\langle L L J \rangle$ is fixed up to a single structure. \\[5mm]
\noindent
\textbf{Correlation function} $\langle J J L \rangle$\textbf{:}
The next example to consider is the mixed correlator $\langle J J L \rangle$; to study this case we may examine the correlation function $\langle \mathbf{J}^{}_{3/2} \mathbf{J}'_{3/2} \mathbf{J}''_{1/2} \rangle$. Using the general formula, the ansatz for this three-point function is
\begin{align}
\langle \mathbf{J}^{}_{\alpha(3)}(z_{1}) \, \mathbf{J}'_{\beta(3)}(z_{2}) \, \mathbf{J}''_{\gamma}(z_{3}) \rangle = \frac{ {\cal I}_{\alpha(3)}{}^{\alpha'(3)}(\boldsymbol{x}_{13}) \, {\cal I}_{\beta(3)}{}^{\beta'(3)}(\boldsymbol{x}_{23}) }{(\boldsymbol{x}_{13}^{2})^{5/2} (\boldsymbol{x}_{23}^{2})^{5/2}}
\; {\cal H}_{\alpha'(3) \beta'(3) \gamma}(\boldsymbol{X}_{12}, \Theta_{12}) \, .
\end{align}
Using the formalism outlined in subsection \ref{subsection3.2}, all information about this correlation function is encoded in the following polynomial:
\begin{align}
{\cal H}(\boldsymbol{X}, \Theta; u(3), v(3), w(1)) = {\cal H}_{ \alpha(3) \beta(3) \gamma }(\boldsymbol{X}, \Theta) \, \boldsymbol{u}^{\alpha(3)} \boldsymbol{v}^{\beta(3)} \boldsymbol{w}^{\gamma} \, .
\end{align}
After solving \eqref{Diophantine equations 2}, we obtain the following list of (linearly dependent) polynomial structures in the even and odd sectors respectively:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.95\textwidth]{JJLlist.pdf} &&
\end{flalign*}
After systematic application of the linear dependence relations \eqref{Linear dependence 1}-\eqref{Linear dependence 6} we obtain the following linearly independent sets:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.95\textwidth]{JJLlistI.pdf} &&
\end{flalign*}
Next, we impose conservation on all three points; we obtain the following constraints on the coefficients $A_{i}$ and $B_{i}$:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.95\textwidth]{JJLconstraints.pdf} &&
\end{flalign*}
and the explicit solution for ${\cal H}(\boldsymbol{X}, \Theta; u,v,w)$:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.95\textwidth]{3-3-1.pdf} &&
\end{flalign*}
Hence, after imposing conservation on all three points, the three-point function is fixed up to a single even structure. This structure is not compatible with the symmetry property $\mathbf{J} = \mathbf{J}'$; hence, $\langle J J L \rangle = 0$. \\[5mm]
\noindent
\textbf{Correlation function} $\langle J J J \rangle$\textbf{:}
The last example to consider is the three-point function of the supercurrent, $\langle J J J \rangle$. To study it we may examine the correlation function $\langle \mathbf{J}^{}_{3/2} \mathbf{J}'_{3/2} \mathbf{J}''_{3/2} \rangle$. Using the general formula, the ansatz for this three-point function is
\begin{align}
\langle \mathbf{J}^{}_{\alpha(3)}(z_{1}) \, \mathbf{J}'_{\beta(3)}(z_{2}) \, \mathbf{J}''_{\gamma(3)}(z_{3}) \rangle = \frac{ {\cal I}_{\alpha(3)}{}^{\alpha'(3)}(\boldsymbol{x}_{13}) \, {\cal I}_{\beta(3)}{}^{\beta'(3)}(\boldsymbol{x}_{23}) }{(\boldsymbol{x}_{13}^{2})^{5/2} (\boldsymbol{x}_{23}^{2})^{5/2}}
\; {\cal H}_{\alpha'(3) \beta'(3) \gamma(3)}(\boldsymbol{X}_{12}, \Theta_{12}) \, .
\end{align}
Using the formalism outlined in subsection \ref{subsection3.2}, all information about this correlation function is encoded in the following polynomial:
\begin{align}
{\cal H}(\boldsymbol{X}, \Theta; u(3), v(3), w(3)) = {\cal H}_{ \alpha(3) \beta(3) \gamma(3) }(\boldsymbol{X}, \Theta) \, \boldsymbol{u}^{\alpha(3)} \boldsymbol{v}^{\beta(3)} \boldsymbol{w}^{\gamma(3)} \, .
\end{align}
In this case there are a vast number of linearly dependent structures to consider and the list is too large to present, however, after application of the linear dependence relations \eqref{Linear dependence 1}-\eqref{Linear dependence 6} we obtain the following linearly independent structures:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.95\textwidth]{JJJlistI.pdf} &&
\end{flalign*}
Next, we impose conservation on all three points and obtain the following constraints on the coefficients $A_{i}$ and $B_{i}$:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.95\textwidth]{JJJconstraints.pdf} &&
\end{flalign*}
and the explicit solution for ${\cal H}(\boldsymbol{X}, \Theta; u,v,w)$:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.95\textwidth]{3-3-3.pdf} &&
\end{flalign*}
Hence the three-point function $\langle \mathbf{J}^{}_{3/2} \mathbf{J}'_{3/2} \mathbf{J}''_{3/2} \rangle$ is fixed up to a single parity-even structure. The remaining polynomial structure is also compatible with the symmetry property $\mathbf{J}=\mathbf{J}'=\mathbf{J}''$; hence, the supercurrent three-point function $\langle J J J \rangle$ is fixed up to a single parity-even structure. In terms of the number of independent structures, these results are consistent with \cite{Buchbinder:2015qsa}.
\subsection{General structure of \texorpdfstring{$\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \mathbf{J}''_{s_{3}} \rangle$
}{< J J' J'' >}}
\label{subsection4.2}
We performed a comprehensive analysis of the general structure of the three-point correlation function
$\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \mathbf{J}''_{s_{3}} \rangle$ using our computational approach.
Due to computational limitations we were able to carry out this analysis only for $s_{i} \leq 20$; however, the pattern in the solutions is very clear and we propose that the results
stated in this section hold for arbitrary superspins. We also want to emphasise that for given $(s_1, s_2, s_3)$ our method produces a result which can be presented in an explicit form even for relatively
high superspins (see examples below). With a sufficiently powerful computer one can extend our results to larger values of $s_i$.
Based on our analysis we found that the general structure of the three-point correlation function
$\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \mathbf{J}''_{s_{3}} \rangle$ is constrained to the following form:
\begin{equation}
\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \mathbf{J}''_{s_{3}} \rangle = a \, \langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \mathbf{J}''_{s_{3}} \rangle_{E} + b \, \langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \mathbf{J}''_{s_{3}} \rangle_{O} \, .
\end{equation}
One of our main conclusions is that the odd structure, $\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \mathbf{J}''_{s_{3}} \rangle_{O}$, does not appear in correlators that are overall
Grassmann-odd (or fermionic). The existence of the odd solution in the Grassmann-even (bosonic) correlators depends on the following superspin triangle inequalities:
\begin{align} \label{Triangle inequalities}
s_{1} &\leq s_{2} + s_{3} \, , & s_{2} &\leq s_{1} + s_{3} \, , & s_{3} &\leq s_{1} + s_{2} \, .
\end{align}
When the triangle inequalities are simultaneously satisfied, there is one even solution and one odd solution; however, if any of the above inequalities is not satisfied then the odd solution is incompatible
with current conservation.
Further, if any of $\mathbf{J}$, $\mathbf{J}'$, $\mathbf{J}''$ coincide then the resulting point-switch symmetries can eliminate the remaining structures.
Before we discuss in more detail Grassmann-even and Grassmann-odd correlators and present explicit examples we would like to make some general comments.
In particular, we observe that if the triangle inequalities are simultaneously satisfied, each polynomial structure in the three-point functions can be written as a product of at most 5 of the $P_{i}$, $Q_{i}$, with the $Z_{i}$
completely eliminated. Another useful observation is that the triangle inequalities can be encoded in a discriminant, $\sigma$, which we define as follows:
\begin{align} \label{Discriminant}
\sigma(s_{1}, s_{2}, s_{3}) = q_{1} q_{2} q_{3} \, , \hspace{10mm} q_{i} = s_{i} - s_{j} - s_{k} - 1 \, ,
\end{align}
where $(i,j,k)$ is a cyclic permutation of $(1,2,3)$. For $\sigma(s_{1}, s_{2}, s_{3}) < 0$, there is one even solution and one odd solution, while for $\sigma(s_{1}, s_{2}, s_{3}) \geq 0$ there is a single even solution.
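As a compact summary of the counting rule just stated, the discriminant and the resulting number of structures can be sketched as follows (a sketch in Python; the function names are ours, and the requirement that the odd structure appears only in Grassmann-even correlators is taken from the discussion above):

```python
from fractions import Fraction

def discriminant(s1, s2, s3):
    """sigma(s1, s2, s3) = q1*q2*q3, with q_i = s_i - s_j - s_k - 1
    for (i, j, k) a cyclic permutation of (1, 2, 3)."""
    s = [Fraction(x) for x in (s1, s2, s3)]
    q = [s[i] - s[(i + 1) % 3] - s[(i + 2) % 3] - 1 for i in range(3)]
    return q[0] * q[1] * q[2]

def structure_count(s1, s2, s3):
    """(number of even, number of odd) structures compatible with
    conservation, before imposing any point-switch symmetries."""
    total = Fraction(s1) + Fraction(s2) + Fraction(s3)
    grassmann_even = (total.denominator == 1)  # integer total superspin
    if grassmann_even and discriminant(s1, s2, s3) < 0:
        return (1, 1)   # triangle inequalities hold: one even + one odd
    return (1, 0)       # single parity-even structure
```

For example, the triple $(2,2,2)$ gives $\sigma = -27 < 0$ and hence one even and one odd structure, while $(\tfrac{1}{2},\tfrac{1}{2},2)$ gives $\sigma = 0$ and a single even structure, consistent with the explicit examples presented in this section.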
Also recall that the correlation function can be encoded in a tensor ${\cal H}$, which is a function of two three-point covariants, $X$ and $\Theta$. There are three different (equivalent) representations of a given three-point function,
call them ${\cal H}^{(i)}$, where the superscript $i$ denotes which point we set to act as the ``third point" in the ansatz \eqref{H ansatz}. As shown in subsection \ref{subsubsection3.2.1},
the representations are related by the intertwining operator, ${\cal I}$. Since the dimensions of the conserved supercurrents $\Delta_i$ are related to the superspins as $\Delta_i = s_i + 1$,
it follows that each ${\cal H}^{(i)}$ is homogeneous of degree $q_{i}$. The odd structure then survives if and only if $q_{i} < 0$ for all $i$. In other words,
each ${\cal H}^{(i)}$ must be a rational function of $X$ and $\Theta$ with homogeneity $q_{i} < 0$. The discriminant \eqref{Discriminant} simply encodes information about whether the ${\cal H}^{(i)}$
are simultaneously of negative homogeneity.
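Explicitly, with $\Delta_{i} = s_{i} + 1$ and the standard homogeneity $\deg {\cal H}^{(i)} = \Delta_{i} - \Delta_{j} - \Delta_{k}$ of the ansatz \eqref{H ansatz}, one finds
\begin{align}
\Delta_{i} - \Delta_{j} - \Delta_{k} = (s_{i} + 1) - (s_{j} + 1) - (s_{k} + 1) = s_{i} - s_{j} - s_{k} - 1 = q_{i} \, ,
\end{align}
which reproduces the degrees $q_{i}$ appearing in the discriminant.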
\subsubsection{Grassmann-even correlators}
The complete classification of results for Grassmann-even conserved three-point functions, including cases where there is a point-switch symmetry, is as follows:
\begin{itemize}
\item In all cases we have examined ($s_{i} \leq 20$) there is one even solution and one odd solution; however, the odd solution vanishes if the superspin triangle inequalities are not satisfied.
\item $\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \rangle$:
Note that in this case $s_2$ must be an integer.
For $s_{2}$ even, the solutions survive the point-switch symmetry for arbitrary $s_{1}$ (integer or half-integer). For $s_{2}$ odd the point-switch symmetry is not satisfied and the three-point function vanishes.
\item $\langle \mathbf{J}_{s} \, \mathbf{J}_{s} \, \mathbf{J}_{s} \rangle$: in this case $s$ is restricted to integer values. For $s$ even the solutions are compatible with the point-switch symmetries.
\end{itemize}
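The point-switch rules in the list above can be encoded directly (a sketch; the helper functions below merely restate the stated classification, and their names are ours):

```python
def even_LLJ_survives(s2):
    """Point-switch rule for <J_{s1} J_{s1} J'_{s2}> in the Grassmann-even
    sector: s2 must be an integer, and the structures survive the
    point-switch symmetry only for even s2."""
    assert s2 == int(s2), "s2 must be an integer in the Grassmann-even case"
    return int(s2) % 2 == 0

def even_JJJ_survives(s):
    """Point-switch rule for <J_s J_s J_s>: s is restricted to integer
    values, and the solutions are compatible with the point-switch
    symmetries for even s."""
    assert s == int(s), "s must be an integer"
    return int(s) % 2 == 0
```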
The number of linearly independent structures grows rapidly with the superspins, therefore we only present results for some low superspin cases after imposing conservation on all three points.
\vspace{4mm}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{1} \mathbf{J}'_{1} \mathbf{J}''_{1} \rangle$\textbf{:}
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{2-2-2.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{1/2} \mathbf{J}'_{1/2} \mathbf{J}''_{2} \rangle$\textbf{:}
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{1-1-4.pdf} &&
\end{flalign*}
This three-point function was initially studied in \cite{Nizami:2013tpa}, where it was shown that a parity-odd solution could arise. However, it was proven later in \cite{Buchbinder:2021qlb} that such a structure cannot be consistent with the superfield conservation equations. The approach we have developed also confirms that a parity-odd solution cannot exist; this is further supported by the fact that the superspin triangle inequalities are not satisfied for this three-point function.
\vspace{4mm}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{1/2} \mathbf{J}'_{1/2} \mathbf{J}''_{3} \rangle$\textbf{:}
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{1-1-6.pdf} &&
\end{flalign*}
This is another case where the superspin triangle inequalities are not satisfied, hence, the odd structure vanishes as expected.
\vspace{4mm}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{1/2} \mathbf{J}'_{3/2} \mathbf{J}''_{2} \rangle$\textbf{:}
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{1-3-4.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{3/2} \mathbf{J}'_{3/2} \mathbf{J}''_{2} \rangle$\textbf{:}
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{3-3-4.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{2} \mathbf{J}'_{2} \mathbf{J}''_{2} \rangle$\textbf{:}
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{4-4-4.pdf} &&
\end{flalign*}
This three-point function has been studied explicitly using a tensor formalism in \cite{Buchbinder:2021qlb}, where it was shown that a parity-odd solution could arise in the three-point function. The approach we have developed can compute this correlator in seconds.
\vspace{4mm}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{1} \mathbf{J}'_{2} \mathbf{J}''_{4} \rangle$\textbf{:}
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{2-4-8.pdf} &&
\end{flalign*}
In this case we note that the superspin triangle inequalities are not satisfied and therefore the odd solution vanishes after current conservation.
\vspace{4mm}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{2} \mathbf{J}'_{2} \mathbf{J}''_{4} \rangle$\textbf{:}
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{4-4-8.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{4} \mathbf{J}'_{4} \mathbf{J}''_{4} \rangle$\textbf{:}
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{8-8-8.pdf} &&
\end{flalign*}
\subsubsection{Grassmann-odd correlators}
The classification of results for Grassmann-odd three-point functions, including cases where there is a point-switch symmetry, is as follows:
\begin{itemize}
\item In all cases we have examined ($s_{i} \leq 20$), the three-point functions are fixed up to a single parity-even solution after conservation on all three points. In general, any parity-odd structures are incompatible with conservation.
\item $\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \rangle$:
Note that in this case $s_2$ must be half-integer.
For $s_{1} \neq s_{2}$, the classification is as follows:
\begin{itemize}
\item Let $s_{2} = 2k+\tfrac{1}{2}$, $k \in \mathbb{Z}_{\geq 0}$; for arbitrary $s_{1}$ (integer or half-integer) the point-switch symmetry is not satisfied and therefore the three-point function vanishes in general.
\item Let $s_{2} = 2k+\tfrac{3}{2}$, $k \in \mathbb{Z}_{\geq 0}$; for arbitrary $s_{1}$ (integer or half-integer) the point-switch symmetry is satisfied and therefore the three-point function is fixed up to a single parity-even structure.
\end{itemize}
\item $\langle \mathbf{J}_{s} \, \mathbf{J}_{s} \, \mathbf{J}_{s} \rangle$: for $s = 2k + \tfrac{3}{2}$, $k \in \mathbb{Z}_{\geq 0}$ the solution is compatible with the point-switch symmetry. For $s = 2k+\tfrac{1}{2}$, $k \in \mathbb{Z}_{\geq 0}$ the three-point function vanishes.
\end{itemize}
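The Grassmann-odd point-switch rules above can likewise be summarised in a small helper (a sketch restating the classification; the function name is ours). The same rule applies to $\langle \mathbf{J}_{s} \, \mathbf{J}_{s} \, \mathbf{J}_{s} \rangle$ with $s_{2}$ replaced by $s$:

```python
from fractions import Fraction

def odd_sector_survives(s2):
    """Point-switch rule for <J_{s1} J_{s1} J'_{s2}> in the Grassmann-odd
    sector: s2 must be a half-integer; the single parity-even structure
    survives for s2 = 2k + 3/2 and vanishes for s2 = 2k + 1/2, k >= 0."""
    f = Fraction(s2)
    assert f.denominator == 2, "s2 must be a half-integer here"
    return (f - Fraction(3, 2)) % 2 == 0
```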
We now present results after imposing conservation on all three points.
\vspace{2mm}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{1/2} \mathbf{J}'_{3/2} \mathbf{J}''_{5/2} \rangle$\textbf{:}
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{1-3-5.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{2} \mathbf{J}'_{2} \mathbf{J}''_{1/2} \rangle$\textbf{:}
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{4-4-1.pdf} &&
\end{flalign*}
In this instance we note that the superspin triangle inequalities are not satisfied and therefore the odd solution vanishes after current conservation.
\vspace{4mm}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{2} \mathbf{J}'_{2} \mathbf{J}''_{3/2} \rangle$\textbf{:}
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{4-4-3.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{3/2} \mathbf{J}'_{3/2} \mathbf{J}''_{5/2} \rangle$\textbf{:}
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{3-3-5.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{3/2} \mathbf{J}'_{3/2} \mathbf{J}''_{7/2} \rangle$\textbf{:}
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{3-3-7.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{2} \mathbf{J}'_{2} \mathbf{J}''_{7/2} \rangle$\textbf{:}
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{4-4-7.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{5/2} \mathbf{J}'_{5/2} \mathbf{J}''_{5/2} \rangle$\textbf{:}
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{5-5-5.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{7/2} \mathbf{J}'_{7/2} \mathbf{J}''_{7/2} \rangle$\textbf{:}
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{7-7-7.pdf} &&
\end{flalign*}
\section{Three-point functions of scalar superfields}\label{section5}
For completeness, in this section we analyse three-point correlation functions involving scalar superfields and conserved supercurrents. Some of the three-point functions contain parity-odd solutions, with their existence depending on both triangle inequalities and the weights of the scalars. We found that the following general results hold:
\begin{subequations}
\begin{align}
\langle {\cal O} \, {\cal O}' \, \mathbf{J}_{s} \rangle &= a \, \langle {\cal O} \, {\cal O}' \, \mathbf{J}_{s} \rangle_{E} \, , \\[2mm]
\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \, {\cal O} \rangle &= a \, \langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \, {\cal O} \rangle_{E} + b \, \langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \, {\cal O} \rangle_{O} \, .
\end{align}
\end{subequations}
The correlation functions are analysed using the same methods as in the previous sections; the full classification of results (for cases where there is a point-switch symmetry) is summarised below:
\begin{itemize}
\item $\langle {\cal O} \, {\cal O}' \, \mathbf{J}_{s} \rangle$: in general there are solutions only for $\Delta_{{\cal O}} = \Delta_{{\cal O}'}$. For the Grassmann-even case the solution satisfies the point-switch symmetry ${\cal O} = {\cal O}'$ only for even $s$. For the Grassmann-odd case the solution satisfies the point-switch symmetry only for $s = 2k+\tfrac{3}{2}$, $k \in \mathbb{Z}_{\geq 0}$.
\item $\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \, {\cal O} \rangle$: for $s_{1} \neq s_{2}$, there is a single even solution for $\Delta_{{\cal O}}=1$, otherwise the three-point function vanishes. For $s_{1} = s_{2}$ there is one even and one odd solution and the point-switch symmetries are satisfied.
\end{itemize}
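The second rule above can again be summarised programmatically (a sketch restating the stated classification for $\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \, {\cal O} \rangle$; the function name is ours):

```python
from fractions import Fraction

def JJO_structures(s1, s2, delta_O):
    """(even, odd) structure counts for <J_{s1} J'_{s2} O>: for s1 != s2
    a single even structure exists only when Delta_O = 1, otherwise the
    correlator vanishes; for s1 = s2 there is one even and one odd
    structure for arbitrary Delta_O."""
    if Fraction(s1) != Fraction(s2):
        return (1, 0) if delta_O == 1 else (0, 0)
    return (1, 1)
```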
We now present explicit solutions for the above cases.
\vspace{4mm}
\noindent
\textbf{Correlation function} $\langle {\cal O} \, {\cal O}' \, \mathbf{J}_{1/2} \rangle$\textbf{:}\\
For $\delta_{1} = \delta_{2} = \delta$, there is a single even solution compatible with conservation
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{0-0-1-A.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle {\cal O} \, {\cal O}' \, \mathbf{J}_{1} \rangle$\textbf{:} \\
For $\delta_{1} = \delta_{2} = \delta$, there is a single even solution compatible with conservation
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{0-0-2.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle {\cal O} \, {\cal O}' \, \mathbf{J}_{3/2} \rangle$\textbf{:}\\
For $\delta_{1} = \delta_{2} = \delta$, there is a single even solution compatible with conservation
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{0-0-3-A.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle {\cal O} \, {\cal O}' \, \mathbf{J}_{2} \rangle$\textbf{:}\\
For $\delta_{1} = \delta_{2} = \delta$, there is a single even solution compatible with conservation
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{0-0-4-A.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{1/2} \mathbf{J}'_{1/2} {\cal O} \rangle$\textbf{:}\\
In this case, the superspin triangle inequalities are satisfied and there is one even and one odd solution for arbitrary $\delta$:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{1-1-0.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{1/2} \mathbf{J}'_{3/2} {\cal O} \rangle$\textbf{:}\\
In this case there is a solution only for $\delta = 1$:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{1-3-0.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{3/2} \mathbf{J}'_{3/2} {\cal O} \rangle$\textbf{:} \\
In this case, the superspin triangle inequalities are satisfied and there is one even and one odd solution for arbitrary $\delta$:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{3-3-0.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{1} \mathbf{J}'_{2} {\cal O} \rangle$\textbf{:} \\
In this case there is a solution only for $\delta = 1$:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{2-4-0.pdf} &&
\end{flalign*}
\noindent
\textbf{Correlation function} $\langle \mathbf{J}^{}_{2} \mathbf{J}'_{2} {\cal O} \rangle$\textbf{:} \\
In this case, the superspin triangle inequalities are satisfied and there is one even and one odd solution for arbitrary $\delta$:
\begin{flalign*}
\hspace{5mm} \includegraphics[width=0.97\textwidth]{4-4-0.pdf} &&
\end{flalign*}
\section{Conclusion}\label{section6}
The purpose of this paper was to develop a formalism to determine the general structure of three-point correlation functions of conserved supercurrents for arbitrary
superspins in three-dimensional superconformal field theory. Our method produces explicit results up to $s_{i} = 20$ and is limited only by computer power.
We found that the main difference in the general structure of the three-point function $\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \mathbf{J}''_{s_{3}} \rangle$
is whether it is Grassmann-odd or Grassmann-even in superspace. If $\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \mathbf{J}''_{s_{3}} \rangle$ is Grassmann-odd
(that is the sum of the superspins is half-integer) then the correlator is fixed up to a single parity-even contribution. If $\langle \mathbf{J}^{}_{s_{1}} \mathbf{J}'_{s_{2}} \mathbf{J}''_{s_{3}} \rangle$
is Grassmann-even (that is the sum of the superspins is an integer) then it is fixed up to one even solution and one odd solution; the existence of the latter, however, depends on whether the triangle
inequalities are satisfied.
The pattern of the number of independent structures is clear and we have sufficient evidence to propose that our classification of results holds in general.
There are various possible directions to extend our results. An open question is whether it is possible to find generating functions for arbitrary
superspins that encapsulate the results in this paper, similar to the ones found in non-supersymmetric
theories \cite{Stanev:2012nq,Zhiboedov:2012bm}. It would also be interesting to apply our methods to superconformal theories in higher dimensions
(see~\cite{Buchbinder:2021izb,Buchbinder:2021kjk,Buchbinder:2022kmj} for recent progress) and to ${\cal N}$-extended superconformal theories.
Correlation functions of higher-spin currents in conformal theories with extended supersymmetry have practically not been studied; however, recent progress has been reported in~\cite{Jain:2022izp}.
An important difference compared to the ${\cal N}=1$ case is that conserved currents can carry indices of the $R$-symmetry group.
Concerning the study of three-point functions in four dimensions, in~\cite{Buchbinder:2022kmj} a method was introduced to study three-point functions of
conserved supercurrents $J_{\alpha (r) \dot\alpha (r)}$ for arbitrary superspins in 4D ${\cal N}=1$ superconformal field theories. Explicit solutions were constructed for three-point functions involving
higher-spin supercurrents and flavour current multiplets. The method of~\cite{Buchbinder:2022kmj} was an extension of the one used in \cite{Buchbinder:2021kjk} where
the classification problem was solved for generic three-point functions of conserved fermionic currents $S_{\alpha(k)}$ of arbitrary rank in 4D ${\cal N}=1$ SCFT. We believe that the
formalism developed in the present paper will generalise directly to 4D ${\cal N}=1$ theories and will allow us to extend the results of~\cite{Buchbinder:2022kmj}.
We leave these considerations for a future study.
\section*{Acknowledgements}
The authors are grateful to Sergei Kuzenko and Jessica Hutomo for valuable discussions. We also acknowledge the use of Matthew Headrick’s \textit{Grassmann} Mathematica package for computations with fermionic variables. The work of E.I.B. is supported in part by the Australian Research Council, project No. DP200101944. The work of B.S. is supported by the \textit{Bruce and Betty Green Postgraduate Research Scholarship} under the Australian Government Research Training Program.
\section{INTRODUCTION}
Usability and accessibility are the commonly used terms in the context of enhancing the user experience for users of the world wide web~\cite{hanson2013progress}.
While there is no universally accepted view or definition of usability, a commonly accepted definition, presented in the ISO 9241-11 standard, states that a product is considered usable if the specified users can accomplish specified tasks effectively, efficiently, and with satisfaction \cite{iso1998ergonomic}.
Currently, most public and private activities heavily rely on web-based services, making it critical for the web to be usable on multiple devices such as desktops, tablets and mobile phones \cite{jeong2020dynamic}, by any individual, irrespective of any physical or mental barriers \cite{sullivan2000barriers}.
Usability of websites is influenced by the level of accessibility of websites \cite{giraud2018web}.
Services that promote businesses, train defense systems, and so on using recommender systems also rely on web portals for crowdsourced information, such as identifying areas of interest from users' eye tracking and keyboard movements \cite{eraslan2020best, mazumdar2020cold, flores2021utilizing, vidyapu2020investigating}. An accessible portal can thus collect information from a wider and more diverse range of individuals.
Web accessibility has attracted significant attention from researchers and governments across the globe since its inception to provide better access to the websites \cite{rau2016evaluation}.
Researchers have emphasized the need for web developers to abide by accessibility guidelines of websites to support broader usability of web \cite{lazar2004improving}. Moreover, many websites are being designed based on the templates, thus requiring care to be taken to incorporate accessibility principles in designing these templates \cite{alarte2019web}.
User satisfaction, and consequently the user experience of visiting a website, is considered an important criterion while designing it \cite{alexander2021influence}. Many attempts are being made to increase the usability and usage of websites, such as improving their trustworthiness, protecting their visitors, and designing culturally adapted websites \cite{carpineto2020experimental, alexander2021influence, roy2021integrated}.
Despite the massive digitization, a significant amount of web content and e-services are not accessible to a large section of users today~\cite{hanson2013progress}.
When it comes to utilizing the web, a diverse range of users exists, including visually impaired or disabled groups and older adults, who might find it difficult to read through a page \cite{fritz2019customization}.
Presenting content to these groups in the same fashion as it is presented to user groups without disabilities makes the web inaccessible to them \cite{carvalho2018accessibility}.
While improving the reliability and trustworthiness of a website \cite{carpineto2020experimental, roy2021integrated} and presenting culturally relevant elements on the page \cite{alexander2021influence} are important for ensuring customer safety and satisfaction, making the website accessible to diverse user groups, and thus providing them a better user experience, is also essential to increase the reach of the website.
Several attempts are being made to address this challenge of web accessibility \cite{moreno2019harmonization, crespo2016social4all, gay2010achecker}.
To overcome these challenges, standard guidelines such as the Web Content Accessibility Guidelines (WCAG) have been proposed to support website developers and designers in ensuring website accessibility.
The latest revision of the guidelines is WCAG 2.2\footnote{\url{https://www.w3.org/TR/WCAG22/}}, proposed in 2021.
Several tools such as AChecker \cite{gay2010achecker} and CAC \cite{klein2014checking} have been developed to evaluate websites against different versions of the guidelines, such as WCAG 1.0\footnote{\url{https://www.w3.org/TR/WAI-WEBCONTENT/}}, WCAG 2.0\footnote{\url{https://www.w3.org/TR/WCAG20/}}, the Stanca Act\footnote{\url{https://www.levelaccess.com/accessibility-regulations/italy/}} (Italian accessibility guidelines), and so on. Most of the existing tools focus on highlighting errors based on WCAG 2.0 guidelines, while fewer tools exist for evaluating websites based on WCAG 2.1 guidelines \cite{klein2014checking}, and no tools exist to evaluate websites based on WCAG 2.2 guidelines.
The World Wide Web Consortium (W3C) lists 132 web accessibility evaluation tools for WCAG 2.0 and 67 evaluation tools for WCAG 2.1, while none are listed for WCAG 2.2.\footnote{\url{https://www.w3.org/WAI/ER/tools/}}
Of the 67 tools, only 15 are available as open-source to evaluate websites against WCAG 2.1 guidelines.
Tools supporting command-line interfaces help verify the accessibility of a large number of websites automatically~\cite{fernandes2011web}.
In contrast, tools designed as browser plugins enable an easy and quick understanding of a website's accessibility. However, based on the above criteria, \textit{QualWeb}\footnote{\url{http://qualweb.di.fc.ul.pt/evaluator/about}} is the only tool listed to support both a command-line interface and a browser plugin.
We observe that \textit{QualWeb} is not available as a browser extension yet.
It also requires the further installation of other packages such as npm and revised chromium-browser to use the command line interface version, making it difficult to use the tool.
Even after installing the required dependencies, many errors occurred, preventing the functioning of the tool\footnote{Snapshots of the errors occurred are presented in this document - {\url{https://osf.io/k9v8a/?view_only=9b7799ccf554412f9cdaafa61da4bf52}}}.
This indicates the need for better tools and approaches that could evaluate the accessibility of websites against WCAG 2.1 guidelines and consequently be used in web development.
Hence, in this paper, we propose \textit{WAccess\footnote{\url{https://sites.google.com/iittp.ac.in/waccess}}}, a tool to assess web accessibility of websites against the latest WCAG 2.2 guidelines, along with WCAG 2.1 and 2.0 guidelines.
\textit{WAccess} displays a \textit{list of errors} with respect to the \textit{accessibility guidelines}, the \textit{code snippet} that causes each \textit{error}, and a \textit{suggested fix}.
We refer to the collective set of guidelines considered by \textit{WAccess} as the WCAG2 series.
Indian Government websites contain vast information and are critical for good governance in the country \cite{paul2020accessibility}.
Since a large number of users intend to use the government's e-services, especially in the Indian context, a massive volume of government information has been incorporated onto the web \cite{ismail2016accessibility}.
This aspect resulted in the growth of research on evaluating the accessibility of government websites \cite{ismail2018accessibility, mtebe2017accessibility, rau2016evaluation, karaim2019usability}.
However, these evaluations were performed on smaller numbers of websites, ranging from 10 to 302, as the existing tools for evaluating web accessibility are complex to use for automated guideline analysis at scale. Moreover, these evaluations are based on WCAG 1.0 and WCAG 2.0, which predate WCAG 2.1. While \citet{narasimhan2012accessibility} automatically evaluated the accessibility of a larger number of GoI websites (7800) using \textit{AChecker}\footnote{\url{https://achecker.achecks.ca/checker/index.php}}, their evaluation is confined to WCAG 2.0 guidelines, owing to \textit{AChecker}'s limitation of supporting evaluation only up to WCAG 2.0.
As an attempt towards analyzing the accessibility of multiple government websites against the WCAG2 series, we performed a study on the accessibility of 2227 Indian government websites using \textit{WAccess}; through this study, \textit{WAccess} detected 6,131,929 violations. The results of the study are available here\footnote{\url{https://osf.io/fnx4t/?view_only=1769872b5dd447fbbb30fe47ecf88ece}}. The source code of \textit{WAccess} is available at: \url{https://github.com/Kowndinya2000/WAccess}.
The contributions of this paper are:
\begin{enumerate}
\item \textit{WAccess}, a tool to check the conformance of websites against the web accessibility standards specified by W3C, covering selected guidelines of the latest WCAG 2.2, WCAG 2.1 and WCAG 2.0.
\item An accessibility study on 2227 Indian government websites listed in the Government of India web directory, using the \textit{WAccess} tool.
\item A list of violations for each of the websites considered in the accessibility study.
\end{enumerate}
The rest of the paper is organized as follows: we describe the related work in Section \ref{sec:related_work}. We then discuss the design of \textit{WAccess} in Section \ref{sec:design}. Section \ref{sec:eval} presents evaluation of Indian government websites by \textit{WAccess}, followed by discussion and limitations in Section \ref{sec:discussion}. We conclude the paper and present some future work in Section \ref{sec:conclusion}.
\section{RELATED WORK}
\label{sec:related_work}
Web accessibility is considered an important issue in the current digital world, leading to the emergence of several approaches and guidelines to improve accessibility \cite{moreno2019harmonization, crespo2016social4all, gay2010achecker, broccia2020flexible}. Moreno et al. have emphasized the need to standardize web accessibility standards across the world and suggested the use of the WCAG guidelines \cite{moreno2019harmonization}.
There are several tools and techniques in the literature to evaluate websites' accessibility against WCAG guidelines \cite{gay2010achecker, klein2014checking, takata2004accessibility}.
Takata et al. \cite{takata2004accessibility} proposed a tool to verify the accessibility and syntactic correctness of a website. The tool supports verification of any XML document by separating out the guidelines to facilitate their easy modification \cite{takata2004accessibility}. \textit{AChecker} \cite{gay2010achecker} is a standalone open-source tool to analyze the extent to which a website adheres to a set of accessibility guidelines. It lets users choose the accessibility guidelines to evaluate a website against from a pre-loaded list \cite{gay2010achecker}. The \textit{WAVE} tool identifies accessibility errors with respect to WCAG guidelines, supporting web developers in building web pages that are accessible to all, including individuals with disabilities\footnote{\url{http://wave.webaim.org/}}. \textit{CAC} evaluates a website against WCAG 2.0 guidelines for accessibility issues \cite{klein2014checking}. \textit{CAC} also reports issues to users by highlighting them on the webpage and proposes possible solutions \cite{klein2014checking}. Crespo et al. suggest a novel approach to support the rectification of some accessibility issues in websites, based on evaluating adherence to a set of accessibility guidelines \cite{crespo2016social4all}. Broccia et al. have highlighted the need for tools and approaches to validate the accessibility of websites and presented a tool supporting WCAG 2.1 guidelines \cite{broccia2020flexible}. They further used this tool to evaluate twelve websites in six different categories, including health, public bodies, and travel \cite{broccia2020flexible}.
Web usability evaluations of government websites have been performed in several countries such as China \cite{rau2016evaluation}, Tanzania \cite{mtebe2017accessibility}, the Kyrgyz Republic \cite{ismailova2017web}, and India \cite{katre2011expert}, and researchers have observed that most of the websites fail to meet even minimal accessibility standards. Recently, Spina analyzed the WCAG 2.1 guidelines for libraries and found that it is important to update existing tools to support WCAG 2.1 \cite{spina2019wcag}.
There is scarce research on web accessibility in the Indian context. Researchers have evaluated the web accessibility of banking websites \cite{kaur2014banking} and educational institutions~\cite{ismail2018accessibility}, among others. A 2011 expert manual study of 28 Government of India (GoI) websites found that most of the websites were either down or had accessibility issues \cite{katre2011expert}.
A recent case study assessed the usability, accessibility and mobile readiness of 164 Indian government ministry websites with the help of an automatic tool named TAW\footnote{\url{https://www.tawdis.net/}}; however, the evaluation is based on WCAG 2.0 guidelines only~\cite{agrawal2021assessing}.
A study performed by the Center for Internet and Society on 7800 websites of GoI using existing web accessibility evaluation tools based on WCAG 2.0 found an average of 63 errors per home page, with a few pages crossing 1000 errors~\cite{narasimhan2012accessibility}.
In a study assessing the accessibility of 15 GoI portals with respect to WCAG 2.0 (2008) and the GIGW guidelines, \citet{patra2018accessibility} listed specific aspects to be considered for improving website accessibility.
Recently,~\citet{paul2020accessibility} studied the accessibility and usability of 65 Indian government websites against WCAG 2.0 and WCAG 1.0 using automated tools specific to these guidelines.
The results revealed that the considered websites do not prioritize accessibility, eventually leading to the low quality of government websites in India~\cite{paul2020accessibility}.
As mentioned above, the majority of the existing approaches and tools for WCAG 2.1 are proprietary. Only 13 tools are open source, out of which only \textit{QualWeb} works both as a browser plugin and as a command-line interface.
However, \textit{QualWeb} did not work when we attempted to run it, and it also carries the overhead of installing other packages as prerequisites.
Furthermore, no tools exist for the latest WCAG 2.2 guidelines.
Hence, we propose \textit{WAccess}, as a Google Chrome plugin that aims to verify the accessibility of websites based on WCAG2 series. We evaluated 2227 Indian government websites with \textit{WAccess}.
\section{DESIGN AND DEVELOPMENT OF \textit{WAccess}}
\label{sec:design}
\subsection{WEB ACCESSIBILITY GUIDELINES}
The Web Accessibility Initiative (WAI) of the W3C has proposed several standards with the goal of ``access to the Web by everyone'', each having a layered set of principles, guidelines, success criteria, and sufficient and advisory techniques.
WCAG 1.0, the initial set of web accessibility guidelines, appeared in 1999; revised versions WCAG 2.0 and WCAG 2.1 followed in 2008 and 2018 respectively, and the most recent version, WCAG 2.2, in 2021.
Each of these standards is backward compatible with its predecessors. Since WCAG 2.0, three conformance levels of accessibility, denoted A (basic accessibility), AA (desirable accessibility) and AAA (full accessibility), have been introduced; these can be customized as per the specific needs of the web content and web content providers. In addition to WCAG, two more standards, the Authoring Tool Accessibility Guidelines (ATAG) 2.0 and the User Agent Accessibility Guidelines (UAAG) 2.0, have been proposed to assist web developers and users with disabilities by enhancing user agents in websites, for example with text-to-speech support.
\begin{figure}[h]
\centering
\includegraphics[height = 7cm, width = \linewidth]{Images/WAccess-Overview.png}
\caption{Design Methodology of \textit{WAccess}}
\label{fig:approach}
\end{figure}
\subsection{DESIGN METHODOLOGY}
\textit{WAccess} is implemented as a browser plugin that, when activated, displays accessibility violations on the web console.
By analyzing the guideline definitions and success criteria, we framed rules and formulated the necessary conditions for each guideline in individual JavaScript files. We narrowed the scope of accessibility testing to what can be automated, excluding the parts of success criteria that require human investigation. For instance, (1) Guideline 2.4.4 requires the purpose of a link to be effectively described in its linked text, and (2) according to Guideline 1.1.1, providing null alt text helps assistive technology ignore decorative images, but whether an image is decorative or not is subjective;
such cases are not in the scope of \textit{WAccess}, as they require human observation to decide whether the guideline has been violated.
Violations highlighted by \textit{WAccess} thus form a subset of all possible violations: if \textit{WAccess} highlights zero violations for a guideline, this does not mean that the success criterion is completely met, only that no violations were spotted within the implementation scope of \textit{WAccess}.
The \textit{WAccess} plugin has been designed based on the approach shown in Fig.~\ref{fig:approach}.
\textit{WAccess} was developed to help determine whether web content meets accessibility standards, with WCAG 2.2, 2.1 and 2.0 in consideration.
We focus on WCAG 2.0, 2.1 and 2.2, which are based on four core principles and 29 guidelines, with each guideline having multiple success criteria.
The four core principles are as follows:
\begin{itemize}
\item\textbf{Perceivable:} Users must be able to perceive all relevant information in the content.
\item\textbf{Operable:} Users must be able to operate the interface successfully.
\item \textbf{Understandable:} Users must be able to understand the information and operation of the interface.
\item \textbf{Robust:} Content must be accessible to all users and should be easily interpreted by a wide range of user agents.
\end{itemize}
\begin{table}[]
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Principle & ID & Description & Level & Total Violations & Websites \\ \hline
\multirow{5}{*}{Operable} & 2.4.11 & Focus Appearance (Minimum) & AA & 1393698 & 2227 \\ \cline{2-6}
& 2.4.12 & Focus Appearance (Enhanced) & AAA & 1654014 & 2227 \\ \cline{2-6}
& 2.4.13 & Page Break Navigation & A & 2142 & 292 \\ \cline{2-6}
& 2.5.7 & Dragging Movements & AA & 346825 & 2102 \\ \cline{2-6}
& 2.5.8 & Target Size (Minimum) & AA & 1001991 & 2221 \\ \hline
\multirow{2}{*}{Understandable} & 3.2.7 & Visible Controls & A & 0 & 0 \\ \cline{2-6}
& 3.3.7 & Accessible Authentication & A & 9358 & 1458 \\ \hline
\end{tabular}
\caption{7 WCAG 2.2 Guidelines considered for developing \textit{WAccess} with total violations observed and number of websites violating each guideline}
\label{tab:table_2.2}
\end{table}
\begin{table}[]
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Principle & ID & Description & Level & Total Violations & Websites \\ \hline
\multirow{4}{*}{Perceivable} & 1.3.5 & Identify Input Purpose & AA & 7991 & 1303 \\ \cline{2-6}
& 1.3.6 & Identify Purpose & AAA & 3899 & 923 \\ \cline{2-6}
& 1.4.11 & Non-text Contrast & AA & 40558 & 1547 \\ \cline{2-6}
& 1.4.13 & Content on Hover or Focus & AA & 0 & 0 \\ \hline
\multirow{4}{*}{Operable} & 2.1.4 & Character Key Shortcuts & A & 0 & 0 \\ \cline{2-6}
& 2.3.3 & Animation from Interactions & AAA & 0 & 0 \\ \cline{2-6}
& 2.5.3 & Label in Name & A & 62923 & 1782 \\ \cline{2-6}
& 2.5.5 & Target Size & AAA & 56497 & 1823 \\ \hline
Robust & 4.1.3 & Status Messages & AA & 0 & 0 \\ \hline
\end{tabular}
\caption{9 WCAG 2.1 Guidelines considered for developing \textit{WAccess} with total violations observed and number of websites violating each guideline}
\label{tab:table_2.1}
\end{table}
\begin{table}[]
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Principle & ID & Description & Level & Total Violations & Websites \\ \hline
\multirow{6}{*}{Perceivable} & 1.1.1 & Non-text Content & A & 93040 & 1996 \\ \cline{2-6}
& 1.3.1 & Info and Relationships & A & 3364 & 850 \\ \cline{2-6}
& 1.4.1 & Use of Color & A & 0 & 0 \\ \cline{2-6}
& 1.4.3 & Contrast (Minimum) & AA & 469170 & 2168 \\ \cline{2-6}
& 1.4.4 & Resize text & AA & 32545 & 1448 \\ \cline{2-6}
& 1.4.6 & Contrast (Enhanced) & AAA & 521437 & 2170 \\ \hline
\multirow{4}{*}{Operable} & 2.1.1 & Keyboard & A & 0 & 0 \\ \cline{2-6}
& 2.2.2 & Pause, Stop, Hide & A & 1283 & 834 \\ \cline{2-6}
& 2.4.4 & Link Purpose (In Context) & A & 230606 & 2049 \\ \cline{2-6}
& 2.4.6 & Headings and Labels & AA & 2208 & 2208 \\ \hline
\multirow{2}{*}{Understandable} & 3.1.1 & Language of Page & A & 2187 & 2187 \\ \cline{2-6}
& 3.3.2 & Labels or Instructions & A & 193624 & 1678 \\ \hline
Robust & 4.1.1 & Parsing & A & 2569 & 584 \\ \hline
\end{tabular}
\caption{13 WCAG 2.0 Guidelines considered for developing \textit{WAccess} with total violations observed and number of websites violating each guideline}
\label{tab:table_2.0}
\end{table}
\textit{WAccess} considers 7 WCAG 2.2 guidelines, 9 WCAG 2.1 guidelines, and 13 WCAG 2.0 guidelines that do not require human intervention, as shown in Tables \ref{tab:table_2.2}, \ref{tab:table_2.1}, and \ref{tab:table_2.0}, to evaluate the accessibility of a website.
We group the guidelines into four classes: (i) \textit{aria-related}, (ii) \textit{color-contrast related}, (iii) \textit{HTML-check related} and (iv) \textit{interaction-related}.
These guidelines were reviewed to identify and sort the best practices required to meet the criteria of all guidelines.
Based on the best practices observed, rules were defined to evaluate a web page against the specified criteria.
Scripts to check the accessibility of a website based on the defined rules are written using jQuery.
Each of these scripts is designed to address one accessibility guideline. A manifest.json file is built to run all these guideline-specific JavaScript files.
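As an illustration, the core of such a guideline-specific script might look as follows. This is a hypothetical, simplified sketch in plain JavaScript (the actual \textit{WAccess} scripts use jQuery and operate on the live DOM; the function name and the violation-record shape are our own). It checks Guideline 1.1.1 on a list of image descriptors and, as discussed above, deliberately does not flag empty alt text, since deciding whether an image is decorative requires human judgment:

```javascript
// Hypothetical sketch of a guideline-specific check in the spirit of
// WAccess's per-guideline scripts (names and record shape are ours).
// Guideline 1.1.1: every <img> should carry an alt attribute.
function checkNonTextContent(images) {
  // `images` is a list of {src, alt} descriptors, e.g. extracted from the DOM.
  const violations = [];
  for (const img of images) {
    if (img.alt === undefined || img.alt === null) {
      violations.push({
        guideline: '1.1.1',
        error: 'Image has no alt attribute',
        snippet: `<img src="${img.src}">`,
        fix: 'Add an alt attribute describing the image, or alt="" if decorative',
      });
    }
    // alt="" is deliberately not flagged: whether an image is decorative
    // is subjective and outside the automated scope.
  }
  return violations;
}
```

Each violation record mirrors the information \textit{WAccess} prints to the web console: the guideline ID, an error message, the offending code snippet, and a suggested fix.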
\section{Evaluation and Results}
\label{sec:eval}
Through the Digital India initiative, the Government of India has made several of its services and governance functions available through digital platforms accessible to all citizens of the country.
Considering the frequent and wide use of Government of India websites, we chose to perform a case study on these websites to investigate their accessibility. The GoI web directory is a one-stop guide to all the government websites of India. We navigated to each state directory, downloaded the PDFs containing URLs of the websites, and curated them by extracting the links from all the PDFs collected. Of the 3995 websites collected, we found 2227 to be actively working; we carried out the analysis on these and report the results. For each website, we stored the number of violations for each guideline and the respective error and fix messages along with the code snippet. We consolidated the numbers of violations into different categories and plotted graphs for each of the WCAG2 series (2.0, 2.1 and 2.2) in Fig. \ref{fig:wcag_2_0}, Fig. \ref{fig:wcag_2_1} and Fig. \ref{fig:wcag_2_2} respectively, and the violation distribution across the three WCAG guideline series in Fig. \ref{fig:wcag_total}.
\begin{table}[]
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{2}{*}{Guideline} & \multicolumn{2}{c|}{Number of Violations} & \multirow{2}{*}{Conformance Level} \\ \cline{2-3}
& Aadhaar & Commerce & \\ \hline
1.1.1 & 57 & 202 & \multirow{7}{*}{A} \\ \cline{1-3}
1.3.1 & 3 & 2 & \\ \cline{1-3}
2.4.4 & 324 & 546 & \\ \cline{1-3}
2.4.13 & 0 & 22 & \\ \cline{1-3}
2.5.3 & 138 & 77 & \\ \cline{1-3}
3.1.1 & 1 & 1 & \\ \cline{1-3}
3.3.7 & 6 & 1 & \\ \hline
1.3.5 & 5 & 1 & \multirow{8}{*}{AA} \\ \cline{1-3}
1.4.3 & 1085 & 833 & \\ \cline{1-3}
1.4.4 & 177 & 7 & \\ \cline{1-3}
1.4.11 & 5 & 48 & \\ \cline{1-3}
2.4.6 & 1 & 1 & \\ \cline{1-3}
2.4.11 & 2404 & 2417 & \\ \cline{1-3}
2.5.7 & 398 & 757 & \\ \cline{1-3}
2.5.8 & 1967 & 2101 & \\ \hline
1.3.6 & 22 & 15 & \multirow{5}{*}{AAA} \\ \cline{1-3}
1.4.6 & 1175 & 1201 & \\ \cline{1-3}
2.4.12 & 2513 & 1709 & \\ \cline{1-3}
2.5.5 & 58 & 118 & \\ \cline{1-3}
Total & 10339 & 10059 & \\ \hline
\end{tabular}
\caption{Violations observed for UIDAI Aadhaar and Commerce Website}
\label{tab:case_study}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.6]{Images/chart_wcag_2_0.png}
\caption{Distribution of violations with respect to WCAG 2.0 guidelines}
\label{fig:wcag_2_0}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = \linewidth]{Images/chart_wcag_2_1.png}
\caption{Distribution of violations with respect to WCAG 2.1 guidelines}
\label{fig:wcag_2_1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = \linewidth]{Images/chart_wcag_2_2.png}
\caption{Distribution of violations with respect to WCAG 2.2 guidelines}
\label{fig:wcag_2_2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = \linewidth]{Images/chart-groups.png}
\caption{Distribution of violations across WCAG guideline series}
\label{fig:wcag_total}
\end{figure}
\subsection{Guidelines and Violations}
In this section, we describe the violations resulting from the study on the chosen government websites.
We tabulate the violations observed for each guideline and the number of websites violating each guideline, for the three WCAG collections separately, in Tables \ref{tab:table_2.2}, \ref{tab:table_2.1}, and \ref{tab:table_2.0}.
We base our observations on the results presented in Fig. \ref{fig:wcag_2_0}, Fig. \ref{fig:wcag_2_1}, Fig. \ref{fig:wcag_2_2}, Fig. \ref{fig:wcag_total}, and Tables \ref{tab:table_2.2}, \ref{tab:table_2.1}, \ref{tab:table_2.0}, and \ref{tab:case_study}. We observed 6.13 million violations across the 2227 websites, of which 4.41 million violations correspond to WCAG 2.2, 0.17 million to WCAG 2.1 and 1.55 million to WCAG 2.0 guidelines, as shown in Fig. \ref{fig:wcag_total}.
\textit{Guideline 1.3.5 Identify Input Purpose (AA)} requires the purpose of each input field to be defined so that it can be programmatically determined later.
Meeting this criterion allows information such as contact details, billing information or login details to be retrieved automatically by the browser, which can benefit users with poor memory.
Users with movement disorders can make use of the auto-fill feature, which lessens the need for manual input when filling forms.
This guideline was violated at least once by 1303 of the 2227 websites chosen, with 161 being the maximum number of violations observed for a single website\footnote{\url{http://wapcos.gov.in/}}.
\textit{Guideline 1.3.6 Identify Purpose (AAA)} intends to guarantee that the purpose of user interface components on the website can be programmatically determined. This empowers user agents to adapt the visual presentation of the purpose of components, thereby making them more understandable for the user.
Achieving this success criterion guarantees that the purpose behind UI elements can be recognized; people with reading difficulties or cognitive disabilities (who find it hard to retain focus) would find this guideline useful.
1304 websites did not violate this guideline, while it was violated a maximum of 55\footnote{\url{http://msmediagra.gov.in/aboutus.htm}} times among the websites chosen.
\textit{Guideline 1.4.11 Non-Text Contrast (AA)} notes that low-contrast controls are hard to see and might be missed entirely by individuals with visual impairments; hence, the visual presentation of UI components and graphical objects should have a contrast ratio of at least 3:1 against adjacent colors. The purpose of this success criterion is to ensure that active user interface components and meaningful graphics are recognizable by users with moderately low vision without additional assistive technologies. This guideline was violated at least once by 1547 websites, with a maximum of 391\footnote{\url{http://jkhighereducation.nic.in/order.html}} for any website. Websites with fewer than 30 violations make up 75\% of those with non-zero violations.
\textit{Guideline 2.5.3 Label in Name (A)} intends to guarantee that the text which labels a component visually matches the text associated with the component programmatically. This helps sighted users who depend on screen readers, as the text they hear matches the text they see on the screen.
More than 90\% of the websites failed to meet this guideline. One of the 2227 websites chosen violated this guideline a maximum of 1478\footnote{\url{http://sknau.ac.in/}} times. The largest number of websites violating this guideline fell in the range of 11--30 violations.
\textit{Guideline 2.5.5 Target Size (AAA)} requires controls to be large enough to see and touch. Users with low vision might have difficulty seeing a small target, and implementing the success criterion assists individuals who struggle with fine motor movements in activating a small target. Only 5.2\% of the websites do not violate this guideline. A maximum of 891\footnote{\url{http://www.tourism.rajasthan.gov.in/}} violations was observed for this guideline.
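A target-size check of this kind reduces to a predicate on an element's rendered bounding box. The sketch below is a hypothetical helper (the function name and the rect parameter are our own); the thresholds shown follow WCAG: 44 CSS pixels for Guideline 2.5.5 at level AAA, and 24 CSS pixels for the 2.5.8 minimum:

```javascript
// Hypothetical target-size predicate. `rect` mimics the result of
// element.getBoundingClientRect(); `minSide` is the WCAG threshold
// (44 CSS px for 2.5.5 AAA, 24 CSS px for the 2.5.8 minimum).
function meetsTargetSize(rect, minSide) {
  return rect.width >= minSide && rect.height >= minSide;
}
```

A full check would also account for spacing exceptions (e.g. equivalent nearby targets), which require richer page context than a single bounding box.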
\textit{Guideline 2.5.7 Dragging Movements (AA)}: the success criterion of this guideline ensures that dragging elements are not limited to single-pointer access. Drag-and-drop actions require a reasonably precise dragging motion; users who struggle with dragging may be unable to maintain contact, keeping their finger on the button or screen without accidentally releasing it.
Almost all the websites (94.3\%) were found to violate this guideline. The largest number of violations of this guideline fell in the range 501--1000, with a maximum of 9109 for any website.
\textit{Guideline 3.3.7 Accessible Authentication (A)} notes that users with certain cognitive disabilities struggle to memorize authentication credentials (such as usernames and passwords). This guideline requires at least one other authentication mechanism, such as a forgot-password link, which helps people with memory-related issues or perception-processing incapacities to authenticate without much cognitive effort.
Nearly 34.5\% of the websites were found to meet the requirements of this guideline. The majority of websites violated it fewer than 10 times, with a maximum of 161\footnote{\url{http://wapcos.gov.in/}} violations for any website.
\textit{Guideline 2.4.11 Focus Appearance (Minimum) (AA)} states that when UI components receive keyboard focus, the focus indicator should be clearly visible, indicating where the user is on the page. This goes beyond the existing 2.4.7 visible-focus requirement by defining a minimum size and a minimum contrast for the focus indicator. The level AA variant expects a contrast ratio of 3:1.
All the chosen websites violated this guideline at least once. The majority of websites violated it at least 3 times, and at most 9109\footnote{\url{https://www.mstcindia.co.in/}} times.
\textit{Guideline 2.4.12 Focus Appearance (Enhanced) (AAA)} also concerns the appearance of elements on focus, but calls for a higher level of conformance. This level AAA variant expects a contrast ratio of 4.5:1. None of the websites passed the success criterion for this guideline, with cumulative violations reaching 1.6 million.
\textit{Guideline 1.1.1 Non-text Content (A)}
requires a text alternative for information conveyed by non-text content.
Visually impaired users can hear the alt text of an image with a screen reader, and individuals with hearing disabilities can view the text description of audio content.
This guideline was violated at least once by 1996 of the 2227 websites chosen, with a maximum of 1030 violations observed for a single website\footnote{\url{http://westbengalforest.gov.in/}}.
\textit{Guideline 1.3.1 Info and Relationships (A)} states that information and relationships presented through auditory or visual formatting should be preserved when the presentation format changes. User agents benefit from this success criterion, as they can adapt content according to user needs.
1377 websites did not violate this guideline, while it was violated a maximum of 87 times among the websites chosen.
\textit{Guideline 1.4.3 Contrast (Minimum) (AA)} intends to provide enough contrast between text and its background so that it can be read by users with moderately low vision without additional contrast-enhancing assistive technology. This level AA variant requires a minimum contrast ratio of 4.5:1 for smaller fonts and 3:1 for larger fonts. Guideline 1.4.3 was violated at least once by 2168 websites, with a maximum of 5044 for any website. Violations ranging from 61 to 500 make up 67.4\% of the non-zero violations.
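The contrast computation behind the contrast guidelines follows directly from the WCAG definitions of relative luminance and contrast ratio. The sketch below is a self-contained illustration of those formulas (the function names are ours, not necessarily those used in \textit{WAccess}):

```javascript
// WCAG relative luminance of an sRGB color given as [r, g, b] in 0..255.
function relativeLuminance([r, g, b]) {
  const lin = (v) => {
    const c = v / 255;
    // sRGB linearization per the WCAG definition
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05), in 1..21.
function contrastRatio(fg, bg) {
  const l1 = relativeLuminance(fg);
  const l2 = relativeLuminance(bg);
  const [hi, lo] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (hi + 0.05) / (lo + 0.05);
}

// Level AA check (guideline 1.4.3): 4.5:1 for normal text, 3:1 for large text.
function meetsContrastAA(fg, bg, isLargeText = false) {
  return contrastRatio(fg, bg) >= (isLargeText ? 3 : 4.5);
}
```

For example, black text on a white background yields the maximum ratio of 21:1, while mid-gray text such as rgb(119, 119, 119) on white falls just below the 4.5:1 AA threshold.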
\textit{Guideline 1.4.6 Contrast (Enhanced) (AAA)} is the level AAA variant and requires a minimum contrast ratio of 7:1 for smaller fonts and 4.5:1 for larger fonts. Only 2.5\% of the websites do not violate this guideline. A maximum of 5305 violations was observed for this guideline.
\textit{Guideline 1.4.4 Resize Text (AA)} states that, without the help of assistive technology, text (excluding captions and images of text) should be resizable up to 200 percent without losing functionality or content. This helps users with low vision to increase the text size and read better. As part of this guideline, we highlight presentational HTML elements such as bold, italic and font and suggest not using them, as these elements may not be handled well by screen readers.
One of the 2227 websites chosen violated this guideline a maximum of 1518\footnote{\url{http://www.upe.bsnl.co.in/}} times. The largest number of websites violating this guideline fell in the range of 1--10 violations.
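The presentational-tag part of such a check can be sketched as follows. This is a hypothetical helper (the tag names would be collected from the DOM, and the suggested replacements are ours):

```javascript
// Hypothetical sketch: flag deprecated/presentational text tags and
// suggest semantic replacements (assumed mapping, not WAccess's exact one).
function flagPresentationalTags(tagNames) {
  const replacements = new Map([
    ['b', '<strong>'],
    ['i', '<em>'],
    ['font', 'CSS font properties'],
  ]);
  return tagNames
    .map((t) => t.toLowerCase())
    .filter((t) => replacements.has(t))
    .map((t) => ({ tag: t, suggestion: `Replace <${t}> with ${replacements.get(t)}` }));
}
```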
\textit{Guideline 2.2.2 Pause, Stop, Hide (A)} requires that users be able to pause, stop, or hide moving, blinking, scrolling, or auto-updating content. This benefits users with attention or cognitive difficulties who may be distracted by such content, as well as screen reader users for whom moving content can interfere with reading.
Nearly 37\% of the websites were found to violate this guideline. Almost all violations of this guideline fell in the range 1--10, with a maximum of 13\footnote{\url{http://techedu.rajasthan.gov.in/}} for any website.
\textit{Guideline 2.4.4 Link Purpose (In Context) (A)} states that the purpose of each link should be determinable from the link's text alone, or from the link's text in combination with its programmatically determined context. This success criterion helps people with motion impairments decide whether to follow a link, letting them skip pages they are not interested in visiting. Only 8\% of the websites were found to meet the requirements of this guideline. The majority of websites that violated this guideline fell in the range 61--500 violations, with a maximum of 1892\footnote{\url{http://sknau.ac.in/}} for any website.
\textit{Guideline 2.4.6 Headings and Labels (AA)} states that headings and labels should describe the topic or purpose. Clear headings enable users to find information more easily, and individuals with reading disabilities or limited short-term memory might find descriptive headings especially useful. In the implementation of \textit{WAccess}, we evaluate the ordering of header tags and do not verify their purpose, as that requires manual investigation.
Of the 2227 websites, 2208 were found to violate this guideline, with one violation per website.
\textit{Guideline 3.1.1 Language of Page (A)} requires the HTML page to have a valid language identifier defined in the \textit{lang} attribute. Failing to meet this criterion makes it difficult to identify the language of the page and might affect a screen reader's ability to use the correct accent or pronunciation. Meeting the requirement helps people who use text-to-speech software. In the implementation of \textit{WAccess}, we only check the length of the language code. Of the 2227 websites, 2187 were found to violate this guideline, with one violation observed per website.
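A minimal length-based lang check in this spirit can be sketched as follows (a hypothetical helper, not \textit{WAccess}'s exact code; it accepts a 2-3 letter primary subtag, as in BCP 47):

```javascript
// Hypothetical sketch of a length-based lang-attribute check for
// guideline 3.1.1. BCP 47 primary language subtags ("en", "hin", the
// part before any "-") are 2-3 alphabetic characters.
function hasPlausibleLang(langAttr) {
  if (typeof langAttr !== 'string' || langAttr.length === 0) return false;
  const primary = langAttr.split('-')[0];
  return /^[a-zA-Z]{2,3}$/.test(primary);
}
```

A length check of this kind catches a missing or obviously malformed attribute, but not a well-formed code for the wrong language, which would require human review.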
\textit{Guideline 3.3.2 Labels or Instructions (A)} requires labels or instructions to be provided so that users can identify controls in a form and know the expected input data. Meeting the success criterion helps people with cognitive, language and learning disabilities to enter the correct information. This guideline was violated at least once by 1678 of the 2227 websites chosen, with a maximum of 9801 violations observed for a single website.
\textit{Guideline 4.1.1 Parsing (A)} requires HTML elements to have complete opening and closing tags, and does not allow duplicate id attributes. Meeting this success criterion enables assistive technologies to parse the content without issues. 73\% of the websites did not violate this guideline, while the maximum number of violations observed for any website is 242\footnote{\url{https://keralataxes.gov.in/}}.
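The duplicate-id portion of such a parsing check can be sketched as follows (a hypothetical helper in plain JavaScript; the real check would first collect the id attributes from the DOM):

```javascript
// Hypothetical sketch of the duplicate-id part of a 4.1.1 parsing check.
// Given the id attributes collected from a page, report each id that
// occurs more than once, in order of first duplication.
function findDuplicateIds(ids) {
  const seen = new Set();
  const duplicates = new Set();
  for (const id of ids) {
    if (seen.has(id)) duplicates.add(id);
    else seen.add(id);
  }
  return [...duplicates];
}
```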
\begin{figure*}
\centering
\includegraphics[width = \linewidth]{Images/aadhaar-waccess.png}
\caption{A snapshot depicting results of \textit{WAccess}. [A] depicts the UIDAI website which is evaluated by \textit{WAccess}. [B] is the web console highlighting the list of errors with respect to guidelines. [C] is the violation block pertaining to a guideline consisting respective WCAG guideline ID, error, code snippet, and fix}
\label{fig:aadhaar}
\end{figure*}
\subsubsection{Guidelines with zero violations}
In the evaluation of 2227 websites, the guidelines 1.4.1, 1.4.13, 2.1.1, 2.1.4, 2.3.3, 3.2.7, and 4.1.3 displayed zero violations. This does not imply that the success criteria of these guidelines have been completely achieved, as the implementation of \textit{WAccess} might not be complete. For example, for guideline 2.1.1, which checks operability through the keyboard, the effect on the rest of the layout when an image receives keyboard focus can only be identified by manually assessing the visual appearance of the web page and cannot be programmatically verified. Hence, zero violations of guideline 2.1.1 for a specific website does not establish its conformance.
\subsection{Illustration of \textit{WAccess}}
We demonstrate the usage of \textit{WAccess} by navigating across two Indian government websites, UIDAI (Fig. \ref{fig:aadhaar}[A]) and the Commerce website (Fig. \ref{fig:commerce}[A]). The UIDAI website holds the unique identification details of all citizens of the country and is used by over a billion Indian residents, while the Commerce website covers services and merchandise related to foreign trade and the public sector.
Fig. \ref{fig:aadhaar}[B] and Fig. \ref{fig:commerce}[B] display a list of deviations from the accessibility guidelines as errors identified by \textit{WAccess} with respect to the defined guidelines, along with the code snippet that caused each violation and a suggested fix.
These errors are presented with respect to each guideline as represented in Fig. \ref{fig:aadhaar}[C], and Fig. \ref{fig:commerce}[C].
We observed that \textit{WAccess} could list the guidelines, among the 13 WCAG 2.0, 9 WCAG 2.1 and 7 WCAG 2.2 guidelines considered in its design, that are not being followed by a website. The number of violations observed for each of these websites is presented in Table \ref{tab:case_study}.
\subsubsection{UIDAI website:}
Through \textit{WAccess,} we found 10,339 guideline violations on this website. Nearly 89\% of the violations are attributable to the guidelines 1.4.3, 1.4.6, 2.5.8, 2.4.11 and 2.4.12. Fewer than 10 violations were observed for the guidelines 1.3.1, 1.3.5, 1.4.11, 2.4.6, 2.4.13, 3.1.1 and 3.3.7.
Guidelines at conformance level AA accounted for a significant share of the violations (about 59\%), while level A guidelines accounted for far fewer (only around 5.2\%). Guidelines at conformance level AAA formed 36.4\% of the total violations.
\subsubsection{Commerce website:}
Demonstration of using \textit{WAccess} for the Commerce website is depicted in Fig. \ref{fig:commerce}. Through \textit{WAccess,} we found 10,059 guideline violations on this website. Violations of guidelines 1.4.6, 2.4.11, 2.4.12 and 2.5.8 constitute 74\% of the total violations.
Fewer than 10 violations were observed for the guidelines 1.3.1, 1.3.5, 1.4.4, 2.4.6, 3.1.1 and 3.3.7. Guidelines at conformance level AA accounted for a significant share of the violations (about 62\%), while level A guidelines accounted for the fewest (only around 8.46\%). Guidelines at conformance level AAA formed 30\% of the total violations.
\begin{figure*}
\centering
\includegraphics[width = \linewidth]{Images/commerce-waccess.png}
\caption{A snapshot depicting results of \textit{WAccess}. [A] depicts the Commerce website which is evaluated by \textit{WAccess}. [B] is the web console highlighting the list of errors with respect to guidelines. [C] is the violation block pertaining to a guideline, consisting of the respective WCAG guideline ID, error, code snippet, and fix}
\label{fig:commerce}
\end{figure*}
\section{Discussion and Limitations}
\label{sec:discussion}
In this paper, we presented \textit{WAccess}, a tool for checking web accessibility, based on WCAG 2.0, 2.1 and 2.2 guidelines.
\textit{WAccess} evaluates accessibility with respect to 13 WCAG 2.0, 9 WCAG 2.1 and 7 WCAG 2.2 guidelines.
Though WCAG 2.0 and WCAG 2.1 comprise more guidelines, some of them require human intervention, restricting the scope for automated evaluation of the websites. Also, some of the selected guidelines contain rules (success criteria) parts of which require manual intervention. In order to improve the coverage of \textit{WAccess}, we selected only those human-independent parts of the success criteria that can be realised by automated analysis. As a result, when \textit{WAccess} reports a guideline as satisfied, only the automated part of the guideline is known to be satisfied (partial satisfaction), and it cannot be assured that the whole guideline is satisfied, indicating the scope for false positives. An approach that includes artificial intelligence techniques to simulate manual analysis could be explored and integrated with \textit{WAccess} to address this concern.
However, there are no false negatives: if a website is marked as violating a guideline, it is assured that the website does not satisfy that guideline.
The manual analysis to assess the correctness of \textit{WAccess} was conducted only on two randomly selected websites of the 2227 websites considered for automated analysis. The correctness of the violations displayed in the console of these websites based on the rule-based approach followed by \textit{WAccess} was manually validated. While care has been taken to incorporate the success criteria of the guidelines in the rule-based approach, thus validating the correctness of \textit{WAccess}, a large-scale manual analysis could be performed to provide stronger insights on its correctness.
The current version of \textit{WAccess} is a browser extension and can be incorporated into automated scripts to analyse a large number of websites. However, a command-line interface version of the tool could ease the automation task further.
\\
\\
\fbox{
\begin{minipage}{45em}
\textit{\textbf{Key findings:}}
\begin{itemize}
\item Selected guidelines implemented in \textit{WAccess} comprise 14 guidelines of level A conformance, 10 guidelines of level AA conformance, and 5 guidelines of level AAA conformance.
\item No website among the 2227 chosen complies with all three conformance levels.
\item Cumulatively, guidelines at the minimum conformance level (A) account for nearly 54.5\% of the total 6.1 million violations.
\item Level AAA violations take the next largest share among the observed violations, at 2.2 million, while level AA violations constitute about 10\% of the total.
\item We have highlighted the accessibility violations for 2227 Indian government websites with \textit{WAccess}, and observed that violations of the considered WCAG 2.2 guidelines amount to around 4.57 million, a significant share of the total 6.1 million violations. Among the considered guidelines, WCAG 2.0 constitutes 25\% and WCAG 2.1 only 2.8\% of the 6.1 million violations.
\item However, this result is based only on the 29 considered guidelines from the WCAG 2.0, 2.1 and 2.2 series and might differ if all WCAG 2.0, 2.1 and 2.2 guidelines are considered for evaluating the websites.
\end{itemize}
\end{minipage}
}
\newline
\section{Conclusion and Future Work}
\label{sec:conclusion}
In this paper, we presented \textit{WAccess}, an open source tool to assess the web accessibility of websites based on WCAG 2.0, 2.1 and 2.2 guidelines. Though there are multiple tools available to evaluate websites against WCAG 2.1 guidelines, these tools do not support automated evaluation of a large number of websites and are not open source. Further, no tools exist to check the conformance of a website against the WCAG 2.2 guidelines.
\textit{WAccess} is a browser extension based on a total of 29 WCAG guidelines, 13 from WCAG 2.0, 9 from WCAG 2.1 and 7 from WCAG 2.2, and supports large scale accessibility evaluation. We used \textit{WAccess} to automatically detect accessibility violations in 2227 Government of India websites. The results of the evaluation showed the deviations of each website with respect to the 29 guidelines being considered. These deviations are displayed as errors in each web page's browser-console, along with the code snippet that caused the deviation and a possible fix to rectify the deviation.
\textit{WAccess} can further be extended to cover a broader set of guidelines by reducing the need for human intervention through advanced techniques from the Artificial Intelligence and Machine Learning domains. We also plan to enhance the existing version of \textit{WAccess} by improving the user interface of the tool and by employing better technologies for its development.
The current browser extension based version of \textit{WAccess} could be extended as a command-line interface version to further ease the task of automated analysis.
\textit{WAccess} currently suggests fixes to a webpage based on its violations of the WCAG 2.0, 2.1 and 2.2 guidelines. It can be further improved to support automated or semi-automated refactoring of websites during development, consequently helping web developers abide by the accessibility guidelines and make their websites accessible to everyone. Other forms of the \textit{WAccess} tool, such as an open API or a web application, can also be developed to support a broader audience and a wider range of studies aimed at analyzing the accessibility of websites.
\textit{WAccess} could be extended to the newly drafted WCAG 3.0 guidelines in the future, and can also be extended to support other domain-specific accessibility guidelines, such as GIGW, a set of guidelines for government websites in the Indian context.
\section{Introduction}
The $\psi(3770)$ resonance is the lowest-mass $c\bar c$ state lying
above the open charm-pair threshold (3.73~GeV$/c^2$).
Since its width is 2 orders of magnitude larger than
that of the $\psi(3686)$ resonance, it is traditionally expected to
decay to $D\bar D$ meson pairs
with a branching fraction of more than $99\%$~\cite{Eithtin_chmonuim_prd1978}.
This would be consistent with other conventional mesons lying in the energy region between the open-charm
and open-bottom thresholds.
However, if a meson lying in this region
contains not only a $c\bar c$ pair but also a number of constituent gluons
or additional light quarks and antiquarks,
it may more easily decay to non-$D\bar D$ final states
(such as a lower-mass $c\bar c$ pair plus pions~\cite{Y4260_hybrid}
or light hadrons~\cite{Voloshin_Charmonium})
than conventional mesons.
In addition, if there are unknown conventional or unconventional mesons
near the $c\bar c$ state under study, the measured
non-open-charm-pair decay branching fraction of the $c\bar c$ state
could also be large~\cite{RongG_CPC_34_778_Y2010}.
For this reason, searching for non-open-charm-pair decays of the mesons
lying in this region has become a way to search for unconventional mesons.
In 2003, the BES Collaboration found the first non-open-charm-pair final
state of $J/\psi\pi^+\pi^-$~\cite{hepnp28_325,plb605_63}
in data taken at 3.773~GeV. Since the final state $J/\psi\pi^+\pi^-$ cannot be directly produced
in $e^+e^-$ annihilation,
this process is interpreted to be a hadronic transition $\psi(3770) \rightarrow J/\psi\pi^+\pi^-$,
although it has not been excluded that this final state may be a decay product
of some other possible structures~\cite{bes2_prl_2structures} which may exist
in this energy region.
Following this observation, the CLEO Collaboration found
that $\psi(3770)$ can also decay into $J/\psi\pi^0\pi^0$, $J/\psi\eta$~\cite{prl96_082004},
$\gamma\chi_{c0}$~\cite{prd74_031106},
$\gamma\chi_{c1}$~\cite{prl96_182002} and $\phi\eta$~\cite{prd73_012002}.
In the CLEO-c measurements, the $\chi_{c0}$ and $\chi_{c1}$ were reconstructed
with $\chi_{c0}\to$ light hadrons and $\chi_{c1}\to\gamma J/\psi$, respectively.
These observations stimulate strong interest in studying other non-$D\bar D$ decays of the $\psi(3770)$,
as well as searching for non-open-charm-pair decays of
other mesons lying in the energy region between the open charm-pair
and open bottom-pair thresholds,
particularly searching for $J/\psi {\rm X}$ or $c\bar c {\rm X}$
(where X denotes any other particle, or $n\pi$, $n$K, and $\eta$, where $n=1,2,3\ldots$)
decays of these mesons in this energy region.
Within an $S$-$D$ mixing model, the $\psi(3770)$ resonance
is assumed to be predominantly the $1^3D_1$ $c\bar c$ state with a small admixture of the $2^3S_1$ state.
Based on this assumption, Refs.~\cite{prd44_3562,prd64_094002,prd69_094019,prd72_054026} predict
the partial widths of $\psi(3770)$ $E1$ radiative transitions,
but with large uncertainties. For example,
the partial widths for $\psi(3770)\to \gamma\chi_{c1}$ and
$\psi(3770)\to \gamma\chi_{c2}$ range from 59 to 183~keV
and from 3 to 24~keV, respectively.
In addition,
the transition $\psi(3770)\to \gamma\chi_{c2}$ has yet to be
observed.
Therefore, precision measurements of partial widths of the
$\psi(3770)\to\gamma\chi_{c1,2}$ processes
are critical to test the above mentioned models, and
to better understand the nature of the $\psi(3770)$, as well as to find
the origin of the non-$D\bar D$ decays of the $\psi(3770)$.
In this paper, we report a measurement of
the branching fraction for the transition $\psi(3770) \rightarrow \gamma \chi_{c1}$
and search for the transition $\psi(3770) \rightarrow \gamma \chi_{c2}$
based on $(2916.94\pm29.17)$~pb$^{-1}$ of $e^+e^-$ data~\cite{bes3_lum} taken at $\sqrt{s}=3.773$~GeV
with the BESIII detector~\cite{bes3}
operated at the BEPCII collider.
\section{BESIII Detector}
The BESIII~\cite{bes3} detector is a cylindrical detector with a
solid-angle coverage of $93\%$ of $4\pi$ that operates at the BEPCII~\cite{bes3}
$e^+e^-$ collider. It consists of several main
components. A 43-layer main drift chamber (MDC) surrounding the beam
pipe performs precise determinations of charged particle
trajectories and provides ionization energy loss ($dE/dx$)
measurements that are used for charged-particle identification. An
array of time-of-flight counters (TOF) is located radially outside of
the MDC and provides additional charged particle identification
information. The time resolution of the TOF system is 80~ps (110~ps) in the
barrel (end-cap) regions,
corresponding to better than 2$\sigma$ $K/\pi$ separation for momenta below about 1 GeV/c.
The solid angle coverage of the barrel TOF is $|\cos \theta|<0.83$,
while that of the end cap is $0.85<|\cos \theta|<0.95$,
where $\theta$ is the polar angle.
A CsI(Tl) electromagnetic calorimeter (EMC) surrounds the
TOF and is used to measure the energies of photons and electrons.
The angular coverage of the barrel EMC is
$|\cos \theta| <0.82$. The two end caps cover
$0.83<|\cos \theta|<0.93$.
A solenoidal superconducting magnet located outside the EMC provides a 1
T magnetic field in the central tracking region of the detector. The
iron flux return of the magnet is instrumented with about 1200~m$^2$ of
resistive plate muon counters (MUC) arranged in nine layers in the barrel
and eight layers in the end caps that are used to identify muons with
momentum greater than 500~MeV/$c$.
The BESIII detector response is studied using samples of Monte Carlo (MC)
simulated events which are simulated with a {\sc geant4}-based~\cite{geant4} detector simulation software
package, {\sc boost}~\cite{BOOST}.
The production of the $\psi(3770)$ resonance is simulated with the
Monte Carlo event generator {\sc kkmc}~\cite{kkmc}.
The decays of $\psi(3770)\to\gamma\chi_{cJ}$ ($J=0,1,2$) are generated with {\sc EvtGen}~\cite{besevtgen} according to the expected angular
distributions~\cite{angular_model}.
In order to study possible backgrounds,
Monte Carlo samples of inclusive $\psi(3770)$ decays,
$e^+e^-\to(\gamma)J/\psi$, $e^+e^-\to(\gamma)\psi(3686)$,
and $e^+e^-\to q\bar q$ ($q=u,d,s$) are also generated.
For inclusive decays of $\psi(3770)$, $\psi(3686)$ and $J/\psi$,
the known decay modes are generated by {\sc EvtGen} with branching fractions
taken from the PDG~\cite{pdg2014},
while the remaining unknown decay modes are modeled by
{\sc LundCharm}~\cite{lundcharm}.
In addition,
the background process $e^+e^-\to\tau^+\tau^-$ is generated with {\sc kkmc},
while the backgrounds from
$e^+e^-\to(\gamma)e^+e^-$ and $e^+e^-\to(\gamma)\mu^+\mu^-$
are generated with the generator {\sc babayaga}~\cite{babayaga}.
\section{Analysis}
In this analysis, the process $\psi(3770) \rightarrow \gamma \chi_{cJ}$ ($J=1,2$) is reconstructed
using the decay chain $\chi_{cJ} \rightarrow \gamma J/\psi$, $J/\psi\to\ell^+\ell^-$ ($\ell=e$ or
$\mu$).
\subsection{Event selection}
Events that contain two good photon candidates and
exactly two oppositely charged tracks
are selected for further analysis.
For the selection of photons, the deposited energy of a neutral cluster in the EMC
is required to be greater than 50~MeV.
Time information from the EMC is used to
suppress electronic noise and energy deposits unrelated to the event.
To exclude false photons originating from charged tracks,
the angle between the photon candidate and the nearest charged track
is required to be greater than $10^{\circ}$.
Charged tracks are reconstructed from hit patterns in the MDC.
For each charged track, the polar angle $\theta$ is required to satisfy $|\cos\theta|<0.93$.
All charged tracks are required to have a distance of closest approach
to the average $e^+e^-$ interaction point that is less than 1.0~cm
in the plane perpendicular to the beam and less than 15.0~cm along the beam direction.
Electron and muon candidates can be well separated with the
ratio $E/p$,
where $E$ is the energy deposited in the EMC
and $p$ is the momentum measured in the MDC.
If the ratio $E/p$ is greater than 0.7, the charged track is identified as an electron or positron.
Otherwise, if the energy deposited in the EMC is
in the range from 0.05 to 0.35~GeV, the charged track is identified as a muon.
The $J/\psi$ candidates are reconstructed from pairs of leptons with
momenta in a range from 1.2 to 1.9~GeV$/c$.
In the selection of the $\gamma\gamma e^+e^-$ mode, we further require that the cosine of the polar
angle of the positron and electron, $\theta_{e^+}$ and $\theta_{e^-}$, satisfy $\cos\theta_{e^+}<0.5$
and $\cos\theta_{e^-}>-0.5$ to reduce the number of background events from radiative Bhabha scattering.
To exclude background events from $J/\psi\pi^0$ and $J/\psi\eta$
with $\pi^0\to\gamma\gamma$ and $\eta\to\gamma\gamma$,
the invariant mass of the two photons is required to be outside of the $\pi^0$
mass window (0.124, 0.146)~GeV$/c^2$ and the $\eta$ mass window (0.537, 0.558)~GeV$/c^2$.
\subsection{Kinematic fit and mass spectrum of $\gamma J/\psi$}
In order to both reduce background and improve the mass resolution,
a kinematic fit is performed under the $\gamma\gamma\ell^+\ell^-$ hypothesis.
We constrain the total energy and the components of the total momentum
to the expected center-of-mass energy and the three-momentum,
taking into account the small beam crossing angle. In addition to these,
we constrain the invariant mass of the $\ell^+\ell^-$ pair to the $J/\psi$ mass.
If the $\chi^2$ of the 5-constraint (5C) kinematic fit is less than 25,
the event is kept for further analysis.
The energy of the $\gamma$ from the transition $\psi(3770) \rightarrow \gamma \chi_{cJ}$ for $J=1,2$
is lower than that of the $\gamma$ from the subsequent transition $\chi_{cJ} \rightarrow \gamma J/\psi$,
while the energy of the $\gamma$ from the transition $\psi(3770) \rightarrow \gamma \chi_{c0}$
is usually higher than that of the $\gamma$ from the subsequent transition $\chi_{c0} \rightarrow \gamma J/\psi$.
To reconstruct the $\chi_{c1}$ and $\chi_{c2}$ from the radiative decay of the $\psi(3770)$,
we examine the invariant mass of $\gamma^H J/\psi$,
where $\gamma^H$ refers to the more energetic photon in the final state $\gamma\gamma\ell^+\ell^-$.
Figure~\ref{mass_ghjpsi_mc} (a) shows the distribution
of the invariant masses of $\gamma^{H} J/\psi$
from the Monte Carlo events of $\psi(3770) \rightarrow \gamma \chi_{cJ}\to\gamma\gamma J/\psi\to\gamma\gamma\ell^+\ell^-$,
which were generated at $\sqrt{s}=3.773$~GeV.
Due to the wrong combination of the photon and $J/\psi$,
the transition $\psi(3770)\to\gamma\chi_{c0}$ produces a broad distribution on the lower side;
the events shown in the peak located at $\sim 3.51$~GeV$/c^2$ are
from the $\psi(3770) \rightarrow \gamma \chi_{c1}$ decay;
while the events from the peak located at $\sim 3.56$~GeV$/c^2$ are
from the $\psi(3770) \rightarrow \gamma \chi_{c2}$ decay.
\begin{figure}[!hbt]
\includegraphics[width=0.5\textwidth]{Mgll_distributions_MC_replot_16Feb15_xdev1.eps}
\caption{
Invariant mass spectra of the selected $\gamma^{H}J/\psi$
combinations from Monte Carlo events generated at $\sqrt s = 3.773$~GeV,
(a) is for the events from $\psi(3770) \rightarrow \gamma \chi_{cJ}\to\gamma\gamma J/\psi\to \gamma\gamma \ell^+\ell^-$ decays,
(b) is for the background events,
and
(c) is the $e^+e^-\to(\gamma_{\rm ISR})\psi(3686)$, $\psi(3686)\rightarrow\gamma \chi_{cJ}\to\gamma\gamma J/\psi\to\gamma\gamma\ell^+\ell^-$
events.
}
\label{mass_ghjpsi_mc}
\end{figure}
\begin{figure}[!hbt]
\includegraphics[width=0.5\textwidth]{Data_MllgH_unbinFit_v1_replot1.eps}
\caption{
Invariant mass spectrum of the $\gamma^{\rm H}J/\psi$ combinations selected from data.
The dots with error bars represent the data.
The solid (red) line shows the fit.
The dashed (blue) line shows the smooth background.
The long-dashed (green) line is the sum of the smooth background
and the contribution from $e^+e^-\to(\gamma_{\rm ISR})\psi(3686)$ production.
}
\label{mass_ghjpsi_data}
\end{figure}
Figure~\ref{mass_ghjpsi_data} shows the invariant-mass distribution
of $\gamma^{H} J/\psi$ from the data.
There are two clear peaks corresponding to the $\chi_{c1}$ (left) and
the $\chi_{c2}$ (right) signals.
Due to the small branching fraction ($\sim 1\%$) and the wrong combination of the photon and $J/\psi$,
the events from $\chi_{c0}\to\gamma J/\psi$ decays are not clearly observed in Fig.~\ref{mass_ghjpsi_data}.
\subsection{Background studies}
In the selected candidate events,
there are both signal events
for $\psi(3770) \rightarrow \gamma \chi_{cJ}\to\gamma\gamma J/\psi$
and background events.
These background events originate from several sources, including
(1) decays of the $\psi(3770)$ other than the signal modes in question,
(2) $e^+e^-\to(\gamma)e^+e^- $, $e^+e^-\to(\gamma) \mu^+\mu^-$ and $e^+e^-\to(\gamma)\tau^+\tau^-$,
where the $\gamma$ in parentheses denotes the inclusion of photons from initial state radiation (ISR)
and final state radiation (FSR),
(3) continuum light hadron production,
(4) ISR $J/\psi$ events,
(5) cross contamination between the $e^+e^-$ and $\mu^+\mu^-$ modes of the signal events, and
(6) $e^+e^-\to(\gamma_{\rm ISR})\psi(3686)$ events produced at $\sqrt{s} = 3.773$~GeV,
where the notation ``$\gamma_{\rm ISR}$'' denotes that the $\psi(3686)$
is produced via radiation of a photon in the initial state.
Figure~\ref{mass_ghjpsi_mc} (b) shows different components of the selected $\gamma\gamma J/\psi$ events
misidentified from the Monte Carlo simulated background events for $e^+e^- \to (\gamma) e^+e^-$,
$e^+e^- \to (\gamma) \mu^+\mu^-$,
and
continuum light hadron production,
which are generated at $\sqrt{s}=3.773$~GeV.
The shape of the invariant-mass distribution for these background events
can be well described with a polynomial function.
Using MC simulation,
the contributions from decays of the $\psi(3770)$ other than the signal mode,
$e^+e^- \to (\gamma) \tau^+\tau^-$, ISR $J/\psi$ events,
and cross contamination between the $e^+e^-$ and $\mu^+\mu^-$ modes of the signal events
are found to be negligible.
In addition to the
backgrounds described above, the background events
from $e^+e^-\to(\gamma_{\rm ISR})\psi(3686)$ with $\psi(3686)\to\gamma\chi_{cJ}$ ($\chi_{cJ}\to\gamma J/\psi$,
$J/\psi\to\ell^+\ell^-$) decays can also satisfy the event selection criteria.
This kind of background, produced near $\sqrt s = 3.773$ GeV, has
the same event topology as $\psi(3770)\to\gamma\chi_{cJ}$ decays
and is indistinguishable from the signal events.
The number of background events from $\psi(3686)$ decays can be estimated using
\begin{linenomath*}
\begin{eqnarray}\label{Eq_Nbkg_psip}
N_{\psi(3686)\to\gamma\chi_{cJ}} &=& \sigma^{\rm obs}_{\psi(3686)\to\gamma\chi_{cJ}} \times \mathcal L \times \mathcal B_{\chi_{cJ}\to\gamma J/\psi}
\nonumber \\
&\times& \mathcal B_{J/\psi\to\ell^+\ell^-} \times \eta_{\psi(3686)\to\gamma\chi_{cJ}},
\end{eqnarray}
\end{linenomath*}
where
$\sigma^{\rm obs}_{\psi(3686)\to\gamma\chi_{cJ}}$ is the
observed cross section of $e^+e^-\to \gamma_{\rm ISR}\psi(3686)$
with $\psi(3686)\to\gamma\chi_{cJ}$ at $\sqrt{s}=3.773$ GeV,
$\mathcal L$ is the integrated luminosity of the data used in the analysis,
$\mathcal B_{\chi_{cJ}\to\gamma J/\psi}$
is the decay branching fraction
of $\chi_{cJ}\to\gamma J/\psi$, $\mathcal B_{J/\psi\to\ell^+\ell^-}$
is
the sum of branching fractions of $J/\psi\to e^+e^-$ and $J/\psi\to\mu^+\mu^-$ decays,
and
$\eta_{\psi(3686)\to\gamma\chi_{cJ}}$ represents the rate of misidentifying
the $\psi(3686)\to\gamma\chi_{cJ}$ events as $\psi(3770)\to\gamma\chi_{cJ}$ signal events.
The observed
cross section for $e^+e^-\to \gamma_{\rm ISR}\psi(3686)\to\gamma\chi_{cJ}$
at $\sqrt{s}$ is obtained with
\begin{linenomath*}
\begin{align}
& \sigma^{\rm obs}_{\psi(3686)\to\gamma\chi_{cJ}} \nonumber \\
& = \int \sigma^{\rm D}_{\psi(3686)\to \gamma\chi_{cJ}}(s^\prime) f(s^\prime) F(x,s) G(s,s^{\prime\prime}) ds^{\prime\prime} dx,
\label{Eq_XSEC_psip_obs}
\end{align}
\end{linenomath*}
where
$\sigma^{\rm D}_{\psi(3686)\to \gamma\chi_{cJ}}(s^\prime)$ is the dressed
cross section for $\psi(3686)\to \gamma\chi_{cJ}$ decay,
$s^\prime=s^{\prime\prime}(1-x)$ is the square of the actual center-of-mass energy of the $e^+e^-$ after radiating the photons,
$x$ is the fraction of the radiative energy to the beam energy,
$f(s^\prime)$ is a phase space factor,
$F(x,s)$ is the sampling function for the radiative energy fraction $x$
at $\sqrt s$~\cite{Structure_Function},
$G(s,s^{\prime\prime})$ is a Gaussian function describing the distribution of the $e^+e^-$ collision
energy with an energy spread $\sigma_E=1.37$ MeV at BEPCII.
$\sigma^{\rm D}_{\psi(3686)\to \gamma\chi_{cJ}}(s^\prime)$
is calculated with
\begin{linenomath*}
\begin{align}
& \sigma^{\rm D}_{\psi(3686)\to \gamma\chi_{cJ}}(s^{\prime}) \nonumber \\
& = \frac{12\pi\Gamma^{ee}_{\psi(3686)}\Gamma^{\rm tot}_{\psi(3686)} \mathcal B(\psi(3686)\to\gamma\chi_{cJ})}
{(s^\prime-M_{\psi(3686)}^2)^2 + (\Gamma^{\rm tot}_{\psi(3686)}M_{\psi(3686)})^2},
\label{Eq_XSEC_psip_B}
\end{align}
\end{linenomath*}
where
$\Gamma^{ee}_{\psi(3686)}$ and $\Gamma^{\rm tot}_{\psi(3686)}$ are, respectively,
the leptonic and total width of the $\psi(3686)$,
$M_{\psi(3686)}$ is the mass of the $\psi(3686)$,
and
$\mathcal B(\psi(3686)\to\gamma\chi_{cJ})$
denotes the decay branching fraction of $\psi(3686)\to\gamma\chi_{cJ}$ ($J=0,1,2$).
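Using the $\psi(3686)$ parameters quoted later in the text, the Breit--Wigner of Eq.~(\ref{Eq_XSEC_psip_B}) can be sketched numerically as follows. The sketch is written with the standard $(s^\prime-M^2)^2$ denominator, works in GeV units, and omits the conversion factor to nb, so the result is in arbitrary units.

```python
import math

# psi(3686) parameters quoted later in the text (converted to GeV)
M_PSI2S = 3.686109    # GeV
GAMMA_EE = 2.36e-6    # GeV (2.36 keV leptonic width)
GAMMA_TOT = 299e-6    # GeV (299 keV total width)

def sigma_dressed(s_prime, br_gchicj):
    """Dressed cross section for psi(3686) -> gamma chi_cJ, Eq. (3),
    in arbitrary units (nb conversion factor omitted)."""
    num = 12.0 * math.pi * GAMMA_EE * GAMMA_TOT * br_gchicj
    den = (s_prime - M_PSI2S**2)**2 + (GAMMA_TOT * M_PSI2S)**2
    return num / den

# The cross section peaks on resonance, at s' = M^2.
on_peak = sigma_dressed(M_PSI2S**2, 0.1)
off_peak = sigma_dressed((M_PSI2S + 0.01)**2, 0.1)
print(on_peak > off_peak)  # True
```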
The phase space factor is equal to~\cite{fcor}
\begin{linenomath*}
\begin{equation}\label{Eq_fphsp}
f(s^\prime) = ( {E_\gamma(s^\prime)}/{E_\gamma^0} )^3,
\end{equation}
\end{linenomath*}
where
$E_\gamma(s^\prime)$ and $E_\gamma^0$ are the energies of the photon
in the $\psi(3686)\to\gamma\chi_{cJ}$ decay at $e^+e^-$ energies of $\sqrt{s^\prime}$
and $M_{\psi(3686)}$, respectively.
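A numerical sketch of the phase space factor of Eq.~(\ref{Eq_fphsp}). The two-body expression $E_\gamma(s^\prime)=(s^\prime-M_{\chi_{cJ}}^2)/(2\sqrt{s^\prime})$ for the transition photon energy is an assumption of this sketch (standard kinematics, not spelled out in the text), and the $\chi_{c1}$ mass used below is an illustrative PDG-like value.

```python
M_PSI2S = 3.686109   # GeV, psi(3686) mass (from the text)
M_CHIC1 = 3.51067    # GeV, chi_c1 mass (illustrative value)

def e_gamma(sqrt_s, m_chi):
    """Photon energy in psi -> gamma chi_cJ at CM energy sqrt_s
    (assumed two-body kinematics)."""
    s = sqrt_s ** 2
    return (s - m_chi ** 2) / (2.0 * sqrt_s)

def f_phase_space(sqrt_s_prime, m_chi):
    """f(s') = (E_gamma(s') / E_gamma^0)^3, normalised at M_psi(3686)."""
    return (e_gamma(sqrt_s_prime, m_chi) / e_gamma(M_PSI2S, m_chi)) ** 3

print(f_phase_space(M_PSI2S, M_CHIC1))  # 1.0 by construction
```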
The rates $\eta_{\psi(3686)\to\gamma\chi_{cJ}}$ of misidentifying $\psi(3686)\to\gamma\chi_{cJ}$
as $\psi(3770)\to\gamma\chi_{cJ}$
are $4.16\times10^{-3}$, $6.88\times10^{-3}$ and $8.86\times10^{-3}$ for
$\chi_{c0}$, $\chi_{c1}$ and $\chi_{c2}$, respectively,
which are estimated with
Monte Carlo simulated events for $\psi(3686)\to\gamma\chi_{cJ}$
generated at $\sqrt{s}=3.773$ GeV.
With
the parameters of the $\psi(3686)$ ($M_{\psi(3686)}=3686.109^{+0.012}_{-0.014}$ MeV,
$\Gamma^{\rm tot}_{\psi(3686)}=299\pm8$ keV and $\Gamma^{ee}_{\psi(3686)}=2.36\pm0.04$ keV),
the luminosity of the data,
the decay branching fractions and the misidentification rates,
we obtain
the numbers of background events from
$\psi(3686)\to\gamma\chi_{cJ}\to\gamma\gamma J/\psi\to\gamma\gamma\ell^+\ell^-$ decays
to be
$5.3 \pm 0.3$ for $\chi_{c0}$,
$225.4 \pm 11.7$ for $\chi_{c1}$
and
$158.4 \pm 8.5$ for $\chi_{c2}$,
where the errors are mainly due to
the uncertainties of the $\psi(3686)$ resonance parameters,
the luminosity,
and the branching fractions of $\psi(3686)\to\gamma\chi_{cJ}$, $\chi_{cJ}\to\gamma J/\psi$ and $J/\psi\to\ell^+\ell^-$ decays.
\subsection{Signal events for $\psi(3770)\rightarrow$ $\gamma\chi_{cJ}$}
To extract the number of signal events,
we fit the invariant-mass spectrum
of $\gamma^{H} J/\psi$ shown in Fig.~\ref{mass_ghjpsi_data}
with a function describing the shape of the mass spectrum.
The function is constructed with the Monte Carlo simulated signal shape
as shown in Fig.~\ref{mass_ghjpsi_mc} (a) to describe the signal, a
fourth-order polynomial for the smooth background, and the
Monte Carlo simulated mass shape
for the $e^+e^-\to(\gamma_{\rm ISR})\psi(3686)$ process
with a yield fixed to the predicted size of the corresponding peaking background.
In the fit the expected number of $\psi(3770)\to\gamma\chi_{c0}$ is fixed at $60.1 \pm 8.6$ events,
which is estimated with the branching fraction
for $\psi(3770) \to \gamma \chi_{c0}$ decay~\cite{pdg2014}
and the total number of $\psi(3770)$
as well as the reconstruction efficiency.
The error in the estimated number of events is from the uncertainties of the branching fractions
for $\psi(3770) \to \gamma \chi_{c0}$,
$\chi_{c0}\to\gamma J/\psi$ and $J/\psi\to\ell^+\ell^-$~\cite{pdg2014},
the total number of $\psi(3770)$ and the reconstruction efficiency.
The fit returns $654.2\pm40.3$ and $34.7\pm29.4$ signal events
for $\psi(3770) \to \gamma \chi_{c1}$ and $\psi(3770) \to \gamma \chi_{c2}$ decays, respectively.
The red solid line in Fig.~\ref{mass_ghjpsi_data} shows the best fit.
To estimate the statistical significance of observing
$\psi(3770) \to \gamma \chi_{c2}$ signal events,
we perform a fit with the $\chi_{c2}$ signal amplitude fixed at zero.
Transforming the ratio of the fit likelihoods into the number of
standard deviations at which the null hypothesis can be excluded
gives a statistical signal significance of $1.2$ standard deviations.
\section{Result}
\subsection{Total number of $\psi(3770)$}
The total number of $\psi(3770)$ produced in the data sample is given by
\begin{linenomath*}
\begin{equation}
N_{\psi(3770)} = \sigma_{\psi(3770)}^{\rm obs}\times\mathcal L,
\end{equation}
\end{linenomath*}
where $\sigma_{\psi(3770)}^{\rm obs}$ is the total cross section for $\psi(3770)$ production
at 3.773~GeV in $e^+e^-$ annihilation, which includes tree-level and both
ISR and vacuum polarization contributions.
The BES-II Collaboration previously measured the cross section
$\sigma^{\rm obs}_{\psi(3770)}(\sqrt{s})|_{{\sqrt{s} = 3.773~{\rm GeV}}} = (7.15 \pm 0.27 \pm 0.27)$ nb~\cite{psipp_cs},
which was obtained by weighting two independent measurements of this cross section~\cite{bes2_3, psipp_cs_2}.
Using this cross section $\sigma^{\rm obs}_{\psi(3770)}(\sqrt{s})|_{{\sqrt{s} = 3.773~{\rm GeV}}}$ and
the luminosity
of the data~\cite{bes3_lum}, we obtain the total number of $\psi(3770)$ produced in the data sample
to be
\begin{linenomath*}
$$N_{\psi(3770)}=(20.86 \pm 1.13)\times 10^6,$$
\end{linenomath*}
where the error is due to the uncertainties of the total cross section for $\psi(3770)$ production
and the luminosity of the data.
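This number follows directly from the quoted cross section and integrated luminosity, as the following arithmetic check shows (1 nb $=10^{3}$ pb):

```python
sigma_obs_nb = 7.15       # nb, observed psi(3770) cross section at 3.773 GeV
luminosity_pb = 2916.94   # pb^-1, integrated luminosity of the data

n_psi3770 = sigma_obs_nb * 1000.0 * luminosity_pb  # 1 nb = 10^3 pb
print(f"{n_psi3770:.3e}")  # 2.086e+07, i.e. (20.86 +/- 1.13) x 10^6
```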
\subsection{Branching fraction}
The branching fractions for $\psi(3770) \to \gamma \chi_{c1}$ and $\psi(3770) \to \gamma \chi_{c2}$ decays
are determined with
\begin{linenomath*}
\begin{align}
& \mathcal B({\psi(3770)\to\gamma\chi_{c1,2}}) = \nonumber \\
& \phantom{=} \frac {N_{\psi(3770)\to\gamma\chi_{c1,2}} }
{N_{\psi(3770)} \mathcal B_{\chi_{c1,2}\to\gamma J/\psi}\mathcal B_{J/\psi \to \ell^+\ell^-} \epsilon_{\psi(3770) \to \gamma \chi_{c1,2}} },
\label{Eq_Bf}
\end{align}
\end{linenomath*}
where
$N_{{\psi(3770)\to\gamma\chi_{c1,2}}}$ is the observed number of signal events for
$\psi(3770) \to \gamma \chi_{c1,2}$ decays,
$\mathcal B_{\chi_{c1,2}\to\gamma J/\psi}$ is the branching fraction for $\chi_{c1,2}\to\gamma J/\psi$,
$\mathcal B_{J/\psi \to \ell^+\ell^-}$ is the branching fraction for $J/\psi \to \ell^+\ell^-$ decay,
and $\epsilon_{\psi(3770) \to \gamma \chi_{c1,2}}$ is the efficiency
for reconstructing this decay.
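The relation in Eq.~(\ref{Eq_Bf}) can be sketched as a one-line function. The branching-fraction inputs below are illustrative placeholders (not the analysis values), and the check only verifies that the relation inverts consistently:

```python
def branching_fraction(n_sig, n_psi, b_chi, b_jpsi_ll, eff):
    """B(psi(3770) -> gamma chi_cJ): observed yield divided by the product of
    the psi(3770) count, the intermediate branching fractions and the efficiency."""
    return n_sig / (n_psi * b_chi * b_jpsi_ll * eff)

# Illustrative placeholder inputs (NOT the analysis values):
n_psi = 20.86e6
b_chi_c1 = 0.34      # assumed B(chi_c1 -> gamma J/psi)
b_ll = 0.12          # assumed B(J/psi -> e+e-) + B(J/psi -> mu+mu-)
eff = 0.3125         # quoted efficiency for gamma chi_c1

# Round trip: a yield consistent with B = 2.48e-3 is recovered exactly.
b_true = 2.48e-3
n_sig = b_true * n_psi * b_chi_c1 * b_ll * eff
assert abs(branching_fraction(n_sig, n_psi, b_chi_c1, b_ll, eff) - b_true) < 1e-12
```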
The reconstruction efficiencies for observing
$\psi(3770)\to\gamma\chi_{c1}$ and $\psi(3770)\to\gamma\chi_{c2}$ decays
are determined with Monte Carlo simulated events for these decays.
With large Monte Carlo samples, the efficiencies are found to be
$\epsilon_{\psi(3770) \to \gamma \chi_{c1}}=(31.25\pm 0.10)\%$ and
$\epsilon_{\psi(3770) \to \gamma \chi_{c2}}=(28.77\pm 0.10)\%$,
where the errors are statistical.
Inserting the corresponding numbers into Eq.~(\ref{Eq_Bf})
yields the branching fractions
\begin{linenomath*}
\begin{equation}
\mathcal B(\psi(3770) \to \gamma \chi_{c1}) = (2.48\pm0.15\pm0.23)\times10^{-3},
\end{equation}
\end{linenomath*}
and
\begin{linenomath*}
\begin{equation}\label{bf_gXc2}
\mathcal B(\psi(3770) \to \gamma \chi_{c2}) = (0.25\pm 0.21\pm 0.18)\times10^{-3},
\end{equation}
\end{linenomath*}
where the first errors are statistical and the second systematic.
The systematic uncertainty in the measured branching fractions of $\psi(3770)\to\gamma\chi_{c1}$ and $\psi(3770)\to\gamma\chi_{c2}$
includes eight contributions:
(1) the uncertainty in the total number of $\psi(3770)$ ($5.4\%$),
which contains the uncertainty in the observed cross section for $\psi(3770)$
production at $\sqrt{s}=3.773$~GeV~\cite{psipp_cs} and
the uncertainty in the luminosity measurement~\cite{bes3_lum},
(2) the uncertainty in the particle identification ($0.1\%$)
determined by comparing the lepton identification efficiencies for data and
Monte Carlo events, which are measured using
the lepton samples selected from the
$\psi(3686)\to\pi^+\pi^-J/\psi$, $J/\psi\to\ell^+\ell^-$ process,
(3) the uncertainty in the extra $\cos\theta_{e^\pm}$ requirement ($0.1\%$)
estimated by comparing
the acceptances of this requirement for data and Monte Carlo events,
which are determined using the electron samples selected from the $\psi(3686)\to\pi^+\pi^-J/\psi$,
$J/\psi\to e^+e^-$ process,
(4) the uncertainty due to photon selection
($1.0\%$ per photon~\cite{photon}),
(5) the uncertainty associated with the kinematic fit ($2.1\%$)
determined
by comparing the $\chi^2$ distributions and the efficiencies of the $\chi^2<25$ requirement
for data and Monte Carlo simulation,
which are obtained using the $\psi(3686)\to\gamma\gamma\ell^+\ell^-$ events
selected from data taken at $\sqrt{s}=3.686$~GeV and
the corresponding Monte Carlo samples,
(6) the uncertainty in the reconstruction efficiency ($0.3\%$)
arising from the Monte Carlo statistics,
(7) the uncertainties in the branching fractions
of $\chi_{c1,2}\to\gamma J/\psi$ and $J/\psi\to\ell^+\ell^-$ decays
($3.6\%$ for $\gamma \chi_{c1}$, $3.7\%$ for $\gamma \chi_{c2}$~\cite{pdg2014}),
and
(8) the uncertainty associated with the fit to the mass spectrum
($6.1\%$ for $\gamma \chi_{c1}$, $73.2\%$ for $\gamma \chi_{c2}$)
determined by changing the fitting range,
changing the order of the polynomial,
varying the magnitude of the peaking background
from the radiative $\psi(3686)$ tail by $\pm1\sigma$
and
using an alternative signal function (Monte Carlo shape convoluted with a Gaussian function).
These systematic uncertainties are summarized in Table~\ref{Tab:sys_err}.
Adding these systematic uncertainties in quadrature yields total systematic uncertainties
of $9.4\%$ and $73.6\%$ for $\psi(3770) \to \gamma \chi_{c1}$ and
$\psi(3770) \to \gamma \chi_{c2}$ decays, respectively.
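The quadrature sums can be reproduced directly from the individual contributions; the entries below are copied from Table~\ref{Tab:sys_err}:

```python
import math

# Relative systematic uncertainties (%), in the order listed in the text.
common = [5.4, 0.1, 0.1, 2.0, 2.1, 0.3]   # N_psi, PID, cos(theta) cut, photons, kin. fit, efficiency
sys_c1 = common + [3.6, 6.1]              # branching fractions; fit to the mass spectrum
sys_c2 = common + [3.7, 73.2]

total = lambda terms: math.sqrt(sum(t * t for t in terms))
print(f"gamma chi_c1 total: {total(sys_c1):.1f}%")   # 9.4%
print(f"gamma chi_c2 total: {total(sys_c2):.1f}%")   # 73.6%
```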
\begin{table}[!hbp]
\caption{
Summary of the systematic uncertainties (\%) in the measurements of the
branching fractions for $\psi(3770) \to \gamma \chi_{c1}$ and $\gamma \chi_{c2}$.
}
\label{Tab:sys_err}
\begin{ruledtabular}
\begin{tabular}{lcc}
Source& $\gamma\chi_{c1}$ & $\gamma\chi_{c2}$ \\
\hline
Total number of $\psi(3770)$ & 5.4 & 5.4 \\
Particle identification & 0.1 & 0.1 \\
$\cos\theta_{e^\pm}$ cut & 0.1 & 0.1 \\
Photon selection & 2.0 & 2.0 \\
Kinematic fit & 2.1 & 2.1 \\
Efficiency & 0.3 & 0.3 \\
Branching fractions & 3.6 & 3.7 \\
Fit to the mass spectrum & 6.1 & 73.2 \\
\hline
Total & 9.4 & 73.6 \\
\end{tabular}
\end{ruledtabular}
\end{table}
To obtain an upper limit on $\mathcal B(\psi(3770) \to \gamma \chi_{c2})$,
we integrate a likelihood function from zero to the value of
$\mathcal B(\psi(3770) \to \gamma \chi_{c2})$ corresponding to $90\%$ of
the integral from zero to infinity.
The likelihood function is a Gaussian whose mean is the measured value of $\mathcal B$
and whose standard deviation combines the statistical and systematic errors in quadrature.
Using this method, an upper limit on the branching fraction of $\psi(3770)\to\gamma\chi_{c2}$ is set to
\begin{linenomath*}
\begin{equation}
\mathcal B(\psi(3770) \to \gamma \chi_{c2}) < 0.64\times10^{-3}
\end{equation}
\end{linenomath*}
at the $90\%$ confidence level (C.L.).
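This integration can be sketched with a truncated-Gaussian likelihood, combining the statistical and systematic errors of Eq.~(\ref{bf_gXc2}) in quadrature; the bisection below reproduces the quoted limit:

```python
import math

def norm_cdf(x, mu, sigma):
    """Cumulative distribution of a Gaussian, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Measured value (units of 10^-3); stat and syst errors added in quadrature.
mu = 0.25
sigma = math.hypot(0.21, 0.18)

# Fraction of the physical (x >= 0) part of the likelihood below x:
frac = lambda x: ((norm_cdf(x, mu, sigma) - norm_cdf(0.0, mu, sigma))
                  / (1.0 - norm_cdf(0.0, mu, sigma)))

# Bisect for the 90% point of the non-negative region.
lo, hi = 0.0, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if frac(mid) < 0.9:
        lo = mid
    else:
        hi = mid
print(f"90% C.L. upper limit: {hi:.2f} x 10^-3")   # 0.64 x 10^-3
```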
\subsection{Partial width}
Using the total width $\Gamma^{\rm tot}_{\psi(3770)}=(27.2 \pm 1.0)$ MeV~\cite{pdg2014},
we convert the measured branching fractions into partial widths.
This yields
\begin{linenomath*}
$$\Gamma(\psi(3770)\rightarrow\gamma\chi_{c1})=(67.5 \pm 4.1 \pm 6.7)~{\rm keV}$$
and the upper limit at the $90\%$ C.L.
$$\Gamma(\psi(3770)\rightarrow\gamma\chi_{c2})<17.4~{\rm keV}.$$
\end{linenomath*}
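The conversion is simple arithmetic; the sketch below reproduces the quoted numbers under the assumption (consistent with the quoted values) that the statistical error comes from the branching fraction alone, while the systematic error combines the branching-fraction systematic with the total-width uncertainty in quadrature:

```python
import math

gamma_tot, gamma_tot_err = 27.2e3, 1.0e3    # keV, total width from PDG 2014

b_c1, b_c1_stat, b_c1_sys = 2.48e-3, 0.15e-3, 0.23e-3
w = b_c1 * gamma_tot                        # partial width, ~67.5 keV
w_stat = w * (b_c1_stat / b_c1)             # ~4.1 keV, from the branching fraction
w_sys = w * math.hypot(b_c1_sys / b_c1, gamma_tot_err / gamma_tot)  # ~6.7 keV

w_c2_limit = 0.64e-3 * gamma_tot            # 90% C.L. limit, ~17.4 keV
```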
The measured partial widths for these two transitions are
compared to several theoretical predictions in Table~\ref{tab:Cmp_G}.
\begin{table}
\caption{Comparison of measured partial widths with theoretical predictions,
where $\phi$ is the mixing angle of the $S$-$D$ mixing model.}
\label{tab:Cmp_G}
\begin{ruledtabular}
\begin{tabular}{lcc}
\multirow{2}{*}{Experiment/theory} & \multicolumn{2}{c}{$\Gamma(\psi(3770) \to \gamma\chi_{cJ})$ (keV)} \\
& $J=1$ & $J=2$ \\
\hline
This work & $67.5 \pm 4.1 \pm 6.7$ & $<17.4$ \\
\hline
Ding-Qin-Chao~\cite{prd44_3562} \\
~~ nonrelativistic & 95 & 3.6 \\
~~ relativistic & 72 & 3.0 \\
\hline
Rosner $S$-$D$ mixing~\cite{prd64_094002}\\
~~ $\phi = 12^{\circ}$~\cite{prd64_094002} & $73\pm 9$ & $24\pm 4$ \\
~~ $\phi =(10.6\pm1.3)^{\circ}$~\cite{BESIII_physics} & $79\pm 6$ & $21\pm 3$ \\
~~ $\phi = 0^{\circ}$ (pure $1^3D_1$ state)~\cite{BESIII_physics} & 133 & 4.8 \\
\hline
Eichten-Lane-Quigg~\cite{prd69_094019} \\
~~ nonrelativistic & 183 & 3.2 \\
~~ with coupled-channel corr. & 59 & 3.9 \\
\hline
Barnes-Godfrey-Swanson~\cite{prd72_054026} \\
~~ nonrelativistic & 125 & 4.9 \\
~~ relativistic & 77 & 3.3 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{Partial cross section}
Using the cross section
$\sigma_{\psi(3770)}=(9.93\pm0.77)$~nb
for $\psi(3770)$ production at $\sqrt{s}=3.773$~GeV,
which is calculated using $\psi(3770)$ resonance parameters~\cite{pdg2014},
together with the measured branching fractions for these two decays,
we obtain the partial cross section for the $\psi(3770)\to\gamma\chi_{c1}$ transition to be
\begin{linenomath*}
$$\sigma(\psi(3770) \to \gamma \chi_{c1}) =(24.6\pm 1.5 \pm3.0)~{\rm pb}$$
and the upper limit at the $90\%$ C.L. on the partial cross section for the $\psi(3770)\to\gamma\chi_{c2}$ transition to be
$$\sigma(\psi(3770) \to \gamma \chi_{c2})<6.4~{\rm pb}.$$
\end{linenomath*}
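The same kind of arithmetic yields the partial cross sections; as above, the error split assumed below (statistical from the branching fraction only, systematic from the branching-fraction systematic and the production cross section in quadrature) reproduces the quoted values:

```python
import math

sigma_psi, sigma_psi_err = 9.93e3, 0.77e3   # pb, psi(3770) production cross section
b_c1, b_stat, b_sys = 2.48e-3, 0.15e-3, 0.23e-3

xs = sigma_psi * b_c1                       # ~24.6 pb
xs_stat = xs * (b_stat / b_c1)              # ~1.5 pb
xs_sys = xs * math.hypot(b_sys / b_c1, sigma_psi_err / sigma_psi)  # ~3.0 pb
xs_c2_limit = sigma_psi * 0.64e-3           # 90% C.L. limit, ~6.4 pb
```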
\section{Summary}
By analyzing 2.92~fb$^{-1}$ of data collected at $\sqrt{s}=3.773$~GeV with
the BESIII detector operated at the BEPCII, we measure
$\mathcal B(\psi(3770) \to \gamma \chi_{c1})=(2.48 \pm 0.15 \pm 0.23) \times 10^{-3}$
and set a $90\%$ C.L. upper limit $\mathcal B(\psi(3770) \to \gamma \chi_{c2}) < 0.64 \times 10^{-3}$.
This measured branching fraction for $\psi(3770) \to \gamma \chi_{c1}$
is consistent within error with
$\mathcal B(\psi(3770) \to \gamma \chi_{c1})=(2.8 \pm 0.5\pm 0.4) \times 10^{-3}$
measured by CLEO-c~\cite{prl96_182002},
but the precision of this measurement is improved by more than a factor of 2.
\begin{acknowledgments}
The BESIII Collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key Basic Research Program of China under Contracts No. 2009CB825204, and No. 2015CB856700; the National Natural Science Foundation of China (NSFC) under Contracts No. 10935007, No. 11125525, No. 11235011, No. 11322544, No. 11335008, and No. 11425524; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts No. 11179007, No. U1232201, and No. U1332201; CAS under Contracts No. KJCX2-YW-N29, and No. KJCX2-YW-N45; the 100 Talents Program of CAS; INPAC and the Shanghai Key Laboratory for Particle Physics and Cosmology; the German Research Foundation DFG under Contract No. Collaborative Research Center CRC-1044; the Istituto Nazionale di Fisica Nucleare, Italy; the Ministry of Development of Turkey under Contract No. DPT2006K-120470; the Russian Foundation for Basic Research under Contract No. 14-07-91152; the U. S. Department of Energy under Contracts No. DE-FG02-04ER41291, No. DE-FG02-05ER41374, No. DE-FG02-94ER40823, and No. DESC0010118; the U.S. National Science Foundation; the University of Groningen (RuG) and the Helmholtzzentrum fuer Schwerionenforschung GmbH (GSI), Darmstadt; and the WCU Program of the National Research Foundation of Korea under Contract No. R32-2008-000-10155-0.
\end{acknowledgments}
Understanding the interaction of matter with strong and ultrastrong laser pulses is a fundamental and rapidly developing field of research. With the remarkable advances
in laser technology in the past few decades, experimental investigation
of the interaction became feasible \cite{fs_lasers_RevModPhys_2000,as_lasers_RevModPhys_2009,strong_fields_in_periodic_systems_RevModPhys_2018} and provided insight into the strongly nonlinear
domain of optical processes, associated with various unique phenomena, such
as high harmonic generation \cite{HHG_Kulander_PRL_1992,HHG_theory_Corkum_PRA_1994}, above threshold
dissociation and ionization \cite{ATI_Kulander_PRL_1993}, bond softening and
hardening effects \cite{Bandrauk1,Bandrauk_JPC_1987,Bandrauk2,Bandrauk3,Bucksbaum1,Bucksbaum2,Bucksbaum3},
and light-induced conical intersections (LICIs) \cite{LICI1,LICI2,LICI3,LICI4,LICI5,Gabi6,Bandrauk_LICI,Nimrod2,LICI7}\footnote{Conical intersections (CIs) are geometries where two electronic states of a molecule share the same energy, providing a very efficient channel for nonradiative relaxation processes to the ground state on an ultrafast time scale. For CIs to be formed in a molecular system one needs two independent degrees of freedom, which constitute the space in which the CIs can exist. Therefore, CIs between different electronic states can only occur for molecules with at least three atoms. For a diatomic molecule, which has only one vibrational degree of freedom, it is not possible for two electronic states of the same symmetry to become degenerate, as required by the well-known noncrossing rule. However, this statement is true only in free space.}.
LICIs may form even in diatomic molecules, when the laser light
not only rotates the molecule but can also couple the vibrational with the emerging rotational degree of freedom. Theoretical and experimental studies have demonstrated that the light-induced nonadiabatic effects have significant impact on different observable dynamical properties, such as molecular alignment, dissociation probability, or angular distribution of photofragments \cite{LICI1,LICI2,LICI3,LICI4,LICI5,Gabi6,Bandrauk_LICI}.
Recently, signatures of light-induced nonadiabatic phenomena
have been successfully identified in the classical field-dressed static rovibronic spectrum of diatomics \cite{LICI_in_spectrum_Szidarovszky_JPCL_2018}.
As an alternative to interactions of atoms or molecules with intense laser fields, strong
light-matter coupling can also be achieved, both for atoms and molecules, by their confinement
in microscale or nanoscale optical cavities \cite{Edina, Ruggenthaler2018, Domokos_PRA_2015}. Such systems are usually described in terms of field-dressed or polariton states, which are the eigenstates of the full ``atom/molecule + radiation field'' system \cite{cavity_Ebbesen_AccChemRes_2016,cavity_Flick_PNAS_2017,cavity_Feist_ACSphotonics_2018, review_ultrastrong_coupling_Kockum_2018}.
With decreasing cavity size the quantized nature of the radiation field eventually becomes important
and strong photon-matter coupling as well as a significant modification
of the atomic and molecular properties may occur even if the photon
number is (close to) zero \cite{cavity_Ebbesen_AccChemRes_2016,cavity_Flick_PNAS_2017,cavity_Feist_ACSphotonics_2018,cavity_Herrera_PRL_2016,cavity_Galego_NatCommun_2016,cavity_Galego_PRL_2017,cavity_Luk_JCTC_2017}.
For example, quantum modeling efforts have shown that strong resonant coupling of a cavity radiation
field with an electronic transition can decouple the electronic and
nuclear degrees of freedom in molecular ensembles \cite{cavity_Herrera_PRL_2016},
strong coupling of molecules to a confined light mode could suppress
photoisomerization \cite{cavity_Galego_NatCommun_2016}, and that collective motion of
molecules could be triggered by a single photon of a cavity \cite{cavity_Galego_PRL_2017,cavity_Luk_JCTC_2017,collective_CI_cavity_Vendrell_arxiv_2018}.
In addition to exploring strong light-matter coupling, processes in
nanoscale cavities could also be used to study how atoms/molecules interact
with non-classical states of the quantized light field \cite{cavity_Triana_JPCA_2018,cavity_MCTDH_Vendrell_CP_2018}.
Previous theoretical models mostly treated atoms or molecules in reduced
dimensions or via some simplified model \cite{cavity_Herrera_PRL_2016,cavity_Galego_NatCommun_2016,cavity_Triana_JPCA_2018,cavity_MCTDH_Vendrell_CP_2018,cavity_Galego_PRX_2015,cavity_Kowalewski_JPCL_2016, cavity_chemical_reactivity_Feist_arxiv_2018,spectra_of_vibDressedpolaritons_Zeb_ACSphotonics_2018,dark_vibronic_polaritons_Herrera_PRL_2017,theory_of_organic_cavities_Herrera_ACSphotonics_2018},
and most often utilized the concept of polariton states formed by
coupling the electronic and photonic degrees of freedom. Nuclear motion
is then thought to proceed on the polariton surfaces. Alternatively,
decoupling the electronic motion from the nuclear and photonic degrees
of freedom, known as the cavity Born--Oppenheimer approximation, has
sometimes been pursued \cite{cavity_Flick_PNAS_2017,cavityBO_Flick_JCTC_2017}.
On the experimental side, the coupling of molecules to cavity radiation
field was shown to modify chemical landscapes and reaction dynamics
\cite{cavity_exp_Hutchison_AngChem_2012}, as well as the absorption spectra \cite{cavity_Schwartz_CPC_2013,ultrastrong_coupling_spectrosc_Jino_FarDisc_2015}
of molecules. Furthermore, intermolecular non-radiative energy transfer
was found to be enhanced by the formation of polariton states \cite{cavity_Zhong_AngChem_2016},
and the hybridization of molecular vibrational states through strong light-matter
coupling in a microcavity \cite{cavity_Muallem_JPCL_2016} was also observed.
For atoms or molecules interacting with a cavity mode (near) resonant to an electronic transition, one usually distinguishes three regimes of field-matter coupling strengths \cite{cavity_Flick_PNAS_2017,cavity_Galego_PRX_2015}, as depicted in Fig. \ref{Fig:couplingRegions}: weak, strong, and ultrastrong. In the weak-coupling regime, see the left panel of Fig. \ref{Fig:couplingRegions}, the diabatic picture of photon-dressed potential energy curves (PECs) holds and the cavity mode only couples the excited electronic state with the ground electronic state dressed by a photon. In the strong-coupling regime, shown in the middle panel of Fig. \ref{Fig:couplingRegions}, polariton states are formed and the adiabatic picture becomes appropriate for describing the excited state manifold, while the ground state remains essentially unchanged. Finally, in the ultrastrong-coupling regime, see the right panel of Fig. \ref{Fig:couplingRegions}, nonresonant couplings become strong enough to significantly modify the electronic ground state, as well.
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{figures/coupling-regions.pdf}
\caption{\label{Fig:couplingRegions}The three regimes of coupling strength, and the related field-dressed PECs, of a molecule interacting with a resonant cavity mode. The diabatic surfaces $V_1$ and $V_2$ are indicated with continuous lines, while the polariton surfaces $W_0, W_1$ and $W_2$ are indicated with dashed lines. $\hslash \omega_c$ is the cavity photon energy.}
\end{figure}
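At a fixed geometry, the resonant two-state block reduces to a $2\times2$ matrix, and the Rabi gap between the polariton branches can be checked directly (a toy sketch in arbitrary energy units):

```python
import numpy as np

# Toy two-level molecule + single cavity mode at a fixed geometry:
# basis {|e1, n=1>, |e2, n=0>}, diabatic energies V1 + hbar*omega_c and V2.
def polaritons(v1_plus_photon, v2, g):
    """Eigenvalues (ascending) of the resonant 2x2 light-matter block."""
    h = np.array([[v1_plus_photon, g], [g, v2]])
    return np.linalg.eigvalsh(h)

# On resonance (V1 + hbar*omega_c == V2) the splitting is the Rabi gap 2|g|.
w_minus, w_plus = polaritons(1.0, 1.0, g=0.05)
assert abs((w_plus - w_minus) - 0.10) < 1e-12
```

Off resonance the gap grows to $\sqrt{\Delta V^2 + 4g^2}$, recovering the diabatic limit when $|\Delta V| \gg |g|$.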
In reality, the dashed polariton surfaces of Fig. \ref{Fig:couplingRegions} are strictly valid only if the molecular axis is parallel to the preferred polarization direction of the cavity field.
In contrast, when the molecular axis is perpendicular to the polarization direction of the cavity, the light-matter coupling vanishes and the diabatic picture (continuous potentials in Fig. \ref{Fig:couplingRegions}) is the relevant one.
In fact, the orientation of a rotating molecule can change continuously between these two extreme positions, and the diabatic and cavity-induced polariton surfaces are continuously transformed into each other.
Therefore, due to the rotation of the molecule, the upper and lower adiabatic surfaces are not completely separated but a conical intersection (CI) emerges between them, see Fig. \ref{Fig:LICI}, at which point the nonadiabatic couplings become infinitely strong.
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{figures/cavity-3d-pot.pdf}
\caption{\label{Fig:LICI}Two dimensional polariton surfaces ($W_1$ and $W_2$) of the Na$_2$ dimer in the cavity. The one photon coupling of the cavity ($\varepsilon_c$) corresponds to a classical field intensity of 64 GWcm$^{-2}$. The cavity mode wavelength is $\lambda_c=653$ nm. The red arrow denotes the position of the light-induced conical intersection.}
\end{figure}
In the vicinity of this CI, which is created by the quantum light and is never present in field-free diatomics, the Born$-$Oppenheimer picture \cite{27BoOp,54BoHu} breaks down. The nuclear dynamics proceed on the coupled polariton surfaces and motions along the vibrational and rotational coordinates become intricately coupled. It must be stressed that even in the case of diatomics, considering rotations completely changes the paradigm and physical picture with respect to the description when only the vibrational, electronic, and photonic degrees of freedom are taken into account.
Moreover, in contrast to field-free polyatomic molecules, where conical intersections are dictated by nature (they are either present or not), light-induced conical intersections in the cavity are always present between the polariton surfaces. Even for a diatomic molecule, the appropriate description needs to account for rotations, which are coupled nonadiabatically to the vibrational, electronic, and photonic modes of the system.
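The geometric origin of the LICI (degeneracy requires both the resonance condition and $\cos\theta = 0$ simultaneously) can be illustrated with schematic potentials; the harmonic curves and parameter values below are assumptions chosen only to produce a crossing, not the Na$_2$ surfaces:

```python
import numpy as np

# Schematic diabatic curves (arbitrary units), chosen so that
# V1(R) + hbar*omega crosses V2(R) at R = 2.0.
v1 = lambda r: 0.5 * (r - 1.0) ** 2
v2 = lambda r: 1.0 + 0.5 * (r - 2.5) ** 2
hw = v2(2.0) - v1(2.0)                 # photon energy tuned to the crossing
g0 = 0.05                              # coupling amplitude at theta = 0

r = np.linspace(1.0, 3.0, 401)
theta = np.linspace(0.0, np.pi / 2, 181)
rr, tt = np.meshgrid(r, theta, indexing="ij")

dv = v2(rr) - (v1(rr) + hw)            # diabatic energy difference
gap = np.sqrt(dv ** 2 + 4.0 * (g0 * np.cos(tt)) ** 2)  # adiabatic (polariton) gap

# The gap closes only where BOTH conditions hold: dv = 0 and cos(theta) = 0.
i, j = np.unravel_index(np.argmin(gap), gap.shape)
print(r[i], theta[j], gap[i, j])       # crossing R, theta = pi/2, gap ~ 0
```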
The purpose of the present study
is to investigate the field-dressed rovibronic spectrum of diatomics
in the framework of cavity quantum electrodynamics (QED). We complement previous theoretical approaches by accounting for all molecular degrees of freedom, \textit{i.e.}, we treat rotational, vibrational, electronic, and photonic degrees of freedom on an equal footing. Furthermore, we combine for the first time the concept of LICIs with the quantized radiation field.
Our goals are twofold.
First, we investigate the field-dressed rovibronic spectrum of our test system, the homonuclear Na$_2$ molecule, in order to understand the effects of the cavity on the spectrum and to identify the direct signatures of a LICI created by the quantized cavity radiation field.
Second, the coupling strength and cavity-mode wavelength dependence of the spectrum from weak to ultrastrong coupling regimes is investigated. We identify the formation of polariton states in the strong coupling regime as well as the impact of nonresonant couplings on the spectrum in the ultrastrong coupling regime. Surprisingly, the effects of nonresonant couplings can be seen at coupling strengths much smaller than those necessary for significantly distorting the ground-state PEC, as long as molecular rotations are properly accounted for. An important conclusion of this study is that for the simulation of freely rotating molecules confined in a cavity the appropriate treatment of rotations as well as vibrations is mandatory.
\section{\label{Theory}Theoretical approach}
For simulating the weak-field absorption spectrum of molecules confined in small optical cavities, we first
determine the field-dressed states, \textit{i.e.}, the eigenstates
of the full ``molecule + radiation field'' system, and then we compute the dipole transition amplitudes
between the field-dressed states with respect to a probe pulse. We
assume the probe pulse to be weak; therefore, transitions induced
by it should be dominated by one-photon processes. This implies that
the standard approach \cite{BunkerJensen} of using first-order time-dependent
perturbation theory to compute the transition amplitudes should be
adequate.
\subsection{The field-dressed states}
Within the framework of QED and the electric dipole representation, the Hamiltonian of a molecule interacting with a single cavity mode can be written as \cite{Cohen-Tannoudji}
\begin{equation}
\hat{H}_{{\rm tot}}=\hat{H}_{{\rm mol}}+\hat{H}_{{\rm rad}}+\hat{H}_{{\rm int}},\label{eq:CavityHamiltonian_general}
\end{equation}
where $\hat{H}_{{\rm mol}}$ is the field-free molecular Hamiltonian,
$\hat{H}_{{\rm rad}}$ is the radiation-field Hamiltonian, and $\hat{H}_{{\rm int}}$
is the interaction term between the molecular dipole moment and the
electric field. For a single radiation mode and an appropriate choice
of the origin \cite{Cohen-Tannoudji},
\begin{equation}
\hat{H}_{{\rm rad}}=\hslash\omega _c\hat{a}^{\dagger}\hat{a}\label{eq:Hrad}
\end{equation}
and
\begin{equation}
\hat{H}_{{\rm int}}=-\sqrt{\frac{\hslash\omega _c}{2\epsilon_{0}V}}\mathbf{\hat{d}\hat{e}}\left(\hat{a}^{\dagger}+\hat{a}\right)
=-\frac{\varepsilon_c}{\sqrt{2}}\mathbf{\hat{d}\hat{e}}\left(\hat{a}^{\dagger}+\hat{a}\right),\label{eq:Hint}
\end{equation}
where $\varepsilon_c = \sqrt{\hslash\omega _c/(\epsilon_{0}V)}$ is the cavity one-photon field, $\hat{a}^{\dagger}$ and $\hat{a}$ are photon creation
and annihilation operators, respectively, $\omega_c$ is the frequency
of the cavity mode, $\hslash$ is Planck's constant divided by $2\pi$,
$\epsilon_{0}$ is the electric constant, $V$ is the volume of the
electromagnetic mode, $\mathbf{\hat{d}}$ is the molecular dipole
moment, and $\mathbf{\hat{e}}$ is the polarization vector of the
cavity mode.
In the case of diatomic molecules, representing the Hamiltonian of
Eq. (\ref{eq:CavityHamiltonian_general}) in a direct product basis
composed of two field-free molecular electronic states and the Fock
states of the radiation field gives
\begin{equation}
\hat{H}=\begin{bmatrix}\hat{H}_{{\rm m}} & \hat{A}(2) & 0 & \ldots\\
\hat{A}^{\dagger}(2) & \hat{H}_{{\rm m}}+\hslash\omega_c & \hat{A}(3) & \ldots\\
0 & \hat{A}^{\dagger}(3) & \hat{H}_{{\rm m}}+2\hslash\omega_c & \ldots\\
\vdots & \vdots & \vdots & \ddots
\end{bmatrix},\label{eq:CavityHamiltonian}
\end{equation}
where
\begin{equation}
\hat{H}_{{\rm m}}=\begin{bmatrix}\hat{T} & 0\\
0 & \hat{T}
\end{bmatrix}+\begin{bmatrix}V_{1}(R) & 0\\
0 & V_{2}(R)
\end{bmatrix},\label{eq:MolecularHamiltonian}
\end{equation}
and
\begin{equation}
\hat{A}(N)=\begin{bmatrix}g_{11}(R,\theta)\sqrt{N} & g_{12}(R,\theta)\sqrt{N}\\
g_{21}(R,\theta)\sqrt{N} & g_{22}(R,\theta)\sqrt{N}
\end{bmatrix},\label{eq:CavityAmx}
\end{equation}
with
\begin{equation}
g_{ij}(R,\theta)=-\sqrt{\frac{\hslash\omega_c}{2\epsilon_{0}V}}d_{ij}(R){\rm cos}(\theta),\label{eq:quantized_A}
\end{equation}
where $R$ is the internuclear distance, $V_{i}(R)$ is the $i$th
PEC, $\hat{T}$ is the nuclear kinetic energy
operator, $d_{ij}(R)$ is the transition dipole moment matrix element
between the $i$th and $j$th electronic states, and $\theta$ is
the angle between the electric field polarization vector and the transition
dipole vector, assumed to be parallel to the molecular axis.
In Eq. (\ref{eq:CavityHamiltonian}), the first, second, third, etc.
rows(columns) correspond to zero, one, two, etc. photon number in
the bra(ket) vectors of the cavity mode, respectively.
Expanding Eq. (\ref{eq:CavityHamiltonian}) using Eqs. (\ref{eq:MolecularHamiltonian})
and (\ref{eq:CavityAmx}), and assuming a homonuclear diatomic molecule, which has no permanent dipole moment, gives
\begin{equation}
\hat{H}=
\begin{bmatrix}\hat{T}+V_{1}(R) & 0 & 0 & g_{12}(R,\theta)\sqrt{2} & 0 & \cdots \\
0 & \hat{T}+V_{2}(R) & g_{21}(R,\theta)\sqrt{2} & 0 & 0 & \cdots \\
0 & g_{12}(R,\theta)\sqrt{2} & \hat{T}+V_{1}(R)+\hslash\omega_c & 0 & 0 & \cdots \\
g_{21}(R,\theta)\sqrt{2} & 0 & 0 & \hat{T}+V_{2}(R)+\hslash\omega_c & g_{21}(R,\theta)\sqrt{3} & \cdots \\
0 & 0 & 0 & g_{12}(R,\theta)\sqrt{3} & \hat{T}+V_{1}(R)+2\hslash\omega_c & \cdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{bmatrix},\label{eq:CavityHamiltonian_detailed}
\end{equation}
which is the working Hamiltonian used in this study.
The $\vert\Psi_{i}^{{\rm FD}}\rangle$ field-dressed states, \textit{i.e.},
the eigenstates of the Hamiltonian of Eq. (\ref{eq:CavityHamiltonian_general}),
\begin{equation}
\hat{H}_{{\rm tot}}\vert\Psi_{i}^{{\rm FD}}\rangle=E_{i}^{{\rm FD}}\vert\Psi_{i}^{{\rm FD}}\rangle
\end{equation}
are obtained by diagonalizing the Hamiltonian of Eq. (\ref{eq:CavityHamiltonian_detailed})
in the basis of field-free rovibrational states. Then, the field-dressed states can be expressed as the linear
combination of products of field-free molecular rovibronic states
and Fock states of the dressing field, $i.e.$,
\begin{equation}
\vert\Psi_{i}^{{\rm FD}}\rangle=\sum_{J,v,\alpha,N}C_{i,\alpha vJN}\vert\alpha vJ\rangle\vert N\rangle=\sum_{J,v,N}C_{i,1vJN}\vert1vJ\rangle\vert N\rangle+\sum_{J,v,N}C_{i,2vJN}\vert2vJ\rangle\vert N\rangle,\label{eq:FieldDressedStates_small_photon_number}
\end{equation}
where $\vert jvJ\rangle$ is a field-free rovibronic state, in which
the molecule is in the $j$th electronic, $v$th vibrational, and
$J$th rotational state, $\vert N\rangle$ is a Fock state of the
dressing field with photon number $N$, and $C_{i,jvJN}$ are
expansion coefficients obtained by diagonalizing the Hamiltonian of
Eq. (\ref{eq:CavityHamiltonian_detailed}) in the basis of the field-free
rovibrational states.
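A minimal numerical sketch of this diagonalization step is given below: a generic quantum-Rabi-type construction with two electronic levels at a single fixed geometry, one cavity mode truncated at two photons, and the standard harmonic-oscillator ladder matrix elements. All parameter values are illustrative, not the Na$_2$ ones:

```python
import numpy as np

# Two electronic levels (energies e1, e2, transition dipole d), one cavity
# mode truncated at n_max photons; illustrative parameters.
e1, e2, d = 0.0, 1.0, 0.3
hw = 1.0                    # cavity photon energy, resonant with e2 - e1
g = 0.02                    # overall coupling prefactor (illustrative)
n_max = 2

n_ph = n_max + 1
a = np.diag(np.sqrt(np.arange(1, n_ph)), k=1)   # photon annihilation operator
h_mol = np.diag([e1, e2])
dip = np.array([[0.0, d], [d, 0.0]])            # no permanent dipole (off-diagonal only)

H = (np.kron(h_mol, np.eye(n_ph))               # molecular part
     + hw * np.kron(np.eye(2), a.T @ a)         # radiation part
     + g * np.kron(dip, a + a.T))               # dipole coupling, Eq. (3)

evals = np.linalg.eigvalsh(H)
# On resonance, |e1,1> and |e2,0> split into polaritons separated by ~2*g*d.
```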
\subsection{Transitions between field-dressed states}
Let us now compute the absorption spectrum with respect to a weak probe
pulse, whose photon number we denote by $M$. Using
first-order time-dependent perturbation theory, the transition amplitude
between two field-dressed states, induced by the weak probe pulse,
can be expressed as \cite{Jaynes_Cummings_1963,Cohen-Tannoudji}
\begin{equation}
\langle\Psi_{i}^{{\rm FD}}\vert\langle M\vert\mathbf{\hat{d}\hat{E}}\vert M'\rangle\vert\Psi_{j}^{{\rm FD}}\rangle=\langle\Psi_{i}^{{\rm FD}}\vert\hat{d}{\rm cos}(\theta)\vert\Psi_{j}^{{\rm FD}}\rangle\langle M\vert\hat{E}\vert M'\rangle.\label{eq:transition_amplitude_general}
\end{equation}
In Eq. (\ref{eq:transition_amplitude_general}), the electric field
operator $\hat{E}$ stands for the weak probe pulse, and we
assume that the probe pulse has a polarization axis identical to
that of the cavity mode. Since $\hat{E}$ is proportional to the sum
of a creation and an annihilation operator acting on $\vert M'\rangle$,
Eq. (\ref{eq:transition_amplitude_general}) leads to the well-known
result that the transition amplitude is non-zero only if $M=M'\pm1$,
$i.e.$, Eq. (\ref{eq:transition_amplitude_general}) accounts for
single-photon absorption or stimulated emission.
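This selection rule follows from the matrix elements of $\hat{a}+\hat{a}^{\dagger}$ in the Fock basis, which can be checked directly:

```python
import numpy as np

n = 5
a = np.diag(np.sqrt(np.arange(1, n)), k=1)   # annihilation operator on 5 Fock states
field = a + a.T                               # E ~ (a + a^dagger)

# <M| (a + a^dagger) |M'> vanishes unless M = M' +- 1:
nonzero = np.argwhere(np.abs(field) > 1e-12)
assert all(abs(m - mp) == 1 for m, mp in nonzero)
```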
The matrix element of the operator $\hat{d}{\rm cos}(\theta)$ between
two field-dressed states of Eq. (\ref{eq:FieldDressedStates_small_photon_number})
gives
\begin{equation}
\begin{aligned} & \langle\Psi_{i}^{{\rm FD}}\vert\hat{d}{\rm cos}(\theta)\vert\Psi_{j}^{{\rm FD}}\rangle=\\
& =\left(\sum_{J,v,\alpha,N}C_{i,\alpha vJN}^{*}\langle\alpha vJ\vert\langle N\vert\right)\hat{d}{\rm cos}(\theta)\left(\sum_{J',v',\alpha',N'}C_{j,\alpha'v'J'N'}\vert\alpha'v'J'\rangle\vert N'\rangle\right)=\\
& =\sum_{J,v,\alpha,N,J',v',\alpha',N'}C_{i,\alpha vJN}^{*}C_{j,\alpha'v'J'N'}\langle\alpha vJ\vert\hat{d}{\rm cos}(\theta)\vert\alpha'v'J'\rangle\delta_{N,N'}=\\
& =\sum_{J,v,J',v',N}C_{i,1vJN}^{*}C_{j,2v'J'N}\langle1vJ\vert\hat{d}{\rm cos}(\theta)\vert2v'J'\rangle+\sum_{J,v,J',v',N}C_{i,2vJN}^{*}C_{j,1v'J'N}\langle2vJ\vert\hat{d}{\rm cos}(\theta)\vert1v'J'\rangle.
\end{aligned}
\label{eq:transition_amplitude_between_cavity_FD_states}
\end{equation}
In the last line of Eq. (\ref{eq:transition_amplitude_between_cavity_FD_states}),
the first(second) term represents transitions in which the first(second) electronic
state contributes from the $i$th field-dressed state and the second(first)
electronic state contributes from the $j$th field-dressed state.
Assuming that the \textit{i}th state is the initial state, the first term
in the last line of Eq. (\ref{eq:transition_amplitude_between_cavity_FD_states})
leads to the usual field-free absorption spectrum in the limit of
the light-matter coupling going to zero. In all spectra shown
below, we plot the absolute square of the transition amplitudes, as
computed by Eq. (\ref{eq:transition_amplitude_between_cavity_FD_states}),
or their convolution with a Gaussian function.
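The convolution step can be sketched as follows; the stick positions and intensities below are illustrative placeholders:

```python
import numpy as np

def convolve_sticks(positions, intensities, grid, fwhm):
    """Broaden a stick spectrum (squared transition amplitudes) with
    area-normalized Gaussians of the given full width at half maximum."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    out = np.zeros_like(grid)
    for x0, w in zip(positions, intensities):
        out += w * np.exp(-0.5 * ((grid - x0) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return out

# Illustrative sticks (arbitrary positions and intensities):
grid = np.linspace(0.0, 10.0, 2001)
spec = convolve_sticks([3.0, 7.0], [1.0, 0.5], grid, fwhm=0.4)

# The normalized line shape conserves the total integrated intensity.
assert abs(spec.sum() * (grid[1] - grid[0]) - 1.5) < 1e-6
```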
\section{Computational details}
We test the theoretical framework developed in Eqs. (\ref{eq:CavityHamiltonian_general}-\ref{eq:transition_amplitude_between_cavity_FD_states}) on the Na$_{2}$ molecule, for which the $V_{1}(R)$ and $V_{2}(R)$ PECs correspond to the ${\rm X}^{1}\Sigma{\rm _{g}^{+}}$ and the
${\rm A}^{1}\Sigma{\rm _{u}^{+}}$ electronic states, respectively.
The PECs and the transition dipole are taken
from Refs. \citenum{Na2_PEC_Magnier_JCP_1993} and \citenum{Na2_TDM_Zemke_JMS_1981}, respectively. The field-free rovibrational eigenstates of Na$_2$ on the $V_{1}(R)$ and $V_{2}(R)$ PECs are computed using 200 spherical-DVR basis functions \cite{D2FOPI_Szidarovszky_PCCP_2010} with the related grid points placed in the internuclear coordinate range $(0,10)$ bohr.
Unless indicated otherwise, the set of field-free rovibrational eigenstates used to represent the Hamiltonian of Eq. (\ref{eq:CavityHamiltonian_detailed}) was composed of all states with $J<30$ and an energy not exceeding the zero point energy of the respective PEC by more than 5000 cm$^{-1}$. The maximum photon number in the cavity mode was set to two.
\section{Results and discussion}
The PECs of Na$_2$ employed are given in Fig. \ref{PEC}, along with some important physical processes. The main results of this study are conveniently depicted in Figs. \ref{FDS_weakcoupling}--\ref{W-I-D}. Each figure will be discussed separately.
\subsection{The field-dressed states}
The left panel in Fig. \ref{PEC} shows PECs of Na$_{2}$ dressed with different number of photons in the cavity radiation field, as well as vibrational probability densities for direct-product states of the $\vert jvJ\rangle\vert N\rangle$ form. As apparent from Eq. (\ref{eq:CavityHamiltonian_detailed})
and illustrated by the double-headed arrows in Fig.~\ref{PEC}, light-matter
interaction can give rise to resonant $\vert1$ $v$ $J\rangle\vert N\rangle\leftrightarrow\vert2$
$v'$ $J\pm1\rangle\vert N-1\rangle$ and non-resonant $\vert1$ $v$
$J\rangle\vert N\rangle\leftrightarrow\vert2$ $v'$ $J\pm1\rangle\vert N+1\rangle$
type couplings, which lead to the formation of field-dressed states,
see Eq. (\ref{eq:FieldDressedStates_small_photon_number}). The terms
``resonant'' and ``non-resonant''
indicate whether the direct-product states that are coupled
are close in energy or not, see Fig. \ref{PEC}. Naturally, resonant
couplings are much more efficient in mixing the direct-product states
than non-resonant couplings.
For comparison, the right panel of Fig. \ref{PEC} shows the light-dressed PECs of Na$_{2}$ in a laser field \cite{LICI_in_spectrum_Szidarovszky_JPCL_2018}. Because nonresonant couplings are omitted in the usual Floquet description \cite{Floquet} of laser light-dressed molecules, these couplings are not shown in the right panel of Fig. \ref{PEC}. It is clear from Fig. \ref{PEC} that the absorption spectrum of field-dressed molecules should be considerably different for the cavity-dressed and laser-dressed cases. The most significant difference is that while in the cavity the ground state is primarily a field-free eigenstate in vacuum, which is only deformed at relatively large coupling strengths through nonresonant couplings, the laser light-dressed state correlating to the field-free ground state contains a mixture of field-free eigenstates due to the strong resonant coupling in this case.
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{figures/pots-cavity.pdf}
\includegraphics[width=0.45\textwidth]{figures/pots-floquet.pdf}
\caption{\label{PEC}Left: PECs of Na$_{2}$, dressed with different numbers of photons of the cavity field, obtained with a dressing-light wavelength of $\lambda=653$ nm.
Vibrational probability densities are drawn for states of $\vert1$
$0$ $0\rangle\vert m\rangle$ type (dashed black lines on the $V_{1}(R)+m\hslash\omega_c$
PECs), and for states of $\vert2$ $6$ $1\rangle\vert m\rangle$ type (dotted red lines on the $V_{2}(R)+m\hslash\omega_c$ PECs). Couplings
induced by the cavity radiation field are indicated by the two double-headed
arrows. The continuous green double-headed arrow represents $\vert1$
$v$ $J\rangle\vert m\rangle\leftrightarrow\vert2$ $v'$ $J\pm1\rangle\vert m-1\rangle$
type resonant couplings, while the dashed purple double-headed arrow
represents $\vert1$ $v$ $J\rangle\vert m\rangle\leftrightarrow\vert2$
$v'$ $J\pm 1\rangle\vert m+1\rangle$ type non-resonant couplings. Finally,
the vertical brown wavy arrow indicates transitions between the two
manifolds of field-dressed states, resulting from the absorption of
a photon of the weak probe pulse. Right: same as left panel, but for Na$_2$ dressed by laser light. }
\end{figure}
\begin{figure}[h]
\begin{minipage}{0.58\columnwidth}%
\includegraphics[width=0.95\textwidth]{figures/wf-spectra-0.pdf}
\end{minipage}
\begin{minipage}{0.41\columnwidth}%
\includegraphics[width=0.95\textwidth]{figures/wf-spectra-1.pdf}
\end{minipage}
\caption{\label{FDS_weakcoupling}Field-dressed spectra obtained with different values of the light-matter coupling strength for a cavity mode wavelength of $\lambda=653$ nm. Coupling strength values are indicated by the intensity of a classical light field giving a coupling strength equal to the one-photon coupling of the cavity. The envelope
lines depict the spectra convolved with a Gaussian function having a standard deviation of $\sigma=50$ cm$^{-1}$.}
\end{figure}
\begin{figure}[h!]
\begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}ccc}
\includegraphics[width=0.32\columnwidth]{figures/spectrum-I-1-1D-50.pdf} & \includegraphics[width=0.32\columnwidth]{figures/spectrum-I-1-2D-50.pdf} & \includegraphics[width=0.32\columnwidth]{figures/pots-cavity-1.pdf}\tabularnewline
\includegraphics[width=0.32\columnwidth]{figures/spectrum-I-4-1D-50.pdf} & \includegraphics[width=0.32\columnwidth]{figures/spectrum-I-4-2D-50.pdf} & \includegraphics[width=0.32\columnwidth]{figures/pots-cavity-4.pdf}\tabularnewline
\includegraphics[width=0.32\columnwidth]{figures/spectrum-I-16-1D-50.pdf} & \includegraphics[width=0.32\columnwidth]{figures/spectrum-I-16-2D-50.pdf} & \includegraphics[width=0.32\columnwidth]{figures/pots-cavity-16.pdf}\tabularnewline
\includegraphics[width=0.32\columnwidth]{figures/spectrum-I-64-1D-50.pdf} & \includegraphics[width=0.32\columnwidth]{figures/spectrum-I-64-2D-50.pdf} & \includegraphics[width=0.32\columnwidth]{figures/pots-cavity-64.pdf}\tabularnewline
\end{tabular*}%
\caption{\label{FDS_weakToUScoupling}First two columns: field-dressed spectra obtained with different values of the light-matter coupling strength for a cavity mode wavelength of $\lambda=653$ nm. Coupling strength values are indicated by the intensity of a classical light field giving a coupling strength equal to the one-photon coupling of the cavity. The envelope lines depict the spectra convolved with a Gaussian function having a standard deviation of $\sigma=50$ cm$^{-1}$. The labels ``1D'' and ``2D'' stand for vibration-only and rovibrational calculations, defined by using $J_{\rm max}=1$ and $J_{\rm max}=30$, respectively. Solid and dashed lines correspond to calculations including or excluding the off-resonant couplings in the Hamiltonian, respectively. Third column: diabatic and adiabatic PECs at different light-matter coupling strength values.}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=0.45\columnwidth]{figures/spectra-sweep-I-1-2D-30.pdf}
\includegraphics[width=0.45\columnwidth]{figures/spectra-sweep-I-1-z-2D-30.pdf}%
\tabularnewline
\includegraphics[width=0.45\columnwidth]{figures/spectra-sweep-I-4-2D-30.pdf}
\includegraphics[width=0.45\columnwidth]{figures/spectra-sweep-I-4-z-2D-30.pdf}%
\tabularnewline
\includegraphics[width=0.45\columnwidth]{figures/spectra-sweep-I-16-2D-30.pdf}
\includegraphics[width=0.45\columnwidth]{figures/spectra-sweep-I-16-z-2D-30.pdf}%
\tabularnewline
\caption{\label{W-I-D} Cavity mode wavenumber dependence of the field-dressed spectrum obtained at three different coupling strengths, computed using Eq. (\ref{eq:transition_amplitude_between_cavity_FD_states}). Coupling strength values are indicated by the intensity of a classical light field giving a coupling strength equal to the one-photon coupling of the cavity. The spectrum is convolved at each fixed cavity mode wavelength with a Gaussian function having $\sigma=30$ cm$^{-1}$.}
\end{figure}
\subsection{Spectra in the weak coupling regime}
Figure \ref{FDS_weakcoupling} shows field-dressed spectra obtained with different values of the $\varepsilon _c = \sqrt{\hslash\omega_c/(\epsilon_{0}V)}$ cavity one-photon field strength in the weak-coupling regime for a cavity-mode wavelength of $\lambda=653$ nm. Although the light-matter coupling strength and the cavity-mode wavelength are not completely independent in a cavity (see Eq. (\ref{eq:quantized_A})), we treat them as independent parameters. This can be rationalized partially by Eq. (\ref{eq:quantized_A}), which shows that the coupling strength could be changed independently by changing the cavity volume, while keeping the cavity length responsible for the considered cavity radiation mode fixed.
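For orientation, $\varepsilon_c$ can be evaluated directly from the quoted expression; the mode volume used below is a hypothetical value chosen only to illustrate the $V^{-1/2}$ scaling invoked in the independence argument:

```python
import math

HBAR = 1.054571817e-34   # J s
EPS0 = 8.8541878128e-12  # F/m
C = 2.99792458e8         # m/s

def one_photon_field_strength(wavelength_m, volume_m3):
    """epsilon_c = sqrt(hbar * omega_c / (eps0 * V)) in V/m."""
    omega_c = 2.0 * math.pi * C / wavelength_m
    return math.sqrt(HBAR * omega_c / (EPS0 * volume_m3))
```

At fixed $\lambda$, shrinking the mode volume by a factor of four doubles $\varepsilon_c$, which is the sense in which the coupling strength can be varied independently of the cavity-mode wavelength.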
The spectra in Fig. \ref{FDS_weakcoupling} were computed assuming that the initial state is the
ground state of the full system. The left panel of Fig. \ref{FDS_weakcoupling} reflects features similar to those observed
in the spectrum of Na$_{2}$ dressed by laser fields \cite{LICI_in_spectrum_Szidarovszky_JPCL_2018}.
With increasing light-matter coupling, the overall intensity of the
spectrum slightly increases at almost all wavenumbers, with some
shoulder features becoming more pronounced in the spectrum envelope. In terms of spectroscopic nomenclature, such a phenomenon can be understood as an intensity-borrowing effect \cite{Cederbaum_multimode,vibronic_coupling_model_Cederbaum_AnnRevPhysChem_2004,CI_spectroscopyYarkony_AnnRevPhysChem_2012}, which arises from the field-induced couplings between field-free states.
On the other hand, the coupling strength dependence of the spectrum envelope is completely absent if the spectra in Fig. \ref{FDS_weakcoupling} are generated from computations in which the rotational motion is restricted by setting $J_{\rm max}=1$. Such a rotationally-restricted model inherently lacks any signatures of a LICI, whose formation requires at least two nuclear degrees of freedom. Therefore, in terms of the adiabatic representation, the intensity borrowing effect visible in Fig. \ref{FDS_weakcoupling} can be attributed to the nonadiabatic couplings of a LICI created by the quantized cavity radiation field.
Inspecting the individual transition lines in the spectra reveals that increasing
the light-matter coupling strength can result in the splitting
of existing peaks and the appearance of additional peaks, as shown
in the two panels on the right side of Fig. \ref{FDS_weakcoupling}. The upper right panel of Fig. \ref{FDS_weakcoupling}
shows the progression of three peaks, corresponding to transitions
from the initial state (essentially the $\vert1$ $0$ $0\rangle\vert0\rangle$
state) to field-dressed states composed primarily of the $\vert2$
$7$ $1\rangle\vert0\rangle$, $\vert2$ $7$ $3\rangle\vert0\rangle$,
and $\vert2$ $7$ $5\rangle\vert0\rangle$ states, with $\vert1$
$v$ $J\rangle\vert1\rangle$-type states ($J$ even) contributing
as well. With increasing light-matter coupling strength these transitions
are red shifted, and they can be interpreted as originating from the
field-free transition $\vert1$ $0$ $0\rangle\rightarrow\vert2$
$7$ $1\rangle$, which is split due to the mixing of $\vert2$ $7$
$1\rangle$ with other states through the light-matter coupling with
the cavity mode. The lower right panel of Fig. \ref{FDS_weakcoupling} shows
the progression of three peaks, which do not arise from the splitting
of an existing field-free peak, but appear as new peaks. These transitions
are blue shifted with increasing light-matter coupling strength, and
they occur between the initial state (essentially the $\vert1$ $0$
$0\rangle\vert0\rangle$ state) and field-dressed states composed
primarily of the $\vert1$ $3$ $0\rangle\vert1\rangle$, $\vert1$
$3$ $2\rangle\vert1\rangle$, and $\vert1$ $3$ $4\rangle\vert1\rangle$ states. Such transitions are forbidden in the zero light-matter coupling
limit; however, they become visible as the light-matter coupling with
the cavity mode contaminates the $\vert1$ $3$ $J\rangle\vert1\rangle$
states with $\vert2$ $v$ $1\rangle\vert0\rangle$-type states, to
which the initial state has allowed transitions.
\subsection{Spectra in the strong and ultrastrong coupling regimes}
In Fig. \ref{FDS_weakToUScoupling} field-dressed spectra obtained with
light-matter coupling strengths ranging from the weak to the ultrastrong coupling regimes are shown for a cavity-mode wavelength of $\lambda=653$ nm. The spectra labeled ``1D'' in Fig. \ref{FDS_weakToUScoupling} were obtained with a model having restricted rotational motion ($J_{\rm max}=1$). The results labeled ``2D'' fully account for rotations as well as vibrations, and therefore incorporate the effects of a LICI on the spectrum.
By comparing the ``1D'' and ``2D'' spectrum envelopes in Fig. \ref{FDS_weakToUScoupling}, it is apparent that there is a significant increase in absorption for the ``2D'' case. As discussed in the next subsection, this is primarily due to nonresonant light-matter couplings between $\vert1$ $0$ $J\rangle\vert0\rangle$ and $\vert2$ $v'$ $J\pm 1\rangle\vert1\rangle$-type states and partially due to the intensity borrowing effect induced by the nonadiabatic couplings of the LICI.
Figure \ref{FDS_weakToUScoupling} also shows field-dressed PECs in the diabatic and adiabatic representations.
As the coupling strength increases, two separate polariton surfaces are formed, and the absorption spectrum splits into two distinct groups of peaks, corresponding to transitions onto the two polariton states. At the largest coupling strength, a slight modification of the ground-state PEC can also be seen, indicating that the ultrastrong coupling regime has been reached.
\subsection{Impact of nonresonant coupling}
Interestingly, nonresonant couplings seem to have an impact on the spectrum at much smaller coupling strengths than those required for a significant modification of the ground-state PEC. The spectrum envelopes depicted with dotted and continuous lines in Fig. \ref{FDS_weakToUScoupling} indicate whether spectra were computed by using the
\begin{equation}
\hat{H}_{\rm 3x3}=
\begin{bmatrix}\hat{T}+V_{1}(R) & 0 & 0 \\
0 & \hat{T}+V_{2}(R) & g_{21}(R,\theta)\sqrt{2} \\
0 & g_{12}(R,\theta)\sqrt{2} & \hat{T}+V_{1}(R)+\hslash\omega_c
\end{bmatrix},\label{eq:CavityHamiltonian_detailed_3x3}
\end{equation}
upper left three-by-three block of the Hamiltonian in Eq. (\ref{eq:CavityHamiltonian_detailed}) or a six-by-six block, respectively. The deviation between these two types of spectra represents the effects of nonresonant couplings, because the $\hat{T}+V_{2}(R)+\hslash\omega_c$ term and its couplings with $\hat{T}+V_{1}(R)$ are present in Eq. (\ref{eq:CavityHamiltonian_detailed}), but absent in Eq. (\ref{eq:CavityHamiltonian_detailed_3x3}).
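On a radial grid, the three-by-three block of Eq. (\ref{eq:CavityHamiltonian_detailed_3x3}) can be assembled as a block matrix. The sketch below keeps only the internuclear coordinate (rotations and the $\theta$ dependence of the coupling are omitted), so it illustrates the block structure rather than reproducing the paper's full model:

```python
import numpy as np

def block_hamiltonian_3x3(R, V1, V2, g, hbar_omega_c, mu):
    """Radial-grid representation of the 3x3 upper-left block of the cavity
    Hamiltonian (one nuclear coordinate only; atomic units assumed)."""
    n = len(R)
    dR = R[1] - R[0]
    # kinetic energy T = -(1/2 mu) d^2/dR^2 via a 3-point finite difference
    T = (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
         + np.diag(np.full(n - 1, -1.0), -1)) / (2.0 * mu * dR**2)
    Z = np.zeros((n, n))
    G = np.diag(np.sqrt(2.0) * g)  # g_12(R) * sqrt(2) coupling block
    return np.block([
        [T + np.diag(V1), Z,               Z],
        [Z,               T + np.diag(V2), G],
        [Z,               G,               T + np.diag(V1 + hbar_omega_c)],
    ])
```

The zero blocks in the first row make explicit that, in this truncation, the $\vert1\rangle\vert0\rangle$ manifold is uncoupled, while the $\vert2\rangle\vert0\rangle$ and $\vert1\rangle\vert1\rangle$ manifolds interact through the cavity coupling.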
The plots in Fig. \ref{FDS_weakToUScoupling} clearly demonstrate that in the vibration-only ``1D'' case nonresonant couplings have no visible impact on the spectrum; however, for the ``2D'' case, in which rotations are accounted for, nonresonant couplings lead to a visible increase in the absorption signal even at the lowest coupling strengths shown. The physical origin of the increase in absorption is the contamination of the $\vert1$ $0$ $0\rangle\vert0\rangle$ ground state with the $\vert1$ $0$ $2\rangle\vert0\rangle$, $\vert1$ $0$ $4\rangle\vert0\rangle$, etc. states, which allows for transitions onto the $J=3,5,...$ components of the rovibronic states in the excited polariton manifold. These results indicate that the effects of nonresonant couplings cannot be described in a vibration-only model, and if one wishes to obtain meaningful simulation results for coupling strengths reaching or exceeding those shown in Fig. \ref{FDS_weakToUScoupling}, it is necessary to properly account for molecular rotations.
\subsection{Cavity-mode wavelength dependence of the spectrum}
Figure \ref{W-I-D} shows the cavity-mode wavenumber dependence of the
field-dressed spectrum obtained at the $\varepsilon _c=\sqrt{\hslash\omega_c/(\epsilon_{0}V)}$ cavity one-photon field strengths of 0.844$\cdot 10^{-4}$, 1.688$\cdot 10^{-4}$, and 3.376$\cdot 10^{-4}$ atomic units, corresponding to classical field intensities of 1, 4, and 16 GWcm$^{-2}$, respectively.
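The quoted correspondence between $\varepsilon_c$ and a classical intensity can be reproduced if one assumes the matched classical amplitude is $E=2\varepsilon_c$ (the Floquet coupling being $E/2$ while the cavity one-photon coupling is $\varepsilon_c$); this factor of two is our inference from the quoted numbers, not stated explicitly in the text:

```python
import math

EPS0 = 8.8541878128e-12        # F/m
C = 2.99792458e8               # m/s
AU_FIELD = 5.14220675e11       # 1 atomic unit of electric field in V/m

def equivalent_intensity_GW_cm2(eps_c_au):
    """Classical intensity whose coupling (E/2) equals the one-photon
    coupling eps_c, i.e. E = 2 * eps_c (assumed matching convention)."""
    E = 2.0 * eps_c_au * AU_FIELD          # peak field in V/m
    intensity = 0.5 * EPS0 * C * E**2      # W/m^2
    return intensity / 1e13                # 1 GW/cm^2 = 1e13 W/m^2
```

With this convention, $\varepsilon_c = 0.844\cdot 10^{-4}$, $1.688\cdot 10^{-4}$, and $3.376\cdot 10^{-4}$ a.u. map onto 1, 4, and 16 GW\,cm$^{-2}$, matching the values quoted above.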
It can be concluded from Fig. \ref{W-I-D} that, as expected, the field-dressed spectrum changes with the cavity-mode wavelength. Furthermore, the cavity-mode wavelength dependent spectrum shows qualitative features considerably different from the dressing-field wavelength dependence of the spectrum when Na$_{2}$ is dressed by medium intensity laser fields \cite{LICI_in_spectrum_Szidarovszky_JPCL_2018}, as one might expect from Fig. \ref{PEC}.
For all coupling strengths shown in Fig. \ref{W-I-D}, at large dressing-field photon energies, \textit{i.e.}, those exceeding 17000 cm$^{-1}$ or so, the spectra resemble the field-free spectrum, depicting around twenty lines corresponding to transitions to $\vert2$ $v$ $1\rangle\vert0\rangle$-type states. This is expected, because for such large photon energies, the $V_1(R)+\hslash \omega _c$ PEC crosses the $V_2(R)$ PEC at short internuclear distances, and the $V_2(R)$ PEC remains unperturbed in the Franck--Condon region. As the photon energy of the dressing field is lowered and the crossing of the $V_1(R)+\hslash \omega _c$ and $V_2(R)$ PECs approaches the Franck--Condon region, the spectrum becomes perturbed.
In the top row of Fig. \ref{W-I-D}, a decrease can be seen in the spectrum line intensities along diagonal lines in the plots, forming island-type features. Focusing on a specific vibrational state on $V_2(R)$, corresponding to a vertical line in the plots, a decrease in the spectrum intensity occurs when this vibrational state becomes resonant with one of the $\vert1$ $v$ $J\rangle\vert1\rangle$ states. Due to the resonance, a strong mixing occurs between the $\vert1$ $v$ $J\rangle\vert1\rangle$- and $\vert2$ $v'$ $J'\rangle\vert0\rangle$-type states, which leads to a decrease of the transition amplitude from the ground state. Nonetheless, when the mixing of the states is not as efficient as in the resonant case, \textit{i.e.}, at the island-type features on the plots, an increase can be seen in the spectrum intensities with respect to the field-free case.
As depicted in the middle and bottom rows of Fig. \ref{W-I-D}, when the coupling strength is increased, the picture of a ``perturbed spectrum'' gradually changes into the picture of two distinct spectra corresponding to the two polariton surfaces, in accordance with Fig. \ref{FDS_weakToUScoupling}. The dressing-field wavenumber dependence of the spectrum in the bottom row of Fig. \ref{W-I-D} can easily be understood in terms of the wavenumber dependence of the polariton surfaces depicted in the rightmost column of Fig. \ref{FDS_weakToUScoupling}.
\section{Summary and conclusions}
We investigated the rovibronic spectrum of homonuclear diatomic molecules dressed by the quantized radiation field of an optical cavity. The formation of light-induced conical intersections created by the quantized radiation field was shown for the first time by identifying the robust light-induced nonadiabatic effects in the spectrum. The coupling strength and cavity-mode wavelength dependence of the field-dressed spectrum was also investigated from the weak to the ultrastrong coupling regimes. The formation of polariton states in the strong coupling regime was demonstrated, and it was shown how nonresonant couplings lead to increased absorption in the field-dressed spectrum even before the ultrastrong coupling regime is reached. The numerical results demonstrate that the additional degree of freedom (which is the rotation in the present diatomic situation) plays a crucial role in the appropriate description of the light-induced nonadiabatic processes as well as in the efficiency of nonresonant couplings. Therefore, for physical scenarios in which diatomic molecular rotations can proceed in the cavity, properly accounting for the rotational degrees of freedom is mandatory for obtaining reliable simulation results.
We hope that our findings will stimulate photochemical cavity experiments, as well as the extension of the theory to the proper description of polyatomic molecules. It did not escape our attention that there is much potential in studying light-induced conical intersections of polyatomic molecules in a cavity even without rotations, as there are many nuclear degrees of freedom available to form such intersections, which could also be used to selectively manipulate certain chemical and physical properties.
\section{Acknowledgement}
This research was supported by the EU-funded Hungarian grant EFOP-3.6.2-16-2017-00005 and by the Deutsche Forschungsgemeinschaft (Project ID CE10/50-3). The authors are grateful to NKFIH for support (Grant No. PD124623, K119658 and K128396). The authors thank P\'eter Domokos for the fruitful discussions.
Nowadays, with the rapid development of analyzing player performance by collecting the past matches they have played, researchers have cooperated with sports teams and players to boost the advancement of sports analytics.
However, it is difficult for novel algorithms to be verified in real-time matches due to the cost and the performance concerns of players.
To mitigate the problem, \citet{DBLP:conf/aaai/KurachRSZBERVMB20} proposed a reinforcement learning football environment, which benefits researchers by allowing them to reproduce and test algorithms quickly offline.
Nonetheless, there is no existing environment to develop new ideas in turn-based sports, e.g., badminton, tennis.
Directly using existing environments is not feasible due to the varying nature of different sports.
Therefore, we focus on one of the turn-based sports, badminton, to demonstrate our proposed reinforcement learning environment.
However, there are at least two challenges in describing the various factors of a rally.
First, \textbf{3-D Trajectories}: The trajectory of a shuttlecock consists of not only 2-D coordinates but also the height.
The actual height of the shuttlecock cannot be detected precisely due to the regulations in the real-world high-ranking matches and the cost of deploying such advanced techniques (e.g., hawk-eye systems).
Moreover, there are no existing records for the shuttlecock's height, and it is also difficult for domain experts to label the 3-D trajectory, especially with the height.
Second, \textbf{Multi-Agent Turn-Based Environment}: As described in \cite{DBLP:conf/aaai/WangSCP22}, a rally is composed of two players playing alternatively, which is different from the conventional sequence with the same target.
Therefore, it is challenging to design proper states, actions, and rewards for both agents, since each agent performs complicated actions, such as returning the shuttlecock and positioning itself, while taking various observations into consideration, such as the shuttlecock's position and the opponent's.
To address these issues, we propose a reinforcement learning badminton environment that is equipped with multiple view angles to review a given match (either a simulation or a real match).
In addition, we design the environment based on the multi-agent particle environment (MAPE) \cite{NIPS2017_68a97503} to describe the process of two agents in a rally.
In this manner, our badminton environment not only supports coaches and players in reviewing and investigating players' tactics in a more flexible way, but also provides researchers with an interface to quickly demonstrate new algorithms.
For a more detailed illustration, please refer to our demonstration here\footnote{https://youtu.be/WRPcbalb6yc.}.
\section{Approach}
\subsection{Dataset Collection}
We use the dataset collected by previous research \cite{DBLP:conf/aaai/WangSCP22}, which includes 75 high-ranking matches from 2018 to 2021 by 31 players from men's singles and women's singles games labeled by domain experts with the BLSR format \cite{DBLP:conf/icdm/WangCYWFP21,10.1145/3551391}.
The dataset includes the positions of the players and the shuttlecock as 2-D coordinates, the timestamp of each ball round, the type of ball, the scores, and the motions of players.
We aimed to learn the tactics from different players by these datasets.
\subsubsection{Mimicking Actual Ball Height}
We lack information about the shuttlecock's height in the collected dataset (we only know a label describing whether the hit point is above the net or not).
To simulate the actual height, we assign each shot type an average height below the net and a standard deviation, and then draw the height from the corresponding normal distribution.
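In code, this amounts to one Gaussian per shot type; the statistics below are made-up placeholders, not the expert-derived values used in the environment:

```python
import random

# Hypothetical per-shot-type height statistics (mean, std) in meters;
# the real values would come from domain knowledge, not these placeholders.
HEIGHT_STATS = {
    "clear": (3.5, 0.5),
    "smash": (0.4, 0.1),
    "net shot": (0.2, 0.05),
}

def sample_hit_height(shot_type, rng=random):
    """Draw a shuttlecock height from the shot type's normal distribution."""
    mean, std = HEIGHT_STATS[shot_type]
    return max(0.0, rng.gauss(mean, std))  # clamp: a height cannot be negative
```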
\subsection{Reinforcement Learning Badminton Environment}
As the tactics vary according to individual player, we have to design a process in a rally that is able to mimic players while considering different factors.
Specifically, our environment is based on MAPE, which supports multi-agent training.
\subsubsection{Environment Design}
The environment is designed following a regular real-world badminton court, which includes two players from each side, a shuttlecock, the net and the boundary.
To provide a better visualization experience and to adapt to different application scenarios, we propose multi-view observation options, which enable the user to monitor the playing process (the training process when training agents) through the side view or the top view.
Since a 2-D rendering cannot directly display the shuttlecock's height, we designed a size-shrinking method to illustrate it.
Specifically, the rendering object is bigger if the shuttlecock is closer to the player, and smaller otherwise.
\subsubsection{Turn-Based Procedure}
As badminton is a fast-paced sport, it is difficult for the agents to move instantaneously.
Therefore, our goal is to make the agent focus on learning the tactics of the badminton player instead of playing badminton.
We therefore simplify the real-time game into a turn-based environment.
The procedure in a rally is as follows: 1) Assume that the shuttlecock is served by player A. In this sub-step, player A, as an agent, will decide the landing position of the shuttlecock, the ball type to hit, and the defense position to go to after returning the ball.
On the other hand, player B, as an opponent agent, will decide the target position to go to in order to return the shuttlecock.
2) The environment will simulate the player's move and the trajectory of the shuttlecock until the shuttlecock reaches the defense region of the opponent.
3) At the moment the shuttlecock enters the opponent's defense region, the simulation will stop.
At this point, the returning player also decides the target position to go to in order to return the shuttlecock.
4) After receiving the players' decision, the environment will keep simulating until the shuttlecock falls into the proper region, that is close enough to the opponent and the height of the shuttlecock is reasonable for the type of shot the opponent is returning.
5) The step is finished, so the roles of the players swap. The environment executes the returning action and goes back to Step 2 until the rally is finished.
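The five-step protocol above can be condensed into a loop; the class and method names below are illustrative stand-ins, not the environment's actual API:

```python
def play_rally(env, hitter, defender):
    """Sketch of the turn-based rally protocol (hypothetical interface)."""
    env.serve(hitter)
    while not env.rally_finished():
        # hitter chooses landing spot, shot type and its own defense position
        shot = hitter.decide_shot(env.observe(hitter))
        # defender chooses where to move in order to intercept the return
        target = defender.decide_move(env.observe(defender))
        # simulate the shuttlecock flight until it enters the defense region
        env.step(shot, target)
        hitter, defender = defender, hitter  # roles swap each turn (step 5)
    return env.winner()
```

Each pass through the loop corresponds to one "turn" of the rally, with the environment advancing the continuous dynamics between the agents' discrete decisions.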
\subsubsection{Simulation}
To produce a realistic environment and enhance its reference value for reality, we set the meta-parameters based on the match dataset.
The meta-parameters we tuned based on the dataset include the player speed, the defense range of the players, the returning-region distribution of different ball types, and other physical parameters of the shuttlecock.
Furthermore, we follow \cite{Chen2009ASO} to simulate the shuttlecock trajectory.
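Such a trajectory model integrates gravity plus a quadratic aerodynamic drag term; a minimal Euler-integration sketch (the drag coefficient is a placeholder giving a terminal speed of roughly 6.8 m/s, not a value fitted to the dataset):

```python
import math

def simulate_trajectory(pos, vel, dt=0.002, k_drag=0.21, g=9.81):
    """Integrate (x, y, z) motion with drag proportional to speed^2.

    k_drag ~ g / v_terminal^2; 0.21 corresponds to a terminal speed of
    about 6.8 m/s (an illustrative value). Returns the sampled trajectory
    until the shuttlecock reaches the ground (z <= 0).
    """
    x, y, z = pos
    vx, vy, vz = vel
    points = [(x, y, z)]
    while z > 0.0:
        speed = math.sqrt(vx * vx + vy * vy + vz * vz)
        ax = -k_drag * speed * vx
        ay = -k_drag * speed * vy
        az = -g - k_drag * speed * vz
        vx += ax * dt; vy += ay * dt; vz += az * dt
        x += vx * dt; y += vy * dt; z += vz * dt
        points.append((x, y, z))
    return points
```

The strong drag is what makes a shuttlecock's flight so much shorter and steeper than a vacuum parabola, which matters for tuning the defense regions above.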
\section{Preliminary Results}
\begin{figure}
\centering
\includegraphics[height=!, width=\linewidth, keepaspectratio]{images/multiview.png}
\caption{The schematic of the reinforcement learning badminton environment with two supporting views.}
\label{fig1: multi-view}
\end{figure}
\noindent\textbf{Multiple Angles of View. }
Figure \ref{fig1: multi-view} illustrates our proposed badminton environment equipped with different views, which enables researchers and domain experts to observe the playing procedure.
\noindent\textbf{Multi Agent. }
In general reinforcement learning (RL) environments, there is just one agent to interact with an environment in a match.
However, badminton games are usually for two or four players to play, so we built our environment based on MAPE to achieve this function.
Our environment is able to train not just one but two, three, or four agents in the same match, and can handle situations such as a training agent playing against an expert player, or two agents controlling the two players on the same side in doubles games.
\noindent\textbf{Recording Match Data. }
One of the characteristics of our environment is that it records the match data throughout the matches.
This technique benefits researchers with not only data augmentation but also debugging for improving training policies.
\label{Se1}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.8\linewidth]{prior.png}
\end{center}
\caption{ The diagram of VaDE\xspace. The data generative process of VaDE\xspace is done as follows:
1) a cluster is picked from a GMM model; 2) a latent embedding is generated based on the picked cluster;
3) DNN $f({\bf z};{\boldsymbol{\theta}})$ decodes the latent embedding into an observable $\bf x$. A encoder network $g({\bf x};{\boldsymbol{\phi}})$
is used to maximize the ELBO of VaDE\xspace.
}
\label{fig:diagram}
\end{figure}
Clustering is the process of grouping similar objects together, which is one of the most fundamental tasks in
machine learning and artificial intelligence. Over the past decades, a large family of clustering algorithms have been developed and successfully applied in enormous real world tasks~\cite{ng02,ye08,yang10,xie15}.
Generally speaking, there is a dichotomy of clustering methods: Similarity-based clustering and Feature-based clustering.
Similarity-based clustering builds models upon a distance matrix, which is a $N\times N$ matrix that measures the
distance between each pair of the $N$ samples. One of the most famous similarity-based clustering methods is Spectral Clustering
(SC)~\cite{von07}, which leverages the Laplacian spectra of the distance matrix to reduce dimensionality before clustering. Similarity-based clustering methods have the advantage that domain-specific similarity or kernel functions
can be easily incorporated into the models. But these methods suffer from a scalability issue due to the super-quadratic running time for computing the spectra.
Different from similarity-based methods, a feature-based method takes a $N\times D$ matrix as input, where $N$ is
the number of samples and $D$ is the feature dimension. One popular feature-based clustering method is $K$-means, which aims to
partition the samples into $K$ clusters so as to minimize the within-cluster sum of squared errors. Another representative feature-based
clustering model is Gaussian Mixture Model (GMM), which assumes that the data points are generated from a Mixture-of-Gaussians (MoG),
and the parameters of GMM are optimized by the Expectation Maximization (EM) algorithm. One advantage of GMM over $K$-means
is that a GMM can generate samples by estimation of data density. Although $K$-means, GMM and their variants~\cite{ye08,liu10} have been extensively used, learning good representations most suitable for clustering tasks is left largely unexplored.
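This generative advantage can be made concrete: once fitted, a GMM produces new points by ancestral sampling (pick a component according to the mixing weights, then sample from its Gaussian), something $K$-means has no analogue of. A 1-D numpy sketch:

```python
import numpy as np

def sample_gmm(weights, means, stds, n, rng):
    """Ancestral sampling from a 1-D Mixture-of-Gaussians."""
    components = rng.choice(len(weights), size=n, p=weights)  # pick clusters
    return rng.normal(loc=np.take(means, components),
                      scale=np.take(stds, components))
```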
Recently, deep learning has achieved widespread success in numerous machine learning
tasks~\cite{alex12,zheng14sup,szegedy2015going,zheng2014neural,he16,zheng15deep,zheng2016neural}, where learning good representations by deep
neural networks (DNN) lies in the core. Taking a similar approach, it is conceivable to
conduct clustering analysis on good representations, instead of raw data points.
In a recent work, Deep Embedded Clustering (DEC)~\cite{xie15} was proposed to
simultaneously learn feature representations and cluster assignments by deep neural
networks. Although DEC performs well in clustering, similar to $K$-means, DEC cannot
model the generative process of data, hence is not able to
generate samples. Some recent works, e.g. VAE~\cite{kingma13}, GAN~\cite{goodfellow14}
, PixelRNN~\cite{oord2016pixel}, InfoGAN~\cite{Chen16InfoGAN} and PPGN~\cite{nguyen16PPGN},
have shown that neural networks can be trained to generate meaningful samples.
The motivation of this work is to develop a
{\it clustering} model based on neural networks that 1) learns good representations that capture the
statistical structure of the data, and 2)
is capable of generating samples.
In this paper, we propose a clustering framework, Variational Deep Embedding (VaDE\xspace), that combines
VAE~\cite{kingma13} and a Gaussian Mixture Model for clustering tasks.
VaDE\xspace models the data generative process by a GMM and a DNN $f$: 1) a cluster is picked up
by the GMM; 2) from which a latent representation $\bf z$ is sampled; 3) DNN $f$ decodes $\bf z$
to an observation $\bf x$.
Moreover, VaDE\xspace is optimized by using another DNN $g$
to encode observed data $\bf x$ into latent embedding $\bf z$, so that the Stochastic
Gradient Variational Bayes (SGVB) estimator and the {\it reparameterization}
trick~\cite{kingma13} can be used to maximize the evidence lower bound (ELBO).
VaDE\xspace generalizes VAE in that a Mixture-of-Gaussians prior replaces the single
Gaussian prior. Hence, VaDE\xspace is by design more suitable for clustering tasks\footnote{Although VaDE\xspace can also be used for unsupervised feature learning or semi-supervised learning
tasks, we focus only on clustering in this work.}. Specifically,
the main contributions of the paper are:
\begin{itemize}
\item We propose an unsupervised generative clustering framework, VaDE\xspace, that combines VAE and GMM;
\item We show how to optimize VaDE\xspace by maximizing the ELBO using the SGVB estimator and the {\it reparameterization} trick;
\item Experimental results show that VaDE\xspace outperforms the state-of-the-art clustering models on $5$ datasets from various modalities by a large margin;
\item We show that VaDE\xspace can generate highly realistic samples for any specified cluster, without using supervised information during training.
\end{itemize}
The diagram of VaDE\xspace is illustrated in Figure~\ref{fig:diagram}.
\section{Related Work}
\label{sec:related_work}
Recently, learning good representations has been found to play an important role in clustering tasks. For
example, DEC~\cite{xie15} was proposed to learn feature representations and cluster
assignments simultaneously by deep neural networks. In fact, DEC learns a mapping from
the observed space to a lower-dimensional latent space, where it iteratively optimizes the KL divergence
to minimize the within-cluster distance of each cluster. DEC achieved impressive performances
on clustering tasks. However, the feature embedding in DEC is designed specifically for clustering
and fails to uncover the real underlying structure of the data, which deprives the model of the
ability to extend to tasks beyond clustering, such as generating samples.
Deep generative models have recently attracted much attention because
they can capture the data distribution with neural networks,
from which unseen samples can be generated. GAN and VAE are among the most successful deep generative models in recent years.
Both of them are appealing unsupervised generative
models, and their variants have been extensively studied and applied in various tasks such as semi-supervised
classification~\cite{Kingma14Semi,maaloe16auxiliary,salimans16improvedGAN,makhzani16AAE,abbasnejad16infiniteVAE},
clustering~\cite{makhzani16AAE} and image generation~\cite{radford15,dosovitskiy16DeePSiM}.
For example, \cite{abbasnejad16infiniteVAE} proposed to use a mixture of VAEs
for semi-supervised classification tasks, where the mixing coefficients of these VAEs are modeled by a
Dirichlet process to adapt its capacity to the input data.
SB-VAE~\cite{Nalisnick16SBVAE} also applied Bayesian nonparametric techniques
on VAE, which derived a stochastic latent dimensionality
by a stick-breaking prior and achieved good performance on semi-supervised classification tasks.
VaDE\xspace differs from SB-VAE in that the cluster assignment
and the latent representation are jointly considered in the Gaussian mixture prior,
whereas SB-VAE separately models
the latent representation and the class variable, which fails to capture
the dependence between them. Additionally, VaDE\xspace does not need the class label during training,
while the labels of data are required by SB-VAE due to its semi-supervised setting.
Among the variants of VAE, Adversarial Auto-Encoder (AAE)~\cite{makhzani16AAE} can also perform
unsupervised clustering. Different
from VaDE\xspace, AAE uses GAN to match the aggregated posterior with the prior of VAE,
which makes its training procedure much more complex than that of VaDE\xspace.
We will compare AAE with VaDE\xspace in the experiments section.
Similar to VaDE\xspace, \cite{nalisnickapproximate} proposed DLGMM to combine VAE and GMM.
The crucial difference, however, is that VaDE\xspace replaces the single Gaussian prior
of VAE with a mixture-of-Gaussians prior, which is suitable for clustering tasks
by nature, while DLGMM uses a mixture of Gaussians
as the approximate posterior of VAE and does not model the class variable.
Hence, VaDE\xspace generalizes VAE to clustering tasks, whereas DLGMM is used to improve the capacity of the original VAE and is not suitable for
clustering tasks by design. The recently proposed GM-CVAE~\cite{shu16stochastic}
also combines a VAE with a GMM. However, the GMM in GM-CVAE is used to model the transitions between video
frames, which is the main difference from VaDE\xspace.
\section{Variational Deep Embedding}
\label{sec:model}
In this section, we describe Variational Deep Embedding (VaDE\xspace), a model for the probabilistic
clustering problem within the framework of the Variational Auto-Encoder (VAE).
\subsection{The Generative Process}
\label{sec:gen_process}
Since VaDE\xspace is an unsupervised generative approach to clustering, we first
describe the generative process of VaDE\xspace. Specifically, suppose there are $K$ clusters;
an observed sample $\mathbf{x}\in \mathbb{R}^D$ is generated by the following process:
\begin{enumerate}
\item Choose a cluster $c \sim \textup{Cat}(\boldsymbol{\pi})$
\item Choose a latent vector ${\bf z} \sim \mathcal{N}\left(\boldsymbol{\mu}_c,\boldsymbol{\sigma}_c^2{\bf I}\right)$
\item Choose a sample $\bf x$:
\begin{enumerate}
\item If $\bf x$ is binary
\begin{enumerate}
\item Compute the expectation vector $\boldsymbol{\mu}_x$
\begin{equation}
\boldsymbol{\mu}_x = f({\bf z};\boldsymbol{\theta})
\label{eqn:f1}
\end{equation}
\item Choose a sample ${\bf x} \sim \textup{Ber}(\boldsymbol{\mu}_x)$
\end{enumerate}
\item If $\bf x$ is real-valued
\begin{enumerate}
\item Compute $\boldsymbol{\mu}_x$ and $\boldsymbol{\sigma}_x^2$
\begin{equation}
[\boldsymbol{\mu}_x;\log \boldsymbol{\sigma}_x^2] = f({\bf z};\boldsymbol{\theta})
\label{eqn:f2}
\end{equation}
\item Choose a sample ${\bf x}\sim \mathcal{N}\left(\boldsymbol{\mu}_x,\boldsymbol{\sigma}_x^2\mathbf{I}\right)$
\end{enumerate}
\end{enumerate}
\end{enumerate}
where $K$ is a predefined parameter, $\pi_k$ is the prior probability for cluster $k$,
$\boldsymbol{\pi}\in \mathbb{R}_+^K$ with $\sum_{k=1}^K \pi_k=1$, $\textup{Cat}(\boldsymbol{\pi})$
is the categorical distribution parametrized by $\boldsymbol{\pi}$, $\boldsymbol{\mu}_c$ and
$\boldsymbol{\sigma}_c^2$ are the mean and the variance of the Gaussian distribution corresponding to cluster $c$, $\bf I$ is an identity matrix, $f({\bf z};\boldsymbol{\theta})$ is a neural
network whose input is $\bf z$ and is parametrized by $\boldsymbol{\theta}$, $\textup{Ber}(\boldsymbol{\mu}_x)$
and $\mathcal{N}(\boldsymbol{\mu}_x,\boldsymbol{\sigma}_x^2)$ are multivariate Bernoulli
distribution and Gaussian distribution parametrized by $\boldsymbol{\mu}_x$ and $\boldsymbol{\mu}_x,\boldsymbol{\sigma}_x$,
respectively. The generative process is depicted in Figure~\ref{fig:diagram}.
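The three-step generative process above can be sketched in a few lines of pure Python; the function name and the toy \texttt{decoder} callable below are our own illustrative stand-ins for the DNN $f$, assuming list-based parameters rather than the paper's actual implementation:

```python
import math
import random

def generate_sample(pi, mu_c, sigma2_c, decoder, binary=True):
    """Sketch of the VaDE generative process; `decoder` stands in for the DNN f."""
    c = random.choices(range(len(pi)), weights=pi)[0]        # 1) pick a cluster
    z = [random.gauss(m, math.sqrt(s2))                      # 2) sample latent z
         for m, s2 in zip(mu_c[c], sigma2_c[c])]
    if binary:
        mu_x = decoder(z)                                    # 3a) Bernoulli mean mu_x
        return [1 if random.random() < p else 0 for p in mu_x]
    mu_x, log_s2 = decoder(z)                                # 3b) Gaussian mu_x, log sigma_x^2
    return [random.gauss(m, math.sqrt(math.exp(ls)))
            for m, ls in zip(mu_x, log_s2)]
```

The binary branch corresponds to Equation~\ref{eqn:f1} and the real-valued branch to Equation~\ref{eqn:f2}.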
According to the generative process above, the joint probability $p({\bf x}, {\bf z}, c)$ can be
factorized as:
\begin{equation}
p({\bf x}, {\bf z}, c) = p({\bf x}|{\bf z})p({\bf z}|c)p(c),
\label{eqn:fact_p}
\end{equation}
since $\bf x$ and $c$ are conditionally independent given $\bf z$. The probabilities are defined as:
\begin{eqnarray}
p(c) &=& \textup{Cat}(c|{\boldsymbol{\pi}})\label{eqn:p_c}\\
p({\bf z}|c) &=& \mathcal{N}\left({\bf z}|\boldsymbol{\mu}_c,\boldsymbol{\sigma}_c^2{\bf I}\right)\label{eqn:p_zc}\\
p({\bf x}| {\bf z}) &=& \textup{Ber}({\bf x}| \boldsymbol{\mu}_x) \quad \textup{or}\quad \mathcal{N}({\bf x}|\boldsymbol{\mu}_x,\boldsymbol{\sigma}_x^2\mathbf{I})\label{eqn:p_xz}
\end{eqnarray}
\subsection{Variational Lower Bound}
\label{sec:vlowerbound}
A VaDE\xspace instance is tuned to maximize the likelihood of the given data points. Given
the generative process in Section~\ref{sec:gen_process}, by using Jensen's inequality,
the log-likelihood of VaDE\xspace can be written as:
\begin{flalign}
\log p({\bf x})&=\log\int_{\bf z}\sum_{c}p({\bf x,z},c)d{\bf z}\nonumber\\
&\geq E_{q({\bf z},c|{\bf x})}[\log\frac{p({\bf x,z},c)}{q({\bf z},c|{\bf x})}]=\mathcal{L}_{\textup{ELBO}}({\bf x})\label{eqn:loglikelihood}
\end{flalign}
where $\mathcal{L}_{\textup{ELBO}}$ is the evidence lower bound (ELBO) and $q({\bf z},c|{\bf x})$ is the variational
posterior used to approximate the true posterior $p({\bf z},c|{\bf x})$. In VaDE\xspace, we
assume $q({\bf z},c|{\bf x})$ to be a mean-field distribution that can be factorized as:
\begin{equation}
q({\bf z},c| {\bf x}) = q({\bf z|x})q(c|{\bf x}).
\label{eqn:va_q}
\end{equation}
Then, according to Equations~\ref{eqn:fact_p} and \ref{eqn:va_q}, the $\mathcal{L}_{\textup{ELBO}}({\bf x})$
in Equation~\ref{eqn:loglikelihood} can be rewritten as:
\begin{eqnarray}
\mathcal{L}_{\textup{ELBO}}({\bf x})&=&E_{q({\bf z},c|{\bf x})}\left[\log \frac{p({\bf x,z},c)}{q({\bf z},c|{\bf x})}\right]\nonumber\\
&=&E_{q({\bf z},c|{\bf x})}\left[\log p({\bf x,z},c)-\log q({\bf z},c|{\bf x})\right]\nonumber\\
&=&E_{q({\bf z},c|{\bf x})}[\log p({\bf x}|{\bf z})+\log p({\bf z}|c)\label{eqn:elbo_fact}\\
&\quad&+ \log p(c) -\log q({\bf z}| {\bf x}) - \log q(c|{\bf x})]\nonumber
\end{eqnarray}
In VaDE\xspace, similar to VAE, we use a neural network $g$ to model $q({\bf z|x})$:
\begin{eqnarray}
[\boldsymbol{\tilde\mu};\log \boldsymbol{\tilde\sigma}^2]&=&g({\bf x};\boldsymbol{\phi})\label{eqn:g_mu_sigma}\\
q({\bf z}|{\bf x})&=&\mathcal{N}({\bf z};\boldsymbol{\tilde\mu},{\boldsymbol{\tilde\sigma}}^2{\bf I})
\label{eqn:q_z_x}
\end{eqnarray}
where $\boldsymbol{\phi}$ is the parameter of network $g$.
By substituting the terms in Equation~\ref{eqn:elbo_fact} with
Equations~\ref{eqn:p_c}, \ref{eqn:p_zc}, \ref{eqn:p_xz} and \ref{eqn:q_z_x},
and using the SGVB estimator and the {\it reparameterization} trick,
the $\mathcal{L}_{\textup{ELBO}}({\bf x})$ can be rewritten as:
\footnote{This is the case when the observation $\bf x$ is binary. For the real-valued situation, the ELBO
can be obtained in a similar way.}
\begin{small}
\begin{flalign}
\mathcal{L}_{\textup{ELBO}}({\bf x})
=&\frac{1}{L}\sum_{l=1}^L\sum_{i=1}^D{x_i}\log\boldsymbol{{\mu}}^{(l)}_x|_i+(1-x_i)\log(1-\boldsymbol{{\mu}}^{(l)}_x|_i)\nonumber\\
&-\frac{1}{2}\sum_{c=1}^K\gamma_{c}\sum_{j=1}^J(\log\boldsymbol{\sigma}^2_{c}|_{j}+
\frac{\tilde{\boldsymbol{\sigma}}^2|_j}{\boldsymbol{\sigma}^2_{c}|_{j}}+
\frac{(\tilde{\boldsymbol{\mu}}|_j-\boldsymbol{\mu}_{c}|_{j})^2}{\boldsymbol{\sigma}^2_{c}|_{j}})\nonumber\\
&+\sum_{c=1}^K\gamma_{c}\log \frac{\pi_c}{\gamma_{c}}
+\frac{1}{2}\sum_{j=1}^J(1+\log\tilde{\boldsymbol{\sigma}}^2|_j)\label{eqn:ELBO_detail}
\end{flalign}
\end{small}
where $L$ is the number of Monte Carlo samples in the SGVB estimator,
$D$ is the dimensionality of ${\bf x}$ and $\boldsymbol{\mu}_x^{(l)}$, $x_i$
is the $i$\textsuperscript{th} element of $\bf x$,
$J$ is the dimensionality of $\boldsymbol{\mu}_c$, $\boldsymbol{\sigma}_c^2$,
$\tilde{\boldsymbol{\mu}}$ and $\tilde{\boldsymbol{\sigma}}^2$,
and ${\bf \ast}|_j$ denotes the $j$\textsuperscript{th} element of $\bf \ast$,
$K$ is the number of clusters, $\pi_c$ is the prior probability of cluster $c$,
and $\gamma_c$ denotes $q(c|{\bf x})$ for simplicity.
In Equation~\ref{eqn:ELBO_detail},
we compute $\boldsymbol{\mu}_x^{(l)}$ as
\begin{equation}
\boldsymbol{\mu}_x^{(l)}=f({\bf z}^{(l)};\boldsymbol{\theta}),
\end{equation}
where ${\bf z}^{(l)}$ is
the $l$\textsuperscript{th} Monte Carlo sample drawn from $q({\bf z}|{\bf x})$ as defined
in Equation~\ref{eqn:q_z_x}. According to the {\it reparameterization} trick, ${\bf z}^{(l)}$ is
obtained by
\begin{equation}
{\bf z}^{(l)}=\boldsymbol{\tilde\mu}+\boldsymbol{\tilde\sigma}\circ\boldsymbol{\epsilon}^{(l)},
\end{equation}
where $\boldsymbol{\epsilon}^{(l)}\sim \mathcal{N}(0 ,{\bf I})$, $\circ$ is element-wise multiplication, and
$\boldsymbol{\tilde\mu}$, $\boldsymbol{\tilde\sigma}$ are derived by Equation~\ref{eqn:g_mu_sigma}.
We now describe how to formulate $\gamma_c \triangleq q(c|{\bf x})$ in Equation~\ref{eqn:ELBO_detail} to
maximize the ELBO. Specifically, $\mathcal{L}_{\textup{ELBO}}({\bf x})$
can be rewritten as:
\begin{small}
\begin{flalign}
&\mathcal{L}_{\textup{ELBO}}({\bf x})=E_{q({\bf z},c|{\bf x})}\left[\log \frac{p({\bf x,z},c)}{q({\bf z},c|{\bf x})}\right]\nonumber\\
=&\int_{\bf z}\sum_c q(c| {\bf x})q({\bf z|x})\left[\log\frac{p({\bf x|z})p({\bf z})}{q({\bf z|x})}+\log\frac{p(c|{\bf z})}{q(c|{\bf x})}\right]d{\bf z}\nonumber\\
=&\int_{\bf z}q({\bf z|x})\log\frac{p({\bf x|z})p({\bf z})}{q({\bf z|x})}d{\bf z}
-\int_{\bf z}q({\bf z|x})D_{KL}(q(c|{\bf x})||p(c|{\bf z}))d{\bf z}\label{eq:q_x_prove}
\end{flalign}
\end{small}
In Equation~\ref{eq:q_x_prove}, the first term does not depend on $c$ and the second term is non-negative.
Hence, to maximize $\mathcal{L}_{\textup{ELBO}}({\bf x})$, $D_{KL}(q(c|{\bf x})||p(c|{\bf z})) \equiv 0$
should be satisfied. As a result, we use the following equation to compute $q(c|{\bf x})$ in VaDE\xspace:
\begin{equation}
q(c|{\bf x})=p(c|{\bf z})\equiv\frac{p(c)p({\bf z}|c)}{\sum_{c'=1}^Kp(c')p({\bf z}|c')}
\label{eqn:p_c_z}
\end{equation}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.95\linewidth]{compare_new.png}
\end{center}
\caption{Clustering accuracy over number of epochs during training on MNIST.
We also illustrate the best performances of DEC, AAE, LDMGI and GMM.
It is better to view the figure in color.}
\label{fig:KL}
\end{figure}
By using Equation~\ref{eqn:p_c_z}, the information loss induced by the mean-field
approximation can be mitigated, since $p(c|{\bf z})$ captures the
relationship between $c$ and $\bf z$. It is worth noting that $p(c|{\bf z})$
is only an approximation to $q(c|{\bf x})$,
and we find it works well in practice\footnote{We approximate $q(c|{\bf x})$ by:
1) sampling a ${\bf z}^{(i)}\sim q({\bf z|x})$; 2) computing $q(c|{\bf x})=p(c|{\bf z}^{(i)})$
according to Equation~\ref{eqn:p_c_z}}.
Once the training is done by maximizing the ELBO w.r.t. the parameters of
$\lbrace \boldsymbol{\pi}, \boldsymbol{\mu}_c, \boldsymbol{\sigma}_c,
\boldsymbol{\theta}, \boldsymbol{\phi} \rbrace$, $c \in \lbrace 1,\cdots, K\rbrace$,
a latent representation $\bf z$
can be extracted for each observed sample $\bf x$ by Equation~\ref{eqn:g_mu_sigma}
and Equation~\ref{eqn:q_z_x}, and the clustering assignments can be obtained
by Equation~\ref{eqn:p_c_z}.
\subsection{Understanding the ELBO of VaDE\xspace}
\label{sec:elbo}
In this section, we provide some intuition about the ELBO of VaDE\xspace. More specifically, the ELBO in Equation~\ref{eqn:loglikelihood} can be further rewritten as:
\begin{equation}
\mathcal{L}_{\textup{ELBO}}({\bf x})=E_{q({\bf z},c|{\bf x})}[\log p({\bf x}|{\bf z})]-D_{KL}(q({\bf z},c|{\bf x})||p({\bf z},c))
\label{eqn:analysis_elbo}
\end{equation}
The first term in Equation~\ref{eqn:analysis_elbo} is the {\it reconstruction} term, which encourages
VaDE\xspace to explain the dataset well. The second term is the Kullback--Leibler divergence from the Mixture-of-Gaussians (MoG) prior $p({\bf z},c)$ to the variational posterior $q({\bf z},c|{\bf x})$, which regularizes the latent embedding $\bf z$ to lie on a MoG manifold.
\begin{figure}[ht]
\begin{center}
\subfigure[\scriptsize{Epoch 0 (11.35$\%$)}]{\includegraphics[width = 0.3\linewidth]{epoch0.png}}
\subfigure[\scriptsize{Epoch 1 (55.63$\%$)}]{\includegraphics[width = 0.3\linewidth]{epoch1.png}}
\subfigure[\scriptsize{Epoch 5 (72.40$\%$)}]{\includegraphics[width = 0.3\linewidth]{epoch5.png}}
\subfigure[\scriptsize{Epoch 50 (84.59$\%$)}]{\includegraphics[width = 0.3\linewidth]{epoch50.png}}
\subfigure[\scriptsize{Epoch 120 (90.76$\%$)}]{\includegraphics[width = 0.3\linewidth]{epoch120.png}}
\subfigure[\scriptsize{Epoch End (94.46$\%$)}]{\includegraphics[width = 0.3\linewidth]{epochend.png}}
\end{center}
\caption{Illustration of how data is clustered in the latent space learned by VaDE\xspace during
training on MNIST. Different colors indicate different ground-truth classes, and the clustering accuracy at the corresponding epoch is reported in brackets. The latent representations clearly become more and more suitable for clustering during training, which is also reflected by the increasing clustering accuracy.}
\label{fig:epoch_vis}
\end{figure}
To demonstrate the importance of the KL term in Equation~\ref{eqn:analysis_elbo}, we
train an Auto-Encoder (AE) with the same network architecture as VaDE\xspace first, and then apply GMM
on the latent representations from the learned AE, since a VaDE model without the KL term is
almost equivalent to an AE. We refer to this model as AE+GMM.
We also show the performance of using GMM directly on the observed
space (GMM), using VAE on the observed space and
then using GMM on the latent space from VAE (VAE+GMM)\footnote{By doing this, VAE and GMM are optimized separately.},
as well as the performances of LDMGI~\cite{yang10}, AAE~\cite{makhzani16AAE} and DEC~\cite{xie15}, in Figure~\ref{fig:KL}.
The fact that VaDE\xspace outperforms AE+GMM (without KL term) and VAE+GMM significantly confirms the importance of the regularization term and the advantage of jointly optimizing VAE and GMM by VaDE\xspace. We also present the illustrations of
clusters and the way they are changed w.r.t. training epochs on MNIST dataset in Figure~\ref{fig:epoch_vis}, where we map
the latent representations ${\bf z}$ into 2D space by t-SNE~\cite{maaten08}.
\section{Experiments}
\label{sec:experimens}
In this section, we evaluate the performance of VaDE\xspace on 5 benchmarks from different modalities: MNIST~\cite{lecun98},
HHAR~\cite{stisen15}, Reuters-10K~\cite{lewis04}, Reuters~\cite{lewis04} and STL-10~\cite{coates11}. We provide quantitative comparisons of VaDE\xspace with other clustering methods including GMM,
AE+GMM, VAE+GMM, LDGMI~\cite{yang10}, AAE~\cite{makhzani16AAE} and the strong baseline DEC~\cite{xie15}.
We use the
same network architecture as DEC for a fair comparison.
The experimental results show that VaDE\xspace achieves
the state-of-the-art performance on all these benchmarks.
Additionally, we also provide quantitative comparisons with other variants of VAE
on the discriminative quality of the latent representations.
The code of VaDE\xspace is available at \url{https://github.com/slim1017/VaDE}.
\subsection{Datasets Description}
\label{sec:datasets}
The following datasets are used in our empirical experiments.
\begin{itemize}
\item {\bf MNIST}: The MNIST dataset consists of $70000$ handwritten digits.
The images are centered and of size 28 by 28 pixels.
We reshaped each image into a 784-dimensional vector.
\item {\bf HHAR}: The Heterogeneity Human Activity Recognition (HHAR) dataset
contains $10299$ sensor records from smart phones and smart watches.
All samples are partitioned into $6$ categories of human activities
and each sample is of $561$ dimensions.
\item {\bf REUTERS}: There are around $810000$ English news stories labeled with a category tree in the original Reuters dataset. Following DEC, we used $4$ root categories (corporate/industrial, government/social, markets, and economics) as labels and discarded all documents with multiple labels, which results in a $685071$-article dataset.
We computed tf-idf features on the $2000$ most frequent words to represent all articles. Similar to
DEC, a random subset of $10000$ documents is sampled, which is referred to as Reuters-10K,
since some spectral clustering methods (e.g. LDMGI)
cannot scale to the full Reuters dataset.
\item {\bf STL-10}: The STL-10 dataset consists of color
images of 96-by-96 pixel size. There are $10$ classes with $1300$ examples each.
Since clustering directly from raw pixels of high resolution images is rather difficult,
we extracted features of the STL-10 images with ResNet-50~\cite{he16}, which were then used to
test the performance of VaDE\xspace and all baselines. More specifically, we applied a $3\times 3$
average pooling over the last feature map of ResNet-50, yielding features of dimensionality $2048$.
\end{itemize}
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Dataset & $\#$ Samples & Input Dim & $\#$ Clusters \\
\hline\hline
MNIST & $70000$&$784$&$10$\\
HHAR & $10299$&$561$&$6$\\
REUTERS-10K & $10000$&$2000$&$4$\\
REUTERS &$685071$&$2000$&$4$\\
STL-10 &$13000$&$2048$&$10$\\
\hline
\end{tabular}
\end{center}
\caption{Datasets statistics}
\label{table:dataset}
\end{table}
\begin{table*}
\begin{center}
\begin{small}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Method & MNIST & HHAR& REUTERS-10K & REUTERS & STL-10\\
\hline\hline
GMM &$53.73$&$60.34$&$54.72$&$55.81$&$72.44$\\
AE+GMM & $82.18$&$77.67$&$70.13$&$70.98$&$79.83$\\
VAE+GMM & $72.94$&$68.02$&$69.56$&$60.89$&$78.86$\\
LDMGI &$84.09^{\dagger}$&$63.43$&$65.62$&N/A&$79.22$\\
AAE & $83.48$&$83.77$&$69.82$&$75.12$&$80.01$\\
DEC & $84.30^{\dagger}$&$79.86$&$74.32$&$75.63^{\dagger}$&$80.62$\\
VaDE &${\bf 94.46}$&${\bf 84.46}$&${\bf 79.83}$&${\bf 79.38}$&${\bf 84.45}$\\
\hline
\end{tabular}
\end{small}
\begin{minipage}{0.5\textwidth}
{\small
$\dagger$: Taken from \cite{xie15}.
}
\end{minipage}
\end{center}
\caption{Clustering accuracy ($\%$) performance comparison on all datasets.}
\label{table:results}
\end{table*}
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Method & k=3 & k=5 & k=10 \\
\hline\hline
VAE & $18.43$ & $15.69$ & $14.19$\\
DLGMM & $9.14$ & $8.38$ & $8.42$\\
SB-VAE & $7.64$ & $7.25$ & $7.31$\\
VaDE &${\bf 2.20}$ & ${\bf 2.14}$ & ${\bf 2.22}$\\
\hline
\end{tabular}
\end{center}
\caption{MNIST test error-rate ($\%$) for kNN on latent space.}
\label{table:sbvae}
\end{table}
\subsection{Experimental Setup}
\label{sec:exp_config}
As mentioned before, the same network architecture as DEC is adopted by VaDE\xspace for a fair comparison.
Specifically, the architectures of $f$ and $g$ in Equation~\ref{eqn:f1} and Equation~\ref{eqn:g_mu_sigma}
are $10$-$2000$-$500$-$500$-$D$ and $D$-$500$-$500$-$2000$-$10$, respectively, where $D$ is the input dimensionality.
All layers are fully connected. Adam optimizer~\cite{kingma15adam} is used to maximize the ELBO of Equation~\ref{eqn:elbo_fact},
and the mini-batch size is $100$. The learning rate for MNIST, HHAR, Reuters-10K and STL-10 is $0.002$ and
decreases every $10$ epochs with a decay rate of $0.9$, and the learning rate for Reuters is $0.0005$ with
a decay rate of $0.5$ for every epoch.
As for the generative process in Section~\ref{sec:gen_process}, the multivariate Bernoulli distribution
is used for MNIST dataset, and the multivariate Gaussian distribution is used for the others. The number of
clusters is fixed to the number of classes for each dataset, similar to DEC. We will
vary the number of clusters in Section~\ref{exp:n_clusters}.
Similar to other VAE-based models~\cite{sonderby16,kingma16improving}, VaDE\xspace suffers from the
problem that the reconstruction term in Equation~\ref{eqn:analysis_elbo} would be so weak in the beginning
of training that the model might get stuck in an undesirable local minimum or saddle point, from which
it is hard to escape. In this work, pretraining is used to avoid this problem.
Specifically, we use a Stacked Auto-Encoder to pretrain the networks $f$ and $g$.
Then all data points are projected into the latent space $\bf z$ by the pretrained network $g$, where a
GMM is applied to initialize the parameters of
$\lbrace \boldsymbol{\pi}, \boldsymbol{\mu}_c, \boldsymbol{\sigma}_c\rbrace$, $c \in \lbrace 1,\cdots, K\rbrace$.
In practice, few epochs of pretraining are enough to provide a good initialization of VaDE\xspace.
We find that VaDE\xspace is not sensitive to hyperparameters after pretraining. Hence, we did not spend
much effort tuning them.
\subsection{Quantitative Comparison}
\label{sec:quan_comparison}
Following DEC, the performance of VaDE\xspace is measured by {\it unsupervised clustering accuracy (ACC)}, which is
defined as:
\begin{equation}
\textup{ACC}=\max_{m\in \mathcal{M}}\frac{\sum_{i=1}^N{\mathds{1}}\{l_i=m(c_i)\}}{N}\nonumber
\end{equation}
where $N$ is the total number of samples, $l_i$ is the ground-truth
label, $c_i$ is the cluster assignment obtained by the model,
and $\mathcal{M}$ is the set of all possible one-to-one mappings between cluster assignments and labels.
The best mapping can be obtained by using the Kuhn–Munkres algorithm~\cite{munkres57}.
Similar to DEC, we perform $10$ random restarts when initializing all clustering models
and pick the result with the best objective value. As for LDMGI, AAE and DEC, we
use the same configurations as their original papers. Table~\ref{table:results}
compares the performance of VaDE\xspace with other baselines over all datasets.
It can be seen that VaDE\xspace outperforms all these baselines by a large margin on all datasets.
Specifically, on MNIST, HHAR, Reuters-10K, Reuters and STL-10 dataset, VaDE\xspace achieves ACC of $94.46\%$,
$84.46\%$, $79.83\%$, $79.38\%$ and $84.45\%$, which outperforms DEC with a
relative increase ratio of $12.05\%$, $5.76\%$, $7.41\%$, $4.96\%$ and $4.75\%$, respectively.
We also compare VaDE\xspace with SB-VAE~\cite{Nalisnick16SBVAE} and DLGMM~\cite{nalisnickapproximate}
on the discriminative power of the latent representations, since these two baselines cannot
do clustering tasks. Following SB-VAE, the discriminative powers of
the models' latent representations are assessed by running a k-Nearest Neighbors classifier (kNN)
on the latent representations of MNIST. Table~\ref{table:sbvae} shows the error rate of the kNN classifier
on the latent representations. It can be seen that VaDE\xspace outperforms SB-VAE and DLGMM significantly\footnote{We use the same network architecture for VaDE\xspace and SB-VAE in Table~\ref{table:sbvae}
for a fair comparison. Since there is no code available for DLGMM, we take the numbers of DLGMM directly
from \cite{nalisnickapproximate}. Note that \cite{Nalisnick16SBVAE} has already shown
that the performance of SB-VAE is comparable to
DLGMM.}.
\begin{figure}[h]
\begin{center}
\subfigure[GMM]{\includegraphics[width = 0.35\linewidth]{gmm.pdf}}
\hspace{6mm}
\subfigure[VAE]{\includegraphics[width = 0.35\linewidth]{vae.pdf}}
\subfigure[InfoGAN]{\includegraphics[width = 0.35\linewidth]{InfoGAN.pdf}}
\hspace{6mm}
\subfigure[VaDE]{\includegraphics[width = 0.35\linewidth]{vade.pdf}}
\end{center}
\caption{The digits generated by GMM, VAE, InfoGAN and VaDE\xspace. Except (b), digits in the same row come from the same cluster.}
\label{fig:generating}
\end{figure}
Note that although VaDE\xspace can learn discriminative representations of samples,
the training of VaDE\xspace is in a totally \textit{unsupervised} way. Hence, we did not compare VaDE\xspace
with other supervised models.
\subsection{Generating Samples by VaDE\xspace}
\label{sec:exp_generating}
One major advantage of VaDE\xspace over DEC~\cite{xie15} is that it is by nature a
{\it generative} clustering model and can generate highly realistic samples for any specified cluster (class).
In this section, we provide some qualitative comparisons on generating samples among VaDE\xspace, GMM, VAE and the state-of-the-art generative method InfoGAN~\cite{Chen16InfoGAN}.
Figure~\ref{fig:generating} illustrates the generated samples for class $0$ to $9$ of
MNIST by GMM, VAE, InfoGAN and VaDE\xspace, respectively. It can be seen that the digits
generated by VaDE\xspace are smooth and diverse. Note that the classes of the samples from VAE
cannot be specified. We can also see that the
performance of VaDE\xspace is comparable with InfoGAN.
\subsection{Visualization of Learned Embeddings}
\label{sec:exp_visualization}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\linewidth]{visualization.pdf}
\end{center}
\caption{Visualization of the embeddings learned by VAE, DEC and VaDE on
MNIST, respectively. The first row illustrates the ground-truth labels for each digit,
where different colors indicate different labels. The second row demonstrates the
clustering results, where correctly clustered samples are colored with green
and, incorrect ones with red. GT:4 means the ground-truth label of
the digit is $4$, DEC:4 means DEC assigns the digit to the cluster of 4,
and VaDE\xspace:4 denotes that the assignment by VaDE\xspace is $4$, and so on.
It is better to view the figure in color.}\label{fig:embeddings}
\end{figure}
In this section, we visualize the learned representations of VAE,
DEC and VaDE\xspace on MNIST dataset. To this end, we use t-SNE~\cite{maaten08}
to reduce the dimensionality of the latent representation $\bf z$ from
$10$ to $2$, and plot $2000$ randomly sampled digits in Figure~\ref{fig:embeddings}.
The first row of Figure~\ref{fig:embeddings} illustrates the ground-truth labels
for each digit, where different colors indicate different labels. The
second row of Figure~\ref{fig:embeddings} demonstrates the clustering results,
where correctly clustered samples are colored with green and incorrect ones with red.
From Figure~\ref{fig:embeddings} we can see that the original VAE which
used a single Gaussian prior does not perform well in clustering tasks.
It can also be observed that the embeddings
learned by VaDE\xspace are better than those by VAE and DEC, since the number of
incorrectly clustered samples is smaller. Furthermore, incorrectly clustered
samples by VaDE\xspace are mostly located at the border of each cluster, where
confusing samples usually appear. In contrast, a lot of the incorrectly
clustered samples of DEC appear in the interior of the clusters, which
indicates that DEC fails to preserve the inherent structure of the data.
Some mistakes made by DEC and VaDE\xspace are also marked in Figure~\ref{fig:embeddings}.
\subsection{The Impact of the Number of Clusters}
\label{exp:n_clusters}
\begin{figure}[h]
\begin{center}
\subfigure[7 clusters]{\includegraphics[height = 0.33\linewidth]{mnist_7c.pdf}\label{fig:7_clusters}}
\hspace{10mm}
\subfigure[14 clusters]{\includegraphics[height = 0.33\linewidth]{mnist_14c.pdf}\label{fig:14_clusters}}
\end{center}
\caption{Clustering MNIST with different numbers of clusters.
We illustrate samples belonging to
each cluster by rows.}\label{fig:n_clusters}
\end{figure}
So far, the number of clusters for VaDE\xspace is set to the number of classes for each dataset, which is a prior knowledge. To demonstrate VaDE\xspace's representation power as an unsupervised clustering model, we deliberately choose different numbers of clusters $K$. Each row in Figure~\ref{fig:n_clusters} illustrates the samples from a cluster grouped by VaDE\xspace on MNIST dataset, where $K$ is set to $7$ and $14$ in Figure~\ref{fig:7_clusters} and Figure~\ref{fig:14_clusters}, respectively. We can see that, if $K$ is smaller than the number of classes,
digits with similar appearances will be clustered together, such as $9$ and $4$, $3$ and $8$ in Figure~\ref{fig:7_clusters}.
On the other hand, if $K$ is larger than the number of classes, some digits will fall into sub-classes by VaDE\xspace, such as the fatter $0$ and thinner $0$, and the upright $1$ and oblique $1$ in Figure~\ref{fig:14_clusters}.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we proposed Variational Deep Embedding (VaDE\xspace) which embeds the
probabilistic clustering problems into a Variational Auto-Encoder (VAE) framework.
VaDE\xspace models the data generative procedure with a GMM and a neural network, and
is optimized by maximizing the evidence lower bound (ELBO) of the log-likelihood of
data by the SGVB estimator and the {\it reparameterization} trick.
We compared the clustering performance of VaDE\xspace with strong baselines on 5 benchmarks
from different modalities, and the experimental results showed that VaDE\xspace outperforms
the state-of-the-art methods by a large margin. We also showed that VaDE\xspace could generate
highly realistic samples conditioned on cluster information without using any supervised
information during training. Note that although we use a MoG prior for VaDE\xspace in
this paper, other mixture models can also be adopted in this framework flexibly,
which will be our future work.
\section*{Acknowledgments}
We thank the School of Mechanical Engineering of BIT~(Beijing Institute of Technology) and Collaborative Innovation Center of Electric Vehicles in Beijing for their support. This work was supported by the National Natural Science Foundation of China~(61620106002, 61271376). We also thank the anonymous reviewers.
\bibliographystyle{named}
\section{Introduction}
\input{intro.tex}
\section{Related Work}
\input{relat.tex}
\section{Methodology}
\input{method.tex}
\section{Experiments}
\input{exper.tex}
\section{Conclusion}
\input{conclu.tex}
\bibliographystyle{splncs04}
\subsection{Datasets}
Two large-scale datasets, the nuScenes dataset and the Lyft dataset, are used in our experiments. The details of the two datasets are as follows.
\noindent{\textbf{NuScenes Dataset}}~\cite{nuscenes}~~~ It collects 1000 scenes of 20s duration with a 32-beam lidar sensor. The total number of frames is 40,000, sampled at 2Hz, and about 1.4 million 3D boxes are annotated. 10 categories are annotated for 3D detection, including Car, Pedestrian, Bus, Barrier, \emph{etc.} (details in the experimental results). The data is officially split into training and validation sets, and the test results are evaluated at EvalAI\footnote{\url{https://evalai.cloudcv.org/web/challenges/challenge-page/356/overview}}. Furthermore, a new metric is also introduced in the nuScenes dataset, namely the nuScenes detection score (NDS)~\cite{nuscenes}, which quantifies the quality of detections in terms of average classification precision, box location, size, orientation, attributes, and velocity. The mean average precision (mAP) is based on distance thresholds (\emph{i.e.}~0.5m, 1.0m, 2.0m and 4.0m).
The whole range is about 100 meters, and we mainly use the range of 0-50m in full 360 degrees. More detailed descriptions are given in the Supplementary Materials.
\noindent{\textbf{Lyft Dataset}}~\cite{lyft}~~~ It contains one 40-beam roof lidar and two 40-beam bumper lidars; in our experiments, we only use the data from the roof lidar. The data format is similar to that of the nuScenes dataset. In total, 9 categories are annotated for detection, including car, emergency\_vehicle, motorcycle, bus, truck, \emph{etc.}
In total, 22,680 frames are used as training data, and the test set contains 27,468 frames, 30\% of which is used for validation in the Kaggle competition\footnote{\url{https://www.kaggle.com/c/3d-object-detection-for-autonomous-vehicles}}. The evaluation metric is the mean average precision, which is similar to the metric of the COCO dataset but calculates the 3D IoU (with thresholds of 0.5, 0.55, 0.6, 0.65, ..., 0.95). Hence, we name it \textbf{mAP-3D}, and it is worth noting that mAP-3D is much stricter than the mAP in nuScenes and Kitti~\cite{kitti}.
\subsection{Implementation Details}
In our implementation, we use the pillar-based~\cite{pointpillar} method to convert the point cloud to a structured representation.
For the nuScenes dataset, the x, y, z range is ([-49.6, 49.6], [-49.6, 49.6], [-5, 3]) and the pillar size is [0.2, 0.2, 8]. The maximum number of pillars is 30,000 and the maximum number of points per pillar is 20.
For the Lyft dataset, the range is ([-89.6, 89.6], [-89.6, 89.6], [-5, 3]) and the pillar size is [0.2, 0.2, 8] as well. The maximum number of pillars is 60,000 and the maximum number of points per pillar is 12.
For the anchors, we calculate the mean width, length and height of each class and use birdview 2D IoU (width and length) as the matching metric; anchors whose IoU with a ground truth box exceeds the positive threshold are treated as positive, while those whose IoU falls below the negative threshold are treated as negative. The matching thresholds differ across categories.
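The matching rule above can be sketched in a few lines; this is our own illustrative numpy version (the function name and the ignore-label convention are assumptions, and the per-class thresholds would come from a config):

```python
import numpy as np

def assign_anchors(iou, pos_thr, neg_thr):
    """Label anchors by their best birdview IoU with any ground-truth box:
    1 = positive, 0 = negative, -1 = ignored (between the two thresholds)."""
    # Best overlap of each anchor over all ground-truth boxes
    best = iou.max(axis=1)
    labels = np.full(best.shape, -1, dtype=np.int64)
    labels[best >= pos_thr] = 1   # above positive threshold -> positive
    labels[best < neg_thr] = 0    # below negative threshold -> negative
    return labels
```

Anchors falling between the two thresholds contribute to neither the positive nor the negative set, a common convention in anchor-based detectors.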
During inference, multi-class and rotational NMS is employed, where multi-class NMS means applying NMS to each class independently. For a \textbf{fair comparison}, no multi-scale training/testing, SyncBN, or ensembling is applied. For the nuScenes dataset, online ground truth sampling~\cite{second,Zhu2019ClassbalancedGA} is not used.
We also submit the results on these official websites~\footnote{\url{https://www.nuscenes.org/object-detection}}\footnote{\url{https://www.kaggle.com/c/3d-object-detection-for-autonomous-vehicles}} (Our submissions are ``gezi" and ``OIDH" respectively, and both of them are anonymous.).
\noindent{\textbf{Network Details}}~~~
For the point-to-structure step, we follow the network in~\cite{pointpillar}, where a simplified PointNet is used. It contains a linear layer, a BatchNorm layer and a ReLU layer to handle the features of pillars.
For the CNN feature encoding, an FPN-based module is introduced to extract the fused features. Three levels of features are first upsampled with transposed 2D convolutions, and then concatenated.
For the shape-aware grouping heads, objects with similar shape and scale share the same head. For bus, truck and trailer, a heavier head is applied, where two downsample blocks process the features from the FPN. Each downsample block consists of a $3\times3$ 2D convolution layer with stride=2, followed by BatchNorm and ReLU. For the lighter head (\emph{e.g.}~bicycle, motorcycle), a block with stride=1 is used. For the medium head, one downsample block is applied. Note that another block with stride=1 follows each downsample block. The more detailed network structure is shown in the Supplementary Materials.
\noindent{\textbf{Optimization}}~~~ We use the Adam optimizer with cyclic learning rate decay. The maximum learning rate is 3e-3 and the weight decay is 0.001. We train for 60 epochs on the nuScenes dataset and 80 epochs on the Lyft dataset; the batch size is 2 for nuScenes and 1 for Lyft.
\subsection{Results}
\begin{table*}
\small
\caption{Results of multi-class 3D detection on the nuScenes dataset. ``Trail", ``CV", ``Ped", ``MC", ``Bicy", ``TC", ``Bar" indicate trailer, construction vehicle, pedestrian, motorcycle, bicycle, traffic cone, and barrier, respectively. Bold-face and underlined numbers denote the best and second-best, respectively}
\vspace{-1ex}
\setlength{\tabcolsep}{0.4pt}
\begin{center}
\begin{tabular*}{1.0\linewidth}{c|c|c| c| c| c| c| c| c| c|c|c|c|c}
\toprule
Methods & Modality & Car & Truck & Bus & Trail & CV & Ped & MC & Bicy & TC & Bar & mAP &NDS\\
\hline
\hline
Mono~\cite{dism} & RGB & 47.8 & 22.0 & 18.8 & 17.6 & 7.4 & 37.0 & 29.0 & \textbf{24.5} & 48.7 & 51.1 & 30.4 & 38.4\\
\hline
Second~\cite{second} & Lidar & 73.1 & 25.2 & 30.5 & 31.5 & 8.5 & 59.3 & 21.7 & 4.9 & 18.0 & 43.3 & 31.6 & 46.8\\
\hline
PP~\cite{pointpillar} & Lidar & {68.4} & 23.0 & 28.2 & 23.4 & 4.1 & 59.7 & 27.4 & 1.1 & 30.8 & 38.9 & 30.5 & 45.3 \\
\hline
Painting~\cite{Vora2019PointPaintingSF} & Lidar\&RGB & \underline{77.9} & \underline{35.8} & \underline{36.1} & \underline{37.3} & \textbf{15.8} & \textbf{73.3} & \underline{41.5} & \underline{24.1} & \bf{62.4} & \bf{60.2} & \bf{46.4} & \bf{58.1} \\
\hline
SSN & Lidar & \bf{80.7} & \bf{37.5} & \bf{39.9} & \textbf{43.9} & \underline{14.6} & \underline{72.3} & \bf{43.7} & {20.1} & \underline{54.2} & \underline{56.3} & \underline{46.3} & \underline{56.9} \\
\bottomrule
\end{tabular*}
\end{center}
\label{tab:nuscenes}
\vspace{-1ex}
\end{table*}
\noindent{\textbf{Results on nuScenes dataset.}}~~~ In this experiment, we test our model on the nuScenes dataset and report the performance on the test set from the official evaluation server. The results are shown in Table.~\ref{tab:nuscenes}, where we give the detailed AP of each category as well as the other metrics. It can be found that SSN achieves about a 15\% improvement in mAP and 10\% in NDS compared to the lidar-based methods, even for small objects such as pedestrian and traffic cone. Even compared with the Lidar\&RGB fusion method~\cite{Vora2019PointPaintingSF}, our lidar-based model achieves comparable performance and performs better in the main categories of traffic scenarios, such as Car, Truck, Bus and Motorcycle, \emph{etc.}~
Note that the results of PointPillar and Painting~\cite{Vora2019PointPaintingSF} are copied from the original papers; for Second, we re-implement it under our setting with the same hyper-parameters as SSN.
For bicycle, due to its sparsity and low height, it is difficult to identify in the point cloud while it can be recognized in the image; thus the result for Bicycle in image-based detection is better than in 3D detection.
\noindent{\textbf{Results on Lyft dataset.}}~~~ For the Lyft dataset, there is no official split into training and validation sets. Hence, we report the results of the Kaggle competition (30\% of the test data is used for public validation, but the host does not provide the ground truth; we submit the outputs of SSN and our baseline models to obtain the results). As the Lyft dataset is very new, there is no official implementation, so we re-implement PointPillar and Second to perform experiments on it; the optimization method and anchor matching strategy follow SSN.
Table.~\ref{tab:lyft} shows the results of SSN and existing methods on the test set. SSN consistently achieves better performance, with about a 5\% improvement over existing methods. Due to the strict metric (mAP-3D under IoU 0.5 to 0.95), the results on the Lyft dataset are lower than on nuScenes. Note that we only report the results of a single model with single-scale training. The result on the official website is 18.1\%, which is obtained with multi-scale training.
\noindent{\textbf{Qualitative Analysis.}}~~~ We show several samples from the challenging Lyft dataset in Figure.~\ref{fig:lyft}. For ease of interpretation, we show the 3D boxes from the BEV perspective. It can be found that car, bus and other vehicles achieve decent performance. Some false positives and missed objects appear at far range (about 50m). \textbf{TSNE visualization.} We use TSNE to visualize the distribution of the shape signature in Figure.~\ref{fig:tsne}. Four categories in nuScenes, including Car, Truck, Motorcycle and Ped, are sampled for a clear visual effect. We sample 50 instances for each category, 25 of them at distance $<$ 40 meters and the others at distance $>$ 40 meters. It can be observed that the discrepancy across different classes is clear, which indicates the capability of our shape signature to separate the shape distributions of different categories.
Meanwhile, the distribution of the shape signature within the same class differs with distance (points at distance $<$ 40m and points at distance $>$ 40m cluster in different regions accordingly), which demonstrates that the shape signature acts as a soft (not hard) constraint and keeps the shape distribution consistent (not identical).
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{lyft.pdf}
\end{center}
\vspace{-4ex}
\caption{We show the qualitative analysis of our model. These point clouds are sampled from the Lyft dataset. Green boxes are the ground truth and red boxes are the model's predictions. To give a clear visualization, we crop the point cloud to the range [-50, 50]
(\textbf{Best viewed in color)}.}
\label{fig:lyft}
\end{figure*}
\subsection{Ablation Studies}
In this section, we perform thorough ablation experiments to investigate the effect of different components in our method, including the shape-aware grouping heads and the shape signature, the scalability of the proposed shape signature with various backbone networks, and a comparison with another shape signature.
\begin{minipage}[t!]{\textwidth}
\begin{minipage}[t]{0.4\textwidth}
\makeatletter\def\@captype{table}\makeatother
\caption{Results on test set of Lyft dataset}
\small
\begin{tabular*}{1.0\linewidth}{c|c|c}
\toprule
Methods & Modality & mAP-3D \\
\hline
\hline
Voxelnet~\cite{voxelnet} & Lidar & 10.1 \\
\hline
PointPillar~\cite{pointpillar} & Lidar & \underline{13.4}\\
\hline
Second~\cite{second} & Lidar & 13.0 \\
\hline
SSN & Lidar &\bf{17.9}\\
\bottomrule
\end{tabular*}
\label{tab:lyft}
\end{minipage}
\hspace{.1in}
\begin{minipage}[t]{0.5\textwidth}
\makeatletter\def\@captype{table}\makeatother
\caption{Experimental results of ablation studies on two key components on nuScenes dataset}
\small
\begin{tabular*}{1.0\linewidth}{c|c|c}
\toprule
Methods & mAP & NDS \\
\hline
\hline
PointPillar~\cite{pointpillar} & 29.4 & 44.9 \\
\hline
$+$ Shape-aware Grouping Heads & 40.6 & 51.3\\
\hline
$+$ Shape Signature& 45.3 & 57.0 \\
\bottomrule
\end{tabular*}
\label{tab:ab1}
\end{minipage}
\end{minipage}
\noindent{\textbf{Effect of Different Components.}}~~~ In this experiment, we choose PointPillar as the backbone, and perform the ablation study by adding the components step by step. Due to the limited number of submissions to the evaluation server, we report the results on the official validation set of the nuScenes dataset. As shown in Table.~\ref{tab:ab1}, the two key components, shape-aware grouping heads and shape signature, achieve significant performance gains, with 6.4\% and 5.7\% improvements in NDS respectively, which demonstrates that the shape information does improve multi-class detection.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{tsne.pdf}
\end{center}
\vspace{-4ex}
\caption{We show the distribution of our shape signature via TSNE.
(\textbf{Best viewed in color)}.}
\label{fig:tsne}
\end{figure*}
\noindent{\textbf{Scalability of Shape Signature.}}~~~ To investigate the scalability of the proposed shape signature, we perform a thorough study in which the shape signature is combined with different backbone networks and tested on different datasets. The detailed results are shown in Table.~\ref{tab:ab2}. For the backbone networks, we use PointPillar and Second, which utilize 2D and 3D convolution networks respectively, and thus cover the mainstream of 3D object detection. It can be found that the shape signature greatly improves the performance for different backbone networks on various datasets.
Furthermore, it also achieves a consistent performance gain across different datasets, \emph{i.e.}~the nuScenes, Lyft and Kitti~\cite{kitti} datasets. Note that the mAP-3D in Lyft is similar to that of the COCO dataset, which is much more difficult than the mAP in nuScenes and Kitti.
From these two perspectives, we can see that the proposed shape signature possesses good scalability, and the exploration of shape information does improve the capability of detection networks to discriminate multiple categories.
\begin{table}[ht]
\centering
\caption{Experimental results of ablation studies on the scalability of the shape signature. We perform the ablation with different backbones (PointPillar and Second) on three datasets (nuScenes, Lyft and Kitti). Note that for nuScenes, we report the results on the validation set, and for Lyft, we report the results on the public test set in Kaggle. For Kitti, we report the moderate mAP with IoU=0.7 on two categories (car and pedestrian). ``PP" denotes PointPillar and ``SS" denotes our shape signature}
\vspace{-2ex}
\label{tab:ab2}
\begin{tabular}{c|l|c|c|c|c|c|c|c}\toprule
Dataset&Methods & mAP & NDS & Dataset & mAP-3D & Dataset & mAP@car & mAP@ped\\
\midrule
\multirow{4}{*}{nuScenes}& PP\cite{pointpillar} & 29.4 & 44.9 & \multirow{4}{*}{Lyft} & 13.4 & \multirow{4}{*}{Kitti} & 74.3 & 41.9\\
\cline{2-4}\cline{6-6}\cline{8-9}
&\textbf{$+$ SS} & \textbf{36.6} & \textbf{49.8} & & \textbf{16.2} & & \textbf{76.2} & \textbf{43.5}\\
\cline{2-4}\cline{6-6}\cline{8-9}
&Second\cite{second} & 31.1 & 46.9 & & 13.0 & & 73.7 & 42.6\\
\cline{2-4}\cline{6-6}\cline{8-9}
&\textbf{$+$ SS} & \textbf{34.3} & \textbf{48.9} & & \textbf{15.4} & & \textbf{75.4} & \textbf{44.1}\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\small
\caption{Experimental results of Shape-aware grouping heads \emph{v.s.} One-to-one heads and Implicit shape signature \emph{v.s.} our shape signature. O-to-O Heads and SG Heads denote the one-to-one heads and shape-aware grouping heads, respectively}
\vspace{-2ex}
\setlength{\tabcolsep}{3.0pt}
\centering
\vspace{1ex}
\begin{tabular*}{1.0\linewidth}{l|c||c|c||c|c}
\toprule
Methods & PP~\cite{pointpillar} & PP $+$ O-to-O Heads & \textbf{PP $+$ SG Heads} & PP $+$ IS\cite{IS} & \textbf{PP $+$ SS}\\
\hline
\hline
mAP & 29.4 & 32.0 & \textbf{39.1} & 31.4 & \textbf{36.6} \\
\hline
NDS & 44.9 & 46.2 & \textbf{51.0} & 46.7 & \textbf{49.8}\\
\bottomrule
\end{tabular*}
\label{tab:ab3}
\end{table}
\noindent{\textbf{Shape-aware Grouping Heads {\em{v.s.}} One-to-One Heads.}}~~~ To verify the effectiveness of the shape-aware grouping heads, we compare them to one-to-one heads, in which each head covers exactly one category. The difference between the two types of heads is whether shape information is investigated. From the results shown in Table.~\ref{tab:ab3}, it can be found that the shape-aware grouping heads perform much better than the one-to-one heads on both metrics, which further demonstrates that shape information benefits multi-class discrimination. Moreover, the shape grouping strategy, which groups objects with similar shape and scale to aid the exploration of shape information, is also more effective than the one-to-one strategy.
\noindent \textbf{Comparison with another Shape Signature.}~~~ The previous work~\cite{IS} provides an implicit shape representation for instance segmentation. We adapt this approach to point clouds and obtain an implicit shape signature of the same dimension (denoted ``IS"). We compare ``IS" with our shape signature (``SS") in Table.~\ref{tab:ab3}. It can be found that our shape signature outperforms the implicit shape signature by a large margin, because ``SS" better handles the difficulties of point clouds through completion and robustness enhancement.
\noindent \textbf{Dimension of Shape Signature.}~~~ We use the top 3 coefficients of the Chebyshev approximation, because they principally and effectively cover the shape function. For example, for the bird-view shape vector of a car (we show the full coefficients), [1.93, -0.65, 0.083, 4.68e-03, 1.064e-05, $\dots$], it can be found that the top 3 coefficients contain the main knowledge and are therefore appropriate as the objective.
\subsection{Overview}
Given a point cloud, our goal is to localize and classify the multi-class target objects. Unlike single-class detectors, we aim to obtain a detector which can effectively distinguish objects from multiple categories. To this end, we propose a multi-class 3D detection framework based on shape information exploration. The basic idea is to utilize the shape information via two key ingredients, \emph{i.e.}~the shape signature objective and the shape-aware grouping heads, to benefit multi-class classification.
As shown in Fig.~\ref{fig:pileline}, our framework consists of four components, \emph{i.e.}~point-to-structure, pyramid feature encoding, shape-aware grouping heads and multi-task objectives, where point-to-structure and pyramid feature encoding are flexible (\emph{i.e.}~multiple options are available). The key components of SSN are the shape signature objective and the shape-aware grouping heads.
In particular, during training the shape signature objective guides the learning of discriminative features via back-propagation, benefiting multi-class discrimination. After training, the shape signature objective is no longer needed. In what follows, we present the details of the shape signature and SSN.
\subsection{Shape Signature}
Given the ground truth points of an object, we parameterize the shape information of the object with the proposed shape signature, and then apply the obtained shape signature vector as a soft constraint to improve the features' capability of multi-class discrimination.
As mentioned above, the desired shape signature should have two properties: 1) compact and effective as a part of the objective; 2) robust to sparsity and noise. To achieve this, we introduce several operations to handle the issues of point clouds. As shown in Fig.~\ref{fig:ss}, the shape signature contains two components, shape completion and shape embedding, where shape completion consists of Transform and Symmetry, and shape embedding involves Projection, Convex Hull, Angle-Radius and Chebyshev Fitting.
\vspace{-1ex}
\subsubsection{Shape Completion}
Since a lidar scan only covers a partial observation of an object, shape investigation is limited. We thus introduce shape completion to tackle this issue, which consists of the following steps.
\noindent \textbf{Transform.}~~~ The points of a target object are located in the scene frame. We first transform the center of the ground truth box to the origin, and use the forwarding direction as the reference axis.
\noindent \textbf{Symmetry.}~~~ Lidar scans can only cover two or three faces of an object, and this partial observation would affect the investigation of shape. We introduce centro-symmetry to complete the partial view. From Fig.~\ref{fig:ss} (b), we can see that after symmetry, the points of the target object become denser and the observation more complete.
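As an illustration, the Transform and Symmetry steps can be sketched in a few lines of numpy. This is our own sketch, not the paper's code: the rotation convention is assumed, and we mirror only the ground-plane coordinates (x, y) while keeping z, one plausible reading of centro-symmetry for upright objects.

```python
import numpy as np

def complete_by_symmetry(points, box_center, yaw):
    """Transform object points into the box frame (box center at the
    origin, forward direction along the x-axis), then densify the
    partial scan by mirroring the points through the vertical axis."""
    # Transform: translate so the box center becomes the origin
    shifted = points - box_center
    # Rotate so the forward direction aligns with the x-axis
    c, s = np.cos(-yaw), np.sin(-yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    canonical = shifted @ rot.T
    # Symmetry: mirror x and y through the origin, keep height z
    mirrored = canonical * np.array([-1.0, -1.0, 1.0])
    return np.concatenate([canonical, mirrored], axis=0)
```

The output doubles the point count, filling in the faces the sensor could not observe.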
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{shape_signature2.pdf}
\end{center}
\vspace{-3ex}
\caption{We show the workflow of the proposed shape signature. Two major components, \emph{i.e.}~Shape Completion and Shape Embedding, are illustrated with two dashed rectangles. Specifically, step (a) is to transform the center of box to the origin point. (b) is the symmetry for completing the partial observation. (c) is to project the 3D points into three views. (d) is to extract the convex hull to enhance the robustness to sparsity. (e) is the Angle-Radius and step (f) is the Chebyshev fitting to get the final shape vector.
(\textbf{Best viewed in color)}.}
\label{fig:ss}
\vspace{-3ex}
\end{figure*}
\vspace{-1ex}
\subsubsection{Shape Embedding} We then introduce the following operations to achieve a compact and effective shape embedding.
\noindent \textbf{Projection.}~~~ Given the completed points, we project the 3D points to three 2D views, \emph{i.e.}~bird view, front view and side view.
Through the projection, the 3D points are decoupled into several 2D planes, which can thoroughly describe the 3D shape while reducing the number of parameters.
\noindent \textbf{Convex Hull.}~~~ After projection, we obtain the 2D points of the different views. However, the raw 2D points are of limited use for effectively representing the shape, and inner-sparsity still exists. Hence, the convex hull is introduced to characterize these 2D points and emphasize the contour of each view, thus being robust to the inner-sparsity. Furthermore, the contour of the 2D points also maintains the scale information, which is an important factor for multi-class discrimination (see Fig.~\ref{fig:ss} (d)).
\noindent \textbf{Angle-Radius.}~~~ To describe the convex hull and highlight the contour shape and scale, we design an angle-radius parametric function $f(\theta)$. We use the center of the ground truth box as the origin point $\sigma$ and densely sample angles $\theta$. The function is
$f(\theta) = dist(\sigma \stackrel{\theta}{\longrightarrow} \mathbb{C})$, where $\mathbb{C}$ is the convex hull and $dist$ is the distance from the origin to the intersection point (\emph{i.e.}~the radius). In our implementation, we sample 360 angles and calculate the radius for each.
From Fig.~\ref{fig:ss} (e) (see the aspect ratios), it can be seen that the function $f(\theta)$, involving angle and radius, does well in maintaining the shape and scale of the contour. However, the dense sampling also produces a long vector (360 dimensions), which is undesirable for the objective. Hence, to shorten the long vector representation and further enhance the robustness against noise (\emph{e.g.}~outliers in the 2D points), we introduce Chebyshev Fitting to process the angle-radius function $f(\theta)$.
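To make the angle-radius construction concrete, the following sketch computes $f(\theta)$ for a convex hull given as counter-clockwise 2D vertices. The ray-casting via edge half-planes is our own formulation (the paper does not specify the implementation), and it assumes the origin lies strictly inside the hull.

```python
import numpy as np

def angle_radius(polygon, n_angles=360):
    """polygon: (M, 2) convex hull vertices in counter-clockwise order.
    Returns the radius from the origin to the hull boundary at n_angles
    evenly spaced angles, i.e. dense samples of f(theta)."""
    # For a CCW edge p -> q, the outward normal is (q_y - p_y, p_x - q_x)
    p = polygon
    q = np.roll(polygon, -1, axis=0)
    normals = np.stack([q[:, 1] - p[:, 1], p[:, 0] - q[:, 0]], axis=1)
    # Each edge satisfies normal . x = offset on the edge line
    offsets = np.sum(normals * p, axis=1)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    radii = np.empty(n_angles)
    for i, t in enumerate(thetas):
        d = np.array([np.cos(t), np.sin(t)])
        proj = normals @ d
        # The ray exits through edges whose outward normal faces it;
        # the boundary hit is the nearest such intersection
        radii[i] = (offsets[proj > 1e-9] / proj[proj > 1e-9]).min()
    return radii
```

For a square hull the sampled radii reproduce the expected geometry: the radius is smallest toward the edge midpoints and largest toward the corners.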
\noindent \textbf{Chebyshev Fitting.}~~~ Chebyshev polynomial fitting~\cite{cheb} provides an approximation that is close to the best polynomial approximation of a function under the maximum norm. Our goal is to apply Chebyshev polynomials to approximate the angle-radius function, and then use their coefficients as the final shape vector.
There are two kinds of Chebyshev polynomials~\cite{cheb}, and we use the Chebyshev polynomials of the first kind. The first kind $T_n(x)$ is defined by the recurrence relation:
\begin{align}
T_0(x) &= 1, T_1(x) = x,\\
T_{n+1}(x) &= 2xT_n(x) - T_{n-1}(x).
\end{align}
Hence, the Chebyshev approximation can be written as a sum of the $T_n(x)$:
\begin{align}
f(x) \approx \sum_{n=0}^{N} \alpha_n T_n(x),
\end{align}
where $\alpha_n$ are the coefficients. These coefficients can be computed at the sample points $x_n$ with the formulas:
\begin{align}
\alpha_0 &= \frac{1}{N+1}\sum_{n=0}^{N}f(x_n)T_0(x_n),\\
\alpha_j &= \frac{2}{N+1}\sum_{n=0}^N f(x_n)T_j(x_n), \quad j \ge 1.
\end{align}
Since the Chebyshev approximation yields $N+1$ coefficients, we truncate $\alpha$ to the top $k$ terms. For each view, the top $k$ coefficients form the shape vector. The final shape signature is $ [\underbrace{\alpha_1, \dots, \alpha_k}_{\text{Birdview}}, \underbrace{\alpha_1, \dots, \alpha_k}_{\text{Sideview}}, \underbrace{\alpha_1, \dots, \alpha_k}_{\text{Frontview}}]$. In our implementation, we use $k$=3, so the final shape signature vector has dimension 9, which is suitable to serve as an objective for the network.
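Numerically, the fitting and truncation step can be sketched with numpy's Chebyshev utilities. This is an illustrative sketch: the fitting degree of 8 and the mapping of the 360 angle samples onto $[-1, 1]$ are our own choices, not necessarily the paper's.

```python
import numpy as np

def chebyshev_shape_vector(radii, k=3, deg=8):
    """Fit the angle-radius samples with a Chebyshev series (first kind)
    and keep the leading k coefficients as the per-view shape vector."""
    n = len(radii)
    # Map the angle sample positions into Chebyshev's natural domain [-1, 1]
    x = np.linspace(-1.0, 1.0, n)
    # Least-squares fit in the Chebyshev basis T_0, ..., T_deg
    coeffs = np.polynomial.chebyshev.chebfit(x, radii, deg=deg)
    return coeffs[:k]
```

A constant contour (a circle centered at the origin) should yield its radius as the leading coefficient and near-zero higher-order terms, which matches the intuition that the low-order coefficients carry the dominant shape and scale information.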
\noindent\textbf{Some Extreme Cases.}~~~ Due to the limitations of the lidar sensor and human annotators, some ground truth boxes contain 5 or fewer points, or even 0 points due to incorrect labeling. For these boxes, it is hard to model the shape information, so we use the average encoding of the corresponding category to represent their shape vectors.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{pipeline.pdf}
\end{center}
\vspace{-2ex}
\caption{The pipeline of our framework SSN. Four major components are illustrated with four dashed rectangles. The first one is the Point-to-Structure part, which converts the raw points into the structured representation, such as voxels~\cite{second,voxelnet} or pillars~\cite{pointpillar}. The second is the pyramid feature encoding part. The third one is the shape-aware grouping heads, which consist of multiple branches for objects with similar shape and scale. The final part is the objective, including classification, localization and shape signature regression.
(\textbf{Best viewed in color}).}
\label{fig:pileline}
\end{figure*}
\subsection{SSN: Shape Signature Networks}
Based on the proposed shape signature, we design SSN to achieve effective multi-class 3D detection. We first describe each component, especially the two key ingredients, \emph{i.e.}~the shape-aware grouping heads and the shape signature objective; we then integrate the different parts toward the unified goal of exploring shape information to better distinguish multi-class objects.
\vspace{1ex}
\noindent\textbf{Point-to-Structure.}~~~ Since the organization of a point cloud is unstructured, the first step is to transform the point cloud into a structured representation. As mentioned above, multiple options are available for this part, such as the voxel-based~\cite{second,voxelnet}, pillar-based~\cite{pointpillar}, or bird-view~\cite{mv3d} representation. After obtaining the structured representation, subsequent 2D or 3D convolution networks can be applied. In our implementation, we choose the pillar-based representation to structure the point clouds. Furthermore, we also test the shape signature with another structured representation (voxel-based), and it shows good scalability.
\vspace{1ex}
\noindent\textbf{Pyramid Feature Encoding.}~~~ We follow the idea of FPN~\cite{FPN} to perform the feature encoding. A top-down convolutional network is first applied to extract the feature from multiple spatial resolutions. Then all features are fused together through upsampling and concatenation.
\noindent\textbf{Shape-aware Grouping Heads.}~~~
Since multi-class target objects vary significantly in scale and shape, we propose the shape-aware grouping heads to account for this variation in multi-class discrimination. The basic idea is to create multiple heads, in which objects with similar scale and shape share the weights. The reasons mainly lie in the following: 1) objects with different scale and shape should have different heads. For example, the head for bus needs to be heavier (\emph{i.e.}~deeper) than the head for bike due to its large scale, because a heavier head yields a larger receptive field. 2) shape grouping heads can perform a coarse shape exploration and also alleviate the interference from other groups.
As shown in Fig.~\ref{fig:pileline}, the design of the shape-aware grouping heads follows the spirit of ``larger object, heavier head". Based on the shape and scale of the target objects, we group bus, truck and trailer together with a heavier head, gather bicycle and motorcycle with a lighter head, and treat car with a medium head. Each head only covers the prediction of the corresponding categories. By integrating the above components, an SSD-based detection framework is formed.
\subsection{Multi-task Objectives}
In our framework, there are three objectives, \emph{i.e.}~multi-class classification, localization regression and shape vector regression. For the multi-class classification, we follow the previous work~\cite{second} and use the focal loss~\cite{focal}:
\begin{align}
\mathcal{L}_{cls} = -\alpha_t(1-p_t)^{\gamma}\log(p_t),
\end{align}
where $p_t$ is the class probability of the default box, and we use $\alpha=0.25$ and $\gamma=2$.
For the localization loss, we use the smooth L1 loss to minimize the distance between the predictions and the localization residuals~\cite{second}:
\begin{align}
\mathcal{L}_{loc} = \text{SmoothL1}(\triangle b),
\end{align}
where $\triangle b$ are the localization residuals, including the center ($x,y,z$), scale ($w,h,l$) and rotation ($\theta$).
Unlike the localization branch, which regresses residuals, the network is trained to directly regress the shape vector. For the shape regression, we also apply the smooth L1 loss:
\begin{align}
\mathcal{L}_{shape} = \text{SmoothL1}(\mathbb{S}),
\end{align}
where $\mathbb{S}$ is the shape vector.
The total objective of three tasks is therefore:
\begin{align}
\mathcal{L} = \beta_1 \mathcal{L}_{cls} + \beta_2 \mathcal{L}_{loc} + \beta_3 \mathcal{L}_{shape},
\end{align}
where the $\beta_i$ are constant loss weights. As the shape loss is much larger than the localization and classification losses, we set $\beta_1 = 1.0$, $\beta_2 = 1.0$ and $\beta_3 = 0.5$ to balance the value scales.
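The loss arithmetic above can be sketched directly in numpy. This is an illustrative sketch of the three terms and their weighted sum only, not the actual training code; it ignores per-anchor masking and normalization details, which the paper does not spell out here.

```python
import numpy as np

def focal_loss(p_t, alpha_t=0.25, gamma=2.0):
    # L_cls = -alpha_t * (1 - p_t)^gamma * log(p_t)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

def smooth_l1(x):
    # Quadratic near zero, linear beyond |x| = 1
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x * x, ax - 0.5)

def total_loss(p_t, loc_residuals, shape_error,
               beta1=1.0, beta2=1.0, beta3=0.5):
    """Weighted sum of the three objectives; beta3 = 0.5 damps the
    larger shape term, as stated in the text."""
    l_cls = focal_loss(p_t).mean()
    l_loc = smooth_l1(loc_residuals).mean()
    l_shape = smooth_l1(shape_error).mean()
    return beta1 * l_cls + beta2 * l_loc + beta3 * l_shape
```

The smooth L1 transition at $|x| = 1$ keeps gradients bounded for large residuals while remaining smooth near zero, which is why it is the standard choice for box regression.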
\section{Introduction}
In recent years, the quantity of available source code has been growing exponentially \cite{source_growth}. Code reuse at this scale is predicated on understanding and searching through a massive number of projects and source code documents. The ability to generate meaningful, semantic labels is key to comprehending and searching for relevant source code, especially as the diversity of programming languages, libraries, and code content continues to expand.
The search functionality for current large code repositories, such as GitHub \cite{github} and SourceForge \cite{sourceforge}, will match queried terms in the source code, comments, or documentation of a project. More sophisticated search approaches have shown better performance in retrieving relevant results, but they often insufficiently handle scale, breadth, or ease of use. Santanu and Prakash \cite{patterns} develop pattern languages for C and PL/AS that allow users to write generic code-like schematics. Although the schematics locate specific source code constructs, they do not capture the general functionality of a program and scale poorly to large code corpora. Bajracharya et al. \cite{sourcerer} develop a search engine called Sourcerer that enhances keyword search by extracting features from its code corpus. Sourcerer scales well to large corpora, but it is still hindered by custom language-specific parsers. Suffering from a similar problem, Exemplar \cite{exemplar} is a system that tracks the flow of data through various API calls in a project. Exemplar also uses the documentation for projects/API calls in order to match a user's keywords. Recent applied works have similar shortcomings \cite{openhub} \cite{krugle} \cite{searchcode} \cite{sourcegraph}. Creating a solution that operates across programming languages, libraries, and projects is difficult due to the complexity of modeling such a huge variety of code.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{code_example_border}
\caption{An example prediction of our model. The input code snippet is on the left, while the predicted labels and their raw certainties are on the right. Keyword matching on the predicted labels would not have been able to locate this code. }
\label{fig:code_snippet}
\end{figure}
As a step in that direction, we present a novel framework for generating labels for source code of arbitrary language, length, and domain. Using a machine learning approach, we capitalize on a wealth of crowdsourced data from Stack Overflow (SO) \cite{stackoverflow}, a forum that provides a constantly growing source of code snippets that are user-labeled with programming languages, tool sets, and functionalities. Prior works have attempted to predict a single label for an SO post \cite{coocc_pred} \cite{predicting_tags} using both the post's text and source code as input. To our knowledge, our work is the first to use Stack Overflow to predict exclusively on source code. Additionally, prior methods do not attempt multilabel classification, which becomes a significant issue when labeling realistic source code documents instead of brief SO snippets. Our approach utilizes SO's code snippets to simultaneously model thousands of concepts and predict on previously unseen source code, as demonstrated in Fig. \ref{fig:code_snippet}.
We construct a deep convolutional neural network that directly processes source code documents of arbitrary length and predicts their functionality using pre-existing Stack Overflow tags. As users ask questions about new programming languages and tools, the model can be retrained to maintain up-to-date representations.
Our contributions are as follows:
\begin{itemize}
\item First work, to our knowledge, to introduce a baseline for multilabel tag prediction on Stack Overflow posts.
\item Convolutional neural network architecture that can handle arbitrary length source code documents and is agnostic to programming language.
\item State-of-the-art top-1 accuracy (79\% vs 65\% \cite{predicting_tags}) for predicting tags on Stack Overflow posts, using only code snippets as input.
\item Approach that enables tagging of source code corpora external to Stack Overflow, which is validated by a human study.
\end{itemize}
We organize the rest of the paper as follows: section \ref{related} discusses related works, section \ref{data} details data preprocessing and correction, section \ref{methodology} explains our neural network architecture and validation, section \ref{results} displays our results, section \ref{challenges} presents challenges and limitations, and section \ref{conclusions} considers future work.
\section{Related Work} \label{related}
Due to the parallels between source code and natural language \cite{naturalness} \cite{surveybigcodenatural}, we find that recent work in the natural language processing (NLP) domain is relevant to our problem. Modern NLP approaches have generated state-of-the-art results with long short-term memory neural networks (LSTMs) and convolutional neural networks (CNNs). Sundermeyer, Schl{\"u}ter, and Ney \cite{lstm_language} have shown that LSTMs perform better than n-grams for modeling word sequences, but the vocabulary size for word-level models is often large, requiring a massive parameter space. Kim, Jernite, Sontag, and Rush \cite{character_aware} show that by combining a character-level CNN with an LSTM, they can achieve comparable results while having 60\% fewer parameters. Further work shows that CNNs are able to achieve state-of-the-art performance without the training time and data required for LSTMs \cite{nn_models}. In the source code domain, however, prior work has utilized a wide variety of methods.
In 1991, Maarek, Berry, and Kaiser \cite{ir_libraries} recognized that there was a lack of usable code libraries. Libraries were difficult to find, adapt, and integrate without proper labeling, and locating components functionally close to a given topic posed a challenge. The authors developed an information retrieval approach leveraging the co-occurrence of neighboring terms in code, comments, and documentation.
More recently, Kuhn, Ducasse, and G{\'\i}rba \cite{semantic_clustering} apply Latent Semantic Indexing (LSI) and hierarchical clustering in order to analyze source code vocabulary without the use of external documentation. LSI-based methods have had success in the code comprehension domain, including document search engines \cite{lsi_search} and IDE-integrated topic modeling \cite{relational_topics}. Although the method seems to perform well, labeling an unseen source code document requires reclustering the entire dataset. This is a significant setback for maintaining a constantly growing corpus of labeled documents.
In the context of source code labeling, supervised methods are mostly unexplored. A critical issue in this task is the massive amount of labeled data required to create the model. A few efforts have recognized Stack Overflow for its wealth of crowdsourced data. Saxe, Turner, and Blokhin \cite{crowd} search for Stack Overflow posts containing strings found in malware binaries, and use the retrieved tags to label the binaries. Kuo \cite{coocc_pred} attempts to predict tags on SO posts by computing the co-occurrence of tags and words in each post. He achieves a 47\% top-1 accuracy, which in this context is the task of predicting only one tag per post.
Clayton and Byrne \cite{predicting_tags} also attempt to predict a tag for SO posts. They invest a great deal of effort in feature extraction inspired by ACT-R's declarative memory retrieval mechanisms \cite{act-r}. Utilizing logistic regression, they achieve a 65\% top-1 accuracy.
In this work, we generate a more complex machine learning model than those present in previous attempts. Because we intend to generalize our model to source code files, we make our tests stricter by only using the actual code inside Stack Overflow posts as inputs to the model. Despite the information loss from not taking advantage of the entire post text, we still further improve on the performance of prior work and obtain a 78.7\% top-1 accuracy.
\section{Data} \label{data}
The primary goal of our work is to create a machine learning system that will classify the functionality of source code. We achieve this by leveraging Stack Overflow, a large, public question and answer forum focused on computer programming. Users can ask a question, provide a response, post code snippets, and vote on posts. The SO dataset provides several advantages in particular: a huge quantity of code snippets; a wide set of tags that cover concepts, tools, and functionalities; and a platform that is constantly updated by users asking and answering questions about new technologies. Due to the complexity of the data, we use this section to discuss the data's characteristics and our preprocessing procedures in detail.
\begin{figure}[t]
\centering
\includegraphics[width=.95\linewidth]{so_example_totally_marked}
\caption{A Stack Overflow thread with a question and answer. The thread's tags are boxed in red and the code snippets are boxed in blue. For the purpose of training our model, the tags are the output labels and the code snippets are the input features. We can see from this example that the longer snippets look like valid code, while the shorter snippets are not as useful.}
\label{fig:so_post}
\end{figure}
Fig. \ref{fig:so_post} is an example of a Stack Overflow thread. Users who ask a question are allowed to attach a maximum of five tags. Although there is a long list of previously used tags, users are free to enter any text. The tags are often chosen with programming language, concepts, and functionality in mind. The tags for this example, boxed in red, are ``python," ``list," and ``slice." Additionally, any user is allowed to block off text as a code snippet. In this example, the user providing an answer uses many code snippets, which have been boxed in blue. Although the code snippets may describe a particular functionality, they do not necessarily represent a complete or syntactically correct program.
Our initial intuition is that the code snippets can simply be input into a machine learning model with the user-created tags as labels. This trained model would then be able to accept any source code and provide tags as output. As we further analyze the data, several questions need to first be resolved, including how to associate tags with snippets, what constitutes a single code sample, and which data should be filtered from the dataset.
Stack Overflow's threads are the fundamental pieces of our training data. The publicly available SO data dump provides over 70 million source code snippets with labels that would be useful for real world projects. Because the tags are selected at the thread level while snippets occur in individual posts, we assign the thread's tags to each post in that thread. Since a single post can have many code snippets, we choose to concatenate the snippets using newline characters as separators in order to preserve a user's post as a single idea.
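The assembly of training pairs described above can be sketched as follows. The `threads` structure here is a hypothetical simplification of the actual Stack Overflow data dump schema: thread-level tags are copied to each post, and a post's snippets are joined with newlines to preserve the post as a single idea.

```python
def build_samples(threads):
    """threads: list of dicts with 'tags' and 'posts', where each post is
    a list of code-snippet strings. Returns (snippet_text, tags) pairs."""
    samples = []
    for thread in threads:
        tags = thread["tags"]  # thread-level tags apply to every post
        for post_snippets in thread["posts"]:
            if not post_snippets:
                continue
            # concatenate a post's snippets so it remains a single idea
            samples.append(("\n".join(post_snippets), tags))
    return samples

threads = [{
    "tags": ["python", "list", "slice"],
    "posts": [["a = [1, 2, 3]", "a[0:2]"], []],
}]
print(build_samples(threads))
# -> [('a = [1, 2, 3]\na[0:2]', ['python', 'list', 'slice'])]
```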
Although these transformations ensure that a post will suffice as input to a language-level model, they do not guarantee the usefulness of the snippets themselves. The following section will address several problems with short, uninformative code snippets, user error in tagging posts and generating code with the correct functionality, and the long-tailed distribution of unusable tags.
\subsection{Statistics and Data Correction}
As of December 15, 2016, the Stack Overflow data dump contains 24,158,127 posts that have at least one code snippet, 73,934,777 individual code snippets, and 46,663 total tags. Despite the large amount of data, there is a severe long-tailed phenomenon that is common in many online communities \cite{onlineproperties}. The distributions of code-snippet length and number of tags per post are of particular importance to our problem.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{stats/log_snip_length_distr}
\caption{The distribution of snippet lengths in the full dataset, with frequencies logarithmically scaled. Although short code snippets are extremely common, they have limited value.}
\label{fig:log_snip_length}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{stats/zoomed_snip_length_distr}
\caption{A zoomed view of the snippet length distribution, with 1 bin equal to 1 character. There are many strings that are empty or only a few characters long.}
\label{fig:zoomed_snip_length}
\end{figure}
Fig. \ref{fig:log_snip_length} shows the distribution of individual snippet lengths, measured in number of characters, throughout Stack Overflow. As one would expect, the longer snippets are many orders of magnitude less frequent than the shorter snippets. Fig. \ref{fig:zoomed_snip_length} further demonstrates that, of the many short snippets, there is a huge quantity that are empty strings or are only a few characters long. There are several reasons why these snippets are poor choices for training data. First, a single character is usually not descriptive enough to characterize multiple tags. Saying that `x' is a good indicator of python, machine learning, and databases does not make sense. Going back to Fig. \ref{fig:so_post}, we can also see that the short snippets are often references to code, but not valid code themselves.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{stats/zoomed50_medavg_punctuation_per_length}
\caption{The mean and median number of punctuation marks at different snippet lengths. At a snippet length of 10 characters, the mean and median number of punctuation marks is 1, indicating a reasonable choice for minimum snippet length.}
\label{fig:zoomed50_med_punctuation}
\end{figure}
In order to avoid cutting out snippets at an uninformed threshold, we investigate snippets of different lengths in more detail. We found punctuation to be a good indicator of code usefulness in short snippets. The occurrence of punctuation means that we are more likely to see programming language constructs such as ``variable = value" or ``class.method." However, simply removing all snippets without punctuation is not viable because of valuable alphanumeric keywords and punctuation-free code (``call WriteConsole''), so we instead decide to filter based on a threshold length. Fig. \ref{fig:zoomed50_med_punctuation} shows the median and mean number of punctuation marks for different snippet lengths. At a snippet length of 10 characters, the mean and median are both greater than one, so we filter out all snippets that are length 9 or below from the data.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{stats/tags_per_post}
\caption{Distribution of tags per post. All posts on Stack Overflow must have at least one tag, but there is a maximum of five tags, resulting in missing labels. }
\label{fig:tags_per_post}
\end{figure}
Additionally, Fig. \ref{fig:tags_per_post} shows the distribution of tags per post. As stated previously, Stack Overflow allows a maximum of five tags for any given post. Although most posts contain three tags, there is still a significant number of posts with fewer tags. The combined effect from a high quantity of posts that have few tags and an enforced maximum creates a ``missing label phenomenon." This is the situation where a given post is not tagged with \emph{all} of the functionalities or concepts actually described in the post. This is a non-trivial challenge for machine learning models because a code snippet is considered a negative example for a given label if that label is missing.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{stats/zoomed_score_per_post}
\caption{The distribution of scores on Stack Overflow posts. Negative scores are often the result of poorly worded questions, incorrectly tagged posts, or flawed code snippets, so we filter them out of the training set. We keep zero-scored snippets because they may not have been viewed enough to be voted on.}
\label{fig:zoomed_scores}
\end{figure}
Users can also add errors to the training data by simply being wrong about their tags or posted code on Stack Overflow. Because users can vote based on the quality of a post, we can use scores as an indicator for incorrectly tagged or poorly coded posts. Fig. \ref{fig:zoomed_scores} shows the distribution of scores for posts that have at least one code snippet. We cut all posts with negative scores from the training data. Although we considered cutting posts with zero score because they had not been validated by other users via voting, we ultimately choose to keep them because the score distribution shows that there is a large amount of data with zero score.
\begin{table}
\caption{Rankings are based on the number of posts that are labeled with a tag, after filtering data for snippet and score thresholds. This shows that the majority of tags have too few samples to train and validate a machine learning model. }
\label{tab:tag_ranks}
\centering
\begin{tabular}{ r l r }
\toprule
Rank & Tag & \# of Posts \\
\midrule
1 & javascript & 2,585,182 \\
8 & html & 1,279,137 \\
73 & apache & 99,377 \\
751 & web-config & 10,056 \\
4,508 & simplify & 1,000 \\
16,986 & against & 100 \\
46,663 & db-charmer & 1\\
\bottomrule
\end{tabular}
\end{table}
After filtering the data for the snippet length and score thresholds, one problem remains with the set of valid labels. Because users are allowed to enter any text as a tag for their posts, there is a long-tailed distribution of tags that are rarely used. Table \ref{tab:tag_ranks} displays the magnitude of the problem. Across the first 4,508 tags, the number of posts per tag drops from 2.5 million to just 1,000. In order to enable a 99\% / 1\% training/test split and still have 10 positive labels per tag to estimate performance, we cut off tags with fewer than one thousand positive samples.
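The tag cutoff can be sketched as a frequency count over the filtered samples; the 1,000-post threshold guarantees roughly 10 positive examples per tag in a 1\% test split.

```python
from collections import Counter

def frequent_tags(samples, min_posts=1000):
    """samples: (code, tags) pairs. Keep tags with enough positive samples
    to support a 99%/1% split with ~10 test examples per tag."""
    counts = Counter(tag for _, tags in samples for tag in tags)
    return {tag for tag, n in counts.items() if n >= min_posts}

samples = [("code", ["javascript"])] * 1200 + [("code", ["db-charmer"])]
print(frequent_tags(samples))  # {'javascript'}
```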
In the following section, we explain how we construct our models and perform validation using the snippet, score, and tag-filtered data.
\section{Methodology} \label{methodology}
Our motivations for using neural networks in this work are severalfold. As discussed in the introduction, convolutional neural networks have shown state-of-the-art performance in natural language tasks with less computation than LSTMs \cite{character_aware} \cite{nn_models}. Both natural language and source code tasks must model structure, semantic meaning, and context.
Neural networks also have the ability to efficiently handle multilabel classification problems: rather than training \(M\) classifiers for \(M\) different output labels, the output layer of a neural network can have \(M\) nodes, simultaneously providing predictions for multiple labels. This enables the neural network to learn features that are common across labels, whereas individual classifiers must learn those relationships separately.
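The shared multilabel output layer can be sketched in a few lines of NumPy; the weights below are random stand-ins for learned parameters, and $M$ sigmoid nodes replace $M$ separate binary classifiers over the same shared features.

```python
import numpy as np

rng = np.random.default_rng(0)

def multilabel_head(features, W, b):
    """One output node per label; sigmoids give independent probabilities,
    so a single network replaces M separate binary classifiers."""
    logits = features @ W + b           # shape (batch, M)
    return 1.0 / (1.0 + np.exp(-logits))

features = rng.normal(size=(4, 8))     # shared learned features
W = rng.normal(size=(8, 3))            # M = 3 labels
probs = multilabel_head(features, W, np.zeros(3))
print(probs.shape)  # (4, 3): one probability per sample per label
```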
\subsection{Neural Network Architecture}
\begin{figure}[t]
\centering
\includegraphics[width=.9\linewidth]{new_default_neural_net_architecture}
\caption{An overview of the neural network architecture. (a) The characters from a given code snippet are converted to real-valued vectors using a \emph{character embedding}. (b) We use stacked convolutional filters of different lengths with ReLU activations over the matrix of embeddings. (c) We perform sum-over-time pooling on the output of each stacked convolution. (d) A flattened vector is fed into two fully-connected, batch-normalized, dense layers with ReLU activations. (e) Each output node uses a logistic activation function to produce a value from 0 to 1, representing the probability of a given label.}
\label{fig:arch}
\end{figure}
Fig. \ref{fig:arch} gives an overview of the neural network architecture. In part (a) of Fig. \ref{fig:arch}, we use a character embedding to transform each printable character into a 16-dimensional real-valued vector. We chose character embeddings over more commonly used word embeddings for multiple reasons. Creating an embedding for every word in the source code domain is problematic because of the massive set of unique identifiers. Forming a dictionary from words only seen in the training set will not generalize, and using all possible identifiers will be infeasible to optimize. The neural network only needs to optimize 100 embeddings when using the printable characters. Additionally, the character embeddings are able to function on any text, allowing the model to predict on source code without the use of language-specific parsers or features.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{embedding_outlined}
\caption{A two-dimensional projection of the character embedding vectors that are optimized during model training. The model generates clear spatial relationships between various sets of characters. The separation between uppercase letters, lowercase letters, symbols, and numbers is of particular interest. In general, meaningful spatial relationships significantly improve the features extracted by the convolutional layers.}
\label{fig:embedding}
\end{figure}
In order to provide an intuition of the character embedding, we use PCA to project the 16-dimensional embedding vectors down to two dimensions, as displayed in Fig. \ref{fig:embedding}. This figure indicates that the model generates salient spatial relationships between the embedded characters during optimization, which is critical to the performance of the convolutional layers. The convolutions are able to preserve information about words and sequences by sliding over the embedding vectors of consecutive characters. We stack two convolutional layers of the same size for various filter lengths, which generates a stacked convolution matrix. Using sum-over-time pooling on the stacked convolution matrix allows us to obtain a fixed-length vector regardless of the initial input size.
After two batch normalized dense layers, the last layer has a logistic activation for each neuron in order to output the probability of a tag occurring. The network is trained on binary vectors containing a 1 for every tag that occurs for a given code snippet and 0 otherwise. We use binary cross-entropy as our loss function.
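A minimal NumPy sketch of one convolutional branch illustrates the forward pass and loss; the actual model stacks two convolutions at several filter lengths and adds batch-normalized dense layers, which are omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, filt):
    """Valid 1-D convolution over the time axis.
    x: (T, d) embeddings, filt: (k, d, f) -> (T-k+1, f)."""
    k = filt.shape[0]
    windows = np.stack([x[i:i + k] for i in range(x.shape[0] - k + 1)])
    return np.einsum("tkd,kdf->tf", windows, filt)

def forward(x, filt, W_out, b_out):
    h = np.maximum(conv1d(x, filt), 0.0)   # ReLU activation
    pooled = h.sum(axis=0)                 # sum-over-time: fixed-size vector
    logits = pooled @ W_out + b_out
    return 1.0 / (1.0 + np.exp(-logits))   # per-tag probabilities

def bce(p, y):
    """Binary cross-entropy over the multilabel target vector."""
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

x = rng.normal(size=(40, 16))              # 40 embedded characters
filt = rng.normal(size=(3, 16, 8)) * 0.1   # filter length 3, 8 feature maps
probs = forward(x, filt, rng.normal(size=(8, 5)) * 0.1, np.zeros(5))
loss = bce(probs, np.array([1, 0, 0, 1, 0.0]))
print(probs.shape, float(loss) > 0)        # (5,) True
```

Sum-over-time pooling is what makes the input length arbitrary: the pooled vector has one entry per feature map regardless of how many characters were fed in.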
\subsection{Validation Setup}
Since we train the model on Stack Overflow and predict on arbitrary source code, we must validate the model in both domains. On the SO data, we use a hold-out test-set strategy so that the model can be evaluated on previously unseen data. In the source code domain, we perform human validation to verify the accuracy of the model's outputs.
\subsection{Stack Overflow Validation}
To validate the neural network on Stack Overflow, we tested a number of multilabel test set stratification algorithms. Stratification based on k-fold cross-validation, which is a standard technique for binary and multiclass classification tasks, cannot be directly applied to the SO multilabel classification problem due to classes not being disjoint. Furthermore, due to the class imbalance caused by using a long-tailed tag distribution for labels, random stratification produces partitions of the data that do not generate good estimates for multilabel problems \cite{stratification} \cite{multi_mining}. In particular, the label counts for the top tag and the 4,508th tag differ by 3 orders of magnitude, which can result in classes with very few positive labels for the test set.
Since deep CNN models take a long time to train and benefit from large datasets, we want to avoid cross validation and use as much of the dataset as possible to train our model. Our goal is to generate a 98\% / 1\% / 1\% train/validation/test split that still provides a good estimate of performance. With an ideal stratification, this would ensure that even the rarest tags (with 1000 samples each) would have 10 samples in the validation and test sets, which is sufficient for estimating performance. On our dataset, this would result in about 240,000 samples in validation and test sets.
Multilabel stratification begins with the $m$-by-$q$ label matrix $Y$, where $m$ is the number of samples in the dataset $D$, $q$ is the number of labels in the set of labels $\Lambda$, and $Y[i,j] = 1$ where sample $i$ has label $j$, and $Y[i,j] = 0$ otherwise. The goal is to generate a partition $\{D^1, \ldots, D^k\}$ of $D$ that fulfills certain desirable properties. First, the size of each partition should match a designated ratio of samples, in our case, $\left(\frac{|D^{\text{train}}|}{|D|}, \frac{|D^{\text{val}}|}{|D|}, \frac{|D^{\text{test}}|}{|D|}\right) = (0.98, 0.01, 0.01)$. Additionally, the proportion of positive examples of each label in each partition should be the same; i.e.,
$$\forall s \in \{train, test, val\}, \forall j \in \Lambda: \frac{\sum_{i \in D^s} Y[i, j]}{|D^s|} = c_j$$ where $c_j$ is the proportion of positive examples of label $j$ in $D$.
Labelset stratification \cite{multi_mining} considers each combination of labels, denoted labelsets, as a unique label and then performs standard k-fold stratification on those labelsets. This works well for multilabel problems where each labelset appears sufficiently often. However, this does not optimize for individual label counts, which is a problem for datasets like SO that include rare labels and rare label combinations. We found that iterative stratification \cite{stratification}, a greedy method that specifically focuses on balancing the label counts for rare labels, produced the best validation and test sets. To produce our partition, we ran iterative stratification twice with a 99\%/1\% split, which resulted in a 98.01\%/0.99\%/1\% train/validation/test split.
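The proportion property $c_j$ that a good stratification should preserve can be checked directly; a toy sketch, with integer label ids standing in for tags:

```python
from collections import Counter

def label_proportions(split, n_labels):
    """Fraction of samples in a split carrying each label (the c_j above)."""
    counts = Counter(l for _, labels in split for l in labels)
    return {j: counts[j] / max(len(split), 1) for j in range(n_labels)}

# toy check: a well-stratified 50/50 split preserves label proportions
data = [("a", [0]), ("b", [0]), ("c", [1]), ("d", [1])]
train, test = [data[0], data[2]], [data[1], data[3]]
print(label_proportions(train, 2) == label_proportions(test, 2))  # True
```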
\begin{figure}[t]
\centering
\includegraphics[width=.95\linewidth]{source_ui}
\caption{The GUI for human validation of model outputs on source code documents.}
\label{fig:source_ui}
\end{figure}
\subsection{Source Code Validation}
Validating the model's performance on source code poses a different challenge because of the lack of labeled documents. In order to obtain results, we performed human validation on source code that is randomly sampled from GitHub \cite{github}. Specifically, we ran a script to download the master branches of random GitHub projects via the site's API until we had 146,408 individual files. We sampled 20 files for each of the following extensions, resulting in a total of 200 source code documents: [py, xml, java, html, c, js, sql, asm, sh, cpp]. Note that the extensions were not presented to the users and that they do not inform the predictions of the model. We created a GUI, displayed in Fig. \ref{fig:source_ui}, that presents the top labels and asks users if they agree with, disagree with, or are unsure about each label. There were a total of 3 reviewers, each of whom answered the questions on the GUI for all 200 source code documents. We remove the unsure answers and use simple voting among the remaining ratings to produce ground truth and compute an ROC curve.
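The ground-truth construction from reviewer answers can be sketched as follows; the `ratings` structure is a hypothetical encoding of the per-label agree/disagree/unsure responses collected through the GUI.

```python
def ground_truth(ratings):
    """ratings: per-label lists of 'agree'/'disagree'/'unsure' answers.
    Drop 'unsure' answers, then take a simple majority vote."""
    truth = {}
    for label, votes in ratings.items():
        votes = [v for v in votes if v != "unsure"]
        if votes:
            truth[label] = votes.count("agree") > len(votes) / 2
    return truth

ratings = {"python": ["agree", "agree", "unsure"],
           "sql": ["disagree", "disagree", "agree"]}
print(ground_truth(ratings))  # {'python': True, 'sql': False}
```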
\section{Results} \label{results}
On the Stack Overflow data, we first calculated the top-1 accuracy previously used by Kuo \cite{coocc_pred} and Clayton and Byrne \cite{predicting_tags}. We obtain a 78.7\% top-1 accuracy, which is a significant improvement over the previous best of 65\%.
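Top-1 accuracy in this context counts a hit whenever the model's single highest-scoring tag appears in the post's tag set; a minimal sketch:

```python
def top1_accuracy(predictions, tag_sets):
    """predictions: per-post {tag: probability} dicts; a hit is counted
    when the highest-probability tag appears in the post's tag set."""
    hits = sum(max(p, key=p.get) in tags
               for p, tags in zip(predictions, tag_sets))
    return hits / len(tag_sets)

preds = [{"python": 0.9, "java": 0.2}, {"sql": 0.4, "html": 0.7}]
truth = [{"python", "list"}, {"sql"}]
print(top1_accuracy(preds, truth))  # 0.5
```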
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{auc_histogram_total}
\caption{The distribution of tag AUCs for each model. Because our dataset uses 4,508 labels, there are 4,508 AUCs binned and plotted for each model. This graph demonstrates how well each model performs across all the labels.}
\label{fig:auc_hist}
\end{figure}
However, we found that metric to be lacking: it only checks if the model's top prediction is in the SO post's tag set. Our goal is to predict many tags pertinent to a source code document, not just its primary tag. Because our work is introducing the multilabel tag prediction problem on Stack Overflow code snippets, we train multiple baseline models to demonstrate the significance of our convolutional neural network architecture. In order to evaluate the results, we computed the area under ROC (AUC) for each individual tag. This is a reasonable evaluation because it demonstrates the performance of each model across the entire set of tags.
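The per-tag AUC can be computed from the Mann-Whitney rank statistic without constructing an explicit ROC curve; a small sketch for a single tag:

```python
def auc(scores, labels):
    """Mann-Whitney AUC for one tag: the probability that a random
    positive example is scored above a random negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.3, 0.1]
labels = [1, 0, 1, 0]
print(auc(scores, labels))  # 0.75
```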
\begin{table}[t]
\caption{Mean, median, and standard deviation of tag AUCs for each model. }
\label{tab:auc_statistics}
\centering
\begin{tabular}{ l c c c }
\toprule
Model & Mean & Median & Stdev \\
\midrule
Embedding CNN & 0.957 & 0.974 & 0.048 \\
Embedding Logistic Regression & 0.751 & 0.759 & 0.099 \\
N-gram Logistic Regression & 0.514 & 0.502 & 0.093 \\
\bottomrule
\end{tabular}
\end{table}
We used two additional models as baselines for this problem. The first model performs logistic regression on a bag of n-grams. This model obtains the 75,000 most common n-grams (using n=1,2,3) from the training set to use as features. The second model performs logistic regression on a character embedding of the input code using an embedding dimension of 8. We choose these two models as baselines because they test two different types of featurizations and they are able to efficiently train and predict on multilabel problems.
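The n-gram featurization can be sketched as a vocabulary built from the training set; the whitespace tokenization below is an assumption of this sketch, not necessarily the exact scheme used by the baseline.

```python
from collections import Counter

def ngram_vocabulary(snippets, top_k=75000, ns=(1, 2, 3)):
    """Collect the top_k most common word n-grams (n = 1, 2, 3) from the
    training snippets to use as bag-of-n-grams features."""
    counts = Counter()
    for s in snippets:
        tokens = s.split()
        for n in ns:
            for i in range(len(tokens) - n + 1):
                counts[tuple(tokens[i:i + n])] += 1
    return [g for g, _ in counts.most_common(top_k)]

vocab = ngram_vocabulary(["a = 1", "a = 2"], top_k=3)
print(vocab)  # the three most frequent n-grams
```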
Fig. \ref{fig:auc_hist} shows the distributions of tag AUCs for the CNN model and the logistic regression baseline models. Because our dataset uses 4,508 tags, there are 4,508 AUC values that are binned and plotted for each model. The shapes of the logistic regression distributions are similar: most tags fall within the central range of each model's distribution, and few tags perform markedly better or worse than the rest. Our convolutional architecture performs well on most of the tags, and instead has a long-tailed distribution of decreasing performance.
Table \ref{tab:auc_statistics} displays a summarized, quantitative view of the tag AUC distributions. The logistic regression models have similar standard deviations, but the n-gram model has a considerably lower mean and median, indicating that the n-gram features are not as effective as the character embeddings. The convolutional network has a significantly higher mean and median, and a lower standard deviation. Although all of the models perform worse as the rarity of the tags increases, the lower standard deviation of the convolutional network implies that the model is more robust to the rarity of a given tag.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{stats/ROC_M1_unsure_2}
\caption{Human validation ROC curve with a 0.769 AUC. This differs from the Stack Overflow AUC values because it operates on the results of human validation, which is limited to only a few tags per document.}
\label{fig:roc_m1}
\end{figure}
For source code validation, we use human feedback on the convolutional network to generate Fig. \ref{fig:roc_m1}. The model obtained a 0.769 AUC. For the sake of comparison, we compute top-1 accuracy with the human validation on source code and obtain an 86.6\% accuracy. We note that this is better than the analogous performance on Stack Overflow, which indicates that, on source code, the model performs better for the first tag, but worse for the rest.
As a final note on performance, we trained and tested our model using an NVIDIA 1080 GPU. Our model obtains speeds of about 317,000 characters per second. Assuming an average of 38 characters per line of code (calculated based on a random sample of source files from GitHub\cite{github}), the model is able to achieve prediction speeds of 8,342 source lines of code per second. To put this in context, it would take the model less than an hour to predict on the 20+ million lines of code in the Linux kernel. It is also readily parallelized to quickly predict across much larger source code corpora.
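The throughput arithmetic above is easy to verify; a back-of-the-envelope check using the quoted figures:

```python
# Back-of-the-envelope prediction throughput from the measured figures.
chars_per_second = 317_000
chars_per_line = 38          # average from a random GitHub sample
lines_per_second = chars_per_second / chars_per_line
linux_kernel_lines = 20_000_000
hours = linux_kernel_lines / lines_per_second / 3600
print(round(lines_per_second), round(hours, 2))  # 8342 lines/s, ~0.67 h
```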
\section{Challenges/Limitations} \label{challenges}
In the course of our research, we encountered a few limitations that require further study. First is the transfer learning problem between Stack Overflow code snippets and source code. The lack of labeled source code prevents us from training directly on the desired domain.
The size of SO code snippets and the maximum number of tags per post are detrimental to the model's ability to predict on arbitrarily long source code. Due to the five tags per post limit, predicting more tags will increase the model loss, resulting in predictions with few tags. The original hypothesis was that the model would associate few predictions with short snippets and many tags for longer snippets, but the source code evaluation did not strongly support this. Exploring approaches that utilize loss functions other than binary cross-entropy may address these tag limit problems.
Another issue is that Stack Overflow users do not tag their code snippets directly, but rather their questions. For example, a user could post a code snippet of an XML document, ask how to parse it in Java, and tag the thread with ``XML," ``Java," and ``parse." These tags are all extremely relevant to the user's question, but they do not describe the code snippet independently. During training, our model is only able to see that the XML document is an example of XML, Java, and parsing. This creates noise in the Java and parse labels.
Finally, the human verification process is a noisy evaluation of the model's performance on source code. Verifying the predictions is an arduous process because the model is familiar with thousands of functionalities. It is infeasible for individuals to be masters of such a wide range of ideas and tools, which results in a significant amount of labeler disagreement.
\section{Conclusions/Future Work} \label{conclusions}
We leverage the crowdsourced data from Stack Overflow to train a deep convolutional neural network that can attach meaningful, semantic labels to source code documents of arbitrary language. While most current code search approaches locate documents by matching strings from user queries, our approach enables us to identify documents based on functional content instead of the literal characters used in source code or documentation. A logical next step is to apply this model to large source code corpora and build a search interface to find source code of interest.
Unlike previous supervised SO tag-prediction models, we train and test strictly on code snippets, yet we still advance the top-1 prediction accuracy from 65\% to 79\% on Stack Overflow. We also achieve 87\% on human-validated source code. Using the area under ROC to measure performance, we obtain a mean AUC of 0.957 on the Stack Overflow dataset and an AUC of 0.769 on the human source code validation. Refining the methodology and data preprocessing by training the model with entire threads instead of posts could alleviate the performance drop caused by transfer learning. An alternative direction for future research is to investigate better metrics and loss functions for training and evaluating model performance on long-tailed multilabel datasets. This could prevent the model from being punished for predicting more than five tags.
Finally, extensions of the architecture that broaden the contextual aperture of the convolutional layers may grant the model a deeper understanding of abstract code concepts and semantics. This would enable more sophisticated code search and comprehension.
\section*{Acknowledgments} \label{acknowledgements}
This project was sponsored by the Air Force Research Laboratory (AFRL) as part of the DARPA MUSE program.
\section{Introduction}
The majority of nuclides heavier than iron are produced via neutron capture reactions \cite{B2FH57, Kappeler11-RMP,
Arnould07-PR,Thielemann17-ARNPS}. However, there are a few dozens of nuclides on the proton rich side of the valley of stability, the so-called
$p$-nuclei, which cannot be reached by these processes. These are produced mainly in the so-called $\gamma$
process \cite{Rauscher13-RPP}, occurring in hot, dense astrophysical plasmas as encountered, e.g., in core-collapse supernovae
(ccSN) and thermonuclear supernovae (SNIa). While SNIa remain a promising site for $p$-nucleus production \cite{Travaglio11-AJ,Travaglio15-AJ}, the
ccSN model calculations still show deficiencies in reproducing the observed $p$-nucleus abundances in some nuclear mass regions
\cite{Rauscher13-RPP}. Both sites, however, may contribute to the galactic $p$-nucleus content. The deficiencies are partly due to
the uncertain nuclear physics input \cite{Rauscher16-MNRAS}. The reaction network for the $\gamma$ process involves tens of thousands of
reactions on thousands of mainly unstable nuclei. The network calculations use mostly theoretical reaction rates calculated with
the Hauser-Feshbach (H-F) statistical model \cite{Hauser52-PR}. Above neutron number $N=82$ the reaction flow is mainly proceeding
through chains of ($\gamma$,n) and ($\gamma$,$\alpha$) reactions due to nuclear structure effects (reaction $Q$ values)
\cite{Rauscher13-RPP,Arnould03-PR}. Experimental reaction rate information can be obtained by measuring the inverse $\alpha$-capture
reaction cross sections \cite{Mohr07-EPJA,Kiss08-PRL,Rauscher09-PRC} and applying the principle of detailed balance
\cite{Rauscher11-IJMPE}. Experimental data of $\alpha$-capture reactions in the relevant energy region are still scarce,
however \cite{Szucs14-NDS}. A comparison of H-F predictions to the scarce low-energy data above $N=82$ has consistently shown an
overprediction of cross sections \cite{Rauscher13-RPP}. In the astrophysically relevant energy region \cite{Rauscher10-PRC} the H-F cross section
calculations are only sensitive to the $\alpha$-channel width \cite{Rauscher12-AJS}, which is calculated using global
$\alpha+$nucleus optical model potentials. Recently, an energy
dependent modification of the depth of the imaginary part of the widely used McFadden-Satchler potential \cite{McFadden66-NP} was
shown to describe the experimental data much better \cite{Sauerwein11-PRC, Kiss14-PLB, Kiss15-JPG}. Further alternatives developed
lately also include energy-dependent modifications of the imaginary part, e.g., \cite{Avrigeanu10-PRC, Demetriou02-NPA,
Mohr13-ADNDT}.
This work presents experimental data for one of the heaviest nuclides investigated so far. For the first time, thick target yield measurements combined with X-ray detection were employed to determine
$\gamma$-process related cross sections for such a heavy nuclide. The data were compared to H-F calculations
for further constraining the optical model potential.
\section{Thick target yield and cross section}
Most of the former studies concentrated on the direct measurement of the reaction cross sections. Usually thin
layers of target material are used, in which the projectile energy loss is small, and by knowing the number of target atoms the
cross section can be derived at an effective energy. In the present study the projectile stops in the target, therefore, reactions
take place with all energies between the bombarding energy and zero. Thus the quantity to be measured is the so-called thick target
yield, i.\,e., the number of reactions per projectile. The number of target atoms is maximized in this way and does not limit the
yield to be measured. In $\gamma$-process related studies the thick target yield technique has recently been applied only in the lower
mass range \cite{Gyurky14-NPA,Fiebiger17-JPG}. This study is the pioneering work in the heavy mass range.
The thick target yield ($Y_{TT}(E)$) as a function of $\alpha$ energy ($E$) is related to the reaction cross section ($\sigma(E)$) by the following integral formula:
\begin{equation}
Y_{TT}(E) =\int_{0}^{E} \frac{\sigma(E')}{\epsilon_{eff}(E')} dE',
\label{eq:TTY}
\end{equation}
where $\epsilon_{eff}(E)$ is the effective stopping power for the studied isotope, i.e., the stopping power of chemically pure
iridium divided by the isotopic abundance of the studied isotope.
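Eq.~(\ref{eq:TTY}) can be evaluated numerically; the sketch below uses a simple midpoint rule with placeholder cross-section and stopping-power functions supplied by the caller (purely illustrative, not the measured iridium data):

```python
def thick_target_yield(energy, cross_section, eff_stopping_power, n_steps=10000):
    """Midpoint-rule evaluation of Y_TT(E) = int_0^E sigma(E')/eps_eff(E') dE'.

    cross_section and eff_stopping_power are callables of the energy;
    the midpoint rule also avoids evaluating them at E' = 0.
    """
    de = energy / n_steps
    total = 0.0
    for i in range(n_steps):
        e_mid = (i + 0.5) * de
        total += cross_section(e_mid) / eff_stopping_power(e_mid) * de
    return total
```

For a cross section rising steeply with energy, the integral is dominated by energies just below the bombarding energy, i.e., by reactions near the surface of the target.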
From the measured thick target yields the cross section between two energies can be obtained by subtraction:
\begin{equation}
\sigma(E_{eff}) = \frac{\left(Y_{TT}(E_2) - Y_{TT}(E_1)\right)\overline{\epsilon_{eff}}(E_1;E_2)}{E_2 - E_1},
\label{eq:sigma}
\end{equation}
where $\overline{\epsilon_{eff}}(E_1;E_2)$ is the averaged effective stopping power in the $(E_1;E_2)$ energy range.
$E_{eff}$ is determined from the yield curve; by definition,
\begin{equation}
Y_{TT}(E_{eff}) = \frac{Y_{TT}(E_2) + Y_{TT}(E_1)}{2}.
\label{eq:eff_energy}
\end{equation}
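A short sketch of how Eqs.~(\ref{eq:sigma}) and (\ref{eq:eff_energy}) are applied (the exponential yield curve below is only an illustrative stand-in for the fitted experimental curve):

```python
import math

def cross_section_between(y1, y2, e1, e2, eps_eff_avg):
    """Average cross section between E1 and E2 from two thick target yields."""
    return (y2 - y1) * eps_eff_avg / (e2 - e1)

def effective_energy(e1, e2, yield_curve):
    """Solve Y_TT(E_eff) = (Y_TT(E1) + Y_TT(E2)) / 2 by bisection."""
    target = 0.5 * (yield_curve(e1) + yield_curve(e2))
    lo, hi = e1, e2  # the yield curve rises monotonically with energy
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if yield_curve(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

# Illustrative exponential yield curve between two irradiation energies:
e_eff = effective_energy(14.0, 14.5, lambda e: math.exp(e))
```

Because the yield grows faster at the upper end of the interval, $E_{eff}$ comes out above the midpoint of the two irradiation energies.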
\section{Studied reactions}
Iridium in its natural form consists of two isotopes, $^{191}$Ir and $^{193}$Ir with 37.3\% and 62.7\% relative abundances,
respectively. The $\alpha$-induced reactions on both isotopes were investigated in the energy range of E$_\alpha=13.4$\,MeV\,--\,$17.0$\,MeV in
0.5\,MeV energy steps. For the investigations the activation technique was used. Therefore only reactions leading to unstable
nuclei were studied.
The main reaction of interest is the radiative capture of $\alpha$ particles by $^{191}$Ir. Since in the studied energy region the
($\alpha$,n) reaction channel is also open for both isotopes, these reactions were also studied. Although they are not
immediately important in the $\gamma$ process, their cross sections are mainly sensitive to the $\alpha$-channel width.
Accordingly they provide an additional constraint for the $\alpha+$nucleus optical potential.
\paragraph{$^{191}$Ir($\alpha$,$\gamma$)$^{195}$Au}
Because there are no previous experimental data for this reaction in the literature, our new data enlarges the existing
database related to $\gamma$ process reactions.
The reaction product $^{195}$Au has the longest half-life of all studied isotopes (see \tab{tab:param}). For this reaction, not
only the $\gamma$ rays were used in the $Y_{TT}$ determination, but also the X-rays. However, as the characteristic X-rays
following the decay of the different studied isotopes are identical, the X-ray peaks in the spectra are initially populated by all
of them. After several months the reaction products with shorter half-lives decay off and only $^{195}$Au remains. Thus, from
the time-dependent population of the X-ray peaks, the activity of this isotope can be derived. At the lowest irradiation energies
the $\gamma$ peak was buried by the background and only the X-rays were strong enough to be visible. The X-ray detection technique
was tested already for thin targets \cite{Kiss11-PLB, Kiss12-PRC}. In this paper we present the first thick target yield
measurements via X-ray detection for a $\gamma$ process related study.
\begin{table}[t]
\caption{Decay parameters of the reaction products used for the analysis \cite{NDS194,NDS195,NDS196}.}
\label{tab:param}
\center
\resizebox{0.99\columnwidth}{!}{
\begin{tabular}{l c c r@{.}l @{}l}
Reaction & Half-life & $\gamma$ ray or X-ray& \multicolumn{3}{c}{Intensity} \\
product & / h & energy / keV & \multicolumn{3}{c}{ / \%} \\
\hline
$^{194}$Au & 38.02\,{\it10} & 328.5 &60 & 4 &{\it8}\\
& & 293.5 &10 & 58 &{\it15}\\
$^{195}$Au & 4464.2\,{\it14} & \phantom{2}66.8 &47 & 2 &{\it11}\\
& & \phantom{2}98.9 &11 & 21 &{\it15}\\
$^{196m}$Au & 9.6\,{\it1} & 147.8 &43 & 5 &{\it15}\\
& & 188.3 &30 & 0 &{\it15}\\
$^{196}$Au & 148.006\,{\it14} & 355.7 &87 & 0 &{\it30}\\
& & 333.0 &22 & 9 &{\it9}\\
& & 426.1 &6 & 6 &{\it3}\\
\end{tabular}
}
\end{table}
\paragraph{$^{191}$Ir($\alpha$,n)$^{194}$Au}
There are two datasets in the literature for this reaction \cite{Bhardwaj92-PRC, Ismail98-Pra}. Both were obtained using the
stacked foil technique. Only the lowest energy points of these
studies are within our investigated energy range. Due to the limitations of this technique, however, those points have large
energy uncertainties.
In our measurement, the energy uncertainty is much smaller even with the subtraction method.
\paragraph{$^{193}$Ir($\alpha$,n)$^{196m}$Au}
The metastable state of $^{196}$Au at an excitation energy of 0.596 MeV has a long enough half-life to be measurable by the
activation technique. This level decays exclusively by internal transitions to the ground state, producing $\gamma$ rays with high
relative intensity (see \tab{tab:param}). Using these, the partial thick target yield populating this level was derived. Previously
in the literature only the ratios of the reaction cross sections leading to the metastable and to the ground state were published,
and mainly at reaction energies much higher than our energy range \cite{Gavriluk89-conf, Denisov93-PAN, Chuvilskaya99-BRASP}.
\paragraph{$^{193}$Ir($\alpha$,n)$^{196}$Au}
From the decay of $^{196}$Au the total reaction cross section was derived. Even though $^{196}$Au nuclei in their ground states are also
produced via the long-lived isomeric state, after one day of waiting time the majority of the metastable nuclei
de-excited. Using only the spectra recorded after this time, the measured decay curve of $^{196}$Au was not distorted. The total
reaction cross section including the production via the metastable state was calculated from these countings.
\section{Experimental details}
\paragraph{Targets} For the measurements, 50\,$\mu$m thick high purity (99.9\,\%) iridium foils of natural isotopic composition were used. This
thickness fulfils the criteria of a thick target to completely stop the $\alpha$ particles. With the maximum energy investigated here (17\,MeV), the average range of an
$\alpha$ particle in iridium according to SRIM \cite{SRIM} is 40\,$\pm$2\,$\mu$m. According to the supplier's specification, the
iridium foils contain trace amounts of platinum, rhenium, and iron at the ppm level.
\paragraph{Irradiations} For the irradiations, the MGC-20 type cyclotron of Atomki was used. The $\alpha$ particles entered the
activation chamber through a beam-defining aperture; a second aperture was supplied with $-300$\,V secondary-electron suppression
voltage. The apertures and the chamber were electrically isolated, allowing the beam current to be measured. The typical $\alpha^{++}$-beam current
was $2$\,$\mu$A$-2.5$\,$\mu$A. The length of the irradiations was typically $22$\,h$-34$\,h. Since the $\alpha$ particles completely stopped in
the targets, the possible blistering had to be avoided. Therefore, the irradiation was stopped typically every 12\,h and the
target was rotated to receive the bombardment at a slightly different spot. With this method, no visible blistering occurred.
The beam current was recorded with a multichannel scaler, stepping the channels every minute. In this way, the small variations
in beam intensity were followed and taken into account in the data analysis.
\paragraph{$\gamma$-ray and X-ray detection} The produced activity was determined by counting the $\gamma$ and/or X-rays following
the decay of the reaction products (see \tab{tab:param}). For the counting a thin-crystal high-purity germanium detector, a
so-called Low Energy Photon Spectrometer (LEPS) was used. The detector was equipped with a home-made 4$\pi$ shielding consisting of
layers of copper, cadmium, and lead \cite{Szucs14-AIPConf}.
The activity of the reaction products of the ($\alpha$,n) channels was measured at 3\,cm distance.
The dead time was always below 5\% at the beginning of the counting and decreased to a negligible level within a few hours.
A typical spectrum taken 3\,h after the irradiation with E$_\alpha = 15$\,MeV is shown in \fig{fig:gamma}.
\begin{figure}[t]
\includegraphics[width=0.92\linewidth]{gamma_spectrum}
\caption{\label{fig:gamma} Spectrum of the sample irradiated with 15.0\,MeV $\alpha$ particles. Waiting and counting times are indicated with $t_w$ and $t_c$, respectively. The upper panels show the peaks used for the activity determination.}
\end{figure}
The countings for the $^{191}$Ir($\alpha$,$\gamma$)$^{195}$Au reaction product were done in 1\,cm counting geometry to increase the
efficiency. Typically these countings were done at least 4 months after the irradiations, hence no notable dead time was experienced
and only the $^{195}$Au isotope populated the X-ray peaks, i.e., less than 0.5\,\% contribution came from the other isotopes.
In the X-ray spectra the X-ray fluorescence of the bulk iridium was always observed causing peaks at, e.g., 63.3\,keV and 64.9\,keV
(K$_{\alpha_2}$ and K$_{\alpha_1}$, respectively).
The X-ray fluorescence was induced by long-lived parasitic activities like $^{57}$Co, which were always produced on the trace
impurities in the targets.
This kind of fluorescence was not observed in previous thin target measurements \cite{Kiss11-PLB, Kiss12-PRC} because in those cases
less parasitic activity was produced by the fewer impurity atoms and there was also less material on which the fluorescence could
be induced.
The K$_{\alpha_2}$ X-ray from the reaction product at 65.1\,keV is buried under the fluorescence peak but thanks to the excellent
energy resolution of the LEPS detector the K$_{\alpha_1}$ X-ray at 66.8\,keV can be separated from the much more intense
fluorescence peak. Even though the separation was excellent, the fluorescence was the main limiting factor of the activity
determination (see \fig{fig:Xray}).
\begin{figure}[t]
\includegraphics[width=0.92\columnwidth]{X-ray_spectrum}
\caption{\label{fig:Xray} Spectrum of the sample of the 15.5\,MeV irradiation. Waiting and counting times are indicated with $t_w$ and $t_c$, respectively. The upper panels show the $^{195}$Au K$_{\alpha_1}$ X-ray peak and the $\gamma$ ray peak used for the activity determination.}
\end{figure}
The detector efficiency calibration was done with $\gamma$ sources of known activity at end-cap to target distances of
10\,cm and 15\,cm to avoid true coincidence-summing effects. The obtained efficiency points were fitted with an exponential
function \cite{McFarland91-RR} as shown in \fig{fig:eff}. At each energy the 1$\sigma$ confidence interval
of the fit was used for the efficiency uncertainty.
The efficiency at the actual counting distance (3\,cm) was determined with the help of several targets which were
counted both in 10\,cm and 3\,cm geometry. From the observed count rates, knowing the half-lives of the products and the time
difference of the countings, the efficiency conversion factors were derived. This factor contains the possible loss due to the true
coincidence-summing in close geometry. The conversion factors measured with the different sources were consistent. Therefore their
statistically weighted average was used in the close-geometry efficiency determination. The close-geometry efficiency uncertainty
contains the uncertainty of the fit and the uncertainty of the conversion factors and thus ranges from 1.5\,\% to 8\,\%. The latter
value is for the two lines for the metastable state, where the statistical uncertainty in the efficiency ratio measurement
dominated.
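The geometry-conversion step can be sketched as follows (the count rates and times below are invented for illustration; the essential ingredient is the decay correction with the known half-life):

```python
import math

def conversion_factor(rate_far, rate_close, half_life_h, dt_h):
    """Close/far efficiency ratio from count rates of the same source.

    dt_h: time elapsed between the far and the close counting (hours);
    the decay of the source in between is corrected for.
    """
    lam = math.log(2) / half_life_h
    return (rate_close / rate_far) * math.exp(lam * dt_h)

# Illustrative numbers (not the measured ones): a ^194Au source
# (T1/2 = 38.02 h) counted 10 h apart at the two distances.
f = conversion_factor(rate_far=100.0, rate_close=500.0, half_life_h=38.02, dt_h=10.0)
```

Multiplying a far-geometry efficiency curve by such a factor then gives the close-geometry efficiency, including any true coincidence-summing loss.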
Similar efficiency conversion factors were derived for the 10\,cm to 1\,cm and 15\,cm to 1\,cm counting geometries for the X-ray
peak and $\gamma$ peak several months after the 17\,MeV irradiations when only the $^{195}$Au reaction product was present in the
target. Only this source was strong enough for this method, because of the sizeable $^{195}$Au isotope production via the $^{193}$Ir($\alpha$,$2$n)$^{195}$Au reaction. The 1\,cm efficiency was then calculated from both the 10\,cm and 15\,cm
calibration curves and the weighted average of them was used in the analysis. The final efficiency uncertainty in 1\,cm counting
distance was 3\,\%.
\begin{figure}[b]
\includegraphics[width=0.92\columnwidth]{Efficiency}
\caption{\label{fig:eff} The measured detector efficiency at 10\,cm and 15\,cm source to end-cap distance, fitted by an
exponential function \mbox{$\epsilon(E) = (A E^B + C E^D)^{-1}$} \cite{McFarland91-RR}. The 1$\sigma$ confidence level around
the fit is also shown by dotted lines.}
\end{figure}
\section{\label{sec:analysis} Analysis and experimental results}
\paragraph{Thick target yield}
The peaks were fitted by a Gaussian while a linear background was assumed under the peaks.
The detected counts ($C$) are related to the counting and irradiation parameters as follows,
\begin{equation}
\hspace{-8mm} C = Y_{TT}\,\eta\,I\,\sum_{i=1}^{n}\left(\phi_i\,e^{-(n-i)\,\lambda_x\,\Delta t} \right) e^{-\lambda_x t_w} \left( 1 - e^{-\lambda_x t_c} \right)
\end{equation}
where $\eta$ is the absolute detection efficiency, $I$ is the relative intensity of the investigated transition, $\phi_i$ is the incident
particle flux in the $i$th one minute time window ($\Delta t$) of the multichannel scaler, $\lambda_x$ is the decay constant of the
given reaction product, and $n\,\Delta t$, $t_w$ and $t_c$ are the length of the irradiation, the waiting time between the end of the
irradiation and the beginning of the counting, and the duration of the counting, respectively.
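Solving the activation formula for $Y_{TT}$ is straightforward; a minimal sketch (with invented counting parameters, not the actual analysis code) could look like:

```python
import math

def thick_target_yield_from_counts(counts, efficiency, intensity,
                                   flux_per_min, half_life_h, t_wait_h, t_count_h):
    """Invert the activation formula for Y_TT (reactions per incident particle).

    flux_per_min: incident particles in each one-minute MCS window.
    """
    lam = math.log(2) / (half_life_h * 3600.0)  # decay constant in 1/s
    dt = 60.0                                   # MCS channel width in s
    n = len(flux_per_min)
    # Decay-corrected sum over the recorded irradiation history
    s = sum(phi * math.exp(-(n - 1 - i) * lam * dt)
            for i, phi in enumerate(flux_per_min))
    decay = math.exp(-lam * t_wait_h * 3600.0)
    grow = 1.0 - math.exp(-lam * t_count_h * 3600.0)
    return counts / (efficiency * intensity * s * decay * grow)
```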
The spectra were stored on a 1\,h time basis to follow the decay of the reaction products and check the stability of the counting system.
The half-lives of the reaction products were found to be consistent with their literature values. Therefore the spectra were
summed up to reduce the statistical uncertainty.
Since thick targets were used and activity is created in the bulk of the target, the attenuation of the exiting radiation had to be
taken into account. To estimate this effect the target was assumed to be built up of 0.01\,$\mu$m thick slices. The attenuation
of the radiation from each slice was calculated using the known attenuation coefficient of iridium \cite{NIST} and
averaged weighted by an estimated activity distribution.
For the calculation of the activity distribution the actual beam energy in each slice was calculated using SRIM \cite{SRIM} and
considered to be constant within the slice. For each slice cross sections from the NON-SMOKER calculations \cite{Rauscher01-ADNDT}
were used as the first estimate. Later, the activity distribution was iteratively re-calculated using the obtained final cross
sections. Note that for the estimation only the energy dependence of the cross section is important and its absolute scale plays no
role in the activity distribution determination. The attenuation is highest at the highest beam energy, since the tail of the
activity distribution penetrates deeper into the sample.
The attenuation of the $\gamma$ rays with energies higher than 200\,keV was less than 0.1\%, while for the 188\,keV and 149\,keV $\gamma$ rays it was less than 0.5\% and 0.9\%, respectively. The highest attenuation, about 5\%, is experienced by the 98.9\,keV $\gamma$ ray. As a conservative estimate, 30\% relative uncertainty was assigned to the attenuation.
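The slice-wise attenuation averaging described above can be sketched as follows (the attenuation coefficient and activity profile passed in are placeholders, not the NIST iridium values or the simulated distribution):

```python
import math

def average_attenuation(mu, activity_profile, slice_thickness=0.01):
    """Activity-weighted mean attenuation of photons leaving the target.

    mu: linear attenuation coefficient in 1/um, slice_thickness in um;
    activity_profile: relative activity created in each slice, front to back.
    """
    num = den = 0.0
    for i, a in enumerate(activity_profile):
        depth = (i + 0.5) * slice_thickness  # mid-depth of the slice
        num += a * math.exp(-mu * depth)
        den += a
    return num / den
```

The detected counts of a given transition can then be corrected by dividing by this factor.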
\begin{table*}[t]
\caption{Experimental thick target yields ($10^{-12}$ reactions / incident particle). $i$: At the marked energies the $^{195}$Au
activity was also created through the $^{193}$Ir($\alpha$,$2$n)$^{195}$Au reaction channel.}
\label{tab:TTY}
\centering
\begin{tabular}{l c c c c}
E$_\alpha$ / MeV& $^{191}$Ir($\alpha$,n)$^{194}$Au & $^{191}$Ir($\alpha$,$\gamma$)$^{195}$Au & $^{193}$Ir($\alpha$,n)$^{196m}$Au & $^{193}$Ir($\alpha$,n)$^{196}$Au \\
\hline
17.00 $\pm$ 0.05 & 13200 $\pm$ 500 & $i$ & 310 $\pm$ 20 & 21100 $\pm$ 900 \\
16.50 $\pm$ 0.05 & 6600 $\pm$ 200 & $i$ & 146 $\pm$ 10 & 11000 $\pm$ 500 \\
16.00 $\pm$ 0.05 & 2240 $\pm$ 80 & 10.9 $\pm$ 0.6 & 42 $\pm$ 2 & 3670 $\pm$ 160 \\
15.50 $\pm$ 0.05 & 770 $\pm$ 30 & 3.4 $\pm$ 0.2 & 13.1$\pm$ 0.9 & 1310 $\pm$ 60 \\
15.00 $\pm$ 0.05 & 243 $\pm$ 8 & 1.53 $\pm$ 0.19 & 3.4 $\pm$ 0.3 & 410 $\pm$ 18 \\
14.50 $\pm$ 0.04 & 77 $\pm$ 3 & 0.76 $\pm$ 0.05 & 0.96$\pm$ 0.13& 136 $\pm$ 6 \\
14.00 $\pm$ 0.04 & 12.4 $\pm$ 0.5 & 0.24 $\pm$ 0.04 & $<$ 0.21 & 20.7 $\pm$ 1.0 \\
13.40 $\pm$ 0.04 & 4.9 $\pm$ 0.2 & $<$ 0.47 & $<$ 0.16 & 7.0 $\pm$ 0.4 \\
\end{tabular}
\end{table*}
Thick target yields for each ($\alpha$,n) reaction channel were determined from more than one $\gamma$ peak and consistent results
were found. In case of the $^{191}$Ir($\alpha$,$\gamma$)$^{195}$Au reaction the $Y_{TT}$ above 15.5\,MeV were determined both from
the detected X-rays and $\gamma$ rays. Again, consistent results within their statistical uncertainty were found. Below 15.5\,MeV the $\gamma$ peak was not visible,
thus only the X-ray was used for the $Y_{TT}$ determination.
The $Y_{TT}$ obtained from the different transitions were averaged using their statistical weight, which is the combination of the
uncertainty of the fitted peak area, the uncertainty of the relative intensity of the given peak, the efficiency uncertainty, and
the uncertainty of the attenuation. After the averaging, the uncertainty of the absolute intensity per decay and the beam current
uncertainty (3\,\%) were quadratically added. The obtained $Y_{TT}$ are shown in \tab{tab:TTY}.
Above 16.01\,MeV, the $^{193}$Ir($\alpha$,2n)$^{195}$Au reaction channel is open, producing the same isotope as
$^{191}$Ir($\alpha$,$\gamma$)$^{195}$Au. Therefore, no thick target yields were determined for the radiative capture at 16.5\,MeV
and 17\,MeV.
\paragraph{Cross sections}
An average cross section between two energies was derived from the thick target yield using \eq{eq:sigma}. The differentiation of the $Y_{TT}$ has
been done for each transition using the statistical error only. After that, the relative uncertainties of the
intensity of the given peak, the detection efficiency, and the attenuation were quadratically added to the relative uncertainty of the derived
cross sections. The consistent cross section values were then averaged using these uncertainties.
Finally, the uncertainty of the absolute intensity per decay, the beam current,
and the stopping power uncertainty (4\,\%) were quadratically added to the relative uncertainty of the averaged value.
For the effective energy determination, an exponential curve was fitted to the measured yield points. The quoted effective energy
was calculated by \eq{eq:eff_energy}. The energy error contains the beam energy uncertainty of 0.3\,\% and an additional 0.5\,\%
uncertainty, which accounts for the considered energy dependence and fit uncertainty of the yield. The derived cross sections are
shown in \tab{tab:XS} and in the figures later.
\begin{table*}[t]
\caption{Derived reaction cross sections in $\mu$barn.}
\label{tab:XS}
\center
\begin{tabular}{c c}
\multirow{8}{*}{
\begin{tabular}{c c c}
E$_{eff_{c.m.}}$ / MeV & $^{191}$Ir($\alpha$,n)$^{194}$Au & $^{191}$Ir($\alpha$,$\gamma$)$^{195}$Au \\
\hline
16.47 $\pm$ 0.10& 1420 $\pm$ 75 & -- \\
15.98 $\pm$ 0.10& 956 $\pm$ 50 & -- \\
15.49 $\pm$ 0.09& 329 $\pm$ 17 & 1.69 $\pm$ 0.14 \\
15.00 $\pm$ 0.09& 119 $\pm$ 6 & 0.46 $\pm$ 0.07 \\
14.51 $\pm$ 0.09& 39.0 $\pm$ 2.1 & 0.19 $\pm$ 0.05 \\
14.02 $\pm$ 0.08& 15.3 $\pm$ 0.8 & 0.08 $\pm$ 0.02 \\
13.50 $\pm$ 0.08& 1.50 $\pm$ 0.09& $<$ 0.05
\end{tabular}
} &
\multirow{8}{*}{
\begin{tabular}{c c c}
E$_{eff_{c.m.}}$ / MeV & $^{193}$Ir($\alpha$,n)$^{196m}$Au & $^{193}$Ir($\alpha$,n)$^{196}$Au \\
\hline
16.48 $\pm$ 0.10& 20.4 $\pm$ 1.7 & 1295$\pm$ 76 \\
15.99 $\pm$ 0.10& 13.4$\pm$ 1.1 & 952 $\pm$ 56 \\
15.50 $\pm$ 0.09& 3.9 $\pm$ 0.3 & 313 $\pm$ 18 \\
15.01 $\pm$ 0.09& 1.33$\pm$ 0.12& 121 $\pm$ 7 \\
14.52 $\pm$ 0.09& 0.34$\pm$ 0.04& 37.8 $\pm$ 2.3 \\
14.03 $\pm$ 0.08& 0.12$\pm$ 0.03& 16.3 $\pm$ 1.0 \\
13.51 $\pm$ 0.08& $<$ 0.02 & 1.64 $\pm$ 0.10
\end{tabular}
} \\
\\
\\
\\
\\
\\
\\
\end{tabular}
\end{table*}
\section{Discussion}
The experimental data have been compared with sta\-tis\-ti\-cal-model calculations performed with the SMARAGD code \cite{smaragd}.
\begin{figure}[t]
\includegraphics[width=0.99\columnwidth]{an}
\caption{\label{fig:an} $^{191}$Ir($\alpha$,n)$^{194}$Au and $^{193}$Ir($\alpha$,n)$^{196}$Au reaction cross sections compared with statistical model calculations. The black dots are the experimental data. Black solid, green dotted, red dashed, and blue dot-dashed lines are the calculations with the standard McFadden-Satchler potential, and with the modified potential with $a_E = 2.5, 2.0, 1.5$\,MeV, respectively.}
\end{figure}
The ($\alpha$,n) cross sections are solely sensitive to the $\alpha$-channel width \cite{Rauscher12-AJS}.
As can be seen in \fig{fig:an}, the standard McFadden-Satchler potential \cite{McFadden66-NP} does not reproduce well the measured
data. A good reproduction is found when using the modified energy-dependent potential of \cite{Sauerwein11-PRC}. In this approach
the energy-dependent depth of the imaginary part is given by
\begin{equation}
W(C,E_\mathrm{c.m.}^\alpha)=\frac{25}{1+e^{\left(0.9E_\mathrm{C}-E_\mathrm{c.m.}^\alpha \right)/a_E}} \quad \mathrm{MeV},
\end{equation}
where $E_\mathrm{C}$ is the height of the Coulomb barrier. Choosing $a_E=2.0$\,MeV gives the best description of the present
experimental data.
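For reference, the modified depth is a Fermi-type function of the center-of-mass energy; a minimal evaluation (with an illustrative Coulomb barrier height, not a value from the paper) reads:

```python
import math

def imaginary_depth(e_cm, e_coulomb, a_e=2.0):
    """Energy-dependent depth W (MeV) of the imaginary part of the potential.

    a_e = 2.0 MeV is the diffuseness that best describes the present data;
    e_coulomb is the height of the Coulomb barrier (MeV).
    """
    return 25.0 / (1.0 + math.exp((0.9 * e_coulomb - e_cm) / a_e))

# W rises from ~0 far below the barrier through 12.5 MeV at E = 0.9 E_C
# toward the standard 25 MeV depth well above the barrier.
```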
Using the same $\alpha$-channel width for calculating the ($\alpha$,$\gamma$) cross section, the model overestimates the experimental data (see
\fig{fig:191ag}). In this reaction channel, the calculated cross sections above the ($\alpha$,n) threshold are equally sensitive
to the $\alpha$-, neutron-, and $\gamma$-widths. Since the $\alpha$ width has been determined by the ($\alpha$,n)
reaction, the poor reproduction has to be ascribed to the neutron- and/or $\gamma$-widths. These, on the other hand, do not play a
role in determining the astrophysical reaction rates involving $\alpha$ particles in the $\gamma$ process.
\begin{figure}[htb]
\includegraphics[width=0.99\columnwidth]{ir191ag}
\caption{\label{fig:191ag} Same as \fig{fig:an} but for the $^{191}$Ir($\alpha$,$\gamma$)$^{195}$Au reaction.}
\end{figure}
\section{Summary}
Thick target yields of $\alpha$-induced reactions on iridium of natural isotopic composition were measured in the energy range of
$E_\alpha = 13.4$\,MeV to 17\,MeV with the activation method. The combination of X-ray detection with thick target yield measurements
has been performed in this mass region for the first time, allowing the reaction cross sections to be measured at lower energies than
ever before. From the measured thick target yields, reaction cross sections were derived and
compared with statistical model calculations. The results show that the recently suggested energy-dependent modification of the
widely used McFadden-Satchler $\alpha$+nucleus optical potential gives a good description of the experimental data. The
$\gamma$- and neutron widths above the ($\alpha$,n) threshold cannot be further constrained by the present data but are not
relevant for the astrophysical $\gamma$ process.
\section*{Acknowledgement}
This work was supported by NKFIH (K108459, K120666) and the EU COST Action CA16117 (ChETEC).
\section{Introduction}
According to the World Health Organization 2019 statistics, Cardiovascular Diseases (CVDs) contribute to nearly 34\% of all deaths worldwide. The most critical risk factor for CVD is elevated blood pressure, also known as hypertension \cite{paper28}. Therefore, early diagnosis of abnormal BP can help a person acquire timely treatment and avoid severe medical complications from CVDs.
Blood pressure is a vital physiological indicator of a person's heart condition \cite{paper30}. When the heart contracts, BP in the blood vessels reaches its maximum value, called Systolic Blood Pressure (SBP), and when the heart relaxes, BP reaches its minimum value, called Diastolic Blood Pressure (DBP). Additionally, the average BP over a cardiac cycle is termed the Mean Arterial Pressure (MAP). Hypertension occurs when an individual at rest has SBP above 140 mmHg or DBP above 90 mmHg \cite{paper53}. Conventional BP estimation in a clinical setting is performed using a cuff-based sphygmomanometer that requires the aid of a medical expert. Various factors like mental stress and diet contribute to fluctuations in BP over time \cite{paper54}; therefore, intermittent estimation of BP with a sphygmomanometer is not reliable when BP is unstable. The variable nature of BP has necessitated beat-to-beat BP analysis such as Blood Pressure Variability (BPV) \cite{paper36} and continuous BP monitoring.
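The three indices can be read directly off one cardiac cycle of a continuous BP waveform; a minimal sketch with made-up pressure samples (the sample values are purely illustrative):

```python
def bp_indices(beat_samples):
    """SBP, DBP and MAP (all mmHg) from one cardiac cycle of samples:
    SBP is the cycle maximum, DBP the minimum, MAP the cycle average."""
    return max(beat_samples), min(beat_samples), sum(beat_samples) / len(beat_samples)

def is_hypertensive(sbp, dbp):
    """Resting hypertension criterion cited above: SBP > 140 or DBP > 90 mmHg."""
    return sbp > 140 or dbp > 90

beat = [80, 84, 95, 121, 112, 98, 88, 82]  # illustrative samples, mmHg
sbp, dbp, mean_ap = bp_indices(beat)       # -> (121, 80, 95.0)
```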
The Invasive Arterial Line (IAL) \cite{paper32} approach is considered the gold standard for continuous BP estimation. The IAL procedure involves the insertion of intra-arterial catheters in arteries of high-risk or critically ill patients \cite{paper55}. Though known for its superior performance, the method carries the risk of medical complications such as infection, bleeding, clots, and nerve damage due to its invasive nature. As an alternative to invasive monitoring, the emergence of cuffless BP estimation methods \cite{paper56} offered a ubiquitous solution that is unobtrusive and non-invasive. Cuffless methods make predominant use of the Photoplethysmogram (PPG) signal and its derivatives; PPG signals have received widespread attention in the interpretation of various physiological parameters due to their potential to detect CVDs \cite{paper31}.
The PPG signal is a low-cost and straightforward representation of the heart's volumetric variation of blood flow. It is measured by an oximeter that illuminates the skin; the obtained reflection is directly correlated with the changes in the volume of blood flow. The versatility of the PPG signal in terms of its inference-to-efficiency ratio makes it a suitable prospect for estimating blood pressure in a resource-constrained environment. In recent years, optimizing deep learning models for real-time inference on resource-constrained devices has gained prominent interest \cite{paper33}, yet there is a dearth of deep learning-based BP prediction approaches evaluated on an edge platform.
This work proposes BP-Net, a signal-to-signal translation U-Net architecture that estimates Arterial BP (ABP) waveform from PPG signal input. Following inference of ABP, we derive SBP and DBP measures and benchmark our results based on international standards. We further experiment with the real-time inference of BP-Net on a resource-constrained edge device and evaluate performance based on inference time.
The paper is structured as follows: Section II reviews current work on blood pressure prediction using deep learning approaches; Section III describes the dataset and model architecture; Section IV presents experimental results of BP-Net evaluated against international standards and discusses how BP-Net compares with existing approaches; Section V concludes the paper with a scope for future work.
\section{Related Work}
Prior research on blood pressure estimation can be categorized into two groups: the Pulse Transit Time (PTT) technique and the regression technique. Pulse Transit Time is the time taken by a blood pressure wave to propagate between two sites in the cardiovascular system. PTT is measured as the time interval between the R peak of the Electrocardiogram (ECG) and the systolic peak of the fingertip PPG in a cardiac cycle. Since PTT is observed to be negatively correlated with BP \cite{paper35}, different approaches have been proposed to predict BP from PTT through calibration procedures \cite{paper37, paper43, paper44}.
Several machine learning approaches to BP estimation are based on the regression technique. Kachuee \textit{et al}. \cite{paper7} experimented with standard machine learning models such as Support Vector Machines and Random Forests to estimate SBP and DBP from features extracted from PPG and ECG signals. The authors of \cite{paper38} reviewed the problem of accuracy degradation in ML models and proposed a Recurrent Neural Network (RNN) architecture with Long Short-Term Memory (LSTM) networks for long-term BP prediction. Lee \textit{et al}. \cite{paper39} used a combination of ECG, PPG, and Ballistocardiogram (BCG) signals to train a Bi-LSTM network for beat-to-beat continuous BP estimation. Most prevalent regression-based approaches map an input PPG signal together with an ECG signal or physiological parameters to output SBP and DBP values. Although most of these approaches provide exceptional results, they require extensive feature engineering, an ECG signal (obtrusive to measure), or both. Slapničar \textit{et al}. \cite{paper40} and Shimazaki \textit{et al}. \cite{paper41} experimented with the raw PPG signal along with its first- and second-order derivatives to estimate SBP and DBP. However, their approaches did not generalize as well as existing methods.
In recent times, similarities between the ABP and PPG waveforms have attracted considerable interest \cite{paper42}\cite{paper49}. Exploiting this analogous relationship, Ibtehaz \textit{et al}. \cite{paper5} proposed PPG2ABP, a cascaded U-Net architecture that estimates the ABP waveform from the PPG waveform; DBP and SBP are then derived from the estimated ABP waveform by a standard peak detection algorithm \cite{paper43}. Similarly, Athaya \textit{et al}. \cite{paper6} performed signal-to-signal translation from PPG to ABP using a U-Net approach, and Harfiya \textit{et al}. \cite{paper8} used the PPG waveform along with its derivatives to train an LSTM network to estimate ABP.
The majority of current-day wearable devices that estimate BP utilize the PTT approach \cite{paper20} due to its non-invasive nature. Since the ABP waveform requires minimal pre-processing for estimation and also provides additional diagnostic information about the patient \cite{paper50}, we implement an ABP-based BP estimation framework for deployment on edge devices. It alleviates the extensive feature engineering involved in prevailing PTT-based approaches while providing appreciable real-time performance.
\section{Methodology}
\subsection{Dataset Description}
Physionet's Multi-parameter Intelligent Monitoring in Intensive Care (MIMIC) II Waveform database \cite{paper23} comprises recordings of various physiological signals and parameters from Intensive Care Unit (ICU) patients. For our experimentation, we use the MIMIC II-derived Cuffless Blood Pressure Estimation Data Set compiled by Kachuee \textit{et al}. \cite{paper7}. The dataset contains pre-processed waveform data of ECG, PPG, and ABP signals sampled at 125 Hz. Records with unusual BP values (SBP $\geq$ 180, SBP $\leq$ 80, DBP $\geq$ 130, DBP $\leq$ 60) or missing data were excluded from the dataset. Table I presents the statistics of the dataset.
\begin{table}[!ht]
\renewcommand{\arraystretch}{1.3}
\caption{Blood Pressure Ranges in the dataset}
\label{tab:Ranges}
\centering
\begin{tabular}{|l|l|l|l|l|}
\hline
&
\begin{tabular}[c]{@{}l@{}}Min\\ (mmHg)\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Max\\ (mmHg)\end{tabular} &
\begin{tabular}[c]{@{}l@{}}STD\\ (mmHg)\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Mean\\ (mmHg)\end{tabular} \\ \hline
DBP & 60.2 & 128.3 & 9.2 & 70.9 \\ \hline
MAP & 68.6 & 136.2 & 9.7 & 93.2 \\ \hline
SBP & 81.5 & 178.8 & 18.7 & 137.9 \\ \hline
\end{tabular}
\end{table}
\subsection{Data Preprocessing}
To remove noise from the raw physiological signals, Kachuee \textit{et al}. \cite{paper7} performed Discrete Wavelet Transform (DWT) decomposition \cite{paper24} to 10 levels with Daubechies 8 (db8) as the mother wavelet. Compared to existing filtering methods, the DWT technique is adopted for its better phase response, lower computational complexity, and adaptability to different Signal-to-Noise Ratio (SNR) regimes. Following DWT, very high-frequency components between 250 Hz and 500 Hz and very low-frequency components in the range of 0 to 0.25 Hz were eliminated by zeroing their decomposition coefficients. Conventional wavelet denoising with soft Rigrsure thresholding \cite{paper25} is then applied to the remaining coefficients. Finally, the signal is reconstructed from the processed coefficients.
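The pipeline above (decompose, zero out-of-band coefficients, soft-threshold the rest, reconstruct) can be sketched as follows. For a self-contained illustration we substitute an FFT-based decomposition for the db8 wavelet transform and the universal threshold for Rigrsure; both substitutions are ours, not the authors'.

```python
import numpy as np

def soft_threshold(c, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(signal, fs, low=0.25, high=250.0):
    """Zero sub-`low` Hz and supra-`high` Hz components, soft-threshold
    the remaining coefficients, and reconstruct the signal."""
    signal = np.asarray(signal, dtype=float)
    coeffs = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    coeffs[(freqs < low) | (freqs > high)] = 0.0
    # universal threshold sigma*sqrt(2 ln N) as a stand-in for Rigrsure
    sigma = np.median(np.abs(coeffs)) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs = soft_threshold(coeffs.real, t) + 1j * soft_threshold(coeffs.imag, t)
    return np.fft.irfft(coeffs, n=len(signal))
```

The real pipeline operates on wavelet coefficients rather than Fourier bins, but the band-elimination and soft-shrinkage steps have the same shape.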
\begin{figure*}
\centering
\includegraphics[width=\textwidth, height=8cm]{arch2.png}
\caption{BP-Net architecture}
\end{figure*}
Considering the computation required to handle the extensive data ($\approx$741.53 hours) obtained, the pre-processed data is down-sampled while prioritizing the preservation of important information. The down-sampling yielded 127260 episodes ($\approx$353.5 hours) from 948 subjects, each episode comprising 10 seconds of waveform data.
\subsection{BP-Net Architecture}
Motivated by the advances of the U-Net in several medical-domain applications \cite{paper52, suresh2020endtoend}, BP-Net was developed as an extension of the U-Net framework proposed by Ronneberger \textit{et al}. \cite{paper22}. Analogous to a standard encoder-decoder network, the U-Net consists of a contraction path (encoder) and an expansion path (decoder) bridged by skip connections between symmetrical layers. The BP-Net architecture combines several blocks serving different purposes. The input flows first through the Average Ensemble Block, followed by the Contraction Blocks (CB), the Expansion Blocks (EB), and finally the Denoising Block. Additionally, the interiors of the CB and EB are supplemented by the Inception-Residual (IR) Block. The implementation details of BP-Net are described in Section IV A. The design and function of each block are as follows:
\subsubsection{\textbf{Average Ensemble Block}}
Before forwarding the input signal to the contraction path, the signal is processed to improve its Signal-to-Noise Ratio through ensemble averaging. Multiple variants of the input signal are created by passing it through a convolutional layer, thereby increasing the number of channels. Averaging is then performed by convolution across the channels to derive a jitter-free representative signal. The ensemble averaging leads to faster convergence during training.
\subsubsection{\textbf{Inception-Residual Block}}
The IR block applies multiple convolutional filters of different kernel sizes in parallel. The outputs of these simultaneous convolutions are concatenated channel-wise to produce the block's output. The IR block also contains a residual connection to mitigate the problem of vanishing gradients.
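A minimal numpy sketch of the block's dataflow follows: parallel convolutions, channel-wise stacking, a projection standing in for a learned $1\times1$ convolution, and the residual add. The kernel sizes and weights are illustrative assumptions, not the trained BP-Net filters.

```python
import numpy as np

def conv1d_same(x, kernel):
    """1-D convolution with 'same' padding."""
    return np.convolve(x, kernel, mode="same")

def inception_residual(x, kernels):
    """Parallel convolutions of different kernel sizes, channel-wise
    concatenation, projection back to one channel (stand-in for a
    learned 1x1 conv), plus a residual connection."""
    branches = np.stack([conv1d_same(x, k) for k in kernels])  # (n_branches, T)
    projected = branches.mean(axis=0)
    return projected + x  # residual connection

# toy branch kernels of sizes 3, 5, 7 (moving averages)
kernels = [np.ones(3) / 3, np.ones(5) / 5, np.ones(7) / 7]
x = np.sin(np.linspace(0, 2 * np.pi, 100))
y = inception_residual(x, kernels)
```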
\subsubsection{\textbf{Contraction Block}}
The Contraction Block performs down-sampling by passing the input through padded convolutional layers that double the number of channels. The output of the padded convolutional layers is passed to batch normalization followed by a Leaky ReLU activation. Strided convolution is performed on the activation output, and the intermediate result is finally passed to the IR block to produce the Contraction Block's output feature map.
\subsubsection{\textbf{Expansion Block}}
The Expansion Block performs up-sampling using padded convolutional layers that halve the number of channels. Batch normalization and Leaky ReLU activation follow. A strided transposed convolution is then applied to the activation output to reduce the number of channels, and the result is passed to the IR block. Furthermore, to project features from the contraction path to the expansion path, the final EB output is produced by concatenating the output feature map of the previous EB with the output feature map of the corresponding CB in the contracting path.
\subsubsection{\textbf{Denoising Block}}
The Denoising Block at the end of the architecture produces the final output of the network by learned up-sampling to match the dimension of the ground-truth output. It also performs a denoising operation to output a less-distorted signal.
\subsection{Self Supervised Pretraining}
Unsupervised learning methods for encoder-decoder architectures focus on minimizing the reconstruction error. Although unsupervised learning yields useful data representations, it suffers from a significant drawback: the model learns to reconstruct each input in isolation, neglecting the relationships among other data points in the dataset.
Self-Supervised Learning (SSL) aims to exploit the semantic relationship between neighboring samples in the dataset to guide the learning of more representative features and act as a comprehensive feature extraction process. Following this intuition, we initially train a model to reconstruct the input PPG waveform. After training, the learned encoder weights are frozen and used to train another model that performs the required task of reconstructing the ABP signal from the PPG signal. Thereby, the encoder, which explicitly captures intermediate waveform representations of the PPG signal, is reused for estimating the output ABP signal.
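The two-stage scheme can be illustrated with a toy linear encoder-decoder trained by gradient descent; the shapes, data, and linear model are illustrative stand-ins for BP-Net, not its actual layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear(W_enc, W_dec, X, Y, lr=0.01, steps=200, freeze_encoder=False):
    """Gradient descent on ||X @ W_enc @ W_dec - Y||^2 (toy linear model)."""
    for _ in range(steps):
        H = X @ W_enc
        err = H @ W_dec - Y
        grad_dec = H.T @ err / len(X)
        grad_enc = X.T @ (err @ W_dec.T) / len(X)
        W_dec -= lr * grad_dec
        if not freeze_encoder:
            W_enc -= lr * grad_enc
    return W_enc, W_dec

X = rng.normal(size=(256, 8))          # stand-in PPG episodes
Y = 0.5 * (X @ rng.normal(size=(8, 8)))  # stand-in ABP targets

# Stage 1: self-supervised pretraining, reconstruct the input (PPG -> PPG)
W_enc = 0.1 * rng.normal(size=(8, 4))
W_dec = 0.1 * rng.normal(size=(4, 8))
W_enc, W_dec = train_linear(W_enc, W_dec, X, X)

# Stage 2: freeze the encoder, train a fresh decoder for PPG -> ABP
W_enc_frozen = W_enc.copy()
W_dec2 = 0.1 * rng.normal(size=(4, 8))
W_enc, W_dec2 = train_linear(W_enc, W_dec2, X, Y, freeze_encoder=True)
```

The key property is that the encoder weights learned in stage 1 are untouched by stage 2.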
\section{Experiments and Results}
\begin{figure}
\centering
\subfloat[Ground-Truth]{
\label{ref_label2a}

\includegraphics[width=0.48\textwidth]{ground-truth.png}
}
\newline
\subfloat[BP-Net]{
\label{ref_label2b}
\includegraphics[width=0.48\textwidth]{network.png}
}
\newline
\subfloat[Comparison]{
\label{ref_label2c}
\includegraphics[width=0.48\textwidth]{comparison.png}
}
\caption{Waveform interpretation of signals}
\label{ref_label_overall}
\end{figure}
\subsection{Implementation}
For experimentation, the BP-Net architecture from Section III C comprises 5 Contraction Blocks, 5 Expansion Blocks, 1 Average Ensemble Block, and 1 Denoising Block. The Adam optimizer with an initial learning rate of 0.0001 was used to optimize the model's weights by minimizing the Mean Absolute Error (MAE) loss. The hyper-parameters of the model configuration were chosen after extensive empirical analysis.
From the 127260 episodes, 100000 samples were partitioned into training data and 27260 into testing data. In MIMIC-II, each subject's records appear contiguously; to reduce overlap between the training and testing sets, K-fold cross-validation is suggested \cite{paper7}, and 10-fold cross-validation is therefore performed. Additionally, the input PPG and output ABP signals were mean normalized to facilitate training of the deep learning model.
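One plausible reading of the normalization and fold construction (subtract each episode's mean; shuffle and split indices into 10 disjoint folds) is sketched below; the exact normalization and split used for BP-Net may differ in detail.

```python
import numpy as np

def mean_normalize(episodes):
    """Subtract each episode's mean (per-episode mean normalization)."""
    episodes = np.asarray(episodes, dtype=float)
    return episodes - episodes.mean(axis=1, keepdims=True)

def k_fold_indices(n_samples, k=10, seed=0):
    """Shuffled indices split into k disjoint folds."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    return np.array_split(idx, k)

episodes = np.arange(20.0).reshape(4, 5)  # 4 toy episodes of length 5
normed = mean_normalize(episodes)
folds = k_fold_indices(100, k=10)
```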
\vspace{-.5cm}
\begin{align*}
SBP &= \max(ABP) \\
MAP &= \operatorname{mean}(ABP) \\
DBP &= \min(ABP)
\end{align*}
With 10-fold cross-validation, 10 BP-Net networks were trained for 300 epochs each, with a learning rate scheduler that reduced the learning rate by a factor of 10 every 100 epochs. The best-performing fold was chosen, and its model is used to evaluate the test data. To derive the SBP and DBP values from the predicted ABP, the maximum and minimum values of each episode are computed.
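Deriving the per-episode measures from a predicted ABP episode is then direct:

```python
import numpy as np

def bp_from_abp(abp_episode):
    """Derive SBP, MAP, DBP (mmHg) from one predicted ABP episode."""
    abp_episode = np.asarray(abp_episode, dtype=float)
    return abp_episode.max(), abp_episode.mean(), abp_episode.min()

# toy ABP episode in mmHg
sbp, map_, dbp = bp_from_abp([78, 92, 120, 95, 80])
```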
\subsection{Performance Evaluation Metrics}
\subsubsection{\textbf{BHS Standard}}
The British Hypertension Society (BHS) \cite{paper26} provides a structured protocol for evaluating blood pressure measuring devices and methods. The BHS standard grades performance by the percentage of cumulative errors falling below three thresholds (5 mmHg, 10 mmHg, and 15 mmHg). For a method to be granted a specific grade, the cumulative error percentages must meet that grade's threshold in every category, as detailed in Table II.
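A helper applying the grading rules of Table II might look like this; the grade "D" below denotes failing all three grades, and the error list is a toy example.

```python
import numpy as np

def bhs_grade(errors_mmhg):
    """Cumulative error percentages at 5/10/15 mmHg and the BHS grade."""
    e = np.abs(np.asarray(errors_mmhg, dtype=float))
    pct = [100.0 * np.mean(e < t) for t in (5, 10, 15)]
    requirements = {"A": (60, 85, 95), "B": (50, 75, 90), "C": (40, 65, 85)}
    for grade, req in requirements.items():
        if all(p >= r for p, r in zip(pct, req)):
            return grade, pct
    return "D", pct

grade, pct = bhs_grade([1, 2, 3, 4, 6, 7, 11, 12, 16, 2])
```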
\subsubsection{\textbf{AAMI Standard}}
Similar to BHS, the Association for the Advancement of Medical Instrumentation (AAMI) \cite{paper27} also sets rules for validating the effectiveness of blood pressure measuring devices and methods. According to the AAMI standard, the Mean Error (ME) and Standard Deviation (SD) must be within 5 mmHg and 8 mmHg, respectively. In addition, the AAMI standard is applicable only when a minimum of 85 subjects is involved in the BP estimation.
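The corresponding AAMI check reduces to the mean and standard deviation of the signed errors (the error list and subject count below are toy values):

```python
import numpy as np

def aami_check(errors_mmhg, n_subjects):
    """AAMI criterion: |ME| <= 5 mmHg, SD <= 8 mmHg, and >= 85 subjects."""
    e = np.asarray(errors_mmhg, dtype=float)
    me, sd = e.mean(), e.std(ddof=1)
    passed = abs(me) <= 5.0 and sd <= 8.0 and n_subjects >= 85
    return passed, me, sd

ok, me, sd = aami_check([0.5, -1.0, 2.0, -0.5, 1.0], n_subjects=948)
```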
\subsubsection{\textbf{Mean Absolute Error (MAE)}}
Apart from the BHS and AAMI standards, blood pressure estimation methods are compared based on the Mean Absolute Error, formulated in Equation 1.
\begin{equation}
MAE = \frac{1}{N}\sum_{i=1}^{N}\left | e_{i} \right |
\end{equation}
Here {\em e} denotes the difference between the ground-truth and predicted BP values in mmHg, and {\em N} the number of test samples.
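Equation 1 translates directly into code:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error in mmHg (Equation 1)."""
    return float(np.mean(np.abs(np.asarray(y_true, dtype=float)
                                - np.asarray(y_pred, dtype=float))))

# toy ground-truth vs. predicted SBP values
err = mae([120, 80, 95], [118, 83, 95])
```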
\subsection{Performance Evaluation Results}
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.3}
\caption{Evaluation using BHS Standard}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\multirow{2}{*}{}} & \multicolumn{3}{c|}{Cumulative Error Percentage} \\ \cline{3-5}
\multicolumn{2}{|c|}{} & \textless{}5mmHg & \textless{}10mmHg & \textless{}15mmHg \\ \hline
\multirow{3}{*}{BP-Net} & DBP & 84.34\% & 95.19\% & 98.14\% \\ \cline{2-5}
& MAP & 85.64\% & 94.40\% & 97.68\% \\ \cline{2-5}
& SBP & 69.21\% & 86.01\% & 92.19\% \\ \hline
\multirow{3}{*}{BHS} & Grade A & 60\% & 85\% & 95\% \\ \cline{2-5}
& Grade B & 50\% & 75\% & 90\% \\ \cline{2-5}
& Grade C & 40\% & 65\% & 85\% \\ \cline{1-5}
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.3}
\caption{Evaluation using AAMI Standard}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{} & ME & SD & Passed \\ \hline
\multirow{3}{*}{BP-Net} & DBP & 0.594 & 4.778 & Yes \\ \cline{2-5}
& MAP & 0.425 & 4.784 & Yes \\ \cline{2-5}
& SBP & -0.225 & 8.504 & No \\ \hline
AAMI Standard & & \textless{}=5 & \textless{}=8 & \\ \hline
\end{tabular}
\end{table}
Table II and Table III present BP-Net's performance based on the BHS and AAMI standards, respectively. As observed from Table II, our method achieves \textbf{Grade A} for DBP and MAP estimation and \textbf{Grade B} for SBP estimation under the BHS standard. From Table III, we observe that our method satisfies the AAMI requirements for DBP and MAP estimation. SBP estimation falls short in both the BHS and AAMI standards by a narrow margin. Under the BHS standard, the model misses Grade A by about 3\% in the 15 mmHg error category while satisfying the 5 mmHg and 10 mmHg categories, thus achieving Grade B. Under AAMI, the method fails to satisfy the SD criterion. This shortfall in SBP estimation is prevalent in other existing works \cite{paper7, paper5, paper21, paper60} on the MIMIC database and is generally attributed to the high variance of SBP (Table I) compared to DBP and MAP. In terms of MAE, BP-Net achieved 5.16 mmHg and 2.89 mmHg for SBP and DBP prediction, respectively.
\subsection{Evaluation of Inference Time}
Continuous BP monitoring demands fast, repeated inference. To facilitate real-time application of our model, inference time is therefore a pivotal evaluation metric. \textit{Inference time} is the time taken by the model to map real-time input data to the desired output. Since our work targets continuous BP monitoring, the time taken by the model to convert the PPG signal to the ABP signal is crucial.
\begin{table}[H]
\renewcommand{\arraystretch}{1.3}
\caption{Edge device specification}
\centering
\begin{tabular}{|l|l|}
\hline
SoC & Broadcom 2711, Quad-core Cortex A72, 64-bit \\ \hline
RAM & 4GB \\ \hline
Operating Power & 5V @ 3A \\ \hline
\end{tabular}
\end{table}
To estimate the inference time, our model was deployed on a resource-constrained, low-cost edge device, the Raspberry Pi 4 Model B, with the specifications listed in Table IV. We observed a time of 42.53 {\em ms} to convert 10 seconds (one episode) of PPG signal to an ABP signal, which translates to 4.25 {\em ms} per second of PPG signal.
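Such a measurement can be taken with a simple wall-clock harness of the following form; the model here is a placeholder identity function, not BP-Net.

```python
import time

def time_inference(model_fn, signal, repeats=10):
    """Median wall-clock time (ms) for one forward pass over an episode."""
    timings = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        model_fn(signal)
        timings.append((time.perf_counter() - t0) * 1000.0)
    timings.sort()
    return timings[len(timings) // 2]

# stand-in "model" over a 10 s episode sampled at 125 Hz
episode = [0.0] * 1250
ms_per_episode = time_inference(lambda s: s, episode)
ms_per_second = ms_per_episode / 10.0
```

Taking the median over several repeats damps scheduler jitter, which matters on a device as small as a Raspberry Pi.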
To the best of our knowledge, there exists no published work on deep learning-based BP estimation under edge constraints; a direct comparison of inference times with other works is therefore not possible.
\subsection{Comparison with existing approaches}
A comparative analysis of existing approaches based on MAE and the BHS and AAMI international standards is presented in Table V, which covers approaches that map the PPG waveform to the ABP waveform and subsequently to SBP and DBP.
\begin{table}[!ht]
\caption{Results of ABP estimation approaches}
\centering
\renewcommand{\arraystretch}{1.3}
\begin{threeparttable}[ht]
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Dataset} & \multicolumn{2}{c|}{MAE} & \multicolumn{2}{c|}{BHS/AAMI\tnote{*}} \\ \cline{3-6}
& & SBP & DBP & SBP & DBP \\ \hline
{\cite{paper6}} & \begin{tabular}[c]{@{}c@{}}100 subjects\\ (MIMIC II, III)\end{tabular} & 3.68 & 1.97 & A/P & A/P \\ \hline
{\cite{paper5}} & \begin{tabular}[c]{@{}c@{}}942 subjects\\ (MIMIC II)\end{tabular} & 5.73 & 3.45 & B/F & A/P \\ \hline
{\cite{paper8}} & \begin{tabular}[c]{@{}c@{}}5289 subjects\\ (MIMIC II)\end{tabular} & 4.05 & 2.41 & A/P & A/P \\ \hline
BP-Net & \begin{tabular}[c]{@{}c@{}}942 subjects \\ (MIMIC II)\end{tabular} & 5.16 & 2.89 & B/F & A/P \\ \hline
\end{tabular}
\begin{tablenotes}
\item[*] BHS, letter represents Grade granted by BHS standard. \\
AAMI, P represents Satisfied and F represents Not Satisfied.
\end{tablenotes}
\end{threeparttable}
\vspace{-2mm}
\end{table}
From the collated information in Table V, Athaya \textit{et al}. \cite{paper6} present a U-Net approach similar to BP-Net. However, they use far fewer subjects than other prominent existing approaches, so their results may not generalize. Harfiya \textit{et al}. \cite{paper8} incorporate first- and second-order derivatives along with the PPG signal as input to train their model. Although their model achieves exemplary performance over many subjects, the complexity of the pre-processing involved makes their approach infeasible for edge deployment. Ibtehaz \textit{et al}. \cite{paper5} use two cascaded U-Net architectures to estimate BP; the computational weight demanded by their approach makes it impractical for inference in a real-time environment. From the perspective of edge implementation, the proposed approach must involve minimal computational power and complexity. BP-Net overcomes the limitations of \cite{paper5, paper8} by using only the PPG signal to train a standalone U-Net architecture, thereby reducing the computational complexity involved in porting the model to an edge device.
Though a diverse body of work exists on blood pressure estimation using deep learning, comparison across established works remains difficult. The main reason for this incongruity is the inconsistent evaluation criteria followed by most of the proposed methodologies. Several works develop proprietary datasets and evaluate their methods against their own dataset parameters. This is an issue because proprietary datasets tend to include very few subjects compared to public datasets. The requirement for a public dataset is satisfied by the MIMIC database; however, although appreciable work has been done on MIMIC-II for blood pressure estimation, different works lack a common norm on the number of subjects and the evaluation parameters used.
\section{Conclusion}
Prevalent non-invasive BP estimation procedures require extensive feature engineering on the PPG and/or other signals. We alleviate this problem by proposing a deep learning-based solution deployable on resource-constrained devices. In this paper, we develop a U-Net architecture that performs signal-to-signal translation from the PPG signal to the ABP signal to estimate SBP and DBP values. We further benchmark the inference time of our model on a resource-constrained Raspberry Pi 4 device to validate its applicability on an embedded or edge platform. Although the performance of BP-Net is comparable to well-known existing approaches, it can be further improved by increasing the number of subjects used for experimentation.
\bibliographystyle{IEEEtran}
{\footnotesize
\section{Introduction}
According to the World Health Organization 2019 statistics, Cardiovascular Diseases (CVDs) contribute to nearly 34\% of all deaths worldwide. The most critical risk factor for CVD is elevated blood pressure, also known as hypertension \cite{paper28}. Thereby early diagnosis of abnormal BP can aid a person in acquiring timely treatment and avoid facing severe medical complications by CVDs.
Blood pressure is a vital physiological indicator of a person's heart condition \cite{paper30}. When the heart contracts, BP in blood vessels reaches its maximum value called Systolic Blood Pressure (SBP), and when the heart relaxes, BP in blood vessels reaches its minimum value called Diastolic Blood Pressure (DBP). Additionally, the average BP in a cardiac cycle is termed as Mean Average Pressure (MAP). Hypertension occurs when an individual at rest has SBP more than 140 mmHg or DBP more than 90 mmHg \cite{paper53}. Conventional BP estimation in a clinical setting is performed using a cuff-based Sphygmomanometer that requires the aid of a medical expert. Various factors like mental stress, diet, etc., contribute to fluctuations in BP over time \cite{paper54}. Thereby intermittent estimation of BP by Sphygmomanometer is not reliable for unstable BP measures. The variable nature of BP has necessitated the need for beat-to-beat BP analysis such as Blood Pressure Variability (BPV)\cite{paper36} and continuous BP monitoring.
The Invasive Arterial Line (IAL) \cite{paper32} approach is considered as the gold standard for continuous BP estimation. The IAL procedure follows the insertion of Intra-arterial catheters in arteries of high-risk or critically ill patients \cite{paper55}. Though known for its superior performance, the method underlies the risk of medical complications such as infection, bleeding, clots, and nerve damage due to its invasive nature. As an alternate to pervasive monitoring, the emergence of cuffless BP estimation methods \cite{paper56} offered a ubiquitous solution that is unobtrusive and non-invasive. PPG signals in the interpretation of various physiological parameters have received widespread attention due to their potential to detect CVDs \cite{paper31}. Cuffless methods projected predominant use of Photoplethysmogram (PPG) signal and its derivatives.
PPG signal is a low-cost and straightforward representation of the heart's volumetric variation of blood flow. It is measured by an oximeter that illuminates the skin, and the reflection obtained is directly correlated to the changes in the volume of blood flow. The versatility of the PPG signal in terms of inference to efficiency ratio makes it a suitable prospect for estimating blood pressure in a resource-constrained environment. In recent years, optimizing deep learning models for real-time inference on resource-constrained devices has gained prominent interest \cite{paper33}. There is a dearth of work in deep learning-based BP prediction approaches experimented on an edge platform for BP estimation.
This work proposes BP-Net, a signal-to-signal translation U-Net architecture that estimates Arterial BP (ABP) waveform from PPG signal input. Following inference of ABP, we derive SBP and DBP measures and benchmark our results based on international standards. We further experiment with the real-time inference of BP-Net on a resource-constrained edge device and evaluate performance based on inference time.
The paper is structured as follows, Section II details current work performed under blood pressure prediction using deep learning approaches, Section III comprises dataset and model architecture information. Section IV contains experimental results of BP-Net based on international standards and also discusses how BP-Net compares with existing approaches. In Section V, we conclude the paper with a scope for future work.
\section{Related Work}
Prior research on blood pressure estimation can be categorized into two groups, Pulse Transit Time (PTT) Technique, and Regression Technique. Pulse Transit Time is the time taken by a blood wave to propagate between two places in a cardiovascular system. PTT is measured as the time interval between the R peak of the Electrocardiogram (ECG) and the systolic peak of fingertip PPG in a cardiac cycle. Since PTT is observed to be negatively correlated with BP \cite{paper35}, different approaches have been proposed to predict BP from PTT by calibration procedures \cite{paper37, paper43, paper44}.
Several machine learning approaches to BP estimation are based on the Regression Technique. Kachuee \textit{et al}. \cite{paper7} experimented with standard machine learning models like Support Vector Machine, Random Forest to estimate SBP and DBP by feature extraction from PPG and ECG signals. The authors of \cite{paper38} reviewed the problem of accuracy reduction in ML models and proposed Recurrent Neural Network (RNN) architecture with Long Short Term Memory (LSTM) networks for long-term BP prediction. Lee \textit{et al}. \cite{paper39} used a combination of ECG, PPG, and Ballistocardiogram (BCG) signals to train a Bi-LSTM network for beat-to-beat continuous BP estimation. Major prevalent regression-based approaches map input PPG signal and ECG signal or physiological parameters to output SBP and DBP values. Although most of the approaches provide exceptional results, they require extensive feature engineering, ECG signal (obtrusive in measurement), or both. Slapničar \textit{et al}. \cite{paper40} and Shimazaki \textit{et al}. \cite{paper41} experimented with raw PPG signal as input along with its first and second-order derivatives to estimate SBP and DBP. However, performance-wise their approaches did not generalize well compared to existing methods.
In recent times, similarities between ABP and PPG waveform have attracted considerable interest \cite{paper42}\cite{paper49}. Considering the analogous relationship between ABP and PPG, Ibtehaz \textit{et al}. \cite{paper5} proposed PPG2ABP, a cascaded U-Net architecture to estimate ABP waveform from PPG waveform. From the estimated ABP waveform, DBP and SBP are derived by standard peak detection algorithm \cite{paper43}. Similarly Athaya \textit{et al}. \cite{paper6} performed signal-to-signal translation from PPG to ABP using a U-Net approach and Harfiya \textit{et al}. \cite{paper8} used PPG waveform along with its derivatives to train a LSTM network to estimate ABP.
Majority of current-day wearable devices that estimate BP utilize the PTT \cite{paper20} approach due to its non-invasive requirements. Since ABP waveform requires minimal pre-processing for estimation and also provides additional diagnostic information about the patient \cite{paper50}, we implement an ABP-based BP estimation framework to be deployed on edge devices that alleviates extensive feature engineering involved with prevailing PTT-based approaches while providing appreciable performance in real-time.
\section{Methodology}
\subsection{Dataset Description}
Physionet's Multi-parameter Intelligent Monitoring in Intensive Care (MIMIC) II Waveform database \cite{paper23} comprises recordings of various physiological signals and physiological parameters from Intensive Care Unit (ICU) patients. For our experimentation, we use MIMIC II derived cuffless Blood Pressure Estimation Data Set compiled by Kachuee \textit{et al}. \cite{paper7}. The dataset contains pre-processed waveform data of ECG, PPG, and ABP signals sampled at 125 Hz. Signals with unusual values of BP such as very high/low (SBP $\geq$ 180, SBP $\leq$ 80, DBP $\geq$ 130, DBP $\leq$ 60) or missing data were excluded from the dataset. Table I presents the statistics of the dataset.
\begin{table}[!ht]
\renewcommand{\arraystretch}{1.3}
\caption{Blood Pressure Ranges in the dataset}
\label{tab:Ranges}
\centering
\begin{tabular}{|l|l|l|l|l|}
\hline
&
\begin{tabular}[c]{@{}l@{}}Min\\ (mmHg)\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Max\\ (mmHg)\end{tabular} &
\begin{tabular}[c]{@{}l@{}}STD\\ (mmHg)\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Mean\\ (mmHg)\end{tabular} \\ \hline
DBP & 60.2 & 128.3 & 9.2 & 70.9 \\ \hline
MAP & 68.6 & 136.2 & 9.7 & 93.2 \\ \hline
SBP & 81.5 & 178.8 & 18.7 & 137.9 \\ \hline
\end{tabular}
\end{table}
\subsection{Data Preprocessing}
To remove noise from the raw extracted physiological signals, Kachuee \textit{et al}. \cite{paper7} performed Discrete Wavelet Decomposition (DWT) \cite{paper24} to 10 decomposition levels with Daubechies 8 (db8) as the mother wavelet. Compared to existing filtering methods, the DWT technique is adopted due to better phase response, efficiency in terms of computational complexity, and adaptability to different Signal to Noise Ratio (SNR) regimes. Following DWT, very high-frequency components between 250 Hz and 500 Hz and very low-frequency components corresponding to the range of 0 to 0.25 Hz were eliminated by zeroing their decomposition coefficients. Further conventional wavelet denoising is performed on the remaining decomposition coefficients with soft Rigrsure thresholding \cite{paper25}. Finally, reconstruction of the decomposition is carried out to output a clean processed signal.
\begin{figure*}
\centering
\includegraphics[width=\textwidth, height=8cm]{arch2.png}
\caption{BP-Net architecture}
\end{figure*}
Considering the computational need required to handle the extensive data ($\approx$741.53 hours) obtained, the pre-processed data is subjected to down-sampling, prioritizing the preserving of important information. The down-sampling technique captured 948 subjects worth 127260 counts of episodical data ($\approx$353.5 hours) with each episode attributing to 10-second long waveform data.
\subsection{BP-Net Architecture}
Motivated by the advancements of U-Net in several medical domain applications \cite{paper52, suresh2020endtoend}, BP-Net was developed as an extension of the U-Net framework proposed by Ronneberger \textit{et al}.\cite{paper22}. Analogous to standard encoder-decoder network, the U-Net consists of a contraction path (encoder) and an expansion path (decoder) bridged by skip connections between symmetrical layers. The architecture of BP-Net combines various blocks serving different purposes. The sequence of flow of input is initially through the Average Ensemble Block followed by Contraction Blocks (CB), Expansion Blocks (EB), and ultimately through the Denoising Block. Additionally, the interior of CB and EB are supplemented by the Inception-Residual (IR) Block. The implementation details of BP-Net are described in Section IV A. The design and function of each block are as follows:
\subsubsection{\textbf{Average Ensemble Block}}
Before forwarding the input signal to the contraction path, the signal is processed to improve its Signal-to-Noise Ratio through ensemble averaging. Multiple variants of the input signal are created by passing it through a convolutional layer, thereby increasing the number of channels. Averaging is then performed by convolution across the channels to derive a representative, jitter-free signal. The ensemble averaging leads to faster convergence during training.
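A minimal sketch of this block, with the learned convolution kernels replaced by fixed random ones for illustration:

```python
import numpy as np

def ensemble_average(signal, n_variants=8, klen=5, seed=0):
    # Expand one trace into several convolved "variants" (standing in
    # for the learned channel-expanding convolution), then average
    # across channels to obtain a single representative signal.
    rng = np.random.default_rng(seed)
    kernels = rng.normal(size=(n_variants, klen)) / klen
    variants = np.stack([np.convolve(signal, k, mode="same")
                         for k in kernels])
    return variants.mean(axis=0)
```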
\subsubsection{\textbf{Inception-Residual Block}}
The IR block uses multiple convolutional filters of different kernel sizes to perform simultaneous convolutions. The outputs of these parallel convolutions are then concatenated channel-wise to produce the block's output. The IR block also contains a residual connection to mitigate the problem of vanishing gradients.
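The block can be sketched as follows, with simple box filters standing in for the learned kernels and a channel mean standing in for the fusing convolution:

```python
import numpy as np

def inception_residual(x, kernel_sizes=(3, 5, 7)):
    # Parallel convolutions with different kernel sizes, channel-wise
    # stacking of the branch outputs, fusion, and a residual (identity)
    # connection to mitigate vanishing gradients.
    branches = [np.convolve(x, np.ones(k) / k, mode="same")
                for k in kernel_sizes]
    fused = np.stack(branches).mean(axis=0)   # stand-in for 1x1 conv
    return fused + x                          # residual connection
```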
\subsubsection{\textbf{Contraction Block}}
The Contraction Block accomplishes the down-sampling operation by passing the input through padded convolutional layers that double the number of channels. The output from the padded convolutional layers is then passed through batch normalization followed by Leaky ReLU activation. Strided convolution is performed on the activation outputs, and finally the intermediate output is passed to the IR block to produce the Contraction Block's output feature map.
\subsubsection{\textbf{Expansion Block}}
The Expansion Block performs an up-sampling operation by using padded convolutional layers to halve the number of channels. Further, batch normalization and Leaky ReLU activation operations are performed. Strided transposed convolution is carried out on the activation output to reduce the number of channels and pass it to the IR block. Furthermore, for the projection of features from the contraction path to the expansion path, the final EB output is produced by concatenation of the output feature map of the previous EB in the expansion path with the output feature map of the corresponding CB in the contracting path.
\subsubsection{\textbf{Denoising Block}}
The Denoising Block present at the end of the architecture produces the final output of the network by learnt up-sampling to match the ground truth output dimension. It also performs a denoising operation to output a less-distorted signal.
\subsection{Self Supervised Pretraining}
Unsupervised learning methods for encoder-decoder architectures focus on minimizing the reconstruction error. Although unsupervised learning leads to successful data representation, it suffers from a significant drawback: the model learns entirely from single-point abstraction, i.e., the network learns to reconstruct its output while neglecting the other data points present in the dataset.
Self-Supervised Learning (SSL) aims to exploit the semantic relationship between neighboring samples in the dataset to direct learning toward more representative features and to act as a comprehensive feature extraction process. Following the SSL intuition, we initially train the model to reconstruct the input PPG waveform. After training, the learned encoder weights are frozen and used to train another model that performs the required task of reconstructing the ABP signal from the PPG signal. Thereby, the encoder part of the model, which explicitly captures intermediate waveform representations of the PPG signal, is fine-tuned for estimating the output ABP signal.
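The two-phase recipe can be illustrated with a toy linear encoder/decoder; shapes, learning rate, and step counts below are arbitrary stand-ins, and the only point is that the encoder is updated in phase 1 and held frozen in phase 2.

```python
import numpy as np

def train(x, y, enc, dec, lr=1e-2, steps=200, freeze_enc=False):
    # Plain gradient descent on 0.5 * ||(x @ enc) @ dec - y||^2.
    for _ in range(steps):
        h = x @ enc
        err = h @ dec - y
        grad_enc = x.T @ (err @ dec.T) / len(x)
        dec -= lr * h.T @ err / len(x)
        if not freeze_enc:
            enc -= lr * grad_enc
    return enc, dec

rng = np.random.default_rng(0)
ppg = rng.normal(size=(64, 8))                # stand-in PPG windows
enc = rng.normal(size=(8, 4)) * 0.1
dec = rng.normal(size=(4, 8)) * 0.1
enc, dec = train(ppg, ppg, enc, dec)          # phase 1: reconstruct PPG
abp = ppg @ rng.normal(size=(8, 8))           # stand-in ABP targets
enc_frozen = enc.copy()
dec2 = rng.normal(size=(4, 8)) * 0.1
enc, dec2 = train(ppg, abp, enc, dec2, freeze_enc=True)  # phase 2
```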
\section{Experiments and Results}
\begin{figure}
\centering
\subfloat[Ground-Truth]{
\label{ref_label2a}
\includegraphics[width=0.48\textwidth]{ground-truth.png}
}
\newline
\subfloat[BP-Net]{
\label{ref_label2b}
\includegraphics[width=0.48\textwidth]{network.png}
}
\newline
\subfloat[Comparison]{
\label{ref_label2c}
\includegraphics[width=0.48\textwidth]{comparison.png}
}
\caption{Waveform interpretation of signals}
\label{ref_label_overall}
\end{figure}
\subsection{Implementation}
For experimentation, the BP-Net architecture from Section III C comprises 5 Contraction Blocks, 5 Expansion Blocks, 1 Average Ensemble Block, and 1 Denoising Block. The Adam optimizer with an initial learning rate of 0.0001 was used to optimize the model's weights to minimize the Mean Absolute Error (MAE) loss. The hyper-parameters for the model configuration were decided after extensive empirical analysis.
From the derived 127260 counts of episodic data, 100000 samples were used for training and 27260 for testing. In MIMIC-II, every subject's data is stored contiguously. To reduce the overlap between the training and testing sets, K-Fold cross-validation is suggested \cite{paper7}, and thereby 10-Fold cross-validation is performed for experimentation. Additionally, the input PPG and output ABP signals were mean normalized to facilitate the training of the deep learning model.
\vspace{-.5cm}
\begin{align*}
\mathrm{SBP} &= \max(\mathrm{ABP}) \\
\mathrm{MAP} &= \operatorname{mean}(\mathrm{ABP}) \\
\mathrm{DBP} &= \min(\mathrm{ABP})
\end{align*}
Considering 10-Fold cross-validation, 10 BP-Net networks were trained, each for 300 epochs, with a learning rate scheduler that reduced the learning rate by a factor of 10 every 100 epochs. The best performing fold was chosen according to the K-Fold cross-validation technique, and that fold's model was used to evaluate the test data. To derive SBP and DBP values from the predicted ABP, the maximum and minimum values of each episode are calculated.
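Deriving the three values from a predicted episode is a direct application of the relations above; the sample pressures are made up.

```python
import numpy as np

def bp_values(abp_episode):
    # SBP is the episode maximum, MAP the mean, DBP the minimum,
    # following the relations given above (pressures in mmHg).
    abp = np.asarray(abp_episode, dtype=float)
    return abp.max(), abp.mean(), abp.min()

sbp, map_, dbp = bp_values([78.0, 95.0, 120.0, 100.0, 80.0])
```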
\subsection{Performance Evaluation Metrics}
\subsubsection{\textbf{BHS Standard}}
To provide structured criteria for evaluating blood pressure measuring devices and methods, the British Hypertension Society (BHS) \cite{paper26} defines a discrete evaluation protocol. The BHS standard grades performance by the percentage of cumulative error falling within three thresholds (5 mmHg, 10 mmHg, 15 mmHg). For a method to be granted a specific grade, its cumulative error percentages must meet that grade's cutoff in every category, as detailed in Table II.
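The grading logic of Table II can be sketched as below; the thresholds and cutoffs are taken from the table, while the strict-inequality reading of ``\textless{}5mmHg'' is an assumption.

```python
import numpy as np

def bhs_grade(errors_mmhg):
    # Cumulative percentages of absolute error under 5/10/15 mmHg; a
    # grade requires meeting its cutoff in every one of the categories.
    abs_err = np.abs(np.asarray(errors_mmhg, dtype=float))
    pct = [100.0 * np.mean(abs_err < t) for t in (5.0, 10.0, 15.0)]
    for grade, cutoffs in (("A", (60, 85, 95)),
                           ("B", (50, 75, 90)),
                           ("C", (40, 65, 85))):
        if all(p >= c for p, c in zip(pct, cutoffs)):
            return grade, pct
    return "D", pct
```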
\subsubsection{\textbf{AAMI Standard}}
Similar to BHS, the Association for the Advancement of Medical Instrumentation (AAMI) \cite{paper27} also sets rules for validating the effectiveness of blood pressure measuring devices and methods. According to the AAMI standard, the evaluation criteria are whether the Mean Error (ME) and Standard Deviation (SD) are within 5 mmHg and 8 mmHg, respectively. In addition, the AAMI standard is applicable only when a minimum of 85 subjects is involved in the BP estimation.
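The corresponding check is a short function over the per-sample errors; the subject-count requirement is noted but not enforced here.

```python
import statistics

def aami_pass(errors_mmhg):
    # AAMI criterion: |Mean Error| within 5 mmHg and Standard
    # Deviation within 8 mmHg (with >= 85 subjects in the study).
    me = statistics.mean(errors_mmhg)
    sd = statistics.stdev(errors_mmhg)
    return abs(me) <= 5.0 and sd <= 8.0
```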
\subsubsection{\textbf{Mean Absolute Error (MAE)}}
Apart from the BHS and AAMI standards, blood pressure estimation methods are compared based on the Mean Absolute Error (MAE), formulated in Equation 1.
\begin{equation}
\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left| e_{i} \right|
\end{equation}
Here, $e_{i}$ represents the difference between the ground-truth and predicted BP values in mmHg, and $N$ represents the number of test samples.
\subsection{Performance Evaluation Results}
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.3}
\caption{Evaluation using BHS Standard}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\multirow{2}{*}{}} & \multicolumn{3}{c|}{Cumulative Error Percentage} \\ \cline{3-5}
\multicolumn{2}{|c|}{} & \textless{}5mmHg & \textless{}10mmHg & \textless{}15mmHg \\ \hline
\multirow{3}{*}{BP-Net} & DBP & 84.34\% & 95.19\% & 98.14\% \\ \cline{2-5}
& MAP & 85.64\% & 94.40\% & 97.68\% \\ \cline{2-5}
& SBP & 69.21\% & 86.01\% & 92.19\% \\ \hline
\multirow{3}{*}{BHS} & Grade A & 60\% & 85\% & 95\% \\ \cline{2-5}
& Grade B & 50\% & 75\% & 90\% \\ \cline{2-5}
& Grade C & 40\% & 65\% & 85\% \\ \cline{1-5}
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\renewcommand{\arraystretch}{1.3}
\caption{Evaluation using AAMI Standard}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{} & ME & SD & Passed \\ \hline
\multirow{3}{*}{BP-Net} & DBP & 0.594 & 4.778 & Yes \\ \cline{2-5}
& MAP & 0.425 & 4.784 & Yes \\ \cline{2-5}
& SBP & -0.225 & 8.504 & No \\ \hline
AAMI Standard & & \textless{}=5 & \textless{}=8 & \\ \hline
\end{tabular}
\end{table}
Table II and Table III present BP-Net's performance based on the BHS and AAMI standards, respectively. As observed from Table II, our method yields \textbf{Grade A} for DBP and MAP estimation and \textbf{Grade B} for SBP estimation as per the BHS standard. From Table III, we observe that our method satisfies the AAMI requirements for DBP and MAP estimation. SBP estimation falls short in both the BHS and AAMI standards by a narrow margin. Subject to the BHS standard, the model falls short by 3\% at the 15 mmHg error threshold while satisfying the 5 mmHg and 10 mmHg thresholds, thus achieving Grade B instead of Grade A. In the case of AAMI, the method fails to satisfy the SD criterion. This inadequacy in SBP estimation is prevalent in other existing works \cite{paper7, paper5, paper21, paper60} that deal with the MIMIC database, and is generally attributed to the high variance exhibited by the SBP signal (Table I) compared to its DBP and MAP counterparts. Given MAE evaluation, BP-Net achieved MAE values of 5.16 mmHg and 2.89 mmHg for SBP and DBP prediction, respectively.
\subsection{Evaluation of Inference Time}
Continuous BP monitoring places demanding requirements on real-time operation. To facilitate the real-time application of our model, inference time can be considered a pivotal evaluation metric. \textit{Inference time} is the time taken by the model to process real-time input data and produce the desired output. Since our work concentrates on continuous BP monitoring, the time taken by the model to convert the PPG signal to the ABP signal is crucial.
\begin{table}[H]
\renewcommand{\arraystretch}{1.3}
\caption{Edge device specification}
\centering
\begin{tabular}{|l|l|}
\hline
SoC & Broadcom 2711, Quad-core Cortex A72, 64-bit \\ \hline
RAM & 4GB \\ \hline
Operating Power & 5V @ 3A \\ \hline
\end{tabular}
\end{table}
To estimate inference time, our model has been deployed on a resource-constrained, low-cost edge device, Raspberry Pi 4 Model B, with the specifications mentioned in Table IV. This resulted in an observed time of 42.53 {\em ms} to convert 10 seconds/1 episode of PPG signal to ABP signal, which translates to 4.25 {\em ms} to convert 1 second of PPG signal to 1 second ABP signal.
Currently, there exists no published work on deep learning-based BP estimation under edge constraints; thus, a direct comparison with other works is not possible.
\subsection{Comparison with existing approaches}
A comparative analysis of existing approaches based on MAE and the international BHS and AAMI standards is presented in Table V, which details experimentation results of approaches that map the PPG waveform to the ABP waveform and subsequently to SBP and DBP.
\begin{table}[!ht]
\caption{Results of ABP estimation approaches}
\centering
\renewcommand{\arraystretch}{1.3}
\begin{threeparttable}[ht]
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Dataset} & \multicolumn{2}{c|}{MAE} & \multicolumn{2}{c|}{BHS/AAMI\tnote{*}} \\ \cline{3-6}
& & SBP & DBP & SBP & DBP \\ \hline
{\cite{paper6}} & \begin{tabular}[c]{@{}c@{}}100 subjects\\ (MIMIC II, III)\end{tabular} & 3.68 & 1.97 & A/P & A/P \\ \hline
{\cite{paper5}} & \begin{tabular}[c]{@{}c@{}}942 subjects\\ (MIMIC II)\end{tabular} & 5.73 & 3.45 & B/F & A/P \\ \hline
{\cite{paper8}} & \begin{tabular}[c]{@{}c@{}}5289 subjects\\ (MIMIC II)\end{tabular} & 4.05 & 2.41 & A/P & A/P \\ \hline
BP-Net & \begin{tabular}[c]{@{}c@{}}942 subjects \\ (MIMIC II)\end{tabular} & 5.16 & 2.89 & B/F & A/P \\ \hline
\end{tabular}
\begin{tablenotes}
\item[*] BHS, letter represents Grade granted by BHS standard. \\
AAMI, P represents Satisfied and F represents Not Satisfied.
\end{tablenotes}
\end{threeparttable}
\vspace{-2mm}
\end{table}
From the collated information in Table V, Athaya \textit{et al}. \cite{paper6} present a U-Net approach similar to BP-Net for estimating BP. However, they use fewer subjects than other prominent existing approaches, so their results may not generalize. Harfiya \textit{et al}. \cite{paper8} incorporate first- and second-order derivatives along with the PPG signal as input to train their model. Although their model achieves exemplary performance over many subjects, the complexity of the preprocessing involved makes their approach infeasible for edge deployment. Ibtehaz \textit{et al}. \cite{paper5} use two cascaded U-Net architectures to estimate BP; the computational weight demanded by their approach makes it impractical for inference in a real-time environment. From the perspective of edge implementation, the proposed approach must require minimal computational power and complexity. BP-Net overcomes the limitations of \cite{paper5, paper8} by using only the PPG signal to train a standalone U-Net architecture, thereby reducing the computational complexity involved in porting the model to an edge device.
Though a diverse body of work exists on blood pressure estimation using deep learning, comparison across the established works remains difficult. The main reason for this incongruity is the inconsistent evaluation criteria followed by most proposed methodologies. Several works develop a proprietary dataset of their own and evaluate against its parameters. This is an issue because proprietary datasets tend to include far fewer subjects than public datasets. The requirement for a public dataset is satisfied by the MIMIC database. Although appreciable work has been done on MIMIC-II for blood pressure estimation, different works lack a common norm on the number of subjects and the evaluation parameters used.
\section{Conclusion}
Prevalent non-invasive BP estimation procedures require extensive feature engineering on PPG and/or other signals. We alleviate this problem by proposing a deep learning-based solution deployable on resource-constrained devices. In this paper, we develop a U-Net architecture that performs signal-to-signal translation from the PPG signal to the ABP signal to estimate SBP and DBP values. We further benchmark the inference time of our model on a resource-constrained Raspberry Pi 4 device to validate its applicability on an embedded or edge platform. Although the performance of BP-Net is comparable to existing well-known approaches, it can be further improved by increasing the number of subjects taken for experimentation.
\bibliographystyle{IEEEtran}
\section{Introduction}
Hierarchical structure formation scenarios suggest that galaxies form
by continuous accretion of small, dark--matter dominated satellites.
The possibility of an extragalactic deployment of high--velocity clouds has
long been considered, and in various contexts, by Oort (1966, 1970, 1981),
Verschuur (1975), Eichler (1976), Einasto et al. (1976), Bajaja et al.
(1987), Burton (1997), Wakker \& van Woerden (1997),
Braun \& Burton (1999; 2000, astro-ph/9912417), Blitz et al. (1999), and
L\'opez--Corredoira et al. (1999). The discussion of Blitz et al.
ties several HVC properties to the hierarchical structure formation and
evolution of galaxies. In this context the extended HVC complexes would be
nearby and currently being accreted onto the Galaxy, while the compact,
isolated objects would be the primitive building blocks at larger
distances, scattered throughout the Local Group.
The class of compact, isolated high--velocity clouds catalogued by
Braun \& Burton (1999) represent objects which plausibly originated under
common circumstances, have shared a common evolutionary history, have not
(yet) been strongly influenced by the radiative or tidal fields of the
Milky Way or M31, and are falling towards the Local Group barycenter. The
CHVC catalogue was based on survey data made with telescopes of modest
resolution. The principal source was the Leiden/Dwingeloo Survey (LDS) of
Hartman \& Burton (1997), characterized by the angular resolution of $36'$
provided by the Dwingeloo 25--meter antenna; important additional data came
from the more coarsely--sampled surveys of Hulsbosch \& Wakker (1988) and
of Bajaja et al. (1985) as analyzed by Wakker \& van Woerden (1991), as
well as from some new \hi material observed at $21'$ resolution using the
NRAO 140--foot telescope. The CHVCs are largely unresolved in angle in the
single--dish catalogue, and therefore the large range in observed velocity
widths can not be directly interpreted in terms of the intrinsic
properties of individual gaseous entities.
Of the sample of 65 CHVCs catalogued by Braun \& Burton (1999) only
two had been subject earlier to interferometric imaging. Wakker \& Schwarz
had used the WSRT to show that both CHVC\,$114\!-\!10\!-\!430$ and
CHVC\,$114\!-\!06\!-\!466$ exhibit a core/halo structure (with only some
40\% of the single--dish flux recovered), that the linewidths of the
resolved cores were substantially narrower than when the individual cores
were blended at low resolution, and that several of the components
displayed systematic velocity gradients. We have now imaged six
additional CHVC fields using the Westerbork Synthesis Radio Telescope, and
a further ten using the 305--meter Arecibo telescope. Selected properties
of several of these fields are shown here. A complete discussion of the
WSRT imaging is given by Braun \& Burton (2000, astro-ph/9912417); a
discussion of the Arecibo material is in preparation.
\section{WSRT and Arecibo Data}
Observations of the six CHVC fields imaged with the WSRT involved
twelve--hour integrations in the standard array configuration having a
shortest baseline of 36 meters. The effective velocity resolution was 1.2
times the channel
spacing of 2.06 \kms, over 256 spectral channels centered on the $v_{\rm LSR}$~ of
each source as catalogued by Braun \& Burton (1999) on the basis of
the single--dish spectra in the LDS. The angular and kinematic resolution
afforded by the WSRT makes it well suited in important regards to
detailed studies of the CHVC class of objects. Diffuse structures
extending over more than about 10 arcmin are, however, not adequately
imaged by the interferometer unless precautions are taken to eliminate the
short--spacing bowl surrounding regions of bright extended emission. In a
straightforward attempt to identify the column densities and overall extent
likely to characterize any diffuse structures, we made use of the LDS
data to determine the emission from an elliptical Gaussian with dimensions
and orientation as measured
in the LDS, and with a total flux sufficient to recover the LDS integrated
emission.
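Schematically, the short-spacing correction amounts to adding a model of the diffuse component; a minimal sketch of such an elliptical Gaussian, normalized to a given total flux, is below (grid size, pixel scale, and parameter values are illustrative, not the LDS measurements).

```python
import numpy as np

def gaussian_flux_model(total_flux, fwhm_maj, fwhm_min, pa_deg,
                        npix=256, pixsize=1.0):
    # Elliptical Gaussian with the measured FWHMs and position angle,
    # scaled so the per-pixel values sum to the single-dish flux.
    y, x = np.mgrid[:npix, :npix] - npix / 2.0
    pa = np.deg2rad(pa_deg)
    u = x * np.cos(pa) + y * np.sin(pa)      # major-axis coordinate
    v = -x * np.sin(pa) + y * np.cos(pa)     # minor-axis coordinate
    s = 2.0 * np.sqrt(2.0 * np.log(2.0))     # FWHM -> sigma factor
    g = np.exp(-0.5 * ((u * pixsize * s / fwhm_maj) ** 2
                       + (v * pixsize * s / fwhm_min) ** 2))
    return total_flux * g / g.sum()
```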
\begin{figure}[t]
\psfig{figure=h125mk+sp.eps,width=12cm}
\vspace{-.5cm}
\caption{WSRT image of CHVC\,$125\!+\!41\!-\!207$ displaying \hi column
densities at 28 arcsec angular resolution.
\NH was calculated assuming negligible opacity, and is displayed by contours
at the levels indicated in units of $10^{18}$~cm$^{-2}$ and a linear
grey--scale running from 0 to $300 \times 10^{18}$~cm$^{-2}$.
The location of the
Seyfert galaxy Mrk\,205 is marked. This background source lies on a line
of sight which penetrates the halo of the CHVC, where reconstruction
of the integral flux using the composite WSRT and LDS single--dish
data reveals
a moderate diffuse--emission column depth. Bowen \& Blades (1993) have
measured Mg\,{\sc ii~} absorption towards Mrk\,205, and a metallicity substantially
subsolar.
}
\end{figure}
\begin{figure}[t]
\psfig{figure=spec+phase.eps,width=13cm}
\vspace{-.3cm}
\caption{{\it left:}~ \hi spectrum observed in the direction indicated in
Fig. 2 of one of the bright emission knots in CHVC\,$125\!+\!41\!-\!207$ .
The spectrum is unresolved at a channel
separation of 2 \kms, indicating a core temperature of less than 85 K and
quiescent turbulence. {\it right:}~ Equilibrium temperature curves for \hi
in an intergalactic
environment characterized by a metallicity of 10\% of the solar value and
a dust--to--gas ratio of 10\% of that in the solar neighborhood, calculated
for two values of the neutral shielding column depth, namely $10^{19}$~cm$^
{-2}$ (solid line) and $10^{20}$~cm$^{-2}$ (dashed line). The upper dotted
line indicates the 8000 K temperature characteristic of the WNM; the lower
one, the kinetic temperature of 85 K observed in the opaque
core of CHVC\,$125\!+\!41\!-\!207$. The volume density is tightly
constrained for this temperature; a distance then follows from the measured
column density and angular size.
}
\end{figure}
We have also recently observed ten CHVCs with the Arecibo telescope.
The Arecibo facility is well--suited to provide sensitivity to the
total column density at relatively high resolution. This is especially
important for CHVC targets which are of a size comparable to the field
of view of most synthesis instruments. The Arecibo targets were
observed with the new Gregorian feed and the narrow L--band receiver
with two bandpass settings, namely 6.25 MHz and 1.56 MHz (yielding
$\Delta v=1.3$ and 0.32 \kms, respectively) each centered on the $v_{\rm LSR}$~
of the CHVC as determined by Braun \& Burton (1999). The Arecibo
targets were first mapped on a grid of $1\deg \times 1\deg$ size on a
fully--sampled 90 arcsec lattice, in short integrations, in order to
determine the locations of the peak flux concentrations; then at one or
more of these principal components long--integration spectra were
accumulated in a cut made at constant declination by repeating drift
scans over the same $2\deg$ in right ascension. Typical column density
sensitivities of some $10^{17.5}$~cm$^{-2}$ over 20 \kms~were reached, an
\NH regime largely unexplored (cf. Zwaan et al. 1997).
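For reference, the optically thin conversion behind these sensitivity figures is $N_{\rm HI} = 1.823\times10^{18}\int T_{\rm b}\,dv$ (with $T_{\rm b}$ in K and $v$ in \kms); the $\sim$10 mK channel noise assumed below is illustrative, not a measured system value.

```python
import math
import numpy as np

def hi_column_density(tb_channels_k, dv_kms):
    # Optically thin HI column density, cm^-2, from brightness
    # temperatures (K) in channels of width dv_kms (km/s).
    return 1.823e18 * float(np.sum(tb_channels_k)) * dv_kms

# A flat ~10 mK signal over 20 km/s gives N_HI ~ 3.6e17 cm^-2,
# i.e. the 10^17.5 regime quoted above.
n_sens = hi_column_density(np.full(16, 0.010), 20.0 / 16)
```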
\section{The Exceptionally Narrow Core in CHVC\,$125\!+\!41\!-\!207$}
The compact high--velocity cloud CHVC\,$125\!+\!41\!-\!207$ is typical
of the class of objects in several regards. Figure 2, adopted from Braun
\& Burton (2000, astro-ph/9912417;
see also Burton \& Braun 1999) shows several cool, quiescent cores embedded
in a diffuse, warmer halo. The spectrum plotted in Fig. 3, observed towards
the brightest of these
cores, has a linewidth which is completely unresolved by the effective
resolution of the WSRT imaging. The velocity channels adjacent to the line
peak have intensities below 20\% of the maximum value. Such a
width is one of the narrowest measured in \hi emission, and constrains
both the kinetic temperature and the amount of turbulence. An upper limit
to the thermal--broadening FWHM of 2 \kms~corresponds to an upper limit to
the kinetic temperature of 85 K. The physical situation is yet more
tightly
constrained, because the brightness temperature in this core is observed to
be 75 K; thus a lower limit to the opacity follows from $T_{\rm b} =
T_{\rm k}(1 - e^{-\tau})$, yielding $\tau \geq 2$. Any broadening which might
be due to macroscopic turbulence is less than 1 \kms.
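Both constraints follow from the thermal-linewidth relation and the radiative-transfer expression just quoted; the constants below are SI values, and the small difference from the 85 K quoted above reflects rounding.

```python
import math

k_B, m_H = 1.3807e-23, 1.6735e-27   # Boltzmann constant, H mass (SI)

# Thermal broadening: FWHM = sqrt(8 ln2 * k_B * T_k / m_H), so a
# 2 km/s FWHM bounds the kinetic temperature from above (~87 K).
fwhm = 2.0e3                        # m/s
t_k_max = m_H * fwhm ** 2 / (8.0 * math.log(2.0) * k_B)

# Opacity: T_b = T_k (1 - exp(-tau)) with T_b = 75 K, T_k = 85 K
# gives tau = -ln(1 - 75/85) ~ 2.1, hence tau >= 2.
tau = -math.log(1.0 - 75.0 / 85.0)
```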
The tightly--constrained temperature found for
CHVC\,$125\!+\!41\!-\!207$ allows an estimate of the distance to this
object. Wolfire et al. (1995a,
1995b) show that a cool \hi phase is stable under extragalactic conditions
if a sufficient column of shielding gas is present and if the thermal
pressure is high. Calculations of equilibrium conditions which would pertain
in the Local Group environment have been communicated to us by Wolfire,
Sternberg, Hollenbach, and McKee, and are shown in Fig. 3 for two
bracketing
values of the shielding column density, namely $10^{19}$~cm$^{-2}$ and
$10^{20}$~cm$^{-2}$. The figure shows that the equilibrium
volume densities corresponding to the observed value of $T_{\rm k}=85$~K lie
in the range 0.65 to 3.5~cm$^{-3}$. Thus provided with this range of
volume
densities, and having measured both the column depth of the cool core and
its angular size, the distance to CHVC\,$125\!+\!41\!-\!207$ follows
directly
from $D=N_{\rm HI}/(n_{\rm HI}\,\theta)$, yielding a value in the range
210 to 1100 kpc. Further considerations of several opaque, cool cores made
by Braun \& Burton (2000, astro-ph/9912417) suggest
that distances between 0.5 and 1 Mpc are most plausibly representative of
these objects.
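Numerically, the estimate runs as follows; the core column density and angular size used here are illustrative values chosen only to reproduce the quoted range, not the measured ones.

```python
import math

KPC_CM = 3.086e21                        # cm per kpc
n_col = 6.5e20                           # assumed core N_HI, cm^-2
theta = (1.0 / 60.0) * math.pi / 180.0   # assumed core size: 1 arcmin
# D = N_HI / (n_HI * theta) for the two bracketing volume densities:
d_kpc = [n_col / (n * theta) / KPC_CM for n in (3.5, 0.65)]
# roughly 210 kpc and 1100 kpc, bracketing the quoted range
```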
Measurements of metallicity of high--velocity clouds are important to
discussions of the nature of the phenomenon. If the clouds are primitive
objects scattered throughout the Local Group, the gas would not be
substantially enriched in heavy elements produced by stellar evolution. On
the other hand, if the anomalous velocities had been generated by supernova
explosions or some other energetic occurrence in the Galactic disk, for
example according to the precepts of the galactic fountain scenario
(Shapiro \& Field, 1976; Bregman, 1980) then
the gas would be substantially enriched, with the already moderately high
metallicities characteristic of the Galactic disk further enhanced
by the circumstances of the ejection event. The small angular size of
the CHVCs, and the amount of substructure being revealed at high
resolution, will make it generally difficult to find suitable background
sources. But the diffuse halo of CHVC\,$125\!+\!41\!-\!207$ overlaps the
Seyfert galaxy Mrk\,205, and in this direction Bowen \& Blades (1993)
detected Mg\,{\sc ii~} absorption at $v_{\rm LSR}=-209$~\kms. We determine a
metallicity for this object in the range 0.04 to 0.07 solar.
\section{Rotation in the Cores of CHVC\,$204\!+\!30\!+\!075$}
\begin{figure}[t]
\psfig{figure=h204m0+m1.eps,width=13cm}
\caption{{\it left:}~ Westerbork image of CHVC\,$204\!+\!30\!+\!075$
showing \NH (calculated assuming negligible opacity) at an angular
resolution of
1 arcmin; contours are drawn at levels of 20, 50, 100, 200, and $300 \times
10^{18}$~cm$^{-2}$. {\it right:}~ Intensity--weighted
line--of--sight velocity, with contours of $v_{\rm LSR}$ showing systematic
kinematic gradients across the two principal components of the CHVC object,
consistent with rotation; contours are drawn in steps of 5 \kms~from 40 to
85 \kms. }
\end{figure}
\begin{figure}[t]
\psfig{figure=kinenfw2.ps,width=12cm}
\vspace{-.5cm}
\caption{Rotation velocities fit by a standard application of the
tilted--ring method to kinematic gradients revealed in the two
principal components of
CHVC\,$204\!+\!30\!+\!075$ shown in Fig. 4. (The upper panel pertains to
the component at higher declination.) The best--fit position angle,
inclination,
and systemic velocity are indicated. The solid lines show the rotation
curves of Navarro, Frenk, \& White (1997) cold--dark--matter halos of the
indicated masses.
}
\end{figure}
The narrowest FWHM of the CHVCs catalogued by Braun \& Burton had a
value of 5.9 \kms; the broadest, a value of 95 \kms. Under higher
resolution, the characteristic width narrows as objects are resolved into
several principal components, moving relative to each other.
If the objects with multiple cores are to be stable, distances of order
several hundred kpc are required. Some of the compact cores imaged owe
their large total width in the LDS single--dish data to velocity
gradients. The resolved WSRT image of
CHVC\,$204\!+\!30\!+\!075$ is shown in Fig. 4. The object shows two
principal components each of which is elongated; furthermore, each of the
elongated structures shows a systematic velocity gradient along the major
axis.
The velocity gradients exhibited by the two principal components of
CHVC\,$204\!+\!30\!+\!075$ can be modelled in terms of circular rotation in
a flattened disk system. Fig. 5 shows the results of fitting a standard
tilted--ring model to the data. The fits display velocity rising slowly
but continuously with radius to an amplitude of some 15 \kms~in the one
case and to some 20 \kms~in the other, and then flattening to a constant
value beyond about 500 or 600 arcsec. Estimates of the contained dynamical
mass follow from the rotation curves if the distance is assumed, and the total
gas mass follows from the integrated \hi fluxes. At an assumed distance of
0.7 Mpc, the two principal clumps of CHVC\,$204\!+\!30\!+\!075$ have
$M_{\rm dyn}=10^{8.1}$ and $10^{8.3}$ M$_\odot$, and gas masses
(including \hi and helium of 40\% by mass) of $10^{6.5}$ and $10^{6.9}$
M$_\odot$, respectively for the upper and lower concentrations shown in
Figures 4 and 5. The dark--to--visible mass ratios for these concentrations
are 36 and 29, respectively.
The shape of the modelled rotation curves for both of the
CHVC\,$204\!+\!30\!+\!075$ components is reproduced by the standard
cold--dark--matter halo as presented by Navarro et al. (1997). At the assumed
distance of 0.7 Mpc, the Navarro et al. halos fit to the two components
have masses of $10^{7.8}$ M$_\odot$ (within 9.3 kpc) and $10^{8.2}$ M$_\odot$
(within 12.6 kpc), respectively.
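As a consistency check on the quoted masses, the contained mass for a flat rotation curve is $M_{\rm dyn} = v^2 R / G$; the radius below corresponds to the $\sim$600 arcsec flattening scale at the assumed 0.7 Mpc, and $v = 20$ \kms~is the larger of the two fitted amplitudes.

```python
G = 4.30e-3                         # G in pc (km/s)^2 per solar mass
v_rot = 20.0                        # flat rotation speed, km/s
r_pc = 0.7e6 * 600.0 / 206265.0     # 600 arcsec at 0.7 Mpc -> ~2 kpc
m_dyn = v_rot ** 2 * r_pc / G       # ~1.9e8 M_sun, i.e. ~10^8.3
```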
\section{The Objects CHVC\,$230\!+\!61\!+\!165$ and
CHVC\,$158\!-\!39\!-\!285$}
\begin{figure}[t]
\psfig{figure=h230d1.eps,width=12.8cm}
\vspace{-.5cm}
\caption{{\it left:}~ Position, velocity cut through
CHVC\,$230\!+\!61\!+\!165$
observed with the Arecibo telescope, at an angular resolution of
$3\farcm3$. The cut samples right ascension along the
fixed declination $15\fdg467$. This compact object shows no kinematic
gradient along the cut sampled. {\it right:}~ Variation of \NH with
position, sampled at the velocity (158 \kms) of the peak of the
column--density distribution. The \NH values plotted here and in Fig. 7
are based on
calibrated intensities in units of $T_{\rm b}$.
}
\end{figure}
\begin{figure}[]
\psfig{figure=h158d1.eps,width=12.8cm}
\vspace{-.5cm}
\vfill
\caption{{\it left:}~ Position, velocity cut through
CHVC\,$158\!-\!39\!-\!285$ observed with the Arecibo telescope at an
angular resolution of $3\farcm3$.
The cut samples right ascension along the
fixed declination $16\fdg292$. This CHVC shows a kinematic gradient
consistent with rotation, spanning
some 40 \kms, as well as the characteristic core/halo morphology. {\it
right:}~ Variation of \NH
with position, sampled at the velocity ($-282.5$ \kms) of the peak of the
column--density distribution.
}
\end{figure}
The WSRT imaging of CHVC\,$230\!+\!61\!+\!165$ revealed a simple, faint
structure. The Arecibo telescope is particularly well-suited to such
targets, because of its sensitivity to low \hi brightnesses. A
position, velocity cut through this object at the location of the peak
\NH is shown in the lefthand panel of Fig. 6. No kinematic gradient is
revealed. However the cut does show an interesting characteristic
which several other of the CHVCs observed at Arecibo also show, namely
a tendency to be more sharply bounded on one side of the cut than on
the other. In the case of CHVC\,$230\!+\!61\!+\!165$, the higher
right--ascension boundary is sharper than the lower one down to
\NH $= 10^{18.5}$~cm$^{-2}$. (We will consider this property further in our full
discussion of the Arecibo observations.)
The Arecibo data on CHVC\,$158\!-\!39\!-\!285$ are shown in Fig. 7.
This object is also a simple one, with only one component revealed. The
position, velocity cut through the location of the peak \NH value also shows
that one side of the object is more sharply bounded than the other. The
prominent kinematic gradient is consistent with simple rotation within
the high \NH core. The plots
on the righthand panels of Fig. 6 and Fig. 7 show the variation of \NH
with position across the two CHVCs. The cores are embedded in diffuse
material with characteristic \NH values of order $10^{18.5}$ cm$^{-2}$.
\section{Discussion}
The WSRT and Arecibo high--resolution imaging shows that the morphology
of the compact high--velocity clouds is characterized by one or more
quiescent, low--dispersion compact cores embedded in a diffuse, warmer
halo. The gas in the cores is identified with the cool neutral medium
(CNM) of condensed \hi at temperatures in the range 50 -- 200 K; the
halo gas is identified with the warm neutral medium. Such a nested
geometry is expected if the CNM is to be stable in the presence of an
ionizing radiation field of the sort expected in the Local Group
environment. The cores contribute typically about 40\% of the \hi flux,
while covering about 15\% of the surface area of the CHVC.
The high--resolution imaging data allow two independent distance
estimates to be made. A distance for CHVC\,$125\!+\!41\!-\!207$
follows from assuming rough spherical symmetry and equating the
well--constrained volume and column densities of the compact cores.
Another distance constraint (coupled with a dark--to--visible mass
ratio) follows from consideration of the stability of CHVCs having
multiple cores in a common envelope but having large relative
velocities. The evidence available indicates that CHVCs have
characteristic sizes of about 10 kpc, \hi masses of about $10^7$
M$_\odot$, and are at distances of 0.4 -- 1.0 Mpc.
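The geometric argument can be made concrete with a back-of-the-envelope sketch (a minimal illustration; all input values below are assumptions, not the measured parameters of CHVC\,$125\!+\!41\!-\!207$): for a roughly spherical core, the line-of-sight depth is $N_{\rm HI}/n$, and equating it to the transverse size $\theta D$ yields the distance.

```python
import math

# All numbers are illustrative assumptions, not measured values:
N_HI  = 10**19.5                   # peak core column density [cm^-2]
n     = 0.05                       # assumed core volume density [cm^-3]
theta = math.radians(1.0 / 60.0)   # assumed core angular size: 1 arcmin

L = N_HI / n                       # line-of-sight depth of the core [cm]
D = L / theta                      # sphericity: depth ~ transverse size
D_Mpc = D / 3.086e24
assert 0.4 < D_Mpc < 1.0           # lands in the quoted 0.4 -- 1.0 Mpc range
```

With these assumed inputs the estimate lands near 0.7 Mpc, inside the quoted range.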
The imaging has also shown that the compact cores of CHVCs are commonly
rotating. The observed kinematic gradients can be fit by rotating
flattened disks, yielding dynamical masses for assumed distances.
Dark--to--visible mass ratios of order 10 -- 40 at $D=0.7$~Mpc are
indicated. The rotation curves shown here agree well with the
cold--dark--matter halo predicted by Navarro et al. (1997).
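The disk-fit mass estimate is essentially Keplerian, $M_{\rm dyn}\simeq v_{\rm rot}^{2}R/G$; the sketch below uses invented but representative numbers (not fitted values from our data) to show how a dark-to-visible ratio in the quoted range arises.

```python
G     = 6.674e-11       # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30        # solar mass [kg]
KPC   = 3.086e19        # kiloparsec [m]

# Invented but representative numbers for a core at an assumed distance:
v_rot = 15e3            # fitted rotation velocity [m/s]
R     = 2 * KPC         # radius of the fitted rotating disk [m]
M_HI  = 5e6 * M_SUN     # HI mass enclosed within R [kg]

M_dyn = v_rot**2 * R / G            # Keplerian dynamical mass
ratio = M_dyn / M_HI                # dark-to-visible mass ratio
assert 10 < ratio < 40              # the range quoted above
```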
The gaseous properties characteristic of CHVCs bear similarities to
those of many dwarf irregular galaxies populating the Local Group (e.g.
Young \& Lo 1996, 1997a, 1997b). It is an important astrophysical
challenge to establish the details of these similarities. It is
particularly important to establish whether any of the CHVCs contain
stars as well; the stellar density might be very low, but detection of
any stellar component would lead to improved distances, constrain the
evolutionary history, and augment the information
on the faint end of the Local Group luminosity function. Failure to
detect stars would imply that CHVCs are very primitive proto--galactic
objects dominated by dark--matter halos. In either case, it seems
plausible that the CHVCs are the missing Local Group satellite systems
predicted by Klypin et al. (1999) and Moore et al. (1999).
\acknowledgements
We are grateful to M.G. Wolfire, A. Sternberg, D. Hollenbach, and C.F. McKee
for providing the equilibrium temperature curves shown in Fig. 3, and to
P. Perillat for assistance during our Arecibo observing session.
The WSRT is operated by the Netherlands Foundation for Research in
Astronomy, under contract with the Netherlands Organization for
Scientific Research. The Arecibo Observatory is part of the National
Astronomy and Ionosphere Center, which is operated by Cornell University
under a cooperative agreement with the National Science Foundation.
\newpage
\section{Introduction}
Seiberg and Witten showed that the low-energy effective theory of
$N=2$ supersymmetric gauge theory
in four dimensions is determined by the prepotential, a
holomorphic function of the period integrals of the meromorphic one-form (the
Seiberg-Witten differential) on a Riemann surface\cite{SeWi}.
For a simple Lie group $G$, it has been proposed in \cite{Go,MaWa} that the
spectral curve of the periodic Toda lattice associated with the dual
affine Lie algebra $(\widehat{G})^{\vee}$ provides the Riemann surface
which describes the Coulomb branch of $N=2$ supersymmetric Yang-Mills
theory with the gauge group $G$.
In the case of gauge theories with some matter hypermultiplets, the
spectral curves and related integrable systems are
discussed in \cite{DHPh}.
Other systematic approaches based on the heterotic/type II duality\cite{Le} or
the M5 branes \cite{Wi} have also been studied extensively.
In the present work we will study the exact solution of the low-energy
effective theory from the viewpoint of the spectral curve of the
periodic Toda lattice.
For $ADE$ type gauge groups, the spectral curve can be written as
the sum of the superpotential of the two-dimensional topological
Landau-Ginzburg model of the corresponding $ADE$ type and that of the
topological $CP^{1}$ model.
In a series of papers\cite{ItYa1,ItYa2,ItYa3,ItXiYa}, we have studied
various aspects of the exact solution of the Seiberg-Witten theories
with $ADE$ gauge groups by using two-dimensional topological field
theories.
This paper is organized as follows:
In sect. 2, we introduce the spectral curve of the periodic
Toda lattice associated with the dual of the affine Lie algebra
$(\widehat{G})^{\vee}$ for the gauge theory with gauge group $G$.
In sect. 3, we consider the $ADE$ gauge groups and express
the spectral curve as the sum of the
superpotential of the topological $CP^{1}$ model and the $ADE$
minimal model.
Using the flat coordinates in the $A$-$D$-$E$ singularity theory,
we derive the Picard-Fuchs differential equations obeyed by the period
integral of the Seiberg-Witten differential.
We then show that these equations are equivalent to the Gauss-Manin
system for the $ADE$ minimal model and the $CP^{1}$ model, together with
the scaling relation for the Seiberg-Witten differential.
In sect. 4, we study an exact solution in the strong coupling
region.
Argyres and Douglas\cite{ArDo} showed that there exists a non-trivial
RG fixed point in the Coulomb branch such that the massless solitons with
mutually non-local charges coexist and the theory
corresponds to $N=2$ superconformal field theory, where the gauge
invariant order parameters have fractional dimensions with respect to
the BPS mass.
We investigate this Argyres-Douglas point in the
Coulomb branch of the $N=2$ supersymmetric Yang-Mills theory for $ADE$
gauge groups.
\section{Spectral Curves and $A$-$D$-$E$ Singularity}
The low-energy properties of the Coulomb branch of $N=2$ supersymmetric
gauge theories with gauge group $G$ are exactly described by holomorphic data
associated with certain algebraic curves.
In particular, the BPS mass formula
is expressed in terms of the period integrals of the so-called
Seiberg-Witten (SW) differential $\lambda_{SW}$:
\begin{equation}
m_{BPS}=| n^{I} a_{I}+m^{I} {a_{D}}_{I}|
\label{eq:bps}
\end{equation}
where $n^{I}$ and $m^{I}$ are integers and
\begin{equation}
a_I=\oint_{A_I}\lambda_{SW}, \hskip10mm {a_D}_I=\oint_{B_I}\lambda_{SW},
\quad I=1,\cdots ,r
\label{period}
\end{equation}
along one-cycles $A_I$, $B_I$ with appropriate intersections on the curve.
Here $r$ is the rank of the gauge group $G$.
The low-energy effective action is described by the
prepotential ${\cal F}(a)$.
The dual period ${a_{D}}_{I}$ is then given by $\partial {\cal
F}(a)/\partial a_{I}$.
We now define the spectral curve of the periodic
Toda lattice associated with the (twisted) affine Lie algebra $(\widehat{G})^{\vee}$.
Let $G$ be a simple Lie algebra with rank $r$.
Let $\alpha_{1}, \cdots, \alpha_{r}$ be simple roots of the Lie algebra
and $\alpha_{0}=-\theta$, where $\theta$ denotes the highest root.
We consider a $d$-dimensional representation ${\cal R}$.
Let $E_{\alpha}$ be generators
associated with the roots $\alpha$ and $H_{i}$
those of the Cartan subalgebra, which are realized by $d\times d$
matrices in the representation ${\cal R}$.
Introduce matrices $A$ and $B$ by
\begin{eqnarray}
A(z)&=& \sum_{i=1}^{r} b_{i}H_{i}+a_{i}(E_{\alpha_{i}}+E_{-\alpha_{i}})
+a_{0} (z E_{\alpha_{0}}+z^{-1}E_{-\alpha_{0}}) \nonumber \\
B(z)&=& \sum_{i=1}^{r} b_{i}H_{i}+a_{i}(E_{\alpha_{i}}-E_{-\alpha_{i}})
+a_{0} (z E_{\alpha_{0}}-z^{-1}E_{-\alpha_{0}}),
\end{eqnarray}
where $z$ is called the spectral parameter.
The equation of motion of the periodic Toda lattice is given in
the Lax form
\begin{equation}
{d A\over d t}=[A, B].
\end{equation}
The spectral curve is defined by the characteristic polynomial of the
matrix $A(z)$:
\begin{equation}
{\cal P}^{{\cal R}}_{G}(x; u_{1},\cdots, u_{r},z)\equiv
{\rm det}_{{\cal R}}(x 1_{d}-A(z))=0.
\label{curve}
\end{equation}
Here $u_{i}$ ($i=1, \cdots, r$)
are the Casimirs of $G$, with $q_{i}$ denoting the degree of $u_{i}$.
The exponents $e_{i}$
of $G$ are related to $q_{i}$ by $q_{i}=e_{i}+1$.
The second order Casimir $u_{2}$ has degree 2 and
the top Casimir $u_{r}$ has degree $q_{r}=h$, where $h$ is the
Coxeter number.
The curve depends on the scale parameter
$\mu^{2}=\prod_{i=0}^{r}a_{i}^{n_{i}}$, where the non-negative integers
$n_{i}$ are the Dynkin labels of the affine roots.
The spectral parameter $z$ and $\mu$ have degree $h^{\vee}$, the dual Coxeter
number of $G$.
The exponents and the (dual) Coxeter numbers are given in Table 1.
\begin{table}[h]
\label{tab1}
\begin{center}
\begin{tabular}{lllll}
\hline
group $G$ & $(\widehat{G})^{\vee}$ & $h$ & $h^{\vee}$ & exponents \\ \hline
$A_{r}$ & $A_{r}^{(1)}$ & $r+1$ & $r+1$ & $1,2,\cdots, r$ \\
$B_{r}$ & $A_{2r-1}^{(2)}$ & $2r$ & $2r-1$ & $1,3,\cdots, 2r-1$ \\
$C_{r}$ & $D_{r+1}^{(2)}$ & $2r$ & $r+1$ & $1,3,\cdots, 2r-1$ \\
$D_{r}$ & $D_{r}^{(1)}$ & $2r-2$ & $2r-2$ & $1,3,\cdots, 2r-3,r-1$ \\
$E_{6}$ & $E_{6}^{(1)}$ & 12 & 12 & 1,4,5,7,8,11 \\
$E_{7}$ & $E_{7}^{(1)}$ & 18 & 18 & 1,5,7,9,11,13,17 \\
$E_{8}$ & $E_{8}^{(1)}$ & 30 & 30 & 1,7,11,13,17,19,23,29 \\
$F_{4}$ & $E_{6}^{(2)}$ & 12 & 9 & 1,5,7,11 \\
$G_{2}$ & $D_{4}^{(3)}$ & 6 & 4 & 1,5 \\
\hline
\end{tabular}
\end{center}
\caption{List of the (dual) Coxeter numbers and exponents}
\end{table}
The spectral curve is invariant under the transformation
$z\rightarrow \mu^{2}/z$.
For simply laced Lie algebra $G$, we have $h=h^{\vee}$. Therefore the top
Casimir $u_{r}$ and the spectral parameter $z$ or its dual $\mu^{2}/z$
have the same degree.
It is found that for the representation ${\cal R}$ of $G=ADE$, the
spectral curve is given by
\begin{equation}
P_{G}^{{\cal R}}(x; u_{1}, \cdots, u_{r}+z+{\mu^{2}\over z})=0,
\end{equation}
where $P^{{\cal R}}_{G}(x;u_{1}, \cdots, u_{r})$ is the characteristic
polynomial of the representation ${\cal R}$ of $G$ and expressed in the
form of
\begin{equation}
P_{G}^{{\cal R}}(x; u_{1}, \cdots, u_{r})=\prod_{i=1}^{d}(x-\lambda_{i}\cdot
a).
\end{equation}
Here $\lambda_{i}$ denote the weight vectors of the representation ${\cal R}$.
We list the explicit form of the spectral curve for $A_{r}$,
$D_{r}$ and $E_{6}$ Lie algebras for the $d$-dimensional
representation $\underline{d}$:
\begin{itemize}
\item $A_{r}$ ($A_{r}^{(1)}, \underline{r+1}$)
\begin{equation}
x^{r+1}-u_{1}x^{r-1}-\cdots -u_{r}-\left(z+{\mu^2\over z}\right)=0
\end{equation}
\item $D_{r}$ ($D_{r}^{(1)},\underline{2r}$)
\begin{equation}
x^{2r}-u_{1}x^{2r-2}-\cdots-u_{r-2}x^{4}-
u_{r}x^2-u_{r-1}^2-x^{2}\left(z+{\mu^2\over z}\right)=0
\end{equation}
\item
$E_{6}$ ($E_{6}^{(1)}, \underline{27}$) \cite{LeWa}
\begin{equation}
{1\over2} x^{3} \left(z+{\mu^2\over z}+u_{6}\right)^2
-q_{1}(x) \left(z+{\mu^2\over z}+u_{6}\right)
+q_{2}(x)=0,
\end{equation}
where
\begin{eqnarray}
q_1=&&270 x^{15}+342 u_{1} x^{13}+162 u_{1}^2 x^{11}-252 u_{2} x^{10}
+(26 u_{1}^3+18 u_{3}) x^{9} \nonumber \\
&& -162 u_{1} u_{2} x^{8}+(6 u_{1} u_{3} -27 u_{4}) x^{7}
-(30 u_{1}^2 u_{2}-36 u_{5}) x^{6}
+(27 u_{2}^2 -9 u_{1} u_{4}) x^{5} \nonumber \\
&& -(3 u_{2} u_{3}-6 u_{1} u_{5}) x^{4}
-3 u_{1} u_{2}^2 x^3-3 u_{2} u_{5} x-u_{2}^3, \nonumber \\
q_{2}=&& {1\over 2x^{3}} (q_1^2-p_1^2 p_2), \nonumber \\
p_1=&& 78 x^{10}+60 u_{1} x^{8} +14 u_{1}^2 x^{6}-33 u_{2} x^{5}
+2 u_{3} x^{4}-5 u_{1} u_{2} x^{3}-u_{4} x^{2}-u_{5} x-u_{2}^2, \nonumber \\
p_2=&&12 x^{10}+12 u_{1} x^{8}+4 u_{1}^2 x^{6}-12 u_{2} x^{5}+u_{3}x^{4}
-4 u_{1} u_{2} x^{3}-2 u_{4} x^{2}+4 u_{5} x+u_{2}^2. \nonumber \\
&& \label{eq:e6}
\end{eqnarray}
\end{itemize}
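As a minimal numerical check of (\ref{curve}), consider the $A_{1}$ Lax matrix in the fundamental representation with the illustrative choice $a_{0}=a_{1}=1$ (so $\mu^{2}=1$) and arbitrary $b$; its characteristic polynomial should reduce to $x^{2}-u-(z+\mu^{2}/z)$ with, for this parameterization, $u=b^{2}+2$:

```python
import numpy as np

b, z = 0.7, 1.3                    # arbitrary illustrative sample values
# A_1 Lax matrix in the fundamental representation, with a_0 = a_1 = 1:
A = np.array([[b,     1 + 1/z],
              [1 + z, -b     ]])
coeffs = np.poly(A)                # det(x*1 - A(z)) = x^2 + c1*x + c2
u, mu2 = b**2 + 2, 1.0             # modulus and mu^2 = a_0*a_1 = 1
assert np.isclose(coeffs[1], 0.0)               # no x^1 term
assert np.isclose(coeffs[2], -(u + z + mu2/z))  # x^0 term: -(u + z + mu^2/z)
```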
For simply-laced Lie algebra $G$, $\widehat{G}$ is self-dual, i.e.
$(G^{(1)})^{\vee}=G^{(1)}$.
For non-simply laced Lie algebras,
we have $\widehat{B_{r}}^{\vee}=A_{2r-1}^{(2)}$,
$\widehat{C_{r}}^{\vee}=D_{r+1}^{(2)}$,
$\widehat{F_{4}}^{\vee}=E_{6}^{(2)}$ and
$\widehat{G_{2}}^{\vee}=D_{4}^{(3)}$.
Thus we need the twisted affine Lie algebra
to construct the spectral curve.
The characteristic polynomial can be obtained by the folding procedure
of the corresponding Dynkin diagram.
Due to $h\neq h^{\vee}$, the spectral parameter $z$ or its
dual $\mu^{2}/z$ appears in the spectral curve in a nontrivial way.
Now we will write the explicit form of the spectral curves for
$d$-dimensional representation $\underline{d}$ of non-simply laced Lie
algebra $G$.
\begin{itemize}
\item For $B_{r}$ ($A_{2r-1}^{(2)}$, $\underline{2r}$) case,
the spectral curve of the representation $\underline{2r}$ is obtained
from the characteristic polynomial
$P_{A_{2r-1}}^{\underline{2r}}(x; u_{1},\cdots, u_{2r-1})$ by the restriction
$u_{2}=\cdots=u_{2r-4}=0$ and $u_{2r-2}=z+{\mu^{2}/z}$:
\begin{equation}
x^{2r}-u_{1}x^{2r-2}-\cdots-u_{2r-1}-x\left(z+{\mu^2\over z}\right)=0.
\end{equation}
\item For $C_{r}$ ($D_{r+1}^{(2)}$:$\underline{2r+2}$)
case, the curve is obtained from the characteristic polynomial
$
P_{D_{r+1}}^{\underline{2r+2}}(x; u_{1}, \cdots, u_{r}, u_{r+1})
$
by restricting $u_{r}=z-\mu^{2}/z$;
\begin{equation}
x^{2r+2}-u_{1}x^{2r}-\cdots-u_{r-1} x^{4}
-u_{r+1}x^{2}-\left(z-{\mu^2\over z}\right)^2=0.
\end{equation}
\item For $F_{4}$ ($E_{6}^{(2)}$:$\underline{27}$) case, the curve is
obtained from the characteristic polynomial
$
P_{E_{6}}^{\underline{27}}(x; u_{1},u_{2}, u_{3}, u_{4}, u_{5}, u_{6})
$
by the restriction
$u_{2}=0$ and $u_{5}=-6 (z+\mu^{2}/z)$;
\begin{eqnarray}
&& -8 \left( z+{\mu^{2}\over z}\right)^{3}
+a_{1}(x) \left( z+{\mu^{2}\over z}\right)^{2}
+a_{2}(x) \left( z+{\mu^{2}\over z}\right)
+a_{3}(x)=0, \nonumber \\
a_{1}(x) = && \!\!\!\!\!\!\!\!
- 636x^{9} - 300{u_{1}}x^{7} - 48{u_{1}}^{2}x^{5}
- 5{u_{3}}x^{3} + 2{u_{4}}x, \nonumber \\
a_{2}(x)= && \!\!\!\!\!\!\!\!
- 168 x^{18} - 348 {u_{1}} x^{16} - 276 {u_{1}}^{2} x^{14}
+ ( - 116 {u_{1}}^{3} + 14 {u_{3}}) x^{12} \nonumber \\
& & \!\!\!\!\!\!\!\!
+ ( - 92 {u_{4}} - 20 {u_{1}}^{4} - 8 {u_{1}} {u
_{3}}) x^{10} + ( - 42 {u_{1}} {u_{4}} - 6 {u_{1}}^{2} {u_{3
}}) x^{8}\nonumber \\
&& \!\!\!\!\!\!\!\!
+ ( - 4 {u_{6}} - {\displaystyle \frac {10}{3}} {u
_{1}}^{2} {u_{4}} - {\displaystyle \frac {2}{3}} {u_{3}}^{2})
x^{6}
+ ({\displaystyle \frac {1}{3}} {u_{3}} {u_{4}}
- {\displaystyle \frac {2}{3}} {u_{6}} {u_{1}}) x^{4}, \nonumber \\
a_{3}(x)=&& \!\!\!\!\!\!\!\!
x^{27} + 6 {u_{1}} x^{25} + 15 {u_{1}}^{2} x^{23} +
(20 {u_{1}}^{3} + {u_{3}}) x^{21} + (5 {u_{4}} + 4 {u_{1}} {u_{3}} +
15 {u_{1}}^{4}) x^{19} \nonumber \\
& & \!\!\!\!\!\!\!\!
+ (6 {u_{1}}^{2} {u_{3}} + 12 {u_{1}} {u_{4}} +
6 {u_{1}}^{5}) x^{17} + ({\displaystyle \frac {1}{3}} {u_{3}}
^{2} + 5 {u_{6}} + 4 {u_{1}}^{3} {u_{3}} + {\displaystyle
\frac {26}{3}} {u_{1}}^{2} {u_{4}} + {u_{1}}^{6}) x^{15} \nonumber \\
& & \!\!\!\!\!\!\!\!
+ ({\displaystyle \frac {4}{3}} {u_{1}}^{3} {u_{4
}} + {\displaystyle \frac {19}{3}} {u_{6}} {u_{1}} + {u_{1}}^{
4} {u_{3}} + {\displaystyle \frac {4}{3}} {u_{3}} {u_{4}} +
{\displaystyle \frac {2}{3}} {u_{3}}^{2} {u_{1}}) x^{13} \nonumber \\
& & \!\!\!\!\!\!\!\!
+ ({\displaystyle \frac {1}{3}} {u_{1}}^{2} {u_{3
}}^{2} - {\displaystyle \frac {1}{3}} {u_{1}}^{4} {u_{4}} -
{\displaystyle \frac {15}{4}} {u_{4}}^{2} + 3 {u_{6}} {u_{1}}
^{2}) x^{11} \nonumber \\
& & \!\!\!\!\!\!\!\!
+ ({\displaystyle \frac {1}{3}} {u_{6}} {u_{3}}
- {\displaystyle \frac {4}{9}} {u_{1}}^{2} {u_{3}} {u_{4}}
+ {\displaystyle \frac {1}{27}} {u_{3}}^{3} - {\displaystyle
\frac {13}{6}} {u_{4}}^{2} {u_{1}} + {\displaystyle \frac {13
}{27}} {u_{6}} {u_{1}}^{3}) x^{9} \nonumber \\
& & \!\!\!\!\!\!\!\!
+ ( - {\displaystyle \frac {1}{9}} {u_{3}}^{2} {u
_{4}} - {\displaystyle \frac {1}{2}} {u_{6}} {u_{4}} +
{\displaystyle \frac {1}{9}} {u_{6}} {u_{1}} {u_{3}} -
{\displaystyle \frac {7}{36}} {u_{1}}^{2} {u_{4}}^{2}) x^{7}
+ ({\displaystyle \frac {1}{12}} {u_{4}}^{2} {u_{3}} -
{\displaystyle \frac {1}{6}} {u_{6}} {u_{1}} {u_{4}}) x^{5}
\nonumber \\
& & + ( - {\displaystyle \frac {1}{54}} {u_{4}}^{3} -
{\displaystyle \frac {1}{108}} {u_{6}}^{2}) x^{3}.
\label{eq:f4}
\end{eqnarray}
\item For $G_{2}$ ($D_{4}^{(3)}$:$\underline{8}$)
case\cite{MaWa}, the spectral curve is obtained from
$P_{D_{4}}^{\underline{8}}(x; u_{1}, u_{2}, u_{3},
u_{4})
$ by the restriction $u_{1}=2u$,
$u_{2}=-u^{2}-z+\mu^{2}/z$, $u_{3}=\sqrt{3}
(z-{\mu^{2}/z})$ and $u_{4}=v+2u (z+{\mu^{2}/ z})$;
\begin{equation}
3\left(z-{\mu^{2}\over z}\right)^2
-x^8+2 u x^6-
\left[ u^2 +\left(z+{\mu^{2}\over z}\right)\right] x^4
+\left[ v+ 2 u \left(z+{\mu^{2}\over z}\right) \right] x^2=0.
\end{equation}
\end{itemize}
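All of these curves retain the $z\rightarrow\mu^{2}/z$ invariance noted above, since they involve $z$ only through $z+\mu^{2}/z$ or $(z-\mu^{2}/z)^{2}$; a quick numerical illustration for the $C_{2}$ curve, with arbitrary sample values:

```python
import numpy as np

def c2_curve(x, z, u1, u3, mu):
    # l.h.s. of the C_2 (D_3^{(2)}) spectral curve from the list above
    return x**6 - u1*x**4 - u3*x**2 - (z - mu**2/z)**2

x, z, u1, u3, mu = 0.9, 1.7, 0.4, 1.1, 0.8     # arbitrary sample values
assert np.isclose(c2_curve(x, z, u1, u3, mu),
                  c2_curve(x, mu**2/z, u1, u3, mu))
```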
We may write the spectral curve in the form of
\begin{equation}
z+{\mu^2 \over z}=W_G^{\cal R} (x,u_1,\cdots ,u_{r}),
\label{eq:solve}
\end{equation}
namely we regard the curve as a fibration over $CP^{1}$ with the
fiber characterized by the function $W^{{\cal R}}_{G}(x)$.
For example, in the case of $A_{r}$, $D_{r}$ and $E_{6}$ gauge groups,
we obtain
\begin{itemize}
\item $A_{r}$
\begin{equation}
W_{A_r}^{\underline{r+1}}=x^{r+1}-u_1x^{r-1}- \cdots -u_{r-1}x-u_{r},
\end{equation}
\item $D_{r}$
\begin{equation}
W_{D_r}^{\underline{2r}}=x^{2r-2}-u_1x^{2r-4}- \cdots -u_{r-2}x^2
-{u_{r-1}^2 \over x^2}-u_{r},
\end{equation}
\item $E_{6}$
\begin{equation}
W_{E_6}^{\underline{27}}={1\over x^3}
\left( q_1 \pm p_1\sqrt{p_2}\right)-u_6,
\end{equation}
where $p_{1}$, $p_{2}$ and $q_{1}$ are given in (\ref{eq:e6}).
\end{itemize}
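Viewed as a quadratic equation in $z$, (\ref{eq:solve}) reads $z^{2}-Wz+\mu^{2}=0$, so the two sheets of the fibration are $z_{\pm}=(W\pm\sqrt{W^{2}-4\mu^{2}})/2$ with $z_{+}z_{-}=\mu^{2}$; the $z\rightarrow\mu^{2}/z$ involution simply exchanges them. A numerical sketch with the $A_{1}$ superpotential $W=x^{2}-u$ and illustrative values:

```python
import numpy as np

u, mu, x = 0.3, 0.5, 1.7                  # arbitrary illustrative values
W = x**2 - u                              # A_1 superpotential W_{A_1}(x)
disc = np.sqrt(W**2 - 4*mu**2)
zp, zm = (W + disc)/2, (W - disc)/2       # the two sheets of the fibration
assert np.isclose(zp + mu**2/zp, W)       # each root solves z + mu^2/z = W
assert np.isclose(zp * zm, mu**2)         # sheets exchanged by z -> mu^2/z
```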
It is important to notice that $W_{ADE}(x)$ is nothing but the
superpotential of the two-dimensional topological Landau-Ginzburg (LG)
model of type $ADE$.
$W_{A_{r}}^{\underline{r+1}}(x)$ and $W_{D_{r}}^{\underline{2r}}(x)$ are
familiar superpotentials for the $A_{r}$ and $D_{r}$ type minimal
models.
For the $E_{6}$ case, the function $W_{E_{6}}^{\underline{27}}$ looks
very different from the usual deformation of the $E_{6}$ singularity
written in terms of three variables
\begin{equation}
W_{E_{6}}(x_{1},x_{2},x_{3})=x_{1}^{4}+x_{2}^{3}+x_{3}^{2}.
\label{eq:e6sing}
\end{equation}
It is, however, found in \cite{EY} that $W_{E_{6}}^{\underline{27}}$
is a single-variable version of the LG superpotential for the $E_{6}$
minimal model.
On the other hand, the singularity of the form of (\ref{eq:e6sing})
is obtained by considering the fibration of ALE spaces\cite{Le}.
The relation between these two descriptions of the Seiberg-Witten curves is
discussed in \cite{LeWa}.
For the non-simply laced cases, the spectral curves give rather
nontrivial functions $W_{G}(x)$, although in the two-dimensional case the LG
superpotentials are obtained from the simply laced ones by the folding
procedure\cite{Zu}.
The functions $W^{{\cal R}}_{G}(x)$ are given as follows:
\begin{itemize}
\item $B_{r}$
\begin{equation}
W_{B_{r}}^{\underline{2r}}(x;u_{1}, \cdots, u_{r})
={W_{BC}(x;u_{1}, \cdots, u_{r})\over x},
\end{equation}
where the LG potential of $BC$ type
\begin{equation}
W_{BC}(x;u_{1}, \cdots, u_{r})=x^{2r}-\sum_{i=1}^{r} u_{i} x^{2r-2i}
\end{equation}
is obtained from the $A_{2r-1}$ superpotential
$W_{A_{2r-1}}(x;\tilde{u}_{1}, \cdots, \tilde{u}_{2r-1})$ by the restriction
$\tilde{u}_{2k}=0$
($k=1,\cdots, r-1$) and setting $u_{k}=\tilde{u}_{2k-1}$ ($k=1,\cdots, r$).
\item $C_{r}$
\begin{equation}
W_{C_{r}}^{\underline{2r+2}}(x;u_{1}, \cdots, u_{r})
=\left(x^{2}W_{BC}(x;u_{1}, \cdots, u_{r})^{2}+4\mu^{2}\right)^{1/2}.
\end{equation}
\item $F_{4}$
\begin{equation}
W_{F_{4}}^{\underline{27}}={a_{1}(x)\over 24}
-{1\over2}
\left\{ \left( -q+\sqrt{q^{2}+4 p^{3}}\right)^{1/3}
+\left(-q-\sqrt{q^{2}+4 p^{3}}\right)^{1/3}\right\},
\end{equation}
where
\begin{eqnarray}
p(x)&&=-{a_{2}\over6}-{a_{1}^{2}\over 144}, \nonumber \\
q(x)&&={1\over27} \left( {a_{1}^{3}\over 32}
+{9\over8} a_{1} a_{2} +27 a_{3} \right),
\end{eqnarray}
and $a_{1}$, $a_{2}$ and $a_{3}$ are defined in (\ref{eq:f4}).
\item $G_{2}$
\begin{equation}
W_{G_{2}}^{\underline{8}}={1\over 6} (p_{1}+ \sqrt{p_{1}^2+12 p_{2}}),
\end{equation}
where
\begin{equation}
p_{1}= 6 x^{4}-2 u x^{2}, \hskip10mm
p_{2}= x^{8} -2 u x^{6}+u^{2} x^{4}-v x^{2} + 12 \mu^{4} .
\end{equation}
\end{itemize}
Note that for $C_{r}$ and $G_{2}$ cases, the superpotentials
$W_{G}^{{\cal R}}(x)$ depend on the scale parameter $\mu$ explicitly.
So far we have seen the SW spectral curves for general
gauge groups.
The SW differential defined on these spectral curves takes
the simple form\cite{MaWa}:
\begin{equation}
\lambda_{SW}={1\over 2\pi i} x {d z \over z}.
\label{eq:swd}
\end{equation}
In the next section we will study the period integrals of the
SW differential using the two-dimensional topological LG
theory.
\section{Picard-Fuchs Equations and 2D Topological Landau-Ginzburg Models}
In this section we consider the $ADE$ gauge groups.
To describe the moduli space of the Coulomb branch we adopt the
flat coordinate system $(t_1, t_2, \cdots ,t_r)$ developed
in the $A$-$D$-$E$ singularity theory instead of the conventional Casimir
coordinates $(u_1, u_2, \cdots ,u_r)$.
The
coordinate transformation is read off from the residue integral
\begin{eqnarray}
t_i=c_i \oint dx W_G^{\cal R} (x,u)^{e_i \over h}, \hskip10mm i=1,\cdots, r
\label{flat}
\end{eqnarray}
with a suitable constant $c_i$ \cite{EY,EYY}.
Notice that the overall degree of $W_G^{\cal R}$ is equal to $h$.
The flat coordinates $t_i$ are expressed as polynomials in $u_i$.
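For $A_{2}$ the residue formula (\ref{flat}) can be evaluated with a computer algebra system. The sketch below (our own normalization; the constants $c_{i}$ are dropped) substitutes $x=1/w$ and reads off the coefficient of $w$, confirming that for $A_{2}$ the flat coordinates are simply proportional to the Casimirs, $t_{1}\propto u_{1}$ and $t_{2}\propto u_{2}$:

```python
import sympy as sp

w, u1, u2 = sp.symbols('w u1 u2')
h = 3                                   # Coxeter number of A_2

def flat_coordinate(e):
    # W_{A_2}(x) = x^3 - u1*x - u2 at x = 1/w; the prefactor w^{-3} is
    # pulled out so that the fractional power can be series-expanded:
    base = (1 - u1*w**2 - u2*w**3)**sp.Rational(e, h)
    ser = sp.expand(sp.series(base, w, 0, h + 1).removeO() / w**e)
    # the residue at x = infinity picks out the coefficient of w
    # (up to sign and the normalization constant c_i, dropped here):
    return ser.coeff(w, 1)

assert sp.simplify(flat_coordinate(1) + u1/3) == 0    # t_1 ~ u_1
assert sp.simplify(flat_coordinate(2) + 2*u2/3) == 0  # t_2 ~ u_2
```

For higher rank the same expansion produces genuine polynomial corrections in the $u_{i}$.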
Firstly we discuss the role of flat coordinates in
the two-dimensional topological Landau-Ginzburg
models.
We define the primary fields $\phi_{i}(x,t)$ as the derivatives of the
superpotential $W_{G}(x,t)$:
\begin{equation}
\phi_{i}(x,t)={\partial W_{G}(x,t)\over \partial t_{i}}.
\end{equation}
We choose the normalization factor $c_{r}$ such that
$\phi_{r}(x,t)=1$.
We now consider two-dimensional topological gravity coupled to
topological LG model\cite{DVV}.
In this case, the primary field $\phi_{r}$ is regarded as the puncture
operator $P$.
In the flat coordinate system, the topological metric
$\eta_{ij}=\langle \phi_{i}\phi_{j}P\rangle$ is
independent of $t_{i}$ and takes the form:
\begin{equation}
\eta_{ij}=\delta_{e_{i}+e_{j},h}.
\end{equation}
Furthermore, the primary fields obey the operator product expansions
\begin{equation}
\phi_i (x,t)\, \phi_j (x,t)=\sum_{k=1}^r {C_{ij}}^k(t)\, \phi_k (x,t)
+Q_{ij}(x,t) \, \partial_x W_{G}(x,t).
\label{eq:ope}
\end{equation}
The flatness condition implies that the function $Q_{ij}(x,t)$ in
(\ref{eq:ope}) satisfies
\begin{equation}
{\partial^2 W_{G}(x,t) \over \partial t_i \partial t_j}
=\partial_x \, Q_{ij}(x,t).
\end{equation}
The structure constants $C_{ij}^{k}(t)$ are related to the three-point
function $F_{ijk}(t)=\langle \phi_{i}\phi_{j}\phi_{k}\rangle$ by the
relation $F_{ijk}(t)=\eta_{kl}C_{ij}^{l}(t)$.
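Concretely, the structure constants can be obtained by reducing products of primaries modulo the ideal generated by $\partial_{x}W_{G}$. A small sketch for $A_{2}$, where the flat coordinates coincide with the Casimirs so that $W=x^{3}-t_{1}x-t_{2}$ (normalizations here are ours):

```python
import sympy as sp

x, t1, t2 = sp.symbols('x t1 t2')
W  = x**3 - t1*x - t2                  # A_2 superpotential in flat coordinates
Wx = sp.diff(W, x)                     # chiral ring = C[x] / (dW/dx)
phi1, phi2 = sp.diff(W, t1), sp.diff(W, t2)   # primaries -x and -1
# reduce phi_1 * phi_1 = x^2 modulo dW/dx = 3x^2 - t1:
prod = sp.rem(sp.expand(phi1 * phi1), Wx, x)
# x^2 = (t1/3) * 1 + (1/3) * dW/dx, so the ring product closes on t1/3:
assert sp.simplify(prod - t1/3) == 0
```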
In two-dimensional topological theory, all the topological correlation
functions are determined by the free energy $F(t)$.
The three point function $F_{ijk}(t)$ is given by
$\partial^{3} F(t)/\partial t_{i}\partial t_{j} \partial t_{k}$.
The associativity of the chiral ring
$(\phi_{i}\phi_{j})\phi_{k}=\phi_{i}(\phi_{j}\phi_{k})$ implies the
relation
$C_{ij}^{l} C_{l k}^{n}=C_{j k}^{l} C_{i l}^{n}$.
Let us introduce $r\times r$ matrices $C_{i}$, $F_{i}$ and $\eta$ by
$(C_{i})_{j}^{k}=C_{ij}^{k}$, $(F_{i})_{jk}=F_{ijk}$ and $\eta=(\eta_{ij})$.
The associativity condition leads to the commutativity of the matrices
$C_{i}$;
\begin{equation}
C_{i} C_{j}=C_{j} C_{i}.
\end{equation}
Using $F_{i}=\eta C_{i}$, we obtain the
Witten-Dijkgraaf-Verlinde-Verlinde (WDVV) equation:
\begin{equation}
F_{i}\eta^{-1} F_{j}=F_{j}\eta^{-1} F_{i},
\label{eq:wdvv}
\end{equation}
which is one of the important relations in two-dimensional topological
theory.
Now we go back to the four-dimensional gauge theory.
We consider the period integral of the SW differential
$\lambda_{SW}$ (\ref{eq:swd}):
\begin{equation}
\Pi=\int_{\gamma}\lambda_{SW}
\end{equation}
along the certain one-cycle $\gamma$ on the spectral curve (\ref{eq:solve}).
In terms of the superpotential $W_{G}(x,t)$, the SW
differential takes the form:
\begin{equation}
\lambda_{SW}={1\over2\pi i} {x\partial_{x}W_{G}(x,t)\over
\sqrt{W_{G}(x,t)^2-4\mu^2}} d x.
\end{equation}
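The equivalence of this expression with (\ref{eq:swd}) rests on the identity $dz/z=\partial_{x}W_{G}\,dx/\sqrt{W_{G}^{2}-4\mu^{2}}$ on the sheet $z=(W_{G}+\sqrt{W_{G}^{2}-4\mu^{2}})/2$; a finite-difference check with the $A_{1}$ superpotential and illustrative values:

```python
import numpy as np

u, mu = 0.3, 0.5                          # arbitrary illustrative values
W  = lambda x: x**2 - u                   # A_1 superpotential
Wx = lambda x: 2*x
z  = lambda x: (W(x) + np.sqrt(W(x)**2 - 4*mu**2)) / 2   # upper sheet

x, h = 1.7, 1e-6
dlogz = (np.log(z(x + h)) - np.log(z(x - h))) / (2*h)    # d(log z)/dx
assert np.isclose(dlogz, Wx(x) / np.sqrt(W(x)**2 - 4*mu**2), atol=1e-6)
```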
It is shown in \cite{ItYa1} that $\Pi$ obeys the set of
differential equations:
\begin{equation}
\partial_{t_{i}}\partial_{t_{j}}\Pi=\sum_{k=1}^{r} C_{ij}^{k}(t)
\partial_{t_{k}}\partial_{t_{r}}\Pi,
\label{eq:gm}
\end{equation}
which is called the Gauss-Manin system in singularity theory.
In addition to the Gauss-Manin system, the SW differential satisfies
two further differential equations.
{}From the scaling relation for the superpotential
\begin{equation}
\left(\sum_{i=1}^{r}q_{i}t_{i}\partial_{t_{i}}+x\partial_{x}\right) W_{G}(x,t)
=h W_{G}(x,t),
\end{equation}
we obtain the scaling relation for the period $\Pi$:
\begin{equation}
\left(\sum_{i=1}^{r}q_{i}t_{i}\partial_{t_{i}}+h^{\vee}\mu\partial_{\mu}
-1\right)\Pi=0.
\label{eq:scale}
\end{equation}
The final differential equation is obtained by regarding the l.h.s. of
the spectral curve (\ref{eq:solve}) as the superpotential of the
topological $CP^{1}$ model \cite{cp1}:
\begin{equation}
W_{CP^{1}}(z)=z+{\mu^{2}\over z}-t_{r}.
\end{equation}
Since $\log \mu^{2}$ and $t_{r}$ are flat coordinates of the $CP^{1}$
model, we obtain the $CP^{1}$ relation:
\begin{equation}
\left( (\mu\partial_{\mu})^2 -4 \mu^{2} \partial_{t_{r}}^{2}\right)
\Pi=0 .
\label{eq:cpone}
\end{equation}
Combining the scaling relation (\ref{eq:scale}) and the $CP^{1}$
relation (\ref{eq:cpone}), we obtain the differential equation
\begin{equation}
\left\{\left( \sum_{i=1}^{r}q_{i}t_{i}{\partial\over \partial t_{i}}-1 \right)^2
-4\mu^2 h^{2}{\partial^{2}\over \partial t_{r}^2} \right\}\Pi=0
\label{eq:scaling}
\end{equation}
By solving the Gauss-Manin system (\ref{eq:gm}) and the scaling
equation (\ref{eq:scaling}), we may analyze the exact solutions in the
Coulomb branch.
For classical gauge groups, the present Picard-Fuchs equations are
shown to be the same as those that appeared in previous
works\cite{pf1}, in which various gauge theories with or without
hypermultiplets are discussed.
In the weak coupling region, where the scale parameter
$\Lambda^{2h}=4\mu^{2}$ is small, the solutions of these Picard-Fuchs
equations are studied extensively using various methods\cite{pf1,weak},
which are shown to agree with the microscopic instanton calculation
\cite{micro}.
In the next section, we study the solutions in the strong coupling
region.
Before doing so, however, we discuss an important consequence
of the Gauss-Manin system.
Since the dual period ${a_{D}}_{I}$ also satisfies the Gauss-Manin
system (\ref{eq:gm}), this Gauss-Manin system
provides the third-order differential equation
for the prepotential ${\cal F}(a)$:
\begin{equation}
\widetilde{\cal F}_{ijk}=\sum_{l=1}^{r} {C_{ij}}^{l}\widetilde{\cal
F}_{lrk},
\label{eq:wdvvsw1}
\end{equation}
where
\begin{eqnarray}
\widetilde{\cal F}_{ijk}&=&\partial_{t_{i}}a_{I} \partial_{t_{j}}a_{J}
\partial_{t_{k}}a_{K} {\cal F}_{IJK}, \nonumber \\
{\cal F}_{IJK}&=&\partial_{a_{I}}\partial_{a_{J}}\partial_{a_{K}}{\cal F}(a)
\end{eqnarray}
Equation (\ref{eq:wdvvsw1}) is very similar to the relation
$F_{ijk}(t)=C_{ij}^{l}\eta_{kl}$ in two-dimensional topological
theory.
Let us introduce matrices $\widetilde{\cal F}_{i}$, ${\cal G}$ and
${\cal F}_{I}$
defined by $(\widetilde{\cal F}_{i})_{jk}=\widetilde{\cal F}_{ijk}$,
${\cal G}=\widetilde{\cal F}_{r}$, and
$({\cal F}_I)_{JK}={\cal F}_{IJK}$, respectively.
Then we find the WDVV equations in the Seiberg-Witten
theory\cite{wdvv,ItYa3}:
\begin{equation}
\widetilde{\cal F}_{i} {\cal G}^{-1} \widetilde{\cal F}_{j}
= \widetilde{\cal F}_{j} {\cal G}^{-1} \widetilde{\cal F}_{i},
\end{equation}
which may be written in the form
\begin{equation}
{\cal F}_{I} {\cal F}_{K} ^{-1} {\cal F}_{J}
= {\cal F}_{J} {\cal F}_{K}^{-1} {\cal F}_{I}.
\end{equation}
In addition to the WDVV equation, the prepotential satisfies the
scaling equation\cite{scale}:
\begin{equation}
\left(\sum_{I=1}^{r}a_{I}\partial_{a_{I}}+h^{\vee}\mu\partial_{\mu}\right)
{\cal F}(a)=2 {\cal F}(a).
\end{equation}
This scaling equation is important to calculate the instanton
correction to the prepotential in the weak coupling region\cite{ItYa2}.
Recently it has been shown that the prepotential satisfies further
non-trivial equations \cite{RG} obtained from the Whitham hierarchy.
Instanton corrections to the prepotential have been calculated in this
framework\cite{EdGoMa} for some gauge theories.
It would be interesting to study the relation between the WDVV
equation approach and this formulation.
\section{Superconformal point}
One of the interesting phenomena in the strong coupling physics of $N=2$
supersymmetric gauge theory is the existence of a non-trivial $N=2$
superconformal fixed point in the Coulomb branch\cite{ArDo,ArPlSeWi,EgHoItYa}.
At this point, massless solitons of mutually nonlocal charges coexist.
The superconformal field theory is characterized by the scaling
operators with non-trivial (fractional) scaling dimensions.
In particular, the calculations of the scaling exponents based on the
exact solution suggest that the superconformal fixed points are
characterized by the $A$-$D$-$E$ classification\cite{EgHoItYa}.
We will study this RG fixed point in view of the Picard-Fuchs
equations obtained in the previous section.
For $G=ADE$, the superconformal fixed points exist at
$$
t_{1}=\cdots =t_{r-1}=0, \quad t_{r}=\pm 2\mu.
$$
We take the plus sign without loss of generality.
Let us introduce new flat coordinates $\tilde{t}_{i}$ by
shifting $t_{r}$ by $2\mu$:
\begin{equation}
t_{i}=\tilde{t}_{i}, \quad (i=1,\cdots,r-1), \quad
t_{r}=\tilde{t}_{r}+2\mu
\end{equation}
Since the OPE coefficients $C_{ij}^{k}(t)$ are independent of
$t_{r}$ \cite{ItYa2},
the Gauss-Manin system does not change its form under this change of
coordinates:
\begin{equation}
\left( \partial_{\tilde{t}_{i}}\partial_{\tilde{t}_{j}}
-\sum_{k=1}^{r} {C_{ij}}^{k}(\tilde{t})
\partial_{\tilde{t}_{k}}\partial_{\tilde{t}_{r}}\right)\Pi=0,
\label{eq:gmscf}
\end{equation}
The scaling equation (\ref{eq:scaling}), on the other hand, becomes
\begin{equation}
\left\{\left( \sum_{i=1}^{r}q_{i}\tilde{t}_{i}\partial_{\tilde{t}_{i}}-1 \right)^2
+4\mu h
\left[
\left( \sum_{i=1}^{r}q_{i}\tilde{t}_{i}\partial_{\tilde{t}_{i}}-1 \right)
\partial_{\tilde{t}_{r}}
+{h\over2} \partial_{\tilde{t}_{r}}
\right] \right\}\Pi=0.
\label{eq:scascf}
\end{equation}
Introduce the scaling parameter $\epsilon$ by
\begin{equation}
\tilde{t}_{i}=\epsilon^{q_{i}}\rho_{i}, \quad (i=1,\cdots,r-1), \quad
\tilde{t}_{r}=\epsilon^{h}
\end{equation}
and consider the limit $\epsilon\rightarrow 0$.
We are interested in the solutions of the SW periods which behave as
\begin{equation}
\Pi=\epsilon^{\alpha} f(\rho) +\cdots,
\label{eq:scfsol}
\end{equation}
as $\epsilon\rightarrow 0$.
Since
\begin{equation}
\partial_{\tilde{t}_{i}}=\epsilon^{-q_{i}}\partial_{\rho_{i}}, \quad
\partial_{\tilde{t}_{r}}={1\over h} \epsilon^{-h} \left(
\epsilon\partial_{\epsilon}-\sum_{i=1}^{r-1}q_{i}\rho_{i}\partial_{\rho_{i}}
\right),
\end{equation}
the Gauss-Manin system (\ref{eq:gmscf}) for $i,j<r$ becomes
the differential equations for $f(\rho)$ with respect to $\rho$:
\begin{eqnarray}
&& \left( \partial_{\rho_{i}}\partial_{\rho_{j}}
-{1\over h} \sum_{k=1}^{r-1} {\tilde{C}_{ij}}^{k}(\rho)
\partial_{\rho_{k}}
\left[
\alpha-\sum_{l=1}^{r-1}q_{l}\rho_{l}\partial_{\rho_{l}}
\right] \right. \nonumber \\
&& \left. -{1\over h^{2}} {\tilde{C}_{ij}}^{r}(\rho)
\left[
\alpha-h-\sum_{l=1}^{r-1}q_{l}\rho_{l}\partial_{\rho_{l}}
\right]
\left[
\alpha-\sum_{l=1}^{r-1}q_{l}\rho_{l}\partial_{\rho_{l}}
\right]
\right)f(\rho)=0
\end{eqnarray}
where
$
\tilde{C}_{ij}^{k}(\rho)=C_{ij}^{k}(\tilde{t})\epsilon^{q_{i}+q_{j}-q_{k}-h}.
$
As for the scaling equation (\ref{eq:scascf}), the second term is
dominant in the superconformal limit.
We thus find that $f(\rho)$ should satisfy the equation
\begin{eqnarray}
\left(\alpha-{h+2\over2}\right)
\left[
\alpha-\sum_{l=1}^{r-1}q_{l}\rho_{l}\partial_{\rho_{l}}
\right]
f(\rho)=0
\end{eqnarray}
This equation determines the exponent $\alpha$ in (\ref{eq:scfsol}) as
\begin{equation}
\alpha={h+2\over2}.
\label{eq:exp}
\end{equation}
The superconformal field theory is characterized by the scaling
operator ${\rm tr}\phi^{q_{i}}$, whose conformal dimension is
measured with respect to the BPS mass (\ref{eq:bps}).
{}From (\ref{eq:scfsol}) and (\ref{eq:exp}), we have
\begin{equation}
\langle {\rm tr}\phi^{q_{i}}\rangle
\sim (m_{BPS})^{{2q_{i}\over h+2}}.
\end{equation}
Thus the scaling dimension of $\langle {\rm tr} \phi^{q_{i}}\rangle$
is $2q_{i}/(h+2)$, which agrees with the result of \cite{EgHoItYa}.
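The exponent can be traced through explicitly (a brief reconstruction of the scaling argument, using $\tilde{t}_{i}=\epsilon^{q_{i}}\rho_{i}$ and the fact that the SW periods set the BPS mass scale):

```latex
m_{BPS} \sim \Pi \sim \epsilon^{h+2\over2}, \qquad
\langle {\rm tr}\,\phi^{q_{i}}\rangle \sim \tilde{t}_{i}
  = \epsilon^{q_{i}}\rho_{i} \sim \epsilon^{q_{i}},
\quad\Longrightarrow\quad
\langle {\rm tr}\,\phi^{q_{i}}\rangle
  \sim (m_{BPS})^{2q_{i}\over h+2}.
```

Eliminating $\epsilon$ between the two scalings gives the stated dimension $2q_{i}/(h+2)$.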
We can also examine the above arguments from the viewpoint of the SW curve.
In terms of the parameters $\tilde{t}$, the SW curve becomes
\begin{equation}
z+{\mu^{2}\over z}=W_{G}(x; t)=W_{G}(x; \tilde{t})-2\mu.
\end{equation}
Let us introduce $\xi$ by
\begin{equation}
\xi=\sqrt{z}+{\mu\over \sqrt{z}}.
\end{equation}
Then the curve is expressed in the form of the $ADE$ superpotential with
the Gaussian part $\xi^{2}$:
\begin{equation}
\xi^{2}=W_{G}(x; \tilde{t}) .
\end{equation}
The SW differential then becomes
\begin{equation}
\lambda_{SW}={1\over2\pi i} x{dz\over z}
={1\over2\pi i} 2 x {d \xi \over \sqrt{\xi^{2}-4 \mu}}.
\end{equation}
In the superconformal limit, expanding the above formula around $\epsilon=0$
we obtain
\begin{equation}
\lambda_{SW}=-{1\over2\pi i} {1 \over \sqrt{-\mu}}\sum_{n=0}^{\infty}
{(2n)!\over 2^{2n} (n!)^{2}} {W_{G}(x,\tilde{t})^{{2n+1\over2}} d x \over
(2n+1) (4\mu)^{n}},
\label{eq:swexpa}
\end{equation}
up to the total derivative term.
After rescaling $x=\epsilon\tilde{x}$, the leading term in (\ref{eq:swexpa})
is
\begin{equation}
\lambda_{SW}=-{1\over2\pi i \sqrt{-\mu}} \epsilon^{h+2\over2}
\sqrt{W_{G}(\tilde{x};\rho_{1},\cdots, \rho_{r-1},1)} d\tilde{x}+\cdots ,
\end{equation}
which also leads to the exponent (\ref{eq:exp}).
Note that the derivative of the SW period integral
$$
{\partial \Pi\over \partial \rho_{i}}\sim
\epsilon^{h+2\over2} \int {\partial_{\rho_{i}}W_{G}(\tilde{x};\rho)\over
\sqrt{W_{G}(\tilde{x};\rho)}} d\tilde{x}
$$
is a period of the curve
\begin{equation}
y^{2}=W_{G}(\tilde{x};\rho_{1},\cdots, \rho_{r-1},1).
\label{eq:swhe}
\end{equation}
Thus we find that the superconformal fixed point is simply
characterized by the $ADE$ superpotential $W_{G}(x,\tilde{t})$.
The SW curve reduces to the hyperelliptic-type curve (\ref{eq:swhe}).
For example, let us consider the $A_{2}$ case.
The SW curve (\ref{eq:swhe}) is given by
$$
y^{2}=x^{3}-\rho x-1,
$$
which is nothing but the curve of $SU(2)$ gauge theory with $N_{f}=1$
at the superconformal point (the small torus in \cite{ArDo}).
The Gauss-Manin system (\ref{eq:gmscf}) becomes
\begin{equation}
\left[
(4\rho^{3}-27)\partial_{\rho}^{2}-{5\over4}\rho
\right] f(\rho)=0.
\end{equation}
One may solve this equation around $\rho=0$ and find that the results
agree with those obtained in \cite{ArDo}.
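As a sanity check on this equation (an observation, not a claim made explicitly above): the coefficient $4\rho^{3}-27$ is the discriminant of the cubic $x^{3}-\rho x-1$ (for $x^{3}+px+q$ the discriminant is $-4p^{3}-27q^{2}$), so the Gauss-Manin operator is singular exactly where the curve (\ref{eq:swhe}) degenerates. A stdlib-only numerical check that the curve acquires a double root when $4\rho^{3}=27$:

```python
import math

# 4*rho^3 - 27 vanishes at rho_* = (27/4)^(1/3); at this value the cubic
# x^3 - rho*x - 1 should have a double root, i.e. the curve
# y^2 = x^3 - rho*x - 1 degenerates where the Gauss-Manin equation is singular.
rho_star = (27.0 / 4.0) ** (1.0 / 3.0)
x0 = -math.sqrt(rho_star / 3.0)        # common root of p and p'
p = x0 ** 3 - rho_star * x0 - 1.0      # p(x0)
dp = 3.0 * x0 ** 2 - rho_star          # p'(x0)
assert abs(p) < 1e-12 and abs(dp) < 1e-12
```

Here $x_0=-\sqrt{\rho_*/3}$ is the root of $p'$ at which $p$ also vanishes, confirming the double root.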
\section{Discussion}
In this paper, we have seen the close relationship between the
four-dimensional gauge theory with $ADE$ gauge group and
two-dimensional topological LG models coupled to topological gravity.
We have examined the exact solutions around the superconformal points
using the Picard-Fuchs equations and showed that the superconformal points
are simply characterized by the $A$-$D$-$E$ singularity.
For other gauge theories with or without matter hypermultiplets, on
the other hand, it is difficult to extend the present results because
of the complexity of the superpotential.
But at the superconformal point we would expect that the curves become
simple and are classified by the $A$-$D$-$E$ singularity\cite{EgHoItYa}.
The two-dimensional topological field theory would provide a
useful tool for analyzing the physics in both weak and strong coupling
regions.
It seems interesting to compare the expansion (\ref{eq:swexpa}) of the
SW differential around the superconformal point with that around the
origin ($t_{i}=0$) of the Coulomb branch:
\begin{equation}
\lambda_{SW}=-{1\over2\pi i} {1\over \sqrt{-4\mu^{2}}}
\sum_{n=0}^{\infty}
{(2n)!\over 2^{2n} (n!)^{2}} {W_{G}(x,t)^{2n+1} d x \over
(2n+1) (4\mu^{2})^{n}}.
\label{eq:oexpa}
\end{equation}
In \cite{ItXiYa}, it has been shown that the period integrals of
(\ref{eq:oexpa}) are expressed directly in terms of
the one-point function $\langle \sigma_{n}(\phi_{i})\rangle$
of the $n$-th gravitational descendant
$\sigma_{n}(\phi_{i})$ of the primary field $\phi_{i}$
in two-dimensional topological LG models coupled to topological
gravity.
The one-point function satisfies the
same Gauss-Manin system, which is evaluated by the residue
integrals\cite{EYY}:
\begin{equation}
\langle \sigma_{n}(\phi_{i})\rangle=b_{n,i} \sum_{j=1}^{r}
\eta_{ij}\oint W_{G}(x,t)^{{e_{j}\over h}+n+1},
\end{equation}
where $b_{n,i}$ is a certain constant.
In this formulation, the Gauss-Manin system (\ref{eq:gm}) is also
derived from the topological
recursion relation \cite{WiDi}:
\begin{equation}
\langle \sigma_{n}(\phi_{i}) X Y \rangle = \sum_{j}
\langle \sigma_{n-1}(\phi_{i})\phi_{j}\rangle \langle \phi^{j} X Y \rangle
\end{equation}
and the puncture equation (in the small phase space):
\begin{equation}
\langle P \prod_{i=1}^{s} \sigma_{n_{i}}(\phi_{\alpha_{i}})\rangle
=\sum_{i=1}^{s} \langle \prod_{j=1}^{s}\sigma_{n_{j}-\delta_{ji}}
(\phi_{\alpha_{j}})\rangle.
\end{equation}
One might expect that a similar relation holds in the
superconformal case, which would be important for understanding the
relation between the SW theory and $d<1$ topological strings.
\vskip3mm\noindent
{\bf Acknowledgments} \
The author would like to thank C.-S. Xiong and S.-K. Yang for useful
discussions.
This work is supported in part by
the Grant-in-Aid from the Ministry of Education, Science and Culture,
Priority Area: \lq\lq Supersymmetry and Unified
Theory of Elementary Particles'' (\#707).
\newpage
\section{Introduction}
Probabilistic programming is a rapidly developing discipline at the interface of programming and Bayesian statistics \cite{GordonHNR14,Goodman2014,VandeMeent2018}.
The idea is to express probabilistic models (incorporating the prior distributions) and the observed data as programs,
and to use a general-purpose Bayesian inference engine, which acts directly on these programs, to find the posterior distribution given the observations.
Some of the most influential probabilistic programming languages (PPLs) used in practice are \emph{universal} (i.e.~the underlying language is Turing-complete); e.g. Church \cite{GoodmanMRBT08}, Anglican \cite{TolpinMW15}, Gen \cite{Cusumano-Towner19}, Pyro \cite{pyro19} and Turing \cite{ge2018Turing}.
Using stochastic branching, recursion and higher-order features, universal PPLs can express arbitrarily complex models.
For instance, these language constructs can be used to incorporate probabilistic context free grammars \cite{Manning99}, statistical phylogenetics \cite{Ronquist2020.06.16.154443}, and even physics simulations \cite{BaydinSBHM0MNGL19} into probabilistic models.
However, expressivity of the PPL comes at the cost of complicating the posterior inference.
Consider, for example, the following problem from \cite{MakOPW21,MakZO21}.
\begin{example}[Pedestrian]
\label{ex:pedestrian}
A pedestrian has gotten lost on a long road and only knows that they are a random distance between 0 and 3 km from their home. They repeatedly walk a uniformly random distance of at most 1 km in either direction, until they find their home. When they arrive, a step counter tells them that they have traveled a distance of 1.1 km in total.
We assume that the measured distance is normally distributed around the true distance with standard deviation 0.1 km.
Given this observation, what is the posterior distribution of the starting point?
We can specify this as a probabilistic program as follows:
\vspace{2mm}
\algdef{SE}[SUBALG]{Indent}{EndIndent}{}{\algorithmicend\ }%
\algtext*{Indent}
\algtext*{EndIndent}
\parbox{\linewidth-10pt}{
\begin{algorithmic}
\State \textbf{let} $\mathit{start}$ = \textbf{sample} uniform$(0, 3)$\textbf{ in}
\State \textbf{letrec} $\mathit{walk}$ $x$ =
\Indent
\State \textbf{if} $x \leq 0$ \textbf{then} 0 \textbf{else}
\Indent
\State \textbf{let} $\mathit{step}$ = \textbf{sample} uniform$(0, 1)$\textbf{ in}
\State $\mathit{step} + \mathit{walk}\big((x + \mathit{step}) \oplus_{0.5} (x - \mathit{step})\big)$
\EndIndent
\EndIndent
\State \textbf{let} $\mathit{distance} = \mathit{walk}(\mathit{start})$\textbf{ in}
\State \textbf{observe} $\mathit{distance}$ \textbf{from} $\ensuremath{\mathrm{Normal}}(1.1, 0.1)$;
\State $\mathit{start}$
\end{algorithmic}
}
\vspace{2mm}
\noindent
Here \textbf{sample} uniform$(a, b)$ samples a uniform value in $[a, b]$, $M \oplus_{0.5} N$ is (fair) probabilistic branching, and \textbf{observe} $M$ \textbf{from} $D$ observes the value of $M$ from distribution $D$.
\end{example}
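For reference, a likelihood-weighted importance sampler for this model can be sketched in a few lines of Python (an illustration, not the code of any of the systems discussed; the step cap is a practical truncation that the program itself does not have):

```python
import math
import random

def pedestrian(rng, max_steps=10_000):
    """One weighted run of the pedestrian model; returns (start, weight).

    Runs exceeding max_steps are truncated with weight 0 (illustration only).
    """
    start = rng.uniform(0.0, 3.0)
    x, distance = start, 0.0
    steps = 0
    while x > 0:
        if steps >= max_steps:
            return start, 0.0
        step = rng.uniform(0.0, 1.0)
        distance += step
        x = x + step if rng.random() < 0.5 else x - step
        steps += 1
    # observe distance from Normal(1.1, 0.1): weight by the density
    sigma = 0.1
    weight = math.exp(-0.5 * ((distance - 1.1) / sigma) ** 2) \
        / (sigma * math.sqrt(2.0 * math.pi))
    return start, weight

rng = random.Random(0)
runs = [pedestrian(rng) for _ in range(1000)]
norm = sum(w for _, w in runs)
posterior_mean = sum(s * w for s, w in runs) / norm  # self-normalised IS estimate
```

Each run executes the program forward and records the density of the observation as its importance weight; the weighted histogram of `start` values then approximates the posterior.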
\cref{ex:pedestrian} is a challenging model for inference algorithms in several regards:
not only does the program use stochastic branching and recursion, but the number of random variables generated is unbounded -- it is \emph{nonparametric} \cite{HjortHMW10,Ghahramani2013,MakZO21}.
To approximate the posterior distribution of the program, we apply two standard inference algorithms:
likelihood-weighted importance sampling (IS), a simple algorithm that works well on low-dimensional models with few observations \cite{mcbook}; and Hamiltonian Monte Carlo (HMC) \cite{DuaneKPR87}, a successful MCMC algorithm that uses gradient information to efficiently explore the parameter space of high-dimensional models.
\Cref{fig:pedestrian-stochastic} shows the results of the two inference methods as implemented in Anglican \cite{TolpinMW15} (for IS) and Pyro \cite{pyro19} (for HMC): they clearly disagree!
But how is the user supposed to know which (if any) of the two results is correct?
Note that \emph{exact} inference methods (i.e.~methods that try to compute a closed form solution of the posterior inference problem using computer algebra and other forms of symbolic computation) such as PSI \cite{GehrMV16,GehrSV20}, Hakaru \cite{NarayananCRSZ16}, Dice \cite{HoltzenBM20}, and SPPL \cite{SaadRM21} are only applicable to non-recursive models, and so they don't work for \cref{ex:pedestrian}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{images/pedestrian.pdf}
\vspace{-7mm}
\caption{Histogram of samples from the posterior distribution of \cref{ex:pedestrian} and wrong samples produced by the probabilistic programming system Pyro.}
\label{fig:pedestrian-stochastic}
\end{figure}
\subsection{Guaranteed Bounds}
The above example illustrates central problems with both approximate stochastic and exact inference methods.
For approximate methods, there are no guarantees for the results they output after a finite amount of time, leading to unclear inference results (as seen in \cref{fig:pedestrian-stochastic}).\footnote{Take MCMC sampling algorithms.
Even though the Markov chain will eventually converge to the target distribution,
we do not know how long to iterate the chain to ensure convergence \cite{Roy2020,mcbook}.
For variational inference \cite{Zhang2019}, there is likewise no convergence guarantee:
given a variational family, there is no guarantee that a given value of KL-divergence (from the approximating to the posterior distribution) is attainable by the minimising distribution.}
For exact methods, the symbolic engine may fail to find a closed-form description of the posterior distribution and, more importantly, they are only applicable to very restricted classes of programs (most notably, non-recursive models).
Instead of computing approximate or exact results, this work is concerned with computing \emph{guaranteed} bounds on the posterior distribution of a probabilistic program.
Concretely, given a probabilistic program $P$ and a measurable set $X \subseteq \mathbb{R}$ (given as an interval), we infer upper and lower bounds on $\measureSem{P}(X)$ (formally defined in \cref{sec:2background}), i.e.~the posterior probability of $P$ on $X$.%
\footnote{By repeated application of our method on a discretisation of the domain we can compute histogram-like bounds.}
Such bounds provide a ground truth to compare approximate inference results with: if the approximate results violate the bounds, the inference algorithm has not converged yet or is even ill-suited to the program in question.
Crucially, our method is applicable to arbitrary (and in particular recursive) programs of a universal PPL.
For \cref{ex:pedestrian}, the bounds computed by our method (which we give in \cref{sec:7practical-evaluation}) are tight enough to separate the IS and HMC output.
In this case, our method infers that the results given by HMC are wrong (i.e.~violate the guaranteed bounds) whereas the IS results are plausible (i.e.~lie within the guaranteed bounds).
To the best of our knowledge, no existing methods can provide such definite answers for programs of a universal PPL.
\subsection{Contributions}
The starting point of our work is an interval-based operational semantics \cite{BeutnerO21}.
In our semantics we evaluate a program on \emph{interval traces} (i.e.~sequences of intervals of reals with endpoints between 0 and 1) to approximate the outcomes of sampling, and use interval arithmetic \cite{Dawood2011} to approximate numerical operations (\cref{sec:3intervals}).
Our semantics is sound in the sense that any (pairwise compatible and exhaustive) set of interval traces yields lower and upper bounds on the posterior distribution of a program.
These lower/upper bounds are themselves super-/subadditive measures.
Moreover, under mild conditions (mostly restrictions on primitive operations), our semantics is also complete,
i.e.~for any $\epsilon > 0$ there exists a countable pairwise compatible set of interval traces that provides $\epsilon$-tight bounds on the posterior.
Our proofs hinge on a combination of stochastic symbolic execution and the convergence of Riemann sums, providing a natural correspondence between our interval trace semantics and the theory of (Riemann) integration (\cref{sec:4intervals-theory}).
Based on our interval trace semantics, we present a practical algorithm to automate the computation of guaranteed bounds.
It employs an interval type system (together with constraint-based type inference) that bounds both the value of an expression in a refinement-type fashion \emph{and} the score weight of any evaluation thereof.
The (interval) bounds inferred by our type system fit naturally in the domain of our semantics.
This enables a sound approximation of the behaviour of a program with finitely many interval traces (\cref{sec:5interval-analysis}).
We implemented our approach in a tool called GuBPI\footnote{GuBPI is available at \href{https://gubpi-tool.github.io/}{gubpi-tool.github.io}.} (\textbf{Gu}aranteed \textbf{B}ounds for \textbf{P}osterior \textbf{I}nference), pronounced ``guppy'', described in \cref{sec:6linear}, and evaluate it on a suite of benchmark programs from the literature \cite{GehrMV16,GehrSV20,SankaranarayananCG13}.
We find that the bounds computed by GuBPI are competitive in many cases where the posterior could already be inferred exactly.
Moreover, GuBPI's bounds are useful (in the sense that they are precise enough to e.g. rule out erroneous approximate results as in \cref{fig:pedestrian-stochastic}) for recursive models that could not be handled rigorously by any method before (\cref{sec:7practical-evaluation}).
\subsection{Scope and Limitations}
The contributions of this paper are of both theoretical and practical interests.
On the theoretical side, our novel semantics underpins a sound and deterministic method to compute guaranteed bounds on program denotations.
As shown by our completeness theorem, this analysis is applicable---in the sense that it computes arbitrarily tight bounds---to a very broad class of programs.
On the practical side, our analyser GuBPI implements (an optimised version of) our semantics.
As is usual for exact/guaranteed%
\footnote{By ``exact/guaranteed methods'', we mean inference algorithms that compute deterministic (non-stochastic) results about the mathematical denotation of a program. In particular, they are correct with probability 1, contrary to stochastic methods.}
methods, our semantics considers an exponential number of program paths, and partitions each sampled value into a finite number of interval approximations.
Consequently, GuBPI generally struggles with high-dimensional models.
We believe GuBPI to be most useful for unit-testing of implementations of Bayesian inference algorithms such as \Cref{ex:pedestrian}, or to compute results on (recursive) programs when non-stochastic bounds are needed.
\section{Background}
\label{sec:2background}
\subsection{Basic Probability Theory and Notation}
We assume familiarity with basic probability theory, and refer to \cite{Pollard2002} for details.
Here we just fix notations.
A \emph{measurable space} is a pair $(\Omega, \Sigma_\Omega)$ where $\Omega$ is a set (of outcomes) and $\Sigma_\Omega \subseteq 2^\Omega$ is a $\sigma$-algebra defining the measurable subsets of $\Omega$.
A \emph{measure} on $(\Omega, \Sigma_\Omega)$ is a function $\mu : \Sigma_\Omega \to \mathbb{R}_{\geq 0} \cup \{\infty\}$ that satisfies $\mu(\emptyset) = 0$ and is $\sigma$-additive.
For $\mathbb{R}^n$, we write $\Sigma_{\mathbb{R}^n}$ for the Borel $\sigma$-algebra and $\lambda_n$ for the Lebesgue measure on $(\mathbb{R}^n, \Sigma_{\mathbb{R}^n})$.
The Lebesgue integral of a measurable function $f$ with respect to a measure $\mu$ is written $\int f \D \mu$ or $\int f(x) \, \mu(\D x)$.
Given a predicate $\psi$ on $\Omega$, we define the Iverson brackets $[\psi] : \Omega \to \mathbb{R}$ by mapping all elements that satisfy $\psi$ to $1$ and all others to $0$.
For $A \in \Sigma_{\Omega}$ we define the bounded integral $\int_A f \D \mu := \int f(x) \cdot [x \in A] \mu(\D x)$.
\subsection{Statistical PCF (SPCF)}
As our probabilistic programming language of study, we use \emph{statistical PCF} (SPCF) \cite{MakOPW21}, a typed variant of \cite{BorgstromLGS16}.
SPCF includes primitive operations which are measurable functions $f : \mathbb{R}^{|f|} \to \mathbb{R}$, where $|f| \geq 0$ denotes the arity of the function.
\emph{Values} and \emph{terms} of SPCF are defined as follows:
\begin{align*}
V &:= x \mid \lit r \mid \lambda x. M \mid \fixLam \varphi x M\\
M, N, P &:= V \mid M N \mid \ifElse M N P \mid \lit f(M_1, \cdots, M_{|f|})\\
&\quad\quad\mid \mathsf{sample} \mid \mathsf{score}(M)
\end{align*}%
where $x$ and $\varphi$ are variables, $f$ is a primitive operation, and $\lit r$ a constant with $r \in \ensuremath{\mathbb R}$.
Note that we write $\fixLam \varphi x M$ instead of $\mathsf{Y}(\lambda \varphi x. M)$ for the fixpoint construct.
The branching construct is $\ifSimple M N P$, which evaluates to $N$ if $M \le 0$ and $P$ otherwise.
In SPCF,~$\mathsf{sample}$ draws a random value from uniform distribution on $[0, 1]$ and $\mathsf{score}(M)$ weights the current execution with the value of $M$.
Samples from a different real-valued distribution $D$ can be obtained by applying the inverse of the cumulative density function for $D$ to a uniform sample \cite{RubinsteinK17}.
Most PPLs feature an \textbf{observe} statement instead of manipulating the likelihood weight directly with \textbf{score}, but they are equally expressive \cite{Staton17}.%
\footnote{\label{fnote:score observe}
In Bayesian terms, an \textbf{observe} statement multiplies the likelihood function by the probability (density) of the observation \cite{GordonHNR14} (as we have used in \cref{ex:pedestrian}).
Scoring makes this explicit by keeping a weight for each program execution \cite{BorgstromLGS16}.
Observing a value $v$ from a distribution $D$ then simply multiplies the current weight by $\pdf_D(v)$ where $\pdf_D$ is the probability density function of $D$ (for continuous distributions) or the probability mass function of $D$ (for discrete distributions).
}
As usual, we write $\letIn{x = M} N$ for $(\lambda x. N) M$, $M;N$ for $\letIn{\_ = M} N$ and $M \oplus_p N$ for $\ifSimple{\mathsf{sample} - p}{M}{N}$.
The type system of our language is as expected, with types being generated by $\alpha, \beta := \textbf{\textsf{R}} \mid \alpha \to \beta$.
Selected rules are given below:
\noindent
\begin{minipage}{0.5\linewidth}
\begin{prooftree}
\AxiomC{}
\UnaryInfC{$\Gamma \vdash \mathsf{sample} : \textbf{\textsf{R}} $}
\end{prooftree}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{prooftree}
\AxiomC{$\Gamma \vdash M : \textbf{\textsf{R}}$}
\UnaryInfC{$\Gamma \vdash \mathsf{score}(M) : \textbf{\textsf{R}} $}
\end{prooftree}
\end{minipage}
\noindent
\begin{minipage}{0.5\linewidth}
\begin{prooftree}
\AxiomC{$\Gamma, \varphi: \alpha \to \beta, x:\alpha \vdash M : \beta$}
\UnaryInfC{$\Gamma \vdash \fixLam \varphi x M : \alpha \to \beta $}
\end{prooftree}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\begin{prooftree}
\AxiomC{$\{\Gamma \vdash M_i : \textbf{\textsf{R}}\}_{i=1}^{|f|}$}
\UnaryInfC{$\Gamma \vdash \lit f(M_1, \cdots, M_{|f|}) : \textbf{\textsf{R}} $}
\end{prooftree}
\end{minipage}
\begin{figure}
%
\small
\begin{minipage}{0.55\columnwidth}
\begin{prooftree}
\def1pt{1pt}
\AxiomC{}
\UnaryInfC{$\stdConf{ (\lambda x. M) V, \boldsymbol{s}, w } \to \stdConf{ M[V/x], \boldsymbol{s}, w}$}
\end{prooftree}
\end{minipage}%
\begin{minipage}{0.45\columnwidth}
\vspace{1.3mm}
\begin{prooftree}
\def1pt{1pt}
\AxiomC{}
\UnaryInfC{$\stdConf{\mathsf{sample}, r \, \boldsymbol{s}, w} \to \stdConf{ \lit{r}, \boldsymbol{s}, w} $}
\end{prooftree}
\end{minipage}
\begin{prooftree}
\def1pt{1pt}
\AxiomC{}
\UnaryInfC{$\stdConf {(\fixLam \varphi x M) V, \boldsymbol{s}, w} \to \stdConf {M[V/x, (\fixLam \varphi x M)/\varphi], \boldsymbol{s}, w} $}
\end{prooftree}
\begin{prooftree}
\def1pt{1pt}
\AxiomC{}
\UnaryInfC{$\stdConf {f(\lit{r_1}, \cdots, \lit{r_{|f|}}), \boldsymbol{s}, w} \to \stdConf{ \lit{f(r_1, \cdots, r_{|f|})}, \boldsymbol{s}, w} $}
\end{prooftree}
\begin{minipage}{0.5\columnwidth}
\begin{prooftree}
\def1pt{1pt}
\AxiomC{$r \leq 0$}
\UnaryInfC{$\stdConf {\ifSimple{\lit{r}}{N}{P}, \boldsymbol{s}, w} \to \stdConf {N, \boldsymbol{s}, w} $}
\end{prooftree}
\end{minipage}%
\begin{minipage}{0.5\columnwidth}
\begin{prooftree}
\def1pt{1pt}
\AxiomC{$r > 0$}
\UnaryInfC{$\stdConf {\ifSimple{\lit{r}}{N}{P}, \boldsymbol{s}, w} \to \stdConf {P, \boldsymbol{s}, w} $}
\end{prooftree}
\end{minipage}
\begin{minipage}{0.5\columnwidth}
\vspace{1mm}
\begin{prooftree}
\def1pt{1pt}
\AxiomC{$r \geq 0$}
\UnaryInfC{$\stdConf {\mathsf{score}(\lit{r}), \boldsymbol{s}, w} \to \stdConf{ \lit{r}, \boldsymbol{s}, w \cdot r} $}
\end{prooftree}
\end{minipage}%
\begin{minipage}{0.5\columnwidth}
\begin{prooftree}
\def1pt{1pt}
\AxiomC{$\stdConf {R, \boldsymbol{s}, w} \to \stdConf {M, \boldsymbol{s}', w'}$}
\UnaryInfC{$\stdConf {E[R], \boldsymbol{s}, w} \to \stdConf {E[M], \boldsymbol{s}', w'} $}
\end{prooftree}
\end{minipage}
%
\caption{Small-step CbV reduction for SPCF. } \label{fig:reductionRules}
\end{figure}
\subsection{Trace Semantics}
Following \cite{BorgstromLGS16}, we endow SPCF with a trace-based operational semantics.
We evaluate a probabilistic program $P$ on a fixed \defn{trace} $\boldsymbol{s} = \langle r_1, \dots, r_n \rangle \in \mathbb{T} := \bigcup_{n \in \ensuremath{\mathbb N}} [0,1]^n$, which \emph{predetermines} the probabilistic choices made during the evaluation.
Our semantics therefore operates on configurations of the form $\stdConf{M, \boldsymbol{s}, w}$ where $M$ is an SPCF term, $\boldsymbol{s}$ is a trace and $w \in \mathbb{R}_{\geq 0}$ a weight.
The call-by-value (CbV) reduction is given by the rules in \cref{fig:reductionRules}, where $E[\cdot]$ denotes a CbV evaluation context.
The definition is standard \cite{BorgstromLGS16, MakOPW21,BeutnerO21}.
Given a program $\vdash P : \textbf{\textsf{R}}$, we call a trace $\boldsymbol{s}$ \defn{terminating} just if $\stdConf {P, \boldsymbol{s}, 1} \to^* \stdConf {V, \langle\rangle, w}$ for some value $V$ and weight $w$, i.e.~if the samples drawn are as specified by $\boldsymbol{s}$, the program $P$ terminates.
Note that we require the trace $\boldsymbol{s}$ to be completely used up.
As $P$ is of type $\textbf{\textsf{R}}$ we can assume that $V = \lit{r}$ for some $r \in \mathbb{R}$.
Moreover (given $P$) each terminating $\boldsymbol{s}$ uniquely determines the returned \emph{value} $\lit{r}$ where $r =: \valueSem P(\boldsymbol{s}) \in \ensuremath{\mathbb R}$, and the \emph{weight} $w =: \weightSem P(\boldsymbol{s}) \in \ensuremath{\mathbb R}_{\ge 0}$, of the execution.
For a nonterminating trace $\boldsymbol{s}$, $\valueSem P(\boldsymbol{s})$ is undefined and $\weightSem P(\boldsymbol{s}) := 0$.
\begin{example}
\label{ex:pedestrian2}
For an illustration, consider \cref{ex:pedestrian}.
On the terminating trace $\boldsymbol{s} = \langle0.4, 0.1, 0.2, 0.8, 0.7\rangle \in [0,1]^5 \subseteq \mathbb{T}$ we recurse twice and obtain:
\[ \valueSem P(\boldsymbol{s}) = 0.4, \; \weightSem P(\boldsymbol{s}) = \pdf_{\ensuremath{\mathrm{Normal}}(1.1, 0.1)}(0.9). \]
\end{example}
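This computation can be replayed mechanically. The sketch below runs the pedestrian program on a fixed trace, consuming one trace entry per $\mathsf{sample}$; following the example above, the first entry is taken as the starting position, and each $\oplus_{0.5}$ branch consumes one entry (a convention chosen to match the numbers in the example):

```python
import math

def run_on_trace(trace):
    """Replay the pedestrian program on a fixed trace; returns (value, weight).

    Convention (matching the example): first entry = start, then alternating
    step / branch entries; branch <= 0.5 takes the left summand x + step."""
    entries = iter(trace)
    start = next(entries)
    x, distance = start, 0.0
    while x > 0:
        step = next(entries)
        distance += step
        branch = next(entries)
        x = x + step if branch <= 0.5 else x - step
    assert next(entries, None) is None, "trace must be used up completely"
    sigma = 0.1
    weight = math.exp(-0.5 * ((distance - 1.1) / sigma) ** 2) \
        / (sigma * math.sqrt(2.0 * math.pi))
    return start, weight

value, weight = run_on_trace([0.4, 0.1, 0.2, 0.8, 0.7])
# value = 0.4; weight = pdf of Normal(1.1, 0.1) at the total distance 0.9
```

The walk visits $0.4 \to 0.5 \to -0.3$, accumulating the distance $0.1 + 0.8 = 0.9$, in agreement with the example.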
In order to do measure theory, we need to turn our set of traces into a measurable space.
The trace space $\mathbb{T}$ is equipped with the $\sigma$-algebra $\Sigma_\mathbb{T} := \{ \bigcup_{n \in \ensuremath{\mathbb N}} U_n \mid U_n \in \Sigma_{[0,1]^n} \}$, where $\Sigma_{[0,1]^n}$ is the Borel $\sigma$-algebra on $[0,1]^n$. We define a measure $\mu_\mathbb{T}$ by $\mu_\mathbb{T}(U) := \sum_{n\in \ensuremath{\mathbb N}} \lambda_n(U \cap [0,1]^n)$ \cite{BorgstromLGS16}.
We can now define the semantics of an SPCF program $\vdash P : \textbf{\textsf{R}}$ by using the weight and returned value of (executions of $P$ determined by) individual traces.
Given $U \in \Sigma_\mathbb{R}$, we need to define the likelihood of $P$ evaluating to a value in $U$.
To this end, we set $\valueSem P^{-1}(U) := \{ \boldsymbol{s} \in \mathbb{T} \mid \stdConf{P, \boldsymbol{s}, 1} \to^* \stdConf{\lit{r}, \langle\rangle, w}, r \in U \}$, i.e.~the set of traces on which the program $P$ reduces to a value in $U$.
As shown in \citep[Lem.~9]{BorgstromLGS16}, $\valueSem P^{-1}(U)$ is measurable.
Thus, we can define (cf. \cite{BorgstromLGS16,MakOPW21})
\[
\textstyle\measureSem P(U) := \int_{\valueSem P^{-1}(U)} \weightSem P(\boldsymbol{s}) \,\mu_\mathbb{T}(\D \boldsymbol{s}).
\]
That is, the integral takes all traces $\boldsymbol{s}$ on which $P$ evaluates to a value in $U$, weighting each $\boldsymbol{s}$ with the weight $\weightSem P(\boldsymbol{s})$ of the corresponding execution.
{A program $P$ is called \defn{almost surely terminating (AST)} if it terminates with probability 1, i.e.~$\mu_\mathbb{T}(\valueSem P^{-1}(\ensuremath{\mathbb R})) = 1$.
This is a necessary assumption for approximate inference algorithms (since they execute the program).
See \cite{BorgstromLGS16} for a more in-depth discussion of this (standard) sampling-style semantics.
\paragraph{Normalising constant and integrability}
In Bayesian statistics, one is usually interested in the \emph{normalised} posterior, which is a conditional probability distribution.
We can obtain the normalised denotation as
$ \mathsf{posterior}_P := \frac{\measureSem P}{Z_P}$
where $Z_P := \measureSem P(\ensuremath{\mathbb R})$ is the \emph{normalising constant}.
We call $P$ \emph{integrable} if $0 < Z_P < \infty$.
The bounds computed in this paper (on the unnormalised denotation $\measureSem{P}$) allow us to compute bounds on the normalising constant $Z_P$, and thereby also on the normalised denotation.
All bounds reported in this paper (in particular in \cref{sec:7practical-evaluation}) refer to the \emph{normalised} denotation.
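Concretely, given guaranteed bounds on the unnormalised denotation, sound bounds on the normalised posterior follow from a simple interval quotient (a sketch of this step only, not GuBPI's actual implementation):

```python
def normalise_bounds(lo_U, hi_U, lo_Z, hi_Z):
    """Bounds on posterior(U) = m(U) / Z_P from bounds on m(U) and Z_P.

    Soundness: m(U)/Z >= lo_U/hi_Z and m(U)/Z <= hi_U/lo_Z (and <= 1, since
    posterior is a probability).  Requires lo_Z > 0, i.e. integrability."""
    assert 0.0 <= lo_U <= hi_U and 0.0 < lo_Z <= hi_Z
    return lo_U / hi_Z, min(hi_U / lo_Z, 1.0)

lo, hi = normalise_bounds(0.2, 0.3, 0.9, 1.1)   # hypothetical example numbers
```

The quotient of the lower numerator bound by the upper denominator bound (and vice versa) preserves the guarantee.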
\section{Interval Trace Semantics}
\label{sec:3intervals}
In order to obtain guaranteed bounds on the distribution denotation $\measureSem P$ (and also on $\mathsf{posterior}_P$) of a program $P$, we present an interval-based semantics.
In our semantics we approximate the outcomes of $\mathsf{sample}$ with intervals and handle arithmetic operations by means of interval arithmetic (similar to the approach by \citet{BeutnerO21} in the context of termination analysis).
Our semantics enables us to reason about the denotation of a program \emph{without} considering the uncountable space of traces explicitly.
\subsection{Interval Arithmetic}\label{sec:intervalArith}
For our purposes, an \defn{interval} has the form $[a, b]$ which denotes the set $\{ x \in \ensuremath{\mathbb R} \mid a \le x \le b \}$, where $a \in \ensuremath{\mathbb R} \cup \{-\infty\}$, $b \in \ensuremath{\mathbb R} \cup \{\infty\}$, and $a \leq b$.
For consistency, we write $[0, \infty]$ instead of the more typical $[0,\infty)$.
For $X \subseteq \ensuremath{\mathbb R} \cup \{-\infty,\infty\}$, we denote by $\mathbb{I}_X$ the set of intervals with endpoints in $X$, and simply write $\mathbb{I}$ for $\mathbb{I}_{\ensuremath{\mathbb R} \cup \{-\infty,\infty\}}$.
We call an $n$-tuple of intervals an $n$-dimensional \defn{box}.
We can lift functions on real numbers to intervals as follows: for each $f: \ensuremath{\mathbb R}^n \to \ensuremath{\mathbb R}$ we define $f^\mathbb{I}: \mathbb{I}^n \to \mathbb{I}$ by
\[ f^\mathbb{I}([a_1,b_1], \dots, [a_n,b_n]) := [\inf F, \sup F] \]
where $F := f([a_1,b_1], \dots, [a_n,b_n])$.
For constants $c \in \ensuremath{\mathbb R}$ (i.e. nullary functions), and common functions like $+$, $-$, $\times$, $|\cdot|$, $\min$, $\max$, and monotonically increasing/decreasing functions $f: \ensuremath{\mathbb R} \to \ensuremath{\mathbb R}$, their interval-lifted counterparts can easily be computed, from the values of the original function on just the endpoints of the input interval.
For example, constants become point intervals $c^\mathbb{I} = [c, c]$, and addition lifts to $[a_1,b_1] +^\mathbb{I} [a_2,b_2] = [a_1 + a_2, b_1 + b_2]$;
similarly for multiplication $\times^\mathbb{I}$.
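A minimal sketch of these liftings in Python (illustrative only; a sound floating-point implementation would additionally round the endpoints outwards):

```python
class Interval:
    """Closed interval [lo, hi] with the liftings described above."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # [a1,b1] +^I [a2,b2] = [a1+a2, b1+b2]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # the extrema of x*y over a box are attained at corner points
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        return Interval(min(corners), max(corners))

def lift_monotone(f):
    """Lift a monotonically increasing or decreasing f : R -> R to f^I."""
    def f_interval(iv):
        a, b = f(iv.lo), f(iv.hi)
        return Interval(min(a, b), max(a, b))
    return f_interval
```

For instance, `Interval(-1, 2) * Interval(3, 4)` yields $[-4, 8]$, the tightest interval containing all pointwise products.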
\subsection{Interval Traces and Interval SPCF}
In our interval interpretation, probabilistic programs are run on \emph{interval traces}.
An \defn{interval trace}, $\langle I_1, \dots, I_n \rangle \in \mathbb{T}_\mathbb{I} := \bigcup_{n\in \ensuremath{\mathbb N}} (\mathbb{I}_{[0,1]})^n$, is a finite sequence of intervals $I_1, \dots, I_n$, each with endpoints between $0$ and $1$.
To distinguish ordinary traces $\boldsymbol{s} \in \mathbb{T}$ from interval traces $\boldsymbol{t} \in \mathbb{T}_\mathbb{I}$, we call the former \emph{concrete} traces.
We define the \defn{refinement} relation $\mathrel{\triangleleft}$ between concrete and interval traces as follows:
for $\boldsymbol{s} = \langle r_1, \cdots, r_n \rangle \in \mathbb{T}$ and $\boldsymbol{t} = \langle I_1, \dots, I_m \rangle\in \mathbb{T}_\mathbb{I}$, we define $\boldsymbol{s} \mathrel{\triangleleft} \boldsymbol{t}$ just if $n = m$ and for all $i$, $r_i \in I_i$.
For each interval trace $\boldsymbol{t}$, we denote by $\tracesin{\boldsymbol{t}} := \{ \boldsymbol{s} \in \mathbb{T} \mid \boldsymbol{s} \mathrel{\triangleleft} \boldsymbol{t} \}$ the set of all refinements of $\boldsymbol{t}$.
To define a reduction of a term on an interval trace, we extend SPCF with \emph{interval literals} $\lit{[a,b]}$, which replace the literals $\lit{r}$ but are still considered values of type $\textbf{\textsf{R}}$.
In fact, $\lit r$ can be read as an abbreviation for $\lit{[r,r]}$.
We call such terms \defn{interval terms}, and the resulting language \defn{Interval SPCF}.
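The refinement relation is directly executable (a small sketch; interval traces are represented here as lists of endpoint pairs):

```python
def refines(s, t):
    """s <| t : the concrete trace s refines the interval trace t."""
    return len(s) == len(t) and all(lo <= r <= hi
                                    for r, (lo, hi) in zip(s, t))

# the concrete trace from the pedestrian example refines this interval trace
t = [(0.25, 0.5), (0.0, 0.25), (0.0, 0.25), (0.75, 1.0), (0.5, 0.75)]
```

By definition, `refines` requires equal lengths and componentwise membership.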
\begin{figure}
%
\small
\begin{minipage}{0.55\columnwidth}
\begin{prooftree}
\def1pt{0pt}
\AxiomC{}
\UnaryInfC{$\intConf {(\lambda x. M) V, \boldsymbol{t}, w} \to_\mathbb{I} \intConf {M[V/x], \boldsymbol{t}, w}$}
\end{prooftree}
\end{minipage}%
\begin{minipage}{0.45\columnwidth}
\vspace{1.5mm}
\begin{prooftree}
\def1pt{0pt}
\AxiomC{}
\UnaryInfC{$\intConf{ \mathsf{sample}, I \, \boldsymbol{t}, w} \to_\mathbb{I} \intConf{ \lit{I}, \boldsymbol{t}, w} $}
\end{prooftree}
\end{minipage}
\begin{prooftree}
\AxiomC{}
\UnaryInfC{$\intConf {(\fixLam \varphi x M) V, \boldsymbol{t}, w} \to_\mathbb{I} \intConf {M[V/x, (\fixLam \varphi x M)/\varphi], \boldsymbol{t}, w} $}
\end{prooftree}
\begin{minipage}{1\columnwidth}
\begin{prooftree}
\def1pt{-2pt}
\AxiomC{$b \leq 0$}
\UnaryInfC{$\intConf{\ifSimple{\lit{[a, b]}}{N}{P}, \boldsymbol{t}, w} \to_\mathbb{I} \intConf {N, \boldsymbol{t}, w} $}
\end{prooftree}
\end{minipage}
\begin{minipage}{1\columnwidth}
\begin{prooftree}
\def1pt{-2pt}
\AxiomC{$a > 0$}
\UnaryInfC{$\intConf {\ifSimple{\lit{[a, b]}}{N}{P},\boldsymbol{t}, w} \to_\mathbb{I} \intConf {P, \boldsymbol{t}, w} $}
\end{prooftree}
\end{minipage}
\begin{prooftree}
\AxiomC{}
\UnaryInfC{$\intConf{ f(\lit{I_1}, \cdots, \lit{I_{|f|}}), \boldsymbol{t}, w} \to_\mathbb{I} \intConf{ \lit{f^\mathbb{I}(I_1, \dots, I_{|f|})}, \boldsymbol{t}, w} $}
\end{prooftree}
\begin{prooftree}
\AxiomC{$a \geq 0$}
\UnaryInfC{$\intConf {\mathsf{score}(\lit{[a, b]}), \boldsymbol{t}, w} \to_\mathbb{I} \intConf {\lit{[a, b]}, \boldsymbol{t}, w \times^\mathbb{I} [a, b]} $}
\end{prooftree}
\begin{prooftree}
\AxiomC{$\intConf {R, \boldsymbol{t}, w} \to_\mathbb{I} \intConf {M, \boldsymbol{t}', w'}$}
\UnaryInfC{$\intConf {E[R], \boldsymbol{t}, w} \to_\mathbb{I} \intConf {E[M], \boldsymbol{t}', w'} $}
\end{prooftree}
%
\caption{Rules for interval reduction $\to_\mathbb{I}$.}
\label{fig:interval-semantics}
\end{figure}
\paragraph{Reduction}
The interval-based reduction $\to_\mathbb{I}$ now works on configurations $\intConf{M, \boldsymbol{t}, w}$ of interval terms $M$, interval traces $\boldsymbol{t} \in \mathbb{T}_\mathbb{I}$, and interval weights $w \in \mathbb{I}_{\ensuremath{\mathbb R}_{\ge 0}}$.
The redexes and evaluation contexts of SPCF extend naturally to interval terms.
The reduction rules are given in \cref{fig:interval-semantics}.%
\footnote{
For conditionals, the interval bound is not always precise enough to decide which branch to take, so the reduction can get stuck if $a \le 0 < b$.
We could include additional rules to overapproximate the branching behavior (see \ifFull{\cref{app:sec-additional-reduction-rules}}).
But the rules given here simplify the presentation and are enough to prove soundness and completeness.
}
The reduction relation $\to_\mathbb{I}$ allows us to define the \emph{interval weight} function ($ \weightSem P^\mathbb{I} : \mathbb{T}_\mathbb{I} \to \mathbb{I}_{\ensuremath{\mathbb R}_{\ge 0}}$) and \emph{interval value} function ($\valueSem P^\mathbb{I} : \mathbb{T}_\mathbb{I} \to \mathbb{I}$) by:
\begin{align*}
\weightSem P^\mathbb{I}(\boldsymbol{t}) &:= \begin{cases}
w &\text{if } \intConf{P, \boldsymbol{t}, 1} \to^*_\mathbb{I} \intConf{V, \langle\rangle, w} \\
[0, \infty] &\text{otherwise,}
\end{cases} \\
\valueSem P^\mathbb{I}(\boldsymbol{t}) &:= \begin{cases}
[a, b] &\text{if } \intConf{P, \boldsymbol{t}, 1} \to^*_\mathbb{I} \intConf{\lit{[a, b]}, \langle\rangle, w} \\
[-\infty, \infty] &\text{otherwise.}
\end{cases}
\end{align*}
It is not difficult to prove the following relationship between standard and interval reduction.
\begin{restatable}{lemma}{lemIntervalApproximation}
Let $ \vdash P : \textbf{\textsf{R}}$ be a program.
For any interval trace $\boldsymbol{t}$ and trace $\boldsymbol{s} \mathrel{\triangleleft} \boldsymbol{t}$, we have $\weightSem P(\boldsymbol{s}) \in \weightSem P^\mathbb{I}(\boldsymbol{t})$ and $\valueSem P(\boldsymbol{s}) \in \valueSem P^\mathbb{I}(\boldsymbol{t})$.
\label{lem:interval-approximation}
\end{restatable}
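The lemma can be checked by hand on a one-sample program. For $P = \mathsf{score}(2 \cdot \mathsf{sample})$, both the weight and the value on a concrete trace $\langle r \rangle$ are $2r$, and on an interval trace $\langle I \rangle$ the interval semantics yields $[2,2] \times^\mathbb{I} I$ for both. The sketch below (hand-evaluated; `imul` is standard interval multiplication, not tied to our implementation) confirms the containment:

```python
def imul(x, y):
    """Interval multiplication x *I y: min/max over the endpoint products."""
    (a, b), (c, d) = x, y
    products = [a * c, a * d, b * c, b * d]
    return (min(products), max(products))

# P = score(2 * sample): weight and value on <r> are 2r; on <I> they are [2,2] *I I.
t = (0.25, 0.75)
w_interval = imul((2.0, 2.0), t)        # (0.5, 1.5)
r = 0.5                                  # <r> refines <t>
w_concrete = 2 * r                       # 1.0
assert w_interval[0] <= w_concrete <= w_interval[1]
```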
\subsection{Bounds from Interval Traces}
\label{sec:boundsFromIntervalTraces}
\paragraph{Lower bounds}
How can we use this interval trace semantics to obtain lower bounds on $\measureSem P$?
We need a few definitions.
Two intervals $[a_1,b_1],\allowbreak [a_2,b_2] \in \mathbb{I}$ are called \defn{almost disjoint} if $b_1 \le a_2$ or $b_2 \le a_1$.
Interval traces $\langle I_1,\dots,I_m \rangle$ and $\langle J_1,\dots,J_n \rangle \in \mathbb{T}_\mathbb{I}$ are called \defn{compatible} if there is an index $i \in \{1,\dots,\min(m,n)\}$ such that $I_i$ and $J_i$ are almost disjoint.
We define the \emph{volume} of an interval trace $\boldsymbol{t} = \langle[a_1,b_1],\allowbreak\dots,\allowbreak[a_n,b_n]\rangle$ as $\volume (\boldsymbol{t}) := \prod_{i=1}^n (b_i - a_i)$.
Let $\mathcal{T} \subseteq \mathbb{T}_\mathbb{I}$ be a countable set of pairwise compatible interval traces.
Define the \emph{lower bound} on $\measureSem P$ by
\begin{align*}
\lowerBound P^\mathcal{T} (U) \!:=\! \sum_{\boldsymbol{t} \in \mathcal{T}} \volume(\boldsymbol{t}) \! \cdot \! (\min \weightSem P^\mathbb{I}(\boldsymbol{t})) \!\cdot\! \big[\valueSem P^\mathbb{I}(\boldsymbol{t}) \subseteq U\big]
\end{align*}
for $U \in \Sigma_\ensuremath{\mathbb R}$.
Note that in general $\lowerBound P^\mathcal{T}$ is not a measure, but merely a \emph{superadditive measure}.%
\footnote{A \emph{superadditive measure} $\mu$ on $(\Omega, \Sigma_\Omega)$ is a measure, except that $\sigma$-additivity is replaced by $\sigma$-superadditivity: $\mu(\bigcup_{i\in\ensuremath{\mathbb N}} U_i) \ge \sum_{i\in\ensuremath{\mathbb N}} \mu(U_i)$ for a countable, pairwise disjoint family $(U_i)_{i \in \ensuremath{\mathbb N}} \in \Sigma_\Omega$.}
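The definitions above translate directly into code. In the following Python sketch, the interval weight and value functions are passed in as arguments (hypothetical stand-ins for the results of interval reduction), and $U$ is restricted to an interval:

```python
def almost_disjoint(i, j):
    (a1, b1), (a2, b2) = i, j
    return b1 <= a2 or b2 <= a1

def compatible(t, u):
    # zip truncates to the shorter trace, matching the index range 1..min(m,n)
    return any(almost_disjoint(i, j) for i, j in zip(t, u))

def volume(t):
    v = 1.0
    for a, b in t:
        v *= b - a
    return v

def lower_bound(traces, weight_iv, value_iv, U):
    """lowBound_P^T(U) for an interval U = (lo, hi); weight_iv / value_iv map an
    interval trace to its interval weight / value (hypothetical inputs)."""
    assert all(compatible(t, u) for t in traces for u in traces if t is not u)
    lo, hi = U
    total = 0.0
    for t in traces:
        v = value_iv(t)
        if lo <= v[0] and v[1] <= hi:               # [value interval subset of U]
            total += volume(t) * weight_iv(t)[0]    # min of the interval weight
    return total

# Toy program "return sample" (weight 1): T = {<[0,0.5]>, <[0.5,1]>}
T = [((0.0, 0.5),), ((0.5, 1.0),)]
lb = lower_bound(T, lambda t: (1.0, 1.0), lambda t: t[0], (0.0, 0.5))
assert lb == 0.5    # only the first trace has its value interval inside U
```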
\paragraph{Upper bounds}
For upper bounds, we require the notion of a set of interval traces being \emph{exhaustive}, which is easiest to express in terms of infinite traces.
Let $\mathbb{T}_\infty := [0,1]^\omega$ be the set of infinite traces.
Every interval trace $\boldsymbol{t}$ \emph{covers} the set of infinite traces with a prefix refining $\boldsymbol{t}$, i.e.~$\mathit{cover}(\boldsymbol{t}) := \tracesin{\boldsymbol{t}} \times \mathbb{T}_\infty$ (where the Cartesian product $\times$ can be viewed as trace concatenation).
A countable set of (finite) interval traces $\mathcal{T} \subseteq \mathbb{T}_\mathbb{I}$ is called \defn{exhaustive} if $\bigcup_{\boldsymbol{t} \in \mathcal{T}} \mathit{cover}(\boldsymbol{t})$ covers almost all of $\mathbb{T}_\infty$, i.e.~$\mu_{\mathbb{T}_\infty}(\mathbb{T}_\infty \setminus \bigcup_{\boldsymbol{t} \in \mathcal{T}} \mathit{cover}(\boldsymbol{t})) = 0$.%
\footnote{The $\sigma$-algebra on $\mathbb{T}_\infty$ is defined as the smallest $\sigma$-algebra that contains all sets $U \times \mathbb{T}_\infty$ where $U \in \Sigma_{[0, 1]^n}$ for some $n \in \ensuremath{\mathbb N}$. The measure $\mu_{\mathbb{T}_\infty}$ is the unique measure with $\mu_{\mathbb{T}_\infty}(U \times \mathbb{T}_\infty) = \lambda_n(U)$ when $U \in \Sigma_{[0, 1]^n}$.}
Phrased differently, almost all concrete traces must have a finite prefix that is contained in some interval trace in $\mathcal{T}$; the analysis in the interval semantics on $\mathcal{T}$ therefore covers the behaviour on almost all concrete traces (in the original semantics).
\begin{example}
(i) The singleton set $\{\langle [0,1], [0,0.6] \rangle\}$ is not exhaustive as, for example, all infinite traces $\langle r_1, r_2, \cdots \rangle$ with $r_2 > 0.6$ are not covered.
(ii) The set $\{\langle [0,0.6] \rangle , \langle [0.3, 1] \rangle\}$ is exhaustive, but not pairwise compatible.
(iii) Define
$\mathcal{T}_1 := \{ \langle [\tfrac12, 1]^{\dots n}, [0,\tfrac13] \rangle \mid n \in \ensuremath{\mathbb N} \}$ and $\mathcal{T}_2 := \{ \langle [\tfrac12, 1]^{\dots n},\allowbreak [0,\tfrac12] \rangle \mid n \in \ensuremath{\mathbb N} \}$ where $x^{\dots n}$ denotes $n$-fold repetition of $x$.
$\mathcal{T}_1$ is pairwise compatible but not exhaustive. For example, it does not cover the set $[\tfrac12, 1] \times (\tfrac13, \tfrac12) \times \mathbb{T}_\infty$, i.e.~all traces $\langle r_1, r_2, \cdots \rangle$ where $r_1 \in [\tfrac12, 1]$ and $r_2 \in (\tfrac{1}{3}, \tfrac{1}{2})$.
$\mathcal{T}_2$ is pairwise compatible and exhaustive (the set of non-covered traces $(\tfrac12, 1]^\omega$ has measure $0$).
\end{example}
Let $\mathcal{T} \subseteq \mathbb{T}_\mathbb{I}$ be an exhaustive set of interval traces.
Define the \emph{upper bound} on $\measureSem P$ by
\begin{align*}
\upperBound P^\mathcal{T}(U) \!:=\!\! \sum_{\boldsymbol{t} \in \mathcal{T}} \volume(\boldsymbol{t}) \! \cdot \! (\sup \weightSem P^\mathbb{I}(\boldsymbol{t})) \! \cdot \! \big[\valueSem P^\mathbb{I}(\boldsymbol{t}) \cap U \ne \emptyset\big]
\end{align*}
for $U \in \Sigma_\ensuremath{\mathbb R}$.
Note that {$\upperBound P^\mathcal{T}$} is not a measure but only a \emph{subadditive measure}.%
\footnote{A \emph{subadditive measure} $\mu$ on $(\Omega, \Sigma_\Omega)$ is a measure, except that $\sigma$-additivity is replaced by $\sigma$-subadditivity: $\mu(\bigcup_{i\in\ensuremath{\mathbb N}} U_i) \le \sum_{i\in\ensuremath{\mathbb N}} \mu(U_i)$ for a countable, pairwise disjoint family $(U_i)_{i \in \ensuremath{\mathbb N}} \in \Sigma_\Omega$.}
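A sketch of the upper bound is analogous (again with hypothetical interval weight and value functions passed in; exhaustiveness of the trace set is assumed, not checked):

```python
def volume(t):
    v = 1.0
    for a, b in t:
        v *= b - a
    return v

def upper_bound(traces, weight_iv, value_iv, U):
    """upBound_P^T(U) for an interval U = (lo, hi)."""
    lo, hi = U
    total = 0.0
    for t in traces:
        v = value_iv(t)
        if v[1] >= lo and v[0] <= hi:               # [value interval meets U]
            total += volume(t) * weight_iv(t)[1]    # sup of the interval weight
    return total

# Toy program "return sample" (weight 1): both value intervals touch U = [0, 0.5],
# so the upper bound is 1.0, while the true measure of U is 0.5.
T = [((0.0, 0.5),), ((0.5, 1.0),)]
ub = upper_bound(T, lambda t: (1.0, 1.0), lambda t: t[0], (0.0, 0.5))
assert ub == 1.0
```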
\section{Soundness and Completeness}
\label{sec:4intervals-theory}
\subsection{Soundness}
We show that the two bounds described above are \emph{sound}, in the following sense.
\begin{theorem}[Sound lower bounds]\label{thm:lowerBoundsSound}
Let $\mathcal{T}$ be a pairwise compatible set of interval traces and $\vdash P: \textbf{\textsf{R}}$ a program (which need not be almost surely terminating).
Then
\( \lowerBound P^\mathcal{T} \le \measureSem P. \)
\end{theorem}
\begin{proof}
For any $U \in \Sigma_\ensuremath{\mathbb R}$, we have:
\begin{align}
\lowerBound P^\mathcal{T}(U)
&= \sum_{\boldsymbol{t} \in \mathcal{T}} \volume(\boldsymbol{t}) (\min \weightSem P^\mathbb{I}(\boldsymbol{t})) \big[\valueSem P^\mathbb{I}(\boldsymbol{t}) \subseteq U\big] \nonumber\\
&= \sum_{\boldsymbol{t} \in \mathcal{T}} \int_{\tracesin{\boldsymbol{t}}} (\min \weightSem P^\mathbb{I}(\boldsymbol{t})) \big[\valueSem P^\mathbb{I}(\boldsymbol{t}) \subseteq U\big] \D \boldsymbol{s} \nonumber\\
&\le \sum_{\boldsymbol{t} \in \mathcal{T}} \int_{\tracesin{\boldsymbol{t}}} \weightSem P(\boldsymbol{s}) \big[\valueSem P(\boldsymbol{s}) \in U\big] \D \boldsymbol{s} \label{eq:fourth soundness}\\
&= \int_{\bigcup_{\boldsymbol{t} \in \mathcal{T}} \tracesin{\boldsymbol{t}}} \weightSem P(\boldsymbol{s}) \big[\valueSem P(\boldsymbol{s}) \in U\big] \D \boldsymbol{s} \label{eq:fifth soundness}\\
&\le \int_{\mathbb{T}} \weightSem P(\boldsymbol{s}) \big[\valueSem P(\boldsymbol{s}) \in U\big] \D \boldsymbol{s} = \measureSem P(U) \label{eq:sixth soundness}
\end{align}
where \cref{eq:fourth soundness} follows from \cref{lem:interval-approximation}, \cref{eq:fifth soundness} from pairwise compatibility, and \cref{eq:sixth soundness} from $\bigcup_{\boldsymbol{t} \in \mathcal{T}} \tracesin{\boldsymbol{t}} \subseteq \mathbb{T}$.
\end{proof}
\begin{restatable}[Sound upper bounds]{theorem}{thmSoundUpper}\label{thm:upperBoundsSound}
Let $\mathcal{T}$ be an exhaustive set of interval traces and $\vdash P: \textbf{\textsf{R}}$ a program (which need not be almost surely terminating).
Then
\( \measureSem P \le \upperBound P^\mathcal{T}. \)
\end{restatable}
\begin{proof}[Proof sketch]
The formal proof is similar to that for lower bounds, but needs an infinite trace semantics \cite{CulpepperC17} for probabilistic programs and is given in \ifFull{the appendix}.
The idea is then that each interval trace $\boldsymbol{t}$ summarizes all infinite traces that have a prefix in $\tracesin{\boldsymbol{t}}$.
Exhaustivity ensures that almost all infinite traces are ``covered''.
\end{proof}
\subsection{Completeness}
The soundness results for upper and lower bounds allow us to derive bounds on the denotation of a program.
One would expect that a finer partition of interval traces will yield more precise bounds.
In this section, we show that for a program $P$ and an interval $I \in \mathbb{I}$, the approximations $\lowerBound P^\mathcal{T}(I)$ and $\upperBound P^\mathcal{T}(I)$ can in fact come arbitrarily close to $\measureSem P(I)$ for suitable $\mathcal{T}$.
However, this is only possible under certain assumptions.
\paragraph{Assumption 1: use of sampled values}
Interval arithmetic is imprecise if the same value is used more than once: consider, for instance, $\letIn{s = \mathsf{sample}} \ifElse{s - s}{0}{1}$, which deterministically evaluates to $0$.
In interval arithmetic, however, if $x$ is approximated by an interval $[a,b]$ with $a < b$, the difference $x - x$ is approximated by $[a - b, b - a]$, which contains both positive and negative values.
So no non-trivial interval trace can separate the two branches.
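This loss of precision is easy to reproduce with plain interval arithmetic (a sketch; `isub` is the usual interval subtraction, not tied to our implementation):

```python
def isub(x, y):
    """Interval subtraction: [a,b] -I [c,d] = [a-d, b-c]."""
    (a, b), (c, d) = x, y
    return (a - d, b - c)

s = (0.0, 1.0)            # interval approximation of the sampled value
guard = isub(s, s)        # the guard s - s
assert guard == (-1.0, 1.0)   # spans both branches, although s - s = 0 pointwise
```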
To avoid this, we could consider a call-by-name semantics (as done in \cite{BeutnerO21}) where sample values can only be used once by definition.
However, many of our examples cannot be expressed in the call-by-name setting, so we instead propose a less restrictive criterion to guarantee completeness for call-by-value:
we allow sample values to be used more than once, but at most once in the guard of each conditional, at most once in each score expression, and at most once in the return value.
While this prohibits terms like the one above, it allows, e.g.~$\letIn{s = \mathsf{sample}} \ifElse{s}{\lit f(s)}{\lit g(s)}$.
We formalise this sufficient condition with a resource-aware type system in \ifFull{\cref{sec:qtt}}.
Most examples we encountered in the literature satisfy this assumption.
\paragraph{Assumption 2: primitive functions}
In addition, we need mild assumptions on the primitive functions, called \emph{boxwise continuity} and \emph{interval separability}.
We need to be able to approximate a program's weight function by step functions in order to obtain tight bounds on its integral.
A function $f: \ensuremath{\mathbb R}^n \to \ensuremath{\mathbb R}$ is \defn{boxwise continuous} if its domain can be split into countably many boxes on which $f$ is continuous, i.e.~if there is a countable family of pairwise almost disjoint boxes $B_i$ such that $\bigcup_i B_i = \ensuremath{\mathbb R}^n$ and the restriction $f|_{B_i}$ is continuous for each $i$.
Furthermore, we need to approximate preimages.
Formally, we say that $A$ is a \defn{tight subset} of $B$ (written $A \Subset B$) if $A \subseteq B$ and $B \setminus A$ is a null set.
A function $f: \ensuremath{\mathbb R}^n \to \ensuremath{\mathbb R}$ is called \defn{interval separable} if for every interval $[a,b] \in \mathbb{I}$, there is a countable set $\mathcal B$ of boxes in $\ensuremath{\mathbb R}^n$ that tightly approximates the preimage, i.e.~$\bigcup \mathcal B \Subset f^{-1}([a,b])$.
A sufficient condition is the following: if $f$ is boxwise continuous and preimages of points under $f$ have measure zero, then $f$ is interval separable (\ifFull{\cref{lem:continuous-preimage-null-interval-separable}}).
We assume the set $\mathcal F$ of primitive functions is \defn{admissible}, meaning it is closed under composition and each $f\in \mathcal F$ is interval separable and boxwise continuous.
\paragraph{The completeness theorem}
Using these two assumptions, we can state completeness of our interval semantics.
\begin{restatable}[Completeness of interval approximations]{theorem}{thmCompleteness}\label{thm:completeness}
\label{thm:Completeness of interval approximations}
Let $I \in \mathbb{I}$ and $\vdash P : \textbf{\textsf{R}}$ be an almost surely terminating program satisfying the two assumptions discussed above.
Then, for all $\epsilon > 0$, there is a countable exhaustive set of pairwise compatible interval traces $\mathcal{T} \subseteq \mathbb{T}_\mathbb{I}$ such that
\begin{align*}
\textstyle\upperBound P^{\mathcal{T}}(I) - \epsilon \le \measureSem P(I) \le \lowerBound P^{\mathcal{T}}(I) + \epsilon.
\end{align*}
\end{restatable}
\begin{proof}[Proof sketch]
We consider each branching path through the program separately.
The set of relevant traces for a given path is the preimage of an interval under a composition of interval separable functions, and hence can essentially be partitioned into boxes.
By boxwise continuity, we can refine this partition such that the weight function is continuous on each box.
To approximate the integral, we pass to a refined partition again, essentially computing Riemann sums.
The latter converge to the Riemann integral, which agrees with the Lebesgue integral under our conditions, as desired.
\end{proof}
For the lower bound, we can actually derive $\epsilon$-close bounds using only finitely many interval traces:
\begin{restatable}{corollary}{corollaryCompleteness}
Let $I \in \mathbb{I}$ and $\vdash P : \textbf{\textsf{R}}$ be as in \cref{thm:Completeness of interval approximations}.
There is a sequence of finite, pairwise compatible sets of interval traces $\mathcal{T}_1, \mathcal{T}_2, \ldots \subseteq \mathbb{T}_\mathbb{I}$ s.t.~\( \lim_{n \to \infty} \lowerBound P^{\mathcal{T}_n}(I) = \measureSem P(I). \)
\end{restatable}
For the upper bound, a restriction to finite sets $\mathcal{T}$ of interval traces is, in general, not possible:
if the weight function for a program is unbounded, it is also unbounded on some $\boldsymbol{t} \in \mathcal{T}$.
Then $\weightSem P^\mathbb{I}(\boldsymbol{t})$ is an infinite interval, implying $\upperBound P^\mathcal{T}(I) = \infty$ (see \ifFull{\cref{rem:countable-traces}} for details).
Despite the (theoretical) need for countably infinitely many interval traces, we can, in many cases, compute finite upper bounds by making use of an interval-based static approximation, formalised as a type system in the next section.
\section{Weight-aware Interval Type System}
\label{sec:5interval-analysis}
To obtain sound bounds on the denotation with only finitely many interval traces, we present an interval-based type system that can derive static bounds on a program.
Crucially, our type system is \emph{weight-aware}: we bound not only the return value of a program but also the weight of an execution.
Our analyzer GuBPI uses it for two purposes.
First, it allows us to derive upper bounds even for areas of the sample space not covered by interval traces.
Second, we can use our analysis to derive a \emph{finite} (and sound) approximation of the infinite number of symbolic execution paths of a program (more details in \cref{sec:6linear}).
Note that the bounds inferred by our system are \emph{interval bounds}, which allow for a seamless integration with interval trace semantics.
In this section we present the interval type system and sketch a constraint-based type inference method.
\begin{figure}[!t]
\footnotesize
\vspace{-0.3cm}
\begin{minipage}{0.2\columnwidth}
\vspace{6mm}
\begin{prooftree}
\def1pt{1pt}
\AxiomC{$x:\sigma \in \Gamma$}
\UnaryInfC{$\Gamma \vdash x: \exType{\sigma}{\mathbf{1}} $}
\end{prooftree}
\end{minipage}\hfill
\begin{minipage}{0.32\columnwidth}
\begin{prooftree}
\def1pt{1pt}
\def\hskip .15in{\hskip .1in}
\AxiomC{$\Gamma \vdash M : \mathcal{A}$}
\AxiomC{$\mathcal{A} \sqsubseteq_\mathcal{A} \mathcal{B}$}
\BinaryInfC{$\Gamma \vdash M : \mathcal{B}$}
\end{prooftree}
\end{minipage}%
\begin{minipage}{0.45\columnwidth}
\vspace{5mm}
\begin{prooftree}
\def1pt{1pt}
\AxiomC{$\Gamma; \varphi: \sigma \to \mathcal{A} ; x:\sigma \vdash M : \mathcal{A}$}
\UnaryInfC{$\Gamma \vdash \fixLam{\varphi}{x} M : \exType{\sigma \to \mathcal{A}}{\mathbf{1}} $}
\end{prooftree}
\end{minipage}
\vspace{0.0cm}
\begin{minipage}{0.3\columnwidth}
\vspace{8mm}
\begin{prooftree}
\def1pt{1pt}
\AxiomC{$\Gamma; x:\sigma \vdash M : \mathcal{A}$}
\UnaryInfC{$\Gamma \vdash \lambda x. M : \exType{\sigma \to \mathcal{A}}{\mathbf{1}} $}
\end{prooftree}
\end{minipage}%
\begin{minipage}{0.7\columnwidth}
\begin{prooftree}
\def1pt{1pt}
\def\hskip .15in{\hskip .15in}
\AxiomC{$\Gamma\vdash M : \exType{\sigma_1 \to \exType{\sigma_2}{\myint{e, f}}}{\myint{a, b}}$}
\AxiomC{$\Gamma \vdash N : \exType{\sigma_1}{\myint{c, d}}$}
\BinaryInfC{$\Gamma \vdash M N : \exType{\sigma_2}{\myint{a, b} \times^\mathbb{I} \myint{c, d} \times^\mathbb{I} \myint{e, f}}$ }
\end{prooftree}
\end{minipage}
\vspace{0.1cm}
\begin{minipage}{0.2\columnwidth}
\vspace{6.5mm}
\begin{prooftree}
\def1pt{1pt}
\AxiomC{}
\UnaryInfC{$\Gamma \vdash \lit{r} : \exType{\myint{r, r}}{\mathbf{1}} $}
\end{prooftree}
\end{minipage}%
\begin{minipage}{0.8\columnwidth}
\begin{prooftree}
\def\hskip .15in{\hskip .15in}
\def1pt{1pt}
\AxiomC{$\Gamma \vdash M : \exType{\myint{\_, \_}}{\myint{a, b}}$}
\AxiomC{$\Gamma \vdash N :\exType{ \sigma}{\myint{c, d}}$}
\AxiomC{$\Gamma \vdash P : \exType{\sigma}{\myint{c, d}}$}
\TrinaryInfC{$\Gamma \vdash \ifSimple M N P : \exType{\sigma}{\myint{a,b} \times^\mathbb{I} \myint{c, d}} $}
\end{prooftree}
\end{minipage}
\vspace{0.1cm}
\begin{minipage}{0.25\columnwidth}
\vspace{6.5mm}
\begin{prooftree}
\def1pt{1pt}
\AxiomC{}
\UnaryInfC{$\Gamma \vdash \mathsf{sample} : \exType{\myint{0, 1}}{\mathbf{1}} $}
\end{prooftree}
\end{minipage}%
\begin{minipage}{0.75\columnwidth}
\begin{prooftree}
\def1pt{1pt}
\AxiomC{$\Gamma \vdash M : \exType{\myint{a, b}}{\myint{c, d}}$}
\UnaryInfC{$\Gamma \vdash \mathsf{score}(M) : \exType{\myint{a, b} \sqcap \myint{0, \infty}}{\myint{c, d} \times^\mathbb{I} \big(\myint{a, b} \sqcap \myint{0, \infty}\big) } $}
\end{prooftree}
\end{minipage}
\vspace{0.1cm}
\begin{minipage}{1\columnwidth}
\begin{prooftree}
\def1pt{1pt}
\AxiomC{$\Gamma \vdash M_1 : \exType{\myint{a_1,b_1}}{\myint{c_1, d_1}}$}
\AxiomC{$\cdots$}
\AxiomC{$\Gamma \vdash M_{|f|} : \exType{\myint{a_{|f|}, b_{|f|}}}{\myint{c_{|f|}, d_{|f|}}}$}
\TrinaryInfC{$\Gamma \vdash f(M_1, \cdots, M_{|f|}) : \exType{f^\mathbb{I}(\myint{a_1,b_1}, \cdots, \myint{a_{|f|}, b_{|f|}})}{(\times^\mathbb{I})_{i=1}^{|f|} \myint{c_i, d_i}} $}
\end{prooftree}
\end{minipage}
%
\caption{Weight-aware interval type system for SPCF. We abbreviate $\mathbf{1} := [1, 1]$.} \label{fig:typeSystemSelection}
\end{figure}
\subsection{Interval Types}
We define interval types by the following grammar:
\begin{align*}
\sigma := I \mid \sigma \to \mathcal{A} \quad\quad \mathcal{A} := \exType{\sigma}{I}
\end{align*}%
where $I \in \mathbb{I}$ is an interval.
For readers familiar with refinement types, it is easiest to view the type $\sigma = I$ as the refinement type $\{x : \ensuremath{\mathbb R} \mid x \in I\}$.
The syntactic category $\mathcal{A}$, defined by mutual recursion with $\sigma$, additionally records a bound on the weight of the execution.
We call a type $\sigma$ \emph{weightless} and a type $\mathcal{A}$ \emph{weighted}.
The following examples should give some intuition for the types.
\begin{example}\label{ex:typeExample}
Consider the example term
\begin{align*}
\big(\fixLam{\varphi}{x} 5 \cdot x \oplus_{0.5} \mathit{sigm}(\varphi\, x + \mathsf{score} \,\mathsf{sample}) \big) (4 \cdot \mathsf{sample})
\end{align*}
where $\mathit{sigm} : \mathbb{R} \to [0, 1]$ is the sigmoid function.
In our type system, this term can be typed with the weighted type \scalebox{0.5}{$\exType{[0, 20]}{[0,1]}$}, which indicates that any terminating execution of the term reduces to a value (a number) within $[0, 20]$ and the weight of any such execution lies within $[0, 1]$.
\end{example}
\begin{example}\label{ex:pedestrianType}
We consider the fixpoint subexpression of the pedestrian example in \cref{ex:pedestrian}.
\begin{center}
\vspace{-2mm}
\scalebox{0.98}{\parbox{\linewidth}{
\begin{align*}
\mu^\varphi_x. \ifElse{x}{0}{\big(\lambda \mathit{step}. \mathit{step} + \varphi( (x \!+\! \mathit{step})\oplus_{0.5} (x \!-\! \mathit{step}) )\big) \mathsf{sample}}
\end{align*}
}}
\end{center}
%
Using the typing rules (defined below), we can infer the type
\scalebox{0.5}{$\exType{[a, b] \to \exType{[0, \infty]}{[1, 1]}}{[1,1]}$}
for any $a, b$.
This type indicates that any terminating execution reduces to a function value (of simple type $\textbf{\textsf{R}} \to \textbf{\textsf{R}}$) with weight within $[1, 1]$.
If this function value is then called on a value within $[a, b]$, any terminating execution reduces to a value within $[0, \infty]$ with a weight within $[1, 1]$.
\end{example}
\paragraph{Subtyping}
The partial order on intervals naturally extends to our type system.
For base types $I_1$ and $I_2$, we define $I_1 \sqsubseteq_\sigma I_2$ just if $I_1 \sqsubseteq I_2$, where $\sqsubseteq$ is interval inclusion.
We then extend this via:\\[-3mm]
\begin{minipage}{0.5\columnwidth}
\begin{prooftree}
\AxiomC{$\sigma_2 \sqsubseteq_\sigma \sigma_1$}
\AxiomC{$\mathcal{A}_1 \sqsubseteq_\mathcal{A} \mathcal{A}_2$}
\BinaryInfC{$\sigma_1 \to \mathcal{A}_1 \sqsubseteq_\sigma \sigma_2 \to \mathcal{A}_2$}
\end{prooftree}
\end{minipage}%
\begin{minipage}{0.5\columnwidth}
\vspace{5mm}
\begin{prooftree}
\AxiomC{$\sigma_1 \sqsubseteq_\sigma \sigma_2$}
\AxiomC{$I_1 \sqsubseteq I_2$}
\BinaryInfC{$\exType{\sigma_1}{I_1} \sqsubseteq_\mathcal{A} \exType{\sigma_2}{I_2}$}
\end{prooftree}
\end{minipage}
\vspace{2mm}
\noindent
Note that in the case of weighted types, the subtyping requires not only that the weightless types be subtype-related ($\sigma_1 \sqsubseteq_\sigma \sigma_2$) but also that the weight bounds be related by $I_1 \sqsubseteq I_2$.
It is easy to see that both $\sqsubseteq_\mathcal{A}$ and $\sqsubseteq_\sigma$ are partial orders on types with the same underlying base type.
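The subtyping rules can be read off directly as a recursive check. In the following Python sketch (an illustration; the encoding of types as tagged tuples is ours), weightless types are either intervals `('iv', (a, b))` or function types `('fun', sigma, A)`, and weighted types are pairs of a weightless type and a weight interval:

```python
def sub_iv(i1, i2):
    """I1 within I2: interval inclusion."""
    (a1, b1), (a2, b2) = i1, i2
    return a2 <= a1 and b1 <= b2

def sub_sigma(s1, s2):
    if s1[0] == 'iv' and s2[0] == 'iv':
        return sub_iv(s1[1], s2[1])
    if s1[0] == 'fun' and s2[0] == 'fun':
        # contravariant in the argument, covariant in the result
        return sub_sigma(s2[1], s1[1]) and sub_A(s1[2], s2[2])
    return False

def sub_A(A1, A2):
    (s1, w1), (s2, w2) = A1, A2
    return sub_sigma(s1, s2) and sub_iv(w1, w2)

# <[0,1] | [1,1]>  is a subtype of  <[0,2] | [0,2]>
assert sub_A((('iv', (0.0, 1.0)), (1.0, 1.0)),
             (('iv', (0.0, 2.0)), (0.0, 2.0)))
```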
\subsection{Type System}
As for the interval semantics, we assume that every primitive operation $f : \mathbb{R}^n \to \mathbb{R}$ has an over-approximating interval abstraction $f^\mathbb{I} : \mathbb{I}^n \to \mathbb{I}$ (cf.~\cref{sec:intervalArith}).
The typing judgments (\cref{fig:typeSystemSelection}) now have the form $\Gamma \vdash P : \mathcal{A}$ where $\Gamma$ is a typing context mapping variables to types $\sigma$.
Our system is sound in the following sense (which we here only state for first-order programs).
\begin{restatable}{theorem}{typeSystemSoundness}\label{thm:staticRes}
Let $\vdash P : \textbf{\textsf{R}}$ be a simply-typed program.
If $\vdash P : \text{\scalebox{0.7}{$\exType{\myint{a, b}}{\myint{c, d}}$}}$ and $\stdConf{P, \boldsymbol{s}, 1}\to^* \stdConf{\lit{r}, \langle\rangle, w}$ for some $\boldsymbol{s} \in \mathbb{T}$ and $r, w \in \ensuremath{\mathbb R}$, then $r \in \myint{a, b}$ and $w \in \myint{c, d}$.
\end{restatable}
Note that the bounds derived by our type system only refer to terminating executions, i.e.~they are partial correctness statements.
\Cref{thm:staticRes} formalises the intuition of an interval type, i.e.~every type derivation in our system bounds \emph{both} the returned value (in typical refinement-type fashion \cite{FreemanP91}) and the weight of this derivation.
Our type system also comes with a weak completeness statement: for each term, we can derive some bounds in our system.
\begin{restatable}{proposition}{typeSystemComp}
Let $P$ be a closed, simply-typed program.
There exists an $\mathcal{A}$ such that $\vdash P : \mathcal{A}$.\label{prop:compltnessType}
\end{restatable}
\subsection{Constraint-based Type Inference}
In this section, we briefly discuss the automated type \emph{inference} in our system, as needed in our tool GuBPI.
For space reasons, we restrict ourselves to an informal overview (see \ifFull{\cref{app:sec5}} for a full account).
Given a program $P$, we can derive the symbolic skeleton of a type derivation (the structure of which is determined by $P$), where each concrete interval is replaced by a placeholder variable.
The validity of a typing judgment within this skeleton can then be encoded as constraints.
Crucially, as we work in the fixed interval domain and the subtyping relation $\sqsubseteq_\mathcal{A}$ is compositional, these are simple constraints over the placeholder variables in the interval abstract domain.
Solving the resulting constraints na\"{i}vely might not terminate since the interval abstract domain contains infinite ascending chains.
Instead we approximate the least fixpoint (where the fixpoint denotes a solution to the constraints) using \defn{widening}, a standard approach to ensure termination of static analysis on domains with infinite chains \cite{CousotCousot76,CousotC77}.
This is computationally much cheaper compared to, say, types with general first-order refinements where constraints are typically phrased as constrained Horn clauses (see e.g.~\cite{ChampionCKS20}).
This gain in efficiency is crucial to making our GuBPI tool practical.
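To illustrate widening on intervals (the standard definition, not GuBPI's exact implementation): bounds that are unstable between iterations jump straight to $\pm\infty$, so the ascending Kleene iteration stabilises after finitely many steps.

```python
import math

def join(i, j):
    """Interval join (least upper bound)."""
    (a, b), (c, d) = i, j
    return (min(a, c), max(b, d))

def widen(i, j):
    """Widening: keep stable bounds, push unstable ones to +/- infinity."""
    (a, b), (c, d) = i, j
    return (a if a <= c else -math.inf, b if b >= d else math.inf)

# Fixpoint iteration for the constraint  X >= [0,0] join (X + [1,1]):
x = (0.0, 0.0)
while True:
    nxt = join((0.0, 0.0), (x[0] + 1.0, x[1] + 1.0))
    if nxt == x:
        break
    x = widen(x, nxt)
assert x == (0.0, math.inf)   # without widening, the iteration never terminates
```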
\section{GuBPI and Symbolic Execution}
\label{sec:6linear}
In this section, we describe the overall structure of our tool GuBPI (\href{https://gubpi-tool.github.io/}{gubpi-tool.github.io}), which builds upon symbolic execution.
We also outline how the interval-based semantics can be accelerated for programs containing certain linear subexpressions.
\subsection{Symbolic Execution}
The starting point of our analysis is a \emph{symbolic exploration} of the term in question \cite{MakOPW21,GeldenhuysDV12,ChagantyNR13}.
For space reasons we only give an informal overview of the approach.
A detailed and formal discussion can be found in \ifFull{\cref{app:sec-symbolic}}.
The idea of symbolic execution is to treat outcomes of $\mathsf{sample}$ expressions fully symbolically: each $\mathsf{sample}$ evaluates to a fresh variable (called \emph{sample variable} and denoted $\alpha_1, \alpha_2, \dots$).
This requires us to postpone the evaluation of primitive functions, branching, and the weighting with $\mathsf{score}$ expressions (as the value in question is symbolic).
The result of symbolic execution is thus a symbolic value (a term consisting of sample variables and delayed primitive function applications).
During execution, we explore both branches of a conditional and keep track of the (symbolic) conditions on the sample variables that need to hold in the current branch.
Similarly, we record the (symbolic) values of $\mathsf{score}$ expressions.
Formally, our symbolic execution operates on \emph{configurations} of the form $\psi = \symConf{\mathcal{M}, n, {\Delta, \Xi}}$ where $\mathcal{M}$ is a symbolic term containing sample variables instead of sample outcomes,
$n \in \mathbb{N}$ a natural number used to obtain fresh sample variables,
$\Delta$ a list of symbolic constraints of the form $\mathcal{V} \bowtie r$, where $\mathcal{V}$ is a symbolic value, $r \in \ensuremath{\mathbb R}$ and ${\bowtie} \in {\{\leq, <, >, \geq\}}$, to keep track of the conditions for the current execution path;
and $\Xi$ is a set of symbolic values that records the values of $\mathsf{score}$ expressions encountered along the current path.
Key reduction rules include the following.
\begin{align*}
&\symConf{\mathsf{sample}, n, \Delta, \Xi} \leadsto_\mathit{sym} \symConf{\alpha_{n+1}, n+1, \Delta, \Xi}\\
&\symConf{\ifElse{\mathcal{V}}{\mathcal{N}}{\mathcal{P}}, n, \Delta, \Xi} \leadsto_\mathit{sym} \symConf{\mathcal{N}, n, \Delta \cup \{\mathcal{V} \leq 0\}, \Xi}\\
&\symConf{\ifElse{\mathcal{V}}{\mathcal{N}}{\mathcal{P}}, n, \Delta, \Xi} \leadsto_\mathit{sym} \symConf{\mathcal{P}, n, \Delta \cup \{\mathcal{V} > 0\}, \Xi }\\
&\symConf{\mathsf{score}(\mathcal{V}), n, \Delta, \Xi} \leadsto_\mathit{sym} \symConf{\mathcal{V}, n, \Delta \cup \{\mathcal{V} \geq 0 \}, \Xi \cup \{\mathcal{V}\}}
\end{align*}%
That is, we replace sample outcomes with fresh variables (first rule), explore both branches of a conditional while tracking the necessary constraints (second and third rules), and record all score values (fourth rule).
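The rules above can be prototyped for a loop-free fragment (a sketch, not GuBPI's implementation): terms use Python closures for binding, symbolic values are strings over the sample variables, and `sym_exec` enumerates all symbolic paths $\langle \mathcal{V}, n, \Delta, \Xi \rangle$:

```python
def sym_exec(term, n=0, delta=(), xi=()):
    tag = term[0]
    if tag == 'ret':                    # reached a symbolic value: yield the path
        yield (term[1], n, delta, xi)
    elif tag == 'sample':               # ('sample', k): continuation k gets a fresh variable
        yield from sym_exec(term[1](f'a{n + 1}'), n + 1, delta, xi)
    elif tag == 'if':                   # explore both branches, record constraints
        _, v, N, P = term
        yield from sym_exec(N, n, delta + ((v, '<=', 0),), xi)
        yield from sym_exec(P, n, delta + ((v, '>', 0),), xi)
    elif tag == 'score':                # ('score', v, cont): record the weight
        _, v, cont = term
        yield from sym_exec(cont, n, delta + ((v, '>=', 0),), xi + (v,))

# if sample - 0.5 <= 0 then return the sample, else score it first
prog = ('sample', lambda a: ('if', f'{a} - 0.5',
                             ('ret', a),
                             ('score', a, ('ret', a))))
paths = list(sym_exec(prog))
assert len(paths) == 2   # one symbolic path per branch
```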
\begin{example}\label{ex:pedestrianSymExec}
Consider the symbolic execution of \cref{ex:pedestrian} where the first step moves to the left and the second step moves to the right.
We reach a symbolic configuration $(\mathcal{M}, 5, \Delta, \Xi)$ where $\mathcal{M}$ is
\begin{center}
\vspace{-2mm}
\scalebox{0.99}{\parbox{\linewidth}{
\begin{align*}
\mathsf{score} \big(\pdf_{\ensuremath{\mathrm{Normal}}(1.1, 0.1)}\big(\alpha_2\! + \!\alpha_4\! +\! (\mu^\varphi_x. \mathcal{N}) (3\alpha_1 \!-\! \alpha_2 \!+ \!\alpha_4)\big)\big) ; 3\alpha_1.
\end{align*}
}}
\end{center}
Here $\alpha_1$ is the initial sample for $\mathit{start}$; $\alpha_2, \alpha_4$ the two samples of $\mathit{step}$; and $\alpha_3, \alpha_5$ the samples involved in the $\oplus_{0.5}$ operator.
The fixpoint $\mu^\varphi_x. \mathcal{N}$ is already given in \cref{ex:pedestrianType}, $\Xi = \emptyset$ and $\Delta = \{3\alpha_1 > 0, \alpha_3 > \tfrac{1}{2}, 3\alpha_1 \!-\! \alpha_2 > 0, \alpha_5 \leq \tfrac{1}{2}\}$.
\end{example}
For a symbolic value $\mathcal{V}$ using sample variables $\overline{\alpha} = \alpha_1, \allowbreak \dots, \allowbreak \alpha_n$ and $\boldsymbol{s} \in [0,1]^n$ we write $\mathcal{V}[\boldsymbol{s}/\overline \alpha] \in \mathbb{R}$ for the substitution of concrete values in $\boldsymbol{s}$ for the sample variables.
Call a symbolic configuration of the form $\Psi = \symPath{\mathcal{V}, n,\Delta, \Xi}$ (i.e., a configuration that has reached a symbolic value) a \defn{symbolic path}.
We write $\mathit{symPaths}(\psi)$ for the (countable) set of symbolic paths reached when evaluating from configuration $\psi$.
Given a symbolic path $\Psi = \symPath{\mathcal{V}, n,\Delta, \Xi}$ and a set $U \in \Sigma_\mathbb{R}$, we define the denotation along $\Psi$, written $\llbracket \Psi \rrbracket (U)$, as\\
\scalebox{1}{\parbox{\linewidth}{
\begin{align*}
\int_{[0,1]^n} \!
\big[\mathcal{V}[\boldsymbol{s}/\overline \alpha] \in U\big] \!\!\! \prod_{\mathcal{C} \bowtie r \in \Delta} \!\!\!\! \big[\mathcal{C}[\boldsymbol{s}/\overline \alpha] \bowtie r\big]
\prod_{\mathcal{W} \in \Xi} \!\!\mathcal{W}[\boldsymbol{s}/\overline \alpha]
\D\boldsymbol{s}.
\end{align*}%
}}\\
I.e.~the integral of the product of the $\mathsf{score}$ weights in $\Xi$ over the traces of length $n$ for which the result value lies in $U$ and all constraints in $\Delta$ are satisfied.
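The denotation along a fixed path can be approximated numerically. The sketch below estimates $\llbracket \Psi \rrbracket(U)$ by Monte Carlo over traces $\boldsymbol{s} \in [0,1]^n$; the callables `value`, `constraints`, and `weights` are hypothetical stand-ins for $\mathcal{V}$, $\Delta$, and $\Xi$ of one symbolic path.

```python
import random

def path_denotation_mc(value, constraints, weights, n, in_U, samples=100_000):
    """Monte Carlo estimate of the integral defining [[Psi]](U)."""
    rng = random.Random(0)
    total = 0.0
    for _ in range(samples):
        s = [rng.random() for _ in range(n)]
        if not all(c(s) for c in constraints):
            continue  # a path constraint in Delta is violated
        if not in_U(value(s)):
            continue  # the result value lies outside U
        w = 1.0
        for wf in weights:  # product of the score weights in Xi
            w *= wf(s)
        total += w
    return total / samples
```

For the interval-free path with value $3\alpha_1$, constraint $3\alpha_1 > 0$, empty $\Xi$, and $U = [0, 1.5]$, this estimates $\tfrac{1}{2}$.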
We can recover the denotation of a program $P$ (as defined in \cref{sec:2background}) from all its symbolic paths starting in configuration $\symConf{P, 0, \emptyset, \emptyset}$.
\begin{restatable}{theorem}{symbolicExec}\label{thm:relation-semantics-symbolic}
For any term $\vdash P : \textbf{\textsf{R}}$ and $U \in \Sigma_\mathbb{R}$, we have
\begin{align*}
\textstyle\measureSem P(U) = \sum_{\Psi \in \mathit{symPaths}\symConf{P, 0, \emptyset, \emptyset}} \; \llbracket \Psi \rrbracket (U).
\end{align*}
\end{restatable}
Similarly to interval SPCF, we define \defn{symbolic interval terms} as symbolic terms that may contain intervals (and similarly for symbolic interval configurations and paths).
\begin{algorithm}[!t]
\begin{algorithmic}[1]
\State \textbf{Input:} $\vdash P : \textbf{\textsf{R}}$, depth limit $D \in \ensuremath{\mathbb N}$, and $U \in \mathbb{I}$
\State $S := \{ (P, 0, \emptyset, \emptyset) \}; d_{(P, 0, \emptyset, \emptyset)} := 0; T := \emptyset$
\While{$\exists \psi \in S$}
\If{$\psi$ has terminated}
\State $T := T \cup \{\psi\}; S := S \setminus \{\psi\}$
\ElsIf{$\psi$ contains no fixpoints or $d_\psi \leq D$}
\State $S := S \setminus \{\psi\}$
\For{$\psi'$ with $\psi \leadsto_\mathit{sym} \psi'$}
\State $S := S \cup \{\psi'\}$; $d_{\psi'} := d_\psi + 1$ \vspace{-1mm}
\EndFor
\Else
\State $S := (S \setminus \{\psi\}) \cup \{\mathit{approxFix}(\psi)\}$\label{line:approx}
\EndIf
\EndWhile
\State \textbf{return} $\big[\sum_{\Psi \in T} \; \llbracket \Psi \rrbracket_\mathit{lb}(U), \sum_{\Psi \in T} \; \llbracket \Psi \rrbracket_\mathit{ub}(U)\big]$\label{line:final}
\end{algorithmic}
\caption{Symbolic Analysis in GuBPI.}\label{alg:symRiBo}
\end{algorithm}
\subsection{GuBPI}
With symbolic execution at hand, we can outline the structure of our analysis framework GuBPI (sketched in \cref{alg:symRiBo}).
GuBPI's analysis begins with symbolic execution of the input term to accumulate a set of symbolic \emph{interval} paths $T$.
If a symbolic configuration $\psi$ has exceeded the user-defined depth limit $D$ and still contains a fixpoint, we overapproximate all paths that extend $\psi$ to ensure a finite set $T$.
We accomplish this by using the interval type system (\cref{sec:5interval-analysis}) to overapproximate all fixpoint subexpressions, thereby obtaining strongly normalizing terms (in line \ref{line:approx}).
Formally, given a configuration $\psi = \symConf{\mathcal{M}, n, \Delta, \Xi}$ we derive a typing judgment for the term $\mathcal{M}$ in the system in \cref{fig:typeSystemSelection}.
Any first-order fixpoint subterm is then typed with a (weightless) type of the form \scalebox{0.7}{$[a, b] \to \exType{[c, d]}{[e, f]}$}.
We replace this fixpoint with $\lambda \_.\big( \mathsf{score}([e, f])\mathbin{;} [c, d]\big)$.
We denote this operation on configurations by $\mathit{approxFix}(\psi)$ (it extends to higher-order fixpoints as expected).
Note that $\mathit{approxFix}(\psi)$ is a symbolic \emph{interval} configuration.
\begin{example}\label{ex:pedestrianSymbolicPath}
Consider the symbolic configuration given in \cref{ex:pedestrianSymExec}.
As in \cref{ex:pedestrianType} we infer the type of $\mu^\varphi_x. \mathcal{N}$ to be
\scalebox{0.65}{$[-1, 4] \to \exType{[0, \infty]}{[1, 1]}$}.
The function $\mathit{approxFix}$ replaces $\mu^\varphi_x. \mathcal{N}$ with $\lambda \_. \mathsf{score}([1, 1]); [0, \infty]$.
By evaluating the resulting symbolic interval configuration further, we obtain the symbolic interval path $\symPath{3\alpha_1, 5, \Delta, \Xi}$ where $\Delta$ is as in \cref{ex:pedestrianSymExec} and $\Xi = \{\pdf_{\ensuremath{\mathrm{Normal}}(1.1, 0.1)}(\alpha_2 + \alpha_4 + [0, \infty]) \}$.
Note that, in general, the further evaluation of $\mathit{approxFix}(\psi)$ can result in multiple symbolic paths.
\end{example}
Afterwards, we are left with a \emph{finite} set of symbolic interval paths $T$.
Due to the presence of intervals, we cannot define a denotation of symbolic interval paths directly and instead define lower and upper bounds.
For a symbolic interval value $\mathcal{V}$ that contains \emph{no} sample variables we define $\ulcorner \mathcal{V} \urcorner \subseteq \mathbb{R}$ as the set of all values that the term can evaluate to by replacing every interval $[a, b]$ with some value $r \in [a, b]$.
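The concretisation $\ulcorner \mathcal{V} \urcorner$ can be over-approximated bottom-up with standard interval arithmetic. The sketch below assumes a toy term representation (nested tuples) and covers only addition and multiplication; it is an illustration of the idea, not GuBPI's term language.

```python
def eval_interval(term):
    """Evaluate a variable-free symbolic interval value to an interval
    enclosing its concretisation. Terms are nested tuples:
    ('const', r) | ('iv', a, b) | ('add', t1, t2) | ('mul', t1, t2)."""
    kind = term[0]
    if kind == 'const':
        r = term[1]
        return (r, r)
    if kind == 'iv':
        return (term[1], term[2])
    (a1, a2), (b1, b2) = eval_interval(term[1]), eval_interval(term[2])
    if kind == 'add':
        return (a1 + b1, a2 + b2)
    if kind == 'mul':
        # product interval: extrema are attained at endpoint products
        prods = [a1 * b1, a1 * b2, a2 * b1, a2 * b2]
        return (min(prods), max(prods))
    raise ValueError(f"unknown term kind: {kind}")
```

For example, the term $1 + [0, 2]$ evaluates to the interval $[1, 3]$.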
We define $\llbracket \Psi \rrbracket_\mathit{lb}(U)$ by considering only those concrete traces that fulfill the constraints in $\Psi$ for \emph{all} concrete values within intervals, taking the infimum over all scoring expressions:\\
\scalebox{0.89}{\parbox{\linewidth}{
\begin{align*}
\int\!\!\!
\big[\ulcorner\mathcal{V}[\boldsymbol{s}/\overline \alpha]\urcorner\subseteq U\big] \!\!\!\!\prod_{\mathcal{C} \bowtie r \in \Delta} \!\!\!\!\big[\forall t \!\in\! \ulcorner\mathcal{C}[\boldsymbol{s}/\overline \alpha]\urcorner. t\bowtie r\big] \!\!
\prod_{\mathcal{W} \in \Xi} \!\!\inf \ulcorner\mathcal{W}[\boldsymbol{s}/\overline \alpha]\urcorner
\D\boldsymbol{s}.
\end{align*}
}}\\
Similarly we define $\llbracket \Psi \rrbracket_\mathit{ub}(U)$ as\\
\scalebox{0.83}{\parbox{\linewidth}{
\begin{align*}
\int \!\!\!
\big[\ulcorner\mathcal{V}[\boldsymbol{s}/\overline \alpha]\urcorner\cap U \neq \emptyset \big] \!\!\!\!\prod_{\mathcal{C} \bowtie r \in \Delta} \!\!\!\!\big[\exists t \!\in\! \ulcorner\mathcal{C}[\boldsymbol{s}/\overline \alpha]\urcorner.t \bowtie r\big] \!\!
\prod_{\mathcal{W} \in \Xi} \!\!\sup \ulcorner\mathcal{W}[\boldsymbol{s}/\overline \alpha]\urcorner
\D\boldsymbol{s}.
\end{align*}
}}\\
We note that if $\Psi$ contains no intervals, then $\llbracket \Psi \rrbracket$ is defined and we have $\llbracket \Psi \rrbracket_\mathit{lb} = \llbracket \Psi\rrbracket_\mathit{ub} = \llbracket \Psi \rrbracket$.
We can now state the following theorem that formalizes the observation that $\mathit{approxFix}(\psi)$ soundly approximates all symbolic paths that result from $\psi$.
\begin{theorem}\label{thm:lowerUpperSymbolicDenotation}
Let $\psi$ be a symbolic (interval-free) configuration and $U \in \Sigma_{\ensuremath{\mathbb R}}$.
Define $A = \mathit{symPaths}(\psi)$ as the (possibly infinite) set of all symbolic paths reached when evaluating $\psi$ and $B = \mathit{symPaths}(\mathit{approxFix}(\psi))$ as the (finite) set of symbolic \emph{interval} paths reached when evaluating $\mathit{approxFix}(\psi)$.
Then
\begin{align*}
\textstyle\sum_{\Psi \in B} \; \llbracket \Psi \rrbracket_\mathit{lb}(U) \leq \sum_{\Psi \in A} \; \llbracket \Psi \rrbracket(U) \leq \sum_{\Psi \in B} \; \llbracket \Psi \rrbracket_\mathit{ub}(U).
\end{align*}
\end{theorem}
The correctness of \cref{alg:symRiBo} is then a direct consequence of \cref{thm:relation-semantics-symbolic,thm:lowerUpperSymbolicDenotation}.
\begin{corollary}
Let $T$ be the set of symbolic interval paths computed at line \ref{line:final} of \cref{alg:symRiBo} and $U \in \Sigma_{\ensuremath{\mathbb R}}$.
Then
\begin{align*}
\textstyle\sum_{\Psi \in T} \; \llbracket \Psi \rrbracket_\mathit{lb}(U) \leq \llbracket P \rrbracket(U) \leq \sum_{\Psi \in T} \; \llbracket \Psi \rrbracket_\mathit{ub}(U).
\end{align*}
\end{corollary}
What remains is to compute the bounds $\llbracket \Psi \rrbracket_\mathit{lb}(U)$ and $\llbracket \Psi \rrbracket_\mathit{ub}(U)$ for a symbolic interval path $\Psi \in T$.
We first present the standard interval trace semantics (\cref{sec:standardInt}) and then a more efficient analysis for the case that $\Psi$ contains only linear functions (\cref{sec:linOpt}).
\subsection{Standard Interval Trace Semantics}
\label{sec:standardInt}
For paths containing nonlinear functions, we employ the semantics as introduced in \cref{sec:3intervals}.
Instead of applying interval traces to the entire program, we can restrict to the current symbolic interval path $\Psi$ (intuitively, by adding a $\mathsf{score}(0)$ to all other program paths).
The interval traces split the domain of each sample variable in $\Psi$ into intervals.
It is easy to see that for any pairwise disjoint and exhaustive set of interval traces $\mathcal{T}$, we have $\lowerBound \Psi^\mathcal{T} (U) \leq \llbracket \Psi \rrbracket_\mathit{lb}(U)$ and $\llbracket \Psi \rrbracket_\mathit{ub}(U) \leq \upperBound \Psi^\mathcal{T} (U)$ (see \cref{thm:lowerBoundsSound,thm:upperBoundsSound}).
Applying the interval-based semantics at the level of symbolic interval paths maintains its attractive features, namely soundness and completeness (relative to the current path).
Note that the intervals occurring in $\Psi$ seamlessly integrate with our already interval-based semantics.
\subsection{Linear Interval Trace Semantics}
\label{sec:linOpt}
In case the score values and the guards of all conditionals are linear, we can improve and speed up the interval-based semantics.
Assume all symbolic interval values appearing in $\Psi$ are interval linear functions of $\overline{\alpha} \in \ensuremath{\mathbb R}^n$ into $\mathbb{I}$ (i.e.~$\overline{\alpha} \mapsto \mathbf{w}^\intercal \overline{\alpha} +^\mathbb{I} [a, b]$ for some $\mathbf{w} \in \ensuremath{\mathbb R}^n$ and $[a, b] \in \mathbb{I}$).
We assume, for now, that each symbolic value $\mathcal{W} \in \Xi$ denotes an interval-free linear function (i.e.~a function $\overline{\alpha} \mapsto \mathbf{w}^\intercal \overline{\alpha} + r$).
Fix some $U \in \mathbb{I}$.
We first note that both $\llbracket \Psi \rrbracket_\mathit{lb}(U)$ and $\llbracket \Psi \rrbracket_\mathit{ub}(U)$ are the integral of a polynomial over a convex polytope:
define\\
\scalebox{1}{\parbox{\linewidth}{
\begin{align*}
\mathfrak{P}_\mathit{lb} \! := \! \big\{\boldsymbol{s} \in \mathbb{R}^n \mid \! \ulcorner\mathcal{V}[\boldsymbol{s}/\overline \alpha]\urcorner \!\subseteq \!U \land \!\!\!\! \bigwedge_{\mathcal{C}\bowtie r \in \Delta}\!\!\!\! \forall t \! \in \!\ulcorner\mathcal{C}[\boldsymbol{s}/\overline \alpha]\urcorner. t \bowtie r \big\}
\end{align*}
}}\\
which is a polytope.%
\footnote{For example, if $\mathcal{C}$ denotes the function $\overline{\alpha} \mapsto \mathbf{w}^\intercal \overline{\alpha} + [a, b]$ we can transform a constraint $\forall t \! \in \!\ulcorner\mathcal{C}[\boldsymbol{s}/\overline \alpha]\urcorner. t \leq r $ into the linear constraint $\mathbf{w}^\intercal \boldsymbol{s} + b \leq r$.}
Then $\llbracket \Psi \rrbracket_\mathit{lb}(U)$ is the integral of the polynomial $\boldsymbol{s} \mapsto \prod_{\mathcal{W} \in \Xi} \mathcal{W}[\boldsymbol{s}/\overline \alpha]$ over $\mathfrak{P}_\mathit{lb}$.
The definition of $\mathfrak{P}_\mathit{ub}$ (the area of integration for $\llbracket \Psi \rrbracket_\mathit{ub}(U)$) is similar.
Such integrals can be computed exactly \cite{baldoni2011integrate}, e.g. with the LattE tool \cite{de2013software}.
Unfortunately, our experiments showed that this does not scale to interesting probabilistic programs.
Instead, we derive guaranteed bounds on the denotation by means of iterated volume computations.
This has the additional benefit that we can handle non-uniform samples and non-linear expressions in $\Xi$.
We follow an approach similar to that of the interval-based semantics in \cref{sec:4intervals-theory} but do not split/bound \emph{individual sample variables} and instead directly bound \emph{linear functions} over the sample variables.
Let $\Xi = \{\mathcal{W}_1, \cdots, \mathcal{W}_k\}$.
We define a \defn{box} (by abuse of language) as an element ${\boldsymbol{t}} = \langle [a_1,b_1], \cdots, [a_k, b_k] \rangle$, where $[a_i, b_i]$ gives a bound on $\mathcal{W}_i$.%
\footnote{Note the similarity to the interval trace semantics. While the $i$th position in an interval trace bounds the value of the $i$th sample variable, the $i$th entry of a box bounds the $i$th score value.}
We define $\mathit{lb}({\boldsymbol{t}}) := \prod_{i=1}^{k} a_i$ and $\mathit{ub}({\boldsymbol{t}}) := \prod_{i=1}^{k} b_i$.
The box ${\boldsymbol{t}}$ naturally defines a subset of $\mathfrak{P}_\mathit{lb}$ given by $\mathfrak{P}_\mathit{lb}^{\boldsymbol{t}} = \big\{\boldsymbol{s} \in \mathfrak{P}_\mathit{lb} \mid \bigwedge_{i=1}^k \mathcal{W}_i[\boldsymbol{s}/\overline \alpha] \in [a_i, b_i] \big\}$.
Then $\mathfrak{P}_\mathit{lb}^{\boldsymbol{t}}$ is again a polytope and we write $\volume(\mathfrak{P}_\mathit{lb}^{\boldsymbol{t}})$ for its volume.
The definition of $\mathfrak{P}_\mathit{ub}^{\boldsymbol{t}}$ and $\volume(\mathfrak{P}_\mathit{ub}^{\boldsymbol{t}})$ is analogous.
As for interval traces, we call two boxes ${\boldsymbol{t}}_1$, ${\boldsymbol{t}}_2$ \emph{compatible} if the intervals are almost disjoint in at least one position and a set of boxes $B$ \emph{exhaustive} if $\bigcup_{{\boldsymbol{t}} \in B} \mathfrak{P}_\mathit{lb}^{\boldsymbol{t}} = \mathfrak{P}_\mathit{lb}$ and $\bigcup_{{\boldsymbol{t}} \in B} \mathfrak{P}_\mathit{ub}^{\boldsymbol{t}} = \mathfrak{P}_\mathit{ub}$ (cf.~\cref{sec:boundsFromIntervalTraces}).
\begin{proposition}\label{prop:linearSplitting}
Let $B$ be a pairwise compatible, exhaustive set of boxes.
Then $\sum_{{\boldsymbol{t}} \in B} \, \mathit{lb}({\boldsymbol{t}}) \cdot \volume\big(\mathfrak{P}_\mathit{lb}^{\boldsymbol{t}}\big) \leq \llbracket \Psi \rrbracket_\mathit{lb}(U)$ and $ \llbracket \Psi \rrbracket_\mathit{ub}(U) \leq \sum_{{\boldsymbol{t}} \in B} \, \mathit{ub}({\boldsymbol{t}}) \cdot \volume\big(\mathfrak{P}_\mathit{ub}^{\boldsymbol{t}}\big)$.
\end{proposition}
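A small worked instance of the proposition, for a hypothetical path with two uniform samples, a single score expression $\mathcal{W}(\boldsymbol{s}) = s_1 + s_2$, empty $\Delta$, and $U = \mathbb{R}$; the exact denotation is $\int_{[0,1]^2} (s_1 + s_2)\,\mathrm{d}\boldsymbol{s} = 1$, and the sub-polytope volumes are computed by hand.

```python
# Two 1-dimensional boxes bounding W(s) = s1 + s2 in [0,1] and [1,2].
# Each region {s in [0,1]^2 : W(s) in chunk} is a half-square of volume 1/2.
boxes = [(0.0, 1.0), (1.0, 2.0)]
volume = {(0.0, 1.0): 0.5, (1.0, 2.0): 0.5}

lower = sum(lo * volume[(lo, hi)] for (lo, hi) in boxes)  # sum lb(t) * vol
upper = sum(hi * volume[(lo, hi)] for (lo, hi) in boxes)  # sum ub(t) * vol
assert lower <= 1.0 <= upper  # the exact value 1 lies between the bounds
```

Here the bounds are $[0.5, 1.5]$; refining the boxes (e.g.\ four chunks of width $\tfrac{1}{2}$) tightens them around the exact value $1$.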
As in the standard interval semantics, a finer partition into boxes yields more precise bounds.
While the volume computation involved in \cref{prop:linearSplitting} is expensive \cite{DyerF88}, the number of splits on the linear functions is much smaller than that needed in the standard interval-based semantics.
Our experiments empirically demonstrate that the direct splitting of linear functions (if applicable) is superior to the standard splitting.
In GuBPI we compute a set of exhaustive boxes by first computing a lower and upper bound on each $\mathcal{W}_i \in \Xi$ over $\mathfrak{P}_\mathit{lb}$ (or $\mathfrak{P}_\mathit{ub}$) by solving a linear program (LP) and then splitting the resulting range into evenly sized chunks.
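As a simplified sketch of this box construction, the code below bounds a linear expression $\boldsymbol{s} \mapsto \mathbf{w}^\intercal \boldsymbol{s} + c$ over the full trace domain $[0,1]^n$ coordinate-wise (an assumption made for self-containedness; GuBPI instead solves an LP over the polytope) and splits the resulting range into evenly sized chunks.

```python
def linear_range(w, c):
    """Min/max of s -> w . s + c over the box [0,1]^n (coordinate-wise:
    each s_i is pushed to 0 or 1 depending on the sign of w_i)."""
    lo = c + sum(min(wi, 0.0) for wi in w)
    hi = c + sum(max(wi, 0.0) for wi in w)
    return lo, hi

def split_range(lo, hi, k):
    """Split [lo, hi] into k almost-disjoint, evenly sized chunks."""
    step = (hi - lo) / k
    return [(lo + i * step, lo + (i + 1) * step) for i in range(k)]
```

For $\mathbf{w} = (1, -2)$ and $c = 0.5$ the range over $[0,1]^2$ is $[-1.5, 1.5]$; splitting it yields the interval entries of the boxes.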
\paragraph{Beyond uniform samples and linear scores}
We can extend our linear optimization to non-uniform samples and arbitrary symbolic values in $\Xi$.
We accomplish the former by \emph{combining} the optimized semantics (where we bound linear expressions) with the standard interval-trace semantics (where we bound individual sample variables).
For the latter, we can identify linear sub-expressions of the expressions in $\Xi$, use boxes to bound each linear sub-expression, and use interval arithmetic to infer bounds on the entire expression from bounds on its linear sub-expressions.
More details can be found in \ifFull{the appendix}.
\begin{example}
Consider the path $\Psi = \symPath{3\alpha_1, 5, \Delta, \Xi}$ derived in \cref{ex:pedestrianSymbolicPath}.
We use 1-dimensional boxes to bound $\alpha_2 + \alpha_4$ (the single linear sub-expression of the symbolic values in $\Xi$).
To obtain a lower bound on $\llbracket \Psi \rrbracket_\mathit{lb}(U)$, we sum over all boxes ${\boldsymbol{t}} = \langle [a_1, b_1] \rangle$ and take the product of $\volume\big(\mathfrak{P}_\mathit{lb}^{\boldsymbol{t}}\big)$ with the lower interval bound of $\pdf_{\ensuremath{\mathrm{Normal}}(1.1, 0.1)}([a_1,b_1] + [0, \infty])$ (evaluated in interval arithmetic).
Analogously, for the upper bound we take the product of $\volume\big(\mathfrak{P}_\mathit{ub}^{\boldsymbol{t}}\big)$ with the upper interval bound of $\pdf_{\ensuremath{\mathrm{Normal}}(1.1, 0.1)}([a_1,b_1]+ [0, \infty])$.
\end{example}
\section{Practical Evaluation}
\label{sec:7practical-evaluation}
\begin{table}[!t]
\caption{GuBPI evaluation on selected benchmarks from \cite{SankaranarayananCG13}. We give the times (in seconds) and bounds computed by \cite{SankaranarayananCG13} and GuBPI. Details on the exact queries (the Q column) can be found in \ifFull{\cref{fig:pldi13resultsApp} in the Appendix}. }
\label{fig:pldi13results}
\vspace{-3mm}
\setlength\dashlinedash{0.5pt}
\setlength\dashlinegap{1pt}
\setlength\arrayrulewidth{0.5pt}
\centering
\footnotesize
\def1.1{1.2}
\begin{tabular}{llcccc}
\toprule
\multicolumn{2}{@{}c@{\hspace{0mm}}}{} &\multicolumn{2}{@{}c@{}}{\textbf{Tool from \cite{SankaranarayananCG13}}} & \multicolumn{2}{@{}c@{\hspace{4mm}}}{\textbf{GuBPI}}\\
\cmidrule[0.7pt](l{1mm}r{2mm}){3-4}
\cmidrule[0.7pt](l{1mm}){5-6}
\textbf{Program}& \textbf{Q}& $\boldsymbol{t}$ & \textbf{Result} & $\boldsymbol{t}$ & \textbf{Result} \\
\cmidrule[0.7pt](r{1mm}){1-1}
\cmidrule[0.7pt](l{1mm}r{2mm}){2-2}
\cmidrule[0.7pt](l{1mm}r{1mm}){3-3}
\cmidrule[0.7pt](l{1mm}r{2mm}){4-4}
\cmidrule[0.7pt](l{1mm}r{1mm}){5-5}
\cmidrule[0.7pt](l{1mm}r{0mm}){6-6}
tug-of-war & Q1 & 1.29 & $[0.6126, 0.6227]$ & 1.41 & $[0.6134, 0.6135]$ \\
\hdashline
tug-of-war & Q2 & 1.09 & $[0.5973, 0.6266]$ & 1.36 & $[0.6134, 0.6135]$ \\
\hdashline
beauquier-3 & Q1 & 1.15 & $[0.5000, 0.5261]$ & 18.8 & $[0.4999, 0.5001]$ \\
\hdashline
ex-book-s & Q1 & 8.48 & $[0.6633, 0.7234]$ & 1.65 & $[0.7417, 0.7418]$ \\
\hdashline
ex-book-s \!\!\!& Q2$^\star$ \!\!\! & 10.3 & $[0.3365, 0.3848]$ & 8.31 & $[0.4137, 0.4138]$ \\
\hdashline
ex-cart & Q1 & 2.41 & $[0.8980, 1.1573]$ & 1.87 & $[0.9999, 1.0001]$ \\
\hdashline
ex-cart & Q2 & 2.40 & $[0.8897, 1.1573]$ & 10.4 & $[0.9999, 1.0001]$ \\
\hdashline
ex-cart & Q3 & 0.15 & $[0.0000, 0.1150]$ & 75.4 & $[0.0000, 0.0001]$ \\
\hdashline
ex-ckd-epi-s\!\!\! & Q1$^\star$ \!\!\! & 0.15 & $[0.5515, 0.5632]$ & 2.01 & $[0.0003, 0.0004]$ \\
\hdashline
ex-ckd-epi-s\!\!\! & Q2$^\star$ \!\!\! & 0.08 & $[0.3019, 0.3149]$ & 2.26 & $[0.0003, 0.0004]$ \\
\hdashline
ex-fig6 & Q1 & 1.31 & $[0.1619, 0.7956]$ & 19.5 & $[0.1899, 0.1903]$ \\
\hdashline
ex-fig6 & Q2 & 1.80 & $[0.2916, 1.0571]$ & 19.7 & $[0.3705, 0.3720]$ \\
\hdashline
ex-fig6 & Q3 & 1.51 & $[0.4314, 2.0155]$ & 23.2 & $[0.7438, 0.7668]$ \\
\hdashline
ex-fig6 & Q4 & 3.96 & $[0.4400, 3.0956]$ & 26.6 & $[0.8682, 0.9666]$ \\
\hdashline
ex-fig7 & Q1 & 0.04& $[0.9921, 1.0000]$ & 0.81 & $[0.9980, 0.9981]$ \\
\hdashline
example4 & Q1 & 0.02 & $[0.1910, 0.1966]$ & 0.52 & $[0.1918, 0.1919]$ \\
\hdashline
example5 & Q1 & 0.06 & $[0.4478, 0.4708]$ & 0.49 & $[0.4540, 0.4541]$ \\
\hdashline
herman-3 & Q1 & 0.47 & $[0.3750, 0.4091]$ & 110 & $[0.3749, 0.3751]$ \\
\bottomrule
\end{tabular}
\end{table}
We have implemented our semantics in the prototype \href{https://gubpi-tool.github.io/}{GuBPI}, written in F\#.
In cases where we apply the linear optimisation of our semantics we use Vinci \cite{BuelerEF00} to discharge volume computations of convex polytopes.
We set out to answer the following questions:
\begin{enumerate}
\item How does GuBPI perform on instances that could already be solved (e.g. by PSI \cite{GehrMV16})?
\item Is GuBPI able to infer useful bounds on recursive programs that could not be handled rigorously before?
\end{enumerate}
\subsection{Probability Estimation}
\label{sec:evalEsti}
We collected a suite of 18 benchmarks from \cite{SankaranarayananCG13}.
Each benchmark consists of a program $P$ and a query $\varphi$ over the variables of $P$.
We bound the probability of the event described by $\varphi$ using the tool from \cite{SankaranarayananCG13} and GuBPI (\cref{fig:pldi13results}).
While our tool is generally slower than that of \cite{SankaranarayananCG13}, the completion times are still reasonable.
Moreover, in many cases, the bounds returned by GuBPI are tighter than those of \cite{SankaranarayananCG13}.
In addition, for benchmarks marked with a $\star$, the two pairs of bounds contradict each other.%
\footnote{A stochastic simulation using $10^6$ samples in Anglican \cite{TolpinMW15} yielded results that fall within GuBPI's bounds but violate those computed by \cite{SankaranarayananCG13}.}
We should also remark that GuBPI cannot handle all benchmarks proposed in \cite{SankaranarayananCG13} because of the heavy use of conditionals, causing our precise symbolic analysis to suffer from the well-documented path explosion problem \cite{BoonstoppelCE08,Godefroid07,CadarGPDE08}.
Perhaps unsurprisingly, \cite{SankaranarayananCG13} can handle those examples much better, as one of their core contributions is a stochastic method to reduce the number of paths considered (see \cref{sec:related}).
Also note that \cite{SankaranarayananCG13} is restricted to uniform samples, linear guards and score-free programs, whereas we tackle a much more general problem.
\begin{table}
\caption{Probabilistic programs with discrete domains from PSI \cite{GehrMV16}. The times are given in seconds. }\label{tab:psi-discrete}
\vspace{-3mm}
\small
\def1.1{1.1}
\begin{tabular}{lcclcc}
\cmidrule[1pt](r{1mm}){1-3}
\cmidrule[1pt](l{1mm}){4-6}
\textbf{Instance} & $\boldsymbol{t}_\mathit{PSI}$ & $\boldsymbol{t}_\mathit{GuBPI}$ & \textbf{Instance} & $\boldsymbol{t}_\mathit{PSI}$ & $\boldsymbol{t}_\mathit{GuBPI}$ \\
\cmidrule[0.7pt](r{0.5mm}){1-1}
\cmidrule[0.7pt](lr{0.5mm}){2-2}
\cmidrule[0.7pt](l{1mm}r{1mm}){3-3}
\cmidrule[0.7pt](l{1mm}r{0.5mm}){4-4}
\cmidrule[0.7pt](lr{0.5mm}){5-5}
\cmidrule[0.7pt](l{1mm}){6-6}
burglarAlarm \!\!\!\!& 0.06 & 0.22 & coins & 0.01 & 0.18 \\
twoCoins & 0.02 & 0.18 & ev-model1 & 0.02 & 0.17 \\
grass & 0.09 & 0.29 & ev-model2 & 0.02 & 0.18 \\
noisyOr & 0.22 & 0.70 & murderMystery \!\!\!\! & 0.03 & 0.18 \\
bertrand & 0.04 & 0.17 & coinBiasSmall & 0.22 & 1.92 \\
coinPattern & 0.03 & 0.19 & gossip & 0.10 & 0.17 \\
\cmidrule[1pt](r{1mm}){1-3}
\cmidrule[1pt](l{1mm}){4-6}
\end{tabular}
\end{table}
\begin{figure}
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{images/coinBias-bounds.pdf}
\vspace{-6mm}
\subcaption{\texttt{coinBias} example from \cite{GehrMV16}. The program samples a beta prior on the bias of a coin and observes repeated coin flips (16.5 seconds).}
\end{subfigure}\hfill%
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{images/max-bounds.pdf}
\vspace{-6mm}
\subcaption{\texttt{max} example from \cite{GehrMV16}. The program computes the maximum of two i.i.d. normal samples (31.8 seconds).}
\end{subfigure}
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{images/binary-gmm-bounds-with-samples.pdf}
\vspace{-6mm}
\subcaption{Gaussian Mixture Model from \cite{ZhouGKRYW19} (39 seconds). MCMC methods (here Anglican's LMH) usually find only one mode.}\label{fig:gmm}
\end{subfigure}\hfill%
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{images/neals-funnel-with-samples.pdf}
\vspace{-6mm}
\subcaption{Neal's funnel from \cite{Neal03,GorinovaMH20} (2.8 seconds). HMC misses some of the probability mass around 0.}
\end{subfigure}%
\caption{Guaranteed Bounds computed by GuBPI for a selection of non-recursive models from \cite{GehrMV16,GehrSV20,ZhouGKRYW19, Neal03}. }\label{fig:psi-examples}
\end{figure}
\begin{figure*}[!t]
\begin{subfigure}[t]{0.30\textwidth}
\centering
\includegraphics[width=\textwidth]{images/cav-example-7-bounds.pdf}
\vspace{-7mm}
\subcaption{\texttt{cav-example-7} example from the PSI repository. PSI bounds the depth, resulting in a spike at 10, whereas GuBPI can compute bounds on the denotation of the unbounded program (104 seconds).}\label{fig:psi-bounded-1}
\end{subfigure}\hfill%
\begin{subfigure}[t]{0.30\textwidth}
\centering
\includegraphics[width=\textwidth]{images/cav-example-5-bounds.pdf}
\vspace{-7mm}
\subcaption{\texttt{cav-example-5} example from the PSI repository. This program is included in the PSI repository but cannot be handled by PSI due to the unbounded loops (156 seconds).}\label{fig:psi-bounded-2}
\end{subfigure}\hfill%
\begin{subfigure}[t]{0.30\textwidth}
\centering
\includegraphics[width=\textwidth]{images/add-uniform-with-counter-large-bounds.pdf}
\vspace{-7mm}
\subcaption{\texttt{add\_uniform\_with\_counter\_large} example from the PSI repository where GuBPI can handle the unbounded loop (21 seconds).}\label{fig:adduniforms}
\end{subfigure}
\begin{subfigure}[t]{0.30\textwidth}
\centering
\includegraphics[width=\textwidth]{images/random-box-walk-bounds.pdf}
\vspace{-7mm}
\subcaption{\texttt{random-box-walk} models the cumulative distance traveled by a biased random walk.
If a uniformly sampled step $s$ has length less than $\tfrac{1}{2}$, we move $s$ to the left; otherwise, $s$ to the right.
The walk stops when it crosses a threshold (153 seconds).}\label{fig:extra-recursive1}
\end{subfigure}\hfill%
\begin{subfigure}[t]{0.30\textwidth}
\centering
\includegraphics[width=\textwidth]{images/growing-walk-bounds.pdf}
\vspace{-7mm}
\subcaption{\texttt{growing-walk}. The program models a geometric random walk where (with increasing distance) the step size of the walk is increased.
The cumulative distance is observed from a normal distribution centered at 3 (61 seconds).}\label{fig:extra-recursive2}
\end{subfigure}\hfill%
\begin{subfigure}[t]{0.30\textwidth}
\centering
\includegraphics[width=\textwidth]{images/param-estimation-recursive.pdf}
\vspace{-7mm}
\subcaption{\texttt{param-estimation-recursive}. We sample a uniform prior $p$ and (in each step) travel to the left with probability $p$ and to the right with probability $(1-p)$. We observe the walk to come to a halt at location $1$ (observed from a normal) and wish to find the posterior on $p$ (172 seconds).
}\label{fig:extra-recursive3}
\end{subfigure}
\vspace{-1mm}
\caption{Guaranteed bounds computed by GuBPI for a selection of recursive models.}
\label{fig:recursive-models}
\end{figure*}
\subsection{Exact Inference}
To evaluate our tool on instances that can be solved exactly, we compared with PSI \cite{GehrMV16,GehrSV20} which can, in certain cases, compute a closed-form solution of the posterior.
We note that whenever exact inference is possible, exact solutions will always be superior to mere bounds; and, due to the overhead of our semantics, exact solutions will often be found faster.
Because of the different output format (i.e.~exact results vs.~bounds), a direct comparison between exact methods and GuBPI is challenging.
As a consistency check, we collected benchmarks from the PSI repository where the output domain is finite and GuBPI can therefore compute \emph{exact} results (tight bounds).
They agree with PSI in all cases, which includes 8 of the 21 benchmarks from \cite{GehrMV16}.
We report the computation times in \cref{tab:psi-discrete}.
We then considered examples where GuBPI computes non-tight bounds.
For space reasons, we can only include a selection of examples in this paper.
The bounds computed by GuBPI and a short description of each example are shown in \cref{fig:psi-examples}.
We can see that, despite the relatively loose bounds, they are still useful and provide the user with a rough---and most importantly, \emph{guaranteed to be correct}---idea of the denotation.
The success of exact solvers such as PSI depends on the underlying symbolic solver (and the optimisations implemented).
Consequently, there are instances where the symbolic solver cannot compute a closed-form (integral-free) solution.
Conversely, while our method is (theoretically) applicable to a very broad class of programs, there exist programs where the symbolic solver finds solutions but the analysis in GuBPI becomes infeasible due to the large number of interval traces required.
\subsection{Recursive Models}
We then evaluated our tool on complex models that \emph{cannot} be handled by any of the existing methods.
For space reasons, we only give an overview of some examples.
Unexpectedly, we found recursive models in the PSI repository:
several examples were created by unrolling unbounded loops to a fixed depth.
This fixed unrolling changes the posterior of the model.
Using our method we can handle those examples \emph{without} bounding the loop.
Three such examples are shown in \cref{fig:psi-bounded-1,fig:psi-bounded-2,fig:adduniforms}. In \cref{fig:psi-bounded-1}, PSI bounds the iterations, resulting in a spike at 10 (the unrolling bound).
For \cref{fig:psi-bounded-2}, PSI does not provide any solution whereas GuBPI provides useful bounds.
For \cref{fig:adduniforms}, PSI bounds the loop to compute results (displayed in blue) whereas GuBPI computes the green bounds on the unbounded program.
It is obvious that the bounds differ significantly, highlighting the impact that unrolling to a fixed depth can have on the denotation.
This again strengthens the claim that rigorous methods that can handle unbounded loops are needed.
There also exist unbounded discrete examples where PSI computes results for the bounded version that are incorrect for the denotation of the unbounded program.
\cref{fig:extra-recursive1,fig:extra-recursive2,fig:extra-recursive3} depict further recursive examples (alongside a small description).
Lastly, as a \emph{very} challenging example, we consider the pedestrian example (\cref{ex:pedestrian}) again.
The bounds computed by GuBPI are given in \cref{fig:pedBounds} together with the two stochastic results from \cref{fig:pedestrian-stochastic}.
It is clear that the bounds are precise enough to rule out the HMC samples.
This example is very challenging (resulting in a running time of about 1.5h for GuBPI), so in this case guaranteed bounds are actually the only method to decide which of the two outcomes is plausible.%
\footnote{While the running time seems high, we note that Pyro HMC took about an hour to generate $10^4$ samples and produce the (wrong) histogram. Diagnostic methods like simulation-based calibration took even longer (>30h) and delivered inconclusive results (see \cref{sec:sbc} for details).}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{images/pedestrian-bounds.pdf}
\vspace{-8mm}
\caption{Bounds for the pedestrian example (\cref{ex:pedestrian}).}\label{fig:pedBounds}
\end{figure}
\begin{table}
\caption{Running times of GuBPI vs SBC for (Pyro's) HMC. Times are given in seconds (s) and hours (h).}\label{tab:SBC}
\vspace{-2mm}
\def1.1{1.1}
\begin{tabular}{lll}
\toprule
\textbf{Instance} & $\boldsymbol{t}_\mathit{GuBPI}$ & $\boldsymbol{t}_\mathit{SBC}$ \\
\cmidrule[0.7pt](r{0.5mm}){1-1}
\cmidrule[0.7pt](lr{0.5mm}){2-2}
\cmidrule[0.7pt](l{0.5mm}){3-3}
%
Binary GMM (1-dimensional) (\cref{fig:gmm}) & 39s & 1h \\
%
Binary GMM (2-dimensional) & 4h & 1.5h \\
%
Pedestrian Example (\cref{fig:pedBounds})& 1.5h & >300h \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Comparison with Statistical Validation}
\label{sec:sbc}
A general approach to validating inference algorithms for a generative Bayesian model is \emph{simulation-based calibration} (SBC) \cite{SBC,cook2006validation}.
SBC draws a sample $\theta$ from the prior distribution of the parameters, generates data $y$ for these parameters, and runs the inference algorithm to produce posterior samples $\theta_1, \dots, \theta_L$ given $y$.
If the posterior samples follow the true posterior distribution, the rank statistic of the prior sample $\theta$ relative to the posterior samples will be uniformly distributed.
If the empirical distribution of the rank statistic after many such simulations is non-uniform, this indicates a problem with the inference.
While SBC is very general, it is computationally expensive because it performs inference in every simulation.
Moreover, as SBC is a stochastic validation approach, any fixed number of samples may fail to diagnose inference errors that occur only in a very low-probability region.
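As an aside, the SBC loop just described is easy to prototype. The sketch below uses an illustrative conjugate Gaussian model (not one of our benchmarks) and substitutes the exact posterior for the inference algorithm under test, so the resulting ranks come out uniform; all names are ours.

```python
import random

def sbc_ranks(num_sims=2000, L=9, seed=0):
    """One SBC rank statistic per simulation for a conjugate Gaussian model.

    Prior: theta ~ N(0, 1); likelihood: y | theta ~ N(theta, 1), so the
    exact posterior is N(y / 2, 1/2).  Drawing the L "posterior samples"
    from the exact posterior stands in for a correct inference algorithm;
    the ranks are then uniform on {0, ..., L}.
    """
    rng = random.Random(seed)
    ranks = []
    for _ in range(num_sims):
        theta = rng.gauss(0.0, 1.0)                      # draw from the prior
        y = rng.gauss(theta, 1.0)                        # generate data
        post = [rng.gauss(y / 2.0, 0.5 ** 0.5) for _ in range(L)]
        ranks.append(sum(1 for t in post if t < theta))  # rank of the prior draw
    return ranks

# A miscalibrated sampler (e.g. one with too small a variance) would pile
# the ranks up at 0 and L instead of spreading them uniformly.
```

A non-uniform empirical distribution of these ranks over many simulations is exactly the signal SBC uses to flag an inference problem.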
We compare the running times of GuBPI and SBC for three examples where Pyro's HMC yields wrong results (\cref{tab:SBC}).
Running SBC on the pedestrian example (with a reduced sample size and using the parameters recommended in \cite{SBC}) took 32 hours and was still inconclusive because of strong autocorrelation.
Reducing the latter via thinning requires more samples, and would increase the running time to >300 hours.
Similarly, GuBPI diagnoses the problem with the mixture model in \cref{fig:gmm} in significantly less time than SBC.
However, for higher-dimensional versions of this mixture model, SBC clearly outperforms GuBPI.
We give a more detailed discussion of SBC for these examples in \ifFull{\cref{sec:sbc-experiments}}.
\subsection{Limitations and Future Improvements}
The theoretical foundations of our interval-based semantics ensure that GuBPI is applicable to a very broad class of programs (cf.~\cref{sec:4intervals-theory}).
In practice, as usual for exact methods, GuBPI does not handle all examples equally well.
Firstly, as we already saw in \cref{sec:evalEsti}, the symbolic execution---which forms the entry point of the analysis---suffers from path explosion.
On some extreme loop/recursion-free programs (such as example-ckd-epi from \cite{SankaranarayananCG13}), our tool cannot compute all (finitely many) symbolic paths in reasonable time, let alone analyse them in our semantics.
Extending the approach from \cite{SankaranarayananCG13} to sample representative program paths (in the presence of conditioning) is an interesting future direction, which we could combine with the rigour of our type-system-based analysis.
Secondly, our interval-based semantics imposes bounds on each sampled variable and thus scales exponentially in the dimension of the model; this is amplified in the case where the optimised semantics (\cref{sec:linOpt}) is not applicable.
It would be interesting to explore whether this can be alleviated using different trace splitting techniques.
Lastly, the bounds inferred by our interval type system take the form of a single interval, with no information on the exact distribution along this interval.
For example, the most precise bound derivable for the term $\mu^f_x. x \oplus \big[f (x + \mathsf{sample}) \oplus f (x - \mathsf{sample})\big]$ is \scalebox{0.6}{$[a, b]\to \exType{[-\infty, \infty]}{[1, 1]}$} for any $a, b$.
After unrolling to a fixed depth, the approximation of the paths not terminating within the fixed depth is therefore imprecise.
For future work, it would be interesting to improve the bounds in our type system to provide more information about the distribution by means of rigorous approximations of the denotation of the fixpoint in question (i.e.~the probabilistic summary \cite{WangHR18,Muller-OlmS04,PodelskiSW05} of the fixpoint).
\section{Related Work}
\label{sec:related}
\paragraph{Interval trace semantics and Interval SPCF}
Our interval trace semantics to compute bounds on the denotation is similar to the semantics introduced by \citet{BeutnerO21}, who study an interval approximation to obtain \emph{lower} bounds on the termination probability.
By contrast, we study the more challenging problem of bounding the program denotation, which requires us to track the weight of an execution and to prove that the denotation approximates a Lebesgue integral; the latter calls for novel proof ideas.
Moreover, whereas the termination probability of a program is always upper bounded by $1$, here we derive both lower and \emph{upper} bounds.
\paragraph{Probability estimation}
\citet{SankaranarayananCG13} introduced a static analysis framework to infer bounds on a class of definable events in (\emph{score-free}) probabilistic programs.
The idea of their approach is that if we find a finite set $\mathcal{T}$ of symbolic traces with cumulative probability at least $1-c$, and a given event $\varphi$ occurs with probability at most $b$ on the traces in $\mathcal{T}$, then $\varphi$ occurs with probability at most $b + c$ on the entire program.
In the presence of conditioning, the problem becomes vastly more difficult, as the aggregate weight on the unexplored paths can be unbounded, giving $\infty$ as the only derivable upper bound, which is useless.
In order to infer guaranteed bounds, it is necessary to analyse \emph{all} paths in the program, which we accomplish via static analysis and in particular our interval type system.
The approach from \cite{SankaranarayananCG13} was extended by \citet{AlbarghouthiDDN17} to compute the probability of events defined by arbitrary SMT constraints but is restricted to score-free and non-recursive programs.
Our interval-based approach, which may be seen as a variant of theirs, is founded on a complete semantics (\cref{thm:Completeness of interval approximations}), can handle recursive programs with (soft) scoring, and is applicable to a broad class of primitive functions.
Note that we consider programs with \emph{soft} conditioning in which scoring cannot be reduced to volume computation directly.%
\footnote{For programs including only hard-conditioning (i.e.~scoring is only possible with $0$ or $1$), the posterior probability of an event $\varphi$ can be computed by dividing the probability of all traces with weight $1$ on which $\varphi$ holds by the probability of all traces with weight $1$.}
Intuitively, soft conditioning performs a (global) re-weighting of the set of traces, which cannot be captured by (local) volume computations.
In our interval trace semantics we instead track approximations of the weight along each interval trace.
\paragraph{Exact inference}
There are numerous approaches to inferring the exact denotation of a probabilistic program.
\citet{HoltzenBM20} introduced an inference method to efficiently compute the denotation of programs with discrete distributions.
By exploiting program structure to factorise inference, their system Dice can perform exact inference on programs with hundreds of thousands of random variables.
\citet{GehrMV16} introduced PSI, an exact inference system that uses symbolic manipulation and integration.
A later extension, $\lambda$PSI \cite{GehrSV20}, adds support for higher-order functions and nested inference.
The PPL Hakaru \cite{NarayananCRSZ16} supports a variety of inference algorithms on programs with both discrete and continuous distributions.
Using program transformation and partial evaluation, Hakaru can perform exact inference via symbolic disintegration \cite{ShanR17} on a limited class of programs.
\citet{SaadRM21} introduced SPPL, a system that can compute exact answers to a range of probabilistic inference queries, by translating a restricted class of programs to sum-product expressions, which are highly effective representations for inference.
While exact results are obviously desirable, this kind of inference only works for a restricted family of programs: none of the above exact inference systems allow (unbounded) recursion.
Unlike our tool, they are therefore unable to, for instance, handle the challenging \cref{ex:pedestrian} or the programs in \cref{fig:recursive-models}.
\paragraph{Abstract interpretation}
\citet{Monniaux00, Monniaux01} developed an abstract domain for (score-free) probabilistic programs given by a weighted sum of abstract regions.
\citet{Smith08} considered truncated normal distributions as an abstract domain and developed analyses restricted to score-free programs with only linear expressions.
Extending both approaches to support soft conditioning is non-trivial as it requires the computation of integrals on the abstract regions.
In our interval-based semantics, we abstract the concrete traces (by means of interval traces) and not the denotation (distribution of program states).
This allows us to derive bounds on the weight (given by scoring) along the abstracted paths.
\citet{HuangDM21} discretise the domain of continuous samples into interval cubes and derive posterior approximations on each cube.
The resulting approximation converges to the true posterior (similar to approximate, stochastic methods) but does not provide exact/guaranteed bounds and is not applicable to recursive programs.
\paragraph{Refinement types}
Our interval type system (\cref{sec:5interval-analysis}) may be viewed as a type system that refines not just the value of an expression but also its weight \cite{FreemanP91}.
To our knowledge, no existing type refinement system can bound the weight of program executions.
Moreover, the by-design seamless integration with our interval trace semantics allows for a much cheaper type inference, without resorting to an SMT or Horn constraint solver.
This is a crucial advantage, as a typical GuBPI run queries the analysis numerous times (up to thousands of times for some examples).
\paragraph{Stochastic methods}
A general approach to validating inference algorithms for a generative Bayesian model is \emph{simulation-based calibration} (SBC) \cite{SBC,cook2006validation}, discussed in \cref{sec:sbc}.
\citet{Grosse2015} introduced a method to estimate the log marginal likelihood of a model by constructing stochastic lower/upper bounds.
They show that the true value can be sandwiched between these two stochastic bounds with high probability.
In closely related work \cite{Cusumano-Towner2017b,Grosse2016a}, this was applied to measure the accuracy of approximate probabilistic inference algorithms on a specified dataset.
By contrast, our posterior bounds are non-stochastic, and our method is applicable to arbitrary programs of a universal PPL.
\section{Conclusion}
We have studied the problem of inferring guaranteed bounds on the posterior of programs of a universal PPL.
Our work is based on the interval trace semantics, and our constraint-based interval type system gives rise to a tool that can infer useful bounds on the posterior of interesting recursive programs. This is a capability beyond the reach of existing methods, such as exact inference.
As a method of Bayesian inference for statistical probabilistic programs, we can view our framework as occupying a useful middle ground between approximate stochastic inference and exact inference.
A first order phase transition from hadronic to exotic matter phases may
proceed through the nucleation of droplets of the new phase. The formation
of droplets of quark matter \citep{Bombaci1, Mintz, Bombaci2} and antikaon
condensed matter \citep{Norsen, NorsenReddy} in neutrino-free neutron stars (NS) was
studied using the homogeneous nucleation theory of Langer \citep{Langer}.
Droplets of new phase may appear in the metastable nuclear matter due to thermal
fluctuations. The droplets of the stable phase with radii larger than a
critical radius survives and grows if the latent heat is transported
from the surface of the droplet to the metastable state. This heat
transportation occurs through thermal dissipation \cite {Las}
and viscous damping \citep{Raj}.
We have seen that the onset of the antikaon condensate influences the shear
viscosity of NS matter composed of neutrons (n), protons (p), electrons (e) and
muons ($\mu$) that interact via strong or electromagnetic interactions
\citep{shear}. The effect of shear viscosity on the nucleation
of the antikaon condensed matter was recently studied \citep{nucleat}
in deleptonised NS, after the neutrinos have been emitted.
Here we investigate the contribution of neutrinos to the shear viscosity
and to the nucleation of antikaon condensates.
In the PNS, where the temperature is of the order of a few tens
of MeV, neutrinos are trapped because their mean free paths under these
conditions are small compared to the radius of the star. On the other hand,
they are very effective at transporting both heat and momentum because their
mean free paths are orders of magnitude larger than those of other particles.
\section{Formalism}
We adopt the homogeneous nucleation theory of Langer \citep{Langer} to calculate
the thermal fluctuation rate for a first order phase transition from the
charge-neutral and beta-equilibrated nuclear matter to $K^-$ condensed matter
in a neutrino-trapped PNS. The thermal nucleation rate is
given by
\begin{equation}
I= \Gamma_0 \exp(-\frac{\Delta F (R_c)}{T}),
\end{equation}
where $\Delta F(R_c)$ is the change in free energy required to activate the
formation of the critical droplet. The prefactor is $\Gamma_0=\frac{\kappa}{2 \pi}\Omega_0$, where $\Omega_0=\frac{2}{3\sqrt{3}} \left(\frac {\sigma}{T}\right)^{3/2}
\left(\frac {R_c}{\xi}\right)^4$ is the statistical prefactor and
$\kappa=\frac{2 \sigma}{R_c^3 (\Delta w)^2}\left[\lambda T+2 \left(\frac{4}{3} \eta+\zeta \right)\right]$
is the dynamical prefactor. Here $\sigma$ is the surface tension for the
surface separating the two phases, $\xi$ is the correlation length for kaons,
$\Delta w$ is the difference of the enthalpy of the two phases, $\lambda$ the
thermal conductivity, $\eta$ and $\zeta$ are the shear and bulk viscosity
respectively. The free energy attains its maximum at the critical radius, given by
\begin{equation}
R_c(T)=\frac{2 \sigma}{\left(P^K-P^N\right)}.
\end{equation}
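For completeness, Eq.~(2) follows from extremising the standard expression for the free-energy cost of forming a spherical droplet of radius $R$ (a textbook step, independent of the specific EoS):

```latex
\Delta F(R) = 4\pi R^{2}\sigma \;-\; \frac{4\pi}{3} R^{3}\left(P^{K}-P^{N}\right),
\qquad
\left.\frac{\mathrm{d}\,\Delta F}{\mathrm{d}R}\right|_{R=R_c}=0
\;\;\Longrightarrow\;\;
R_c=\frac{2\sigma}{P^{K}-P^{N}} .
```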
Finally, the thermal nucleation time is given by $\tau_{th}=(V_{nuc}I)^{-1}$,
where $V_{nuc}=\frac {4 \pi}{3}R_{nuc}^3$ is the volume of the
core, in which the thermodynamic variables are assumed to remain constant.
To calculate the shear viscosity in the PNS, which is
dominated by the neutrino contribution \citep{Pethick}, we consider the scattering
of neutrinos, $\nu_e + N \rightarrow \nu_e + N$, where $N = n, p, e$.
The shear viscosity of neutrinos due to scattering is calculated using the
coupled Boltzmann transport equation \citep{Pethick}. For the deleptonised
NS, the total shear viscosity is given by $\eta=\eta_n+\eta_p+\eta_e+\eta_{\mu}$, as
in Ref.~\citep{shear}.
\paragraph{The Model EoS}
In order to calculate the shear viscosity and critical radius ($R_c$), we need
to know the EoS,
which we construct at finite temperature using the relativistic mean field
model \citep{Pons, crit}. The interaction
between baryons is mediated by the exchange of scalar ($\sigma$) and vector
($\omega,\rho$) mesons. This picture is consistently extended to include the
kaons. We use the parameter sets of GM1 model \citep{GM} for nucleon-meson
coupling constants. The kaon-meson coupling constants are determined using the
quark model, the isospin counting rule, and the real part of the $K^-$ potential depth,
which we take as $-160$ MeV in our calculation \citep{shear}.
\subsection{Results}
\begin{figure}[t]
\includegraphics[height=0.3\textheight, width=0.79\textwidth]{test1.eps}
\end{figure}
The total shear viscosity is shown as a
function of normalised baryon density for different temperatures in Fig. 1.
We consider two cases: (i) the deleptonised NS matter, where the total shear
viscosity has contributions from all the species n, p, e, and $\mu$;
(ii) the neutrino-trapped PNS matter (lepton fraction
$Y_L=0.4$), where the major contribution comes from neutrinos. In
both cases the shear viscosity decreases with rising temperature.
The prefactor $\Gamma_0$ is plotted as a function of temperature for the PNS
in Fig. 2 and compared with its $T^4$ approximation obtained from dimensional
analysis.
Here we find that the shear viscosity term changes the prefactor by several orders of
magnitude compared to the $T^4$ approximation. This difference is much more
pronounced in the PNS than in the NS \citep{nucleat}.
In Fig. 3 we display the nucleation time as a function of temperature for
a set of values of surface tension at a fixed baryon density for
lepton-trapped PNS matter and find that both the droplet radii and the thermal
nucleation time strongly depend on the surface tension.
Nucleation may not occur in the PNS for surface tension $\sigma < 30$ MeV fm$^{-2}$, as
the radius of the droplet is then smaller than the correlation length ($\xi \sim 5$ fm).
We require in this calculation that the radius of the droplet be greater than $\xi$
\citep{NorsenReddy,Las}. For the NS case, nucleation is observed
to be possible for $\sigma < 20$ MeV fm$^{-2}$ \citep{nucleat}. The larger
viscosity there leads to a larger value of $T$, which might melt the condensate.
\begin{figure}
\includegraphics[height=0.3\textheight, width=0.79\textwidth]{test2.eps}
\end{figure}
Finally in Fig. 4 we compare the results of thermal nucleation time taking
into account the effect of shear viscosity in the prefactor with that of the
prefactor approximated by $T^4$.
For the PNS we display the results for $n_b = 3.30n_0$ and $\sigma = 35$ MeV fm$^{-2}$. We find the results of the $T^4$ approximation to be a few orders
of magnitude higher than those of our calculation. These results demonstrate
the importance of including the shear viscosity in the prefactor of Eq. (1)
in the calculation of thermal nucleation time. We already obtained similar
results for NS matter \citep{nucleat}. Also, it may be mentioned that
nucleation of antikaon condensates is possible only if $\tau_{th} < \tau_{cooling}
(\sim 100~\mathrm{s})$. This holds for the PNS with $\sigma \geq 30$ MeV fm$^{-2}$.
\subsubsection{Summary}
We have investigated the role of shear viscosity on the thermal nucleation rate
for the formation of a critical droplet of antikaon condensed matter. For this we
considered a first order phase transition from the nuclear to antikaon condensed matter
in PNS. We have seen that the droplet radii increase with increasing surface tension.
We also compared the nucleation times for the NS and the PNS and found that nucleation is possible
for a lower value of the surface
tension ($\sigma < 20$ MeV fm$^{-2}$) in the NS, while it may be possible only for a higher value
in the PNS ($\sigma \geq 30$ MeV fm$^{-2}$).
\begin{theacknowledgments}
S.B. would like to acknowledge the Department of Science \& Technology, India
for the travel grant to present this paper in the conference PANIC11.
\end{theacknowledgments}
\bibliographystyle{aipproc}
All graphs considered in this paper will be finite, undirected, and may have loops and multiple edges, unless stated otherwise (in which case the graph will be referred to as a simple graph). We consider five types of walks in graphs: general walks, trails, paths, induced paths, and isometric paths. We follow the terminology used in~\cite{west2001introduction}. Given a non-negative integer $\ell$, a \emph{$v_{0},v_{\ell}$-walk of length $\ell$} in a graph $G$ is a sequence $(v_{0},e_{1},v_{1},\ldots,e_{\ell},v_{\ell})$, where $v_{0},\ldots,v_{\ell}\in V(G)$, $e_{1},\ldots,e_{\ell}\in E(G)$, and for all $i\in\{1,\ldots,\ell\}$ edge $e_{i}$ has endpoints $v_{i-1}$ and $v_{i}$.
If $v_{0}=v_{\ell}$, the walk is said to be \emph{closed}.
A walk in which all edges (resp.~vertices) are distinct is a \emph{trail} (resp.~a \emph{path}) in $G$.
A subgraph $H$ of a graph $G$ is an \emph{induced} subgraph of $G$ if the set of edges of $H$ is exactly the set of edges of $G$ having both endpoints in $V(H)$. The \emph{distance} between two vertices $u$ and $v$ in a graph $G$ is denoted by $d_{G}(u,v)$ and defined as the length of a shortest $u,v$-path in $G$ (or $\infty$
if there is no $u,v$-path in $G$).
A subgraph $H$ of $G$ is said to be \emph{isometric in $G$} if $d_{H}(u,v)=d_{G}(u,v)$ for every two vertices $u,v\in V(H)$.
Note that a path $P$ in a graph $G$ can be viewed as a subgraph of $G$ (with a pair of mutually inverse paths yielding the same subgraph).
In particular, we say that a path in $G$ is \emph{induced} if the corresponding subgraph is induced in $G$, and \emph{isometric} if the corresponding subgraph is isometric in $G$.
For a positive integer $k$ we denote by $P_{k}$ the graph corresponding to a $k$-vertex path (without a host graph $G$), that is,
the graph with $k$ vertices $v_1,\ldots, v_k$
and $\ell=k-1$ edges $\{\{v_{i},v_{i+1}\}\mid i\in \{1,\ldots,\ell\}\}$.
\subsection{Five types of walks}
We consider the following five types of walks:
\begin{table}[!h]
\centerline{\begin{tabular}{|c|c|c|c|c|c|}
\hline
$t$ & $\ensuremath\mathtt{wlk}$ & $\ensuremath\mathtt{trl}$ & $\ensuremath\mathtt{pth}$ & $\ensuremath\mathtt{ind}$ & $\ensuremath\mathtt{iso}$\\
\hline
walk of type $t$ &
walk & trail& path & induced path & isometric path\\
\hline
\end{tabular}
}
\end{table}
{For a graph $G$ and $t\in \{\ensuremath\mathtt{wlk},\ensuremath\mathtt{trl},\ensuremath\mathtt{pth},\ensuremath\mathtt{ind},\ensuremath\mathtt{iso}\}$, a \emph{$t$-walk} in $G$ is a walk in $G$ of type $t$.
We denote the set of all $t$-walks in $G$ by $\W {t}G$.}
Note that
$$\W {\ensuremath\mathtt{wlk}}G\supseteq\W {\ensuremath\mathtt{trl}}G\supseteq\W {\ensuremath\mathtt{pth}}G\supseteq\W {\ensuremath\mathtt{ind}}G\supseteq\W {\ensuremath\mathtt{iso}}G.$$
Moreover, for $k\in\{0,1\}$, equalities hold if we restrict ourselves to walks of length $k$.
(Note however that for $k=1$ the graph should not contain loops.)
\begin{sloppypar}
\begin{defn}[{Extension of a $t$-walk}]
Let $t\in\{\ensuremath\mathtt{wlk},\ensuremath\mathtt{trl},\ensuremath\mathtt{pth},\ensuremath\mathtt{ind},\ensuremath\mathtt{iso}\}$ and let $W,W'$ be two {\hbox{$t$-walks}} in a graph $G$.
Let $W=(v_{1},e_{1},v_{2},\ldots,e_{k-1},v_{k})$
for some vertices $v_{1},\ldots, v_k\in V(G)$ and edges $e_{1},\ldots, e_{k-1}\in E(G)$.
We say that $W'$ is a \emph{$t$-extension} of $W$
if $W'=(v_{0},e_{0},v_{1},e_1,\ldots,e_{k-1},v_{k}, e_k, v_{k+1})$ for some vertices $v_0,v_{k+1}\in V(G)$ and edges
$e_0,e_k\in E(G)$.
\end{defn}
\end{sloppypar}
A vertex $v$ in a graph $G$ is said to be \emph{simplicial} if its neighborhood forms a clique.
Note that $v\in V(G)$ is simplicial if and only if the corresponding one-vertex induced path $(v)\in\W {\ensuremath\mathtt{ind}}G$ has no $\ensuremath\mathtt{ind}$-extension.
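This characterization is straightforward to check by machine. The sketch below (our own illustration, for simple graphs given as adjacency dicts) tests whether $N(v)$ is a clique:

```python
def is_simplicial(adj, v):
    """Return True iff v is simplicial, i.e. its neighbourhood is a clique.

    adj maps each vertex of a simple graph to the set of its neighbours.
    """
    nbrs = list(adj[v])
    return all(b in adj[a] for i, a in enumerate(nbrs) for b in nbrs[i + 1:])

# A triangle 0-1-2 with a pendant vertex 3 attached to 2.
G = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```

Here 0, 1 and 3 are simplicial (a degree-one vertex vacuously so), while 2 is not, since its neighbours 0 and 3 are non-adjacent.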
{Among other things, this concept is generalized in the following definition.}
{
\begin{defn}[Simplicial, closable, and avoidable $t$-walk]
Let $t\in\{\ensuremath\mathtt{wlk},\ensuremath\mathtt{trl},\ensuremath\mathtt{pth},\ensuremath\mathtt{ind},\ensuremath\mathtt{iso}\}$ and let $W$ be a $t$-walk in a graph $G$. We say that $W$ is:
\begin{itemize}
\item \emph{$t$-simplicial} if it has no $t$-extension,
\item \emph{$t$-closable} if it is a subwalk of a closed $t$-walk in $G$,
\item \emph{$t$-avoidable} in $G$ if every $t$-extension of $W$ is $t$-closable.
\end{itemize}
In particular, every $t$-simplicial $t$-walk is $t$-avoidable.
\end{defn}}
\begin{defn}[Shift of a {$t$-walk}]
Let $t\in\{\ensuremath\mathtt{wlk},\ensuremath\mathtt{trl},\ensuremath\mathtt{pth},\ensuremath\mathtt{ind},\ensuremath\mathtt{iso}\}$ and {$W$ be a $t$-walk in $G$} having at least one edge.
Let $W=(v_{0},e_{1},v_{1},\ldots,e_{k},v_{k})$ for some $k\ge 1$, vertices $v_0,\ldots, v_k\in V(G)$, and edges
$e_1,\ldots,e_k\in E(G)$.
We say that {$t$}-walks
$W'=(v_{0},e_{1},v_{1},\ldots,v_{k-1})$
and
$W''=(v_{1},\ldots,v_{k-1},e_k,v_k)$
are {$t$-}\emph{shifts} of each other in $G$.
\end{defn}
Furthermore, given two $t$-walks $W$ and $W'$ in $G$, we say that $W$ \emph{can be $t$-shifted in $G$ to} $W'$ if there exists a sequence of $t$-walks $W=W_{0},W_{1},\ldots,W_{p}=W'$ in $G$ such that
for all $j\in\{1,\ldots,p\}$ we have $W_{j}\in\W tG$ and $W_{j}$
is a {$t$-shift} of $W_{j-1}$ in $G$.
Note that $p=0$ is allowed (in which case $W=W'$). We write $W\S{G}{t}W'$ if $W$ can be $t$-shifted to $W'$
in $G$. Note that for every graph $G$, the relation $\S{G}{t}$ is an equivalence relation on the set $\W tG$.
Whenever for some graph $G$ the type $t$ of walks under consideration is clear from context, {we just write $\S{G}{~}$ and talk about ``shifts'' instead of ``$t$-shifts'', about ``extensions of an induced path'' instead of ``$\ensuremath\mathtt{ind}$-extensions of an $\ensuremath\mathtt{ind}$-walk'', etc.}
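To make the shift relation concrete, the following sketch (our own; not part of the paper) enumerates the $\ensuremath\mathtt{pth}$-shifts of a path in a simple graph and computes its equivalence class under $\S{G}{~}$. Paths are kept as oriented vertex tuples, whereas the paper identifies a path with its underlying subgraph.

```python
from collections import deque

def pth_shifts(adj, path):
    """All pth-shifts of `path` (an oriented vertex tuple) in a simple graph.

    A shift is witnessed by a path one edge longer: drop the first vertex
    and append a new last one, or drop the last vertex and prepend a new
    first one.  The witnessing walk must itself be a path, so the new
    vertex w must avoid all vertices of `path`.
    """
    out = []
    if len(path) >= 2:
        out += [path[1:] + (w,) for w in adj[path[-1]] if w not in path]
        out += [(w,) + path[:-1] for w in adj[path[0]] if w not in path]
    return out

def shift_class(adj, path):
    """Equivalence class of `path` under the relation 'can be shifted to'."""
    seen, todo = {path}, deque([path])
    while todo:
        for q in pth_shifts(adj, todo.popleft()):
            if q not in seen:
                seen.add(q)
                todo.append(q)
    return seen

# The 5-cycle 0-1-2-3-4-0.
C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
```

On $C_5$, the class of the edge $(0,1)$ consists of the five edges traversed in the same orientation; the five reversed edges form a second class.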
\subsection{Main results}
Our main result is given by the following theorem.
\begin{thm}
\label{thm:main} Every walk, path, or induced path in a graph
can be shifted to an avoidable one.
\end{thm}
We prove Theorem~\ref{thm:main} in parts.
The statement for walks follows from Observation~\ref{obs:walks-1} in Section~\ref{sec:walks}.
The statements for induced paths and paths are Theorems~\ref{thm:avoidable-induced}~and~\ref{thm:paths} in Sections~\ref{sec:induced-paths} and~\ref{sec:paths}, respectively.
\begin{cor}
\label{cor:main} For every non-negative integer $\ell$
every graph:
\begin{itemize}
\item[\fbox{$\ensuremath\mathtt{wlk}$}] either contains no walk of length $\ell$, or contains an avoidable
walk of length~$\ell$;
\item[\fbox{$\ensuremath\mathtt{pth}$}] either contains no path of length $\ell$, or contains an avoidable
path of length~$\ell$;
\item[\fbox{$\ensuremath\mathtt{ind}$}] either contains no induced path of length $\ell$, or contains an avoidable
induced path of length~$\ell$.
\end{itemize}
\end{cor}
Note that every graph with at least one edge contains walks of all non-negative lengths.
On the other hand, we show that statements of Theorem~\ref{thm:main} and Corollary~\ref{cor:main}
do not extend to the cases of trails and of isometric paths.
\subsection{Related work}
The most important case in Corollary~\ref{cor:main} is the case of induced paths.
The corresponding statement was conjectured (and proved for $\ell = 1$) by Beisegel et al.~in~\cite{BCGMS19}.
We also refer to~\cite{BCGMS19} for motivation and more details.
For $\ell=0$ the result is much older; it follows from a work of Ohtsuki et al.~\cite{OCF1976}, see also~\cite{RTL1976}.
Chv\'atal et al.~\cite{MR1927566} proved the conjecture for graphs not containing induced cycles of length at least $\ell+4$ (in which case any avoidable induced path of length $\ell$ is simplicial).
Bonamy et al.~\cite{BDHT19} recently proved the conjecture in general.
Using a similar approach we strengthen their result further in Theorem~\ref{thm:main} (the case of induced paths).
Our results can be stated in terms of combinatorial reconfiguration.
We consider a reachability problem in which the states are walks of a fixed type and length in a graph, the transformations are corresponding shifts, and the target set consists of avoidable walks of the same type and length.
Several other results on reconfiguration of paths are known in the literature.
For example, Demaine et al.~\cite{MR3992972} proved that the reachability problem for shifting paths (``Given two paths in a graph, can one be transformed into the other one by a sequence of shifts?'') is \textsf{PSPACE}-complete.
For shortest $u,v$-paths where each transformation consists in changing a single vertex, the same result was obtained by Bonsma~\cite{MR3122210}.
\subsection{Preliminary definitions and notation}\label{sec:prelim}
Given a vertex $v\in V(G)$ we use standard notations; $N(v)$ and $N[v]$ stand for its open and closed neighborhood, respectively, and $G-v$ denotes the graph obtained from $G$ by removing a vertex $v$. The \emph{order} of a graph $G$ is the number of vertices in $G$. We denote the graph obtained from $G$ by contracting an edge $uv\in E(G)$ by $G/_{uv}$. After such a contraction, it will sometimes be useful to label the newly obtained vertex. We do this by writing $G/_{uv\to u'}$, where $u'$ is the new vertex corresponding to the contracted edge $uv$ in $G$.
Given two graphs $G$ and $H$, their \emph{Cartesian product} $G \square H$ is the graph with vertex set $V(G) \times V(H)$, where two vertices $(u,u')$ and $(v,v')$ are adjacent if and only if either
\begin{enumerate*}[label=(\roman*)]
\item $u = v$ and $u'$ is adjacent to $v'$ in $H$, or
\item $u' = v'$ and $u$ is adjacent to $v$ in $G$.
\end{enumerate*}
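For simple graphs stored as adjacency dicts, the Cartesian product can be built directly from this definition (an illustrative sketch; the names are ours):

```python
def cartesian_product(adj_g, adj_h):
    """Cartesian product of two simple graphs given as adjacency dicts."""
    adj = {(u, x): set() for u in adj_g for x in adj_h}
    for (u, x) in adj:
        for y in adj_h[x]:              # case (i): same G-coordinate, H-edge
            adj[(u, x)].add((u, y))
        for v in adj_g[u]:              # case (ii): same H-coordinate, G-edge
            adj[(u, x)].add((v, x))
    return adj

# P3 x P3 is the 3x3 grid (the graph G_1 of the isometric-paths section).
P3 = {0: {1}, 1: {0, 2}, 2: {1}}
grid = cartesian_product(P3, P3)
```

Each vertex $(u,x)$ has degree $\deg_G(u)+\deg_H(x)$; for $P_3 \square P_3$ this gives the familiar grid with corner degree 2 and centre degree 4.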
\subsection{Structure of the paper}
In Section~\ref{sec:trails}, we give examples of graphs containing trails of various lengths that do not contain any avoidable trails of the same length. Similar examples for isometric paths are constructed in Section~\ref{sec:isometric-paths}. In Section~\ref{sec:induced-paths} we derive our most important result, stating that every induced path in a graph can be shifted to an avoidable one. The analogous result for paths is proved in two different ways in Section~\ref{sec:paths}. For completeness, we also include the corresponding easy observations about walks in Section~\ref{sec:walks}. We conclude with some open problems in Section~\ref{sec:open}.
\section{Trails}\label{sec:trails}
In this section we will show that Theorem \ref{thm:main} does not extend to the case of trails.
We construct several counterexamples for various lengths $\ell$ of a trail.
For $\ell = 0$ consider the graph $G$ consisting of two vertices $u$ and $v$ joined by an edge, and having a loop at each of $u$ and $v$. Then, every trail of length $0$ has a unique extension in $G$ (up to reversing the extension) and this extension is not closable. Thus no trail of length $0$ is avoidable in $G$.
Now consider an odd integer $\ell\ge 1$ and let $G_\ell$ be the graph consisting of two vertices and $\ell+2$ parallel edges between them. Then, up to isomorphism there exists a unique trail of length $\ell$ in $G_\ell$. Furthermore, this trail has a unique extension in $G_\ell$ and this extension cannot be closed.
For $\ell=2$ consider the graph $G=K_4$. It is easily seen that up to isomorphism
there exists a unique trail of length $\ell$ in $G$.
Furthermore, this trail has exactly three extensions (see Fig.~\ref{pic:K4}),
two of which (those depicted in Fig.~\ref{pic:K4}(b,c)) cannot be closed.
\begin{figure}[!h]
\centering
\begin{tabular}{c@{\hskip1cm}c@{\hskip1cm}c}
\mpfile{graphs}{51}&\mpfile{graphs}{52}&\mpfile{graphs}{53}
\\
(a) & (b) & (c)
\end{tabular}
\caption{Thick lines: edges of the trail; wavy lines: edges of an
extension; ordinary lines: the remaining edges of the graph}\label{pic:K4}
\end{figure}
To get further examples in the class of simple graphs, consider a positive integer $j$, let $\ell = 4j-1$, and let $G$ be the complete bipartite graph $K_{2, 2j+1}$. Then again, up to isomorphism there exists a unique trail of length $\ell$ in $G$ and
its unique extension in $G$ cannot be closed.
\section{Isometric paths}\label{sec:isometric-paths}
This case is not covered by our main theorem (Theorem~\ref{thm:main}), as the following result shows.
\begin{thm}
\label{thm:isometric}For every non-negative integer $\ell$, there exists
a graph $G_{\ell}$ that contains an isometric path of length $\ell$ but
contains no avoidable isometric path of length $\ell$.
\end{thm}
\begin{proof}
For $\ell=0$, let $G_{0}=W_6$ be the wheel on 7 vertices, that is, the
graph obtained from the cycle $C_{6}$ by adding a universal vertex
(see Figure \ref{pic:iso-graphs}(a)). We claim that every vertex of $G_{0}$ is an isometric path of length 0 that is not avoidable. Indeed, since every vertex extends to an isometric
path of length 2, it is enough to show that no isometric path of length
2 in $G_{0}$ is closable. However, this follows from the fact that
$G_{0}$ contains a unique induced cycle of length greater than $3$,
namely the $C_{6}$, which is not isometric.
\begin{figure}[!h]
\centering
\begin{tabular}{c@{\hskip1cm}c}
\mpfile{graphs}1&\raisebox{6pt}{\mpfile{graphs}2}\\
(a) the wheel $W_6$& (b) the grid $P_3\Box P_3$
\end{tabular}
\caption{The cases $\ell=0$ and $\ell=1$}\label{pic:iso-graphs}
\end{figure}
For $\ell=1$ we give two examples: a small specific example and a similar one that is the smallest member of an infinite family of examples for all $\ell\ge 1$. The first one is the graph $G_{1}\cong P_{3}\Box P_{3}$ (see Figure \ref{pic:iso-graphs}(b)).
In this case every edge of $G_{1}$ is an isometric path of length 1 that is not avoidable.
Indeed, since every edge extends to an isometric path of length 3 (see Fig.~\ref{pic:P3P3}), it is enough to show that no isometric path of length 3 in $G_{1}$ is closable. However, this follows from the fact that $G_{1}$ contains a unique induced cycle of length greater than $4$. This cycle is of length 8 and is not isometric.
\begin{figure}
\centering
\begin{tabular}{c@{\hskip1cm}c@{\hskip1cm}c}
\mpfile{graphs}{21}&\mpfile{graphs}{22}&\mpfile{graphs}{23}
\end{tabular}
\caption{Isometric extensions of edges in $G_1$}\label{pic:P3P3}
\end{figure}
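The uniqueness claim for $G_1\cong P_3\Box P_3$ is easy to verify by brute force. The following Python sketch (our own illustration, not part of the paper) enumerates all induced cycles of the $3\times 3$ grid and checks that the only one of length greater than $4$ is the $8$-cycle around the border, which is not isometric:

```python
from collections import deque
from itertools import combinations

# Vertices of the grid G_1 = P_3 box P_3
V = [(i, j) for i in range(3) for j in range(3)]

def adjacent(u, v):
    return abs(u[0] - v[0]) + abs(u[1] - v[1]) == 1

def induces_cycle(S):
    # S induces a cycle iff every vertex of S has induced degree 2
    # and the subgraph induced by S is connected
    if any(sum(adjacent(u, v) for v in S) != 2 for u in S):
        return False
    seen, stack = {S[0]}, [S[0]]
    while stack:
        u = stack.pop()
        for v in S:
            if v not in seen and adjacent(u, v):
                seen.add(v)
                stack.append(v)
    return len(seen) == len(S)

# all induced cycles on more than 4 vertices
long_cycles = [S for k in range(5, 10)
               for S in combinations(V, k) if induces_cycle(S)]

def dist(src, dst, allowed):
    # BFS distance inside the subgraph induced by `allowed`
    d, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return d[u]
        for v in allowed:
            if v not in d and adjacent(u, v):
                d[v] = d[u] + 1
                q.append(v)
    return None
```

The check confirms that the unique long induced cycle omits only the center vertex, and that two antipodal border vertices are at distance $2$ in $G_1$ but at distance $4$ on the cycle, so the cycle is not isometric.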
For $\ell\ge1$, let $G_{\ell}$ be any graph of the form $P_{n}\Box C_{n}$
where $n$ is an odd integer greater than $2\ell+4$. We denote vertices
of each factor by numbers from $[n]$, so every vertex
of $G_{\ell}$ is of the form $(i,j)$ for $i,j\in[n]$.
We start by characterizing sufficiently short isometric paths in $G_{\ell}$.
The following claim is implicit in \cite[Chapter 12]{IKR08}.
\medskip
\noindent \textbf{Claim 1.}
Let $P=(v_{0},\ldots,v_{k})$
with $k\le \ell+2$ be a path in $G_{\ell}$. Then $P$ is isometric in $G_{\ell}$ if and only if
for both coordinates the following implication holds: if two vertices of $P$ have the same value of the coordinate, then so does every vertex between them.
\begin{proof}
Suppose that $P$ is isometric in $G_{\ell}$. Take two vertices of $P$ with the same value of some coordinate.
Then there exists a unique shortest path between them in $G_{\ell}$, since $k\le \ell+2<n/2$.
Since $P$ is isometric and this shortest path is unique, all its edges must belong to $P$; moreover, this shortest path is constant in the considered coordinate, so every vertex of $P$ between the two chosen vertices has the same value of that coordinate.
For the opposite direction, let $u$ and $v$ be two vertices in $P$, and let $Q$ be the $u,v$-path contained in $P$.
We want to show that $Q$ is a shortest $u,v$-path in $G_{\ell}$.
Let $X$ and $Y$ denote the sets of values taken by the first and second coordinates of vertices in $Q$, respectively.
Since $\max\{|X|,|Y|\}\le \ell+3$ and $n\ge 2\ell+5$, the subgraph $G'$ of $G_{\ell}$ induced by $X\times Y$ is isometric in $G_{\ell}$.
Furthermore, if $P$ satisfies the condition from the claim, then the same condition holds for $Q$.
This implies that both coordinates are monotone along $Q$, so $Q$ is a shortest $u,v$-path in $G'$.
It follows that $Q$ is also a shortest $u,v$-path in $G_{\ell}$.
\end{proof}
To complete the proof we show that every isometric path of length
$\ell$ in $G_{\ell}$ has an extension that is not closable. Claim~1
implies that each isometric path $P$ of length $\ell$ has an isometric extension $Q$ that is not constant in the first coordinate (see Fig.~\ref{pic:PnCn}). Such a path $Q$ can only be contained in isometric cycles of length at least $2\ell+4>4$.
We complete the proof by the following claim.
\medskip
\noindent \textbf{Claim 2.} The only isometric cycles in $G_{\ell}$
are the cycles of length $n$ that are constant in the first coordinate
and the cycles of length 4.
\begin{proof}
It is easy to see that the mentioned cycles are isometric.
\begin{figure}
\centering
\begin{minipage}{0.45\textwidth}
\centering
\begin{tabular}{c@{\hskip1.5cm}c}
\mpfile{graphs}{32}&\mpfile{graphs}{31}\\
(a) $x=1$ & (b) $x > 1$
\end{tabular}
\caption{Two relevant cases for constructing path $Q$ in the case when the vertices of $P$ agree in the first coordinate, with common value $x$.}\label{pic:PnCn}
\end{minipage}\quad\quad\quad\quad
\begin{minipage}{0.45\textwidth}
\centering
\mpfile{graphs}{4}
\caption{Situation in the proof of Claim 2.}\label{pic:C4}
\vspace*{19.8pt}
\end{minipage}
\end{figure}
For the converse direction, consider an isometric cycle $C$ in $G_{\ell}$
that is not constant in the first coordinate. We will show that $C$
is of length 4. Let $a\in[n]$ be the maximal value that
appears as the first coordinate of some vertex in $C$. Let $b\in[n]$
be the minimal value such that $\{ (a-1,b),(a,b)\} $ is
an edge of $C$. We may assume w.l.o.g.~that vertices $v_{0},v_{1},\dots$
of $C$ appear in cyclic order so that $v_{0}=(a-1,b)$ and $v_{1}=(a,b)$.
See Fig.~\ref{pic:C4}. Let $v_{i}=(a,c)$ be the vertex of $C$ having first coordinate $a$ such that $i$ is maximized.
Then, $v_{i+1}=(a-1,c)$ by the maximality of $a$ and $i$. Note
also that $i>1$ and hence $c>b$. By Claim~1, $v_{1},\dots,v_{i}$
are the only vertices in $C$ that maximize the first coordinate.
Cycle $C$ contains a shortest $v_{1},v_{i}$-path in $G_{\ell}$.
Since $n$ is odd, such a path is unique. Similarly, the shortest
$v_{0},v_{i+1}$-path in $G_{\ell}$ is contained in $C$, hence $v_{i+2}=(a-1,c-1)$.
Furthermore, $v_{i-1}=(a,c-1)$, which implies that $\{v_{i-1},v_{i+2}\}$
is an edge of $G_{\ell}$, hence it is also an edge of $C$ since $C$
is isometric. Therefore $v_{i+2}=v_{0}$ and $C$ is of length 4.
\end{proof}
This concludes the proof of Theorem \ref{thm:isometric}.
\end{proof}
\begin{rem}
We leave it to the careful reader to explain why our main construction works only for $\ell\ge1$ and for odd $n$, but not for the case $\ell=0$ or for the case when $n$ is even, and also why one could not replace $P_{n}\Box C_{n}$ by $P_{n}\Box P_{n}$ or $C_{n}\Box C_{n}$.
\end{rem}
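As a sanity check (our own, not part of the paper), the characterization of short isometric paths given in Claim~1 can be verified by brute force on a small instance, here $P_9\Box C_9$ with $\ell=2$ (so $n=9>2\ell+4$ and $n$ is odd):

```python
from collections import deque
from itertools import combinations

n, ell = 9, 2          # n odd, n > 2*ell + 4
V = [(i, j) for i in range(n) for j in range(n)]

def neighbors(v):
    i, j = v
    out = []
    if i > 0: out.append((i - 1, j))        # path factor P_n
    if i < n - 1: out.append((i + 1, j))
    out.append((i, (j + 1) % n))            # cycle factor C_n
    out.append((i, (j - 1) % n))
    return out

def bfs(src):                               # distances from src
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for w in neighbors(u):
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

D = {v: bfs(v) for v in V}                  # all-pairs distances

def is_isometric(path):
    return all(D[path[a]][path[b]] == b - a
               for a, b in combinations(range(len(path)), 2))

def condition(path):
    # equal coordinate values must occupy contiguous positions
    for c in (0, 1):
        seq = [v[c] for v in path]
        for val in set(seq):
            pos = [k for k, x in enumerate(seq) if x == val]
            if pos[-1] - pos[0] + 1 != len(pos):
                return False
    return True

def paths_up_to(max_edges):                 # all simple paths, <= max_edges edges
    stack = [[v] for v in V]
    while stack:
        p = stack.pop()
        if len(p) > 1:
            yield p
        if len(p) <= max_edges:
            for w in neighbors(p[-1]):
                if w not in p:
                    stack.append(p + [w])

mismatches = sum(1 for p in paths_up_to(ell + 2)
                 if is_isometric(p) != condition(p))
```

Exhaustive enumeration over all paths with at most $\ell+2$ edges reports no path on which the two sides of the equivalence disagree.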
\section{Induced paths}\label{sec:induced-paths}
The main result of this section is the following theorem, which settles the case of induced paths from Theorem~\ref{thm:main}.
\begin{thm}
\label{thm:avoidable-induced}
Every induced path in a graph $G$ can
be shifted to an avoidable one.
\end{thm}
We prove Theorem~\ref{thm:avoidable-induced} by adapting the approach used by Bonamy et al.~\cite{BDHT19} to prove that for every positive integer $k$, every graph that contains an induced $P_{k}$ also contains an avoidable induced $P_{k}$ (case $ \ensuremath\mathtt{ind}$ of Corollary~\ref{cor:main}).
We first fix some notation. We denote
a path or a cycle simply by a sequence of vertices, e.g., $P=p_{1}\ldots p_{k}$.
Correspondingly, for such a path $P$ and a vertex $x$ not on $P$ we will denote by $xP$ the sequence $xp_{1}\ldots p_{k}$ (which will typically be a path) and by $Px$ the sequence
$p_{1}\ldots p_{k}x$. Thus, if $P'$ is an extension of $P$, then there exist two vertices $x$ and $y$ not on $P$ such that $P'=xPy$. We often use the fact that for an induced subgraph $G'$ of a graph $G$ and two induced paths
$Q_{1}$ and $Q_{2}$ in $G'$, we have $Q_{1}\S{G}{~}Q_{2}$ whenever $Q_{1}\S{G'}{~}Q_{2}$.
We adapt the approach of~\cite{BDHT19} to shifting.
For a graph $G$ and a positive integer $k$, we say that:
\begin{itemize}
\item property $\Hb{G,k}$ holds if every induced path $P_{k}$ in $G$ can be shifted to an avoidable {induced} path;
\item for a vertex $v\in V(G)$, property $\Hr{G,k,v}$ holds if every induced path $P_{k}$ in $G-N[v]$ can be shifted in $G-N[v]$ to an avoidable {induced} path in $G$;
\item property $\Hr{G,k}$
holds if for every $v\in V(G)$ we have $\Hr{G,k,v}$.
\end{itemize}
\begin{lem}
\label{lem:Hr-Hb}$\Hr{G,k}$ implies $\Hb{G,k}$.
\end{lem}
\begin{proof}
Assume $\Hr{G,k}$ and let $Q=q_{1}\ldots q_{k}$ be an induced $P_{k}$
in $G$. If $Q$ is simplicial, then we are done, so assume that $xQy$
is an extension of $Q$ and define $Q'\coloneqq q_{2}\ldots q_{k}y$.
It is clear that $Q\S{G}{~}Q'$. Furthermore, by $\Hr{G,k,x}$ the path
$Q'$ can be shifted in $G-N[x]$ to a path $Q^{*}$ that
is avoidable in $G$. But then $Q\S{G}{~}Q'\S{{G-N[x]}}{~}Q^{*}$,
and hence $Q\S{G}{~}Q^{*}$. Since $Q$ was arbitrary, this shows $\Hb{G,k}$.
\end{proof}
We need the following result, which is implicit in the proof of \cite[Lemma 15]{BDHT19}.
\begin{lem}
\label{lem:contraction}Let $G$ be a graph, let $uv\in E(G)$, let
$G'\coloneqq G/_{uv\to u'}$ and let $P$ be an induced path in $G'-N[u']$.
Then $P$ is avoidable in $G$ whenever it is avoidable in $G'$.
\end{lem}
\noindent For the sake of completeness we include the proof.
\begin{proof}
Since $G'-N[u']=G-N[\{ u,v\}]$,
the path $P$ is an induced path in $G$. Suppose that $P$ is avoidable
in $G'$ and consider an extension $xPy$ of $P$ in $G$. Since $P$
is contained in $G'-N[u']$, vertices $x$ and $y$ are
distinct from $u$ and $v$. Therefore, $xPy$ is an induced path
in $G'-u'$. Since $P$ is avoidable in $G'$, there exists an induced
cycle $C$ in $G'$ containing $xPy$. If $C$ does not contain $u'$,
then $C$ is also induced in $G$. Otherwise, replacing $u'$ in $C$
with either $u$, $v$, $uv$, or $vu$ as appropriate, we obtain
an induced cycle in $G$ containing $xPy$. This shows that $P$ is
avoidable in $G$.
\end{proof}
\begin{lem}
\label{lem:Hr-forall}For any graph $G$ and positive integer $k,$
property $\Hr{G,k}$ holds.
\end{lem}
\begin{proof}
Fix $k$ and let $G$ be a graph of minimal order for which $\Hr{G,k}$
does not hold. In particular, let $u$ be a vertex in $G$ such that
$\Hr{G,k,u}$ does not hold. Then, there exists an induced path $Q$
in $G-N[u]$ that cannot be shifted in $G-N[u]$
to any avoidable path in $G$.
Since $G-N{[u]}$ is of smaller order than $G$, property
$\Hr{G-N{[u]},k}$ holds. By Lemma~\ref{lem:Hr-Hb}, property
$\Hb{G-N{[u]},k}$ holds as well. Therefore, there exists
a path $Q'=q_{1}\dots q_{k}$ such that $Q'$ is avoidable in $G-N[u]$
and $Q\S{G-N[u]}{~}Q'$. The choice of $Q$ implies that $Q'$
is not avoidable in $G$, thus $Q'$ has an extension $xQ'y$ that
is not closable in $G$. Note that precisely one of $x,y$ is a member
of $N(u)$, as otherwise the extension $xQ'y$ would be
closable in $G$. We may assume w.l.o.g. that $x$ is a common neighbor
of $u$ and $q_{1}$.
Set $G'\coloneqq G/_{ux\to u'}$.
Observe that $Q''\coloneqq q_{2}\dots q_{k}y$ does not contain $u$, $x$, or any neighbor in $G$ of $u$ or $x$. Therefore, $Q''$ is a path in $G'-N[u']$.
Again, the minimality of $G$ implies
property $\Hr{G',k}$, in particular, also $\Hr{G',k,u'}$ holds.
Hence, $Q''$ can be shifted in $G'-N[u']$ to an induced
path $Q^{*}$ that is avoidable in $G'$. So we have $Q\S{G-N[u]}{~}Q'\S{G-N[u]}{~}Q''\S{G'-N[u']}{~}Q^{*}$,
where the relations follow from the definitions of $Q',Q''$, and
$Q^{*}$, respectively. Since $G'-N[u']$ is an induced
subgraph of $G-N[u]$, we have $Q\S{G-N[u]}{~}Q^{*}$.
The choice of $Q$ implies that $Q^{*}$ is not avoidable in $G$,
which contradicts Lemma~\ref{lem:contraction}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:avoidable-induced}]
Immediate from Lemmas~\ref{lem:Hr-Hb} and \ref{lem:Hr-forall}.
\end{proof}
The proof of Theorem~\ref{thm:avoidable-induced} is constructive.
It gives an algorithm for computing a sequence of shifts transforming a given induced path in a graph $G$ to an avoidable one, see {Procedures~\ref{alg:shifting} and~\ref{alg:refined}}.
We do not know if the algorithm runs in polynomial time.
\begin{algorithm}[ht!]
\begin{algorithmic}[1]
\Require a graph $G$ and an induced path $P=p_{1}p_{2}\dots p_{k}$ in $G$
\Ensure a sequence $S$ of paths shifting $P$ to an avoidable induced path in $G$
\If{there exists an extension $xPy$ of $P$}
\State $Q\gets yp_k\ldots p_1x$
\State \Return $P,\textsc{RefinedShifting}(G,Q)$
\Else
\State \Return $P$
\EndIf
\end{algorithmic}
\caption{\label{alg:shifting}$\textsc{Shifting}(G,P)$}
\end{algorithm}
\begin{algorithm}[ht!]
\caption{\label{alg:refined}$\textsc{RefinedShifting}(G,P)$}
\begin{algorithmic}[1]
\Require a graph $G$ and an induced path $P=p_{1}\dots p_{k+2}$ in $G$
\Ensure a sequence $S$ of paths in $G-N[p_{k+2}]$ shifting $p_1\ldots p_k$ to an avoidable induced path in $G$
\State $P' \gets p_{1}\dots p_{k}$
\State $S\gets$ the one-element sequence containing path $P'$
\If{there exists an extension $xP'y$
in $G-N[p_{k+2}]$}
\State $S\gets S,\textsc{RefinedShifting}(G-N[p_{k+2}],xP'y)$
\EndIf
\State $Q\gets$ the end path of $S$
\If{$Q$ has an extension $xQy$ in $G$ such that
$y$ is the unique neighbor of $p_{k+2}$ in $\{ x,y\}$}
\State let $Q=q_{1}\dots q_{k}$ such that $y$ is adjacent to $q_{k}$
\State $Q'\gets xq_{1}\dots q_{k}$
\State $G'\gets G/_{p_{k+2}y\rightarrow y'}$
\State $S'\gets\textsc{RefinedShifting}(G',Q'y')$
\State \Return $S,S'$
\Else
\State \Return $S$
\EndIf
\end{algorithmic}
\end{algorithm}
\section{Paths}\label{sec:paths}
The main result of this section is the following theorem, which settles the case of paths from Theorem~\ref{thm:main}.
\begin{thm}\label{thm:paths}
Every path in a graph $G$ can be shifted to an avoidable one.
\end{thm}
We offer two proofs. The first proof will rely on several observations about line graphs.
Recall that the \emph{line graph} of a graph $G$ is the graph $G'$ with $V(G') = E(G)$ such that two distinct edges $e$ and $f$ of $G$ form a pair of adjacent vertices in $G'$ if and only if $e$ and $f$ share an endpoint in $G$.
\begin{lem}\label{lem:line-graphs}
Let $G$ be a graph and let $G'$ be its line graph. Then the following statements hold.
\begin{enumerate}[(a)]
\item\label{item-1} Let $P$ be a path of length $\ell\ge 1$ in $G$ and let $P'$ be the sequence of edges of $P$ along the path. Then $P'$ is an induced path of length $\ell-1$ in $G'$.
\item\label{item-2} Let $C'$ be an induced cycle of length at least four in $G'$. Then, the sequence of vertices of $C'$ along the cycle yields a sequence of edges of $G$ that forms a cycle $C$ in $G$.
\item\label{item-2.5} Let $P'$ be an induced path in $G'$ and let $\ell$ be the length of $P'$.
Then, the sequence of vertices of $P'$ along the path yields a sequence of edges of $G$ that forms a path $P$ of length $\ell+1$ in $G$.
\item\label{item-3} For every $\ensuremath\mathtt{ind}$-avoidable induced path $P'$ in $G'$, the corresponding path $P$ in $G$ (as in~\ref{item-2.5}) is a $\ensuremath\mathtt{pth}$-avoidable path in $G$.
\item\label{item-4}
For every two induced paths $P'$ and $Q'$ in $G'$ that are $\ensuremath\mathtt{ind}$-shifts of each other in $G'$, the corresponding paths $P$ and $Q$ in $G$ (as in~\ref{item-2.5}) are $\ensuremath\mathtt{pth}$-shifts of each other in~$G$.
\end{enumerate}
\end{lem}
For the sake of completeness we include a proof, which is lengthy but straightforward.
\begin{proof}
\ref{item-1}.
Let $e_1,\ldots, e_{\ell}$ be the edges of $P$ in order.
Since $P$ is a path in $G$, these edges are pairwise distinct.
Furthermore, for all $i,j\in \{1,\ldots, \ell\}$ with $i<j$, edges $e_i$ and $e_j$ share an endpoint in $G$ if and only if $j = i+1$; thus, $e_i$ and $e_j$ are adjacent as vertices of $G'$ if and only if $j = i+1$.
We conclude that $P'$ is an induced path of length $\ell-1$ in $G'$.
\ref{item-2}. Let $\ell\ge 4$ be the length of $C'$ and let $e_1,\ldots, e_\ell$ be a cyclic order of vertices of $C'$.
Then $e_1,\ldots, e_\ell$ are pairwise distinct edges of $G$, with two sharing an endpoint in $G$ if and only if they appear consecutively in the cyclic order.
In particular, since $\ell\ge 4$, no three of these edges share a common endpoint.
Thus, if for all $i\in \{1,\ldots, \ell\}$ we denote by $v_{i}$ the common endpoint in $G$ of $e_i$ and $e_{i+1}$ (indices modulo $\ell$), then vertices $v_1,\ldots, v_\ell$ are pairwise distinct, and $e_i = \{v_{i-1},v_i\}$ for all $i\in \{1,\ldots, \ell\}$ (with $v_0 = v_\ell$).
In particular, $C = (v_1,e_1,v_2,\ldots, v_{\ell},e_{\ell},v_1)$ is a cycle in $G$ formed by the edges of $C'$.
\ref{item-2.5}. The proof is very similar to (but simpler than) that of item~\ref{item-2}.
\ref{item-3}. Let $e_1,\ldots, e_{\ell+1}$ be the vertices of $P'$ in order.
By~\ref{item-2.5}, the sequence of edges $e_1,\ldots, e_{\ell+1}$ forms a path $P$ of length ${\ell+1}$ in $G$.
Suppose that $P'$ is an $\ensuremath\mathtt{ind}$-avoidable induced path in $G'$.
To show that $P$ is a $\ensuremath\mathtt{pth}$-avoidable path in $G$, we verify that every $\ensuremath\mathtt{pth}$-extension of $P$ is $\ensuremath\mathtt{pth}$-closable.
Let $Q$ be an arbitrary $\ensuremath\mathtt{pth}$-extension of $P$ in $G$.
Then there exist two edges $e_0$ and $e_{\ell+2}$ in $G$ such that $Q$ is a path of length $\ell+3$, with edges $e_0,e_1,\ldots, e_{\ell+1},e_{\ell+2}$ in order.
By part~\ref{item-1} of the lemma, this sequence of edges is a sequence of vertices in $G'$ forming an induced path $Q'$ of length $\ell+2$ in $G'$.
Note that $Q'$ is an $\ensuremath\mathtt{ind}$-extension of the induced path $P'$ in $G'$.
Since $P'$ is an $\ensuremath\mathtt{ind}$-avoidable induced path in $G'$, every $\ensuremath\mathtt{ind}$-extension of $P'$ is $\ensuremath\mathtt{ind}$-closable.
In particular, $Q'$ is contained in an induced cycle $C'$ in $G'$.
Since $Q'$ is an induced path contained in $C'$, the length of $C'$ is at least $(\ell+2)+2\ge 4$.
Thus, by part~\ref{item-2} of the lemma, the sequence of vertices of $C'$ along the cycle yields a sequence of edges of $G$ that forms a cycle $C$ in $G$.
Furthermore, $Q$ is contained in $C$ and hence $\ensuremath\mathtt{pth}$-closable.
Thus, every $\ensuremath\mathtt{pth}$-extension of $P$ is closable and $P$ is indeed a $\ensuremath\mathtt{pth}$-avoidable path in~$G$.
\ref{item-4}. Let $\ell$ be the common length of the paths $P'$ and $Q'$.
Then $P$ and $Q$ are both of length $\ell+1$.
The paths $P'$ and $Q'$ are $\ensuremath\mathtt{ind}$-shifts of each other in $G'$, and hence, considering paths as subgraphs, the union of $P'$ and $Q'$ is an induced path $R'$ of length $\ell+1$ in $G'$.
Let $R$ be the path in $G$ corresponding to $R'$ (as in item~\ref{item-2.5} of the lemma).
Then $R$ is a path of length $\ell+2$ in $G$ that is the union of paths $P$ and $Q$.
This shows that $P$ and $Q$ are $\ensuremath\mathtt{pth}$-shifts of each other in~$G$.
\end{proof}
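Item~(a) of the lemma is easy to test computationally. The following Python sketch (our own illustration; the toy graph is an assumption) represents each edge as a two-element `frozenset`, so that adjacency in the line graph is exactly "the two edges intersect", and checks item~(a) for every path of a small graph:

```python
from itertools import combinations

# toy graph (assumed example): 5-cycle 0-1-2-3-4 plus chord {1,3}
adj = {0: {1, 4}, 1: {0, 2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {0, 3}}

def line_adj(e, f):                 # adjacency in the line graph
    return e != f and bool(e & f)

def simple_paths():                 # all simple paths with >= 1 edge
    stack = [[v] for v in adj]
    while stack:
        p = stack.pop()
        if len(p) >= 2:
            yield p
        for w in adj[p[-1]]:
            if w not in p:
                stack.append(p + [w])

def edge_seq(p):                    # edges of a path, in order
    return [frozenset((p[i], p[i + 1])) for i in range(len(p) - 1)]

# Lemma item (a): the edge sequence of any path in G is an
# induced path in the line graph of G
item_a_holds = all(
    all(line_adj(es[i], es[j]) == (j == i + 1)
        for i, j in combinations(range(len(es)), 2))
    for es in map(edge_seq, simple_paths()))
```

The check confirms that nonconsecutive edges of a path never share an endpoint, so they are nonadjacent in the line graph, while consecutive edges always are.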
\begin{proof}[First proof of Theorem~\ref{thm:paths}.]
The first proof is based on a reduction to Theorem~\ref{thm:avoidable-induced}.
Let $P$ be a path in $G$ and let $\ell$ be the length
of $P$. Suppose that $\ell=0$. Then $P$ corresponds to a vertex $v\in V(G)$.
Let $U$ be the connected component of $G$ containing $v$. If $U$
contains only $v$ then clearly $P$ is avoidable in $G$. Otherwise,
let $u$ be a vertex in $U$ such that $U-u$ is connected. (Such
a vertex exists: take, for example, a leaf of a spanning tree of $U$.)
Then $u$ is an avoidable path in $G$ such that $P\S{G}{\ensuremath\mathtt{pth}}u$.
Suppose now that $\ell\ge1$ and let $G'$ be the line graph of $G$.
Let $P'$ be the sequence of edges of $P$.
By item~\ref{item-1} of Lemma~\ref{lem:line-graphs}, $P'$ is an induced path of length $\ell-1$ in $G'$.
By Theorem~\ref{thm:avoidable-induced} there
exists an induced path $Q'$ that is avoidable in $G'$ and such that
$P'\S{G'}{\ensuremath\mathtt{ind}}Q'$.
By items~\ref{item-2.5} and~\ref{item-3} of Lemma~\ref{lem:line-graphs}, the sequence of vertices of $Q'$ in $G'$ corresponds to a sequence of edges in $G$ forming a path $Q$ that is avoidable in $G$.
Furthermore, since $P'\S{G'}{\ensuremath\mathtt{ind}}Q'$, we conclude using item~\ref{item-4} of Lemma~\ref{lem:line-graphs} that $P\S{G}{\ensuremath\mathtt{pth}}Q$.
\end{proof}
In the second proof, all our arguments on paths will only depend on the corresponding sequences of vertices, even in the case of graphs with blue edges. Thus, we use notation introduced in Section~\ref{sec:induced-paths} and represent each path simply as a sequence of vertices.
\begin{proof}[Second proof of Theorem~\ref{thm:paths}.]
The second proof works directly on $G$ and is based on properties of depth-first search (DFS) trees. Let $P$ be a path in $G$ and let $\ell$ be the length of $P$. Consider a DFS traversal of $G$ starting in $P$ and let $T$ be the corresponding DFS tree.
Let $Q$ be a longest root-to-leaf path in $T$ such that $P$ is a subpath of $Q$. We shift $P$ along $Q$ all the way to the last vertex of $Q$, obtaining this way a path $P' = v'_0v'_1\ldots v'_\ell$, where $v'_\ell$ is a leaf in $T$. Let $Q'$ be a longest root-to-leaf path in the subtree of $T$ rooted at $v_0'$.
We now define a path $P''=v''_0v''_1\ldots v''_\ell$ depending on the length of $Q$.
If the length of $Q$ is at least~$2\ell$ we set $P''=P'$ (see Fig.~\ref{pic:path-tree}a).
Otherwise we shift $P'$ to the subpath $P''$ of $Q'$ such that $v''_\ell$
is a leaf in $T$ (see Fig.~\ref{pic:path-tree}b).
\begin{figure}[h]
\centering
\begin{tabular}{c@{\hskip1.62cm}c}
\raisebox{8.73pt}{\mpfile{trees}{11}} &\mpfile{trees}{1}\\
(a) & (b)
\end{tabular}
\caption{
The two cases from the second proof, depending on the length of $Q$
}\label{pic:path-tree}
\end{figure}
Note that the length of each path from $v''_0$ to a~leaf of the subtree of $T$ rooted at $v_0'$ is at most $\ell$, since $Q'$ is a longest root-to-leaf path in this subtree.
If $P''$ is avoidable in $G$, we are done. Otherwise, $P''$ has an extension $$xP''y = xv''_0v''_1\ldots v''_\ell y$$ that is not closable. Since $T$ is a~DFS tree in $G$, all neighbors of $v''_\ell$ in $G$ are ancestors of $v''_\ell$ in $T$. In particular, this implies that $y$ is an ancestor of $v''_\ell$ and hence also an ancestor of $v''_0$.
Note that $y$ is a vertex of the path $Q''=r\dots v'_0\dots v''_0$, where $r$ is the root of $T$ (see Fig.~\ref{pic:bad-ext} for the case when the length of $Q$ is less than $2\ell$).
\begin{figure}[ht]
\centering
\mpfile{trees}{2}
\caption{A hypothetical non-closable extension of $P''$}\label{pic:bad-ext}
\end{figure}
Since $xP''y$ is not closable, we infer that $x$ is not an ancestor of $v_0''$ in $T$.
Thus, $x$ is a child of $v''_0$ in $T$. Let $Q'''= v''_0x\dots w$ be a path in $T$ such that $w$ is a leaf in $T$.
We now shift $P''$ following $Q'''$ from $v''_0$ to the last vertex of $Q'''$, obtaining this way a path $P'''$, the last vertex of which is $w$. Note that, by choice of $P''$, vertex $v''_0$ belongs to the path $P'''$ (see Fig.~\ref{pic:bad-ext}). Thus, if $w$ has a neighbor in $G$ that is a proper ancestor of $v''_0$ in $T$, then $xP''y$ would be a closable extension of $P''$, which is not possible. We conclude that all neighbors of $w$ in $G$ are also vertices of $P'''$. Hence, $P'''$ is a simplicial path in $G$.
Since $P\S{G}{\ensuremath\mathtt{pth}}P'$, $P'\S{G}{\ensuremath\mathtt{pth}}P''$, and $P''\S{G}{\ensuremath\mathtt{pth}}P'''$, we have
$P\S{G}{\ensuremath\mathtt{pth}}P'''$. Thus, $P$ can always be shifted to an avoidable path in $G$.
\end{proof}
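The key structural fact used in the second proof is that in a DFS tree of an undirected graph every non-tree edge joins a vertex with one of its ancestors. The following Python sketch (our own illustration; the toy graph is an assumption) demonstrates this property:

```python
# assumed toy graph, given as an adjacency list
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}

parent = {0: None}
order = []
def dfs(u):                      # recursive depth-first search from 0
    order.append(u)
    for w in adj[u]:
        if w not in parent:
            parent[w] = u
            dfs(w)
dfs(0)

def ancestors(v):                # v together with all its tree ancestors
    out = set()
    while v is not None:
        out.add(v)
        v = parent[v]
    return out

# every graph edge joins an ancestor-descendant pair in the DFS tree
back_edge_property = all(
    v in ancestors(u) or u in ancestors(v)
    for u in adj for v in adj[u])
```

In particular, all neighbors of a leaf of the DFS tree are its ancestors, which is exactly how the vertex $y$ is located on the root path in the proof above.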
The second proof of Theorem~\ref{thm:paths} gives a polynomial-time algorithm for computing a sequence of shifts transforming a given path in a graph $G$ to an avoidable one, see Procedure~\ref{alg:shifting-simplepath}.
\begin{algorithm}[ht!]
\begin{algorithmic}[1]
\Require a graph $G$, a path $P$ in $G$
\Ensure a sequence $S$ of paths shifting $P$ to an avoidable path
in $G$
\State $\ell\gets\textsc{Length}(P)$
\State $T\gets \textsc{DFS}(G,P)$ \Comment{DFS tree w.r.t.~an ordering
starting from $P$}
\State $Q\gets \textsc{Longest}(T,P)$ \Comment{A longest root-to-leaf path in $T$ starting with $P$.}
\State $S \gets \textsc{ShiftAlong}(Q,P)$ \Comment{The sequence of shifts
along the path $Q$}
\State $P'\gets S[-1]$ \Comment{The last path in the sequence}
\State $v'_0 \gets P'[1]$ \Comment{The first vertex in the path}
\State $Q'\gets \textsc{Longest}(T, v'_0)$ \Comment{A longest
root-to-leaf path in the subtree of $T$ rooted at $v'_0$}
\If{$\textsc{Length}(Q')\leq \ell$}
\State $P''\gets P'$
\Else
\State $R'\gets \textsc{Reverse}(P'), Q'$
\State $S \gets S, \textsc{ShiftAlong}(R',\textsc{Reverse}(P'))$
\State $P''\gets S[-1]$ \Comment{The last path in the sequence}
\EndIf
\If{there exists an extension $xP''y$ of $P''$ which is not closable}
\State $v''_0\gets P''[1]$
\State $Q'''\gets \textsc{Longest}(T, v''_0x)$ \Comment{A longest
path to the leaf starting with the edge $v''_0x$}
\State $R''\gets \textsc{Reverse}(P''), Q'''$
\State \Return $S, \textsc{ShiftAlong}(R'',\textsc{Reverse}(P''))$
\Else
\State \Return $S$
\EndIf
\end{algorithmic}
\caption{\label{alg:shifting-simplepath}$\textsc{PathShifting}(G,P)$}
\end{algorithm}
\section{Walks}\label{sec:walks}
For this case we provide two simple observations.
The first one already suffices to prove the first claim of Theorem \ref{thm:main} and the case $\ensuremath\mathtt{wlk}$ of Corollary \ref{cor:main}.
\begin{observation}\label{obs:walks-1}
Every walk in a graph is avoidable.
\end{observation}
\begin{proof}
Indeed, any extension $W'$
of a walk
$W$ is a subwalk of the closed walk obtained by traversing $W'$ first
in one direction and then in the opposite one.
\end{proof}
Furthermore, if the graph is connected, then any walk can be shifted to any walk of the same length.
\begin{observation}\label{obs:walks-2}
Let $W$ and $W'$ be two walks of the same length $\ell$ in a connected graph $G$. Then, $W$ can be shifted to $W'$.
\end{observation}
\begin{proof}
Let $W^*$ be the concatenation of walks $W$, $W''$, and $W'$, where
$W''$ is an arbitrary walk in $G$ from the last vertex of $W$ to the first vertex of $W'$.
Clearly $W^*$ is also a walk in $G$, and its subwalks of length $\ell$ form a sequence of walks that shows that $W$ can be shifted to $W'$.
\end{proof}
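The construction in the proof of Observation~\ref{obs:walks-2} is easy to carry out explicitly: concatenate $W$, a connecting walk, and $W'$, then slide a window of length $\ell$ over the concatenation. The following Python sketch (our own illustration; the toy graph is an assumption) does exactly this:

```python
from collections import deque

# assumed toy connected graph: the path 0-1-2-3-4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

def connecting_walk(s, t):       # a shortest walk from s to t, via BFS
    prev, q = {s: None}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in prev:
                prev[w] = u
                q.append(w)
    walk = [t]
    while prev[walk[-1]] is not None:
        walk.append(prev[walk[-1]])
    return walk[::-1]

def shift_sequence(W, Wp):
    # concatenate W, a connecting walk, and Wp, then slide a window
    mid = connecting_walk(W[-1], Wp[0])
    big = W + mid[1:] + Wp[1:]
    k = len(W)                   # k = ell + 1 vertices per window
    return [big[i:i + k] for i in range(len(big) - k + 1)]

seq = shift_sequence([0, 1, 0], [3, 4, 3])
```

Every window is a walk of length $\ell$, consecutive windows overlap in all but one position, the first window is $W$, and the last is $W'$.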
\section{Open problems}\label{sec:open}
We conclude with the following open problems:
\begin{enumerate}
\item The proof of Theorem~\ref{thm:avoidable-induced} is constructive and produces a sequence $S$ of paths shifting a given induced path $P$ in a graph $G$ to an avoidable induced path. Similarly, our proofs of Theorem~\ref{thm:paths} do the same for the case of not necessarily induced paths.
For the latter case, we believe that with an appropriate compact representation of the output and a suitable implementation of Procedure~\ref{alg:shifting-simplepath} one can achieve linear running time.
On the other hand, about the induced case we know much less.
Given a graph $G$ and an induced path $P$ in $G$, is there a polynomial upper bound on the minimum length of a sequence of shifts transforming $P$ to an avoidable induced path and, if so,
can a sequence of polynomial length be computed efficiently?
In particular, does the algorithm given by the proof of Theorem~\ref{thm:avoidable-induced} (Procedures~\ref{alg:shifting} and~\ref{alg:refined}) run in polynomial time?
\item For a positive integer $k$, what are the graphs that have an avoidable trail of length $k$ whenever they have a trail of length $k$? What are the graphs for which the above property holds for all~$k$?
For a positive integer $k$, what are the graphs in which every trail of length $k$ can be shifted to an avoidable one? What are the graphs in which every trail can be shifted to an avoidable one?
What is the time complexity of recognizing graphs with the above properties?
The above questions are also open for isometric paths.
\item In paper~\cite{MR3992972} the problem of determining whether there exists a sequence of shifts from a given path to another one is proved \textsf{PSPACE}-complete, while the computational complexity status of analogous problems for trails, induced paths, and isometric paths remains open.
The corresponding problem for walks is trivial.
\item Let us say that an induced path $P$ in a graph $G$ is \emph{strongly avoidable} if
there exists a component $C$ of $G-N[P]$ such that
every extension of $P$ can be closed to an induced cycle using only vertices of $C$.
It follows from~\cite[Theorem 5.1]{MR1626534} (see also~\cite{MR3303861}) that every graph $G$ has a strongly avoidable $P_1$. For $k>1$, which graphs have strongly avoidable induced paths $P_k$?
\end{enumerate}
\paragraph{Acknowledgments.}
The authors are grateful to two anonymous reviewers for helpful remarks.
The work for this paper was done in the framework of a bilateral project between Slovenia and Russia, financed by the Slovenian Research Agency (BI-RU/19-20-022). The second and third authors acknowledge partial support of the Slovenian Research Agency (I0-0035, research programs P1-0285, P1-0297, and P1-0383, and research projects J1-1692, J1-9110, J1-9187, N1-0102, and N1-0160).
The work of the first and fourth authors was done within the framework of the HSE University Basic Research Program. The work of the fourth author was partially supported by State Assignment, theme no.~0063-2019-0003.
\section{Introduction}
Different search operators have been proposed and applied in EAs~\cite{fogel1997handbook}. Each search operator has its own advantages. Therefore an interesting research issue is to combine the advantages of various operators and design more efficient hybrid EAs. Hybridization of evolutionary algorithms has become popular due to its capability of handling some real-world problems~\cite{grosan2007hybrid}.
Mixed strategy EAs, inspired by the theory of strategies and games~\cite{dutta1999strategies}, aim at integrating several mutation operators into a single algorithm~\cite{he2005game}. At each generation, an individual chooses one mutation operator according to a strategy probability distribution. Mixed strategy evolutionary programming has been implemented for continuous optimization, and experimental results show that it performs better than its rival, pure strategy evolutionary programming, which utilizes a single mutation operator~\cite{dong2007evolutionary,liang2010mixed}.
However, no analysis has been made to answer the theoretical question: whether and when is the performance of mixed strategy EAs better than that of pure strategy EAs? This paper aims at providing an initial answer. In theory, many EAs can be regarded as a matrix iteration procedure. Following matrix iteration analysis~\cite{varga2009matrix}, the performance of EAs is measured by the asymptotic convergence rate, i.e., the spectral radius of a probability transition sub-matrix associated with an EA. Alternatively, the performance of EAs can be measured by the asymptotic hitting time~\cite{he2011population}, which approximately equals the reciprocal of the asymptotic convergence rate. A theoretical analysis is then made to compare the performance of mixed strategy and pure strategy EAs.
The rest of this paper is organized as follows.
Section 2 describes pure strategy and mixed strategy EAs. Section
3 defines asymptotic convergence rate and asymptotic hitting time. Section 4 makes a comparison of pure strategy and mixed strategy EAs. Section 5 concludes the paper.
\section{Pure Strategy and Mixed Strategy EAs}
Before starting a theoretical analysis of mixed strategy EAs, we first demonstrate the result of a computational experiment.
\begin{example}
Consider an instance of the average capacity 0-1 knapsack problem~\cite{michalewicz1996genetic,he2007comparison}:
\begin{equation}
\begin{array}{llll}
&\mbox{maximize } \sum^{10}_{i=1}v_i b_i, & b_i\in \{ 0,1\}, \\
&\mbox{subject to } \sum^{10}_{i=1} w_i b_i \le C,
\end{array}
\end{equation}
where
$v_1=10$ and $v_i=1$ for $i=2, \cdots, 10$; $w_1=9$
and $w_i=1$ for $i=2, \cdots, 10$; $C=9$.
The fitness function is that for $x=(b_1, \cdots, b_{10})$
$$
f(x)=\left \{
\begin{array}{llll}
&\sum^{10}_{i=1}v_i b_i, &\mbox{if } \sum^{10}_{i=1} w_i b_i \le C, \\
& 0, &\mbox{if } \sum^{10}_{i=1} w_i b_i > C.
\end{array}
\right.
$$
We consider two types of mutation operators:
\begin{itemize}
\item s1: flip each bit $b_i$ with a probability $0.1$;
\item s2: flip each bit $b_i$ with a probability $0.9$;
\end{itemize}
The selection operator is to accept a better offspring only.
Three (1+1) EAs are compared in the computation experiment: (1) EA(s1) which adopts s1 only, (2) EA(s2) with s2 only, and (3) EA(s1,s2) which chooses either s1 or s2 with a probability $0.5$ at each generation.
Each of these three EAs runs 100 times independently. The computational experiment shows that EA(s1, s2) always finds the optimal solution more quickly than other twos.
\end{example}
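The experiment in the example above can be reproduced with a short Python sketch (our own illustration; the generation cap and the random seed are assumptions, not part of the paper). Passing a single flip probability gives a pure strategy EA, while passing both gives the mixed strategy EA(s1, s2):

```python
import random

v = [10] + [1] * 9          # item values
w = [9] + [1] * 9           # item weights
C = 9                       # knapsack capacity

def fitness(x):
    # penalized fitness: infeasible solutions get fitness 0
    return sum(vi * b for vi, b in zip(v, x)) \
        if sum(wi * b for wi, b in zip(w, x)) <= C else 0

def mutate(x, p):
    # flip each bit independently with probability p
    return [b ^ (random.random() < p) for b in x]

def run(ps, max_gen=20000):
    # ps: list of flip probabilities; one is chosen uniformly per
    # generation; strict elitist selection accepts better offspring only
    x = [random.randint(0, 1) for _ in range(10)]
    for t in range(1, max_gen + 1):
        y = mutate(x, random.choice(ps))
        if fitness(y) > fitness(x):
            x = y
        if fitness(x) == 10:    # global optimum: item 1 alone
            return t, x
    return max_gen, x

random.seed(0)
t_mixed, x_mixed = run([0.1, 0.9])   # EA(s1, s2)
```

The instance has a local optimum (all items except the first, fitness $9$) that s1 escapes only with probability $0.1^{10}$ per generation, whereas s2 escapes it with probability $0.9^{10}\approx 0.35$; this is why the mixed strategy is reliably fast here.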
This is a simple case study that shows a mixed strategy EA performs better than a pure strategy EA. In general, we need to answer the following theoretical question: whether or when do a mixed strategy EAs are better than pure strategy EAs?
Consider an instance of the discrete optimization problem which is to maximize an objective function $f(x)$:
\begin{equation}\label{equ1}
\max \{ f(x); x \in S \},
\end{equation}
where $S$ is a finite set.
For convenience of analysis, suppose that all constraints have been removed through an appropriate penalty function method. Under this scenario, all points in $S$ are viewed as feasible solutions. In evolutionary computation, $f(x)$ is called a \emph{fitness function}.
The following notation is used in the algorithm and text thereafter.
\begin{itemize}
\item $x,y,z \in S$ are called \emph{points} in $S$, or \emph{individuals} in EAs or \emph{states} in Markov chains.
\item The \emph{optimal set} $S_{\mathrm{opt}}\subseteq S$ is the set consisting of all optimal solutions to Problem (\ref{equ1}) and \emph{non-optimal set} $S_{\mathrm{non}} := S \setminus S_{\mathrm{opt}}$.
\item $t$ is the generation counter. A random variable $\Phi_t$
represents the state of the $t$-th generation parent; $\Phi_{t+1/2}$ the state of the child which is generated through mutation.
\end{itemize}
The mutation and selection operators are defined as follows:
\begin{itemize}
\item A \emph{mutation operator} is a probability transition from $S$ to $S$. It is defined by a \emph{mutation probability transition matrix} $\mathbf{P}_m$ whose entries are given by
\begin{equation}
P_m(x,y), \quad x, y \in S.
\end{equation}
\item A \emph{strict elitist selection operator} is a mapping from $S \times S$ to $S$; that is, for a parent $x \in S$ and a child $y \in S$, the selected individual $z$ is given by
\begin{equation}
z=\left\{
\begin{array}{llll}
x, &\mbox{if } f(y) \le f(x),\\
y, &\mbox{if } f(y) > f(x).
\end{array}
\right.
\end{equation}
\end{itemize}
A \emph{pure strategy} (1+1) EA, which utilizes only one mutation operator, is described in Algorithm~\ref{alg1}.
\begin{algorithm}
\caption{Pure Strategy Evolutionary Algorithm EA(s) } \label{alg1}
\begin{algorithmic}[1]
\STATE \textbf{input}: fitness function;
\STATE generation counter $ t\leftarrow 0$;
\STATE initialize $ \Phi_0$;
\WHILE{stopping criterion is not satisfied}
\STATE $\Phi_{t+1/2}\leftarrow$ mutate $\Phi_t$ by mutation operator s;
\STATE evaluate the fitness of $\Phi_{t+1/2}$;
\STATE $\Phi_{t+1}\leftarrow$ select one individual from $\{ \Phi_t, \Phi_{t+1/2}\}$ by strict elitist selection;
\STATE $t\leftarrow t+1$;
\ENDWHILE \STATE \textbf{output}: the maximal value of the fitness function.
\end{algorithmic}
\end{algorithm}
The stopping criterion is that the running stops once an optimal solution is found. If an EA cannot find an optimal solution, then it will not stop and the running time is infinite. This is common in the theoretical analysis of EAs.
Let s1, ..., s$\kappa$ be $\kappa$ mutation operators (called \emph{strategies}).
Algorithm~\ref{alg2} describes the procedure of a \emph{mixed strategy} (1+1) EA. At the $t$-th generation,
one mutation operator is chosen from the $\kappa$ strategies according to a \emph{strategy probability distribution}
\begin{equation}
q_{s1}(x), \cdots, q_{s\kappa}(x),
\end{equation}
subject to $0\le q_s(x) \le 1$ and $\sum_s q_{s}(x)=1$.
Write this probability distribution in short by a vector $\mathbf{q}(x)=[q_s(x)]$.
\begin{algorithm}
\caption{Mixed Strategy Evolutionary Algorithm EA(s1, ..., s$\kappa$)} \label{alg2}
\begin{algorithmic}[1]
\STATE \textbf{input}: fitness function;
\STATE generation counter $ t\leftarrow 0$;
\STATE initialize $ \Phi_0$;
\WHILE{stopping criterion is not satisfied}
\STATE choose a mutation operator sk from s1, ..., s$\kappa$;
\STATE $\Phi_{t+1/2}\leftarrow$ mutate $\Phi_t$ by mutation operator $sk$;
\STATE evaluate $\Phi_{t+1/2}$;
\STATE $\Phi_{t+1}\leftarrow$ select one individual from $\{\Phi_t, \Phi_{t+1/2}\}$ by strict elitist selection;
\STATE $t\leftarrow t+1$;
\ENDWHILE \STATE \textbf{output}: the maximal value of the fitness function.
\end{algorithmic}
\end{algorithm}
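Algorithm~\ref{alg2} can be sketched in Python as follows (a minimal version of ours; we assume the optimal fitness value is known so the stopping criterion can be checked, and we cap the number of generations):

```python
import random

def mixed_strategy_ea(fitness, mutations, q, x0, f_opt, max_gens=100_000):
    """Sketch of the mixed strategy (1+1) EA: at each generation a mutation
    operator is drawn from the strategy probability distribution q(x), the
    parent is mutated, and strict elitist selection keeps the child only if
    it is strictly fitter."""
    x, t = x0, 0
    while fitness(x) < f_opt and t < max_gens:
        op = random.choices(mutations, weights=q(x))[0]  # choose a strategy
        y = op(x)                                        # mutate
        if fitness(y) > fitness(x):                      # strict elitist selection
            x = y
        t += 1
    return x, t
```

A pure strategy EA is recovered by passing a single mutation operator with $q(x)=[1]$.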
Pure strategy EAs can be regarded as a special case of mixed strategy EAs with only one strategy.
EAs can be classified into two types:
\begin{itemize}
\item A \emph{homogeneous EA} is an EA which applies the same mutation operators
and same strategy probability distribution for all generations.
\item An \emph{inhomogeneous} EA is an EA which does not apply the same mutation operators
or the same strategy probability distribution for all generations.
\end{itemize}
This paper will only discuss \emph{homogeneous EAs}, mainly for the following reason: the probability transition matrices of an inhomogeneous EA may be chosen to be totally different at different generations, which makes the theoretical analysis of an inhomogeneous EA extremely hard.
\section{Asymptotic Convergence Rate and Asymptotic Hitting Time}
Suppose that a homogeneous EA is applied to maximize a fitness function $f(x)$, then the population sequence $\{\Phi_t, t=0, 1,\cdots\}$ can be modelled by a \emph{homogeneous Markov chain} \cite{rudolph1994convergence,he1999convergence}. Let $\mathbf{P}$ be the probability transition matrix, whose entries are given by
$$P(x,y)=P(\Phi_{t+1}= y \mid \Phi_t = x), \quad x , y \in S.$$
Starting from an initial state $x$, the mean number $m(x)$ of generations needed to find an optimal solution is called the \emph{hitting time} to the set $S_{\mathrm{opt}}$ \cite{he2003towards}:
$$\begin{array}{llll}
\tau(x)&:=&\min \{t; \Phi_t \in S_{\mathrm{opt}} \mid \Phi_0= x\},\\
m (x)&:=& E[\tau(x)]=\displaystyle \sum^{+\infty}_{t=0} t P(\tau(x)=t).
\end{array}
$$
Arrange all individuals in the order of their fitness from high to low: $ x_1, x_2, \cdots $; then their hitting times are:
$$ m(x_1), m(x_2), \cdots .$$
Denote them in short by the vector $\mathbf{m}=[m (x)]$.
Write the transition matrix $\mathbf{P}$ in the canonical form \cite{iosifescu1980finite},
\begin{equation}
\label{equ2} \mathbf{P} =
\begin{pmatrix}
\mathbf{I} & \mathbf{ 0} \\
* & \mathbf{T}
\end{pmatrix},
\end{equation}
where $\mathbf{I}$ is a unit matrix and $\mathbf{{ 0}}$ a zero matrix. $ \mathbf{T}$ denotes the probability transition sub-matrix among non-optimal states, whose entries are given by
$$
P(x,y), \quad x \in S_{\mathrm{non}}, y \in S_{\mathrm{non}}.
$$
The part $*$ plays no role in the analysis.
Since $\forall x \in S_{\mathrm{opt}}, m (x)=0$, it is sufficient to consider $ {m}(x)$ on non-optimal states $x \in S_{\mathrm{non}}$.
For the simplicity of notation, the vector ${\mathbf{m}}$ will also denote the hitting times for all non-optimal states:
$
[m (x)], x \in S_{\mathrm{non}}.
$
The Markov chain associated with an EA can be viewed as a matrix iterative procedure, where the iterative matrix is the probability transition sub-matrix $\mathbf{T}$.
Let $\mathbf{p}_0$ be the vector $[p_0(x)]$ which represents the probability distribution of the initial individual:
$$
p_0(x): =P(\Phi_0=x), \quad x \in S_{\mathrm{non}},
$$
and
$\mathbf{p}_t$ the vector $[p_t(x)]$ which represents the probability distribution of the $t$-generation individual:
$$
p_t(x): =P(\Phi_t=x), \quad x \in S_{\mathrm{non}}.
$$
If the spectral radius $\rho(\mathbf{T})$ of the matrix $\mathbf{T}$ satisfies: $\rho(\mathbf{T})< 1$, then we know~\cite{varga2009matrix}
$$
\lim_{t \to \infty} \parallel \mathbf{p}_{t} \parallel =0.
$$
Following matrix iterative analysis~\cite{varga2009matrix}, the asymptotic convergence rate of an EA is defined as below.
\begin{definition}
The \emph{asymptotic convergence rate} of an EA for maximizing $f(x)$ is
\begin{equation}
R(\mathbf{T}):=-\ln \rho(\mathbf{T})
\end{equation}
where $\mathbf{T}$ is the probability transition sub-matrix restricted to non-optimal states and $\rho(\mathbf{T})$ its spectral radius.
\end{definition}
Asymptotic convergence rate is different from previous definitions of convergence rate based on matrix norms or probability distributions \cite{he1999convergence}.
Note that the asymptotic convergence rate depends on both the probability transition sub-matrix $\mathbf{T}$ and the fitness function $f(x)$. Because the spectral radius of the full transition matrix satisfies
$\rho(\mathbf{P})=1$, $\rho(\mathbf{P})$ cannot be used to measure the performance of EAs. Likewise, because the mutation probability transition matrix is the same for all fitness functions $f(x)$ and $\rho(\mathbf{P}_m)=1$, $\rho(\mathbf{P}_m)$ cannot be used to measure the performance of EAs either.
If $\rho(\mathbf{T})<1$, then the hitting time vector satisfies (see Theorem 3.2 in \cite{iosifescu1980finite}),
\begin{equation}\label{equ3}
\mathbf{m}=(\mathbf{I}- \mathbf{T})^{-1} \mathbf{1}.
\end{equation}
The matrix
$\mathbf{N}:=(\mathbf{I}-\mathbf{T})^{-1}
$ is called the \emph{fundamental matrix} of the Markov chain, where $\mathbf{T}$ is the probability transition sub-matrix restricted to non-optimal states.
The spectral radius $\rho(\mathbf{N})$ of the fundamental matrix can be used to measure the performance of EAs too.
\begin{definition}
The \emph{asymptotic hitting time} of an EA for maximizing $f(x)$ is
$$
T(\mathbf{T})=
\left\{ \begin{array}{lll}
\rho(\mathbf{N})=\rho((\mathbf{I}-\mathbf{T})^{-1}), &\mbox{if } \rho(\mathbf{T})<1,\\
+\infty, &\mbox{if } \rho(\mathbf{T})=1.
\end{array}
\right.
$$
where $\mathbf{T}$ is the probability transition sub-matrix restricted to non-optimal states and $\mathbf{N}$ is the fundamental matrix.
\end{definition}
From Lemma 5 in \cite{he2011population}, we know that the asymptotic hitting time lies between the best-case and worst-case hitting times, i.e.,
\begin{equation}
\min\{ m(x); x \in S_{\mathrm{non}}\} \le T(\mathbf{T}) \le \max \{ m(x); x \in S_{\mathrm{non}} \}.
\end{equation}
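As a small numeric illustration (our own, not from the paper's experiments), consider One-Max with $n=3$ and one-bit-flip mutation: ordering the non-optimal fitness levels $j=2,1,0$ from high to low, the sub-matrix $\mathbf{T}$ is lower triangular and the quantities above can be computed directly:

```python
import numpy as np

# One-Max, n = 3, one-bit-flip mutation with strict elitist selection.
# Non-optimal fitness levels j = 2, 1, 0 (ordered from high to low);
# from level j the EA improves with probability (n - j)/n and stays otherwise.
T = np.array([[2/3, 0.0, 0.0],    # j = 2
              [2/3, 1/3, 0.0],    # j = 1
              [0.0, 1.0, 0.0]])   # j = 0

N = np.linalg.inv(np.eye(3) - T)        # fundamental matrix
m = N @ np.ones(3)                      # hitting times, Eq. (3): m = [3, 4.5, 5.5]
rho_T = max(abs(np.linalg.eigvals(T)))  # 2/3, the largest diagonal entry
rho_N = max(abs(np.linalg.eigvals(N)))  # 3 = 1/(1 - 2/3)
```

The asymptotic hitting time $\rho(\mathbf{N})=3$ indeed lies between the best-case hitting time $3$ and the worst-case hitting time $5.5$.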
From Lemma 3 in \cite{he2011population}, we know
\begin{lemma} \label{lem1}
For any homogeneous (1+1) EA using strict elitist selection, it holds that
$$
\begin{array}{llll}
&\rho(\mathbf{T})= \max\{ P(x,x); x \in S_{\mathrm{non}}\},\\
&\rho(\mathbf{N})=\displaystyle \frac{1}{1-\rho(\mathbf{T})}, &\mbox{if }\rho(\mathbf{T}) <1.
\end{array}
$$
\end{lemma}
From Lemma~\ref{lem1} and Taylor series, we get that
$$
R(\mathbf{T}) T(\mathbf{T}) =
\sum^{\infty}_{k=1} \frac{1}{k } \left( \frac{1}{T(\mathbf{T})}\right)^{k-1}.
$$
If we make the mild assumption $ T(\mathbf{T}) \ge 2$ (i.e., the asymptotic hitting time is at least two generations), then the asymptotic hitting time approximately equals the reciprocal of the asymptotic convergence rate (see Figure~\ref{fig1}).
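The series identity can be checked numerically; in the sketch below (helper names are ours), `lhs` evaluates $-\ln \rho(\mathbf{T})/(1-\rho(\mathbf{T}))$ via Lemma~\ref{lem1} and `rhs` truncates the Taylor series:

```python
import math

def lhs(rho):
    # R(T) * T(T) = -ln(rho) * 1/(1 - rho), by Lemma 1
    return -math.log(rho) / (1 - rho)

def rhs(rho, terms=200):
    # sum_{k >= 1} (1/k) * (1/T)^(k-1), with 1/T(T) = 1 - rho
    u = 1 - rho
    return sum(u ** (k - 1) / k for k in range(1, terms + 1))

# Agreement over a range of spectral radii, plus the bound from Figure 1:
for rho in (0.5, 0.7, 0.9, 0.99):
    assert abs(lhs(rho) - rhs(rho)) < 1e-9
    assert 1.0 < lhs(rho) < 1.5
```

For example, at $\rho(\mathbf{T})=0.5$ the product equals $2\ln 2 \approx 1.386$, and it tends to $1$ as $\rho(\mathbf{T}) \to 1$.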
\begin{figure}
\caption{The relationship between the asymptotic hitting time and asymptotic convergence rate: $1/R(\mathbf{T}) < T(\mathbf{T}) <1.5/R(\mathbf{T})$ if $\rho(\mathbf{T})\ge 0.5$.}
\label{fig1}
\begin{center}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=5cm,y=1cm]
\draw[->,color=black] (-0.3,0) -- (1.1,0);
\foreach \x in {-0.2,0.2,0.4,0.6,0.8,1}
\draw[shift={(\x,0)},color=black] (0pt,2pt) -- (0pt,-2pt) node[below] {\footnotesize $\x$};
\draw[color=black] (0.95,0.03) node [anchor=south west] { $\rho(\mathbf{T})$};
\draw[->,color=black] (0,-0.2) -- (0,2.9);
\foreach \y in {,0.5,1,1.5,2,2.5}
\draw[shift={(0,\y)},color=black] (2pt,0pt) -- (-2pt,0pt) node[left] {\footnotesize $\y$};
\draw[color=black] (0.01,2.75) node [anchor=west] { $R (\mathbf{T})\times T(\mathbf{T}) $};
\draw[color=black] (0pt,-10pt) node[right] {\footnotesize $0$};
\clip(-0.3,-0.2) rectangle (1.1,2.9);
\draw[smooth,samples=100,domain=0.1:0.99] plot(\x,{(-ln((\x)))/(1-(\x))});
\end{tikzpicture}
\end{center}
\end{figure}
\begin{example}\label{exa2}
Consider the problem of maximizing the One-Max function:
$$
f(x)= \mid x \mid,
$$
where
$x=(b_1 \cdots b_n)$ is a binary string, $n$ is the string length, and $\mid x \mid := \sum^n_{i=1} b_i$.
The mutation operator used in the (1+1) EA is to
choose one bit randomly and then flip it.
Then the asymptotic convergence rate and asymptotic hitting time are
$$
\begin{array}{ccc}
{1}/{n} < R(\mathbf{T}) < {1}/{(n-1)}, \\
T(\mathbf{T})=n.
\end{array}
$$
\end{example}
\section{A Comparison of Pure Strategy and Mixed Strategy}
In this section, subscripts ${\mathbf{q}}$ and s are added to distinguish between a mixed strategy EA using a strategy probability distribution $\mathbf{q}$ and a pure strategy EA using a pure strategy s. For example, $\mathbf{T}_{\mathbf{q}}$ denotes the probability transition sub-matrix of a mixed strategy EA; $\mathbf{T}_{s}$ the transition sub-matrix of a pure strategy EA.
\begin{theorem}
\label{the2}
Let $s1, \cdots, s\kappa$ be $\kappa$ mutation operators (strategies).
\begin{enumerate}
\item The asymptotic convergence rate of any mixed strategy EA consisting of these $\kappa$ mutation operators is not smaller than that of the worst pure strategy EA using only one of these mutation operators;
\item and the asymptotic hitting time of any mixed strategy EA is not larger than that of the worst pure strategy EA using only one of these mutation operators.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) From Lemma~\ref{lem1} we know
$$\begin{array}{lll}
\rho(\mathbf{T}_{\mathbf{q}}) &=&\max\{ \displaystyle \sum^\kappa_{k=1} q_{sk}(x) P_{sk} (x, x); x \in S_{\mathrm{non}} \}\\
&\le & \max\{ \displaystyle \sum^\kappa_{k=1} q_{sk}(x) \rho(\mathbf{T}_{sk}); x \in S_{\mathrm{non}} \} \\
&\le& \max\{ \rho(\mathbf{T}_{sk}); k=1, \cdots, \kappa\},
\end{array}$$
where the first inequality uses $P_{sk}(x,x) \le \rho(\mathbf{T}_{sk})$, again from Lemma~\ref{lem1}. Since $-\ln$ is decreasing, we get that
$$
R(\mathbf{T}_{\mathbf{q}}) :=-\ln \rho (\mathbf{T}_{\mathbf{q}}) \ge \min\{-\ln \rho(\mathbf{T}_{sk}); k=1, \cdots, \kappa\},
$$
which is the asymptotic convergence rate of the worst pure strategy EA.
(2)
From Lemma \ref{lem1}, we know
$$
\rho(\mathbf{N})=\displaystyle \frac{1}{1-\rho(\mathbf{T})},
$$
which is increasing in $\rho(\mathbf{T})$. Combining this with the bound $\rho(\mathbf{T}_{\mathbf{q}}) \le \max\{ \rho(\mathbf{T}_{sk}); k=1, \cdots, \kappa\}$ from part (1), we get
$
\rho(\mathbf{N}_{\mathbf{q}})
\le \max \{\rho(\mathbf{N}_{s_k}); k=1, \cdots, \kappa \}.
$
\qed
\end{proof}
In the following, we investigate whether and when the performance of a mixed strategy EA is better than that of a pure strategy EA.
\begin{definition}
A mutation operator s2
is called \emph{complementary} to another mutation operator s1 on a fitness function $f(x)$ if
for any $x$ such that
\begin{equation}
P_{s1} (x, x) =\rho(\mathbf{T}_{s1}),
\end{equation}
it holds
\begin{equation}
P_{s2} (x, x)< \rho(\mathbf{T}_{s1}).
\end{equation}
\end{definition}
\begin{theorem}\label{the3}
Let $f(x)$ be a fitness function and EA(s1) a pure strategy EA. If a mutation operator s2 is complementary to s1, then it is possible to design a mixed strategy EA(s1,s2) which satisfies
\begin{enumerate}
\item its asymptotic convergence rate is larger than that of EA(s1);
\item and its asymptotic hitting time is shorter than that of EA(s1).
\end{enumerate}
\end{theorem}
\begin{proof}
(1) Design a mixed strategy EA(s1, s2) as follows. For any $x$ such that
$$
P_{s1} (x, x) =\rho(\mathbf{T}_{s1}),
$$
let the strategy probability distribution satisfy
$$
q_{s2}(x)=1.
$$
For any other $x$, let the strategy probability distribution satisfy
$$
q_{s1}(x)=1.
$$
Because s2 is complementary to s1, we get that
\[
\rho(\mathbf{T}_{\mathbf{q}}) < \rho(\mathbf{T}_{s1}) ,
\]
and then
\[
-\ln \rho(\mathbf{T}_{\mathbf{q}}) > - \ln \rho(\mathbf{T}_{s1}) ,
\]
which proves the first conclusion in the theorem.
(2) From Lemma~\ref{lem1},
$$
\rho(\mathbf{N}) = \frac{1}{1-\rho(\mathbf{T})},
$$
and since $\rho(\mathbf{T}_{\mathbf{q}}) < \rho(\mathbf{T}_{s1})$, we get that
$$
\rho(\mathbf{N}_{\mathbf{q}}) < \rho(\mathbf{N}_{s1}),
$$
which proves the second conclusion in the theorem. \qed
\end{proof}
\begin{definition}
$\kappa$ mutation operators $s1, \cdots, s\kappa$ are called \emph{mutually complementary} on a fitness function $f(x)$ if
for any $ x \in S_{\mathrm{non}}$ and $sl \in \{ s1, \cdots, s\kappa \} $ such that
\begin{equation}
P_{sl} (x,x) \ge \min \{\rho(\mathbf{T}_{s1}), \cdots, \rho(\mathbf{T}_{s\kappa}) \},
\end{equation}
it holds: $\exists sk \neq sl$,
\begin{equation}
P_{sk} (x,x) < \min \{\rho(\mathbf{T}_{s1}), \cdots, \rho(\mathbf{T}_{s\kappa}) \}.
\end{equation}
\end{definition}
\begin{theorem}\label{the4}
Let $f(x)$ be a fitness function and $s1, \cdots, s\kappa$ be $\kappa$ mutation operators. If these mutation operators are mutually complementary, then it is possible to design a mixed strategy EA which satisfies
\begin{enumerate}
\item its asymptotic convergence rate is larger than that of any pure strategy EA using one mutation operator;
\item and its asymptotic hitting time is shorter than that of any pure strategy EA using one mutation operator.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) We design a mixed strategy EA(s1, ..., s$\kappa$) as follows.
For any $x$ and any strategy $sl \in \{s1, \cdots, s\kappa \}$ such that
$$
\begin{array}{llll}
P_{sl} (x,x)\ge \min \{\rho(\mathbf{T}_{s1}), \cdots, \rho(\mathbf{T}_{s\kappa}) \},\\
\end{array}
$$
from the mutually complementary condition, we know that $\exists sk \neq sl$ such that
$$
P_{sk} (x,x) < \min \{\rho(\mathbf{T}_{s1}), \cdots, \rho(\mathbf{T}_{s\kappa}) \}.
$$
Let the strategy probability distribution satisfy
$$
q_{sk}(x)=1.
$$
For any other $x$,
the strategy probability distribution may be assigned arbitrarily.
Because the mutation operators are mutually complementary, we get that
\[
\rho(\mathbf{T}_{\mathbf{q}}) < \min \{\rho(\mathbf{T}_{s1}), \cdots, \rho(\mathbf{T}_{s\kappa}) \},
\]
and then
\[
-\ln \rho(\mathbf{T}_{\mathbf{q}}) > \min \{-\ln \rho(\mathbf{T}_{s1}), \cdots, -\ln \rho(\mathbf{T}_{s\kappa}) \},
\]
which proves the first conclusion in the theorem.
(2) From Lemma~\ref{lem1}
$$
\rho(\mathbf{N}) = \frac{1}{1-\rho(\mathbf{T})},
$$
we get that
$$
\rho (\mathbf{N}_{\mathbf{q}}) < \rho (\mathbf{N}_{sk}), \quad \forall k=1, \cdots, \kappa,
$$
which proves the second conclusion in the theorem. \qed
\end{proof}
\begin{example}\label{exa3}
Consider the problem of maximizing the following fitness function $f(x)$ (see Figure~\ref{fig2}):
$$
f(x)=\left
\{\begin{array}{lll}
\mid x \mid, & \mbox{if } \mid x \mid < 0.5 n \mbox{ and } \mid x \mid \mbox{ is even};\\
\mid x \mid+2, & \mbox{if } \mid x \mid < 0.5 n \mbox{ and } \mid x \mid \mbox{ is odd};\\
\mid x \mid, & \mbox{if } \mid x \mid \ge 0.5 n.
\end{array}
\right.
$$
where
$x=(b_1 \cdots b_n)$ is a binary string, $n$ the string length and $\mid x \mid := \sum^n_{i=1} b_i$.
\begin{figure}[ht]
\caption{The shape of the function $f(x)$ in Example \ref{exa3} when $n=16$.}\label{fig2}
\begin{center}
\definecolor{qqqqff}{rgb}{0,0,1}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.3cm,y=0.175cm]
\draw[->,color=black] (-1.5,0) -- (18.5,0);
\foreach \x in {,2,4,6,8,10,12,14,16,18}
\draw[shift={(\x,0)},color=black] (0pt,2pt) -- (0pt,-2pt) node[below] {\footnotesize $\x$};
\draw[color=black] (15.12,0.2) node [anchor=south west] { $\mid x \mid$};
\draw[->,color=black] (0,-1.5) -- (0,18.5);
\foreach \y in {,5,10,15}
\draw[shift={(0,\y)},color=black] (2pt,0pt) -- (-2pt,0pt) node[left] {\footnotesize $\y$};
\draw[color=black] (0.2,17.45) node [anchor=west] { $f(x)$};
\draw[color=black] (0pt,-10pt) node[right] {\footnotesize $0$};
\clip(-1.5,-1.5) rectangle (18.5,18.5);
\draw (0,0)-- (1,3);
\draw (1,3)-- (2,2);
\draw (2,2)-- (3,5);
\draw (3,5)-- (4,4);
\draw (4,4)-- (5,7);
\draw (5,7)-- (6,6);
\draw (6,6)-- (7,9);
\draw (7,9)-- (8,8);
\draw (8,8)-- (9.01,9.11);
\draw (9.01,9.11)-- (10,10);
\draw (10,10)-- (11,11);
\draw (11,11)-- (12,12);
\draw (12,12)-- (13,13);
\draw (13,13)-- (14,14);
\draw (14,14)-- (15,15);
\draw (15,15)-- (16,16);
\begin{scriptsize}
\fill [color=black] (1,3) circle (1.5pt);
\fill [color=black] (3,5) circle (1.5pt);
\fill [color=black] (5,7) circle (1.5pt);
\fill [color=black] (7,9) circle (1.5pt);
\fill [color=black] (0,0) circle (1.5pt);
\fill [color=black] (2,2) circle (1.5pt);
\fill [color=black] (4,4) circle (1.5pt);
\fill [color=black] (6,6) circle (1.5pt);
\fill [color=black] (8,8) circle (1.5pt);
\fill [color=black] (10,10) circle (1.5pt);
\fill [color=black] (12,12) circle (1.5pt);
\fill [color=black] (11,11) circle (1.5pt);
\fill [color=black] (13,13) circle (1.5pt);
\fill [color=black] (14,14) circle (1.5pt);
\fill [color=black] (15,15) circle (1.5pt);
\fill [color=black] (16,16) circle (1.5pt);
\fill [color=black] (9.01,9.11) circle (1.5pt);
\end{scriptsize}
\end{tikzpicture}
\end{center}
\end{figure}
Consider two common mutation operators:
\begin{itemize}
\item s1: to choose one bit randomly and then flip it;
\item s2: to flip each bit independently with a probability $1/n$.
\end{itemize}
EA(s1) uses the mutation operator s1 only. At a local optimum of $f$ (a point with odd $\mid x \mid < 0.5 n$), flipping a single bit can never produce a strictly better child, so
$
\rho(\mathbf{T}_{s1})=1,
$
and the asymptotic convergence rate is
$R(\mathbf{T}_{s1})=0.$
EA(s2) utilizes the mutation operator s2 only. Then
$$
\rho(\mathbf{T}_{s2})=1-\frac{1}{n} \left( 1- \frac{1}{n} \right)^{n-1}.
$$
We have
$$
\min \{ \rho(\mathbf{T}_{s1}),\rho(\mathbf{T}_{s2})\} = 1-\frac{1}{n} \left( 1- \frac{1}{n} \right)^{n-1}.
$$
(1) For any $x$ such that
$$
P_{s1} (x,x) \ge 1-\frac{1}{n} \left( 1- \frac{1}{n} \right)^{n-1},$$
we must have
$$
P_{s1} (x,x) =1,$$
since $P_{s1}(x,x)$ is always an integer multiple of $1/n$ and the threshold exceeds $1-{1}/{n}$; and we know that
and we know that
$$
P_{s2} (x,x) < 1-\frac{1}{n} \left( 1- \frac{1}{n} \right)^{n-1}.
$$
(2) For any $x$ such that
$$
P_{s2} (x,x) =\rho(\mathbf{T}_{s2})=1-\frac{1}{n} \left( 1- \frac{1}{n} \right)^{n-1},
$$
we know that
$$
P_{s1} (x,x) =1-\frac{1}{n} < \rho(\mathbf{T}_{s2})=1-\frac{1}{n} \left( 1- \frac{1}{n} \right)^{n-1}.
$$
Hence these two mutation operators are mutually complementary.
We design a mixed strategy EA(s1,s2) as follows:
let the strategy probability distribution satisfy
$$
q_{s1}(x)=
\left\{
\begin{array}{lll}
0, &\mbox{if } \mid x \mid \le 0.5n;\\
1, &\mbox{if } \mid x \mid > 0.5n.
\end{array}
\right.
$$
According to Theorem~\ref{the4}, the asymptotic convergence rate of this mixed strategy EA(s1,s2) is larger than that of either EA(s1) or EA(s2).
\end{example}
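The inequalities used in this example are easy to check numerically (our own sanity check, here for $n=16$):

```python
n = 16
# Stay probabilities from the example:
rho_s2 = 1 - (1 / n) * (1 - 1 / n) ** (n - 1)  # spectral radius for s2
p_s1 = 1 - 1 / n                               # P_s1(x, x) at |x| = n - 1

# s2 can escape the local optima where s1 is stuck (P_s1 = 1 > rho_s2) ...
assert rho_s2 < 1
# ... and where s2 attains its spectral radius, s1 does strictly better:
assert p_s1 < rho_s2
```

For $n=16$, $\rho(\mathbf{T}_{s2}) \approx 0.976$ while $1-1/n = 0.9375$, so the mutual complementarity condition holds with room to spare.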
\section{Conclusion and Discussion}
The results of this paper are summarized in three points.
\begin{itemize}
\item Asymptotic convergence rate and asymptotic hitting time are proposed to measure the performance of EAs. They have seldom been used to evaluate the performance of EAs before.
\item It is proven that the asymptotic convergence rate and asymptotic hitting time of any mixed strategy (1+1) EA consisting of several mutation operators are not worse than those of the worst pure strategy (1+1) EA using only one of these mutation operators.
\item Furthermore, if these mutation operators are mutually complementary, then it is possible to design a mixed strategy EA whose performance (asymptotic convergence rate and asymptotic hitting time) is better than that of any pure strategy EA using one mutation operator.
\end{itemize}
One might argue that several mutation operators can be applied simultaneously, e.g., in a population-based EA, where different individuals adopt different mutation operators. However, in this case, the number of fitness evaluations per generation is larger than that of a (1+1) EA. Therefore, a fair comparison should be between a population-based mixed strategy EA and a population-based pure strategy EA. Due to the length restriction, this issue will not be discussed in this paper.
\subsubsection*{Acknowledgement:} J. He is partially supported by the EPSRC under Grant EP/I009809/1.
H. Dong is partially supported by the National Natural Science Foundation of China under Grant No.~60973075 and Natural Science Foundation of Heilongjiang Province of China under Grant No.~F200937, China.
\section{Datasets and Evaluation Setup}
\label{sec:supp-datasets}
We evaluate our proposed approach for joint 3D pose and focal length estimation in the wild on three challenging real-world datasets with different object categories: Pix3D~\cite{Sun2018pix3d} (\textit{bed}, \textit{chair}, \textit{sofa}, \textit{table}), Comp~\cite{Wang2018fine} (\textit{car}), and Stanford~\cite{Wang2018fine} (\textit{car}). These datasets provide category-level 3D pose and focal length annotations for RGB images taken in the wild and have only recently become available.
Previous datasets were either captured using a single camera with constant focal length (category-level: KITTI or instance-level: LineMOD~\cite{Hinterstoisser2011gradient}, T-LESS~\cite{Hodavn2017tless}, YCB~\cite{Calli2015ycb}), or lacked focal length annotations (category-level: Pascal3D+~\cite{Xiang2014beyond}, ObjectNet3D~\cite{Xiang2016objectnet3d}). Due to the lack of focal length annotations, Pascal3D+ and ObjectNet3D are only meaningful for coarse 3D rotation estimation but not for fine-grained 3D pose estimation because they assume an almost orthographic camera for all images.
As a consequence of this previous lack of datasets, there is little research on 3D pose and focal length estimation in the wild~\cite{Wang2018fine}. Existing 3D pose estimation methods either assume the focal length to be given or evaluate on datasets which were captured using a single camera with constant focal length. However, in the wild, images are captured with multiple cameras having different focal lengths and the focal length is unknown during inference. Moreover, approaches for instance-level 3D pose estimation cannot be applied to category-level 3D pose estimation, as they assume that objects encountered during testing have already been seen during training~\cite{Sundermeyer2018implicit}.
\begin{figure}
\setlength{\tabcolsep}{1pt}
\setlength{\fboxsep}{-2pt}
\setlength{\fboxrule}{2pt}
\definecolor{boxgreen}{rgb}{0.3, 1.0, 0.3}
\definecolor{boxred}{rgb}{1.0, 0.3, 0.3}
\newcommand{\colImgN}[1]{{\includegraphics[width=0.19\linewidth]{#1}}}
\newcommand{\colImgR}[1]{{\color{boxred}\fbox{\colImgN{#1}}}}
\newcommand{\colImgG}[1]{{\color{boxgreen}\fbox{\colImgN{#1}}}}
\centering
\begin{tabular}{ccccc}
\colImgN{Images/Focal/trans_562.png}&\colImgN{Images/Focal/focal1.pdf}& \colImgN{Images/Focal/focal2.pdf}&\colImgN{Images/Focal/focal3.pdf}&\colImgN{Images/Focal/focal4.pdf}\\[-3pt]
\footnotesize Image&\footnotesize Ground Truth&\footnotesize $f$ GT&\footnotesize $f$ pred&\footnotesize $f$ constant\\
\end{tabular}
\caption{In the case of unknown intrinsics, the 3D pose of an object is ambiguous. Our approach finds a geometric consensus between all projection parameters, which results in a precise 2D-3D alignment for any initial focal length. However, a good initial focal length is required to compute an accurate 3D pose, as illustrated by the visualization of the object-to-camera distance.}
\label{fig:supp_focal}
\end{figure}
The Pix3D dataset provides multiple categories; however, we only train and evaluate on categories which have more than 300 non-occluded and non-truncated samples (\textit{bed}, \textit{chair}, \textit{sofa}, \textit{table}). Further, we restrict the training and evaluation to samples marked as non-occluded and non-truncated, because we know neither which object parts are occluded nor the extent of the occlusion, and many objects are heavily truncated. For each category, we select 50\% of the samples for training and the other 50\% for testing. To the best of our knowledge, we are the first to report results for 3D pose and focal length estimation on Pix3D.
The Comp and Stanford datasets only provide one category (\textit{car}). Most images show one prominent car which is non-occluded and non-truncated. The two datasets already provide a train-test split. Thus, we use all available samples from Comp and Stanford for training and evaluation.
\section{Appearance Ambiguities}
\label{sec:supp-ambiguities}
In the main paper, we discuss appearance ambiguities resulting from different focal lengths and show the importance of the focal length for estimating 3D poses from 2D-3D correspondences quantitatively. This is also emphasized by the qualitative example shown in Figure~\ref{fig:supp_focal}. In this experiment, we initialize our geometric optimization with three different focal lengths (ground truth, predicted, and constant). We use the predicted 3D pose and focal length to project the ground truth 3D model onto the image and additionally visualize the object-to-camera distance.
Our geometric optimization finds a consensus between the individual projection parameters, which results in a precise 2D-3D alignment for any initial focal length, because we optimize the reprojection error during inference. However, the 3D pose of an object is ambiguous in the case of unknown intrinsics. Thus, a good initial focal length is a key factor
in achieving high accuracy in terms of 3D translation, as can be seen from the visualization of the object-to-camera distance in Figure~\ref{fig:supp_focal}. Our predicted focal length is significantly more accurate than the best possible constant focal length, \ie, the median of the training dataset.
\section{Training Details}
\label{sec:supp-training}
For our implementation, we resize and pad images to a spatial resolution of $512\times512$ maintaining the aspect ratio. In this way, we are able to use a batch size of 6 on a 12GB GPU. We train our networks for 200 epochs and employ a staged training strategy for fine-tuning a model pre-trained on COCO~\cite{Lin2014microsoft}: First, we freeze all pre-trained weights and only train our focal length and 2D-3D correspondences branches using a learning rate of $10^{-3}$. During training, we gradually unfreeze all network layers and finally train the entire model using a learning rate of $10^{-4}$.
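The resize-and-pad preprocessing can be sketched as follows (our own minimal version; we use nearest-neighbour sampling to stay dependency-free, and the actual implementation may differ in interpolation and padding placement):

```python
import numpy as np

def resize_and_pad(image, size=512):
    """Scale the longer image side to `size` (keeping the aspect ratio)
    via nearest-neighbour sampling, then zero-pad to a size x size canvas."""
    h, w = image.shape[:2]
    s = size / max(h, w)
    nh, nw = round(h * s), round(w * s)
    rows = np.minimum((np.arange(nh) / s).astype(int), h - 1)
    cols = np.minimum((np.arange(nw) / s).astype(int), w - 1)
    resized = image[rows][:, cols]
    out = np.zeros((size, size) + image.shape[2:], dtype=image.dtype)
    out[:nh, :nw] = resized          # pad at the bottom/right for simplicity
    return out
```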
We employ different forms of data augmentation commonly used in object detection~\cite{He2017mask}. In this case, some techniques like mirroring or jittering of location, scale, and rotation require adjusting the training target accordingly, while independent pixel augmentations like additive noise do not.
Balancing individual loss terms is crucial for training a multi-task network. We weight the focal loss with $0.1$, the 2D-3D correspondences loss with $10.0$, and the object detection loss with $1.0$, however, the specific numbers are highly dependent on the implementation.
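With the weights above, the combined training objective would be assembled along these lines (a hypothetical sketch; the term names are ours, and the individual losses would come from the respective network heads):

```python
def total_loss(loss_focal, loss_corr, loss_det,
               w_focal=0.1, w_corr=10.0, w_det=1.0):
    """Weighted multi-task objective using the loss weights reported above."""
    return w_focal * loss_focal + w_corr * loss_corr + w_det * loss_det
```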
\section{Qualitative Results}
\label{sec:supp-quali}
Figure~\ref{fig:supp_multi} shows additional qualitative 3D pose and focal length estimation results for multiple objects in a single image. We predict 3D poses for multiple objects, however, all evaluated datasets only provide 3D pose annotations for one instance per image.
\begin{figure}[h]
\setlength{\tabcolsep}{1pt}
\setlength{\fboxsep}{-2pt}
\setlength{\fboxrule}{2pt}
\definecolor{boxgreen}{rgb}{0.3, 1.0, 0.3}
\definecolor{boxred}{rgb}{1.0, 0.3, 0.3}
\newcommand{\colImgN}[1]{{\includegraphics[width=0.24\linewidth]{#1}}}
\newcommand{\colImgR}[1]{{\color{boxred}\fbox{\colImgN{#1}}}}
\newcommand{\colImgG}[1]{{\color{boxgreen}\fbox{\colImgN{#1}}}}
\centering
\begin{tabular}{cccc}
\colImgN{Images/Multi/multi_7874.png}& \colImgR{Images/Multi/multi_7874_gt.png}&
\colImgG{Images/Multi/multi_7874_lf.png}&
\colImgG{Images/Multi/multi_7874_bb.png}\\[-1.5pt]
\colImgN{Images/Multi/multi_6363.png}& \colImgR{Images/Multi/multi_6363_gt.png}& \colImgG{Images/Multi/multi_6363_dir.png}&
\colImgG{Images/Multi/multi_6363_bb.png}\\[-1.5pt]
\colImgN{Images/Multi/multi_2370.png}& \colImgR{Images/Multi/multi_2370_gt.png}&
\colImgG{Images/Multi/multi_2370_lf.png}&
\colImgG{Images/Multi/multi_2370_bb.png}\\[-1.5pt]
\colImgN{Images/Multi/multi_125.png}& \colImgR{Images/Multi/multi_125_gt.png}&
\colImgG{Images/Multi/multi_125_lf.png}&
\colImgG{Images/Multi/multi_125_bb.png}\\[-1.5pt]
\colImgN{Images/Multi/multi_779.png}& \colImgR{Images/Multi/multi_779_gt.png}&
\colImgG{Images/Multi/multi_779_lf.png}&
\colImgG{Images/Multi/multi_779_bb.png}\\[-3pt]
\footnotesize Image&\footnotesize Ground Truth&\footnotesize Ours-LF&\footnotesize Ours-BB\\
\end{tabular}
\caption{Additional qualitative 3D pose and focal length estimation results for multiple objects in a single image. We predict 3D poses for multiple objects (green frames), however, all evaluated datasets only provide 3D pose annotations for one instance per image (red frames).}
\label{fig:supp_multi}
\end{figure}
\section{Qualitative Predictions}
\label{sec:supp-corres}
\begin{table*}
\begin{center}
\setlength{\tabcolsep}{3.5pt}
\begin{tabular}{lcc|cc|c|c|c|cc}
\toprule
\multicolumn{3}{c}{}&\multicolumn{2}{c}{\bf Rotation}&\multicolumn{1}{c}{\bf Translation}&\multicolumn{1}{c}{\bf Pose}&\multicolumn{1}{c}{\bf Focal}&\multicolumn{2}{c}{\bf Projection}\\
\cmidrule(lr){4-5}\cmidrule(lr){6-6}\cmidrule(lr){7-7}\cmidrule(lr){8-8}\cmidrule(lr){9-10}
\multirow{2}{*}{Method}&\multicolumn{1}{c}{\multirow{2}{*}{Dataset}}&\multicolumn{1}{c}{\multirow{2}{*}{Class}}&\multicolumn{1}{c}{$MedErr_R$}&\multicolumn{1}{c}{\multirow{2}{*}{$Acc_{R\frac{\pi}{6}}$}}&\multicolumn{1}{c}{$MedErr_{t}$}&\multicolumn{1}{c}{$MedErr_{R,t}$}&\multicolumn{1}{c}{$MedErr_f$}&\multicolumn{1}{c}{$MedErr_{P}$}&\multicolumn{1}{c}{\multirow{2}{*}{$Acc_{P_{0.1}}$}}\\
&\multicolumn{1}{c}{}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{$\cdot1$}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{$\cdot10^{1}$}&\multicolumn{1}{c}{$\cdot10^{1}$}&\multicolumn{1}{c}{$\cdot10^{1}$}&\multicolumn{1}{c}{$\cdot10^{2}$}\\
\midrule
Ours-LF \textit{initial}&\multirow{2}{*}{Pix3D}&\multirow{2}{*}{$mean$}&7.10&87.9\%&1.89&1.32&1.73&3.98&84.7\%\\
Ours-LF \textit{refined}&&&\bf6.92&\bf88.4\%&\bf1.85&\bf1.30&\bf1.72&\bf3.85&\bf85.5\%\\
\midrule
Ours-BB \textit{initial}&\multirow{2}{*}{Pix3D}&\multirow{2}{*}{$mean$}&7.04&90.1\%&1.98&1.33&1.77&3.87&86.8\%\\
Ours-BB \textit{refined}&&&\bf6.89&\bf90.8\%&\bf1.94&\bf1.30&\bf1.75&\bf3.66&\bf88.0\%\\
\bottomrule
\end{tabular}
\end{center}
\caption{Ablation study on joint 3D pose and focal length refinement. We compare our initial solution to the final solution obtained by our joint refinement. Jointly optimizing all parameters results in an improvement across all metrics.}
\label{table:ablation}
\end{table*}
Qualitative examples of our predicted 2D-3D correspondences are presented in Figure~\ref{fig:supp_correspondences}. The predicted correspondences do not contain isolated extreme outliers, because they are computed from a low-dimensional feature embedding which produces consistent predictions. If our prediction fails, entire regions of 2D-3D correspondences are corrupt. In such cases, we cannot estimate the pose correctly, not even with robust methods.
Considering our predicted location fields, we observe that the overall shape of the object is recovered very accurately. In specific cases, thin object parts and details are not detected, \eg, the skinny legs of a table as shown in Figure~\ref{fig:supp_correspondences}. To address this issue, the spatial resolution of the predicted location field can be increased. In this work, we follow the architecture of Mask R-CNN and use a spatial resolution of $28\times28$~\cite{He2017mask}.
Considering our 3D bounding box corner projections, we observe that the predicted 2D locations are close to the ground truth 2D locations. Also, the perspective box-shape is well recovered and there is a consensus between the individual points. The predictions are even accurate for corners which project outside the image area, as shown in Figure~\ref{fig:supp_correspondences}.
\section{Failure Cases}
\label{sec:supp-failure}
Figure~\ref{fig:supp_failure} shows failure cases of our approach using our two different methods for establishing 2D-3D correspondences (Ours-LF and Ours-BB). Most failure cases relate to strong truncations, heavy occlusions, or poses which are far from the poses seen during training. Naturally, the annotations are not perfect and some occluded or truncated samples are marked as non-occluded and non-truncated, or the 3D pose annotation is incorrect. In some cases, our approach makes a correct prediction, but this prediction is considered wrong because of an erroneous ground truth 3D pose annotation, as shown in Figure~\ref{fig:supp_failure}. Interestingly, there is a large overlap between the failure cases of both methods, which indicates that the respective samples are significantly different from the samples seen during training.
\begin{figure}[h]
\setlength{\tabcolsep}{1pt}
\setlength{\fboxsep}{-2pt}
\setlength{\fboxrule}{2pt}
\definecolor{boxgreen}{rgb}{0.3, 1.0, 0.3}
\definecolor{boxred}{rgb}{1.0, 0.3, 0.3}
\newcommand{\colImgN}[1]{{\includegraphics[width=0.19\linewidth]{#1}}}
\newcommand{\colImgR}[1]{{\color{boxred}\fbox{\colImgN{#1}}}}
\newcommand{\colImgG}[1]{{\color{boxgreen}\fbox{\colImgN{#1}}}}
\centering
\begin{tabular}{ccccc}
\colImgN{Images/2D-3D/corr_comp_303.png}& \colImgN{Images/2D-3D/corr_comp_303_LF_gt_0.png}& \colImgN{Images/2D-3D/corr_comp_303_LF_gt_1.png}& \colImgN{Images/2D-3D/corr_comp_303_LF_gt_2.png}& \colImgN{Images/2D-3D/corr_comp_303_BB_gt.png}\\[-1.5pt]
&\colImgN{Images/2D-3D/corr_comp_303_LF_pred_0.png}& \colImgN{Images/2D-3D/corr_comp_303_LF_pred_1.png}& \colImgN{Images/2D-3D/corr_comp_303_LF_pred_2.png}& \colImgN{Images/2D-3D/corr_comp_303_BB_pred.png}\\[-1.5pt]
\colImgN{Images/2D-3D/corr_comp_443.png}& \colImgN{Images/2D-3D/corr_comp_443_LF_gt_0.png}& \colImgN{Images/2D-3D/corr_comp_443_LF_gt_1.png}& \colImgN{Images/2D-3D/corr_comp_443_LF_gt_2.png}& \colImgN{Images/2D-3D/corr_comp_443_BB_gt.png}\\[-1.5pt]
&\colImgN{Images/2D-3D/corr_comp_443_LF_pred_0.png}& \colImgN{Images/2D-3D/corr_comp_443_LF_pred_1.png}& \colImgN{Images/2D-3D/corr_comp_443_LF_pred_2.png}& \colImgN{Images/2D-3D/corr_comp_443_BB_pred.png}\\[-1.5pt]
\colImgN{Images/2D-3D/corr_comp_606.png}& \colImgN{Images/2D-3D/corr_comp_606_LF_gt_0.png}& \colImgN{Images/2D-3D/corr_comp_606_LF_gt_1.png}& \colImgN{Images/2D-3D/corr_comp_606_LF_gt_2.png}& \colImgN{Images/2D-3D/corr_comp_606_BB_gt.png}\\[-1.5pt]
&\colImgN{Images/2D-3D/corr_comp_606_LF_pred_0.png}& \colImgN{Images/2D-3D/corr_comp_606_LF_pred_1.png}& \colImgN{Images/2D-3D/corr_comp_606_LF_pred_2.png}& \colImgN{Images/2D-3D/corr_comp_606_BB_pred.png}\\[-1.5pt]
\colImgN{Images/2D-3D/corr_pix3d_34.png}& \colImgN{Images/2D-3D/corr_pix3d_34_LF_gt_0.png}& \colImgN{Images/2D-3D/corr_pix3d_34_LF_gt_1.png}& \colImgN{Images/2D-3D/corr_pix3d_34_LF_gt_2.png}& \colImgN{Images/2D-3D/corr_pix3d_34_BB_gt.png}\\[-1.5pt]
&\colImgN{Images/2D-3D/corr_pix3d_34_LF_pred_0.png}& \colImgN{Images/2D-3D/corr_pix3d_34_LF_pred_1.png}& \colImgN{Images/2D-3D/corr_pix3d_34_LF_pred_2.png}& \colImgN{Images/2D-3D/corr_pix3d_34_BB_pred.png}\\[-1.5pt]
\colImgN{Images/2D-3D/corr_pix3d_202.png}& \colImgN{Images/2D-3D/corr_pix3d_202_LF_gt_0.png}& \colImgN{Images/2D-3D/corr_pix3d_202_LF_gt_1.png}& \colImgN{Images/2D-3D/corr_pix3d_202_LF_gt_2.png}& \colImgN{Images/2D-3D/corr_pix3d_202_BB_gt.png}\\[-1.5pt]
&\colImgN{Images/2D-3D/corr_pix3d_202_LF_pred_0.png}& \colImgN{Images/2D-3D/corr_pix3d_202_LF_pred_1.png}& \colImgN{Images/2D-3D/corr_pix3d_202_LF_pred_2.png}& \colImgN{Images/2D-3D/corr_pix3d_202_BB_pred.png}\\[-1.5pt]
\colImgN{Images/2D-3D/corr_pix3d_1633.png}& \colImgN{Images/2D-3D/corr_pix3d_1633_LF_gt_0.png}& \colImgN{Images/2D-3D/corr_pix3d_1633_LF_gt_1.png}& \colImgN{Images/2D-3D/corr_pix3d_1633_LF_gt_2.png}& \colImgN{Images/2D-3D/corr_pix3d_1633_BB_gt.png}\\[-1.5pt]
&\colImgN{Images/2D-3D/corr_pix3d_1633_LF_pred_0.png}& \colImgN{Images/2D-3D/corr_pix3d_1633_LF_pred_1.png}& \colImgN{Images/2D-3D/corr_pix3d_1633_LF_pred_2.png}& \colImgN{Images/2D-3D/corr_pix3d_1633_BB_pred.png}\\[-1.5pt]
\colImgN{Images/2D-3D/corr_pix3d_2128.png}& \colImgN{Images/2D-3D/corr_pix3d_2128_LF_gt_0.png}& \colImgN{Images/2D-3D/corr_pix3d_2128_LF_gt_1.png}& \colImgN{Images/2D-3D/corr_pix3d_2128_LF_gt_2.png}& \colImgN{Images/2D-3D/corr_pix3d_2128_BB_gt.png}\\[-1.5pt]
&\colImgN{Images/2D-3D/corr_pix3d_2128_LF_pred_0.png}& \colImgN{Images/2D-3D/corr_pix3d_2128_LF_pred_1.png}& \colImgN{Images/2D-3D/corr_pix3d_2128_LF_pred_2.png}& \colImgN{Images/2D-3D/corr_pix3d_2128_BB_pred.png}\\[-3pt]
\footnotesize Image&\footnotesize LF (X)&\footnotesize LF (Y)&\footnotesize LF (Z)&\footnotesize BB\\
\end{tabular}
\caption{Qualitative examples of our predicted 2D-3D correspondences. For each object, we show two forms of 2D-3D correspondences: the location field (LF) and the projections of the object's 3D bounding box corners (BB). For each example image, the top row shows the ground truth, the bottom row shows our predictions.}
\label{fig:supp_correspondences}
\end{figure}
\begin{figure}[h]
\setlength{\tabcolsep}{1pt}
\setlength{\fboxsep}{-2pt}
\setlength{\fboxrule}{2pt}
\definecolor{boxgreen}{rgb}{0.3, 1.0, 0.3}
\definecolor{boxred}{rgb}{1.0, 0.3, 0.3}
\newcommand{\colImgN}[1]{{\includegraphics[width=0.24\linewidth]{#1}}}
\newcommand{\colImgR}[1]{{\color{boxred}\fbox{\colImgN{#1}}}}
\newcommand{\colImgG}[1]{{\color{boxgreen}\fbox{\colImgN{#1}}}}
\centering
\begin{tabular}{ccc}
\colImgN{Images/Failure/failure_637.png}& \colImgN{Images/Failure/failure_637_gt.png}& \colImgR{Images/Failure/failure_637_lf.png}\\[-1.5pt]
\colImgN{Images/Failure/failure_66.png}& \colImgN{Images/Failure/failure_66_gt.png}&\colImgR{Images/Failure/failure_66_lf.png}\\[-1.5pt]
\colImgN{Images/Failure/failure_121.png}& \colImgN{Images/Failure/failure_121_gt.png}&\colImgR{Images/Failure/failure_121_lf.png}\\[-3pt]
\footnotesize Image&\footnotesize Ground Truth&\footnotesize Ours-LF\\[0.5em]
\multicolumn{3}{c}{(a) Failure cases of Ours-LF}\\[1.5em]
\colImgN{Images/Failure/failure_951.png}& \colImgN{Images/Failure/failure_951_gt.png}& \colImgR{Images/Failure/failure_951_bb.png}\\[-1.5pt]
\colImgN{Images/Failure/failure_61.png}& \colImgN{Images/Failure/failure_61_gt.png}& \colImgR{Images/Failure/failure_61_bb.png}\\[-1.5pt]
\colImgN{Images/Failure/failure_155.png}& \colImgN{Images/Failure/failure_155_gt.png}& \colImgR{Images/Failure/failure_155_bb.png}\\[-3pt]
\footnotesize Image&\footnotesize Ground Truth&\footnotesize Ours-BB\\[0.5em]
\multicolumn{3}{c}{(b) Failure cases of Ours-BB}\\[1.5em]
\colImgN{Images/Failure/failure_77.png}& \colImgR{Images/Failure/failure_77_gt.png}& \colImgG{Images/Failure/failure_77_lf.png}\\[-1.5pt]
\colImgN{Images/Failure/failure_1448.png}& \colImgR{Images/Failure/failure_1448_gt.png}& \colImgG{Images/Failure/failure_1448_lf.png}\\[-1.5pt]
\colImgN{Images/Failure/failure_1794.png}& \colImgR{Images/Failure/failure_1794_gt.png}& \colImgG{Images/Failure/failure_1794_lf.png}\\[-3pt]
\footnotesize Image&\footnotesize Ground Truth&\footnotesize Ours-LF\\[0.5em]
\multicolumn{3}{c}{(c) Erroneous ground truth annotations}\\[0.5em]
\end{tabular}
\caption{Example failure cases of our approach for (a) Ours-LF and (b) Ours-BB. Most failure cases relate to strong truncations, heavy occlusions, or poses which are far from the poses seen during training. (c) In some cases, our approach makes a correct prediction, but the ground truth 3D pose annotation is corrupt, \eg, the annotator confused the back and the front of a car or mislabeled the location of the object in the image. We highlight samples showing incorrect predictions or erroneous ground truth annotations with red frames.}
\label{fig:supp_failure}
\vspace{-0.6cm}
\end{figure}
\section{Ablation Study}
\label{sec:supp-abl}
Finally, Table~\ref{table:ablation} presents quantitative results of our approach with and without joint 3D pose and focal length refinement. For this purpose, we compare our initial solution obtained by EP\emph{n}P~\cite{Lepetit2009epnp} with our predicted focal length to the final solution computed by our joint 3D pose and focal length refinement. Jointly optimizing all parameters results in an improvement across all metrics. In fact, the initial solution already outperforms the state-of-the-art by a large margin.
Our geometric optimization is fast and efficient. In our implementation, the geometric optimization with joint refinement (Stage 2) takes only 5 ms, while the CNN forward pass (Stage 1) takes 60 ms per image on average.
\end{document}
\section{Introduction}
\label{sec:intro}
3D object pose estimation aims at predicting the 3D rotation and 3D translation of objects relative to the camera. It is a fundamental yet unsolved computer vision problem with many applications, including augmented reality, robotics, and scene understanding. Recently, there have been great advances in 3D object pose estimation from single RGB images on the category level~\cite{Tulsiani2015viewpoints,Ren2015faster,Mousavian20163d,Grabner2018a}, thanks to the development of deep learning and the creation of large-scale datasets providing 3D annotations for RGB images~\cite{Xiang2014beyond,Xiang2016objectnet3d}.
While recent approaches achieve high accuracy in terms of 3D rotation, their accuracy in terms of 3D translation is often low~\cite{Wang2018fine,Mottaghi2015coarse}. The main reason for this discrepancy is illustrated in Figure~\ref{fig:teaser}, where we compare two images of an object captured with cameras having different focal lengths. The appearance of the object is similar in both images, even though the 3D poses are significantly different. In fact, the appearance of an object in an image is not only determined by the 3D pose, but also by the camera intrinsics. While changes in the 3D rotation always significantly affect the appearance, changes in the 3D translation do not if the translation direction and the ratio between the object-to-camera distance and the focal length remain constant. Thus, estimating the 3D translation of objects from RGB images in the case of unknown intrinsics is highly ambiguous.
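This depth-focal ambiguity can be made concrete with a one-line pinhole model (a minimal sketch, not part of our method; \texttt{project\_x} is a hypothetical helper):

```python
def project_x(X, Z, f):
    # 1-D pinhole model: image coordinate of a point at lateral offset X
    # and depth Z, seen through a camera with focal length f.
    return f * X / Z

# Scaling both the focal length and the object-to-camera distance by the
# same factor leaves the projected coordinate, and hence the apparent
# size of the object, unchanged:
#   project_x(X, 2 * Z, 2 * f) == project_x(X, Z, f)
```

This is exactly why the absolute 3D translation cannot be read off a single image when the focal length is unknown.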
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{Images/teaser_illustration5.pdf}
\end{center}
\vspace{-0.377cm}
\caption{Images captured with two cameras having different focal lengths. The appearance of the chair is similar in both images, but the 3D poses are significantly different due to the distinct focal lengths and object-to-camera distances.}
\label{fig:teaser}
\vspace{-0.2cm}
\end{figure}
Existing approaches assume that the 3D pose estimation method will implicitly learn the subtle appearance variations caused by different focal lengths from the data and adapt the prediction accordingly~\cite{Wang2018fine,Mottaghi2015coarse}. In practice, however, this is not the case, because deep networks do not find the solutions we intend without explicit guidance.
To overcome this limitation, we propose to explicitly estimate and integrate the focal length into the 3D pose estimation. For this purpose, we introduce a two-stage approach that combines deep learning techniques and geometric algorithms. In the first stage, we estimate an initial focal length and establish 2D-3D correspondences from a single RGB image using a deep network. In the second stage, we perform a geometric optimization on the predicted correspondences to recover 3D poses and refine the focal length. In particular, we minimize the reprojection error between predicted 2D locations and 3D points subject to the 3D rotation, 3D translation, and the focal length by solving a P\emph{n}P f problem~\cite{Nakano2016versatile}. In this way, we exploit the geometric prior given by the focal length for 3D pose estimation.
In contrast to existing approaches, which also predict 3D poses and the focal length but only perform an independent estimation of the individual parameters~\cite{Wang2018fine}, our approach has two main advantages: First, explicitly modeling the focal length in the 3D pose estimation yields significantly improved 3D translation and 3D pose accuracy. Second, our approach finds a geometric consensus between 3D poses and the focal length. This results in a significantly improved 2D-3D alignment when projecting 3D models of objects back onto the image, which is important for many applications like augmented reality. Therefore, we call our method \emph{Geometric Projection Parameter Consensus} (GP\textsuperscript{2}C).
In addition, we explore two possible methods for establishing 2D-3D correspondences from RGB images, which approach the task from different directions. Our first method predicts 3D points for known 2D locations by estimating a 3D coordinate for each object pixel~\cite{Brachmann2014learning,Brachmann2016uncertainty,Jafari2018ipose}. Our second method predicts 2D locations for known 3D points by estimating the 2D projections of the object's 3D bounding box corners~\cite{Grabner2018a,Rad2017iccv,Tekin2018real}. Our experiments show that both methods achieve comparable accuracy, but each method has its respective advantages and disadvantages. Thus, we provide a detailed discussion comparing the two methods.
To demonstrate the benefits of our joint 3D pose and focal length estimation approach, we evaluate it on three challenging real-world datasets with different object categories: Pix3D~\cite{Sun2018pix3d} (\textit{bed}, \textit{chair}, \textit{sofa}, \textit{table}), Comp~\cite{Wang2018fine} (\textit{car}), and Stanford~\cite{Wang2018fine} (\textit{car}). We present quantitative as well as qualitative results and significantly outperform the state-of-the-art. To summarize, our main contributions are:
\begin{itemize}
\item We present the first method for joint 3D pose and focal length estimation that enforces a geometric consensus between 3D poses and the focal length.
\item We outperform the state-of-the-art by up to 20\% absolute in multiple metrics covering different aspects of projective geometry including 3D translation, 3D pose, focal length, and projection accuracy.
\end{itemize}
\section{Related Work}
\label{sec:relatedwork}
In this section, we discuss previous work on 3D pose estimation for object categories and approaches for estimating the camera intrinsics, in particular, the focal length.
\subsection{3D Pose Estimation}
A recent trend in computer vision is to predict pose parameters directly using deep learning. In this context, numerous works predict only the 3D rotation of objects using CNNs. These methods perform rotation classification~\cite{Tulsiani2015pose,Tulsiani2015viewpoints,Ren2015faster}, regression~\cite{Massa2016crafting,Xiang2016objectnet3d}, or apply hybrid variants of both~\cite{Mahendran2018mixed} using different parametrizations such as Euler angles, quaternions, or exponential maps.
In this work, however, we focus on the estimation of the full 3D pose, \ie, the 3D rotation and 3D translation of objects. In this case, many approaches combine the 3D rotation estimation techniques described above with 3D translation regression~\cite{Mousavian20163d,Mottaghi2015coarse,Li2018unified}. Because detecting and localizing objects in 2D is often a first step towards estimating the 3D pose, recent approaches integrate 3D pose estimation techniques into object detection pipelines making the entire system end-to-end trainable~\cite{Xiang2018posecnn,Kundu20183d,Wang2018fine,Kehl2017ssd}. However, these methods do not explicitly take the camera intrinsics into account, which results in poor performance on images captured with different focal lengths, for example.
In contrast to these direct approaches, there is a large amount of research on recovering the pose from 2D-3D correspondences, additionally considering a camera model~\cite{Hartley2003multiple}. In this context, recent approaches use CNNs to predict the 2D locations of the projections of 3D keypoints from RGB images~\cite{pepik20153d,pavlakos17object3d}. While \cite{pepik20153d} recovers the 3D pose from the predicted 2D locations and a given 3D model using a P\emph{n}P~algorithm, \cite{pavlakos17object3d} recovers the 3D pose from the predicted 2D locations alone using a trained deformable shape model. However, these approaches rely on category-specific semantic 3D keypoints which need to be selected and annotated manually for each 3D model.
In this work, we also predict 2D-3D correspondences from RGB images, but do not rely on category-specific 3D keypoints. In particular, we explore two different strategies. Our first strategy is to predict 3D points for known 2D locations. A natural choice is to predict a 3D point for each image pixel~\cite{Brachmann2014learning}. In this case, it is important to know which pixels belong to an object and which pixels belong to the background or another object~\cite{Brachmann2016uncertainty}. Recently, it has been shown that deep learning techniques for instance segmentation~\cite{He2017mask} significantly increase the accuracy on this task~\cite{Wang2018fine,Jafari2018ipose}. In contrast to our approach, \cite{Jafari2018ipose} relies on two disjoint networks for instance segmentation and 3D point regression, followed by a geometric optimization assuming a constant focal length. Instead, we use a single network to perform both tasks and additionally optimize the focal length. \cite{Wang2018fine} also regresses 3D points with a single network, but relies on a second network to estimate the 3D rotation from these points, whereas our approach uses a geometric optimization on arbitrary 2D-3D correspondences for joint 3D pose and focal length estimation.
Our second strategy is to predict the 2D locations of known 3D points. In this case, we choose to predict virtual 3D points which generalize across different objects and categories, \eg, the corners of the 3D bounding box of an object~\cite{Rad2017iccv,Tekin2018real}, instead of category-specific 3D keypoints. Recently, it has been shown that this approach can be extended to make predictions without the use of 3D models during inference~\cite{Grabner2018a}. In contrast to our work, \cite{Grabner2018a} assumes that all objects are already detected and localized in 2D, and uses a constant focal length.
\subsection{Focal Length Estimation}
Computing the focal length and other camera intrinsics from 2D-3D correspondences has a long tradition in computer vision~\cite{Faugeras93a,Hartley2003multiple}. In this context, the intrinsic and extrinsic parameters of the camera are often recovered jointly~\cite{Nakano2016versatile,Wu2015p3}. For this purpose, numerous works explicitly estimate the focal length and the 3D pose of the camera by solving a P\emph{n}P f problem~\cite{Penate2013exhaustive,Zheng2014general,zheng2016direct}.
In practice, these methods require precise 2D-3D correspondences, which are often selected manually or using calibration grids~\cite{Tsai1987versatile,Zhang2000flexible}. Many applications, however, require automatic calibration. In specific cases, it is possible to exploit geometric image elements such as lines~\cite{Dubska2015fully}, vanishing points~\cite{Szeliski2010computer}, or circles~\cite{Chen2004camera} to compute the intrinsics, but these methods do not generalize to arbitrary natural images.
Thus, recent works estimate the focal length from RGB images without requiring particular geometric structures using deep learning~\cite{Workman2015deepfocal,Wang2018fine}. In this work, we take a similar approach. However, in contrast to existing methods, we propose a different parametrization and additionally use 2D-3D correspondences to refine the predicted focal length.
\section{Joint 3D Pose and Focal Length Estimation}
\label{sec:method}
Given a single RGB image, we want to predict the focal length and the 3D pose of each object in the image. For this purpose, we introduce a two-stage approach that combines deep learning techniques and geometric algorithms, as shown in Figure~\ref{fig:system}. In the first stage, we predict an initial focal length and establish 2D-3D correspondences using deep learning (Sec.~\ref{sec:deep-contribution}). In the second stage, we perform a geometric optimization on the predicted correspondences to recover 3D poses and refine the focal length (Sec.~\ref{sec:geometric-contribution}).
\subsection{Stage 1: Deep Focal Length and 2D-3D\\ \hspace*{1.15cm} Correspondence Estimation}
\label{sec:deep-contribution}
To predict the focal length as well as 2D-3D correspondences with a single deep network, we extend the generalized Faster/Mask R-CNN framework~\cite{Ren2015faster,He2017mask}. This generic multi-task framework includes a 2D object detection pipeline to perform per-image and per-object computations. In this way, we address multiple different tasks using a single end-to-end trainable network. For our implementation, we use a Feature Pyramid Network~\cite{Lin2017feature} on top of a ResNet-101 backbone~\cite{He2016deep,He2016identity} and finetune a model pre-trained for instance segmentation on COCO~\cite{Lin2014microsoft}.
In the context of the generalized Faster/Mask R-CNN framework, an output branch provides one or more subnetworks with different structure and functionality. We introduce two dedicated output branches for estimating the focal length and 2D-3D correspondences alongside the existing object detection branches.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{Images/sys_overview34.pdf}
\end{center}
\vspace{-0.122cm}
\caption{Overview of our proposed two-stage approach. {\bf Stage 1:} We predict an initial focal length and establish 2D-3D correspondences using deep learning. {\bf Stage 2:} We perform a geometric optimization on the predicted correspondences to recover 3D poses and refine the focal length.}
\label{fig:system}
\end{figure}
\vspace{0.15cm}\noindent\textbf{Focal Length.} The focal length branch provides one subnetwork which performs a per-image computation. In this case, we regress a scalar for each image from the entire spatial resolution of the shared feature maps computed by the convolutional network backbone. In contrast to previous work, we propose to regress a logarithmic parametrization of the focal length
\begin{equation}
y_f = \ln(f),
\end{equation}
instead of predicting the focal length $f$ directly~\cite{Wang2018fine}, which has two advantages: First, the logarithmic parametrization reduces the bias towards minimizing the error on long focal lengths during the optimization of the network. This is meaningful because, regarding the estimation of the focal length, the relative error is more important than the absolute error. Second, the logarithmic parametrization achieves a more balanced sensitivity across the entire range of the focal length. Otherwise, the sensitivity is significantly higher for short focal lengths than for long focal lengths. During training, we optimize $y_f$ using the Huber loss~\cite{Huber1964robust}.
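A minimal sketch of this parametrization and the corresponding training loss (assuming a scalar regression target; the function names are illustrative, not from our implementation):

```python
import math

def focal_to_target(f):
    # Logarithmic parametrization y_f = ln(f): equal relative errors in f
    # map to equal absolute errors in y_f, so short and long focal
    # lengths contribute comparably to the loss.
    return math.log(f)

def target_to_focal(y_f):
    # Inverse mapping used at inference time.
    return math.exp(y_f)

def huber_loss(residual, delta=1.0):
    # Huber loss: quadratic near zero, linear in the tails.
    r = abs(residual)
    if r <= delta:
        return 0.5 * r * r
    return delta * (r - 0.5 * delta)
```

For example, a 10\% relative error in $f$ yields the same target-space residual $\ln(1.1)$ regardless of whether the focal length is 20 or 200.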
\vspace{0.15cm}\noindent\textbf{2D-3D correspondences.} For establishing 2D-3D correspondences, we explore two distinct methods. Both methods approach the problem from different directions and produce significantly different correspondences and representations, as shown in Figure~\ref{fig:representations}. However, our overall approach works with any kind of 2D-3D correspondences and does not depend on a specific format. Thus, the method for establishing correspondences can be exchanged. This is extremely useful, because different methods have their respective advantages and disadvantages which we discuss in our experiments in Sec.~\ref{sec:discussion}.
Our first method predicts 3D points for known 2D locations. In particular, we establish correspondences between 2D image pixels which belong to the object and 3D coordinates on the surface of the object. We represent these correspondences in the form of a location field~(LF)~\cite{Wang2018fine}, which provides dense 2D-3D correspondences in an image-like format, as shown in Figure~\ref{fig:representations-lf}. A location field has the same size and spatial resolution as its reference RGB image, but the three channels encode XYZ 3D coordinates in the object coordinate system instead of RGB colors. Due to its image-like structure, this representation is well-suited for regression with a CNN.
Our second method predicts 2D locations for known 3D points. In this case, we predict the 2D projections of the object's 3D bounding box corners~(BB)~\cite{Rad2017iccv}, as shown in Figure~\ref{fig:representations-bb}. Since the 3D coordinates of the bounding box corners are unknown during inference, we also predict the 3D dimensions of the object along the XYZ axes~\cite{Grabner2018a} from which we derive the required 3D points. We represent these sparse 2D-3D correspondences in the form of a 19-dimensional vector, which consists of the 2D locations of the eight bounding box corners (16 values) and the 3D dimensions of the object (3 values).
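The 19-dimensional regression target can be assembled and decomposed as follows (an illustrative sketch; the helper names are not from our implementation):

```python
def pack_bb_target(corners_2d, dims_3d):
    # corners_2d: eight (x, y) projections of the 3D bounding box corners;
    # dims_3d: (dx, dy, dz) object dimensions along the XYZ axes.
    assert len(corners_2d) == 8 and len(dims_3d) == 3
    vec = [c for xy in corners_2d for c in xy] + list(dims_3d)
    assert len(vec) == 19  # 16 corner values + 3 dimensions
    return vec

def unpack_bb_target(vec):
    # Recover the sparse 2D-3D correspondences from the network output.
    assert len(vec) == 19
    corners_2d = [(vec[2 * i], vec[2 * i + 1]) for i in range(8)]
    dims_3d = tuple(vec[16:19])
    return corners_2d, dims_3d
```

The 3D dimensions let us derive the eight 3D corner coordinates at inference time, since they are not known in advance.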
\begin{figure}
\begin{subfigure}{0.2\linewidth}
\begin{center}
\includegraphics[height=2.4cm]{Images/sample.png}
\caption{}
\end{center}
\end{subfigure}\begin{subfigure}{0.6\linewidth}
\begin{center}
\includegraphics[height=2.4cm]{Images/sample_lf.png}
\caption{}
\label{fig:representations-lf}
\end{center}
\end{subfigure}\begin{subfigure}{0.2\linewidth}
\begin{center}
\includegraphics[height=2.4cm]{Images/sample_bb8.png}
\caption{}
\label{fig:representations-bb}
\end{center}
\end{subfigure}
\caption{Visualization of two different forms of 2D-3D correspondences: (a) Image, (b) Location field which encodes XYZ 3D coordinates for each pixel ({\bf LF}), and (c) 2D projections of the object's 3D bounding box corners ({\bf BB}).}
\label{fig:representations}
\end{figure}
As shown in Figure~\ref{fig:overview}, we implement a separate 2D-3D correspondences branch for each method. In contrast to the focal length branch, both branches perform region-based per-object computations: For each detected object, an associated spatial region of interest in the feature maps is aligned to a fixed size feature representation with a low spatial resolution, \eg, $14\times14$. These aligned features serve as an input to one of our two proposed branches. Thus, the chosen 2D-3D correspondences branch is evaluated $N$ times for each image, where $N$ is the number of detected objects. We identify the chosen 2D-3D correspondences method by adding a suffix: Ours-LF or Ours-BB.
For the LF method, the correspondences branch provides two different fully convolutional subnetworks to predict a tensor of 3D points and a 2D object mask at a spatial resolution of $28\times28$. The 2D mask is then applied to the tensor of 3D points to get a low-resolution location field. We found this approach to produce significantly higher accuracy compared to directly regressing a low-resolution location field which tends to predict over-smoothed 3D coordinates around the object silhouette.
The resulting low-resolution location field can be upscaled and padded to obtain a high-resolution location field with the same spatial resolution as the input image. However, we sample 2D-3D correspondences from the low-resolution location field and only adjust their 2D locations to match the input image resolution. In this way, we avoid generating a large number of 2D-3D correspondences without providing additional information.
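This sampling step can be sketched as follows (assuming a per-detection box in input-image pixels and a boolean object mask; names and the cell-centre convention are illustrative):

```python
import numpy as np

def sample_correspondences(location_field, mask, box_xyxy):
    # location_field: (h, w, 3) XYZ coordinates in the object frame;
    # mask: (h, w) boolean object mask; box_xyxy: the detection's 2D box
    # (x0, y0, x1, y1) in input-image pixels. We keep one correspondence
    # per masked cell and only rescale its 2D location to the input
    # image, instead of upsampling the field to full resolution.
    h, w = mask.shape
    x0, y0, x1, y1 = box_xyxy
    rows, cols = np.nonzero(mask)
    # Cell centres in normalized box coordinates, mapped to image pixels.
    u = x0 + (cols + 0.5) / w * (x1 - x0)
    v = y0 + (rows + 0.5) / h * (y1 - y0)
    pts_2d = np.stack([u, v], axis=1)
    pts_3d = location_field[rows, cols]
    return pts_2d, pts_3d
```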
For the BB method, the correspondences branch also provides two subnetworks, but this time with fully connected output layers. One subnetwork predicts the 2D locations of the object's 3D bounding box corners, the other subnetwork estimates the 3D dimensions of the object. In this case, we regress the 2D location in normalized coordinates relative to the spatial resolution of the aligned features. Again, we adjust the predicted 2D locations to match the input image resolution.
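The denormalization of the predicted 2D locations can be sketched as (an illustrative helper, not our exact implementation):

```python
def denormalize_corners(corners_norm, box_xyxy):
    # corners_norm: (u, v) corner projections in [0, 1], relative to the
    # detection box; returns pixel coordinates in the input image.
    x0, y0, x1, y1 = box_xyxy
    return [(x0 + u * (x1 - x0), y0 + v * (y1 - y0))
            for u, v in corners_norm]
```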
During training, we optimize the 3D points and 2D mask (Ours-LF), or the 2D projections and 3D dimensions (Ours-BB) using the Huber loss~\cite{Huber1964robust}. The final network loss is a combination of our focal length loss, our chosen 2D-3D correspondences loss, and the 2D object detection losses of the generalized Faster/Mask R-CNN framework~\cite{Ren2015faster,He2017mask}.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{Images/info_graphic61.pdf}
\end{center}
\caption{Two alternative branches for predicting 2D-3D correspondences from an RGB image ({\bf LF} and {\bf BB}).}
\label{fig:overview}
\end{figure}
\subsection{Stage 2: Geometric Optimization}
\label{sec:geometric-contribution}
Once we have established correspondences between 2D locations and 3D points, we use the same geometric optimization for all methods. In this case, we perform a non-linear optimization of the P\emph{n}P f problem~\cite{Nakano2016versatile} which finds a geometric consensus between the individual projection parameters. In particular, we minimize the reprojection error
\begin{equation}
e_{\text{reproj}} = \frac{1}{N}\sum_{i=1}^N\mathcal{L}(\Vert \text{Proj}_{R,t,f}(\boldsymbol X_i) - \boldsymbol x_i \Vert_2) \> ,
\end{equation}
where $\boldsymbol X_i$ is a 3D point and $\boldsymbol x_i$ its corresponding 2D location. $\text{Proj}_{R,t,f}(\cdot)$ performs the projection from the 3D object coordinate system onto the 2D image plane with respect to the rotation $R$, translation $t$, and focal length $f$. $\mathcal{L}(\cdot)$ is a loss function, such as the standard squared loss $\mathcal{L}(x) = x^2$ or the more robust Cauchy loss~\cite{Triggs1999bundle} $\mathcal{L}(x) = \ln(1 + x^2)$, and $N$ denotes the number of correspondences.
We minimize $e_{\text{reproj}}$ over both the 3D pose and the focal length. In this case, a minimum of four 2D-3D correspondences is needed to find a unique solution~\cite{Wu2006pnp}, because each correspondence gives two independent equations and we optimize seven parameters: the 3-DoF rotation, the 3-DoF translation, and the 1-DoF focal length. In practice, however, it is important to use more 2D-3D correspondences to compensate for the presence of noise.
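The reprojection error above can be sketched in a few lines (principal point at the origin for brevity; an illustrative sketch rather than our implementation):

```python
import numpy as np

def project(points_3d, R, t, f):
    # Pinhole projection Proj_{R,t,f}: rotate, translate, divide by
    # depth, and scale by the focal length.
    cam = points_3d @ R.T + t
    return f * cam[:, :2] / cam[:, 2:3]

def reprojection_error(points_3d, points_2d, R, t, f, robust=False):
    # Mean loss over the N correspondences. The Cauchy loss ln(1 + x^2)
    # down-weights outliers compared to the squared loss x^2.
    d = np.linalg.norm(project(points_3d, R, t, f) - points_2d, axis=1)
    loss = np.log1p(d ** 2) if robust else d ** 2
    return loss.mean()
```

Each correspondence contributes two equations (one per image coordinate), so the seven optimized parameters require at least four correspondences, matching the count above.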
Following the strategy of previous P\emph{n}P(f) approaches~\cite{Lepetit2009epnp,Hesch2011direct,Penate2013exhaustive}, we compute an initial solution in $O(n)$ time followed by an iterative refinement technique. For our initial solution, we compute the 3D rotation and 3D translation using EP\emph{n}P~\cite{Lepetit2009epnp} with our predicted focal length. Providing a good initial focal length is a key factor in achieving high accuracy in terms of 3D translation. In theory, it is also possible to recover the focal length using 2D-3D correspondences from scratch~\cite{Penate2013exhaustive,Nakano2016versatile}, but this requires extremely accurate and clean correspondences. For correspondence estimation on the category level in the wild, however, we are facing fuzzy and noisy predictions. In this case, a low reprojection error is achieved by finding the correct ratio between the object-to-camera distance and the focal length. Thus, we cannot assume that the geometric optimization will find the correct absolute focal length from scratch.
Taking this into account, we jointly optimize the 3D rotation, 3D translation, and focal length during our iterative refinement. For this purpose, we employ a Newton-Step-based optimization~\cite{Conn2000trust} depending on the loss function $\mathcal{L}$, \ie, Levenberg-Marquardt~\cite{More1978levenberg} (squared loss) or Subspace Trust-Region Interior-Reflective~\cite{Branch1999subspace} (Cauchy loss).
Our approach naturally handles different projection models (egocentric or allocentric)~\cite{Kundu20183d}. Additionally, jointly optimizing the 3D poses of multiple objects in an image together with the focal length is straightforward. In this case, we compute the initial solution as before, but perform our iterative refinement for $1 + 6N$ parameters where $N$ is the number of detected objects. We did not evaluate this joint refinement, though, because available category level datasets with focal length annotations provide 3D annotations for only one object per image, even if there are multiple objects in the image~\cite{Sun2018pix3d,Wang2018fine}. In most cases, we are still able to detect the other objects, but do not have ground truth annotations to evaluate them, as shown in our qualitative results in Sec.~\ref{sec:sota}. Moreover, our approach can readily be extended to deal with more complex camera models including skew, off-center principal point, asymmetric aspect ratio or lens distortions~\cite{Nakano2016versatile}. However, currently there are no datasets with this kind of annotation.
\section{Conclusion}
Estimating the 3D poses of objects in the wild is an important but challenging task. In particular, predicting the 3D translation is difficult due to ambiguous appearances resulting from different focal lengths. To address this, we present the first joint 3D pose and focal length estimation approach that enforces a geometric consensus between 3D poses and the focal length. Our approach combines deep learning techniques and geometric algorithms to explicitly estimate and integrate the focal length into the 3D pose estimation. We evaluate our approach on three challenging real-world datasets (Pix3D, Comp, and Stanford) and significantly outperform the state-of-the-art by up to 20\%.
\section{Experimental Results}
\label{sec:experiments}
\newcommand{\text{gt}}{\text{gt}}
\newcommand{\text{pred}}{\text{pred}}
\definecolor{lightgreen}{RGB}{200,240,217}
\definecolor{lightred}{RGB}{240,200,200}
\begin{table*}
\centering
\setlength{\tabcolsep}{3.2pt}
\begin{tabular}{lcc|c|cc|c|c|c|cc}
\toprule
\multicolumn{3}{c}{}&\multicolumn{1}{c}{\bf Detection}&\multicolumn{2}{c}{\bf Rotation}&\multicolumn{1}{c}{\bf Translation}&\multicolumn{1}{c}{\bf Pose}&\multicolumn{1}{c}{\bf Focal}&\multicolumn{2}{c}{\bf Projection}\\
\cmidrule(lr){4-4}\cmidrule(lr){5-6}\cmidrule(lr){7-7}\cmidrule(lr){8-8}\cmidrule(lr){9-9}\cmidrule(lr){10-11}
\multirow{2}{*}{Method}&\multicolumn{1}{c}{\multirow{2}{*}{Dataset}}&\multicolumn{1}{c}{\multirow{2}{*}{Class}}&\multicolumn{1}{c}{\multirow{2}{*}{$Acc_{D_{0.5}}$}}&\multicolumn{1}{c}{$MedErr_R$}&\multicolumn{1}{c}{\multirow{2}{*}{$Acc_{R\frac{\pi}{6}}$}}&\multicolumn{1}{c}{$MedErr_{t}$}&\multicolumn{1}{c}{$MedErr_{R,t}$}&\multicolumn{1}{c}{$MedErr_f$}&\multicolumn{1}{c}{$MedErr_{P}$}&\multicolumn{1}{c}{\multirow{2}{*}{$Acc_{P_{0.1}}$}}\\
&&\multicolumn{1}{c}{}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{$\cdot1$}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{$\cdot10^{1}$}&\multicolumn{1}{c}{$\cdot10^{1}$}&\multicolumn{1}{c}{$\cdot10^{1}$}&\multicolumn{1}{c}{$\cdot10^{2}$}\\
\midrule
\cite{Wang2018fine}&\multirow{3}{*}{Pix3D}&\multirow{3}{*}{bed}&98.4\%&5.82&95.3\%&1.95&1.56&2.22&6.05&74.9\%\\
Ours-LF&&&99.0\%&\bf5.13&96.3\%&\bf1.41&\bf1.04&\bf1.43&\bf3.52&90.6\%\\
Ours-BB&&&\bf99.5\%&5.40&\bf97.9\%&1.66&1.17&1.59&3.55&\bf93.2\%\\
\midrule
\cite{Wang2018fine}&\multirow{3}{*}{Pix3D}&\multirow{3}{*}{chair}&94.9\%&7.52&88.0\%&2.69&1.58&1.98&6.04&75.3\%\\
Ours-LF&&&95.2\%&7.52&88.8\%&1.92&1.21&1.62&3.41&88.2\%\\
Ours-BB&&&\bf97.3\%&\bf6.95&\bf91.0\%&\bf1.68&\bf1.08&\bf1.58&\bf3.24&\bf90.9\%\\
\midrule
\cite{Wang2018fine}&\multirow{3}{*}{Pix3D}&\multirow{3}{*}{sofa}&96.5\%&4.73&94.8\%&2.28&1.62&2.42&4.33&82.2\%\\
Ours-LF&&&96.5\%&4.49&95.0\%&1.92&1.33&1.79&2.56&93.7\%\\
Ours-BB&&&\bf98.3\%&\bf4.40&\bf97.0\%&\bf1.63&\bf1.16&\bf1.73&\bf2.13&\bf95.6\%\\
\midrule
\cite{Wang2018fine}&\multirow{3}{*}{Pix3D}&\multirow{3}{*}{table}&94.0\%&10.94&72.9\%&3.16&2.28&3.03&8.90&53.6\%\\
Ours-LF&&&94.0\%&\bf10.53&73.5\%&\bf2.16&\bf1.62&\bf2.05&5.92&69.5\%\\
Ours-BB&&&\bf95.7\%&10.80&\bf77.2\%&2.81&1.78&2.10&\bf5.74&\bf72.4\%\\
\midrule
\cite{Wang2018fine}&\multirow{3}{*}{Pix3D}&\multirow{3}{*}{$mean$}&96.0\%&7.25&87.8\%&\cellcolor{lightred}2.52&\cellcolor{lightred}1.76&\cellcolor{lightred}2.41&\cellcolor{lightred}6.33&\cellcolor{lightred}71.5\%\\
Ours-LF&&&96.2\%&6.92&88.4\%&\bf\cellcolor{lightgreen}1.85&\bf\cellcolor{lightgreen}1.30&\bf\cellcolor{lightgreen}1.72&\cellcolor{lightgreen}3.85&\cellcolor{lightgreen}85.5\%\\
Ours-BB&&&\bf97.7\%&\bf6.89&\bf90.8\%&\cellcolor{lightgreen}1.94&\bf\cellcolor{lightgreen}1.30&\cellcolor{lightgreen}1.75&\bf\cellcolor{lightgreen}3.66&\bf\cellcolor{lightgreen}88.0\%\\
\midrule
\midrule
\cite{Wang2018fine}&\multirow{3}{*}{Comp}&\multirow{3}{*}{car}&\bf98.9\%&5.24&97.6\%&\cellcolor{lightred}3.30&\cellcolor{lightred}2.35&\cellcolor{lightred}3.23&\cellcolor{lightred}7.85&\cellcolor{lightred}73.7\%\\
Ours-LF&&&98.8\%&5.23&97.9\%&\cellcolor{lightgreen}2.61&\cellcolor{lightgreen}1.86&\cellcolor{lightgreen}2.97&\cellcolor{lightgreen}4.21&\cellcolor{lightgreen}95.1\%\\
Ours-BB&&&\bf98.9\%&\bf4.87&\bf98.1\%&\bf\cellcolor{lightgreen}2.55&\bf\cellcolor{lightgreen}1.84&\bf\cellcolor{lightgreen}2.95&\bf\cellcolor{lightgreen}3.87&\bf\cellcolor{lightgreen}95.7\%\\
\midrule
\midrule
\cite{Wang2018fine}&\multirow{3}{*}{Stanford}&\multirow{3}{*}{car}&\bf99.6\%&5.43&98.0\%&\cellcolor{lightred}2.33&\cellcolor{lightred}1.80&\cellcolor{lightred}2.34&\cellcolor{lightred}7.46&\cellcolor{lightred}76.4\%\\
Ours-LF&&&\bf99.6\%&5.38&\bf98.3\%&\cellcolor{lightgreen}1.93&\cellcolor{lightgreen}1.51&\bf\cellcolor{lightgreen}2.01&\cellcolor{lightgreen}3.72&\cellcolor{lightgreen}96.2\%\\
Ours-BB&&&\bf99.6\%&\bf5.24&\bf98.3\%&\bf\cellcolor{lightgreen}1.92&\bf\cellcolor{lightgreen}1.47&\cellcolor{lightgreen}2.07&\bf\cellcolor{lightgreen}3.25&\bf\cellcolor{lightgreen}96.5\%\\
\bottomrule
\end{tabular}
\caption{Experimental results on the Pix3D, Comp, and Stanford datasets. We significantly outperform the state-of-the-art in the 3D translation, 3D pose, focal length, and projection metrics. We explain the reported numbers in detail in Sec.~\ref{sec:sota}.}
\label{table:pix3d}
\end{table*}
To demonstrate the benefits of our joint 3D pose and focal length estimation approach (GP\textsuperscript{2}C), we evaluate it on three challenging real-world datasets\footnote{Details on the datasets and the evaluation setup are provided in the \textbf{supplementary material}.} with different object categories: Pix3D~\cite{Sun2018pix3d} (\textit{bed}, \textit{chair}, \textit{sofa}, \textit{table}), Comp~\cite{Wang2018fine} (\textit{car}), and Stanford~\cite{Wang2018fine} (\textit{car}). In particular, we provide a quantitative and qualitative evaluation of our approach in comparison to the state-of-the-art in Sec.~\ref{sec:sota}, analyze important aspects in Sec.~\ref{sec:analysis}, and discuss advantages and disadvantages of our two presented methods for establishing 2D-3D correspondences in Sec.~\ref{sec:discussion}. To cover different aspects of projective geometry in our evaluation, we use the following well-established metrics:
\vspace{0.15cm}\noindent\textbf{Detection.}
We report the detection accuracy $Acc_{D_{0.5}}$ which gives the percentage of objects for which the intersection over union between the ground truth 2D bounding box and the predicted 2D bounding box is larger than 50\%~\cite{Xiang2014beyond}. This metric is an upper bound for other $Acc$ metrics since we do not make blind predictions.
\vspace{0.15cm}\noindent\textbf{Rotation.}
We compute the geodesic distance
\begin{equation}
e_R = \frac{\Vert \log(R_\text{gt}^T R_\text{pred}^{\vphantom{T}} )\Vert_F}{\sqrt{2}}
\end{equation}
\noindent between the ground truth rotation matrix $R_\text{gt}$ and the predicted rotation matrix $R_\text{pred}$ which gives the minimal angular distance. We report the median of this distance ($MedErr_R$) and the percentage of objects for which the distance is below the threshold of $\frac{\pi}{6}$ or $30^\circ$ ($Acc_{R\frac{\pi}{6}}$)~\cite{Tulsiani2015viewpoints}.
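Since the matrix logarithm of a rotation by angle $\theta$ is skew-symmetric with Frobenius norm $\sqrt{2}\,\theta$, this distance equals the angle of the relative rotation and can be computed directly from the trace. A small sketch (the function name is ours):

```python
import numpy as np

def rotation_distance(R_gt, R_pred):
    # ||log(R_gt^T R_pred)||_F / sqrt(2) equals the angle of the relative rotation.
    # For a 3D rotation by angle theta, trace(R) = 1 + 2 cos(theta).
    cos_theta = (np.trace(R_gt.T @ R_pred) - 1.0) / 2.0
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```

The `clip` guards against values slightly outside $[-1,1]$ caused by floating-point rounding.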
\vspace{0.15cm}\noindent\textbf{Translation.}
We report the relative translation distance
\begin{equation}
e_t = \frac{\Vert t_\text{gt} - t_\text{pred} \Vert_2}{\Vert t_\text{gt} \Vert_2}
\end{equation}
between the ground truth translation $t_\text{gt}$ and the predicted translation $t_\text{pred}$~\cite{Hodavn2016evaluation}.
\vspace{0.15cm}\noindent\textbf{Pose.}
We calculate the average normalized distance of all transformed model points in 3D space
\begin{equation}
e_{R,t} = \underset{\boldsymbol{X} \in \mathcal{M}}{\text{avg}} \frac{d_{\text{bbox}}}{d_{\text{img}}} \cdot \frac{\Vert \text{Transf}_\text{gt}(\boldsymbol X) - \text{Transf}_\text{pred}(\boldsymbol X) \Vert_2}{\Vert t_\text{gt} \Vert_2}
\end{equation}
to evaluate 3D pose accuracy~\cite{Hinterstoisser2012model,Hodavn2016evaluation}. In this case, each 3D point $\boldsymbol X$ of the ground truth 3D model $\mathcal{M}$ is transformed using the ground truth 3D pose $\text{Transf}_\text{gt}(\cdot)$ and the predicted 3D pose $\text{Transf}_\text{pred}(\cdot)$ subject to rotation and translation. We normalize this distance by the relative size of the object in the image using the ratio between the ground truth 2D bounding box diagonal $d_{\text{bbox}}$ and the image diagonal $d_{\text{img}}$, and the L2-norm of the ground truth translation $\Vert t_\text{gt} \Vert_2$. This normalization provides an unbiased metric for 3D pose evaluation in the case of unknown intrinsics.
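A direct transcription of this metric, assuming the model $\mathcal{M}$ is given as an $N \times 3$ array (an illustrative sketch; the names are ours):

```python
import numpy as np

def pose_error(M, R_gt, t_gt, R_pred, t_pred, d_bbox, d_img):
    # Average 3D distance between model points transformed by the two poses,
    # normalized by the relative object size (bbox/image diagonal ratio)
    # and the norm of the ground truth translation.
    d = np.linalg.norm((M @ R_gt.T + t_gt) - (M @ R_pred.T + t_pred), axis=1)
    return float((d_bbox / d_img) * np.mean(d) / np.linalg.norm(t_gt))
```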
\vspace{0.15cm}\noindent\textbf{Focal Length.}
We report the relative focal length error
\begin{equation}
e_{f} = \frac{\vert f_\text{gt} - f_\text{pred} \vert}{f_\text{gt}}
\end{equation}
between the ground truth focal length $f_\text{gt}$ and the predicted focal length $f_\text{pred}$~\cite{Penate2013exhaustive,Wu2015p3}.
\vspace{0.15cm}\noindent\textbf{Projection.}
To evaluate all projection parameters, we compute the average normalized reprojection distance
\begin{equation}
e_{P} = \underset{\boldsymbol{X} \in \mathcal{M}}{\text{avg}} \frac{\Vert \text{Proj}_\text{gt}(\boldsymbol X) - \text{Proj}_\text{pred}(\boldsymbol X) \Vert_2}{d_{\text{bbox}}} \> .
\end{equation}
In this case, each 3D point $\boldsymbol X$ of the ground truth 3D model $\mathcal{M}$ is projected to a 2D location using the ground truth projection parameters $\text{Proj}_\text{gt}(\cdot)$ and the predicted projection parameters $\text{Proj}_\text{pred}(\cdot)$ subject to rotation, translation, and focal length. $d_{\text{bbox}}$ is the ground truth 2D bounding box diagonal. We report the median of this distance ($MedErr_P$) and the percentage of objects for which the distance is below the threshold of $0.1$ ($Acc_{P_{0.1}}$)~\cite{Wang2018fine}.
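This metric can be sketched analogously, again assuming a simple pinhole projection with the principal point at the origin (names are ours):

```python
import numpy as np

def project(X, R, t, f):
    # Pinhole projection of Nx3 object points.
    Xc = X @ R.T + t
    return f * Xc[:, :2] / Xc[:, 2:3]

def projection_error(M, params_gt, params_pred, d_bbox):
    # Average 2D distance between model points projected with the ground truth
    # and predicted (R, t, f), normalized by the 2D bounding box diagonal.
    d = np.linalg.norm(project(M, *params_gt) - project(M, *params_pred), axis=1)
    return float(np.mean(d) / d_bbox)
```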
\begin{figure}[h!]
\setlength{\tabcolsep}{1pt}
\setlength{\fboxsep}{-2pt}
\setlength{\fboxrule}{2pt}
\definecolor{boxgreen}{rgb}{0.3, 1.0, 0.3}
\definecolor{boxred}{rgb}{1.0, 0.3, 0.3}
\newcommand{\colImgN}[1]{{\includegraphics[width=0.19\linewidth]{#1}}}
\newcommand{\colImgR}[1]{{\color{boxred}\fbox{\colImgN{#1}}}}
\newcommand{\colImgG}[1]{{\color{boxgreen}\fbox{\colImgN{#1}}}}
\centering
\begin{tabular}{ccccc}
\colImgN{Images/Collage/comp_1427.png}& \colImgN{Images/Collage/comp_1427_gt.png}& \colImgR{Images/Collage/comp_1427_dir.png}& \colImgG{Images/Collage/comp_1427_lf.png}& \colImgG{Images/Collage/comp_1427_bb.png}\\[-1.5pt]
\colImgN{Images/Collage/comp_543.png}& \colImgN{Images/Collage/comp_543_gt.png}& \colImgR{Images/Collage/comp_543_dir.png}& \colImgG{Images/Collage/comp_543_lf.png}& \colImgG{Images/Collage/comp_543_bb.png}\\[-1.5pt]
\colImgN{Images/Collage/stanford_505.png}& \colImgN{Images/Collage/stanford_505_gt.png}& \colImgR{Images/Collage/stanford_505_dir.png}& \colImgG{Images/Collage/stanford_505_lf.png}& \colImgG{Images/Collage/stanford_505_bb.png}\\[-1.5pt]
\colImgN{Images/Collage/59.png}& \colImgN{Images/Collage/59_gt.png}& \colImgR{Images/Collage/59_dir.png}& \colImgG{Images/Collage/59_lf.png}& \colImgG{Images/Collage/59_bb.png}\\[-1.5pt]
\colImgN{Images/Collage/184.png}& \colImgN{Images/Collage/184_gt.png}& \colImgR{Images/Collage/184_dir.png}& \colImgG{Images/Collage/184_lf.png}& \colImgG{Images/Collage/184_bb.png}\\[-1.5pt]
\colImgN{Images/Collage/405.png}& \colImgN{Images/Collage/405_gt.png}& \colImgR{Images/Collage/405_dir.png}& \colImgG{Images/Collage/405_lf.png}& \colImgG{Images/Collage/405_bb.png}\\[-1.5pt]
\colImgN{Images/Collage/1458.png}& \colImgN{Images/Collage/1458_gt.png}& \colImgR{Images/Collage/1458_dir.png}& \colImgG{Images/Collage/1458_lf.png}& \colImgG{Images/Collage/1458_bb.png}\\[-1.5pt]
\colImgN{Images/Collage/228.png}& \colImgN{Images/Collage/228_gt.png}& \colImgR{Images/Collage/228_dir.png}& \colImgG{Images/Collage/228_lf.png}& \colImgG{Images/Collage/228_bb.png}\\[-1.5pt]
\colImgN{Images/Collage/1669.png}& \colImgN{Images/Collage/1669_gt.png}& \colImgR{Images/Collage/1669_dir.png}& \colImgG{Images/Collage/1669_lf.png}& \colImgG{Images/Collage/1669_bb.png}\\[-1.5pt]
\colImgN{Images/Collage/1723.png}& \colImgN{Images/Collage/1723_gt.png}& \colImgR{Images/Collage/1723_dir.png}& \colImgG{Images/Collage/1723_lf.png}& \colImgG{Images/Collage/1723_bb.png}\\[-1.5pt]
\colImgN{Images/Collage/2281.png}& \colImgN{Images/Collage/2281_gt.png}& \colImgR{Images/Collage/2281_dir.png}& \colImgG{Images/Collage/2281_lf.png}& \colImgG{Images/Collage/2281_bb.png}\\[-1.5pt]
\colImgN{Images/Collage/2449.png}& \colImgN{Images/Collage/2449_gt.png}& \colImgR{Images/Collage/2449_dir.png}& \colImgG{Images/Collage/2449_lf.png}&
\colImgG{Images/Collage/2449_bb.png}\\[-1.5pt]
\colImgN{Images/Collage/2295.png}& \colImgN{Images/Collage/2295_gt.png}& \colImgR{Images/Collage/2295_dir.png}& \colImgR{Images/Collage/2295_lf.png}& \colImgR{Images/Collage/2295_bb.png}\\[-1.5pt]
\footnotesize Image&\footnotesize Ground Truth&\footnotesize \cite{Wang2018fine}&\footnotesize Ours-LF&\footnotesize Ours-BB\\[-3pt]
\end{tabular}
\caption{Qualitative 3D pose and focal length estimation results for all evaluated datasets and categories. We project the ground truth 3D model onto the image using the 3D pose and focal length predicted by different approaches. In contrast to \cite{Wang2018fine}, our approach finds a geometric consensus between the parameters which results in improved 2D-3D alignment, \eg, the {\bf scale} of the projection. We highlight respective samples with frames. {\bf Best viewed in digital zoom.}}
\label{fig:collage}
\vspace{-0.605cm}
\end{figure}
\subsection{Comparison to the State-of-the-Art}
\label{sec:sota}
We first present quantitative results of our approach using our two different methods for establishing 2D-3D correspondences (Ours-LF and Ours-BB) and compare them to the state-of-the-art. To this end, we reimplemented the approach of~\cite{Wang2018fine} and achieve comparable results, even outperforming their reported $MedErr_P$ and $Acc_{P_{0.1}}$ scores due to our improved backbone architecture and initialization. The results are summarized in Table~\ref{table:pix3d}. We achieve consistent results across all datasets and categories; thus, we provide a joint discussion based on the evaluated metrics:
\vspace{0.15cm}\noindent\textbf{Detection.}
All methods achieve high detection accuracy ($Acc_{D_{0.5}}$). This is not surprising, because we fine-tune a model pre-trained for instance segmentation on COCO~\cite{Lin2014microsoft}. In fact, all evaluated categories are also present in COCO.
\vspace{0.15cm}\noindent\textbf{Rotation.}
Also, all methods achieve high rotation accuracy~($MedErr_R$ and $Acc_{R\frac{\pi}{6}}$). Our reported numbers are in line with the results of previous work on rotation estimation in the wild~\cite{Grabner2018a,Wang2018fine,Tulsiani2015viewpoints} and confirm that 3D rotation can robustly be recovered from 2D observations up to a certain precision. Only for the category $table$ do we observe below-average accuracy. In fact, almost all tables have symmetries, as can be seen in Figure~\ref{fig:collage}, which sometimes confuse all evaluated methods, because they predict a single 3D pose rather than a distribution (see last $table$ sample).
\vspace{0.15cm}\noindent\textbf{Translation.}
In terms of translation accuracy ($MedErr_t$), our approach significantly outperforms the state-of-the-art. Directly predicting the 3D translation from a local image window of an object is highly ambiguous in the case of unknown intrinsics. By explicitly estimating and integrating the focal length into the 3D pose estimation, we exploit a geometric prior and achieve a relative improvement of 20\%.
\vspace{0.15cm}\noindent\textbf{Pose.}
In the case of unknown intrinsics, the 3D pose accuracy ($MedErr_{R,t}$) is primarily governed by the translation accuracy. Therefore, we also observe a relative improvement of 20\% compared to the state-of-the-art.
\vspace{0.15cm}\noindent\textbf{Focal Length.}
Considering the focal length accuracy ($MedErr_f$), our approach outperforms the state-of-the-art by a relative improvement of 10\% due to our logarithmic parametrization and refinement.
\vspace{0.15cm}\noindent\textbf{Projection.}
Finally, we report the projection metrics ($MedErr_P$ and $Acc_{P_{0.1}}$), which evaluate all predicted parameters. In these metrics, we achieve the largest improvement compared to the state-of-the-art: {\bf 20\% absolute} in $Acc_{P_{0.1}}$ and {\bf 40\% relative} in $MedErr_P$ across all datasets. In contrast to an independent estimation of the individual projection parameters, our approach finds a geometric consensus which results in improved 2D-3D alignment and reprojection error. This quantitative improvement is also reflected in our qualitative results shown in Figure~\ref{fig:collage}. In this experiment, our approach consistently produces a higher quality 2D-3D alignment compared to the state-of-the-art for objects of different categories. This significant improvement can be attributed to the fact that we minimize the reprojection error during inference. However, we want to emphasize that the 3D model is only used for the evaluation. The 3D poses and focal length are computed solely from a single RGB image in our approach.
\subsection{Analysis}
\label{sec:analysis}
\begin{table}
\centering
\begin{tabular}{lc|cc}
\toprule
\multicolumn{2}{c}{}&\multicolumn{2}{c}{\bf Projection}\\
\cmidrule(lr){3-4}
Method&\multicolumn{1}{c}{P\emph{n}P}&$MedErr_P \cdot 10^{2}$&$Acc_{P_{0.1}}$\\
\midrule
\multirow{3}{*}{Ours-LF}&Standard&3.88&85.3\%\\
&RANSAC&3.87&85.4\%\\
&Cauchy&\bf3.85&\bf85.5\%\\
\midrule
\multirow{3}{*}{Ours-BB}&Standard&3.68&87.5\%\\
&RANSAC&3.68&87.6\%\\
&Cauchy&\bf3.66&\bf88.0\%\\
\bottomrule
\end{tabular}
\vspace{-0.1cm}
\caption{Evaluation of different P\emph{n}P~strategies. The results show that our predicted 2D-3D correspondences are reliable and do not contain single extreme outliers.}
\label{table:pnp}
\vspace{-0.3cm}
\end{table}
Next, we analyze two important aspects of our approach: (a) the robustness of our predicted 2D-3D correspondences and (b) the importance of the focal length for estimating 3D poses from these correspondences. For this purpose, we perform experiments on Pix3D, which is the most challenging dataset, because it provides multiple object categories and has the largest variation in object scale.
First, we run our approaches using different P\emph{n}P~strategies and compare the obtained results using the projection metrics ($MedErr_P$ and $Acc_{P_{0.1}}$) in Table~\ref{table:pnp}. In particular, we compare the standard approach, which is sensitive to outliers due to the squared loss $\mathcal{L}(x) = x^2$, to the more robust RANSAC scheme and Cauchy loss~\cite{Triggs1999bundle} $\mathcal{L}(x) = \ln(1 + x^2)$.
All three P\emph{n}P~strategies achieve similar performance for both Ours-LF and Ours-BB. This experiment shows that our predicted 2D-3D correspondences do not contain single extreme outliers which are often present in traditional interest-point-based approaches. This is due to the fact that all 2D-3D correspondences are computed from a low-dimensional feature embedding which produces consistent predictions\footnote{Qualitative examples of our predicted 2D-3D correspondences are provided in the \textbf{supplementary material}.}.
Second, to demonstrate the importance of the focal length for estimating 3D poses from 2D-3D correspondences, we initialize the geometric optimization with three different focal lengths and compare the results using the 3D pose distance in Figure~\ref{fig:curves}. In this experiment, we plot the percentage of objects for which the 3D pose distance is below a threshold varying in the range [0,1] ($Acc_{R,t}$).
As expected, if we initialize the geometric optimization with the ground truth focal length, we achieve the highest 3D pose accuracy. However, for 3D pose estimation in the wild, the focal length is unknown during inference. In this case, we can use a constant or a predicted focal length for initialization. Even if we use the best possible constant focal length, which is the median focal length of the training dataset, the accuracy drops significantly. Instead, if we initialize using our predicted focal length, we achieve improved 3D pose accuracy. However, there is still a gap in the accuracy compared to using the ground truth focal length.
\begin{figure}
\centering
\begin{subfigure}{0.5\linewidth}
\begin{center}
\input{Graphs/curve_lf.tex}
\vspace{-0.5cm}
\caption{Ours-LF}
\end{center}
\end{subfigure}\begin{subfigure}{0.5\linewidth}
\begin{center}
\input{Graphs/curve_bb.tex}
\vspace{-0.5cm}
\caption{Ours-BB}
\end{center}
\end{subfigure}
\vspace{-0.2cm}
\caption{Evaluation of different initial focal lengths. The results show that a good initial estimate of the focal length is a key factor for achieving high 3D pose accuracy.}
\label{fig:curves}
\vspace{-0.276cm}
\end{figure}
\subsection{Discussion}
\label{sec:discussion}
So far, our results show that both presented 2D-3D correspondence estimation methods (LF and BB) achieve a similar level of accuracy. However, each method has specific characteristics advantageous for different tasks.
For example, LF implicitly handles truncations and occlusions, because it estimates 3D points for visible object parts and resolves occlusions using the 2D mask. Moreover, the predicted dense 2D-3D correspondences might also be useful for other tasks like dense depth estimation or shape reconstruction. However, this method requires detailed 3D models for training.
In contrast, BB only requires accurate 3D bounding boxes for training. The overall design of this method is simpler and more lightweight, which makes it easier to implement and train. This is also reflected in our reported numbers, which show a slight advantage compared to LF. Additionally, BB always gives a fixed number of sparse 2D-3D correspondences. This results in fast inference, which is beneficial for real-time applications, for example. However, while this method is well-suited for dealing with box-shaped objects like cars, other approaches might perform better on highly non-box-shaped objects.
\section{Introduction}
Permutation complexity of aperiodic words is a relatively new notion of word complexity which was first introduced and studied by Makarov $\cite{Makar06}$, building on ideas of S.V. Avgustinovich (see the acknowledgements in $\cite{FlaFrid}$). It is based on the idea of an infinite permutation associated to an aperiodic word. For an infinite aperiodic word $\w$, no two shifts of $\w$ are identical. Thus, given a linear order on the symbols used to compose $\w$, no two shifts of $\w$ are equal lexicographically. The infinite permutation associated with $\w$ is the linear order on $\N$ induced by the lexicographic order of the shifts of $\w$. The permutation complexity of the word $\w$ is the number of distinct subpermutations of a given length of the infinite permutation associated with $\w$.
Infinite permutations associated with infinite aperiodic words over a binary alphabet are fairly well behaved, but many of the arguments used for binary words break down when applied to words over more than two symbols. Given a subpermutation of length $n$ of an infinite permutation associated with a binary word, a portion of length $n-1$ of the word can be recovered from the subpermutation. This is not always the case for subpermutations associated with words over 3 or more symbols. For example, consider the permutation $(1 \hspace{0.5ex} 2 \hspace{0.5ex} 3)$. If this permutation is associated with a binary word over $\{0,1 \}$, with $0<1$, it could only correspond to the word $00$. On the other hand, if this permutation is associated with a word over 3 symbols, suppose $\{0,1,2 \}$ with $0<1<2$, then the permutation could be associated with any of $00$, $01$, $11$, or $12$.
For binary words the subpermutations depend on the order on the symbols used to compose $\w$, but the permutation complexity does not depend on the order. For words over 3 or more symbols, not only do the subpermutations depend on the order on the alphabet but so does the permutation complexity. For example, consider the Fibonacci word
$$t = 0100101001001010010100100101\ldots,$$
defined by iterating the morphism $0 \mapsto 01, 1 \mapsto 0$ on the letter $0$, and suppose the 1s are replaced by alternating $a$'s and $b$'s to create the word:
$$\hat{t} = 0a00b0a00b00a0b00a0b00a00b0a\ldots.$$
If the symbols in $\hat{t}$ are ordered $0<a<b$ there will be 5 distinct subpermutations of length 3, and if the symbols are ordered $a<0<b$ there will be only 4 distinct subpermutations of length 3. The verification of this fact is left to the reader.
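This verification amounts to a short computation: generate a long prefix of $\hat{t}$, compare its shifts lexicographically, and count the distinct length-3 patterns. Below is an illustrative sketch with our own helper names; comparing a few hundred letters of each shift is more than enough to decide every comparison that occurs here, since the Fibonacci word contains no arbitrarily long repetitions.

```python
def fibonacci_word(iterations=20):
    # iterate the morphism 0 -> 01, 1 -> 0 on the letter 0
    w = "0"
    for _ in range(iterations):
        w = "".join("01" if c == "0" else "0" for c in w)
    return w

def alternate_ones(w):
    # replace the 1s by alternating a's and b's
    out, use_a = [], True
    for c in w:
        if c == "1":
            out.append("a" if use_a else "b")
            use_a = not use_a
        else:
            out.append(c)
    return "".join(out)

def count_subpermutations(w, order, m=3, positions=200, depth=500):
    # count distinct length-m patterns induced by the lexicographic
    # order of the shifts of w, under the letter order given by `order`
    rank_of = {c: i for i, c in enumerate(order)}
    enc = [rank_of[c] for c in w]
    patterns = set()
    for i in range(positions):
        suffixes = [tuple(enc[i + j : i + j + depth]) for j in range(m)]
        by_rank = sorted(range(m), key=lambda j: suffixes[j])
        patterns.add(tuple(by_rank.index(j) + 1 for j in range(m)))
    return len(patterns)
```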
In view of the notion of an infinite permutation associated to an aperiodic word, it is natural to compute the permutation complexity of well-known classes of words. In $\cite{Makar09}$, Makarov computes the permutation complexity of Sturmian words. The goal of this paper is to determine the permutation complexity of the Thue-Morse word.
The Thue-Morse word, $T = T_0T_1T_2 \cdots$, is:
$$ T = 0110 1001 1001 0110 1001 0110 0110 1001 \cdots,$$
which can be generated by the morphism:
$$\mu_T:0 \mapsto 01, \hspace{1.5ex} 1 \mapsto 10, $$
by iterating on the letter $0$. Axel Thue introduced this word in his studies of repetitions in words, and proved that the word $T$ is overlap-free ($\cite{Thue12}$). A word $\w$ is said to be $\textit{overlap-free}$ if it does not contain a factor of the form $vuvuv$ for words $u$ and $v$, with $v$ non-empty.
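Both the generation of $T$ and its overlap-freeness are easy to check computationally on a finite prefix; the small sketch below uses the equivalent characterization that an overlap is a factor of length $2p+1$ with period $p$ (the helper names are ours).

```python
def thue_morse(iterations=6):
    # iterate mu_T: 0 -> 01, 1 -> 10 on the letter 0
    w = "0"
    for _ in range(iterations):
        w = "".join("01" if c == "0" else "10" for c in w)
    return w

def has_overlap(w):
    # an overlap v u v u v (v non-empty) is the same as a factor
    # of length 2p + 1 having period p, for some p >= 1
    n = len(w)
    for i in range(n):
        for p in range(1, (n - i - 1) // 2 + 1):
            if all(w[i + k] == w[i + k + p] for k in range(p + 1)):
                return True
    return False
```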
The Thue-Morse word was again discovered independently by Marston Morse in 1921 $\cite{Morse21}$ through his study of differential geometry, and used in the foundations of symbolic dynamics. For a more in depth look at further properties, independent discoveries, and applications of the Thue-Morse word see $\cite{AllSha99}$.
The factor complexity of the Thue-Morse word was computed independently by two groups in 1989, Brlek $\cite{Brlek89}$ and de Luca and Varricchio $\cite{LucaVarr89}$. Our proof of the permutation complexity of the Thue-Morse word does not use the factor complexity function.
The permutation complexity of the Thue-Morse word can be found as follows. For any $n \geq 2$, we can write $n$ as $n = 2^a + b$, with $0 < b \leq 2^a$. Using this notation, it will be shown that the formula for the permutation complexity of $T$, initially conjectured by M. Makarov, is
$$ \tau_T(n) = 2( 2^{a+1} + b - 2 ). $$
We give a non-trivial proof of this formula here. We start with some basic notation and definitions. Some properties of infinite permutations are given in Section $\ref{GeneralPermResults}$. The infinite permutation associated with the Thue-Morse word, $\pi_T$, is introduced in Section $\ref{ThueMorsePermutation}$. Patterns found in the subpermutations of $\pi_T$ are studied in Section $\ref{TypeKandCompPairs}$, while Section $\ref{SecTypeOnePairs}$ investigates when a specific pattern occurs. The formula for the permutation complexity is established in Section $\ref{FormulaForPermComp}$. Low order subpermutations are listed in Appendix $\ref{SecTheSubperms}$ to be used as a base case for induction arguments.
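As a quick sanity check, the formula can be compared with a brute-force count of subpermutations computed from a long prefix of $T$. The sketch below uses our own helper names; since $T$ is overlap-free, two shifts at distance $d$ must differ within their first $2d$ letters, so short prefix comparisons suffice to order the shifts.

```python
def thue_morse(iterations=14):
    # iterate mu_T: 0 -> 01, 1 -> 10 on the letter 0
    w = "0"
    for _ in range(iterations):
        w = "".join("01" if c == "0" else "10" for c in w)
    return w

def tau_formula(n):
    # write n = 2^a + b with 0 < b <= 2^a, then tau(n) = 2(2^(a+1) + b - 2)
    a = (n - 1).bit_length() - 1
    b = n - 2 ** a
    return 2 * (2 ** (a + 1) + b - 2)

def tau_bruteforce(n, positions=4000, depth=64):
    # count distinct length-n patterns of the lexicographic order of shifts
    w = thue_morse()
    patterns = set()
    for i in range(positions):
        suffixes = [w[i + j : i + j + depth] for j in range(n)]
        by_rank = sorted(range(n), key=lambda j: suffixes[j])
        patterns.add(tuple(by_rank.index(j) + 1 for j in range(n)))
    return len(patterns)
```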
\subsection{Words}
A $\textit{word}$ is a finite, (right) infinite, or bi-infinite sequence of symbols taken from a finite non-empty set, $A$, called an $\textit{alphabet}$. The standard operation on words is concatenation, and is represented by juxtaposition of letters and words. A $\textit{finite word}$ over $A$ is a word of the form $u = a_1 a_2 \ldots a_n$ with $n \geq 0$ (if $n=0$ we say $u$ is the $\textit{empty word}$, denoted $\epsilon$) and each $a_i \in A$; the $\textit{length}$ of the word $u$ is the number of symbols in the sequence and is denoted by $\abs{u} = n$. For $a \in A$, let $\abs{u}_a$ denote the number of occurrences of the letter $a$ in the word $u$. The set of all finite words over the alphabet $A$ is denoted by $A^*$, and is a free monoid with concatenation of words as the operation.
A $\textit{(right) infinite word}$ over $A$ is a word of the form $\w = \w_0 \w_1 \w_2 \ldots$ with each $\w_i \in A$, and the set of all infinite words over $A$ is denoted $A^\N$. Given $\w \in A^* \cup A^\N$, any word of the form $u=\w_i\w_{i+1} \ldots \w_{i+n-1}$, with $i \geq 0$, is called a $\textit{factor}$ of $\w$ of length $n \geq 1$. The set of all factors of a word $\w$ is denoted by $\scr{F}(\w)$. The set of all factors of length $n$ of $\w$ is denoted $\scr{F}_\w(n)$, and let $\rho_\w(n) = \abs{\scr{F}_\w (n)}$. The function $\rho_\w: \N \rightarrow \N $ is called the $\textit{factor complexity function}$, or $\textit{subword complexity function}$, of $\w$ and it counts the number of factors of length $n$ of $\w$. For a natural number $i$ we denote by $\w[i] = \w_i\w_{i+1}\w_{i+2}\w_{i+3}\ldots$ the $i$$\textit{-letter shift of}$ $\w$. For natural numbers $i \leq j$, $\w[i,j] = \w_i\w_{i+1}\w_{i+2} \ldots \w_j$ denotes the factor of length $j-i+1$ starting at position $i$ in $\w$.
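For a finite word, $\rho_\w(n)$ is straightforward to compute directly from the definition (a small sketch; the function name is ours):

```python
def factor_complexity(w, n):
    # number of distinct factors of length n occurring in the finite word w
    return len({w[i:i + n] for i in range(len(w) - n + 1)})
```

For instance, the prefix $0110100110010110$ of the Thue-Morse word already contains all four binary factors of length 2.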
For words $u \in A^*$ and $v \in A^* \cup A^\N$ where $\w = uv$, we call $u$ a $\textit{prefix}$ of $\w$ and $v$ a $\textit{suffix}$ of $\w$. A word $\w$ is said to be $\textit{periodic}$ of period $p$ if for each $i \in \N$, $\w_i = \w_{i+p}$, and $\w$ is said to be $\textit{eventually periodic}$ of period $p$ if there exists an $N \in \N$ so that for each $i > N$, $\w_i = \w_{i+p}$; or equivalently, $\w$ has a periodic suffix. A word $\w$ is said to be $\textit{aperiodic}$ if it is not periodic or eventually periodic.
Let $A$ and $B$ be two finite alphabets. A map $\varphi: A^* \rightarrow B^*$ so that $\varphi(uv) = \varphi(u)\varphi(v)$ for any $u,v \in A^*$ is called a $\textit{morphism}$ of $A^*$ into $B^*$, and $\varphi$ is defined by the image of each letter in $A$. A morphism on $A$ is a morphism from $A^*$ into $A^*$, also called an $\textit{endomorphism}$ of $A$. A morphism $\varphi$ is said to be $\textit{non-erasing}$ if the image of any non-empty word is not empty.
The action of a morphism $\varphi$ on $A$ can naturally be extended from $A^*$ to $A^\N$. For any $\w = \w_0\w_1\w_2\ldots \in A^\N$, we define $\varphi(\w) = \varphi(\w_0)\varphi(\w_1)\varphi(\w_2)\ldots$ as in the case for words in $A^*$. We say that a word $\w$ is a $\textit{fixed point}$ of the morphism $\varphi$ if $\varphi(\w) = \w$. If $\varphi$ is a morphism on $A$ and if $\varphi(a) = au$ for some $a \in A$ and non-empty $u \in A^*$, then $\varphi$ is said to be $\textit{prolongable}$ on $a$. If $\varphi$ is a morphism on $A$ that is prolongable on some $a \in A$, then $\varphi^n(a)$ is a proper prefix of $\varphi^{n+1}(a)$ for each $n \in \N$. The limit of the sequence $\left\{ \varphi^n(a) \right\}_{n \in \N}$ is then the unique infinite word
$$ \w = \lim_{n \rightarrow \infty} \varphi^n(a) = \varphi^\infty(a) = au\varphi(u)\varphi^2(u) \cdots $$
where $\w$ is a fixed point of $\varphi$, and we say that $\w$ is $\textit{generated}$ by $\varphi$.
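As a concrete illustration of prolongability, the following Python sketch (the helper names are ours, not from the text) iterates a morphism given as a letter-to-image table; for the Fibonacci morphism $0 \to 01$, $1 \to 0$, which is prolongable on $0$, each iteration produces a longer prefix of the fixed point.

```python
# A small sketch (helper names are ours): iterating a morphism that is
# prolongable on a letter produces longer and longer prefixes of its fixed point.
def apply_morphism(rules, word):
    """Apply a morphism, given as a dict letter -> image, to a finite word."""
    return "".join(rules[c] for c in word)

def fixed_point_prefix(rules, letter, min_len):
    """Iterate the morphism starting from `letter` until the word is long enough."""
    w = letter
    while len(w) < min_len:
        w = apply_morphism(rules, w)
    return w[:min_len]

# The Fibonacci morphism 0 -> 01, 1 -> 0 is prolongable on 0.
fib = {"0": "01", "1": "0"}
print(fixed_point_prefix(fib, "0", 16))  # 0100101001001010
```

Each iteration multiplies the length by roughly the golden ratio here, so a handful of iterations already yields a long prefix.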
\subsection{Permutations on words}
The notion of an infinite permutation used here was introduced in $\cite{FlaFrid}$. Since this paper deals with the permutation complexity of infinite words, the set used in the following definition will be $\N$ rather than an arbitrary countable set. To define an $\textit{infinite permutation}$ $\pi$, start with a linear order $\prec_\pi$ on $\N$, together with the usual order $<$ on $\N$. To be more specific, an infinite permutation is the ordered triple $\pi = \left\langle \N,\prec_\pi,< \right\rangle$, where $\prec_\pi$ and $<$ are linear orders on $\N$. The notation used here will be $\pi(i) < \pi(j)$ rather than $i \prec_\pi j.$
Given an infinite aperiodic word $\w = \w_0\w_1\w_2 \ldots$ on an alphabet $A$, fix a linear order on $A$. We will use the binary alphabet $A = \{0, 1\}$ and use the natural ordering $0<1$. Once a linear order is set on the alphabet, we can then define an order on the natural numbers based on the lexicographic order of shifts of $\w$. Considering two shifts of $\w$ with $a \neq b$, $\w[a] = \w_a\w_{a+1}\w_{a+2} \ldots$ and $\w[b] = \w_b\w_{b+1}\w_{b+2} \ldots$, we know that $\w[a] \neq \w[b]$ since $\w$ is aperiodic. Thus there exists some minimal number $c \geq 0$ so that $\w_{a+c} \neq \w_{b+c}$ and for each $0 \leq i < c$ we have $\w_{a+i} = \w_{b+i}$. We call $\pi_\w$ the infinite permutation associated with $\w$ and say that $\pi_\w(a) < \pi_\w(b)$ if $\w_{a+c} < \w_{b+c}$, else we say that $\pi_\w(b) < \pi_\w(a)$.
For natural numbers $a \leq b$ consider the factor $\w[a, b] = \w_a\w_{a+1} \ldots \w_b$ of $\w$ of length $b - a + 1$. Denote the finite permutation of $\{ 1, 2, \ldots , b - a + 1 \}$ corresponding to the linear order by $\pi_\w[a,b]$. That is $\pi_\w[a,b]$ is the permutation of $\{ 1, 2, \ldots , b - a + 1 \}$ so that for each $0 \leq i,j \leq (b - a)$, $ \pi_\w[a,b](i) < \pi_\w[a,b](j)$ if and only if $\pi_\w(a + i) < \pi_\w(a + j)$. Say that $p = p_0p_1 \cdots p_n$ is a $\textit{(finite) subpermutation}$ of $\pi_\w$ if $p = \pi_\w[a,a+n]$ for some $a,n \geq 0$. For the subpermutation $p = \pi_\w[a,a+n]$ of $\{1, 2, \cdots, n+1 \}$, we say the $\textit{length}$ of $p$ is $n+1$.
Denote the set of all subpermutations of $\pi_\w$ by $Perm_{\pi_\w}$, and for each positive integer $n$ let
$$Perm_{\pi_\w}(n) = \{ \hspace{1.0ex} \pi_\w[i,i+n-1] \hspace{1.0ex} \left| \hspace{1.0ex} i \geq 0 \right. \hspace{1.0ex} \}$$
denote the set of distinct finite subpermutations of $\pi_\w$ of length $n$. The $\textit{permutation complexity function}$ of $\w$ is defined as the total number of distinct subpermutations of $\pi_\w$ of length $n$, denoted $\tau_\w(n) = \abs{Perm_{\pi_\w}(n)}$.
\begin{example}
Consider the well-known Fibonacci word,
$$t = 0100101001001010010100100101\ldots,$$
with the alphabet $A = \{0,1 \}$ ordered as $0 < 1$. We can see that $t[2] = 001010\ldots $ is lexicographically less than $t[1] = 100101\ldots$, and thus $\pi_t(2) < \pi_t(1)$.
Then for a subpermutation, consider the factor $t[3,5] = 010$. We see that $\pi_t[3,5] = (231)$ because, in lexicographic order, $\pi_t(5) < \pi_t(3) < \pi_t(4)$.
\end{example}
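The ranks in the example can be checked mechanically: the sketch below (the function name is ours) ranks the positions $a, \ldots, b$ by the lexicographic order of the shifts, using a finite prefix that is long enough for all comparisons to be decided.

```python
# A sketch (function name is ours) computing pi_w[a,b] from a finite prefix:
# positions a..b are ranked by the lexicographic order of the shifts w[i:].
# The prefix must be long enough that all the rankings are already decided.
def subperm(w, a, b):
    positions = list(range(a, b + 1))
    ranked = sorted(positions, key=lambda i: w[i:])
    rank = {p: r + 1 for r, p in enumerate(ranked)}
    return tuple(rank[p] for p in positions)

t = "0100101001001010010100100101"   # prefix of the Fibonacci word
print(subperm(t, 1, 2))  # (2, 1): pi_t(2) < pi_t(1), as above
print(subperm(t, 3, 5))  # (2, 3, 1) = pi_t[3,5]
```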
\section{Some General Permutation Results}
\label{GeneralPermResults}
Initial work was done with infinite binary words (see $\cite{Makar06,FlaFrid,Makar09,Makar09TM,Makar10,AvgFriKamSal}$). Suppose $\w = \w_0\w_1\w_2\ldots$ is an aperiodic infinite word over the alphabet $A=\{ 0,1 \}$. First we record some remarks about permutations generated by binary words, using the natural order on $A$.
\begin{claim}
\emph{($\cite{Makar06}$)}
\label{PCClaim01}
For an infinite aperiodic word $\w$ over $A = \{ 0, 1 \}$ with the natural ordering we have:
\begin{enumerate}
\item $\pi_\w(i) < \pi_\w(i+1)$ if and only if $\w_i = 0$.
\item $\pi_\w(i) > \pi_\w(i+1)$ if and only if $\w_i = 1$.
\item If $\w_i = \w_j$, then $\pi_\w(i) < \pi_\w(j)$ if and only if $\pi_\w(i+1) < \pi_\w(j+1)$.
\end{enumerate}
\end{claim}
\begin{lemma}
\emph{($\cite{Makar06}$)}
\label{PermComp01}
Given two infinite binary words $u = u_0u_1\ldots$ and $v=v_0v_1 \ldots$ with $\pi_u[0, n+1] = \pi_v[0, n+1]$, it follows that $u[0,n] = v[0,n]$.
\end{lemma}
A trivial upper bound for $\tau_\w(n)$ is the number of permutations of length $n$, namely $n!$. Lemma $\ref{PermComp01}$ directly implies a lower bound for the permutation complexity of a binary aperiodic word $\w$, namely its factor complexity. Thus initial bounds on the permutation complexity are:
$$ \rho_\w(n-1) \leq \tau_\w(n) \leq n!$$
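These bounds can be illustrated numerically on a prefix of the Fibonacci word. In the sketch below (helper names ours) the counts are taken over a finite prefix, so they only approximate $\rho_\w$ and $\tau_\w$ from below, but the ordering $\rho_\w(n-1) \leq \tau_\w(n) \leq n!$ is visible.

```python
# Sketch (helper names ours): count distinct factors of length n-1 and distinct
# subpermutations of length n over the same windows of a Fibonacci-word prefix.
import math

def subperm(w, a, b):
    """Rank positions a..b of w by lexicographic order of the shifts w[i:]."""
    positions = list(range(a, b + 1))
    ranked = sorted(positions, key=lambda i: w[i:])
    rank = {p: r + 1 for r, p in enumerate(ranked)}
    return tuple(rank[p] for p in positions)

def fib_word(m):
    w = "0"
    while len(w) < m:
        w = "".join("01" if c == "0" else "0" for c in w)
    return w[:m]

w, n = fib_word(80), 4
factors = {w[i:i + n - 1] for i in range(40)}          # factors of length n-1
perms = {subperm(w, i, i + n - 1) for i in range(40)}  # subpermutations of length n
print(len(factors) <= len(perms) <= math.factorial(n))  # True
```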
For $a \in A = \{0,1\}$, let $\bar{a}$ denote the $\textit{complement}$ of $a$, that is $\bar{0} = 1$ and $\bar{1} = 0$. If $u = u_1u_2u_3 \cdots$ is a word over $A$, the $\textit{complement}$ of $u$ is the word composed of the complements of the letters of $u$, that is $\bar{u} = \bar{u}_1\bar{u}_2\bar{u}_3 \cdots$. For an infinite aperiodic binary word $\w$, we say the set of factors of $\w$ is $\textit{closed under complementation}$ if $u \in \scr{F}(\w)$ implies $\bar{u} \in \scr{F}(\w)$. The following lemma shows an interesting property of the subpermutations of the infinite permutation $\pi_\w$.
\begin{lemma}
\label{ClosedUnderCompliment}
Let $\w = \w_0\w_1\w_2\cdots$ be an infinite aperiodic binary word with factors closed under complementation. If $p$ is a subpermutation of $\pi_\w$ of length $n$, then the subpermutation $q$ defined by $q_i = n-p_i +1$ for each $i$, is also a subpermutation of $\pi_\w$ of length $n$.
\end{lemma}
\begin{proof}
Let $p$ be a subpermutation of $\pi_\w$. There is an $a \in \N$ so that $p = \pi_\w[a,a+n-1]$. For each $i,j \in \{0,1, \ldots, n-1 \}$, if $p_i < p_j$ then $\w[a+i] < \w[a+j]$ and there is some finite word $u_{i,j}$ so that
\begin{align*}
\w[a+i] &= u_{i,j}0\cdots \\
\w[a+j] &= u_{i,j}1\cdots
\end{align*}
Let $v$ be a prefix of $\w[a]$ long enough that, for each $i,j \in \{ 0,1, \ldots, n-1 \}$ with $p_i < p_j$, $v$ contains both the occurrence of $u_{i,j}0$ at position $i$ and the occurrence of $u_{i,j}1$ at position $j$. Since the set of factors of $\w$ is closed under complementation, $\bar{v}$ is a factor of $\w$. There is a $b$ so that $\bar{v}$ is a prefix of $\w[b]$; let $q = \pi_\w[b,b+n-1]$. For each $i,j \in \{0, 1, \ldots, n-1 \}$, if $p_i < p_j$
\begin{align*}
\w[b+i] &= \bar{u}_{i,j}1\cdots \\
\w[b+j] &= \bar{u}_{i,j}0\cdots
\end{align*}
and thus, $q_i > q_j$.
For any $i \in \{0,1, \ldots, n-1 \}$ there are $p_i - 1$ many $j$ so that $p_j < p_i$ and there are $n - p_i$ many $j$ so that $p_j > p_i$. Therefore there are $n - p_i$ many $j$ so that $q_j < q_i$, so $q_i = n - p_i + 1$.
$\qed$
\end{proof}
\begin{definition}
\label{SameForm}
Two permutations $p$ and $q$ of $\{1, 2, \ldots, n \}$ have the $\textit{same form}$ if for each $i = 0, 1, \ldots, n-2$, $p_i < p_{i+1}$ if and only if $q_i < q_{i+1}$. For a binary word $u$ of length $n-1$, say that $p$ $\textit{has form u}$ if
$$p_i<p_{i+1} \Longleftrightarrow u_i = 0$$
for each $i = 0, 1, \ldots, n-2$.
\end{definition}
\section{The Thue-Morse Permutation}
\label{ThueMorsePermutation}
In this section the action of the Thue-Morse morphism on the subpermutations of $\pi_T$ will be investigated. This action induces a well-defined map on the subpermutations of $\pi_T$ and leads to an initial upper bound on the permutation complexity of $T$.
The Thue-Morse word is:
$$ T = 0110 1001 1001 0110 1001 0110 0110 1001 \cdots,$$
and the Thue-Morse morphism is:
$$\mu_T:0 \rightarrow 01, \hspace{1.5ex} 1 \rightarrow 10. $$
It can readily be verified that if $a$ is a natural number then
$$\mu_T(T[a]) = T[2a]$$
since for any letter $x \in \{0,1 \}$, $\abs{\mu_T(x)}=2$.
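The identity $\mu_T(T[a]) = T[2a]$ can be checked on finite prefixes; since $\mu_T$ doubles lengths, the image of $T[a,a+m-1]$ must equal $T[2a,2a+2m-1]$. A self-contained sketch (helper names are ours, not from the text):

```python
# Self-contained check (helper names are ours) of mu_T(T[a]) = T[2a] on a
# finite prefix: mu_T doubles lengths, so the image of T[a, a+m-1] must be
# T[2a, 2a+2m-1].
def thue_morse(m):
    """Length-m prefix of the Thue-Morse word T."""
    w = "0"
    while len(w) < m:
        w += "".join("1" if c == "0" else "0" for c in w)  # w -> w . complement(w)
    return w[:m]

def mu_T(word):
    return "".join("01" if c == "0" else "10" for c in word)

T = thue_morse(64)
print(T[:16])                                                            # 0110100110010110
print(all(mu_T(T[a:a + 16]) == T[2 * a:2 * a + 32] for a in range(16)))  # True
```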
A nice property of the factors of $T$ is that any factor of length 5 or greater contains either $00$ or $11$. Another interesting property is that for any $i \in \N$, $T[2i,2i+1]$ will be either $01$ or $10$. Thus any occurrence of $00$ or $11$ must be a factor of the form $T[2i+1,2i+2]$ for some $i \in \N$. Therefore factors $T[2i,2i+n]$ and $T[2j+1,2j+1+n]$ with $n \geq 4$ cannot be equal, because of the positions of the occurrences of $00$ or $11$.
Let $\pi_T$ be the infinite permutation associated to the Thue-Morse word $T$. For notational purposes, the set of all subpermutations of $\pi_T$ of length $n$ will be denoted as $Perm(n)$.
Let $a$ and $n$ be natural numbers and suppose we want to determine whether $T[a] < T[a+n]$. There will be some (possibly empty) factor $u$ of $T$, and suffixes $x$ and $y$ of $T$, so that $T[a] = u\lambda x$ and $T[a+n] = u \bar{\lambda}y$ for some $\lambda \in \{0,1 \}$. If $\abs{u} \geq n+1$ we would have $T_{a+i} = T_{a+n+i}$ for each $i = 0, 1, \ldots, n$, and thus $T[a,a+n] = T[a+n,a+2n]$, and $T[a,a+2n]$ would violate the fact that $T$ is overlap-free. Thus $\abs{u} \leq n$, and if $\abs{u} = n$ we have $T[a,a+n-1] = T[a+n,a+2n-1]$ and $T_{a+2n} = \overline{T_{a}}$. Therefore the subpermutation $\pi_T[a,a+n]$ can be determined within the factor $T[a,a+2n]$ of length $2n+1$. Thus the trivial bounds for the permutation complexity of the Thue-Morse word $T$ are
$$ \rho_T(n-1) \leq \tau_T(n) \leq \rho_T(2n-1). $$
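The fact that $\pi_T[a,a+n]$ is determined within $T[a,a+2n]$ can also be verified numerically: whenever two windows of length $2n+1$ agree, the corresponding subpermutations of length $n+1$ must agree. A sketch (helper names are ours, not from the text):

```python
# Sketch (helper names ours): equal windows T[a,a+2n] = T[b,b+2n] must give
# equal subpermutations pi_T[a,a+n] = pi_T[b,b+n].
def thue_morse(m):
    w = "0"
    while len(w) < m:
        w += "".join("1" if c == "0" else "0" for c in w)
    return w[:m]

def subperm(w, a, b):
    positions = list(range(a, b + 1))
    ranked = sorted(positions, key=lambda i: w[i:])
    rank = {p: r + 1 for r, p in enumerate(ranked)}
    return tuple(rank[p] for p in positions)

T, n = thue_morse(512), 4
ok = all(subperm(T, a, a + n) == subperm(T, b, b + n)
         for a in range(100) for b in range(100)
         if T[a:a + 2 * n + 1] == T[b:b + 2 * n + 1])
print(ok)  # True
```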
Since the factor complexity of the Thue-Morse word is known (see $\cite{Brlek89,LucaVarr89}$), for any natural number $n$ all factors of $T$ of length $2n-1$ can be identified, and thus the set $Perm(n)$ of all subpermutations of $\pi_T$ of length $n$ can be identified as well. The subpermutations have been computed for relatively small $n$ (up to $n=65$), and in these cases no more than two subpermutations of any length were found to have the same form. In other words, for any factor $u$ of $T$ of length $n \leq 64$ there are at most two subpermutations of length $n+1$ having form $u$.
This section will deal with some properties of $\pi_T$. A notable property of the Thue-Morse morphism is that it is order-preserving, as shown by the following lemma.
\begin{lemma}
\label{OrderPresMorph}
For natural numbers $a$ and $b$, $T[a] < T[b]$ if and only if $\mu_T(T[a]) < \mu_T(T[b])$.
\end{lemma}
\begin{proof}
If $T[a] < T[b]$, then there exists a finite factor $u$ of $T$, and suffixes $x$ and $y$ of $T$ so that
\begin{align*}
T[a] &= u0x \\
T[b] &= u1y.
\end{align*}
Thus we can see
\begin{align*}
\mu_T(T[a]) &= \mu_T(u)01\mu_T(x) \\
\mu_T(T[b]) &= \mu_T(u)10\mu_T(y)
\end{align*}
and therefore $\mu_T(T[a]) < \mu_T(T[b])$.
Suppose $\mu_T(T[a]) < \mu_T(T[b])$, then there exists a finite factor $u$ of $T$, and suffixes $x$ and $y$ of $T$ so that
\begin{align*}
\mu_T(T[a]) &= u0x \\
\mu_T(T[b]) &= u1y
\end{align*}
If $u$ ends with a $0$, then $\mu_T(T[a])$ would have $00$ at the end of $u0$, so $u$ ends with $10$ and $0x$ starts with $01$. If $u$ ends with a $1$, then $\mu_T(T[b])$ would have $11$ at the end of $u1$, so $u$ ends with $01$ and $1y$ starts with $10$. In either case there is some factor $v$ so that $\mu_T(v) = u$. Hence a prefix of $\mu_T(T[a])$ is $\mu_T(v)01$ and a prefix of $\mu_T(T[b])$ is $\mu_T(v)10$.
Thus a prefix of $T[a]$ is $v0$ and a prefix of $T[b]$ is $v1$. Therefore $T[a] < T[b]$.
$\qed$
\end{proof}
\begin{lemma}
\label{ImgOfZerosAndOnesTM}
Let $u$ and $v$ be shifts of $T$ with $u = 0T[a]$ and $v = 1T[b]$ for some $a$ and $b$, so that $u<v$, $\mu_T(u) = 01\mu_T(T[a])$, and $\mu_T(v) = 10\mu_T(T[b])$. Then $0\mu_T(T[b]) < 01\mu_T(T[a]) < 10\mu_T(T[b]) < 1\mu_T(T[a])$.
\end{lemma}
\begin{proof}
Since $u = 0T[a]$ and $v = 1T[b]$ are shifts of $T$, which contains neither $000$ nor $111$, the word $T[a]$ begins with either $01$ or $1$, and thus $\mu_T(T[a])$ begins with either $0110$ or $10$, respectively. Similarly $T[b]$ begins with either $10$ or $0$, and thus $\mu_T(T[b])$ begins with either $1001$ or $01$, respectively.
Then $0\mu_T(T[b])$ will start with $01001$ or $001$ and $01\mu_T(T[a])$ will start with $010110$ or $0110$. Thus $001<01001<010110<0110$, so
$$0\mu_T(T[b]) < 01\mu_T(T[a]).$$
Then $10\mu_T(T[b])$ will start with $101001$ or $1001$ and $1\mu_T(T[a])$ will start with $10110$ or $110$. Thus $1001<101001<10110<110$, so
$$10\mu_T(T[b]) < 1\mu_T(T[a]).$$
Therefore
$$0\mu_T(T[b]) < 01\mu_T(T[a]) < 10\mu_T(T[b]) < 1\mu_T(T[a]).$$
$\qed$
\end{proof}
Let $u$ be a factor of $T$ of length $n$. There is an $a \in \N$ so that $u = T[a,a+n-1]$. Also recall that $\abs{u}_1$ is the number of occurrences of the letter $1$ in $u$, and that $\abs{u}_1 = n-\abs{u}_0$. Let $p = \pi_T[a,a+n]$ be a subpermutation of $\pi_T$ with form $u$. Then $\mu_T(u) = T[2a, 2a + 2n-1]$, and let $p'$ be the subpermutation $p' = \pi_T[2a, 2a + 2n]$ with form $\mu_T(u)$. When Lemma $\ref{ImgOfZerosAndOnesTM}$ is used with this notation, for $0\leq i,j \leq n-1$ with $T_{a+i}=0$ and $T_{a+j} = 1$, we have $p_i < p_j$ and $p'_{2j+1} < p'_{2i} < p'_{2j} < p'_{2i+1}$. The following proposition describes the values of $p'$ in terms of the values of $p$.
\begin{proposition}
\label{CalculateTheFwdImage}
Let $u$, $p$, and $p'$ be as described above. For any $i \in \{0, 1, \ldots, n \}$:
$$ p'_{2i} = p_i + \abs{u}_1 $$
and for any $i \in \{0, 1, \ldots, n-1 \}$:
$$ p'_{2i+1} = \begin{cases}
p_i + \abs{u}_1+(n+1) & \text{if $p_i < p_{i+1}$ and $p_i < p_n$} \\
p_i + \abs{u}_1+n & \text{if $p_i < p_{i+1}$ and $p_i > p_n$} \\
p_i + \abs{u}_1 - n & \text{if $p_i > p_{i+1}$ and $p_i < p_n$} \\
p_i + \abs{u}_1 - (n+1) & \text{if $p_i > p_{i+1}$ and $p_i > p_n$}
\end{cases} $$
\end{proposition}
\begin{proof}
To take care of the $p'_{2i}$ terms, let $i \in \{0, 1, \ldots, n \}$. There will be $p_i-1$ many $j$ so that $p_i > p_j$, so there are $p_i-1$ many $j$ so that $p'_{2i} > p'_{2j}$. Clearly, if $p_i < p_j$ then $p'_{2i} < p'_{2j}$. So there are exactly $p_i-1$ many even $j$ so that $p'_{2i} > p'_j$. There are $\abs{u}_1$ many $j$ so that $T_{a+j} = 1$, so there are $\abs{u}_1$ many $j$ so that $p'_{2i} > p'_{2j+1}$ and $\abs{u}_0$ many $j$ so that $T_{a+j}=0$, so $p'_{2i} < p'_{2j+1}$. So there are exactly $\abs{u}_1$ many odd $j$ so that $p'_{2i} > p'_j$. Thus there are exactly $p_i-1+\abs{u}_1$ many $j$ so that $p'_{2i} > p'_j$, and therefore $ p'_{2i} =(p_i-1+\abs{u}_1)+1 = p_i + \abs{u}_1 $.
The $p'_{2i+1}$ terms will be handled in two cases: first when $p_i < p_{i+1}$, and then when $p_i > p_{i+1}$.
$\textbf{Case a:}$ Suppose that $p_i < p_{i+1}$, so $T_{a+i} = 0$. For each $j = 0, 1, \ldots, n$ we must have $p'_{2i+1} > p'_{2j}$, so for each even $j$ (there are $n+1$ many such $j$) $p'_{2i+1} > p'_j$. There are $\abs{u}_1$ many $j$ so that $T_{a+j} = 1$, so there are $\abs{u}_1$ many $j$ so that $p'_{2i+1} > p'_{2j+1}$. Thus the only other $j$ where $p'_{2j+1}$ can be less than $p'_{2i+1}$ are $j \in \{0, 1, \ldots, n-1 \}$ where $T_{a+j} = 0$ and $p_i > p_j$.
$\textbf{Subcase a.1:}$ If $p_i < p_n$ then there are $p_i - 1$ many $j$ so that $T_{a+j}=0$ and $p_i > p_j$, and then $n-p_i-\abs{u}_1= \abs{u}_0 - p_i$ many $j$ so that $T_{a+j}=0$ and $p_i < p_j$. Thus there can only be $(n+1) + \abs{u}_1 + p_i - 1$ many $j$ so that $p'_{2i+1} > p'_j$, and therefore $p'_{2i+1} = (n+1) + \abs{u}_1 + p_i - 1 + 1 = p_i + \abs{u}_1 + (n+1)$.
$\textbf{Subcase a.2:}$ If $p_i > p_n$ then there are $p_i - 2$ many $j$ so that $T_{a+j}=0$ and $p_i > p_j$ (since $T_{a+n}$ is not in $u = T[a,a+n-1]$), and then $n-(p_i-1)-\abs{u}_1= \abs{u}_0 - (p_i - 1)$ many $j$ so that $T_{a+j}=0$ and $p_i < p_j$. Thus there can only be $(n+1) + \abs{u}_1 + p_i - 2$ many $j$ so that $p'_{2i+1} > p'_j$, and therefore $p'_{2i+1} = (n+1) + \abs{u}_1 + p_i - 2 + 1 = p_i + \abs{u}_1 + n$.
$\textbf{Case b:}$ Suppose that $p_i > p_{i+1}$, so $T_{a+i} = 1$. For each $j = 0, 1, \ldots, n$ we must have $p'_{2i+1} < p'_{2j}$, so for each even $j$ (there are $n+1$ many such $j$) $p'_{2i+1} < p'_j$. There are $\abs{u}_0$ many $j$ so that $T_{a+j} = 0$, so there are $\abs{u}_0$ many $j$ so that $p'_{2i+1} < p'_{2j+1}$. Thus the only other $j$ where $p'_{2j+1}$ can be less than $p'_{2i+1}$ are $j \in \{0, 1, \ldots, n-1 \}$ where $T_{a+j} = 1$ and $p_i > p_j$.
$\textbf{Subcase b.1:}$ If $p_i < p_n$ then there are $(p_i - 1) - \abs{u}_0$ many $j$ so that $T_{a+j}=1$ and $p_i > p_j$, and there can only be $\abs{u}_1 - (p_i - 1 - \abs{u}_0) - 1 = n-p_i$ many $j$ so that $T_{a+j}=1$ and $p_i < p_j$ (since $T_{a+n}$ is not in $u = T[a,a+n-1]$). Thus there can only be $(p_i - 1) - \abs{u}_0 = p_i - 1 - (n-\abs{u}_1) = p_i +\abs{u}_1 - n -1$ many $j$ so that $p'_{2i+1} > p'_j$, and therefore $p'_{2i+1} = p_i +\abs{u}_1 - n -1 + 1 = p_i + \abs{u}_1 - n$.
$\textbf{Subcase b.2:}$ If $p_i > p_n$ then there are $(p_i - 2) - \abs{u}_0$ many $j$ so that $T_{a+j}=1$ and $p_i > p_j$ (since $T_{a+n}$ is not in $u = T[a,a+n-1]$), and there can only be $\abs{u}_1 - (p_i - 2 - \abs{u}_0) - 1 = (n+1)-p_i$ many $j$ so that $T_{a+j}=1$ and $p_i < p_j$. Thus there can only be $(p_i - 2) - \abs{u}_0 = p_i - 2 - (n-\abs{u}_1) = p_i +\abs{u}_1 - n -2$ many $j$ so that $p'_{2i+1} > p'_j$, and therefore $p'_{2i+1} = p_i +\abs{u}_1 - n -2 + 1 = p_i + \abs{u}_1 - (n+1)$.
$\qed$
\end{proof}
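The formula of Proposition $\ref{CalculateTheFwdImage}$ is easy to implement and to test against a direct ranking of shifts; in the sketch below the helper names are ours, not from the text.

```python
# Sketch (helper names ours): compute p' = pi_T[2a,2a+2n] from p = pi_T[a,a+n]
# and its form u = T[a,a+n-1], then compare against direct ranking of shifts.
def thue_morse(m):
    w = "0"
    while len(w) < m:
        w += "".join("1" if c == "0" else "0" for c in w)
    return w[:m]

def subperm(w, a, b):
    positions = list(range(a, b + 1))
    ranked = sorted(positions, key=lambda i: w[i:])
    rank = {p: r + 1 for r, p in enumerate(ranked)}
    return tuple(rank[p] for p in positions)

def phi(p, u):
    """p = pi_T[a,a+n] (length n+1), u = T[a,a+n-1] its form; return p'."""
    n = len(u)
    ones = u.count("1")
    q = [0] * (2 * n + 1)
    for i in range(n + 1):
        q[2 * i] = p[i] + ones
    for i in range(n):
        if p[i] < p[i + 1]:                                   # T_{a+i} = 0
            q[2 * i + 1] = p[i] + ones + (n + 1 if p[i] < p[n] else n)
        else:                                                 # T_{a+i} = 1
            q[2 * i + 1] = p[i] + ones - (n if p[i] < p[n] else n + 1)
    return tuple(q)

T = thue_morse(256)
p = subperm(T, 5, 9)                         # [2 3 5 4 1], form T[5,8] = 0011
print(phi(p, T[5:9]))                        # (4, 8, 5, 9, 7, 2, 6, 1, 3)
print(phi(p, T[5:9]) == subperm(T, 10, 18))  # True
```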
Fix a subpermutation $p=\pi_T[a,a+n]$ and let $p'=\pi_T[2a,2a+2n]$, so the terms of $p'$ can be computed by the formula in Proposition $\ref{CalculateTheFwdImage}$. Let $q=\pi_T[b,b+n]$, $b \neq a$, be a subpermutation of $\pi_T$ and let $q'=\pi_T[2b,2b+2n]$ as in Proposition $\ref{CalculateTheFwdImage}$. The following lemma concerns the relationship of $p$ and $q$ to $p'$ and $q'$. The assignment $p \mapsto p'$ can then be used to define a map on the subpermutations of $\pi_T$, and this map is well-defined by Proposition $\ref{CalculateTheFwdImage}$.
\begin{lemma}
\label{pISqIFFppISqp}
$p \neq q$ if and only if $p' \neq q'$.
\end{lemma}
\begin{proof}
Supposing that $p \neq q$, there are $i,j \in \{0, 1, \ldots, n \}$ so that $p_i < p_j$ and $q_i > q_j$ and thus
\begin{align*}
T[a+i] &< T[a+j] \\
T[b+i] &> T[b+j].
\end{align*}
Then since the Thue-Morse morphism is order preserving we have
\begin{align*}
T[2(a+i)] = \mu_T(T[a+i]) &< \mu_T(T[a+j]) = T[2(a+j)] \\
T[2(b+i)] = \mu_T(T[b+i]) &> \mu_T(T[b+j]) = T[2(b+j)].
\end{align*}
Therefore $p'_{2i}<p'_{2j}$ and $q'_{2i}>q'_{2j}$, so $p' \neq q'$.
For the converse we argue by contraposition: suppose that $p=q$, so $p_i = q_i$ for each $i \in \{0, 1, \ldots, n \}$. Since $p=q$, $p$ and $q$ have the same form, because $p_i < p_{i+1}$ if and only if $q_i < q_{i+1}$, so $T[a,a+n-1] = T[b,b+n-1]$ and thus $T[2a,2a+2n-1] = T[2b,2b+2n-1]$. Then by Proposition $\ref{CalculateTheFwdImage}$, for each $j \in \{0, 1, \ldots, 2n \}$ we have $p'_j = q'_j$, and thus $p' = q'$.
Therefore if $p' \neq q'$ then $p \neq q$.
$\qed$
\end{proof}
\begin{corollary}
\label{CorTo_pisqiff}
If $p=\pi_T[a,a+n]=\pi_T[b,b+n]$ for some $a \neq b$, then $\pi_T[2a,2a+2n]=\pi_T[2b,2b+2n]$.
\end{corollary}
Thus there is a well-defined function on the subpermutations of $\pi_T$. Let $p = \pi_T[a,a+n]$, and define $\phi(p) = p' = \pi_T[2a,2a+2n]$ using the formula in Proposition $\ref{CalculateTheFwdImage}$. Thus we have the map
$$\phi:Perm(n+1) \rightarrow Perm(2n+1)$$
which is injective by Lemma $\ref{pISqIFFppISqp}$. Not all subpermutations of $\pi_T$ will be the image under $\phi$ of another subpermutation.
Let $n \geq 5$ and $a$ be natural numbers. Then $n$ and $a$ can each be even or odd, and for the subpermutation $\pi_T[a,a+n]$ there exist natural numbers $b$ and $m$ so that one of the following four cases holds:
\begin{enumerate}
\item $\pi_T[a,a+n] = \pi_T[2b,2b+2m]$, even starting position with odd length
\item $\pi_T[a,a+n] = \pi_T[2b,2b+2m-1]$, even starting position with even length
\item $\pi_T[a,a+n] = \pi_T[2b+1,2b+2m]$, odd starting position with even length
\item $\pi_T[a,a+n] = \pi_T[2b+1,2b+2m+1]$, odd starting position with odd length
\end{enumerate}
Consider two subpermutations of length $n > 5$, $\pi_T[2c, 2c+n-1]$ and $\pi_T[2d+1, 2d+n]$. The subpermutation $\pi_T[2c, 2c+n-1]$ has form $T[2c, 2c+n-2]$, and $\pi_T[2d+1, 2d+n]$ has form $T[2d+1, 2d+n-1]$. Since the length of these factors is at least 5, we know that $T[2c, 2c+n-2] \neq T[2d+1, 2d+n-1]$, and thus $\pi_T[2c, 2c+n-1] \neq \pi_T[2d+1, 2d+n]$ because they do not have the same form. Thus we can break the set $Perm(n)$ into two classes of subpermutations, namely those that start at an even position and those that start at an odd position. So say that $Perm_{ev}(n)$ is the set of subpermutations $p$ of length $n$ so that $p = \pi_T[2b,2b+n-1]$ for some $b$, and $Perm_{odd}(n)$ is the set of subpermutations $p$ of length $n$ so that $p = \pi_T[2b+1,2b+n]$ for some $b$. Thus
$$Perm(n) = Perm_{ev}(n) \cup Perm_{odd}(n),$$
where we have
$$Perm_{ev}(n) \cap Perm_{odd}(n) = \emptyset.$$
Thus for $n \geq 3$, $Perm_{ev}(2n+1)$ is the set of all subpermutations of length $2n+1$ starting at an even position. So for $\pi_T[2a,2a+2n]$, we know there is a subpermutation $p = \pi_T[a,a+n]$ so that $\phi(p) = p' = \pi_T[2a,2a+2n]$. Thus the map
$$\phi:Perm(n+1) \rightarrow Perm_{ev}(2n+1)$$
is also a surjective map, and is thus a bijection. The next definition about the restriction of subpermutations will be helpful to count the size of the sets $Perm_{odd}(2n)$, $Perm_{ev}(2n)$, and $Perm_{odd}(2n+1)$.
\begin{definition}
Let $p = \pi[a,a+n]$ be a subpermutation of the infinite permutation $\pi$. The $\textit{left restriction of}$ $p$, denoted by $L(p)$, is the subpermutation of $p$ so that $L(p) = \pi[a, a+n-1]$. The $\textit{right restriction of}$ $p$, denoted by $R(p)$, is the subpermutation of $p$ so that $R(p) = \pi[a+1, a+n]$. The $\textit{middle restriction of}$ $p$, denoted by $M(p)$, is the subpermutation of $p$ so that $M(p) = R(L(p)) = L(R(p)) = \pi[a+1, a+n-1]$.
\end{definition}
For each $i$, there are $p_i-1$ terms in $p$ that are less than $p_i$ and there are $n+1-p_i$ terms that are greater than $p_i$. Now consider $i \in \{0, 1, \ldots, n-1\}$ and the values of $L(p)_i$ and $R(p)_i$. If $p_0 < p_{i+1}$ there will be $p_{i+1}-2$ terms in $R(p)$ less than $R(p)_i$, so $R(p)_i = p_{i+1}-1$. Similarly, if $p_n < p_i$ we have $L(p)_i = p_i - 1$. If $p_0 > p_{i+1}$ there will be $p_{i+1}-1$ terms in $R(p)$ less than $R(p)_i$, so $R(p)_i = p_{i+1}$. Similarly, if $p_n > p_i$ we have $L(p)_i = p_i$.
The values in $M(p)$ can be found by computing $R(L(p))$ or $L(R(p))$. Since $R(L(p))$ and $L(R(p))$ correspond to the same set of positions of $p$, $R(L(p))_i < R(L(p))_j$ if and only if $L(R(p))_i < L(R(p))_j$. Therefore $R(L(p)) = L(R(p))$.
It should also be clear that if there are two subpermutations $p= \pi_T[a,a+n]$ and $q = \pi_T[b,b+n]$ so that $p=q$ then $L(p) = L(q)$, $R(p) = R(q)$, and $M(p) = M(q)$ since if $p=q$ then $p_i < p_j$ if and only if $q_i < q_j$.
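The restriction formulas above translate directly into code (function names are ours): dropping an endpoint and re-ranking requires only comparisons with the dropped value, and the computation confirms $R(L(p)) = L(R(p))$ on an example.

```python
# Sketch of the restriction formulas above (function names are ours): dropping
# an endpoint and re-ranking only needs comparisons with the dropped value.
def left_restriction(p):
    """L(p)_i = p_i - 1 if p_n < p_i, else p_i."""
    return tuple(v - 1 if p[-1] < v else v for v in p[:-1])

def right_restriction(p):
    """R(p)_i = p_{i+1} - 1 if p_0 < p_{i+1}, else p_{i+1}."""
    return tuple(v - 1 if p[0] < v else v for v in p[1:])

def middle_restriction(p):
    return left_restriction(right_restriction(p))

p = (2, 3, 5, 4, 1)                       # pi_T[5,9] from the example above
print(left_restriction(p))                # (1, 2, 4, 3)
print(right_restriction(p))               # (2, 4, 3, 1)
print(middle_restriction(p))              # (1, 3, 2)
print(right_restriction(left_restriction(p)) == middle_restriction(p))  # True
```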
For $p=\pi_T[a,a+n]$, we can then define three additional maps by looking at the left, right, and middle restrictions of $\phi(p) = p'$. These maps are
\begin{align*}
\phi_L:Perm(n+1) &\rightarrow Perm_{ev}(2n) \\
\phi_R:Perm(n+1) &\rightarrow Perm_{odd}(2n) \\
\phi_M:Perm(n+2) &\rightarrow Perm_{odd}(2n+1)
\end{align*}
and are defined by
\begin{align*}
\phi_L(p) &= L(\phi(p)) = L(p')\\
\phi_R(p) &= R(\phi(p)) = R(p')\\
\phi_M(p) &= M(\phi(p)) = M(p')
\end{align*}
It can be readily verified that these three maps are surjective. For example, consider the map $\phi_L$, and let $\pi_T[2b,2b+2n-1]$ be a subpermutation in $Perm_{ev}(2n)$. Then for the subpermutation $p=\pi_T[b,b+n]$ we have $\phi_L(p) = L(p') = \pi_T[2b,2b+2n-1]$, so $\phi_L$ is surjective. A similar argument shows that $\phi_R$ and $\phi_M$ are also surjective.
\begin{lemma}
\label{UpperBoundForTau}
For $n \geq 2$:
\begin{align*}
\tau_T(2n) &\leq 2(\tau_T(n+1)) \\
\tau_T(2n+1) &\leq \tau_T(n+1) + \tau_T(n+2)
\end{align*}
\end{lemma}
\begin{proof}
Let $n \geq 2$. We have:
\begin{align*}
\abs{Perm_{ev}(2n)} &\leq \abs{Perm(n+1)} \\
\abs{Perm_{odd}(2n)} &\leq \abs{Perm(n+1)} \\
\\
\abs{Perm_{ev}(2n+1)} &= \abs{Perm(n+1)} \\
\abs{Perm_{odd}(2n+1)} &\leq \abs{Perm(n+2)}
\end{align*}
since $\phi$ is a bijection and the three maps $\phi_L$, $\phi_R$, and $\phi_M$ are all surjective. Thus we have the following inequalities:
\begin{align*}
\tau_T(2n) &= \abs{Perm(2n)} = \abs{Perm_{ev}(2n)} + \abs{Perm_{odd}(2n)} \\
&\leq \abs{Perm(n+1)} + \abs{Perm(n+1)} = 2(\tau_T(n+1)) \\
\\
\tau_T(2n+1) &= \abs{Perm(2n+1)} = \abs{Perm_{ev}(2n+1)} + \abs{Perm_{odd}(2n+1)} \\
&\leq \abs{Perm(n+1)} + \abs{Perm(n+2)} = \tau_T(n+1) + \tau_T(n+2)
\end{align*}
$\qed$
\end{proof}
The three maps $\phi_L$, $\phi_R$, and $\phi_M$ are not injective. To see this, consider the subpermutations
\begin{align*}
&p=\pi_T[5,9] = [2 \hspace{.5ex} 3 \hspace{.5ex} 5 \hspace{.5ex} 4 \hspace{.5ex} 1] \\
&q=\pi_T[23,27] = [1 \hspace{.5ex} 3 \hspace{.5ex} 5 \hspace{.5ex} 4 \hspace{.5ex} 2].
\end{align*}
Both of these subpermutations have form $T[5,8] = T[23,26] = 0011$. Then applying the maps we see:
\begin{align*}
&p' = \phi(p) = \pi_T[10,18] = [4 \hspace{.5ex} 8 \hspace{.5ex} 5 \hspace{.5ex} 9 \hspace{.5ex} 7 \hspace{.5ex} 2 \hspace{.5ex} 6 \hspace{.5ex} 1 \hspace{.5ex} 3] \\
&q' = \phi(q) = \pi_T[46,54] = [3 \hspace{.5ex} 8 \hspace{.5ex} 5 \hspace{.5ex} 9 \hspace{.5ex} 7 \hspace{.5ex} 2 \hspace{.5ex} 6 \hspace{.5ex} 1 \hspace{.5ex} 4]
\end{align*}
\begin{align*}
&\phi_L(p) = \pi_T[10,17] = [3 \hspace{.5ex} 7 \hspace{.5ex} 4 \hspace{.5ex} 8 \hspace{.5ex} 6 \hspace{.5ex} 2 \hspace{.5ex} 5 \hspace{.5ex} 1] \\
&\phi_L(q) = \pi_T[46,53] = [3 \hspace{.5ex} 7 \hspace{.5ex} 4 \hspace{.5ex} 8 \hspace{.5ex} 6 \hspace{.5ex} 2 \hspace{.5ex} 5 \hspace{.5ex} 1]
\end{align*}
\begin{align*}
&\phi_R(p) = \pi_T[11,18] = [7 \hspace{.5ex} 4 \hspace{.5ex} 8 \hspace{.5ex} 6 \hspace{.5ex} 2 \hspace{.5ex} 5 \hspace{.5ex} 1 \hspace{.5ex} 3] \\
&\phi_R(q) = \pi_T[47,54] = [7 \hspace{.5ex} 4 \hspace{.5ex} 8 \hspace{.5ex} 6 \hspace{.5ex} 2 \hspace{.5ex} 5 \hspace{.5ex} 1 \hspace{.5ex} 3]
\end{align*}
\begin{align*}
&\phi_M(p) = \pi_T[11,17] = [6 \hspace{.5ex} 3 \hspace{.5ex} 7 \hspace{.5ex} 5 \hspace{.5ex} 2 \hspace{.5ex} 4 \hspace{.5ex} 1] \\
&\phi_M(q) = \pi_T[47,53] = [6 \hspace{.5ex} 3 \hspace{.5ex} 7 \hspace{.5ex} 5 \hspace{.5ex} 2 \hspace{.5ex} 4 \hspace{.5ex} 1]
\end{align*}
So $p' \neq q'$ but $\phi_L(p) = \phi_L(q)$, $\phi_R(p) = \phi_R(q)$, and $\phi_M(p) = \phi_M(q)$, so these maps are not injective in general. Hence the inequalities in Lemma $\ref{UpperBoundForTau}$ are only upper bounds. The next goal is to determine when these maps fail to be injective.
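The example above can be reproduced numerically; the sketch below (helper names are ours, not from the text) recomputes the subpermutations by ranking shifts of a Thue-Morse prefix and confirms that $p' \neq q'$ while the restricted images coincide.

```python
# Self-contained numerical check of the example above (helper names ours).
def thue_morse(m):
    w = "0"
    while len(w) < m:
        w += "".join("1" if c == "0" else "0" for c in w)
    return w[:m]

def subperm(w, a, b):
    positions = list(range(a, b + 1))
    ranked = sorted(positions, key=lambda i: w[i:])
    rank = {p: r + 1 for r, p in enumerate(ranked)}
    return tuple(rank[p] for p in positions)

T = thue_morse(256)
print(subperm(T, 5, 9) != subperm(T, 23, 27))      # True:  p != q
print(subperm(T, 10, 18) != subperm(T, 46, 54))    # True:  p' != q'
print(subperm(T, 10, 17) == subperm(T, 46, 53))    # True:  phi_L(p) = phi_L(q)
print(subperm(T, 11, 18) == subperm(T, 47, 54))    # True:  phi_R(p) = phi_R(q)
print(subperm(T, 11, 17) == subperm(T, 47, 53))    # True:  phi_M(p) = phi_M(q)
```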
\section{Type $k$ and Complementary Pairs}
\label{TypeKandCompPairs}
An interesting pattern occurs in some subpermutations of $\pi_T$. The subpermutations that follow this pattern are said to be subpermutations of type $k$, described in the next definition. Proposition $\ref{CalculateTheFwdImage}$ will be used inductively to show the maps $\phi$, $\phi_L$, $\phi_R$, and $\phi_M$ preserve subpermutations of type $k$. An induction argument using this fact will show that two subpermutations have the same form if and only if they are a complementary pair of type $k$, defined below. A corollary of this will determine when the maps $\phi_L$, $\phi_R$, and $\phi_M$ are bijective.
\begin{definition}
A subpermutation $p = \pi_T[a,a+n]$ is of $\textit{type $k$}$, for $k \geq 1$, if $p$ can be decomposed as
$$ p = [\alpha_1 \cdots \alpha_k \lambda_1 \cdots \lambda_l \beta_1 \cdots \beta_k] $$
where $\alpha_i = \beta_i + \varepsilon$ for each $i = 1, 2, \ldots, k$, for some $\varepsilon \in \{ -1, 1 \}$.
\end{definition}
Some examples of subpermutations of types 1, 2, and 3, respectively, are:
\begin{align*}
&\pi_T[5,9] = [2 \hspace{.5ex} 3 \hspace{.5ex} 5 \hspace{.5ex} 4 \hspace{.5ex} 1] \\
&\pi_T[20,25] = [2 \hspace{.5ex} 5 \hspace{.5ex} 4 \hspace{.5ex} 1 \hspace{.5ex} 3 \hspace{.5ex} 6] \\
&\pi_T[6,12] = [3 \hspace{.5ex} 7 \hspace{.5ex} 5 \hspace{.5ex} 1 \hspace{.5ex} 2 \hspace{.5ex} 6 \hspace{.5ex} 4]
\end{align*}
\begin{definition}
Suppose that the subpermutation $p = \pi_T[a,a+n]$ is of type $k$ so that for $\varepsilon \in \{-1, 1 \}$, $\alpha_i = \beta_i + \varepsilon$ for each $i = 1, 2, \ldots, k$. If there exists a subpermutation $q = \pi_T[b,b+n]$ of type $k$ so that $p$ and $q$ can be decomposed as:
\begin{align*}
p &= \pi_T[a,a+n] = [\alpha_1 \cdots \alpha_k \lambda_1 \cdots \lambda_l \beta_1 \cdots \beta_k] \\
q &= \pi_T[b,b+n] = [\beta_1 \cdots \beta_k \lambda_1 \cdots \lambda_l \alpha_1 \cdots \alpha_k]
\end{align*}
then $p$ and $q$ are said to be a $\textit{complementary pair of type $k$}$. If $p$ and $q$ are a $\textit{complementary pair of type}$ $k \leq 0$ then $p = q$.
\end{definition}
The subpermutations
\begin{align*}
&\pi_T[5,9] = [2 \hspace{.5ex} 3 \hspace{.5ex} 5 \hspace{.5ex} 4 \hspace{.5ex} 1] \\
&\pi_T[23,27] = [1 \hspace{.5ex} 3 \hspace{.5ex} 5 \hspace{.5ex} 4 \hspace{.5ex} 2]
\end{align*}
are a complementary pair of type 1. The following subpermutation of type 1
$$ \pi_T[0,3] = [2 \hspace{.5ex} 4 \hspace{.5ex} 3 \hspace{.5ex} 1] $$
does not have a complementary pair, since $[1 \hspace{.5ex} 4 \hspace{.5ex} 3 \hspace{.5ex} 2] $ is not a subpermutation of $\pi_T$.
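Type $k$ and the complementary-pair candidate are simple to test by machine; in the sketch below (helper names are ours, not from the text) a subpermutation is of type $k$ when its first $k$ and last $k$ entries differ entrywise by the same $\varepsilon \in \{-1,1\}$.

```python
# Sketch (helper names ours): test whether a subpermutation is of type k, and
# form its complementary-pair candidate by swapping the alpha and beta blocks.
def is_type_k(p, k):
    n = len(p)
    if k < 1 or 2 * k > n:
        return False
    diffs = {p[i] - p[n - k + i] for i in range(k)}
    return diffs == {1} or diffs == {-1}

def complementary(p, k):
    """Swap the first k entries (alpha block) with the last k (beta block)."""
    return p[-k:] + p[k:-k] + p[:k]

print(is_type_k((2, 3, 5, 4, 1), 1))        # True   (pi_T[5,9], type 1)
print(is_type_k((2, 5, 4, 1, 3, 6), 2))     # True   (pi_T[20,25], type 2)
print(is_type_k((3, 7, 5, 1, 2, 6, 4), 3))  # True   (pi_T[6,12], type 3)
print(complementary((2, 3, 5, 4, 1), 1))    # (1, 3, 5, 4, 2) = pi_T[23,27]
```

Whether the candidate returned by `complementary` is actually a subpermutation of $\pi_T$ still has to be checked separately, as the example $\pi_T[0,3]$ above shows.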
The following proposition considers subpermutations of type $k$, and complementary pairs of type $k$.
\begin{proposition}
\label{ImageOfTypeK}
Suppose $p = \pi_T[a,a+n]$ is of type $k$ and $q = \pi_T[b,b+n]$ is of type $k$, with $k \geq 1$, and that $p$ and $q$ are a complementary pair of type $k$.
\begin{itemize}
\item[(a)] $\phi(p)$ is of type $2k-1$, and if $k \geq 2$ then $\phi_L(p)$ and $\phi_R(p)$ are of type $2k-2$ and $\phi_M(p)$ is of type $2k-3$.
\item[(b)] $\phi(p)$ and $\phi(q)$ are a complementary pair of type $2k-1$.
\item[(c)] $\phi_L(p)$ and $\phi_L(q)$ are a complementary pair of type $2k-2$.
\item[(d)] $\phi_R(p)$ and $\phi_R(q)$ are a complementary pair of type $2k-2$.
\item[(e)] $\phi_M(p)$ and $\phi_M(q)$ are a complementary pair of type $2k-3$.
\end{itemize}
\end{proposition}
\begin{proof}
Since $p$ and $q$ are a complementary pair of type $k$ they can be decomposed as
\begin{align*}
p &= \pi_T[a,a+n] = [\alpha_1 \cdots \alpha_k \lambda_1 \cdots \lambda_l \beta_1 \cdots \beta_k] \\
q &= \pi_T[b,b+n] = [\beta_1 \cdots \beta_k \lambda_1 \cdots \lambda_l \alpha_1 \cdots \alpha_k]
\end{align*}
and for $\varepsilon \in \{-1, 1 \}$, $\alpha_i = \beta_i + \varepsilon$ for each $i = 1, 2, \ldots, k$. For the values of $k$ and $l$, $2k+l = n+1$ and $4k+2l-1=2n+1$.
$\vspace{1.0ex}$
$\textbf{(a)}$ The first thing to show is that $\phi(p)$ is of type $2k-1$.
For $i \in \{0, 1, \ldots, k-1 \}$ we have $p_i = p_{n-(k-1)+i}+\varepsilon$, so by Proposition $\ref{CalculateTheFwdImage}$:
$$p'_{2i} = p'_{2(n-(k-1)+i)} +\varepsilon $$
For $i \in \{0, 1, \ldots, k-2 \}$, $p_i < p_{i+1}$ if and only if $p_{n-(k-1)+i} < p_{n-(k-1)+i+1}$, and $p_i < p_n$ if and only if $p_{n-(k-1)+i} < p_n$ since $p_i$ and $p_{n-(k-1)+i}$ are consecutive values. By Proposition $\ref{CalculateTheFwdImage}$:
$$ p'_{2i+1} = p'_{2(n-(k-1)+i)+1} +\varepsilon $$
So for each $i \in \{0, 1, \ldots, 2k-2 \}$: $p'_i = p'_{2n-2k+2+i}+\varepsilon$, and $\phi(p)$ can be decomposed as
$$\phi(p) = \pi_T[2a,2a+2n] = [\alpha'_1 \cdots \alpha'_{2k-1} \lambda'_1 \cdots \lambda'_{2l+1} \beta'_1 \cdots \beta'_{2k-1}],$$
where $\alpha'_i = \beta'_i + \varepsilon$, so $\phi(p)=p'$ is of type $2k-1$.
Next, suppose that $k \geq 2$, so that $2k-1 \geq 3$; we show that $\phi_L(p) = L(p')$ and $\phi_R(p) = R(p')$ are of type $2k-2$ and that $\phi_M(p)$ is of type $2k-3$.
Let $i \in \{0, 1, \ldots, 2k-3\}$, and consider $\phi_L(p) = L(p')$. Since $p'_i$ and $p'_{2n-2k+2+i}$ are consecutive values, $p'_i < p'_{2n}$ if and only if $p'_{2n-2k+2+i} < p'_{2n}$. So if $L(p')_i = p'_i$ then $L(p')_{2n-2k+2+i} = p'_{2n-2k+2+i}$, and if $L(p')_i = p'_i-1$ then $L(p')_{2n-2k+2+i} = p'_{2n-2k+2+i}-1$. In either case, $L(p')_i = L(p')_{2n-2k+2+i} + \varepsilon$ and there is a decomposition
$$\phi_L(p) = \pi_T[2a,2a+2n-1] = [\alpha'_1 \cdots \alpha'_{2k-2} \lambda'_1 \cdots \lambda'_{2l+2} \beta'_1 \cdots \beta'_{2k-2}],$$
and $\phi_L(p)$ is of type $2k-2$.
Now consider $\phi_R(p) = R(p')$. Since $p'_{i+1}$ and $p'_{2n-2k+2+i+1}$ are consecutive values, $p'_{i+1} < p'_0$ if and only if $p'_{2n-2k+2+i+1} < p'_0$. So if $R(p')_i = p'_{i+1}$ then $R(p')_{2n-2k+2+i} = p'_{2n-2k+2+i+1}$, and if $R(p')_i = p'_{i+1}-1$ then $R(p')_{2n-2k+2+i} = p'_{2n-2k+2+i+1}-1$. In either case, $R(p')_i = R(p')_{2n-2k+2+i} + \varepsilon$ and there is a decomposition
$$\phi_R(p) = \pi_T[2a+1,2a+2n] = [\alpha'_1 \cdots \alpha'_{2k-2} \lambda'_1 \cdots \lambda'_{2l+2} \beta'_1 \cdots \beta'_{2k-2}],$$
and $\phi_R(p)$ is of type $2k-2$.
Now consider $\phi_M(p)$, and let $i \in \{0, 1, \ldots, 2k-4\}$. Since $R(p')_i$ and $R(p')_{2n-2k+2+i}$ are consecutive values, $R(p')_i < R(p')_{2n-1}$ if and only if $R(p')_{2n-2k+2+i} < R(p')_{2n-1}$. So if $M(p')_i = L(R(p'))_i = R(p')_i$ then $M(p')_{2n-2k+2+i} = L(R(p'))_{2n-2k+2+i} = R(p')_{2n-2k+2+i}$, and if $M(p')_i = L(R(p'))_i = R(p')_i-1$ then $M(p')_{2n-2k+2+i} = L(R(p'))_{2n-2k+2+i} = R(p')_{2n-2k+2+i}-1$. In either case, $M(p')_i = M(p')_{2n-2k+2+i} + \varepsilon$ and there is a decomposition
$$\phi_M(p) = \pi_T[2a+1,2a+2n-1] = [\alpha'_1 \cdots \alpha'_{2k-3} \lambda'_1 \cdots \lambda'_{2l+3} \beta'_1 \cdots \beta'_{2k-3}],$$
and $\phi_M(p)$ is of type $2k-3$.
$\vspace{1.0ex}$
$\textbf{(b)}$ From (a), $\phi(q) = q'$ is of type $2k-1$. Since $p$ and $q$ are a complementary pair of type $k$, $p_i = p_{n-k+1+i} + \varepsilon = q_i + \varepsilon = q_{n-k+1+i}$ for each $i \in \{0, 1, \ldots, k-1 \}$, and $p_{k+i} = q_{k+i}$ for each $i \in \{0, 1, \ldots, l-1 \}$. Thus for $i \in \{0, 1, \ldots, k-1 \}$:
\begin{align*}
p'_{2i} &= p'_{2(n-k+1 + i)} + \varepsilon \\
p'_{2i} &= q'_{2(n-k+1 + i)} \\
q'_{2(n-k+1+i)} &= q'_{2i}+ \varepsilon
\end{align*}
For $i \in \{0, 1, \ldots, k-2 \}$:
\begin{align*}
p'_{2i+1} &= p'_{2(n-k+1+i)+1} + \varepsilon \\
p'_{2i+1} &= q'_{2(n-k+1+i)+1} \\
q'_{2(n-k+1+i)+1} &= q'_{2i+1}+ \varepsilon
\end{align*}
We know that $p_{k-1} = p_n + \varepsilon = q_{k-1} + \varepsilon = q_n$; assume without loss of generality that $\varepsilon = 1$ (the case $\varepsilon = -1$ is symmetric), so $p_{k-1} > p_n$ and $q_{k-1} < q_n$. Thus if $p_{k-1} < p_k$
$$ p'_{2k-1} = p_{k-1} + \abs{u}_1 + n = q_{k-1} + 1 + \abs{u}_1 + n = q_{k-1} + \abs{u}_1 + (n+1) = q'_{2k-1} $$
and if $p_{k-1} > p_k$
$$ p'_{2k-1} = p_{k-1} + \abs{u}_1 - (n+1) = q_{k-1} + 1 + \abs{u}_1 - (n+1) = q_{k-1} + \abs{u}_1 - n = q'_{2k-1}. $$
By Proposition $\ref{CalculateTheFwdImage}$, since $p_{k+i} = q_{k+i}$ for each $i \in \{0, 1, \ldots, l-1 \}$,
\begin{align*}
p'_{2(k+i)} &= q'_{2(k+i)} \\
p'_{2(k+i)+1} &= q'_{2(k+i)+1}
\end{align*}
Thus there are decompositions of $\phi(p) = p'$ and $\phi(q) = q'$ so that
$$\phi(p) = \pi_T[2a,2a+2n] = [\alpha'_1 \cdots \alpha'_{2k-1} \lambda'_1 \cdots \lambda'_{2l+1} \beta'_1 \cdots \beta'_{2k-1}],$$
$$\phi(q) = \pi_T[2b,2b+2n] = [\beta'_1 \cdots \beta'_{2k-1} \lambda'_1 \cdots \lambda'_{2l+1} \alpha'_1 \cdots \alpha'_{2k-1}],$$
where $\alpha'_i = \beta'_i + \varepsilon$. Therefore $\phi(p)=p'$ and $\phi(q)=q'$ are a complementary pair of type $2k-1$.
$\vspace{1.0ex}$
$\textbf{(c)}$ From (b), $\phi(p)=p'$ and $\phi(q)=q'$ are a complementary pair of type $2k-1$. Suppose $k \geq 2$ and so $2k-3 \geq 1$, and let $i \in \{0, 1, \ldots, 2k-3\}$, then $p'_i = q'_i + \varepsilon = p'_{2n-2k+2 +i} + \varepsilon = q'_{2n-2k+2+i}$. Thus $p'_i$ and $p'_{2n-2k+2 +i}$ are consecutive values, as are $q'_i$ and $q'_{2n-2k+2 +i}$, also $p'_{2n} < p'_i$ if and only if $p'_{2n} < p'_{2n-2k+2 +i}$, and
$$p'_{2n} < p'_i \text{ and } p'_{2n} < p'_{2n-2k+2 +i} \hspace{1.5ex} \Longleftrightarrow \hspace{1.5ex} q'_{2n} < q'_i \text{ and } q'_{2n} < q'_{2n-2k+2 +i}.$$
If $L(p')_i = p'_i-1$ or $L(p')_i = p'_i$, we have $L(q')_i = q'_i-1$ or $L(q')_i = q'_i$ (respectively), and $L(p')_i = L(q')_i+ \varepsilon = L(p')_{2n-2k+2 +i} + \varepsilon = L(q')_{2n-2k+2 +i}$.
Now let $i \in \{0, 1, \ldots, 2l \}$, so $p'_{2k-1+i} = q'_{2k-1+i}$. Thus $p'_{2n} < p'_{2k-1+i}$ if and only if $q'_{2n} < q'_{2k-1+i}$, and so we have $L(p')_{2k-1+i} = L(q')_{2k-1+i}$.
Then $p'_{2k-2} = q'_{2k-2} + \varepsilon = p'_{2n} + \varepsilon = q'_{2n}$, so $p'_{2k-2} > p'_{2n}$ if and only if $q'_{2k-2} < q'_{2n}$. If $p'_{2k-2} > p'_{2n}$ and $q'_{2k-2} < q'_{2n}$, then $p'_{2k-2} = q'_{2k-2} + 1 = p'_{2n} + 1 = q'_{2n}$ so
$$ L(p')_{2k-2} = p'_{2k-2} - 1 = q'_{2k-2} = L(q')_{2k-2}.$$
If $p'_{2k-2} < p'_{2n}$ and $q'_{2k-2} > q'_{2n}$, then $p'_{2k-2} = q'_{2k-2} - 1 = p'_{2n} - 1 = q'_{2n}$ so
$$ L(p')_{2k-2} = p'_{2k-2} = q'_{2k-2} - 1 = L(q')_{2k-2}.$$
In either case, $ L(p')_{2k-2} = L(q')_{2k-2}$. Thus there are decompositions of $\phi_L(p) = L(p')$ and $\phi_L(q) = L(q')$ so that
$$\phi_L(p) = \pi_T[2a,2a+2n-1] = [\alpha'_1 \cdots \alpha'_{2k-2} \lambda'_1 \cdots \lambda'_{2l+2} \beta'_1 \cdots \beta'_{2k-2}],$$
$$\phi_L(q) = \pi_T[2b,2b+2n-1] = [\beta'_1 \cdots \beta'_{2k-2} \lambda'_1 \cdots \lambda'_{2l+2} \alpha'_1 \cdots \alpha'_{2k-2}],$$
where $\alpha'_i = \beta'_i + \varepsilon$. Therefore $\phi_L(p)$ and $\phi_L(q)$ are a complementary pair of type $2k-2$.
Now suppose that $k=1$ and so $2k-1 = 1$. Then $p'_0 = q'_0 + \varepsilon = p'_{2n} + \varepsilon= q'_{2n}$ and $p'_i = q'_i$ for $i = 1,2, \ldots, 2n-1$. If $p'_0 > p'_{2n}$ and $q'_0 < q'_{2n}$, then $p'_0 = q'_0 + 1 = p'_{2n} + 1 = q'_{2n}$ so
$$ L(p')_0 = p'_0 - 1 = q'_0 = L(q')_0.$$
If $p'_0 < p'_{2n}$ and $q'_0 > q'_{2n}$, then $p'_0 = q'_0 - 1 = p'_{2n} - 1 = q'_{2n}$ so
$$ L(p')_0 = p'_0 = q'_0 - 1 = L(q')_0.$$
In either case, $ L(p')_0 = L(q')_0$. Then for each $i \in \{1, 2, \ldots, 2n-1 \}$, $p'_i = q'_i$, and $p'_{2n} < p'_i$ if and only if $q'_{2n} < q'_i$ so $L(p')_i = L(q')_i$. Therefore, if $k=1$ then $\phi_L(p) = \phi_L(q)$.
$\vspace{1.0ex}$
$\textbf{(d)}$ From (b), $\phi(p)=p'$ and $\phi(q)=q'$ are a complementary pair of type $2k-1$. Suppose $k \geq 2$ and so $2k-3 \geq 1$, and let $i \in \{0, 1, \ldots, 2k-3\}$, then $p'_{i+1} = q'_{i+1} + \varepsilon = p'_{2n-2k+2 +i+1} + \varepsilon = q'_{2n-2k+2+i+1}$. Thus $p'_{i+1}$ and $p'_{2n-2k+2 +i+1}$ are consecutive values, as are $q'_{i+1}$ and $q'_{2n-2k+2 +i+1}$, also $p'_0 < p'_{i+1}$ if and only if $p'_0 < p'_{2n-2k+2 +i+1}$, and
$$p'_0 < p'_{i+1} \text{ and } p'_0 < p'_{2n-2k+2 +i+1} \hspace{1.5ex} \Longleftrightarrow \hspace{1.5ex} q'_0 < q'_{i+1} \text{ and } q'_0 < q'_{2n-2k+2 +i+1}.$$
If $R(p')_i = p'_{i+1}-1$ or $R(p')_i = p'_{i+1}$, we have $R(q')_i = q'_{i+1}-1$ or $R(q')_i = q'_{i+1}$ (respectively), and $R(p')_i = R(q')_i+ \varepsilon = R(p')_{2n-2k+2 +i} + \varepsilon = R(q')_{2n-2k+2 +i}$.
Now let $i \in \{0, 1, \ldots, 2l \}$, so $p'_{2k-1+i} = q'_{2k-1+i}$. Thus $p'_0 < p'_{2k-1+i}$ if and only if $q'_0 < q'_{2k-1+i}$, and so we have $R(p')_{2k-2+i} = R(q')_{2k-2+i}$.
Then $p'_0 = q'_0 + \varepsilon = p'_{2n-2k+2} + \varepsilon = q'_{2n-2k+2}$, so $p'_{2n-2k+2} > p'_0$ if and only if $q'_{2n-2k+2} < q'_0$. If $p'_{2n-2k+2} > p'_0$ and $q'_{2n-2k+2} < q'_0$, then $p'_{2n-2k+2} = q'_{2n-2k+2} + 1 = p'_0 + 1 = q'_0$ so
$$ R(p')_{2n-2k+1} = p'_{2n-2k+2} - 1 = q'_{2n-2k+2} = R(q')_{2n-2k+1}.$$
If $p'_{2n-2k+2} < p'_0$ and $q'_{2n-2k+2} > q'_0$, then $p'_{2n-2k+2} = q'_{2n-2k+2} - 1 = p'_0 - 1 = q'_0$ so
$$ R(p')_{2n-2k+1} = p'_{2n-2k+2} = q'_{2n-2k+2} - 1 = R(q')_{2n-2k+1}.$$
In either case, $ R(p')_{2n-2k+1} = R(q')_{2n-2k+1}$. Thus there are decompositions of $\phi_R(p) = R(p')$ and $\phi_R(q) = R(q')$ so that
$$\phi_R(p) = \pi_T[2a+1,2a+2n] = [\alpha'_1 \cdots \alpha'_{2k-2} \lambda'_1 \cdots \lambda'_{2l+2} \beta'_1 \cdots \beta'_{2k-2}],$$
$$\phi_R(q) = \pi_T[2b+1,2b+2n] = [\beta'_1 \cdots \beta'_{2k-2} \lambda'_1 \cdots \lambda'_{2l+2} \alpha'_1 \cdots \alpha'_{2k-2}],$$
where $\alpha'_i = \beta'_i + \varepsilon$. Therefore $\phi_R(p)$ and $\phi_R(q)$ are a complementary pair of type $2k-2$.
Now suppose that $k=1$ and so $2k-1 = 1$. Then $p'_0 = q'_0 + \varepsilon = p'_{2n} + \varepsilon= q'_{2n}$ and $p'_i = q'_i$ for $i = 1,2, \ldots, 2n-1$. If $p'_0 > p'_{2n}$ and $q'_0 < q'_{2n}$, then $p'_0 = q'_0 + 1 = p'_{2n} + 1 = q'_{2n}$ so
$$ R(p')_{2n-1} = p'_{2n} = q'_{2n}-1 = R(q')_{2n-1}.$$
If $p'_0 < p'_{2n}$ and $q'_0 > q'_{2n}$, then $p'_0 = q'_0 - 1 = p'_{2n} - 1 = q'_{2n}$ so
$$ R(p')_{2n-1} = p'_{2n} - 1 = q'_{2n} = R(q')_{2n-1}.$$
In either case, $ R(p')_{2n-1} = R(q')_{2n-1}$. Then for each $i \in \{1, 2, \ldots, 2n-1 \}$, $p'_i = q'_i$, and $p'_0 < p'_i$ if and only if $q'_0 < q'_i$ so $R(p')_{i-1} = R(q')_{i-1}$. Therefore, if $k=1$ then $\phi_R(p) = \phi_R(q)$.
$\vspace{1.0ex}$
$\textbf{(e)}$ From (d), $\phi_R(p)=R(p')$ and $\phi_R(q)=R(q')$ are a complementary pair of type $2k-2$. Suppose $k \geq 2$, so that $2k-4 \geq 0$, and let $i \in \{0, \ldots, 2k-4\}$; then $R(p')_i = R(q')_i + \varepsilon = R(p')_{2n-2k+2 +i} + \varepsilon = R(q')_{2n-2k+2+i}$. Thus $R(p')_i$ and $R(p')_{2n-2k+2 +i}$ are consecutive values, as are $R(q')_i$ and $R(q')_{2n-2k+2 +i}$, and $R(p')_{2n-1} < R(p')_i$ if and only if $R(p')_{2n-1} < R(p')_{2n-2k+2 +i}$, and
$$R(p')_{2n-1} < R(p')_i \text{ and } R(p')_{2n-1} < R(p')_{2n-2k+2 +i} \hspace{1.5ex} \Longleftrightarrow \hspace{1.5ex} R(q')_{2n-1} < R(q')_i \text{ and } R(q')_{2n-1} < R(q')_{2n-2k+2 +i}.$$
If $L(R(p'))_i = R(p')_i-1$ or $L(R(p'))_i = R(p')_i$, we have $L(R(q'))_i = R(q')_i-1$ or $L(R(q'))_i = R(q')_i$ (respectively), and $L(R(p'))_i = L(R(q'))_i+ \varepsilon = L(R(p'))_{2n-2k+2 +i} + \varepsilon = L(R(q'))_{2n-2k+2 +i}$.
Now let $i \in \{0, 1, \ldots, 2l+1 \}$, so $R(p')_{2k-2+i} = R(q')_{2k-2+i}$. Thus $R(p')_{2n-1} < R(p')_{2k-2+i}$ if and only if $R(q')_{2n-1} < R(q')_{2k-2+i}$, and so we have $L(R(p'))_{2k-2+i} = L(R(q'))_{2k-2+i}$.
Then $R(p')_{2k-3} = R(q')_{2k-3} + \varepsilon = R(p')_{2n-1} + \varepsilon = R(q')_{2n-1}$, so $R(p')_{2k-3} > R(p')_{2n-1}$ if and only if $R(q')_{2k-3} < R(q')_{2n-1}$. If $R(p')_{2k-3} > R(p')_{2n-1}$ and $R(q')_{2k-3} < R(q')_{2n-1}$, then $R(p')_{2k-3} = R(q')_{2k-3} + 1 = R(p')_{2n-1} + 1 = R(q')_{2n-1}$ so
$$ L(R(p'))_{2k-3} = R(p')_{2k-3} - 1 = R(q')_{2k-3} = L(R(q'))_{2k-3}.$$
If $R(p')_{2k-3} < R(p')_{2n-1}$ and $R(q')_{2k-3} > R(q')_{2n-1}$, then $R(p')_{2k-3} = R(q')_{2k-3} - 1 = R(p')_{2n-1} - 1 = R(q')_{2n-1}$, so
$$ L(R(p'))_{2k-3} = R(p')_{2k-3} = R(q')_{2k-3} - 1 = L(R(q'))_{2k-3}.$$
In either case, $ L(R(p'))_{2k-3} = L(R(q'))_{2k-3}$. Thus there are decompositions of $\phi_M(p) = L(R(p'))$ and $\phi_M(q) = L(R(q'))$ so that
$$\phi_M(p) = \pi_T[2a+1,2a+2n-1] = [\alpha'_1 \cdots \alpha'_{2k-3} \lambda'_1 \cdots \lambda'_{2l+3} \beta'_1 \cdots \beta'_{2k-3}],$$
$$\phi_M(q) = \pi_T[2b+1,2b+2n-1] = [\beta'_1 \cdots \beta'_{2k-3} \lambda'_1 \cdots \lambda'_{2l+3} \alpha'_1 \cdots \alpha'_{2k-3}],$$
where $\alpha'_i = \beta'_i + \varepsilon$. Therefore $\phi_M(p)$ and $\phi_M(q)$ are a complementary pair of type $2k-3$.
Now suppose that $k=1$ and so $2k-1 = 1$. Then $\phi_R(p) = \phi_R(q)$, and thus $L(R(p')) = L(R(q'))$. Therefore, if $k=1$ then $\phi_M(p) = \phi_M(q)$.
$\qed$
\end{proof}
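As a sanity check, the proposition can be verified numerically on small cases. The sketch below (our own helper names, not code from this paper) uses the identifications $\phi(p) = \pi_T[2a,2a+2n]$, $\phi_L(p) = \pi_T[2a,2a+2n-1]$, $\phi_R(p) = \pi_T[2a+1,2a+2n]$ and $\phi_M(p) = \pi_T[2a+1,2a+2n-1]$ from the proof, and assumes a finite comparison window suffices for the suffix ranking that defines $\pi_T$.

```python
def thue_morse(length):
    # T_i is the parity of the number of 1-bits in the binary expansion of i
    return [bin(i).count("1") % 2 for i in range(length)]

T = thue_morse(1 << 12)

def subperm(a, b, window=64):
    # pi_T[a,b] via lexicographic ranking of suffixes on a finite window
    order = sorted(range(a, b + 1), key=lambda i: T[i:i + window])
    rank = {i: r + 1 for r, i in enumerate(order)}
    return [rank[i] for i in range(a, b + 1)]

def comp_type(p, q):
    # Return k if p, q decompose as [A L B] and [B L A] with A_i = B_i + eps
    # for a fixed eps in {-1, 1} (lambda-block assumed nonempty), else None.
    m = len(p)
    for k in range(1, (m - 1) // 2 + 1):
        A, B, mid = p[:k], p[-k:], p[k:m - k]
        if q[:k] == B and q[-k:] == A and q[k:m - k] == mid:
            eps = A[0] - B[0]
            if eps in (-1, 1) and all(A[i] - B[i] == eps for i in range(k)):
                return k
    return None

checked = 0
for n in (4, 5, 6):                       # subpermutations of lengths 5, 6, 7
    first = {}                            # subpermutation -> first position seen
    for a in range(400):
        first.setdefault(tuple(subperm(a, a + n)), a)
    items = list(first.items())
    for i, (p, a) in enumerate(items):
        for q, b in items[i + 1:]:
            k = comp_type(list(p), list(q))
            if k is None:
                continue
            # (b): phi(p) and phi(q) form a complementary pair of type 2k-1
            assert comp_type(subperm(2 * a, 2 * a + 2 * n),
                             subperm(2 * b, 2 * b + 2 * n)) == 2 * k - 1
            pL, qL = subperm(2*a, 2*a + 2*n - 1), subperm(2*b, 2*b + 2*n - 1)
            pR, qR = subperm(2*a + 1, 2*a + 2*n), subperm(2*b + 1, 2*b + 2*n)
            pM, qM = subperm(2*a + 1, 2*a + 2*n - 1), subperm(2*b + 1, 2*b + 2*n - 1)
            if k == 1:                    # (c)-(e) collapse to equalities
                assert pL == qL and pR == qR and pM == qM
            else:                         # (c)-(e): types 2k-2, 2k-2, 2k-3
                assert comp_type(pL, qL) == 2 * k - 2
                assert comp_type(pR, qR) == 2 * k - 2
                assert comp_type(pM, qM) == 2 * k - 3
            checked += 1
print("pairs checked:", checked)
```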
\begin{theorem}
\label{SameFormIFFCompPair}
Let $p$ and $q$ be distinct subpermutations of $\pi_T$. Then $p$ and $q$ have the same form if and only if $p$ and $q$ are a complementary pair of type $k$, for some $k \geq 1$.
\end{theorem}
\begin{proof}
First, suppose that $p$ and $q$ are a complementary pair of type $k$, for some $k \geq 1$. So there are decompositions:
\begin{align*}
p &= \pi_T[a,a+n] = [\alpha_1 \cdots \alpha_k \lambda_1 \cdots \lambda_l \beta_1 \cdots \beta_k] \\
q &= \pi_T[b,b+n] = [\beta_1 \cdots \beta_k \lambda_1 \cdots \lambda_l \alpha_1 \cdots \alpha_k]
\end{align*}
so that for $\varepsilon \in \{-1, 1 \}$, $\alpha_i = \beta_i + \varepsilon$ for each $i \in \{1, 2, \ldots, k \}$.
For each $i \in \{0, 1, \ldots, k-2 \}$, $p_i$ and $p_{n-k+1+i}$ are consecutive values, as are $q_i$ and $q_{n-k+1+i}$, so
$$ p_i < p_{i+1} \text{ and } p_{n-k+1+i} < p_{n-k+1+i+1} \hspace{1.5ex} \Longleftrightarrow \hspace{1.5ex} q_i < q_{i+1} \text{ and } q_{n-k+1+i} < q_{n-k+1+i+1}.$$
Since $p_{k-1} = q_{k-1} + \varepsilon$, $p_{k+l} + \varepsilon = q_{k+l}$, $p_k = q_k$, and $p_{k+l-1} = q_{k+l-1}$:
\begin{align*}
p_{k-1} < p_k \hspace{1.5ex} &\Longleftrightarrow \hspace{1.5ex} q_{k-1} < q_k \\
p_{k+l-1} < p_{k+l} \hspace{1.5ex} &\Longleftrightarrow \hspace{1.5ex} q_{k+l-1} < q_{k+l}.
\end{align*}
For each $i \in \{0, 1, \ldots, l-2 \}$, $p_{k+i} = q_{k+i}$, so
$$ p_{k+i} < p_{k+i+1} \hspace{1.5ex} \Longleftrightarrow \hspace{1.5ex} q_{k+i} < q_{k+i+1}. $$
Therefore $p_i < p_{i+1}$ if and only if $q_i < q_{i+1}$ for each $i \in \{0, 1, \ldots, n-1 \}$, so $p$ and $q$ have the same form.
$\vspace{1.0ex}$
To show that distinct subpermutations with the same form are a complementary pair of type $k$, for some $k \geq 1$, an induction argument will be used. The subpermutations of lengths 2 through 9 are listed in Appendix $\ref{SecTheSubperms}$, along with the form of the subpermutations. It can be seen that distinct subpermutations with the same form are a complementary pair of type $k$, for some $k \geq 1$.
Assume that $n \geq 9$ and that the theorem is true for all subpermutations of length at most $n$. Let $p'$ and $q'$ be distinct subpermutations of length $n+1$ with the same form, so $p'_i < p'_{i+1}$ if and only if $q'_i < q'_{i+1}$ for each $i = 0, 1, \ldots, n-1$.
Then
$$p', q' \in Perm_{ev}(n+1) \hspace{3.0ex} \text{ or } \hspace{3.0ex} p', q' \in Perm_{odd}(n+1).$$
Suppose, for a contradiction, that (without loss of generality) $p' \in Perm_{ev}(n+1)$ and $q' \in Perm_{odd}(n+1)$. Then $p' = \pi_T[2a,2a+n]$ and $q' = \pi_T[2b+1,2b+n+1]$, so $T[2a,2a+n-1] = T[2b+1,2b+n]$. Since $n \geq 9$, $T[2a,2a+n-1]$ contains either 00 or 11, so there is some $c$ so that $T[2a+2c+1,2a+2c+2]$ is 00 or 11. Then $T[2b+1+2c+1,2b+1+2c+2] = T[2b+2c+2,2b+2c+3]$ must equal $T[2a+2c+1,2a+2c+2]$, but $T[2b+2c+2,2b+2c+3]$ is either $\mu_T(0) = 01$ or $\mu_T(1) = 10$, so $T[2b+2c+2,2b+2c+3] \neq T[2a+2c+1,2a+2c+2]$. Therefore, either $p',q' \in Perm_{ev}(n+1)$ or $p',q' \in Perm_{odd}(n+1)$.
Thus one of the 4 following cases must hold:
\begin{enumerate}
\item $p',q' \in Perm_{ev}(n+1)$ and $n+1$ is odd
\item $p',q' \in Perm_{ev}(n+1)$ and $n+1$ is even
\item $p',q' \in Perm_{odd}(n+1)$ and $n+1$ is even
\item $p',q' \in Perm_{odd}(n+1)$ and $n+1$ is odd
\end{enumerate}
$\vspace{0.5ex}$
$\textbf{Case 1}$ Suppose $p',q' \in Perm_{ev}(n+1)$ and $n+1 = 2m+1$, so there are numbers $a$ and $b$ so that $p' = \pi_T[2a,2a+2m]$ and $q' = \pi_T[2b,2b+2m]$, and
$$ p = \pi_T[a,a+m] \hspace{8.0ex} q = \pi_T[b,b+m], $$
$$ p' = \phi(p) \hspace{8.0ex} q' = \phi(q). $$
If $T[a,a+m-1] \neq T[b,b+m-1]$ then $T[2a,2a+2m-1] \neq T[2b,2b+2m-1]$. Hence
$$T[a,a+m-1] = T[b,b+m-1]$$
and $p$ and $q$ have the same form. If $p=q$ then $p'=q'$ by Lemma $\ref{pISqIFFppISqp}$, contradicting $p' \neq q'$; thus $p \neq q$. By the induction hypothesis, $p$ and $q$ are a complementary pair of type $k$, for some $k \geq 1$. Therefore, by Proposition $\ref{ImageOfTypeK}$, $\phi(p) = p'$ and $\phi(q) = q'$ are a complementary pair of type $2k-1$.
$\vspace{0.5ex}$
$\textbf{Case 2}$ Suppose $p',q' \in Perm_{ev}(n+1)$ and $n+1 = 2m$, so there are numbers $a$ and $b$ so that $p' = \pi_T[2a,2a+2m-1]$ and $q' = \pi_T[2b,2b+2m-1]$, and
$$ p = \pi_T[a,a+m] \hspace{8.0ex} q = \pi_T[b,b+m], $$
$$ p' = \phi_L(p) \hspace{8.0ex} q' = \phi_L(q). $$
Since $p'$ and $q'$ have the same form, $T[2a,2a+2m-2] = T[2b,2b+2m-2]$. Thus $T_{2a+2m-2} = T_{2b+2m-2}$ implies $T_{a+m-1} = T_{b+m-1}$, so
$$ T[2a+2m-2,2a+2m-1] = \mu_T(T_{a+m-1}) = \mu_T(T_{b+m-1}) = T[2b+2m-2,2b+2m-1]$$
and
$$T[2a,2a+2m-1] = T[2b,2b+2m-1].$$
If $T[a,a+m-1] \neq T[b,b+m-1]$ then $T[2a,2a+2m-1] \neq T[2b,2b+2m-1]$. Hence
$$T[a,a+m-1] = T[b,b+m-1]$$
and $p$ and $q$ have the same form. If $p=q$ then $\phi(p)=\phi(q)$ by Lemma $\ref{pISqIFFppISqp}$, and $p'=L(\phi(p))=L(\phi(q))=q'$; thus $p \neq q$. By the induction hypothesis, $p$ and $q$ are a complementary pair of type $k$, for some $k \geq 1$. If $k=1$, then $\phi_L(p) = \phi_L(q)$ by Proposition $\ref{ImageOfTypeK}$, so $p' = q'$; thus $k \geq 2$. Therefore, by Proposition $\ref{ImageOfTypeK}$, $\phi_L(p) = p'$ and $\phi_L(q) = q'$ are a complementary pair of type $2k-2 \geq 2$.
$\vspace{0.5ex}$
$\textbf{Case 3}$ Suppose $p',q' \in Perm_{odd}(n+1)$ and $n+1 = 2m$, so there are numbers $a$ and $b$ so that $p' = \pi_T[2a+1,2a+2m]$ and $q' = \pi_T[2b+1,2b+2m]$, and
$$ p = \pi_T[a,a+m] \hspace{8.0ex} q = \pi_T[b,b+m], $$
$$ p' = \phi_R(p) \hspace{8.0ex} q' = \phi_R(q). $$
Since $p'$ and $q'$ have the same form, $T[2a+1,2a+2m-1] = T[2b+1,2b+2m-1]$. Thus $T_{2a+1} = T_{2b+1}$ implies $T_{a} = T_{b}$, so
$$ T[2a,2a+1] = \mu_T(T_{a}) = \mu_T(T_{b}) = T[2b,2b+1]$$
and
$$T[2a,2a+2m-1] = T[2b,2b+2m-1].$$
If $T[a,a+m-1] \neq T[b,b+m-1]$ then $T[2a,2a+2m-1] \neq T[2b,2b+2m-1]$. Hence
$$T[a,a+m-1] = T[b,b+m-1]$$
and $p$ and $q$ have the same form. If $p=q$ then $\phi(p)=\phi(q)$ by Lemma $\ref{pISqIFFppISqp}$, and $p'=R(\phi(p))=R(\phi(q))=q'$; thus $p \neq q$. By the induction hypothesis, $p$ and $q$ are a complementary pair of type $k$, for some $k \geq 1$. If $k=1$, then $\phi_R(p) = \phi_R(q)$ by Proposition $\ref{ImageOfTypeK}$, so $p' = q'$; thus $k \geq 2$. Therefore, by Proposition $\ref{ImageOfTypeK}$, $\phi_R(p) = p'$ and $\phi_R(q) = q'$ are a complementary pair of type $2k-2 \geq 2$.
$\vspace{0.5ex}$
$\textbf{Case 4}$ Suppose $p',q' \in Perm_{odd}(n+1)$ and $n+1 = 2m+1$, so there are numbers $a$ and $b$ so that $p' = \pi_T[2a+1,2a+2m+1]$ and $q' = \pi_T[2b+1,2b+2m+1]$, and
$$ p = \pi_T[a,a+m+1] \hspace{8.0ex} q = \pi_T[b,b+m+1], $$
$$ p' = \phi_M(p) \hspace{8.0ex} q' = \phi_M(q). $$
Since $p'$ and $q'$ have the same form, $T[2a+1,2a+2m] = T[2b+1,2b+2m]$. Thus $T_{2a+1} = T_{2b+1}$ implies $T_{a} = T_{b}$, so
$$ T[2a,2a+1] = \mu_T(T_{a}) = \mu_T(T_{b}) = T[2b,2b+1]$$
and $T_{2a+2m} = T_{2b+2m}$ implies $T_{a+m} = T_{b+m}$, so
$$ T[2a+2m,2a+2m+1] = \mu_T(T_{a+m}) = \mu_T(T_{b+m}) = T[2b+2m,2b+2m+1].$$
Therefore,
$$T[2a,2a+2m+1] = T[2b,2b+2m+1].$$
If $T[a,a+m] \neq T[b,b+m]$ then $T[2a,2a+2m+1] \neq T[2b,2b+2m+1]$. Hence
$$T[a,a+m] = T[b,b+m]$$
and $p$ and $q$ have the same form. If $p=q$ then $\phi(p)=\phi(q)$ by Lemma $\ref{pISqIFFppISqp}$, and $p'=M(\phi(p))=M(\phi(q))=q'$; thus $p \neq q$. By the induction hypothesis, $p$ and $q$ are a complementary pair of type $k$, for some $k \geq 1$. If $k=1$, then $\phi_M(p) = \phi_M(q)$ by Proposition $\ref{ImageOfTypeK}$, so $p' = q'$; thus $k \geq 2$. Therefore, by Proposition $\ref{ImageOfTypeK}$, $\phi_M(p) = p'$ and $\phi_M(q) = q'$ are a complementary pair of type $2k-3 \geq 1$.
Therefore subpermutations $p$ and $q$ have the same form if and only if $p$ and $q$ are a complementary pair of type $k$, for some $k \geq 1$.
$\qed$
\end{proof}
There are a number of useful corollaries of Theorem $\ref{SameFormIFFCompPair}$. These corollaries give the number of subpermutations that can have the same form and show when the maps $\phi_L$, $\phi_R$, and $\phi_M$ are not injective.
\begin{corollary}
\label{AtMostOneCompliment}
For a subpermutation $p$ of $\pi_T$, there can be at most one subpermutation $q$ of $\pi_T$ so that $p$ and $q$ are a complementary pair.
\end{corollary}
\begin{proof}
Assume, for a contradiction, that there are distinct subpermutations $q$ and $r$ of $\pi_T$ so that $p$ and $q$ are a complementary pair of type $s$, and $p$ and $r$ are a complementary pair of type $t$. Then there are decompositions:
\begin{align*}
p &= \pi_T[a,a+n] = [\alpha_1 \cdots \alpha_s \lambda_1 \cdots \lambda_x \beta_1 \cdots \beta_s] \\
q &= \pi_T[b,b+n] = [\beta_1 \cdots \beta_s \lambda_1 \cdots \lambda_x \alpha_1 \cdots \alpha_s]
\end{align*}
so that for $\varepsilon_s \in \{-1, 1 \}$, $\alpha_i = \beta_i + \varepsilon_s$ for each $i = 1, 2, \ldots, s$, and
\begin{align*}
p &= \pi_T[a,a+n] = [\alpha'_1 \cdots \alpha'_t \lambda'_1 \cdots \lambda'_y \beta'_1 \cdots \beta'_t] \\
r &= \pi_T[c,c+n] = [\beta'_1 \cdots \beta'_t \lambda'_1 \cdots \lambda'_y \alpha'_1 \cdots \alpha'_t]
\end{align*}
so that for $\varepsilon_t \in \{-1, 1 \}$, $\alpha'_i = \beta'_i + \varepsilon_t$ for each $i = 1, 2, \ldots, t$.
Since $p$ and $q$ are a complementary pair they have the same form, as do $p$ and $r$. Thus $q$ and $r$ are distinct subpermutations with the same form, so by Theorem $\ref{SameFormIFFCompPair}$ $q$ and $r$ are a complementary pair of type $k$, for some $k$.
Note that $\alpha_1 = \alpha'_1 = p_0$, so $\beta_1 = \alpha_1 - \varepsilon_s$ and $\beta'_1 = \alpha_1 - \varepsilon_t$. If $s \neq t$ then $\beta_1 \neq \beta'_1$ (otherwise $p_{n-s+1} = p_{n-t+1}$, which is impossible since the entries of $p$ are distinct), so $\varepsilon_s \neq \varepsilon_t$; if $s = t$ then $\varepsilon_s \neq \varepsilon_t$, since otherwise the decompositions above would give $q = r$. Either way $\varepsilon_s = -\varepsilon_t$. Hence
\begin{align*}
\alpha_1 = \beta_1 + \varepsilon_s \hspace{3.0ex} \Rightarrow \hspace{3.0ex} \beta_1 &= \alpha_1 - \varepsilon_s \\
\alpha'_1 = \beta'_1 + \varepsilon_t \hspace{3.0ex} \Rightarrow \hspace{3.0ex} \beta'_1 = \alpha'_1 - \varepsilon_t \hspace{3.0ex} \Rightarrow \hspace{3.0ex} \beta'_1 &= \alpha_1 + \varepsilon_s.
\end{align*}
Therefore $q_0 = \beta_1$ and $r_0 = \beta'_1$ differ by 2, so $q$ and $r$ are not a complementary pair, contradicting Theorem $\ref{SameFormIFFCompPair}$.
$\qed$
\end{proof}
The next corollary follows directly from Theorem $\ref{SameFormIFFCompPair}$ and Corollary $\ref{AtMostOneCompliment}$.
\begin{corollary}
\label{AtMostTwoSubpermsWithSameForm}
For a factor $u$ of $T$, there are at most two subpermutations of $\pi_T$ with form $u$.
\end{corollary}
The next corollary shows when the maps $\phi_L$, $\phi_R$, and $\phi_M$ are not injective.
\begin{corollary}
\label{WhenTheMapsFailToBeABijection}
For subpermutations $p = \pi_T[a,a+n]$ and $q = \pi_T[b,b+n]$, where $p \neq q$:
\begin{itemize}
\item[(a)] $\phi_L(p) = \phi_L(q)$ if and only if $p$ and $q$ are a complementary pair of type 1.
\item[(b)] $\phi_R(p) = \phi_R(q)$ if and only if $p$ and $q$ are a complementary pair of type 1.
\item[(c)] $\phi_M(p) = \phi_M(q)$ if and only if $p$ and $q$ are a complementary pair of type 1.
\end{itemize}
\end{corollary}
\begin{proof}
It should be clear for all three cases that if $p$ and $q$ are a complementary pair of type 1 then
$$\phi_L(p) = \phi_L(q) \hspace{6.0ex} \phi_R(p) = \phi_R(q) \hspace{6.0ex} \phi_M(p) = \phi_M(q)$$
by Proposition $\ref{ImageOfTypeK}$. For the three cases, let $p = \pi_T[a,a+n]$ and $q = \pi_T[b,b+n]$ and $p \neq q$.
$\textbf{(a)}$ Suppose $\phi_L(p) = \phi_L(q)$, so $\pi_T[2a,2a+2n-1] = \pi_T[2b,2b+2n-1]$ and $T[2a,2a+2n-2] = T[2b,2b+2n-2]$. Thus $T_{2a+2n-2} = T_{2b+2n-2}$ implies $T_{a+n-1} = T_{b+n-1}$, so
$$ T[2a+2n-2,2a+2n-1] = \mu_T(T_{a+n-1}) = \mu_T(T_{b+n-1}) = T[2b+2n-2,2b+2n-1]$$
and
$$T[2a,2a+2n-1] = T[2b,2b+2n-1].$$
If $T[a,a+n-1] \neq T[b,b+n-1]$ then $T[2a,2a+2n-1] \neq T[2b,2b+2n-1]$. Hence
$$T[a,a+n-1] = T[b,b+n-1]$$
and $p$ and $q$ have the same form. By Theorem $\ref{SameFormIFFCompPair}$, $p$ and $q$ are a complementary pair of type $k \geq 1$. If $k > 1$, then $\phi_L(p)$ and $\phi_L(q)$ are a complementary pair of type $2k-2 > 1$, so $\phi_L(p) \neq \phi_L(q)$. Therefore $p$ and $q$ are a complementary pair of type 1.
$\vspace{0.5ex}$
$\textbf{(b)}$ Suppose $\phi_R(p) = \phi_R(q)$, so $\pi_T[2a+1,2a+2n] = \pi_T[2b+1,2b+2n]$ and $T[2a+1,2a+2n-1] = T[2b+1,2b+2n-1]$. Thus $T_{2a+1} = T_{2b+1}$ implies $T_a = T_b$, so
$$ T[2a,2a+1] = \mu_T(T_{a}) = \mu_T(T_{b}) = T[2b,2b+1]$$
and
$$T[2a,2a+2n-1] = T[2b,2b+2n-1].$$
If $T[a,a+n-1] \neq T[b,b+n-1]$ then $T[2a,2a+2n-1] \neq T[2b,2b+2n-1]$. Hence
$$T[a,a+n-1] = T[b,b+n-1]$$
and $p$ and $q$ have the same form. By Theorem $\ref{SameFormIFFCompPair}$, $p$ and $q$ are a complementary pair of type $k \geq 1$. If $k > 1$, then $\phi_R(p)$ and $\phi_R(q)$ are a complementary pair of type $2k-2 > 1$, so $\phi_R(p) \neq \phi_R(q)$. Therefore $p$ and $q$ are a complementary pair of type 1.
$\vspace{0.5ex}$
$\textbf{(c)}$ Suppose $\phi_M(p) = \phi_M(q)$, so $\pi_T[2a+1,2a+2n-1] = \pi_T[2b+1,2b+2n-1]$ and $T[2a+1,2a+2n-2] = T[2b+1,2b+2n-2]$. Thus $T_{2a+1} = T_{2b+1}$ implies $T_{a} = T_{b}$, so
$$ T[2a,2a+1] = \mu_T(T_{a}) = \mu_T(T_{b}) = T[2b,2b+1]$$
and $T_{2a+2n-2} = T_{2b+2n-2}$ implies $T_{a+n-1} = T_{b+n-1}$, so
$$ T[2a+2n-2,2a+2n-1] = \mu_T(T_{a+n-1}) = \mu_T(T_{b+n-1}) = T[2b+2n-2,2b+2n-1].$$
Therefore,
$$T[2a,2a+2n-1] = T[2b,2b+2n-1].$$
If $T[a,a+n-1] \neq T[b,b+n-1]$ then $T[2a,2a+2n-1] \neq T[2b,2b+2n-1]$. Hence
$$T[a,a+n-1] = T[b,b+n-1]$$
and $p$ and $q$ have the same form. By Theorem $\ref{SameFormIFFCompPair}$, $p$ and $q$ are a complementary pair of type $k \geq 1$. If $k > 1$, then $\phi_M(p)$ and $\phi_M(q)$ are a complementary pair of type $2k-3 \geq 1$, so $\phi_M(p) \neq \phi_M(q)$. Therefore $p$ and $q$ are a complementary pair of type 1.
$\qed$
\end{proof}
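The corollary can likewise be spot-checked: over an initial segment of $T$, the images $\phi_L(p) = \pi_T[2a,2a+2n-1]$, $\phi_R(p) = \pi_T[2a+1,2a+2n]$ and $\phi_M(p) = \pi_T[2a+1,2a+2n-1]$ collide, for distinct $p$ and $q$, exactly on complementary pairs of type 1. A sketch under the same assumptions as before (helper names ours, finite comparison window):

```python
def thue_morse(length):
    return [bin(i).count("1") % 2 for i in range(length)]

T = thue_morse(1 << 12)

def subperm(a, b, window=64):
    order = sorted(range(a, b + 1), key=lambda i: T[i:i + window])
    rank = {i: r + 1 for r, i in enumerate(order)}
    return [rank[i] for i in range(a, b + 1)]

def comp_type(p, q):
    # k if p, q decompose as [A L B], [B L A] with A_i = B_i + eps, else None
    m = len(p)
    for k in range(1, (m - 1) // 2 + 1):
        A, B, mid = p[:k], p[-k:], p[k:m - k]
        if q[:k] == B and q[-k:] == A and q[k:m - k] == mid:
            eps = A[0] - B[0]
            if eps in (-1, 1) and all(A[i] - B[i] == eps for i in range(k)):
                return k
    return None

for n in (4, 5, 6):
    first = {}
    for a in range(400):
        first.setdefault(tuple(subperm(a, a + n)), a)
    items = list(first.items())
    for i, (p, a) in enumerate(items):
        for q, b in items[i + 1:]:
            t1 = comp_type(list(p), list(q)) == 1
            # phi_L, phi_R, phi_M collide exactly on the type-1 pairs
            assert (subperm(2*a, 2*a + 2*n - 1) == subperm(2*b, 2*b + 2*n - 1)) == t1
            assert (subperm(2*a + 1, 2*a + 2*n) == subperm(2*b + 1, 2*b + 2*n)) == t1
            assert (subperm(2*a + 1, 2*a + 2*n - 1) == subperm(2*b + 1, 2*b + 2*n - 1)) == t1
print("collision pattern verified")
```

For instance, the type-1 pair $\pi_T[5,9]$ and $\pi_T[23,27]$ from earlier gives $\phi_L$-images $\pi_T[10,17] = \pi_T[46,53]$.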
So when there are complementary pairs of type 1, none of the maps $\phi_L$, $\phi_R$, and $\phi_M$ is injective, and thus none is bijective. In cases where there are no complementary pairs of type 1, the maps $\phi_L$, $\phi_R$, and $\phi_M$ are injective and the inequalities in Lemma $\ref{UpperBoundForTau}$ become equalities. So we need to know when complementary pairs of type 1 occur, and how many complementary pairs there are.
\section{Type 1 Pairs}
\label{SecTypeOnePairs}
This section investigates when complementary pairs of type 1 arise and how many such pairs occur; this determines when the maps $\phi_L$, $\phi_R$, and $\phi_M$ are bijections. The following proposition shows when there are complementary pairs of type $k$, for each $k \geq 1$. An induction argument, using Proposition $\ref{ImageOfTypeK}$ and Theorem $\ref{SameFormIFFCompPair}$, shows that all complementary pairs of a given length are of the same type.
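Before the formal statement, the pattern is easy to observe experimentally: writing $n = 2^r + c$, the complementary pairs found in an initial segment of $T$ all have type $c+1$, and occur only when $c < 2^{r-1}+1$. A sketch (helper names ours; the search window of 600 positions is arbitrary, so the absence of pairs within it is not a proof of absence):

```python
def thue_morse(length):
    return [bin(i).count("1") % 2 for i in range(length)]

T = thue_morse(1 << 12)

def subperm(a, b, window=64):
    order = sorted(range(a, b + 1), key=lambda i: T[i:i + window])
    rank = {i: r + 1 for r, i in enumerate(order)}
    return [rank[i] for i in range(a, b + 1)]

def comp_type(p, q):
    # k if p, q decompose as [A L B], [B L A] with A_i = B_i + eps, else None
    m = len(p)
    for k in range(1, (m - 1) // 2 + 1):
        A, B, mid = p[:k], p[-k:], p[k:m - k]
        if q[:k] == B and q[-k:] == A and q[k:m - k] == mid:
            eps = A[0] - B[0]
            if eps in (-1, 1) and all(A[i] - B[i] == eps for i in range(k)):
                return k
    return None

for n in range(4, 16):
    r = n.bit_length() - 1
    c = n - (1 << r)                      # n = 2^r + c with 0 <= c < 2^r
    first = {}
    for a in range(600):
        first.setdefault(tuple(subperm(a, a + n)), a)
    perms = list(first)
    types = set()
    for i, p in enumerate(perms):
        for q in perms[i + 1:]:
            k = comp_type(list(p), list(q))
            if k is not None:
                types.add(k)
    if types:                             # pairs found in this window
        assert types == {c + 1}
        assert c < (1 << (r - 1)) + 1
    print(n, c, sorted(types))
```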
\begin{proposition}
\label{LengthOfAlphaForAGBForN}
Let $n > 4$ be a natural number and let $p$ and $q$ be subpermutations of $\pi_T$ of length $n+1$ with the same form. There exist $r$ and $c$ so that $n =2^r+c$, where $0 \leq c < 2^r$.
\begin{itemize}
\item[(a)] If $0 \leq c < 2^{r-1}+1$, then either $p = q$ or $p$ and $q$ are a complementary pair of type $c+1$.
\item[(b)] If $2^{r-1}+1 \leq c < 2^r$, then $p = q$.
\end{itemize}
\end{proposition}
\begin{proof}
This will be proved by induction on $r$. By examining the subpermutations in Appendix $\ref{SecTheSubperms}$ it can be readily verified that the proposition is true for $r = 2$ and $c = 0, 1, 2, 3$, that is, for $n = 4, 5, 6, 7$. Suppose that $r>2$ and that the statement of the proposition is true whenever $n < 2^r$. It will be shown that it is true for all $n = 2^r+c$ where $0 \leq c < 2^r$.
$\vspace{0.5ex}$
$\textbf{(a)}$ Let $n=2^r+c$ with $0 \leq c < 2^{r-1}+1$, and write $p'$ and $q'$ for the two subpermutations of length $n+1$ in the statement. If $p' = q'$ the proposition is satisfied, so assume that $p' \neq q'$. As shown in the proof of Theorem $\ref{SameFormIFFCompPair}$, if $p' \in Perm_{ev}(n+1)$ and $q' \in Perm_{odd}(n+1)$, then $p'$ and $q'$ cannot have the same form. So there are four subcases to consider, according to whether $p',q' \in Perm_{ev}(n+1)$ or $p',q' \in Perm_{odd}(n+1)$, and whether $n+1$ is even or odd.
$\textbf{Case a.1:}$ Suppose $p',q' \in Perm_{ev}(n+1)$ and $n+1$ is odd, so $c$ is even. There is a $d$ so that $c=2d$, with $0 \leq d < 2^{r-2}+1$, and there are numbers $a$ and $b$ so that $p' = \pi_T[2a,2a+2^r+2d]$ and $q' = \pi_T[2b,2b+2^r+2d]$, and
$$ p = \pi_T[a,a+2^{r-1}+d] \hspace{8.0ex} q = \pi_T[b,b+2^{r-1}+d], $$
$$ p' = \phi(p) \hspace{8.0ex} q' = \phi(q). $$
If $T[a,a+2^{r-1}+d-1] \neq T[b,b+2^{r-1}+d-1]$ then $T[2a,2a+2^r+2d-1] \neq T[2b,2b+2^r+2d-1]$. Hence
$$T[a,a+2^{r-1}+d-1] = T[b,b+2^{r-1}+d-1]$$
and $p$ and $q$ have the same form. If $p=q$ then $p'=q'$, by Corollary $\ref{CorTo_pisqiff}$, thus $p \neq q$. By the induction hypothesis, $p$ and $q$ are a complementary pair of type $d+1$. Therefore, by Proposition $\ref{ImageOfTypeK}$, $\phi(p) = p'$ and $\phi(q) = q'$ are a complementary pair of type $2(d+1)-1 = 2d+1 = c+1$.
$\vspace{0.5ex}$
$\textbf{Case a.2:}$ Suppose $p',q' \in Perm_{odd}(n+1)$ and $n+1$ is odd, so $c$ is even. There is a $d$ so that $c=2d$, with $0 \leq d < 2^{r-2}+1$, and there are numbers $a$ and $b$ so that $p' = \pi_T[2a+1,2a+2^r+2d+1]$ and $q' = \pi_T[2b+1,2b+2^r+2d+1]$, and
$$ p = \pi_T[a,a+2^{r-1}+d+1] \hspace{8.0ex} q = \pi_T[b,b+2^{r-1}+d+1], $$
$$ p' = \phi_M(p) \hspace{8.0ex} q' = \phi_M(q). $$
Since $p'$ and $q'$ have the same form, $T[2a+1,2a+2^r+2d] = T[2b+1,2b+2^r+2d]$. Thus $T_{2a+1} = T_{2b+1}$ implies $T_{a} = T_{b}$, so
$$ T[2a,2a+1] = \mu_T(T_{a}) = \mu_T(T_{b}) = T[2b,2b+1]$$
and $T_{2a+2^r+2d} = T_{2b+2^r+2d}$ implies $T_{a+2^{r-1}+d} = T_{b+2^{r-1}+d}$, so
$$ T[2a+2^r+2d,2a+2^r+2d+1] = \mu_T(T_{a+2^{r-1}+d}) = \mu_T(T_{b+2^{r-1}+d}) = T[2b+2^r+2d,2b+2^r+2d+1].$$
Therefore,
$$T[2a,2a+2^r+2d+1] = T[2b,2b+2^r+2d+1].$$
If $T[a,a+2^{r-1}+d] \neq T[b,b+2^{r-1}+d]$ then $T[2a,2a+2^r+2d+1] \neq T[2b,2b+2^r+2d+1]$. Hence
$$T[a,a+2^{r-1}+d] = T[b,b+2^{r-1}+d]$$
and $p$ and $q$ have the same form. If $p=q$ then $\phi(p)=\phi(q)$, by Corollary $\ref{CorTo_pisqiff}$, and $p'=M(\phi(p))=M(\phi(q))=q'$, thus $p \neq q$. By the induction hypothesis, $p$ and $q$ are a complementary pair of type $d+2$. Therefore, by Proposition $\ref{ImageOfTypeK}$, $\phi_M(p) = p'$ and $\phi_M(q) = q'$ are a complementary pair of type $2(d+2)-3 = 2d+1 = c+1$.
$\vspace{0.5ex}$
$\textbf{Case a.3:}$ Suppose $p',q' \in Perm_{ev}(n+1)$ and $n+1$ is even, so $c$ is odd. There is a $d$ so that $c=2d+1$, with $0 \leq d < 2^{r-2}+1$, and there are numbers $a$ and $b$ so that $p' = \pi_T[2a,2a+2^r+2d+1]$ and $q' = \pi_T[2b,2b+2^r+2d+1]$, and
$$ p = \pi_T[a,a+2^{r-1}+d+1] \hspace{8.0ex} q = \pi_T[b,b+2^{r-1}+d+1], $$
$$ p' = \phi_L(p) \hspace{8.0ex} q' = \phi_L(q). $$
Since $p'$ and $q'$ have the same form, $T[2a,2a+2^r+2d] = T[2b,2b+2^r+2d]$. Thus $T_{2a+2^r+2d} = T_{2b+2^r+2d}$ implies $T_{a+2^{r-1}+d} = T_{b+2^{r-1}+d}$, so
$$ T[2a+2^r+2d,2a+2^r+2d+1] = \mu_T(T_{a+2^{r-1}+d}) = \mu_T(T_{b+2^{r-1}+d}) = T[2b+2^r+2d,2b+2^r+2d+1]$$
and
$$T[2a,2a+2^r+2d+1] = T[2b,2b+2^r+2d+1].$$
If $T[a,a+2^{r-1}+d] \neq T[b,b+2^{r-1}+d]$ then $T[2a,2a+2^r+2d+1] \neq T[2b,2b+2^r+2d+1]$. Hence
$$T[a,a+2^{r-1}+d] = T[b,b+2^{r-1}+d]$$
and $p$ and $q$ have the same form. If $p=q$ then $\phi(p)=\phi(q)$, by Corollary $\ref{CorTo_pisqiff}$, and $p'=L(\phi(p))=L(\phi(q))=q'$, thus $p \neq q$. By the induction hypothesis, $p$ and $q$ are a complementary pair of type $d+2$. Therefore, by Proposition $\ref{ImageOfTypeK}$, $\phi_L(p) = p'$ and $\phi_L(q) = q'$ are a complementary pair of type $2(d+2)-2 = 2d+2 = c+1$.
$\vspace{0.5ex}$
$\textbf{Case a.4:}$ Suppose $p',q' \in Perm_{odd}(n+1)$ and $n+1$ is even, so $c$ is odd. There is a $d$ so that $c=2d+1$, with $0 \leq d < 2^{r-2}+1$, and there are numbers $a$ and $b$ so that $p' = \pi_T[2a+1,2a+2^r+2d+2]$ and $q' = \pi_T[2b+1,2b+2^r+2d+2]$, and
$$ p = \pi_T[a,a+2^{r-1}+d+1] \hspace{8.0ex} q = \pi_T[b,b+2^{r-1}+d+1], $$
$$ p' = \phi_R(p) \hspace{8.0ex} q' = \phi_R(q). $$
Since $p'$ and $q'$ have the same form, $T[2a+1,2a+2^r+2d+1] = T[2b+1,2b+2^r+2d+1]$. Thus $T_{2a+1} = T_{2b+1}$ implies $T_{a} = T_{b}$, so
$$ T[2a,2a+1] = \mu_T(T_{a}) = \mu_T(T_{b}) = T[2b,2b+1]$$
and
$$T[2a,2a+2^r+2d+1] = T[2b,2b+2^r+2d+1].$$
If $T[a,a+2^{r-1}+d] \neq T[b,b+2^{r-1}+d]$ then $T[2a,2a+2^r+2d+1] \neq T[2b,2b+2^r+2d+1]$. Hence
$$T[a,a+2^{r-1}+d] = T[b,b+2^{r-1}+d]$$
and $p$ and $q$ have the same form. If $p=q$ then $\phi(p)=\phi(q)$, by Corollary $\ref{CorTo_pisqiff}$, and $p'=R(\phi(p))=R(\phi(q))=q'$, thus $p \neq q$. By the induction hypothesis, $p$ and $q$ are a complementary pair of type $d+2$. Therefore, by Proposition $\ref{ImageOfTypeK}$, $\phi_R(p) = p'$ and $\phi_R(q) = q'$ are a complementary pair of type $2(d+2)-2 = 2d+2 = c+1$.
$\vspace{0.5ex}$
$\textbf{(b)}$ Let $n=2^r+c$ with $2^{r-1}+1 \leq c < 2^r$. There will again be the four subcases from part $(a)$ when $2^{r-1}+1 \leq c < 2^r-2$, when $p',q' \in Perm_{ev}(n+1)$ or when $p',q' \in Perm_{odd}(n+1)$ and when $n+1$ is even or odd. There will also be 2 additional special cases to consider, which are when $c = 2^r-2$ and $c=2^r-1$.
$\textbf{Case b.1:}$ Suppose $p',q' \in Perm_{ev}(n+1)$ and $n+1$ is odd, so $c$ is even. There is a $d$ so that $c=2d$, with $2^{r-2}+1 \leq d < 2^{r-1}$, and there are numbers $a$ and $b$ so that $p' = \pi_T[2a,2a+2^r+2d]$ and $q' = \pi_T[2b,2b+2^r+2d]$, and
$$ p = \pi_T[a,a+2^{r-1}+d] \hspace{8.0ex} q = \pi_T[b,b+2^{r-1}+d], $$
$$ p' = \phi(p) \hspace{8.0ex} q' = \phi(q). $$
As in case $\textbf{a.1}$, $T[a,a+2^{r-1}+d-1] = T[b,b+2^{r-1}+d-1]$, so $p$ and $q$ have the same form. By the induction hypothesis $p = q$, so by Corollary $\ref{CorTo_pisqiff}$, $p' = \phi(p) = \phi(q) = q'$.
$\vspace{0.5ex}$
$\textbf{Case b.2:}$ Suppose $p',q' \in Perm_{odd}(n+1)$ and $n+1$ is odd, so $c$ is even. There is a $d$ so that $c=2d$, with $2^{r-2}+1 \leq d < 2^{r-1}$, and there are numbers $a$ and $b$ so that $p' = \pi_T[2a+1,2a+2^r+2d+1]$ and $q' = \pi_T[2b+1,2b+2^r+2d+1]$, and
$$ p = \pi_T[a,a+2^{r-1}+d+1] \hspace{8.0ex} q = \pi_T[b,b+2^{r-1}+d+1], $$
$$ p' = \phi_M(p) \hspace{8.0ex} q' = \phi_M(q). $$
As in case $\textbf{a.2}$, $T[a,a+2^{r-1}+d] = T[b,b+2^{r-1}+d]$, so $p$ and $q$ have the same form. By the induction hypothesis $p = q$, so by Corollary $\ref{CorTo_pisqiff}$, $\phi(p) = \phi(q)$ and therefore $p' = \phi_M(p) = \phi_M(q) = q'$.
$\vspace{0.5ex}$
$\textbf{Case b.3:}$ Suppose $p',q' \in Perm_{ev}(n+1)$ and $n+1$ is even, so $c$ is odd. There is a $d$ so that $c=2d+1$, with $2^{r-2}+1 \leq d < 2^{r-1}$, and there are numbers $a$ and $b$ so that $p' = \pi_T[2a,2a+2^r+2d+1]$ and $q' = \pi_T[2b,2b+2^r+2d+1]$, and
$$ p = \pi_T[a,a+2^{r-1}+d+1] \hspace{8.0ex} q = \pi_T[b,b+2^{r-1}+d+1], $$
$$ p' = \phi_L(p) \hspace{8.0ex} q' = \phi_L(q). $$
As in case $\textbf{a.3}$, $T[a,a+2^{r-1}+d] = T[b,b+2^{r-1}+d]$, so $p$ and $q$ have the same form. By the induction hypothesis $p = q$, so by Corollary $\ref{CorTo_pisqiff}$, $\phi(p) = \phi(q)$ and therefore $p' = \phi_L(p) = \phi_L(q) = q'$.
$\vspace{0.5ex}$
$\textbf{Case b.4:}$ Suppose $p',q' \in Perm_{odd}(n+1)$ and $n+1$ is even, so $c$ is odd. There is a $d$ so that $c=2d+1$, with $2^{r-2}+1 \leq d < 2^{r-1}$, and there are numbers $a$ and $b$ so that $p' = \pi_T[2a+1,2a+2^r+2d+2]$ and $q' = \pi_T[2b+1,2b+2^r+2d+2]$, and
$$ p = \pi_T[a,a+2^{r-1}+d+1] \hspace{8.0ex} q = \pi_T[b,b+2^{r-1}+d+1], $$
$$ p' = \phi_R(p) \hspace{8.0ex} q' = \phi_R(q). $$
As in case $\textbf{a.4}$, $T[a,a+2^{r-1}+d] = T[b,b+2^{r-1}+d]$, so $p$ and $q$ have the same form. By the induction hypothesis $p = q$, so by Corollary $\ref{CorTo_pisqiff}$, $\phi(p) = \phi(q)$ and therefore $p' = \phi_R(p) = \phi_R(q) = q'$.
$\vspace{0.5ex}$
$\textbf{Case b.5:}$ Suppose $c = 2^r-2$. Thus $n = 2^r + 2^r-2 = 2^{r+1} - 2$, and the subpermutations $p'$ and $q'$ will have odd length. There will be two subcases, these being when $p',q' \in Perm_{ev}(n+1)$ and when $p',q' \in Perm_{odd}(n+1)$.
$\textbf{Case b.5.i:}$ Suppose $p',q' \in Perm_{ev}(n+1)$. There are numbers $a$ and $b$ so that $p' = \pi_T[2a,2a+2^{r+1}-2]$ and $q' = \pi_T[2b,2b+2^{r+1}-2]$, and
$$ p = \pi_T[a,a+2^r-1] \hspace{8.0ex} q = \pi_T[b,b+2^r-1], $$
$$ p' = \phi(p) \hspace{8.0ex} q' = \phi(q). $$
As in cases $\textbf{a.1}$ and $\textbf{b.1}$, $T[a,a+2^r-2] = T[b,b+2^r-2]$, so $p$ and $q$ have the same form. By the induction hypothesis $p = q$, so by Corollary $\ref{CorTo_pisqiff}$, $p' = \phi(p) = \phi(q) = q'$.
$\textbf{Case b.5.ii:}$ Suppose $p',q' \in Perm_{odd}(n+1)$. There are numbers $a$ and $b$ so that $p' = \pi_T[2a+1,2a+2^{r+1}-1]$ and $q' = \pi_T[2b+1,2b+2^{r+1}-1]$, and
$$ p = \pi_T[a,a+2^r] \hspace{8.0ex} q = \pi_T[b,b+2^r], $$
$$ p' = \phi_M(p) \hspace{8.0ex} q' = \phi_M(q). $$
As in cases $\textbf{a.2}$ and $\textbf{b.2}$, $T[a,a+2^r-1] = T[b,b+2^r-1]$, so $p$ and $q$ have the same form. If $p=q$ then $\phi(p)=\phi(q)$, by Corollary $\ref{CorTo_pisqiff}$, and $p'=M(\phi(p))=M(\phi(q))=q'$. If $p \neq q$ then by case $\textbf{a.1}$, $p$ and $q$ are a complementary pair of type 1. Therefore, by Proposition $\ref{ImageOfTypeK}$, $p' = \phi_M(p) = \phi_M(q) = q'$.
$\vspace{0.5ex}$
$\textbf{Case b.6:}$ Suppose $c = 2^r-1$. Thus $n = 2^r + 2^r-1 = 2^{r+1}-1$, and the subpermutations $p'$ and $q'$ will have even length. There will be two subcases, these being when $p',q' \in Perm_{ev}(n+1)$ and when $p',q' \in Perm_{odd}(n+1)$.
$\textbf{Case b.6.i:}$ Suppose $p',q' \in Perm_{ev}(n+1)$. There are numbers $a$ and $b$ so that $p' = \pi_T[2a,2a+2^{r+1}-1]$ and $q' = \pi_T[2b,2b+2^{r+1}-1]$, and
$$ p = \pi_T[a,a+2^r] \hspace{8.0ex} q = \pi_T[b,b+2^r], $$
$$ p' = \phi_L(p) \hspace{8.0ex} q' = \phi_L(q). $$
As in cases $\textbf{a.3}$ and $\textbf{b.3}$, $T[a,a+2^r-1] = T[b,b+2^r-1]$, so $p$ and $q$ have the same form. If $p=q$ then $\phi(p)=\phi(q)$, by Corollary $\ref{CorTo_pisqiff}$, and $p'=L(\phi(p))=L(\phi(q))=q'$. If $p \neq q$ then by case $\textbf{a.1}$, $p$ and $q$ are a complementary pair of type 1. Therefore, by Proposition $\ref{ImageOfTypeK}$, $p' = \phi_L(p) = \phi_L(q) = q'$.
$\textbf{Case b.6.ii:}$ Suppose $p',q' \in Perm_{odd}(n+1)$. There are numbers $a$ and $b$ so that $p' = \pi_T[2a+1,2a+2^{r+1}]$ and $q' = \pi_T[2b+1,2b+2^{r+1}]$, and
$$ p = \pi_T[a,a+2^r] \hspace{8.0ex} q = \pi_T[b,b+2^r], $$
$$ p' = \phi_R(p) \hspace{8.0ex} q' = \phi_R(q). $$
As in cases $\textbf{a.4}$ and $\textbf{b.4}$, $T[a,a+2^r-1] = T[b,b+2^r-1]$, so $p$ and $q$ have the same form. If $p=q$ then $\phi(p)=\phi(q)$, by Corollary $\ref{CorTo_pisqiff}$, and $p'=R(\phi(p))=R(\phi(q))=q'$. If $p \neq q$ then by case $\textbf{a.1}$, $p$ and $q$ are a complementary pair of type 1. Therefore, by Proposition $\ref{ImageOfTypeK}$, $p' = \phi_R(p) = \phi_R(q) = q'$.
Therefore the lemma is true when $n=2^r+c$ with $0 \leq c < 2^r$, and therefore for all $n$.
$\qed$
\end{proof}
Thus, only subpermutations of length $2^r+1$ can be a complementary pair of type 1, and we have the following corollary.
\begin{corollary}
\label{BijectionForTheMaps}
If $n \neq 2^r$, for $r \geq 1$, then for any subpermutations $p = \pi_T[a,a+n]$ and $q = \pi_T[b,b+n]$
\begin{itemize}
\item[(a)] $\phi_L(p) = \phi_L(q)$ if and only if $p = q$.
\item[(b)] $\phi_R(p) = \phi_R(q)$ if and only if $p = q$.
\item[(c)] $\phi_M(p) = \phi_M(q)$ if and only if $p = q$.
\end{itemize}
\end{corollary}
\begin{proof}
It should be clear in each case that if $p = q$ then
$$\phi_L(p) = \phi_L(q) \hspace{5.0ex} \phi_R(p) = \phi_R(q) \hspace{5.0ex} \phi_M(p) = \phi_M(q).$$
Suppose $\phi_L(p) = \phi_L(q)$. If $p \neq q$ then, by Corollary $\ref{WhenTheMapsFailToBeABijection}$, $p$ and $q$ are a complementary pair of type 1. By Proposition $\ref{LengthOfAlphaForAGBForN}$, $p$ and $q$ cannot be a complementary pair of type 1, therefore $p=q$.
A similar argument will show if $\phi_R(p) = \phi_R(q)$ then $p=q$, and if $\phi_M(p) = \phi_M(q)$ then $p=q$.
$\qed$
\end{proof}
We now consider the number of factors $u$ of $T$ of length $2^r$ or $2^r+1$ that have two distinct subpermutations with form $u$.
\begin{lemma}
\label{NumberOfFactWithLenAlphaOne}
Let $n = 2^r$ or $2^r+1$, with $r \geq 2$. Then there are exactly $2^r$ factors $u$ of $T$ of length $n$ so that there exist subpermutations $p = \pi_T[a,a+n]$ and $q = \pi_T[b,b+n]$ with form $u$ and $p \neq q$.
\end{lemma}
\begin{proof}
It can be readily verified by looking at the subpermutations in Appendix $\ref{SecTheSubperms}$ that the lemma is true for $r=2$. So there are 4 factors $u$ of $T$ of length 4 with two distinct subpermutations of length 5 with form $u$, and there are 4 factors $v$ of $T$ of length 5 with two distinct subpermutations of length 6 with form $v$.
Suppose $r \geq 2$ and that the lemma is true for $r$. We now show the lemma is true for $r+1$. Let $\Gamma$ be the set of factors of length $2^r$, $\abs{\Gamma} = 2^r$, so that for $u \in \Gamma$ there are subpermutations $p$ and $q$ with form $u$ so that $p \neq q$, hence, by Proposition $\ref{LengthOfAlphaForAGBForN}$, $p$ and $q$ are a complementary pair of type 1. Let $\Gamma'$ be the set of factors of length $2^{r+1}$ so that if $u \in \Gamma'$ then there exist subpermutations $p$ and $q$ with form $u$ so that $p \neq q$. Let $\Delta$ be the set of factors of length $2^r+1$, $\abs{\Delta} = 2^r$, so that for $v \in \Delta$ there are subpermutations $p$ and $q$ with form $v$ so that $p \neq q$, hence, by Proposition $\ref{LengthOfAlphaForAGBForN}$, $p$ and $q$ are a complementary pair of type 2. Let $\Delta'$ be the set of factors of length $2^{r+1}+1$ so that if $v \in \Delta'$ then there exist subpermutations $p$ and $q$ with form $v$ so that $p \neq q$.
The sizes of $\Gamma'$ and $\Delta'$ will be considered in two cases.
$\vspace{0.5ex}$
$\textbf{Case $\Gamma'$:}$ Any factor in $\Gamma'$ will either start in an even position or an odd position, call these sets of factors $\Gamma'_{ev}$ and $\Gamma'_{odd}$ and hence
$$ \Gamma' = \Gamma'_{ev} \cup \Gamma'_{odd} .$$
Since the factors are of length $2^{r+1} \geq 8$, for any factors $s \in \Gamma'_{ev}$ and $t \in \Gamma'_{odd}$, $s \neq t$, thus
$$ \Gamma'_{ev} \cap \Gamma'_{odd} = \emptyset.$$
There will be two subcases to establish the size of $\Gamma'$, first by showing the size of $\Gamma'_{ev}$ and then the size of $\Gamma'_{odd}$.
$\textbf{Subcase $\Gamma'_{ev}$:}$ For $u \in \Gamma$ there are subpermutations $p$ and $q$ of $\pi_T$ of length $2^r+1$, so that $p \neq q$. By Proposition $\ref{LengthOfAlphaForAGBForN}$, $p$ and $q$ are a complementary pair of type 1. By Proposition $\ref{ImageOfTypeK}$ $\phi(p)$ and $\phi(q)$ are a complementary pair of type 1, so $\phi(p) \neq \phi(q)$ and they both have form $\mu_T(u)$. Therefore for each $u \in \Gamma$, $\mu_T(u) \in \Gamma'_{ev}$. Hence
$$ \abs{\Gamma'_{ev}} \geq \abs{\Gamma}.$$
Suppose that $u' \in \Gamma'_{ev}$, so there are subpermutations $p' = \pi_T[2a, 2a+2^{r+1}]$ and $q' = \pi_T[2b, 2b+2^{r+1}]$ with form $u' = T[2a, 2a+2^{r+1}-1] = T[2b, 2b+2^{r+1}-1]$, so that $p' \neq q'$. Hence there exist subpermutations $p$ and $q$ so that $\phi(p) = p'$ and $\phi(q) = q'$. As in case $\textbf{a.1}$ of Proposition $\ref{LengthOfAlphaForAGBForN}$, $p$ and $q$ are a complementary pair of type 1 with form $u$ where $\mu_T(u) = u'$. Thus for each $u' \in \Gamma'_{ev}$, there is some $u \in \Gamma$ so that $\mu_T(u) = u'$. Hence
$$ \abs{\Gamma'_{ev}} \leq \abs{\Gamma}.$$
Therefore $\abs{\Gamma'_{ev}} = \abs{\Gamma}$.
$\textbf{Subcase $\Gamma'_{odd}$:}$ For $u \in \Delta$, $u = T[a,a+2^r]$ , there are subpermutations $p$ and $q$ of $\pi_T$ of length $2^r+2$, so that $p \neq q$. By Proposition $\ref{LengthOfAlphaForAGBForN}$, $p$ and $q$ are a complementary pair of type 2. By Proposition $\ref{ImageOfTypeK}$, $\phi(p)$ and $\phi(q)$ are a complementary pair of type 3 with form $\mu_T(u) = T[2a,2a+2^{r+1}+1]$ and $\phi_M(p)$ and $\phi_M(q)$ are a complementary pair of type 1, so $\phi_M(p) \neq \phi_M(q)$ and they both have form $T[2a+1,2a+2^{r+1}]$. Therefore for each $T[a,a+2^r] \in \Delta$, $T[2a+1,2a+2^{r+1}] \in \Gamma'_{odd}$. Hence
$$ \abs{\Gamma'_{odd}} \geq \abs{\Delta}.$$
Suppose that $u' \in \Gamma'_{odd}$, so there are subpermutations $p' = \pi_T[2a+1, 2a+2^{r+1}+1]$ and $q' = \pi_T[2b+1, 2b+2^{r+1}+1]$ with form $u' = T[2a+1, 2a+2^{r+1}] = T[2b+1, 2b+2^{r+1}]$, so that $p' \neq q'$. Hence there exist subpermutations $p$ and $q$ so that $\phi_M(p) = p'$ and $\phi_M(q) = q'$. As in case $\textbf{a.2}$ of Proposition $\ref{LengthOfAlphaForAGBForN}$, $p$ and $q$ are a complementary pair of type 2 with form $T[a,a+2^r]$. Thus for each $u' \in \Gamma'_{odd}$, there is some $T[a,a+2^r] \in \Delta$ so that $u' = T[2a+1, 2a+2^{r+1}]$. Hence
$$ \abs{\Gamma'_{odd}} \leq \abs{\Delta}.$$
Therefore $\abs{\Gamma'_{odd}} = \abs{\Delta}$.
Therefore
$$\abs{\Gamma'} = \abs{\Gamma'_{ev}} +\abs{\Gamma'_{odd}} = \abs{\Gamma} + \abs{\Delta} = 2^r + 2^r = 2^{r+1}. $$
$\vspace{0.5ex}$
$\textbf{Case $\Delta'$:}$ Any factor in $\Delta'$ will either start in an even position or an odd position, call these sets of factors $\Delta'_{ev}$ and $\Delta'_{odd}$ and hence
$$ \Delta' = \Delta'_{ev} \cup \Delta'_{odd} .$$
Since the factors are of length $2^{r+1}+1 \geq 8$, for any factors $s \in \Delta'_{ev}$ and $t \in \Delta'_{odd}$, $s \neq t$, thus
$$ \Delta'_{ev} \cap \Delta'_{odd} = \emptyset.$$
There will be two subcases to establish the size of $\Delta'$, first by showing the size of $\Delta'_{ev}$ and then the size of $\Delta'_{odd}$.
$\textbf{Subcase $\Delta'_{ev}$:}$ For $u \in \Delta$, $u = T[a,a+2^r]$ , there are subpermutations $p$ and $q$ of $\pi_T$ of length $2^r+2$, so that $p \neq q$. By Proposition $\ref{LengthOfAlphaForAGBForN}$, $p$ and $q$ are a complementary pair of type 2. By Proposition $\ref{ImageOfTypeK}$, $\phi(p)$ and $\phi(q)$ are a complementary pair of type 3 with form $\mu_T(u) = T[2a,2a+2^{r+1}+1]$ and $\phi_L(p)$ and $\phi_L(q)$ are a complementary pair of type 2, so $\phi_L(p) \neq \phi_L(q)$ and they both have form $T[2a,2a+2^{r+1}]$. Therefore for each $T[a,a+2^r] \in \Delta$, $T[2a,2a+2^{r+1}] \in \Delta'_{ev}$. Hence
$$ \abs{\Delta'_{ev}} \geq \abs{\Delta}.$$
Suppose that $u' \in \Delta'_{ev}$, so there are subpermutations $p' = \pi_T[2a, 2a+2^{r+1}+1]$ and $q' = \pi_T[2b, 2b+2^{r+1}+1]$ with form $u' = T[2a, 2a+2^{r+1}] = T[2b, 2b+2^{r+1}]$, so that $p' \neq q'$. Hence there exist subpermutations $p$ and $q$ so that $\phi_L(p) = p'$ and $\phi_L(q) = q'$. As in case $\textbf{a.3}$ of Proposition $\ref{LengthOfAlphaForAGBForN}$, $p$ and $q$ are a complementary pair of type 2 with form $u = T[a,a+2^r]$. Thus for each $u' \in \Delta'_{ev}$, there is some $T[a,a+2^r] \in \Delta$ so that $u' = T[2a, 2a+2^{r+1}]$. Hence
$$ \abs{\Delta'_{ev}} \leq \abs{\Delta}.$$
Therefore $\abs{\Delta'_{ev}} = \abs{\Delta}$.
$\textbf{Subcase $\Delta'_{odd}$:}$ A symmetric argument to the argument used in Subcase $\Delta'_{ev}$ will show $\abs{\Delta'_{odd}} = \abs{\Delta}$.
Therefore
$$\abs{\Delta'} = \abs{\Delta'_{ev}} +\abs{\Delta'_{odd}} = \abs{\Delta} + \abs{\Delta} = 2^r + 2^r = 2^{r+1}. $$
$\qed$
\end{proof}
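The counts in Lemma $\ref{NumberOfFactWithLenAlphaOne}$ can be checked by direct computation on a prefix of $T$. The following Python sketch is illustrative only; the comparison depth and the scan window are heuristic bounds, chosen large enough for the lengths used here.

```python
from collections import defaultdict

# Prefix of the Thue-Morse word: T[i] = parity of the number of 1s in binary i.
T = [bin(i).count("1") & 1 for i in range(1 << 12)]

def subperm(a, m, depth=64):
    # Rank tuple of the lexicographic order of the m shifted suffixes starting
    # at a, ..., a+m-1; since T is overlap-free, suffixes at distance d differ
    # within their first 2d+1 letters, so depth=64 separates all of them here.
    order = sorted(range(m), key=lambda k: T[a + k : a + k + depth])
    rank = [0] * m
    for pos, k in enumerate(order):
        rank[k] = pos
    return tuple(rank)

def pair_count(n, window=2048):
    # Number of factors of length n carrying two distinct subpermutations
    # of length n+1 (the lemma predicts 2^r for n = 2^r and n = 2^r + 1).
    by_form = defaultdict(set)
    for a in range(window):
        by_form[tuple(T[a : a + n])].add(subperm(a, n + 1))
    return sum(1 for perms in by_form.values() if len(perms) > 1)

print([pair_count(n) for n in (4, 5, 8, 9)])
```

A subpermutation of length $m$ is determined by the factor of length $3m-2$ at the same position, and by uniform recurrence every such short factor of $T$ occurs well before position $2048$, so the scan is exhaustive for these lengths.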
Now we know when there are complementary pairs of type 1, and how many pairs of type 1 there are in each case.
\section{Permutation Complexity of $T$}
\label{FormulaForPermComp}
We are now ready to give a recursive definition for the permutation complexity of $T$. To show this we consider when the maps $\phi$, $\phi_L$, $\phi_R$, and $\phi_M$ are bijective. After the recursive definition is given, it will be shown that the recursive definition yields a formula for the permutation complexity.
\begin{proposition}
\label{RecursivePermComp}
Let $n \in \N$. When $2n+1 = 2^r-1$, for some $r \geq 3$:
$$ \tau_T(2n+1) = \tau_T(n+1) + \tau_T(n+2) - 2^{r-1}. $$
When $2n = 2^r$, for some $r \geq 3$:
$$ \tau_T(2n) = 2(\tau_T(n+1) - 2^{r-1}). $$
For all other $n \geq 3$:
\begin{align*}
\tau_T(2n+1) &= \tau_T(n+1) + \tau_T(n+2) \\
\tau_T(2n) &= 2(\tau_T(n+1)).
\end{align*}
\end{proposition}
\begin{proof}
For any $n$,
$$ \tau_T(n) = \abs{Perm(n)} = \abs{Perm_{ev}(n)} + \abs{Perm_{odd}(n)}. $$
This proof will be done in three cases. The first is when $2n+1 = 2^r-1$ for some $r \geq 3$, the second is when $2n = 2^r$ for some $r \geq 3$, and the third for all other $n$.
$\textbf{Case $2n+1 = 2^r-1$: }$ It can be readily verified by looking at the subpermutations in Appendix $\ref{SecTheSubperms}$ that the proposition is true for $r=3$. Suppose $r \geq 3$ and the proposition is true for $r$. We show that it is true for $r+1$. So $2n+1 = 2^{r+1}-1$, and
$$ \abs{Perm(2n+1)} = \abs{Perm_{ev}(2n+1)} + \abs{Perm_{odd}(2n+1)}. $$
Since the map
$$\phi: Perm(n+1) \rightarrow Perm_{ev}(2n+1)$$
is a bijection, the size of $Perm(n+1)$ is the same as the size of $Perm_{ev}(2n+1)$. Therefore
$$ \abs{Perm_{ev}(2n+1)} = \abs{Perm(n+1)} = \tau_T(n+1). $$
Then the map
$$ \phi_M: Perm(n+2) \rightarrow Perm_{odd}(2n+1) $$
is a surjective map, so
$$ \abs{Perm_{odd}(2n+1)} \leq \abs{Perm(n+2)}, $$
but it is not injective because $n+2 = 2^r + 1$. So there are $2^r$ factors $u$ of length $2^r$ with a complementary pair of type 1 by Proposition $\ref{LengthOfAlphaForAGBForN}$ and Lemma $\ref{NumberOfFactWithLenAlphaOne}$. Thus there are exactly $2^{r}$ complementary pairs of type 1 in $Perm(n+2)$. So $2^{r+1}$ subpermutations in $Perm(n+2)$ will be mapped to $2^r$ subpermutations in $Perm_{odd}(2n+1)$ under $\phi_M$. The other $\abs{Perm(n+2)} - 2^{r+1}$ subpermutations in $Perm(n+2)$ are pairwise distinct and not members of complementary pairs, and thus have pairwise distinct images under $\phi_M$. Hence
$$ \abs{Perm_{odd}(2n+1)} = \left( \abs{Perm(n+2)} - 2^{r+1} \right) + 2^r = \tau_T(n+2) - 2^r. $$
Therefore
$$ \tau_T(2n+1) = \tau_T(n+1) + \tau_T(n+2) - 2^r. $$
$\vspace{0.5ex}$
$\textbf{Case $2n = 2^r$: }$ It can be readily verified by looking at the subpermutations in Appendix $\ref{SecTheSubperms}$ that the proposition is true for $r=3$. Suppose $r \geq 3$ and the proposition is true for $r$, and we show that it is true for $r+1$. So $2n = 2^{r+1}$, and
$$ \abs{Perm(2n)} = \abs{Perm_{ev}(2n)} + \abs{Perm_{odd}(2n)}. $$
The map
$$ \phi_L: Perm(n+1) \rightarrow Perm_{ev}(2n) $$
is a surjective map, so
$$ \abs{Perm_{ev}(2n)} \leq \abs{Perm(n+1)}, $$
but it is not injective because $n+1 = 2^r + 1$. So there are $2^r$ factors $u$ of length $2^r$ with a complementary pair of type 1 by Proposition $\ref{LengthOfAlphaForAGBForN}$ and Lemma $\ref{NumberOfFactWithLenAlphaOne}$. Thus there are exactly $2^{r}$ complementary pairs of type 1 in $Perm(n+1)$. So $2^{r+1}$ subpermutations in $Perm(n+1)$ will be mapped to $2^r$ subpermutations in $Perm_{ev}(2n)$ under $\phi_L$. The other $\abs{Perm(n+1)} - 2^{r+1}$ subpermutations in $Perm(n+1)$ are pairwise distinct and not members of complementary pairs, and thus have pairwise distinct images under $\phi_L$. Hence
$$ \abs{Perm_{ev}(2n)} = \left( \abs{Perm(n+1)} - 2^{r+1} \right) + 2^r = \abs{Perm(n+1)} - 2^r. $$
The map
$$ \phi_R: Perm(n+1) \rightarrow Perm_{odd}(2n) $$
is a surjective map, so
$$ \abs{Perm_{odd}(2n)} \leq \abs{Perm(n+1)}, $$
but it is not injective because $n+1 = 2^r + 1$. By a similar argument to above we can see that
$$ \abs{Perm_{odd}(2n)} = \abs{Perm(n+1)} - 2^r. $$
Therefore
$$ \tau_T(2n) = (\abs{Perm(n+1)} - 2^r) + (\abs{Perm(n+1)} - 2^r) = 2(\tau_T(n+1) - 2^r). $$
$\vspace{0.5ex}$
$\textbf{Case $n \geq 3$: }$ It can be readily verified by looking at the subpermutations in Appendix $\ref{SecTheSubperms}$ that the proposition is true for $n=3$. Suppose $n \geq 3$ and the proposition is true for $n$, and we show that it is true for $n+1$. Since $2(n+1)+1, 2(n+1) \notin \{2^r-1, 2^r | r \geq 2\}$, we have $n+2, n+3 \notin \{ 2^r+1 | r \geq 2 \}$. So for $2(n+1)$ and $2(n+1) + 1$ we know that the maps
\begin{align*}
&\phi: Perm(n+2) \rightarrow Perm_{ev}(2(n+1)+1) \\
&\phi_L: Perm(n+2) \rightarrow Perm_{ev}(2(n+1)) \\
&\phi_R: Perm(n+2) \rightarrow Perm_{odd}(2(n+1)) \\
&\phi_M: Perm(n+3) \rightarrow Perm_{odd}(2(n+1)+1)
\end{align*}
are all bijections. Therefore:
\begin{align*}
&\abs{Perm_{ev}(2(n+1)+1)} = \abs{Perm(n+2)} = \tau_T(n+2) \\
&\abs{Perm_{ev}(2(n+1))} = \abs{Perm(n+2)} = \tau_T(n+2) \\
&\abs{Perm_{odd}(2(n+1))} = \abs{Perm(n+2)} = \tau_T(n+2) \\
&\abs{Perm_{odd}(2(n+1)+1)} = \abs{Perm(n+3)} = \tau_T(n+3).
\end{align*}
So:
\begin{align*}
&\tau_T(2(n+1)) = \abs{Perm_{ev}(2(n+1))} + \abs{Perm_{odd}(2(n+1))} = 2( \tau_T(n+2)) \\
&\tau_T(2(n+1)+1) = \abs{Perm_{ev}(2(n+1)+1)} + \abs{Perm_{odd}(2(n+1)+1)} = \tau_T(n+2) + \tau_T(n+3).
\end{align*}
$\qed$
\end{proof}
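The three branches of Proposition $\ref{RecursivePermComp}$ can also be observed numerically on a prefix of $T$. The Python sketch below counts distinct rank tuples of consecutive shifts over a fixed scan window; the window and comparison depth are heuristic bounds that suffice for the small lengths tested (the instances $2n+1 = 2^4-1$, $2n = 2^4$, and the generic $n=6$).

```python
# Prefix of the Thue-Morse word: T[i] = parity of the number of 1s in binary i.
T = [bin(i).count("1") & 1 for i in range(1 << 12)]

def subperm(a, m, depth=64):
    # Rank tuple of the lexicographic order of the shifts at a, ..., a+m-1.
    order = sorted(range(m), key=lambda k: T[a + k : a + k + depth])
    rank = [0] * m
    for pos, k in enumerate(order):
        rank[k] = pos
    return tuple(rank)

def tau(m, window=2048):
    # Permutation complexity: number of distinct subpermutations of length m.
    return len({subperm(a, m) for a in range(window)})

# 2n+1 = 2^4 - 1:  tau(15) = tau(8) + tau(9) - 2^3
print(tau(15), tau(8) + tau(9) - 8)
# 2n = 2^4:        tau(16) = 2(tau(9) - 2^3)
print(tau(16), 2 * (tau(9) - 8))
# generic n = 6:   tau(13) = tau(7) + tau(8)  and  tau(12) = 2 tau(7)
print(tau(13), tau(7) + tau(8), tau(12), 2 * tau(7))
```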
\begin{theorem}
\label{PermCompIsTheFormula}
For any $n \geq 6$, where $n = 2^a + b$ with $0 < b \leq 2^a$,
$$ \tau_T(n) = 2(2^{a+1}+b-2).$$
\end{theorem}
\begin{proof}
The proof will be done by induction on $n$. The above formula can be readily verified by looking at the subpermutations listed in Appendix $\ref{SecTheSubperms}$ for $n \leq 9$. Suppose the theorem is true for all values less than or equal to $2n$.
$\textbf{Case $2n+1 = 2^a-1$: }$ Suppose $2n+1 = 2^a-1$. If $2n+1 = 2^a-1 = 2^{a-1}+2^{a-1} - 1$, then $n = 2^{a-1}-1$, so $n+1 = 2^{a-1} = 2^{a-2}+2^{a-2}$ and $n+2=2^{a-1}+1$. Thus:
\begin{align*}
\tau_T(n+1) &= 2(2^{a-2+1}+2^{a-2}-2) = 2(2^{a-1}+2^{a-2}-2) = 2(3(2^{a-2})-2)\\
\tau_T(n+2) &= 2(2^{a-1+1}+1-2) = 2(2^a-1)
\end{align*}
From Proposition $\ref{RecursivePermComp}$:
\begin{align*}
\tau_T(2n+1) &= 2(3(2^{a-2})-2) + 2(2^a-1) - 2^{a-1} = 2(3(2^{a-2})-2 + 2^a-1 - 2^{a-2}) \\
&= 2(2(2^{a-2}) + 2^a-3) = 2(2^a + (2^{a-1}-1) - 2)
\end{align*}
$\vspace{0.5ex}$
$\textbf{Case $2n+2 = 2(n+1) = 2^a$: }$ Suppose $2n+2 = 2(n+1) = 2^a = 2^{a-1}+2^{a-1}$:
\begin{align*}
\tau_T(2(n+1)) &= 2(2(2^a-1) - 2^{a-1}) = 2(2^{a+1} - 2^{a-1} - 2) = 2(3(2^{a-1}) - 2)\\
&= 2(2(2^{a-1}) + 2^{a-1} - 2) = 2(2^a + 2^{a-1} - 2)
\end{align*}
$\vspace{0.5ex}$
$\textbf{Case Else: }$ Suppose $2n+1 = 2^a + b$, $2n+2 = 2(n+1) = 2^a + b+1$, and $0 < b < 2^a - 1$. Since $2n+1 = 2^a + b$ is odd, $b$ is odd. So $n = 2^{a-1}+\frac{b-1}{2}$, $n+1 = 2^{a-1}+\frac{b+1}{2}$, and $n+2 = 2^{a-1}+\frac{b+3}{2}$. Thus:
\begin{align*}
\tau_T(n+1) &= 2(2^a + \frac{b+1}{2} -2)\\
\tau_T(n+2) &= 2(2^a + \frac{b+3}{2} -2).
\end{align*}
From Proposition $\ref{RecursivePermComp}$:
\begin{align*}
\tau_T(2n+1) &= 2(2^a + \frac{b+1}{2} -2) + 2(2^a + \frac{b+3}{2} -2) = 2(2^a + 2^a + \frac{b+1}{2}+ \frac{b+3}{2} -2 -2)\\
&= 2(2^{a+1} + \frac{2b+4}{2} -4) = 2(2^{a+1} + b -2) \\
\\
\tau_T(2(n+1)) &= 2(2(2^a + \frac{b+3}{2} -2)) = 2(2^{a+1} + b+3 -4)\\
&= 2(2^{a+1} + (b+1) -2).
\end{align*}
Therefore, for all $n \geq 6$, where $n = 2^a + b$ with $0 < b \leq 2^a$, $\tau_T(n) = 2(2^{a+1}+b-2)$.
$\qed$
\end{proof}
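The closed formula of Theorem $\ref{PermCompIsTheFormula}$ can be compared against a direct count on a prefix of $T$. In the Python sketch below the scan window and comparison depth are heuristic bounds sufficient for $6 \leq n \leq 16$.

```python
# Prefix of the Thue-Morse word: T[i] = parity of the number of 1s in binary i.
T = [bin(i).count("1") & 1 for i in range(1 << 12)]

def subperm(a, m, depth=64):
    # Rank tuple of the lexicographic order of the shifts at a, ..., a+m-1.
    order = sorted(range(m), key=lambda k: T[a + k : a + k + depth])
    rank = [0] * m
    for pos, k in enumerate(order):
        rank[k] = pos
    return tuple(rank)

def tau(m, window=2048):
    # Permutation complexity: number of distinct subpermutations of length m.
    return len({subperm(a, m) for a in range(window)})

def formula(n):
    # Write n = 2^a + b with 0 < b <= 2^a; then tau_T(n) = 2(2^{a+1} + b - 2).
    a = n.bit_length() - 1
    b = n - (1 << a)
    if b == 0:                      # n = 2^a is written as 2^{a-1} + 2^{a-1}
        a, b = a - 1, 1 << (a - 1)
    return 2 * ((1 << (a + 1)) + b - 2)

# Each computed value should agree with the predicted one.
print([(n, tau(n), formula(n)) for n in range(6, 17)])
```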
\section{Conclusion}
\label{SecConclusion}
There seem to be some natural ways to continue this research. For the binary doubling map $\delta$, defined by $\delta(0) =00$ and $\delta(1)=11$, it has been shown that $T$ and $\delta(T)$ have the same factor complexity ($\cite{AberBrle02}$). One natural question is, do $T$ and $\delta(T)$ have the same permutation complexity? The answer is no. As can be seen in Appendix $\ref{SecTheSubperms}$, $\tau_T(5) = 14$ but $\tau_{\delta(T)}(5) = 16$. With $T$, there are at most two distinct subpermutations that have the same form, but with $\delta(T)$ there are cases where three subpermutations have the same form. One open question is, what is the permutation complexity of $\delta(T)$?
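The values $\tau_T(5)=14$ and $\tau_{\delta(T)}(5)=16$ can be recomputed directly. In the Python sketch below, $\delta(T)$ is obtained by doubling each letter of a prefix of $T$, and the scan window and comparison depth are heuristic bounds that suffice for length $5$.

```python
# Prefix of the Thue-Morse word and its image under the doubling map delta.
T = [bin(i).count("1") & 1 for i in range(1 << 12)]
D = [t for t in T[: 1 << 11] for _ in (0, 1)]    # delta(T): 0 -> 00, 1 -> 11

def subperm(word, a, m, depth=64):
    # Rank tuple of the lexicographic order of the shifts at a, ..., a+m-1.
    order = sorted(range(m), key=lambda k: word[a + k : a + k + depth])
    rank = [0] * m
    for pos, k in enumerate(order):
        rank[k] = pos
    return tuple(rank)

def tau(word, m, window=1024):
    # Number of distinct subpermutations of length m seen in the window.
    return len({subperm(word, a, m) for a in range(window)})

print(tau(T, 5), tau(D, 5))
```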
This paper also investigates the action of $\mu_T$ on the subpermutations of $\pi_T$. Since $\mu_T$ is an order preserving map, we know that if there are distinct subpermutations $\pi_T[a,a+n]$ and $\pi_T[b,b+n]$ then $\pi_T[2a,2a+2n] \neq \pi_T[2b,2b+2n]$. This seems to hold in general for binary words that are fixed points of morphisms, by an argument similar to that of Lemma $\ref{pISqIFFppISqp}$, but the converse is not true in general. Another open question is to investigate properties of infinite permutations associated with aperiodic binary words that are fixed points of a morphism. For such words, is there a way to define a mapping on the subpermutations of $\pi_\w$ similar to the map $\phi$ defined on the subpermutations of $\pi_T$?
These are only a couple of the open questions in the area of permutation complexity.
\paragraph{Acknowledgements:}
Steve Widmer thanks Luca Zamboni and Amy Glen for comments and suggestions that helped him to improve and clarify this paper.
\section{Introduction}
The plan for this article is as follows. I'll start by describing a
simple example of the Riemannian version of the prolongation procedure
of \cite{BCEG}. Next, I will explain how the direct observations used
in this example can be replaced by tools from representation theory to
make the procedure work in general. The whole procedure is based on an
inclusion of the group $O(n)$ into $O(n+1,1)$. Interpreting this
inclusion geometrically leads to a relation to conformal geometry,
that I will discuss next. Via the conformal Cartan connection, the
ideas used in the prolongation procedure lead to a construction of
conformally invariant differential operators from a twisted de--Rham
sequence. On manifolds which are locally conformally flat, this leads
to resolutions of certain locally constant sheaves, which are
equivalent to the (generalized) Bernstein--Gelfand--Gelfand (BGG)
resolutions from representation theory. In the end, I will outline
generalizations to other geometric structures.
It should be pointed out right at the beginning, that this
presentation basically reverses the historical development. The BGG
resolutions in representation theory were originally introduced in
\cite{BGG} and \cite{Lepowsky} in the 1970's. The constructions were
purely algebraic and combinatorial, based on the classification of
homomorphisms of Verma modules. It was known to the experts that there
is a relation to invariant differential operators on homogeneous
spaces, with conformally invariant operators on the sphere as a
special case. However it took some time until the relevance of ideas
and techniques from representation theory in conformal geometry was
more widely appreciated. An important step in this direction was the
work on the curved translation principle in \cite{Eastwood-Rice}. In
the sequel, there were some attempts to construct invariant
differential operators via a geometric version of the generalized BGG
resolutions for conformal and related structures, in particular in
\cite{Baston}.
This problem was completely solved in the general setting of parabolic
geometries in \cite{CSS:BGG}, and the construction was significantly
simplified in \cite{CD}. In these constructions, the operators occur
in patterns, and the first operators in each pattern form an
overdetermined system. For each of these systems, existence of
solutions is an interesting geometric condition. In \cite{BCEG} it was
shown that weakening the requirement on invariance (for example
forgetting conformal aspects and just thinking about Riemannian
metrics) the construction of a BGG sequence can be simplified.
Moreover, it can be used to rewrite the overdetermined system given by
the first operator(s) in the sequence as a first order closed system,
and this continues to work if one adds arbitrary lower order terms.
I am emphasizing these aspects because I hope that this will clarify
two points which would otherwise remain rather mysterious. On the one
hand, we will not start with some overdetermined system and try to
rewrite this in closed form. Rather than that, our starting point is
an auxiliary first order system of certain type which is rewritten
equivalently in two different ways, once as a higher order system and
once in closed form. Only in the end, it will follow from
representation theory, which systems are covered by the procedure.
On the other hand, if one starts the procedure in a purely Riemannian
setting, there are some choices which seem unmotivated. These choices
are often dictated if one requires conformal invariance.
\section{An example of the prolongation procedure}\label{2}
\setcounter{proposition}{1}
\setcounter{theorem}{3}
\subsection{The setup}\label{2.1}
The basics of Riemannian geometry are closely related to
representation theory of the orthogonal group $O(n)$. Any
representation of $O(n)$ gives rise to a natural vector bundle on
$n$--dimensional Riemannian manifolds and any $O(n)$--equivariant map
between two such representation induces a natural vector bundle map.
This can be proved formally using associated bundles to the
orthonormal frame bundle.
Informally, it suffices to know (at least for tensor bundles) that the
standard representation corresponds to the tangent or cotangent
bundle, and the correspondence is natural with respect to direct sums
and tensor products. A linear map between two representations of
$O(n)$ can be expressed in terms of a basis induced from an
orthonormal basis in the standard representation. Starting from a
local orthonormal frame of the (co)tangent bundle, one may locally use
the same formula in induced frames on any Riemannian manifold.
Equivariancy under the group $O(n)$ means that the result is
independent of the initial choice of a local orthonormal frame. Hence
one obtains a global, well defined bundle map.
The basic strategy for our prolongation procedure is to embed $O(n)$
into a larger Lie group $G$, and then look how representations of $G$
behave when viewed as representations of the subgroup $O(n)$. In this
way, representation theory is used as a way to organize symmetries. A
well known inclusion of this type is $O(n)\hookrightarrow O(n+1)$,
which is related to viewing the sphere $S^n$ as a homogeneous
Riemannian manifold. We use a similar, but slightly more involved
inclusion.
Consider $\Bbb V:=\Bbb R^{n+2}$ with coordinates numbered from $0$ to
$n+1$ and the inner product defined by
$$
\langle(x_0,\dots,x_{n+1}),(y_0,\dots,y_{n+1})\rangle:=
x_0y_{n+1}+x_{n+1}y_0+\sum_{i=1}^nx_iy_i.
$$
For this choice of inner product, the basis vectors $e_1,\dots,e_n$
span a subspace $\Bbb V_1$ which is a standard Euclidean $\Bbb R^n$,
while the two additional coordinates are what physicists call light
cone coordinates, i.e.~they define a signature $(1,1)$ inner product on
$\Bbb R^2$. Hence the whole form has signature $(n+1,1)$ and we
consider its orthogonal group $G=O(\Bbb V)\cong O(n+1,1)$. There is an
evident inclusion $O(n)\hookrightarrow G$ given by letting $A\in O(n)$
act on $\Bbb V_1$ and leaving the orthocomplement of $\Bbb V_1$ fixed.
In terms of matrices, this inclusion maps $A\in O(n)$ to the block
diagonal matrix $\left(\begin{smallmatrix} 1 & 0 & 0\\ 0 & A & 0\\ 0 &
0 & 1\end{smallmatrix}\right)$ with blocks of sizes $1$, $n$, and
$1$. The geometric meaning of this inclusion will be discussed later.
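That the block diagonal matrices above really lie in $G=O(n+1,1)$ amounts to $M^tJM=J$, where $J$ is the Gram matrix of the inner product $\langle\;,\;\rangle$. A quick numerical check for $n=2$, in plain Python and purely illustrative:

```python
import math

n = 2
# Gram matrix of <x,y> = x_0 y_{n+1} + x_{n+1} y_0 + sum_{i=1}^n x_i y_i.
J = [[0.0] * (n + 2) for _ in range(n + 2)]
J[0][n + 1] = J[n + 1][0] = 1.0
for i in range(1, n + 1):
    J[i][i] = 1.0

c, s = math.cos(0.7), math.sin(0.7)        # a rotation A in O(2)
M = [[1, 0, 0, 0],
     [0, c, -s, 0],
     [0, s, c, 0],
     [0, 0, 0, 1]]                         # block diagonal matrix diag(1, A, 1)

def mul(X, Y):
    # Naive matrix product, enough for this 4x4 check.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

Mt = [list(row) for row in zip(*M)]        # transpose of M
MtJM = mul(Mt, mul(J, M))
ok = all(abs(MtJM[i][j] - J[i][j]) < 1e-12
         for i in range(n + 2) for j in range(n + 2))
print(ok)
```

The $2\times 2$ block $\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$ of $J$ has eigenvalues $\pm 1$, so the form indeed has signature $(n+1,1)$.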
The representation of $A$ as a block matrix shows that, as a
representation of $O(n)$, $\Bbb V=\Bbb V_0\oplus\Bbb V_1\oplus\Bbb V_2$,
where $\Bbb V_0$ and $\Bbb V_2$ are trivial representations spanned by
$e_{n+1}$ and $e_0$, respectively. We will often denote elements of
$\Bbb V$ by column vectors with three rows, with the bottom row
corresponding to $\Bbb V_0$.
If we think of $\Bbb V$ as representing a bundle, then differential
forms with values in that bundle correspond to the representations
$\Lambda^k\Bbb R^n\otimes\Bbb V$ for $k=0,\dots,n$. Of course, for each
$k$, this representation decomposes as $\oplus_{i=0}^2(\Lambda^k\Bbb
R^n\otimes\Bbb V_i)$, but for the middle component $\Lambda^k\Bbb
R^n\otimes\Bbb V_1$, there is a finer decomposition. For example, if
$k=1$, then $\Bbb R^n\otimes\Bbb R^n$ decomposes as
$$
\Bbb R\oplus S^2_0\Bbb R^n\oplus\Lambda^2\Bbb R^n
$$
into trace--part, tracefree symmetric part and skew part. We actually
need only $k=0,1,2$, where we get the picture
\begin{equation}
\xymatrix@R=5pt@C=40pt{%
\Bbb R \ar@{<->}[dr] & \Bbb R^n & \Lambda^2\Bbb R^n\\
\Bbb R^n & \Bbb R\oplus S^2_0\Bbb R^n\oplus\Lambda^2\Bbb R^n &
\Bbb R^n\oplus
W_2\oplus \Lambda^3\Bbb R^n \ar@{<->}[ul] \\
\Bbb R & \Bbb R^n \ar@{<->}[ul] & \Lambda^2\Bbb R^n \ar@{<->}[ul] }
\label{example-decomp}\end{equation}
and we have indicated some components which are isomorphic as
representations of $O(n)$. Observe that assigning homogeneity $k+i$ to elements
of $\Lambda^k\Bbb R^n\otimes\Bbb V_i$, we have chosen to identify
components of the same homogeneity.
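For instance, the copy of $\Bbb R$ in the top left corner of
\eqref{example-decomp} lies in $\Lambda^0\Bbb R^n\otimes\Bbb V_2$ and thus
has homogeneity $0+2=2$, matching the trace part $\Bbb R\subset\Bbb
R^n\otimes\Bbb V_1$ of homogeneity $1+1=2$ that it is identified
with. Likewise, the two copies of $\Bbb R^n$ joined by the arrow in the
lower left corner both have homogeneity $1$.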
We will make these identifications explicit in the language of bundles
immediately, but let us first state how we will use them. For the left
column, we will on the one hand define $\partial:\Bbb V\to \Bbb
R^n\otimes\Bbb V$, which vanishes on $\Bbb V_0$ and is injective on
$\Bbb V_1\oplus\Bbb V_2$. On the other hand, we will define
$\delta^*:\Bbb R^n\otimes\Bbb V\to\Bbb V$ by using inverse
identifications. For the right hand column, we will only use the
identifications from right to left to define $\delta^*:\Lambda^2\Bbb
R^n\otimes\Bbb V\to\Bbb R^n\otimes\Bbb V$. Evidently, this map has
values in the kernel of $\delta^*:\Bbb R^n\otimes\Bbb V\to\Bbb V$, so
$\delta^*\o\delta^*=0$. By construction, all these maps preserve
homogeneity. We also observe that $\operatorname{ker}(\delta^*)=S^2_0\Bbb
R^n\oplus\operatorname{im}(\delta^*)\subset\Bbb R^n\otimes\Bbb V$.
Now we can carry all this over to any Riemannian manifold of dimension
$n$. Sections of the bundle $V$ corresponding to $\Bbb V$ can be
viewed as triples consisting of two functions and a one--form. Since
the representation $\Bbb R^n$ corresponds to $T^*M$, the bundle
corresponding to $\Lambda^k\Bbb R^n\otimes\Bbb V$ is $\Lambda^kT^*M\otimes V$.
Sections of this bundle are triples consisting of two $k$--forms and
one $T^*M$--valued $k$--form. If there is no danger of confusion with
abstract indices, we will use subscripts $i=0,1,2$ to denote the
component of a section in $\Lambda^kT^*M\otimes V_i$. To specify our maps,
we use abstract index notation and define $\partial:V\to T^*M\otimes
V$, $\delta^*:T^*M\otimes V\to V$ and $\delta^*:\Lambda^2T^*M\otimes V\to
T^*M\otimes V$ by
$$
\partial\begin{pmatrix} h \\ \phi_b \\ f\end{pmatrix}:=\begin{pmatrix}
0 \\ hg_{ab} \\ -\phi_a \end{pmatrix}\quad
\delta^*\begin{pmatrix} h_b \\ \phi_{bc} \\
f_b\end{pmatrix}:= \begin{pmatrix} \tfrac{1}{n}\phi^c_c \\ -f_b
\\ 0\end{pmatrix} \quad \delta^*\begin{pmatrix} h_{ab} \\ \phi_{abc} \\
f_{ab}\end{pmatrix}:= \begin{pmatrix}
\tfrac{-1}{n-1}\phi_{ac}{}^c \\ \tfrac{1}{2}f_{ab}
\\ 0\end{pmatrix}
$$
The numerical factors are chosen in such a way that our example
fits into the general framework developed in section \ref{3}.
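Note that these normalizations make $\delta^*$ a left inverse of
$\partial$ on the relevant components: composing the two maps, we obtain
$$
\delta^*\partial\begin{pmatrix} h \\ \phi_b \\ f\end{pmatrix}=
\delta^*\begin{pmatrix} 0 \\ hg_{ab} \\ -\phi_a\end{pmatrix}=
\begin{pmatrix} \tfrac{1}{n}hg^c{}_c \\ \phi_b \\ 0\end{pmatrix}=
\begin{pmatrix} h \\ \phi_b \\ 0\end{pmatrix}.
$$
In particular, $\delta^*\o\partial$ is the identity on the $\Bbb
V_1\oplus\Bbb V_2$--components and kills $\Bbb V_0$, so $\partial$ is
injective on $\Bbb V_1\oplus\Bbb V_2$, as claimed above.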
We can differentiate sections of $V$ using the component--wise
Levi--Civita connection, which we denote by $\nabla$. Note that this
raises homogeneity by one. The core of the method is to mix this
differential term with an algebraic one. We consider the operation
$\Gamma(V)\to\Omega^1(M,V)$ defined by $\Sigma\mapsto \nabla\Sigma+\partial\Sigma$.
Since $\partial$ is tensorial and linear, this defines a linear
connection $\tilde\nabla$ on the vector bundle $V$.
We are ready to define the class of systems that we will look at.
Choose a bundle map (not necessarily linear) $A:V_0\oplus V_1\to
S^2_0T^*M$, and view it as $A:V\to T^*M\otimes V$. Notice that this
implies that $A$ increases homogeneities. Then consider the system
\begin{equation}
\label{example-basic}
\tilde\nabla\Sigma+A(\Sigma)=\delta^*\psi\qquad\text{for some\
}\psi\in\Omega^2(M,V).
\end{equation}
We will show that on the one hand, this is equivalent to a second
order system on the $V_0$--component $\Sigma_0$ of $\Sigma$ and on the other
hand, it is equivalent to a first order system on $\Sigma$ in closed form.
\subsection{The first splitting operator}\label{2.2}
Since $A$ by definition has values in $\operatorname{ker}(\delta^*)$ and
$\delta^*\o\delta^*=0$, the system \eqref{example-basic} implies
$\delta^*(\tilde\nabla\Sigma)=0$. Hence we first have to analyze the
operator $\delta^*\o\tilde\nabla:\Gamma(V)\to\Gamma(V)$. Using abstract
indices and denoting the Levi--Civita connection by $\nabla_a$ we
obtain
$$
\Sigma=\begin{pmatrix} h \\ \phi_b \\
f\end{pmatrix}\overset{\tilde\nabla_a}{\mapsto}
\begin{pmatrix} \nabla_a h \\ \nabla_a\phi_b+hg_{ab} \\
\nabla_a f-\phi_a\end{pmatrix}\overset{\delta^*}{\mapsto}
\begin{pmatrix} \tfrac{1}{n}\nabla^b\phi_b+h \\ -\nabla_a f+\phi_a \\
0\end{pmatrix}
$$
From this we can read off the set of all solutions of
$\delta^*\tilde\nabla\Sigma=0$. We can arbitrarily choose $f$. Vanishing
of the middle row then forces $\phi_a=\nabla_a f$, and inserting this,
vanishing of the top row is equivalent to
$h=-\tfrac{1}{n}\nabla^b\nabla_b f=-\tfrac{1}{n}\Delta f$, where
$\Delta$ denotes the Laplacian. Hence we get
\begin{proposition}
For any $f\in C^\infty(M,\Bbb R)$, there is a unique $\Sigma\in\Gamma(V)$
such that $\Sigma_0=f$ and $\delta^*(\nabla\Sigma+\partial\Sigma)=0$. Mapping
$f$ to this unique $\Sigma$ defines a second order linear differential
operator $L:\Gamma(V_0)\to\Gamma(V)$, which is explicitly given by
$$
L(f)=\begin{pmatrix}-\tfrac{1}{n}\Delta f \\ \nabla_a f\\
f\end{pmatrix}=\sum_{i=0}^2(-1)^i(\delta^*\nabla)^i
\begin{pmatrix} 0 \\ 0\\ f\end{pmatrix}.
$$
\end{proposition}
The natural interpretation of this result is that $V_0$ is viewed as a
quotient bundle of $V$, so we have the tensorial projection
$\Sigma\mapsto\Sigma_0$. The operator $L$ constructed provides a
differential splitting of this tensorial projection, which is
characterized by the simple property that its values are in the kernel
of $\delta^*\tilde\nabla$. Therefore, $L$ and its generalizations are
called \textit{splitting operators}.
\subsection{Rewriting as a higher order system}\label{2.3}
We have just seen that the system $\tilde\nabla\Sigma+A(\Sigma)=\delta^*\psi$
from \ref{2.1} implies that $\Sigma=L(f)$, where $f=\Sigma_0$. Now by
Proposition \ref{2.2}, the components of $L(f)$ in $V_0$ and $V_1$ are
$f$ and $\nabla f$, respectively. Hence $f\mapsto A(L(f))$ is a first
order differential operator $\Gamma(V_0)\to\Gamma(S^2_0T^*M)$. Conversely,
any first order operator
$D_1:\Gamma(V_0)\to\Gamma(S^2_0T^*M)\subset\Omega^1(M,V)$ can be written as
$D_1(f)=A(L(f))$ for some $A:V\to T^*M\otimes V$ as in \ref{2.1}.
Next, for $f\in\Gamma(V_0)$ we compute
$$
\tilde\nabla L(f)=\tilde\nabla_a \begin{pmatrix} -\tfrac{1}{n}\Delta f\\ \nabla_b f\\
f\end{pmatrix}=
\begin{pmatrix} -\tfrac{1}{n}\nabla_a\Delta f \\ \nabla_a\nabla_b
f-\tfrac{1}{n}g_{ab}\Delta f\\ 0 \end{pmatrix}.
$$
The middle component of this expression is the tracefree part
$\nabla_{(a}\nabla_{b)_0}f$ of $\nabla^2 f$.
\begin{proposition}
For any operator $D_1:C^\infty(M,\Bbb R)\to\Gamma(S^2_0T^*M)$ of first
order, there is a bundle map $A:V\to T^*M\otimes V$ such that
$f\mapsto L(f)$ and $\Sigma\mapsto\Sigma_0$ induce inverse bijections
between the sets of solutions of
\begin{equation}\label{example-ho}
\nabla_{(a}\nabla_{b)_0}f+D_1(f)=0
\end{equation}
and of the basic system \eqref{example-basic}.
\end{proposition}
\begin{proof}
We can choose $A:V_0\oplus V_1\to S^2_0T^*M\subset T^*M\otimes V$ in
such a way that $D_1(f)=A(L(f))$ for all $f\in\Gamma(V_0)$. From above
we see that $\tilde\nabla L(f)+A(L(f))$ has vanishing bottom
component and middle component equal to
$\nabla_{(a}\nabla_{b)_0}f+D_1(f)$. From \eqref{example-decomp} we
see that sections of $\operatorname{im}(\delta^*)\subset T^*M\otimes V$ are
characterized by the facts that the bottom component vanishes, while
the middle one is skew symmetric. Hence $L(f)$ solves
\eqref{example-basic} if and only if $f$ solves \eqref{example-ho}.
Conversely, we know from \ref{2.2} that any solution $\Sigma$ of
\eqref{example-basic} satisfies $\Sigma=L(\Sigma_0)$, and the result
follows.
\end{proof}
Notice that in this result we do not require $D_1$ to be linear. In
technical terms, an operator can be written in the form
$f\mapsto\nabla_{(a}\nabla_{b)_0}f+D_1(f)$ for a first order operator
$D_1$, if and only if it is of second order, quasi--linear and its
principal symbol is the projection $S^2T^*M\to S^2_0T^*M$ onto the
tracefree part.
\subsection{Rewriting in closed form}\label{2.4}
Suppose that $\Sigma$ is a solution of \eqref{example-basic},
i.e.~$\tilde\nabla\Sigma+A(\Sigma)=\delta^*\psi$ for some $\psi$. Then the
discussion in \ref{2.3} shows that the two bottom components of
$\tilde\nabla\Sigma+A(\Sigma)$ actually have to vanish. Denoting the
components of $\Sigma$ as before, there must be a one--form $\tau_a$ such
that
\begin{equation}\label{almost-closed}
\begin{pmatrix}
\nabla_ah+\tau_a\\ \nabla_a\phi_b+hg_{ab}+A_{ab}(f,\phi) \\
\nabla_af-\phi_a
\end{pmatrix}=0.
\end{equation}
Apart from the occurrence of $\tau_a$, this is a first order system in
closed form, so it remains to compute this one--form.
To do this, we use the \textit{covariant exterior derivative}
$d^{\tilde\nabla}:\Omega^1(M,V)\to\Omega^2(M,V)$ associated to
$\tilde\nabla$. This is obtained by coupling the exterior derivative
to the connection $\tilde\nabla$, so in particular on one--forms we
obtain
$$
d^{\tilde\nabla}\omega(\xi,\eta)=\tilde\nabla_\xi(\omega(\eta))-
\tilde\nabla_\eta(\omega(\xi))-\omega([\xi,\eta]).
$$
Explicitly, on $\Omega^1(M,V)$ the operator $d^{\tilde\nabla}$ is
given by
\begin{equation}\label{example-cov-ext}
\begin{pmatrix} h_b \\ \phi_{bc} \\
f_b\end{pmatrix}\mapsto 2\begin{pmatrix} \nabla_{[a} h_{b]} \\
\nabla_{[a}\phi_{b]c}-h_{[a}g_{b]c}\\
\nabla_{[a}f_{b]}+\phi_{[ab]}\end{pmatrix},
\end{equation}
where square brackets indicate an alternation of abstract
indices.
Now almost by definition, $d^{\tilde\nabla}\tilde\nabla\Sigma$ is given
by the action of the curvature of $\tilde\nabla$ on $\Sigma$. One easily
computes directly that this coincides with the component--wise action
of the Riemann curvature. In particular, this is only non--trivial on
the middle component. On the other hand, since $A(\Sigma)=A_{ab}(f,\phi)$
is symmetric, we see that $d^{\tilde\nabla}(A(\Sigma))$ is concentrated
in the middle component, and it certainly can be written as
$\Phi_{abc}(f,\nabla f,\phi,\nabla\phi)$ for an appropriate bundle map
$\Phi$. Together with the explicit formula, this shows that applying
the covariant exterior derivative to \eqref{almost-closed} we obtain
$$
\begin{pmatrix}
2\nabla_{[a}\tau_{b]}\\ -R_{ab}{}^d{}_c\phi_d+\Phi_{abc}(f,\nabla
f,\phi,\nabla\phi)-2\tau_{[a}g_{b]c}\\ 0
\end{pmatrix}=0.
$$
Applying $\delta^*$, we obtain an element with the bottom two rows
equal to zero and top row given by
$$
\tfrac{1}{n-1}\big(R_a{}^c{}^d{}_c\phi_d-\Phi_a{}^c{}_c(f,\nabla
f,\phi,\nabla\phi)\big)+\tau_a,
$$
which gives a formula for $\tau_a$. Finally, we define a bundle map
$C:V\to T^*M\otimes V$ by
$$
C\begin{pmatrix} h\\ \phi_b\\ f\end{pmatrix}:=
\begin{pmatrix}
\tfrac{-1}{n-1}\big(R_a{}^c{}^d{}_c\phi_d-
\Phi_a{}^c{}_c(f,\phi,\phi,-hg-A(f,\phi))\big)\\
A_{ab}(f,\phi)\\ 0
\end{pmatrix}
$$
to obtain
\begin{theorem}
Let $D:C^\infty(M,\Bbb R)\to\Gamma(S^2_0T^*M)$ be a quasi--linear
differential operator of second order whose principal symbol is the
projection $S^2T^*M\to S^2_0T^*M$ onto the tracefree part. Then
there is a bundle map $C:V\to T^*M\otimes V$ which has the property
that $f\mapsto L(f)$ and $\Sigma\mapsto\Sigma_0$ induce inverse bijections
between the sets of solutions of $D(f)=0$ and of
$\tilde\nabla\Sigma+C(\Sigma)=0$. If $D$ is linear, then $C$ can be
chosen to be a vector bundle map.
\end{theorem}
Since for any bundle map $C$, a solution of $\tilde\nabla\Sigma+C(\Sigma)=0$
is determined by its value in a single point, we conclude that any
solution of $D(f)=0$ is uniquely determined by the values of $f$,
$\nabla f$ and $\Delta f$ in one point. Moreover, if $D$ is linear,
then the dimension of the space of solutions is always $\leq n+2$. In
this case, $\tilde\nabla+C$ defines a linear connection on the bundle
$V$, and the maximal dimension can be only attained if this connection
is flat.
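A simple illustration is the flat case $M=\Bbb R^n$ with $A_{ab}=0$, so
that $\Phi=0$ and $C=0$. Here the equation
$\nabla_{(a}\nabla_{b)_0}f=0$ has the solution space
$$
f(x)=c+b_ax^a+\tfrac{a}{2}|x|^2,\qquad a,c\in\Bbb R,\ b\in\Bbb R^{n*},
$$
which has the maximal dimension $n+2$; correspondingly, the connection
$\tilde\nabla+C=\tilde\nabla$ is flat in this case.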
Let us make the last step explicit for
$D(f)=\nabla_{(a}\nabla_{b)_0}f+A_{ab}f$ with some fixed section
$A_{ab}\in\Gamma(S^2_0T^*M)$. From formula \eqref{example-cov-ext} we
conclude that
$$
\Phi_{abc}(f,\nabla
f,\phi,\nabla\phi)=2f\nabla_{[a}A_{b]c}+2A_{c[b}\nabla_{a]}f,
$$
and inserting we obtain the closed system
$$
\begin{cases}
\nabla_ah-\tfrac{1}{n-1}\big(R_a{}^c{}^d{}_c\phi_d+
f\nabla^cA_{ac}+\phi^cA_{ac}\big)=0\\
\nabla_a\phi_b+hg_{ab}+fA_{ab}=0\\
\nabla_af-\phi_a=0
\end{cases}
$$
which is equivalent to $\nabla_{(a}\nabla_{b)_0}f+A_{ab}f=0$.
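As a cross--check, the second order equation can be read off directly
from this system: the third equation gives $\phi_a=\nabla_af$, the trace
of the second equation gives $\Delta f+nh=0$ (since $A_{ab}$ is
tracefree), i.e.~$h=-\tfrac{1}{n}\Delta f$, and the tracefree symmetric
part of the second equation is then exactly
$$
\nabla_{(a}\nabla_{b)_0}f+A_{ab}f=0.
$$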
\subsection{Remark}\label{2.5}
As a slight detour (which however is very useful for the purpose of
motivation) let me explain why the equation
$\nabla_{(a}\nabla_{b)_0}f+A_{ab}f=0$ is of geometric interest. Let
us suppose that $f$ is a nowhere vanishing function. Then we can use it
to conformally rescale the metric $g$ to $\hat g:=\tfrac{1}{f^2} g$. Now
one can compute how a conformal rescaling affects various quantities,
for example the Levi--Civita connection. In particular, we can look at
the conformal behavior of the Riemannian curvature tensor. Recall that
the Riemann curvature can be decomposed into various components
according to the decomposition of $S^2(\Lambda^2\Bbb R^n)$ as a
representation of $O(n)$. The highest weight part is the \textit{Weyl
curvature}, which is independent of conformal rescalings.
Contracting the Riemann curvature via
$\text{Ric}_{ab}:=R_{ca}{}^c{}_b$, one obtains the \textit{Ricci
curvature}, which is a symmetric two tensor. This can be further
decomposed into the \textit{scalar curvature} $R:=\text{Ric}^a{}_a$
and the tracefree part
$\text{Ric}^0_{ab}=\text{Ric}_{ab}-\tfrac{1}{n}Rg_{ab}$. Recall that a
Riemannian metric is called an \textit{Einstein metric} if the Ricci
curvature is proportional to the metric, i.e.~if
$\text{Ric}^0_{ab}=0$.
The behavior of the tracefree part of the Ricci curvature under a
conformal change $\hat g:=\tfrac{1}{f^2} g$ is easily determined
explicitly, see~\cite{BEG}. In particular, $\hat g$ is Einstein if and
only if
$$
\nabla_{(a}\nabla_{b)_0}f+A_{ab} f=0
$$
for an appropriately chosen $A_{ab}\in\Gamma(S^2_0T^*M)$. Hence
existence of a nowhere vanishing solution to this equation is
equivalent to the possibility of rescaling $g$ conformally to an
Einstein metric.
From above we know that for a general non--trivial solution $f$ of
this system and each $x\in M$, at least one of $f(x)$, $\nabla f(x)$,
and $\Delta f(x)$ must be nonzero. Hence $\{x:f(x)\neq 0\}$ is a dense
open subset of $M$, and one obtains a conformal rescaling to an
Einstein metric on this subset.
\section{The general procedure}\label{3}
\setcounter{lemma}{1}
\setcounter{theorem}{1}
\setcounter{proposition}{2}
The procedure carried out in an example in section \ref{2} can be
vastly generalized by replacing the standard representation by an
arbitrary irreducible representation of $G\cong O(n+1,1)$. (Things
also work for spinor representations, if one uses $Spin(n+1,1)$
instead.) However, one has to replace direct observations by tools
from representation theory, and we discuss in this section how this
is done.
\subsection{The Lie algebra $\frak{o}(n+1,1)$}\label{3.1}
We first have to look at the Lie algebra $\frak g\cong\frak{o}(n+1,1)$
of $G=O(\Bbb V)$. For the choice of inner product used in \ref{2.1}
this has the form
$$
\frak g=\left\{\begin{pmatrix} a & Z & 0 \\ X & A & -Z^t \\ 0 & -X^t &
-a\end{pmatrix}: A\in\frak o(n), a\in\Bbb R, X\in\Bbb R^n,
Z\in\Bbb R^{n*}\right\}.
$$
The central block formed by $A$ represents the subgroup $O(n)$. The
element $E:=\left(\begin{smallmatrix} 1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 &
-1\end{smallmatrix}\right)$ is called the \textit{grading
element}. Forming the commutator with $E$ is a diagonalizable map
$\frak g\to\frak g$ with eigenvalues $-1$, $0$, and $1$, and we denote
by $\frak g_i$ the eigenspace for the eigenvalue $i$. Hence ${\mathfrak g}_{-1}$
corresponds to $X$, ${\mathfrak g}_1$ to $Z$ and ${\mathfrak g}_0$ to $A$ and $a$.
Moreover, the Jacobi identity immediately implies that
$[{\mathfrak g}_i,{\mathfrak g}_j]\subset{\mathfrak g}_{i+j}$ with the convention that
${\mathfrak g}_{i+j}=\{0\}$ unless $i+j\in \{-1,0,1\}$. Such a decomposition is
called a \textit{$|1|$--grading} of ${\mathfrak g}$. In particular, restricting
the adjoint action to $\frak o(n)$, one obtains actions on ${\mathfrak g}_{-1}$
and ${\mathfrak g}_1$, which are the standard representation and its dual,
respectively (and hence isomorphic to the standard representation).
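This grading can be verified by a direct blockwise computation: with
blocks of sizes $1$, $n$, and $1$ as before,
$$
\left[E,\begin{pmatrix} a & Z & 0 \\ X & A & -Z^t \\ 0 & -X^t &
-a\end{pmatrix}\right]=\begin{pmatrix} 0 & Z & 0 \\ -X & 0 & -Z^t \\ 0 &
X^t & 0\end{pmatrix},
$$
so the $X$--part, the $(A,a)$--part, and the $Z$--part are the
eigenspaces of $\operatorname{ad}(E)$ for the eigenvalues $-1$, $0$, and
$1$, respectively.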
Since the grading element $E$ acts diagonalizably under the adjoint
representation, it also acts diagonalizably on any finite dimensional
irreducible representation $\Bbb W$ of $\frak g$. If $w\in\Bbb W$ is
an eigenvector for the eigenvalue $j$, and $Y\in{\mathfrak g}_i$, then
$E\cdot Y\cdot w=Y\cdot E\cdot w+[E,Y]\cdot w$
shows that $Y\cdot w$ is an eigenvector with eigenvalue $i+j$. From
irreducibility it follows easily that denoting by $j_0$ the lowest
eigenvalue, the set of eigenvalues is $\{j_0,j_0+1,\dots,j_0+N\}$ for
some $N\geq 1$. Correspondingly, we obtain a decomposition $\Bbb
W=\Bbb W_0\oplus\dots\oplus \Bbb W_N$ such that ${\mathfrak g}_i\cdot\Bbb
W_j\subset\Bbb W_{i+j}$. In particular, each of the subspaces $\Bbb
W_j$ is invariant under the action of ${\mathfrak g}_0$ and hence in particular
under the action of $\frak o(n)$. Notice that the decomposition $\Bbb
V=\Bbb V_0\oplus\Bbb V_1\oplus\Bbb V_2$ used in section \ref{2} is
obtained in this way.
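For the standard representation this is easily made explicit. In the
basis used in \ref{2.1}, the grading element acts by $E\cdot e_0=e_0$,
$E\cdot e_i=0$ for $i=1,\dots,n$, and $E\cdot e_{n+1}=-e_{n+1}$. Hence
the eigenvalues are $-1$, $0$, and $1$, i.e.~$j_0=-1$ and $N=2$, and the
eigenspaces are $\Bbb V_0=\langle e_{n+1}\rangle$, $\Bbb V_1$, and $\Bbb
V_2=\langle e_0\rangle$.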
One can find a Cartan subalgebra of (the complexification of) $\frak
g$ which is spanned by $E$ and a Cartan subalgebra of (the
complexification of) $\frak o(n)$. The theorem of the highest weight
then leads to a bijective correspondence between finite dimensional
irreducible representations $\Bbb W$ of $\frak g$ and pairs $(\Bbb
W_0,r)$, where $\Bbb W_0$ is a finite dimensional irreducible
representation of $\frak o(n)$ and $r\geq 1$ is an integer. Basically,
the highest weight of $\Bbb W_0$ is the restriction to the Cartan
subalgebra of $\frak o(n)$ of the highest weight of $\Bbb W$, while
$r$ is related to the value of the highest weight on $E$. As the
notation suggests, we can arrange things in such a way that $\Bbb W_0$
is the lowest eigenspace for the action of $E$ on $\Bbb W$. For
example, the standard representation $\Bbb V$ in this notation
corresponds to $(\Bbb R,2)$. The explicit version of this
correspondence is not too important here, it is described in terms of
highest weights in \cite{BCEG} and in terms of Young diagrams in
\cite{Mike:prolon}. It turns out that, given $\Bbb W_0$ and $r$, the number
$N$ which describes the length of the grading can be easily computed.
\subsection{Kostant's version of the Bott--Borel--Weil theorem}\label{3.2}
Suppose that $\Bbb W$ is a finite dimensional irreducible
representation of $\frak g$, decomposed as $\Bbb
W_0\oplus\dots\oplus\Bbb W_N$ as above. Then we can view $\Lambda^k\Bbb
R^n\otimes\Bbb W$ as $\Lambda^k{\mathfrak g}_1\otimes\Bbb W$, which leads to two
natural families of $O(n)$--equivariant maps. First we define
$\partial^*:\Lambda^k{\mathfrak g}_1\otimes\Bbb W\to\Lambda^{k-1}{\mathfrak g}_1\otimes\Bbb W$ by
$$
\partial^*(Z_1\wedge\dots\wedge Z_k\otimes w):=\textstyle\sum_{i=1}^k (-1)^i
Z_1\wedge\dots \wedge\widehat{Z_i}\wedge\dots\wedge Z_k\otimes
Z_i\cdot w,
$$
where the hat denotes omission. Note that if $w\in\Bbb W_j$, then
$Z_i\cdot w\in\Bbb W_{j+1}$, so this operation preserves homogeneity. On
the other hand, we have $Z_i\cdot Z_j\cdot w-Z_j\cdot Z_i\cdot
w=[Z_i,Z_j]\cdot w=0$, since ${\mathfrak g}_1$ is a commutative subalgebra. This
easily implies that $\partial^*\o\partial^*=0$.
Next, there is an evident duality between ${\mathfrak g}_{-1}$ and ${\mathfrak g}_1$,
which is compatible with Lie theoretic methods since it is induced by
the Killing form of ${\mathfrak g}$. Using this, we can identify
$\Lambda^k{\mathfrak g}_1\otimes\Bbb W$ with the space of $k$--linear alternating
maps ${\mathfrak g}_{-1}^k\to\Bbb W$. This gives rise to a natural map
$\partial=\partial_k:\Lambda^k{\mathfrak g}_1\otimes\Bbb
W\to\Lambda^{k+1}{\mathfrak g}_1\otimes\Bbb W$ defined by
$$
\partial\alpha(X_0,\dots,X_k):=
\textstyle\sum_{i=0}^k(-1)^iX_i\cdot\alpha(X_0,\dots,\widehat{X_i},\dots,X_k).
$$
In this picture, homogeneity boils down to the usual notion for
multilinear maps, i.e.~$\alpha:({\mathfrak g}_{-1})^k\to\Bbb W$ is homogeneous of
degree $\ell$ if it has values in $\Bbb W_{\ell-k}$. From this it
follows immediately that $\partial$ preserves homogeneities, and
$\partial\o\partial=0$ since ${\mathfrak g}_{-1}$ is commutative.
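As a consistency check, for $k=0$ the map $\partial$ is just the action
map, $\partial w(X)=X\cdot w$. For the standard representation, writing
$w$ as a column vector as in section \ref{2} and taking $X\in{\mathfrak
g}_{-1}$ in the matrix form of \ref{3.1}, we get
$$
X\cdot\begin{pmatrix} h\\ \phi\\ f\end{pmatrix}=
\begin{pmatrix} 0 & 0 & 0\\ X & 0 & 0\\ 0 & -X^t & 0\end{pmatrix}
\begin{pmatrix} h\\ \phi\\ f\end{pmatrix}=
\begin{pmatrix} 0\\ hX\\ -\langle X,\phi\rangle\end{pmatrix},
$$
which reproduces the map $\partial$ from \ref{2.1}.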
As a first step towards the proof of his version of the
Bott--Borel--Weil--theorem (see \cite{Kostant}), B.~Kostant proved the
following result:
\begin{lemma}
The maps $\partial$ and $\partial^*$ are adjoint with respect to an
inner product of Lie theoretic origin. For each degree $k$, one
obtains an algebraic Hodge decomposition
$$
\Lambda^k{\mathfrak g}_1\otimes\Bbb W=\operatorname{im}(\partial)\oplus
(\operatorname{ker}(\partial)\cap\operatorname{ker}(\partial^*))\oplus\operatorname{im}(\partial^*),
$$
with the first two summands adding up to $\operatorname{ker}(\partial)$ and the last
two summands adding up to $\operatorname{ker}(\partial^*)$.
In particular, the restrictions of the canonical projections to the
subspace $\Bbb H_k:=\operatorname{ker}(\partial)\cap\operatorname{ker}(\partial^*)$ induce
isomorphisms $\Bbb H_k\cong\operatorname{ker}(\partial)/\operatorname{im}(\partial)$ and $\Bbb
H_k\cong\operatorname{ker}(\partial^*)/\operatorname{im}(\partial^*)$.
\end{lemma}
Since $\partial$ and $\partial^*$ are ${\mathfrak g}_0$--equivariant, all spaces
in the lemma are naturally representations of ${\mathfrak g}_0$ and all
statements include the ${\mathfrak g}_0$--module structure. Looking at the Hodge
decomposition more closely, we see that for each $k$, the map
$\partial$ induces an isomorphism
$$
\Lambda^k{\mathfrak g}_1\otimes\Bbb
W\supset\operatorname{im}(\partial^*)\to\operatorname{im}(\partial)\subset
\Lambda^{k+1}{\mathfrak g}_1\otimes\Bbb W,
$$
while $\partial^*$ induces an isomorphism in the opposite
direction. In general, these two maps are not inverse to each other, so
we replace $\partial^*$ by the map $\delta^*$ which vanishes on
$\operatorname{ker}(\partial^*)$ and is inverse to $\partial$ on $\operatorname{im}(\partial)$. Of
course, $\delta^*\o\delta^*=0$ and it computes the same cohomology as
$\partial^*$.
Kostant's version of the BBW--theorem computes (in a more general
setting to be discussed below) the representations $\Bbb H_k$ in an
explicit and algorithmic way. We only need the cases $k=0$ and $k=1$
here, but to formulate the result for $k=1$ we need a bit of
background. Suppose that $\Bbb E$ and $\Bbb F$ are finite dimensional
representations of a semisimple Lie algebra. Then the tensor product
$\Bbb E\otimes\Bbb F$ contains a unique irreducible component whose
highest weight is the sum of the highest weights of $\Bbb E$ and $\Bbb
F$. This component is called the \textit{Cartan product} of $\Bbb E$
and $\Bbb F$ and denoted by $\Bbb E\circledcirc \Bbb F$. Moreover,
there is a nonzero equivariant map $\Bbb E\otimes\Bbb F\to \Bbb
E\circledcirc\Bbb F$, which is unique up to multiples. This
equivariant map is also referred to as the \textit{Cartan product}.
The part of Kostant's version of the BBW--theorem that we need (proved
in \cite{BCEG} in this form) reads as follows:
\begin{theorem}
Let $\Bbb W=\Bbb W_0\oplus\dots\oplus\Bbb W_N$ be the irreducible
representation of $\frak g$ corresponding to the pair $(\Bbb
W_0,r)$.
\noindent
(i) In degree zero, $\operatorname{im}(\partial^*)=\Bbb W_1\oplus\dots\oplus\Bbb
W_N$ and $\Bbb H_0=\operatorname{ker}(\partial)=\Bbb W_0$.
\noindent
(ii) The subspace $\Bbb H_1\subset{\mathfrak g}_1\otimes\Bbb W$ is isomorphic to
$S^r_0{\mathfrak g}_1\circledcirc\Bbb W_0$. It is contained in ${\mathfrak g}_1\otimes\Bbb
W_{r-1}$ and it is the only irreducible component of
$\Lambda^*{\mathfrak g}_1\otimes \Bbb W$ of this isomorphism type.
\end{theorem}
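For the standard representation $\Bbb V$, which corresponds to $(\Bbb
R,2)$, part (ii) gives $\Bbb H_1\cong S^2_0{\mathfrak g}_1$, sitting
inside ${\mathfrak g}_1\otimes\Bbb V_1$. This is precisely the summand
$S^2_0\Bbb R^n$ in the middle entry of \eqref{example-decomp}, and in
degree one the algebraic Hodge decomposition recovers the statement
$\operatorname{ker}(\delta^*)=S^2_0\Bbb
R^n\oplus\operatorname{im}(\delta^*)$ observed in \ref{2.1}.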
\subsection{Some more algebra}\label{3.2a}
Using Theorem \ref{3.2} we can now deduce the key algebraic ingredient
for the procedure. For each $i\geq 1$ we have $\partial:\Bbb
W_i\to{\mathfrak g}_1\otimes\Bbb W_{i-1}$. Next, we consider
$(\operatorname{id}\otimes\partial)\o\partial:\Bbb W_i\to\otimes^2{\mathfrak g}_1\otimes\Bbb
W_{i-2}$, and so on, to obtain ${\mathfrak g}_0$--equivariant maps
$$
\phi_i:=(\operatorname{id}\otimes\dots\otimes\operatorname{id}\otimes\partial)\o\dots\o
(\operatorname{id}\otimes\partial)\o\partial:\Bbb W_i\to\otimes^i{\mathfrak g}_1\otimes\Bbb W_0
$$
for $i=1,\dots,N$, and we put $\phi_0=\operatorname{id}_{\Bbb W_0}$.
\begin{proposition}
Let $\Bbb W=\Bbb W_0\oplus\dots\oplus\Bbb W_N$ correspond to $(\Bbb
W_0,r)$ and let $\Bbb K\subset S^r{\mathfrak g}_1\otimes\Bbb W_0$ be the kernel
of the Cartan product. Then we have
\noindent
(i) For each $i$, the map $\phi_i:\Bbb W_i\to\otimes^i{\mathfrak g}_1\otimes\Bbb
W_0$ is injective and hence an isomorphism onto its image. This image
is given by
$$
\operatorname{im}(\phi_i)=
\begin{cases}
S^i{\mathfrak g}_1\otimes\Bbb W_0\qquad i<r\\
(S^i{\mathfrak g}_1\otimes\Bbb W_0)\cap (S^{i-r}{\mathfrak g}_1\otimes\Bbb K)\qquad
i\geq r.
\end{cases}
$$
(ii) For each $i<r$, the restriction of the map
$\delta^*\o(\operatorname{id}\otimes\phi_{i-1}^{-1})$ to $S^i{\mathfrak g}_1\otimes\Bbb
W_0\subset{\mathfrak g}_1\otimes S^{i-1}{\mathfrak g}_1\otimes\Bbb W_0$ coincides with
$\phi_i^{-1}$.
\end{proposition}
\begin{proof} (sketch)
(i) Part (i) of Theorem \ref{3.2} shows that $\partial:\Bbb
W_i\to{\mathfrak g}_1\otimes\Bbb W_{i-1}$ is injective for each $i\geq 1$, so
injectivity of the $\phi_i$ follows. Moreover, for $i\neq r$, the
image of this map coincides with the kernel of
$\partial_1:{\mathfrak g}_1\otimes\Bbb W_{i-1}\to \Lambda^2{\mathfrak g}_1\otimes \Bbb
W_{i-2}$, while for $i=r$ this kernel in addition contains a
complementary subspace isomorphic to $S^r_0{\mathfrak g}_1\circledcirc\Bbb W_0$.
A moment of thought shows that $\partial_1$ can be written as
$2\text{Alt}\o(\operatorname{id}\otimes\partial_0)$, where $\text{Alt}$ denotes
the alternation. This immediately implies that the $\phi_i$ all have
values in $S^i{\mathfrak g}_1\otimes\Bbb W_0$ as well as the claim about the
image for $i<r$.
It further implies that $\operatorname{id}\otimes\phi_{r-1}$ restricts to isomorphisms
$$
\xymatrix@R=8pt{%
{\mathfrak g}_1\otimes\Bbb W_{r-1} \ar[r] & {\mathfrak g}_1\otimes S^{r-1}{\mathfrak g}_1\otimes\Bbb
W_0\\
\operatorname{ker}(\partial)\ar@{^{(}->}[u]\ar[r] & S^r {\mathfrak g}_1\otimes\Bbb
W_0\ar@{^{(}->}[u]\\
\operatorname{im}(\partial)\ar@{^{(}->}[u]\ar[r] & \Bbb K\ar@{^{(}->}[u],
}$$
which proves the claim on the image for $i=r$. For $i>r$ the claim
then follows easily as above.
\noindent
(ii) This follows immediately from the fact that
$\delta^*|_{\operatorname{im}(\partial)}$ inverts $\partial|_{\operatorname{im}(\delta^*)}$.
\end{proof}
\subsection{Step one of the prolongation procedure}\label{3.3}
The developments in \ref{3.1}--\ref{3.2a} carry over to an arbitrary
Riemannian manifold $(M,g)$ of dimension $n$. The representation $\Bbb
W$ corresponds to a vector bundle $W=\oplus_{i=0}^NW_i$. Likewise,
$\Bbb H_1$ corresponds to a direct summand $H_1\subset T^*M\otimes
W_{r-1}$ which is isomorphic to $S^r_0T^*M\circledcirc W_0$. The maps
$\partial$, $\partial^*$, and $\delta^*$ induce vector bundle maps on
the bundles $\Lambda^kT^*M\otimes W$ of $W$--valued differential forms,
and for $i=0,\dots,N$, the map $\phi_i$ induces a vector bundle map
$W_i\to S^iT^*M\otimes W_0$. We will denote all these maps by the same
symbols as their algebraic counterparts. Finally, the Cartan product
gives rise to a vector bundle map $S^rT^*M\otimes W_0\to H_1$, which
is unique up to multiples.
We have the component--wise Levi--Civita connection $\nabla$ on $W$.
We will denote a typical section of $W$ by $\Sigma$. The subscript $i$
will indicate the component in $\Lambda^kT^*M\otimes W_i$. Now we define a
linear connection $\tilde\nabla$ on $W$ by
$\tilde\nabla\Sigma:=\nabla\Sigma+\partial(\Sigma)$,
i.e.~$(\tilde\nabla\Sigma)_i=\nabla \Sigma_i+\partial(\Sigma_{i+1})$. Next, we
choose a bundle map $A:W_0\oplus\dots\oplus W_{r-1}\to H_1$, view it
as $A:W\to T^*M\otimes W$ and consider the system
\begin{equation}
\label{basic}
\tilde\nabla \Sigma+A(\Sigma)=\delta^*\psi\quad\text{for some\
}\psi\in\Omega^2(M,W).
\end{equation}
Since $A$ has values in $\operatorname{ker}(\delta^*)$ and $\delta^*\o\delta^*=0$,
any solution $\Sigma$ of this system has the property that
$\delta^*\tilde\nabla\Sigma=0$.
To rewrite the system equivalently as a higher order system, we define
a linear differential operator $L:\Gamma(W_0)\to\Gamma(W)$ by
$$
L(f):=\textstyle\sum_{i=0}^N(-1)^i(\delta^*\o\nabla)^i f.
$$
\begin{proposition}
(i) For $f\in\Gamma(W_0)$ we have $L(f)_0=f$ and
$\partial^*\tilde\nabla L(f)=0$, and $L(f)$ is uniquely determined
by these two properties.
\noindent
(ii) For $\ell=0,\dots,N$ the component $L(f)_\ell$ depends only on
the $\ell$--jet of $f$. More precisely, denoting by $J^\ell W_0$ the
$\ell$th jet prolongation of the bundle $W_0$, the operator $L$
induces vector bundle maps $J^\ell W_0\to W_0\oplus\dots\oplus
W_\ell$, which are isomorphisms for all $\ell<r$.
\end{proposition}
\begin{proof}
(i) Putting $\Sigma=L(f)$ it is evident that $\Sigma_0=f$ and
$\Sigma_{i+1}=-\delta^*\nabla \Sigma_i$ for all $i\geq 0$. Therefore,
$$
(\tilde\nabla\Sigma)_i=\nabla\Sigma_i+\partial(\Sigma_{i+1})=\nabla\Sigma_i-
\partial\delta^*\nabla \Sigma_i
$$
for all $i$. Since $\delta^*\partial$ is the identity on
$\operatorname{im}(\delta^*)$, we get $\delta^*\tilde\nabla\Sigma=0$.
Conversely, expanding the equation $0=\delta^*\tilde\nabla \Sigma$ in
components we obtain
$$
\Sigma_{i+1}=\delta^*\partial\Sigma_{i+1}=-\delta^*\nabla\Sigma_i,
$$
which inductively implies $\Sigma=L(\Sigma_0)$.
\noindent
(ii) By definition, $L(f)_\ell$ depends only on $\ell$ derivatives of
$f$. Again by definition, $L(f)_1=\delta^*\nabla f$, and if $r>1$,
this equals $\phi_1^{-1}(\nabla f)$. Naturality of $\delta^*$ implies
that
$$
L(f)_2=\delta^*\nabla\delta^*\nabla
f=\delta^*\o(\operatorname{id}\otimes\delta^*)(\nabla^2
f)=\delta^*\o(\operatorname{id}\otimes\phi_1^{-1})(\nabla^2 f).
$$
Replacing $\nabla^2$ by its symmetrization changes the
expression by a term of order zero, so we see that, if $r>2$ and up to
lower order terms, $L(f)_2$ is obtained by applying $\phi_2^{-1}$ to
the symmetrization of $\nabla^2 f$. Using part (ii) of Proposition
\ref{3.2a} and induction, we conclude that for $\ell<r$ and up to
lower order terms $L(f)_\ell$ is obtained by applying $\phi_\ell^{-1}$
to the symmetrized $\ell$th covariant derivative of $f$, and the
claim follows.
\end{proof}
Note that part (ii) immediately implies that for a bundle map $A$ as
defined above, $f\mapsto A(L(f))$ is a differential operator
$\Gamma(W_0)\to\Gamma(H_1)$ of order at most $r-1$ and any such operator is
obtained in this way.
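For the standard representation ($N=2$), this recovers the situation of
section \ref{2}: the general formula for $L$ specializes to
$$
L(f)=\sum_{i=0}^2(-1)^i(\delta^*\o\nabla)^if=
\begin{pmatrix} -\tfrac{1}{n}\Delta f\\ \nabla_af\\ f\end{pmatrix},
$$
and since $r=2$, the operator $f\mapsto A(L(f))$ is of order at most
one, with values in $H_1=S^2_0T^*M$.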
\subsection{The second step of the procedure}\label{3.4}
For a section $f\in\Gamma(W_0)$ we next define $D^{\Bbb W}(f)\in\Gamma(H_1)$
to be the component of $\tilde\nabla L(f)$ in
$\Gamma(H_1)\subset\Omega^1(M,W)$. We know that $(\tilde\nabla
L(f))_{r-1}=\nabla L(f)_{r-1}+\partial L(f)_r$, and the second summand
does not contribute to the $H_1$--component. Moreover, from the proof
of Proposition \ref{3.3} we know that, up to lower order terms,
$L(f)_{r-1}$ is obtained by applying $\phi_{r-1}^{-1}$ to the
symmetrized $(r-1)$--fold covariant derivative of $f$. Hence up to
lower order terms, $\nabla L(f)_{r-1}$ is obtained by applying
$\operatorname{id}\otimes\phi_{r-1}^{-1}$ to the symmetrized $r$--fold covariant
derivative of $f$. Using the proof of Proposition \ref{3.2a} this
easily implies that the principal symbol of $ D^{\Bbb W}$ is (a
nonzero multiple of) the Cartan product $S^rT^*M\otimes W_0\to
S^r_0T^*M\circledcirc W_0=H_1$.
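In the example of section \ref{2} we have $r=2$ and $H_1=S^2_0T^*M$, and
the computation in \ref{2.3} shows that
$$
D^{\Bbb V}(f)=\nabla_{(a}\nabla_{b)_0}f,
$$
whose principal symbol is indeed the projection $S^2T^*M\to S^2_0T^*M$
onto the tracefree part.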
\begin{proposition}
Let $D:\Gamma(W_0)\to\Gamma(H_1)$ be a quasi--linear differential operator
of order $r$ whose principal symbol is given by the Cartan product
$S^rT^*M\otimes W_0\to S^r_0T^*M\circledcirc W_0$. Then there is a
bundle map $A:W\to T^*M\otimes W$ as in \ref{3.3} such that
$\Sigma\mapsto\Sigma_0$ and $f\mapsto L(f)$ induce inverse bijections
between the sets of solutions of $D(f)=0$ and of the basic system
\eqref{basic}.
\end{proposition}
\begin{proof}
This is completely parallel to the proof of Proposition \ref{2.3}:
The conditions on $D$ exactly mean that it can be written in the
form $D(f)=D^{\Bbb W}(f)+A(L(f))$ for an appropriate choice of $A$
as above. Then $\tilde\nabla L(f)+A(L(f))$ is a section of the
subbundle $\operatorname{ker}(\delta^*)$ and the component in $H_1$ of this
section equals $D(f)$. Of course, being a section of $\operatorname{im}(\delta^*)$
is equivalent to vanishing of the $H_1$--component.
Conversely, Proposition \ref{3.3} shows that any solution $\Sigma$ of
\eqref{basic} is of the form $\Sigma=L(\Sigma_0)$.
\end{proof}
\subsection{The last step of the procedure}\label{3.5}
\setcounter{theorem}{5}
To rewrite the basic system \eqref{basic} in first order closed form,
we use the covariant exterior derivative $d^{\tilde\nabla}$. Suppose
that $\alpha\in\Omega^1(M,W)$ has the property that its components $\alpha_i$
vanish for $i=0,\dots,\ell$. Then one immediately verifies that
$(d^{\tilde\nabla}\alpha)_i=0$ for $i=0,\dots,\ell-1$ and
$(d^{\tilde\nabla}\alpha)_\ell=\partial(\alpha_{\ell+1})$, so
$(\delta^*d^{\tilde\nabla}\alpha)_i$ vanishes for $i\leq\ell$ and equals
$\delta^*\partial\alpha_{\ell+1}$ for $i=\ell+1$. If we in addition
assume that $\alpha$ is a section of the subbundle $\operatorname{im}(\delta^*)$, then
the same is true for $\alpha_{\ell+1}$ and hence
$\delta^*\partial\alpha_{\ell+1}= \alpha_{\ell+1}$.
Suppose that $\Sigma$ solves the basic system \eqref{basic}. Then
applying $\delta^*d^{\tilde\nabla}$, we obtain
$$
\delta^*(R\bullet\Sigma
+d^{\tilde\nabla}(A(\Sigma)))=\delta^*d^{\tilde\nabla}\delta^*\psi,
$$
where we have used that, as in \ref{2.4},
$d^{\tilde\nabla}\tilde\nabla\Sigma$ is given by the action of the
Riemann curvature $R$. From above we see that we can compute the
lowest nonzero homogeneous component of $\delta^*\psi$ from this
equation. We can then move this to the other side in \eqref{basic} to
obtain an equivalent system whose right hand side starts one
homogeneity higher. The lowest nonzero homogeneous component of the
right hand side can then be computed in the same way, and iterating
this we conclude that \eqref{basic} can be equivalently written as
\begin{equation}
\label{hoe}
\tilde\nabla \Sigma+B(\Sigma)=0
\end{equation}
for a certain differential operator $B:\Gamma(W)\to\Omega^1(M,W)$.
While $B$ is a higher order differential operator in general, it is
crucial that the construction gives us a precise control on the order
of the individual components of $B$. From the construction it follows
that $B(\Sigma)_i\in\Omega^1(M,V_i)$ depends only on $\Sigma_0,\dots,\Sigma_i$,
and the dependence is tensorial in $\Sigma_i$, first order in $\Sigma_{i-1}$
and so on up to $i$th order in $\Sigma_0$.
In particular, the component of \eqref{hoe} in $\Omega^1(M,W_0)$ has the
form $\nabla\Sigma_0=C_0(\Sigma_0,\Sigma_1)$. Next, the component in
$\Omega^1(M,W_1)$ has the form $\nabla\Sigma_1=\tilde
C_1(\Sigma_0,\Sigma_1,\Sigma_2,\nabla\Sigma_0)$, and we define
$$
C_1(\Sigma_0,\Sigma_1,\Sigma_2):=\tilde C_1(\Sigma_0,\Sigma_1,\Sigma_2,-C_0(\Sigma_0,\Sigma_1)).
$$
Hence the two lowest components of \eqref{hoe} are equivalent to
$$
\begin{cases}
\nabla\Sigma_1= C_1(\Sigma_0,\Sigma_1,\Sigma_2)\\
\nabla\Sigma_0=C_0(\Sigma_0,\Sigma_1)
\end{cases}
$$
Differentiating the lower row and inserting for $\nabla\Sigma_0$ and
$\nabla\Sigma_1$ we get an expression for $\nabla^2\Sigma_0$ in terms of
$\Sigma_0,\Sigma_1,\Sigma_2$. Continuing in this way, one proves
\begin{theorem}
Let $D:\Gamma(W_0)\to\Gamma(H_1)$ be a quasi--linear differential operator
of order $r$ with principal symbol the Cartan product
$S^rT^*M\otimes W_0\to S^r_0T^*M\circledcirc W_0$. Then there is a
bundle map $C:W\to T^*M\otimes W$ such that $\Sigma\mapsto\Sigma_0$ and
$f\mapsto L(f)$ induce inverse bijections between the sets of
solutions of $D(f)=0$ and of $\tilde\nabla \Sigma+C(\Sigma)=0$. If $D$ is
linear, then $C$ can be chosen to be a vector bundle map.
\end{theorem}
This in particular shows that any solution of $D(f)=0$ is determined
by the value of $L(f)$ in one point, and hence by the $N$--jet of $f$
in one point. For linear $D$, the dimension of the space of solutions
is bounded by $\dim(\Bbb W)$ and equality can be only attained if the
linear connection $\tilde\nabla+C$ on $W$ is flat. A crucial point
here is of course that $\Bbb W$, and hence $\dim(\Bbb W)$ and $N$ can
be immediately computed from $\Bbb W_0$ and $r$, so all this
information is available in advance, without going through the
procedure. As we shall see later, both the bound on the order and the
bound on the dimension are sharp.
To get a feeling for what is going on, let us consider some examples.
If we look at operators on smooth functions, we have $\Bbb W_0=\Bbb
R$. The representation associated to $(\Bbb R,r)$ is $S^{r-1}_0\Bbb
V$, the tracefree part of the $(r-1)$st symmetric power of the
standard representation $\Bbb V$. A moment of thought shows that the
eigenvalues of the grading element $E$ on this representation range
from $-r+1$ to $r-1$, so $N=2(r-1)$. On the other hand, for $r\geq 3$
we have
$$
\dim(S^{r-1}_0\Bbb V)=\dim(S^{r-1}\Bbb V)-\dim(S^{r-3}\Bbb
V)=(n+2r-2)\frac{(n+r-2)!}{n!(r-1)!},
$$
and this is the maximal dimension of the space of solutions of any
system with principal part
$f\mapsto\nabla_{(a_1}\nabla_{a_2}\dots\nabla_{a_r)_0}f$ for $f\in
C^\infty(M,\Bbb R)$.
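This closed form is easy to sanity-check numerically. The following sketch (the helper names \texttt{dim\_sym}, \texttt{dim\_tracefree} and \texttt{closed\_form} are ours, not from the text) compares the difference of symmetric-power dimensions for $\Bbb V$ of dimension $n+2$ with the closed form $(n+2r-2)\,(n+r-2)!/\bigl(n!\,(r-1)!\bigr)$:

```python
from math import comb, factorial

def dim_sym(k, m):
    """Dimension of the k-th symmetric power of an m-dimensional space."""
    return comb(m + k - 1, k)

def dim_tracefree(r, n):
    """dim S^{r-1}_0 V for V of dimension n+2: the trace removes
    a copy of S^{r-3} V."""
    return dim_sym(r - 1, n + 2) - dim_sym(r - 3, n + 2)

def closed_form(r, n):
    return (n + 2 * r - 2) * factorial(n + r - 2) // (factorial(n) * factorial(r - 1))

# Compare the two expressions on a range of values with r >= 3.
for n in range(2, 8):
    for r in range(3, 8):
        assert dim_tracefree(r, n) == closed_form(r, n)
```

For instance, $n=2$, $r=3$ gives $9$, the dimension of the space of harmonic quadratic polynomials in four variables.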
As an extreme example let us consider the conformal Killing equation
on tracefree symmetric tensors. Here $W_0=S^k_0TM$ for some $k$ and
$r=1$. The principal part in this case is simply
$$
f^{a_1\dots a_k}\mapsto\nabla^{(a}f^{a_1\dots a_k)_0}.
$$
The relevant representation $\Bbb W$ in this case turns out to be
$\circledcirc^k\frak g$, i.e.~the highest weight subspace in $S^k\frak
g$. In particular $N=2k$ in this case, so even though we consider
first order systems, many derivatives are needed to pin down a solution.
The expression for $\dim(\Bbb W)$ is already reasonably complicated in
this case, namely (see \cite{Mike:symmetries})
$$
\dim(\Bbb W)=\frac{(n+k-3)!(n+k-2)!(n+2k)!}{k!(k+1)!(n-2)!n!(n+2k-3)!}
$$
The conformal Killing equation $\nabla^{(a}f^{a_1\dots a_k)_0}=0$
plays an important role in the description of symmetries of the
Laplacian on a Riemannian manifold, see \cite{Mike:symmetries}.
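As a plausibility check on this formula (the function name \texttt{dim\_W} below is ours): for $k=1$ the solutions are the conformal Killing vector fields, so the formula should reproduce the dimension $\frac{(n+1)(n+2)}{2}$ of $\frak{so}(n+1,1)$, and it does:

```python
from math import factorial

def dim_W(n, k):
    """Evaluate the dimension formula quoted above for the space W
    attached to the conformal Killing equation on S^k_0 TM."""
    num = factorial(n + k - 3) * factorial(n + k - 2) * factorial(n + 2 * k)
    den = (factorial(k) * factorial(k + 1) * factorial(n - 2)
           * factorial(n) * factorial(n + 2 * k - 3))
    return num // den

# k = 1: conformal Killing fields; the bound equals the dimension of
# the conformal group of the sphere, (n+1)(n+2)/2.
for n in range(3, 12):
    assert dim_W(n, 1) == (n + 1) * (n + 2) // 2
```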
\section{Conformally invariant differential operators}\label{4}
We now move to the method for constructing conformally invariant
differential operators, which gave rise to the prolongation procedure
discussed in the last two sections.
\setcounter{proposition}{3}
\subsection{Conformal geometry}\label{4.1}
Let $M$ be a smooth manifold of dimension $n\geq 3$. As already
indicated in \ref{2.4}, two Riemannian metrics $g$ and $\hat g$ on $M$
are called \textit{conformally equivalent} if and only if there is a
positive smooth function $\phi$ on $M$ such that $\hat g=\phi^2 g$. A
\textit{conformal structure} on $M$ is a conformal equivalence class
$[g]$ of metrics, and then $(M,[g])$ is called a conformal manifold. A
\textit{conformal isometry} between conformal manifolds $(M,[g])$ and
$(\tilde M,[\tilde g])$ is a local diffeomorphism which pulls back one
(or equivalently any) metric from the class $[\tilde g]$ to a metric
in $[g]$.
A Riemannian metric on $M$ can be viewed as a reduction of structure
group of the frame bundle to $O(n)\subset GL(n,\Bbb R)$. In the same
way, a conformal structure is a reduction of structure group to
$CO(n)\subset GL(n,\Bbb R)$, the subgroup generated by $O(n)$ and
multiples of the identity.
We want to clarify how the inclusion $O(n)\hookrightarrow G\cong
O(n+1,1)$ which was the basis for our prolongation procedure is
related to conformal geometry. For the basis $\{e_0,\dots,e_{n+1}\}$
used in \ref{2.1}, this inclusion was simply given by $A\mapsto
\left(\begin{smallmatrix} 1& 0 & 0 \\ 0 & A & 0\\ 0 & 0 &
1\end{smallmatrix} \right)$. In \ref{3.1} we met the decomposition
${\mathfrak g}={\mathfrak g}_{-1}\oplus{\mathfrak g}_0\oplus{\mathfrak g}_1$ of the Lie algebra ${\mathfrak g}$ of $G$.
We observed that this decomposition is preserved by $O(n)\subset G$
and in that way ${\mathfrak g}_{\pm 1}$ is identified with the standard
representation. But there is a larger subgroup with these properties.
Namely, for elements of
$$
G_0:=\left\{\left(\begin{smallmatrix} a & 0 & 0 \\ 0 & A & 0\\ 0 &
0 & a^{-1}\end{smallmatrix} \right):a\in\Bbb R\setminus 0, A\in
O(n)\right\}\subset G,
$$
the adjoint action preserves the grading, and maps $X\in{\mathfrak g}_{-1}$
to $a^{-1}AX$, so $G_0\cong CO({\mathfrak g}_{-1})$. Note that $G_0\subset G$
corresponds to the Lie subalgebra ${\mathfrak g}_0\subset{\mathfrak g}$.
Now there is a more conceptual way to understand this. Consider the
subalgebra $\frak p:={\mathfrak g}_0\oplus{\mathfrak g}_1\subset{\mathfrak g}$ and let $P\subset G$
be the corresponding Lie subgroup. Then $P$ is the subgroup of
matrices which are block--upper--triangular with blocks of sizes $1$,
$n$, and $1$. Equivalently, $P$ is the stabilizer in $G$ of the
isotropic line spanned by the basis vector $e_0$. The group $G$ acts
transitively on the space of all isotropic lines in $\Bbb V$, so one
may identify this space with the homogeneous space $G/P$.
Taking coordinates $z_i$ with respect to an orthonormal basis of $\Bbb
V$ for which the first $n+1$ vectors are positive and the last one is
negative, a vector is isotropic if and only if
$\sum_{i=0}^nz_i^2=z_{n+1}^2$. Hence for a nonzero isotropic vector
the last coordinate is nonzero and any isotropic line contains a
unique vector whose last coordinate equals $1$. But this shows that the
space of isotropic lines in $\Bbb V$ is an $n$--sphere, so $G/P\cong
S^n$.
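This normalization argument can be made concrete in coordinates. The numerical sketch below (with our own helper \texttt{inner}) assumes the signature convention just described: positive on the first $n+1$ coordinates, negative on the last.

```python
import math
import random

def inner(v, w):
    """Inner product of signature (n+1, 1): positive on the first n+1
    coordinates, negative on the last one."""
    return sum(a * b for a, b in zip(v[:-1], w[:-1])) - v[-1] * w[-1]

random.seed(0)
n = 4
# A point of the unit sphere S^n in the first n+1 coordinates...
x = [random.gauss(0, 1) for _ in range(n + 1)]
norm = math.sqrt(sum(t * t for t in x))
x = [t / norm for t in x]
# ...extends to an isotropic vector with last coordinate 1:
v = x + [1.0]
assert abs(inner(v, v)) < 1e-12

# Conversely, any isotropic vector with nonzero last coordinate rescales
# to such a representative, so isotropic lines correspond to points of S^n.
w = [3 * t for t in v]          # another vector spanning the same line
rep = [t / w[-1] for t in w]    # the representative with last coordinate 1
assert all(abs(a - b) < 1e-12 for a, b in zip(rep, v))
```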
Given a point $x\in G/P$, choosing a point $v$ in the corresponding
line gives rise to an identification $T_xS^n\cong v^\perp/\Bbb R v$
and that space carries a positive definite inner product induced by
$\langle\ ,\ \rangle$. Passing from $v$ to $\lambda v$, this inner product
gets scaled by $\lambda^2$, so we get a canonical conformal class of inner
products on each tangent space, i.e.~a conformal structure on $S^n$.
This conformal structure contains the round metric of $S^n$.
The action $\ell_g$ of $g\in G$ on the space of null lines by
construction preserves this conformal structure, so $G$ acts by
conformal isometries. It turns out that this identifies
$G/\{\pm\operatorname{id}\}$ with the group of all conformal isometries of $S^n$.
For the base point $o=eP\in G/P$, the tangent space $T_o(G/P)$ is
naturally identified with $\frak g/\frak p\cong{\mathfrak g}_{-1}$. Let
$P_+\subset P$ be the subgroup of those $g\in P$ for which
$T_o\ell_g=\operatorname{id}$. Then one easily shows that $P/P_+\cong G_0$ and the
isomorphism $G_0\cong CO({\mathfrak g}_{-1})$ is induced by $g\mapsto
T_o\ell_g$. Moreover, $P_+$ has Lie algebra $\frak g_1$ and
$\exp:{\mathfrak g}_1\to P_+$ is a diffeomorphism.
\subsection{Conformally invariant differential operators}\label{4.2}
Let $(M,[g])$ be a conformal manifold. Choosing a metric $g$ from the
conformal class, we get the Levi--Civita connection $\nabla$ on each
Riemannian natural bundle as well as the Riemann curvature tensor $R$.
Using $g$, its inverse, and $R$, we can write down differential
operators, and see how they change if $g$ is replaced by a conformally
equivalent metric $\hat g$. Operators obtained in that way, which do
not change at all under conformal rescalings are called
\textit{conformally invariant}. In order to do this successfully one
either has to allow density bundles or deal with conformal weights,
but I will not go into these details here. The best known example of
such an operator is the conformal Laplacian or Yamabe operator which
is obtained by adding an appropriate amount of scalar curvature to the
standard Laplacian.
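Concretely (conventions for signs and normalizations vary in the literature), on conformal densities of weight $1-\frac{n}2$ the Yamabe operator takes the form
$$
Yf=\Delta f+\tfrac{n-2}{4(n-1)}\,\operatorname{Scal}\cdot f,
$$
where $\operatorname{Scal}$ denotes the scalar curvature of the chosen metric $g$.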
The definition of conformally invariant operators immediately suggests
a naive approach to their construction. First choose a principal part
for the operator. Then see how this behaves under conformal rescalings
and try to compensate the changes by adding lower order terms
involving curvature quantities. This approach (together with a bit of
representation theory) easily leads to a complete classification of
conformally invariant first order operators, see \cite{Fegan}. Passing
to higher orders, the direct methods get surprisingly quickly out of
hand.
The basis for more invariant approaches is provided by a classical
result of Elie Cartan, which interprets general conformal structures
as analogs of the homogeneous space $S^n\cong G/P$ from \ref{4.1}. As
we have noted above, a conformal structure $[g]$ on $M$ can be
interpreted as a reduction of structure group. This means that a
conformal manifold $(M,[g])$ naturally carries a principal bundle with
structure group $CO(n)$, the \textit{conformal frame bundle}. Recall
from \ref{4.1} that the conformal group $G_0=CO({\mathfrak g}_{-1})\cong CO(n)$
can be naturally viewed as a quotient of the group $P$. Cartan's
result says that the conformal frame bundle can be canonically
extended to a principal fiber bundle $\Cal G\to M$ with structure
group $P$, and $\Cal G$ can be endowed with a canonical Cartan
connection $\omega\in\Omega^1(\Cal G,\frak g)$. The form $\omega$ has similar
formal properties as the Maurer--Cartan form on $G$, i.e.~it defines a
trivialization of the tangent bundle $T\Cal G$, which is
$P$--equivariant and reproduces the generators of fundamental vector
fields.
While the canonical Cartan connection is conformally invariant, it is
not immediately clear how to use it to construct differential
operators. The problem is that, unlike principal connections, Cartan
connections do not induce linear connections on associated vector
bundles.
\subsection{The setup for the conformal BGG machinery}\label{4.3}
Let us see how the basic developments from \ref{3.1}--\ref{3.2a}
comply with our new point of view. First of all, for $g\in P$, the
adjoint action does not preserve the grading
${\mathfrak g}={\mathfrak g}_{-1}\oplus{\mathfrak g}_0\oplus {\mathfrak g}_1$, but it preserves the
subalgebras $\frak p={\mathfrak g}_0\oplus{\mathfrak g}_1$, and ${\mathfrak g}_1$. More generally,
if $\Bbb W=\Bbb W_0\oplus\dots\oplus\Bbb W_N$ is an irreducible
representation of ${\mathfrak g}$ decomposed according to eigenspaces of the
grading element $E$, then each of the subspaces $\Bbb
W_i\oplus\dots\oplus\Bbb W_N$ is $P$--invariant. Since $P$ naturally
acts on ${\mathfrak g}_1$ and on $\Bbb W$, we get induced actions on
$\Lambda^k{\mathfrak g}_1\otimes\Bbb W$ for all $k$. The formula for
$\partial^*:\Lambda^k{\mathfrak g}_1\otimes\Bbb W\to \Lambda^{k-1}{\mathfrak g}_1\otimes\Bbb W$
uses only the action of ${\mathfrak g}_1$ on $\Bbb W$, so $\partial^*$ is
$P$--equivariant.
In contrast to this, the only way to make $P$ act on ${\mathfrak g}_{-1}$ is via
the identification with $\frak g/\frak p$. However, the action of
${\mathfrak g}_{-1}$ on $\Bbb W$ has no natural interpretation in this
identification, and $\partial:\Lambda^k{\mathfrak g}_1\otimes\Bbb W\to
\Lambda^{k+1}{\mathfrak g}_1\otimes\Bbb W$ is \textit{not} $P$--equivariant.
Anyway, given a conformal manifold $(M,[g])$ we can now do the
following. Rather than viewing $\Bbb W$ just as a sum of representations
of $G_0\cong CO(n)$, we can view it as a representation of $P$, and
form the associated bundle $\Cal W:=\Cal G\times_P\Bbb W\to M$. Bundles
obtained in this way are called \textit{tractor bundles}. I want to
emphasize at this point that the bundle $\Cal W$ is of completely
different nature than the bundle $W$ used in section \ref{3}. To see
this, recall that elements of the subgroup $P_+\subset P$ act on $G/P$
by diffeomorphisms which fix the base point $o=eP$ to first
order. Therefore, the action of such a diffeomorphism on the fiber
over $o$ of any tensor bundle is the identity. On the other hand, it
is easy to see that on the fiber over $o$ of any tractor bundle, this
action is always non--trivial. Hence tractor bundles are unusual
geometric objects.
Examples of tractor bundles have already been introduced as an
alternative to Cartan's approach in the 1920's and 30's, in particular
in the work of Tracy Thomas, see \cite{Thomas}. Their key feature is
that the canonical Cartan connection $\omega$ induces a canonical linear
connection, called the \textit{normal tractor connection} on each
tractor bundle. This is due to the fact that these bundles do not
correspond to general representations of $P$, but only to
representations which extend to the big group $G$. We will denote the
normal tractor connection on $\Cal W$ by $\nabla^{\Cal W}$. These
connections automatically combine algebraic and differential parts.
The duality between ${\mathfrak g}_1$ and ${\mathfrak g}_{-1}$ induced by the Killing
form is more naturally viewed as a duality between ${\mathfrak g}_1$ and $\frak
g/\frak p$. Via the Cartan connection $\omega$, the associated bundle
$\Cal G\times_P(\frak g/\frak p)$ is isomorphic to the tangent bundle
$TM$. Thus, the bundle $\Cal G\times_P(\Lambda^k{\mathfrak g}_1\otimes\Bbb W)$ is again
the bundle $\Lambda^kT^*M\otimes\Cal W$ of $\Cal W$--valued forms. Now it
turns out that in a well defined sense (which however is rather awkward
to express), the lowest nonzero homogeneous component of $\nabla^{\Cal
W}$ is of degree zero: it is tensorial and induced by the Lie
algebra differential $\partial$.
Equivariancy of $\partial^*$ implies that it defines bundle maps
$$
\partial^*:\Lambda^kT^*M\otimes\Cal W\to\Lambda^{k-1}T^*M\otimes\Cal W
$$
for each $k$. In particular,
$\operatorname{im}(\partial^*)\subset\operatorname{ker}(\partial^*)\subset \Lambda^kT^*M\otimes\Cal W$
are natural subbundles, and we can form the subquotient
$H_k:=\operatorname{ker}(\partial^*)/\operatorname{im}(\partial^*)$. It turns out that these
bundles are always naturally associated to the conformal frame bundle,
so they are usual geometric objects like tensor bundles. The explicit
form of the bundles $H_k$ can be computed algorithmically using
Kostant's version of the BBW--theorem.
\subsection{The conformal BGG machinery}\label{4.4}
The normal tractor connection $\nabla^{\Cal W}$ extends to the
covariant exterior derivative, which we denote by $d^{\Cal
W}:\Omega^k(M,\Cal W)\to\Omega^{k+1}(M,\Cal W)$. The lowest nonzero
homogeneous component of $d^{\Cal W}$ is of degree zero, tensorial,
and induced by $\partial$.
Now for each $k$, the operator $\partial^* d^{\Cal W}$ on $\Omega^k(M,\Cal W)$
is conformally invariant and its lowest nonzero homogeneous component
is the tensorial map induced by $\partial^*\partial$. By Theorem
\ref{3.2}, $\partial^*\partial$ acts invertibly on
$\operatorname{im}(\partial^*)$. Hence we can find a (non--natural) bundle map $\beta$
on $\operatorname{im}(\partial^*)$ such that $\beta\partial^*d^{\Cal W}$ reproduces the
lowest nonzero homogeneous component of sections of
$\operatorname{im}(\partial^*)$. Therefore, the operator $\operatorname{id}-\beta\partial^*d^{\Cal
W}$ is (at most $N$--step) nilpotent on $\Gamma(\operatorname{im}(\partial^*))$, which
easily implies that
$$
\left(\textstyle\sum_{i=0}^N(\operatorname{id}-\beta\partial^*d^{\Cal W})^i\right)\beta
$$
defines a differential operator $Q$ on $\Gamma(\operatorname{im}(\partial^*))$ which
is inverse to $\partial^*d^{\Cal W}$ and therefore conformally
invariant.
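The reason this finite sum inverts $\partial^*d^{\Cal W}$ is purely formal: whenever $\beta T=\operatorname{id}-S$ with $S$ nilpotent, $\bigl(\sum_i S^i\bigr)\beta$ is an inverse of $T$. The following minimal finite-dimensional analogue (plain integer matrices, our own helpers, and $\beta=\operatorname{id}$ for simplicity) illustrates this:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

# S is strictly upper triangular, hence 4-step nilpotent: S^4 = 0.
S = [[0, 2, 5, 1],
     [0, 0, 3, 4],
     [0, 0, 0, 7],
     [0, 0, 0, 0]]
T = [[int(i == j) - S[i][j] for j in range(4)] for i in range(4)]  # T = I - S

# The finite Neumann series Q = I + S + S^2 + S^3 inverts T exactly,
# since Q T = I - S^4 = I.
Q, P = identity(4), identity(4)
for _ in range(3):
    P = matmul(P, S)
    Q = [[Q[i][j] + P[i][j] for j in range(4)] for i in range(4)]

assert matmul(Q, T) == identity(4)
assert matmul(T, Q) == identity(4)
```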
Next, we have a canonical bundle map
$$
\pi_H:\operatorname{ker}(\partial^*)\to\operatorname{ker}(\partial^*)/\operatorname{im}(\partial^*)=H_k,
$$
and we denote by the same symbol the induced tensorial projection
on sections. Given $f\in\Gamma(H_k)$ we can choose $\phi\in\Omega^k(M,\Cal
W)$ such that $\partial^*\phi=0$ and $\pi_H(\phi)=f$, and consider
$\phi-Q\partial^*d^{\Cal W}\phi$. By construction, $\phi$ is uniquely
determined up to adding sections of $\operatorname{im}(\partial^*)$. Since these are
reproduced by $Q\partial^*d^{\Cal W}$, the above element is
independent of the choice of $\phi$ and hence defines
$L(f)\in\Omega^k(M,\Cal W)$. Since $Q$ has values in
$\Gamma(\operatorname{im}(\partial^*))$ we see that $\pi_H(L(f))=f$, and since
$\partial^*d^{\Cal W}Q$ is the identity on $\Gamma(\operatorname{im}(\partial^*))$ we
get $\partial^*d^{\Cal W}L(f)=0$. If $\phi$ satisfies $\pi_H(\phi)=f$
and $\partial^*d^{\Cal W}\phi=0$, then
$$
L(f)=\phi-Q\partial^*d^{\Cal W}\phi=\phi,
$$
so $L(f)$ is uniquely determined by these two properties.
By construction, the operator $L:\Gamma(H_k)\to\Omega^k(M,\Cal W)$ is
conformally invariant. Moreover, $d^{\Cal W}L(f)$ is a section of
$\operatorname{ker}(\partial^*)$, so we can finally define the BGG--operators
$D^{\Cal W}:\Gamma(H_k)\to\Gamma(H_{k+1})$ by $D^{\Cal W}(f):=\pi_H(d^{\Cal
W}L(f))$. They are conformally invariant by construction.
To obtain additional information, we have to look at structures which
are locally conformally flat or equivalently locally conformally
isometric to the sphere $S^n$. It is a classical result that local
conformal flatness is equivalent to vanishing of the curvature of the
canonical Cartan connection.
\begin{proposition}
On locally conformally flat manifolds, the BGG operators form a
complex $(\Gamma(H_*),D^{\Cal W})$, which is a fine resolution of the
constant sheaf $\Bbb W$.
\end{proposition}
\begin{proof}
The curvature of any tractor connection is induced by the Cartan
curvature (see \cite{TAMS}), so on locally conformally flat
structures, all tractor connections are flat. This implies that the
covariant exterior derivative satisfies $d^{\Cal W}\o d^{\Cal W}=0$.
Thus $(\Omega^*(M,\Cal W),d^{\Cal W})$ is a fine resolution of the
constant sheaf $\Bbb W$.
For $f\in\Gamma(H_k)$ consider $d^{\Cal W}L(f)$. By construction, this
lies in the kernel of $\partial^*$ and since $d^{\Cal W}\o d^{\Cal
W}=0$, it also lies in the kernel of $\partial^*d^{\Cal W}$. From
above we know that this implies that
$$
d^{\Cal W}L(f)=L(\pi_H(d^{\Cal W}L(f)))=L(D^{\Cal W}(f)).
$$
This shows that $L\o D^{\Cal W}\o D^{\Cal W}=d^{\Cal W}\o d^{\Cal
W}\o L=0$ and hence $D^{\Cal W}\o D^{\Cal W}=0$, so
$(\Gamma(H_*),D^{\Cal W})$ is a complex. The operators $L$ define a
chain map from this complex to $(\Omega^*(M,\Cal W),d^{\Cal W})$, and we
claim that this chain map induces an isomorphism in cohomology.
First, take $\phi\in\Omega^k(M,\Cal W)$ such that $d^{\Cal W}\phi=0$. Then
$$
\tilde \phi:=\phi-d^{\Cal W}Q\partial^*\phi
$$
is cohomologous to $\phi$ and satisfies $\partial^*\tilde\phi=0$.
Moreover, $d^{\Cal W}\tilde \phi=d^{\Cal W}\phi=0$, so $\partial^*
d^{\Cal W}\tilde\phi=0$. Hence $\tilde \phi=L(\pi_H(\tilde\phi))$ and $D^{\Cal
W}(\pi_H(\tilde\phi))=0$, so the induced map in cohomology is
surjective.
Conversely, assume that $f\in\Gamma(H_k)$ satisfies $D^{\Cal W}(f)=0$ and
that $L(f)=d^{\Cal W}\phi$ for some $\phi\in\Omega^{k-1}(M,\Cal W)$. As
before, replacing $\phi$ by $\phi-d^{\Cal W}Q\partial^*\phi$ we may
assume that $\partial^*\phi=0$. But together with $\partial^*L(f)=0$
this implies $\phi=L(\pi_H(\phi))$ and thus $f=\pi_H(L(f))=D^{\Cal
W}(\pi_H(\phi))$. Hence the induced map in cohomology is injective,
too. Since this holds both locally and globally, the proof is
complete.
\end{proof}
Via a duality between invariant differential operators and
homomorphisms of generalized Verma modules, this reproduces the
original BGG resolutions as constructed in \cite{Lepowsky}. Via the
classification of such homomorphisms, one also concludes that this
construction produces a large subclass of all those conformally
invariant operators which are non--trivial on locally conformally flat
structures.
Local exactness of the BGG sequence implies that all the operators
$D^{\Cal W}$ are nonzero on locally conformally flat manifolds.
Passing to general conformal structures does not change the principal
symbol of the operator $D^{\Cal W}$, so we always get non--trivial
operators.
On the other hand, we can also conclude that the bounds obtained from
Theorem \ref{3.5} are sharp. From Theorem \ref{3.2} we conclude that
any choice of metric in the conformal class identifies $H_0=\Cal
W/\operatorname{im}(\partial^*)$ with the bundle $W_0$ and $H_1$ with its
counterpart from section \ref{3}, and we consider the operator
$D^{\Cal W}:\Gamma(H_0)\to\Gamma(H_1)$. By conformal invariance, the system
$D^{\Cal W}(f)=0$ must be among the systems covered by Theorem
\ref{3.5}, and the above procedure identifies its solutions with
parallel sections of $\Cal W$. Since $\nabla^{\Cal W}$ is flat in the
locally conformally flat case, the space of parallel sections has
dimension $\dim(\Bbb W)$. Moreover, two solutions of the system
coincide if and only if their images under $L$ have the same value in
one point.
\section{Generalizations}\label{5}
In this last part, we briefly sketch how the developments of sections
\ref{3} and \ref{4} can be carried over to larger classes of geometric
structures.
\subsection{The prolongation procedure for general $|1|$--graded Lie
algebras}\label{5.1}
The algebraic developments in \ref{3.1}--\ref{3.2a} generalize without
problems to a semisimple Lie algebra $\frak g$ endowed with a
$|1|$--grading, i.e.~a grading of the form $\frak
g={\mathfrak g}_{-1}\oplus{\mathfrak g}_0\oplus{\mathfrak g}_1$. Given such a grading it is easy to
see that it is the eigenspace decomposition of $\operatorname{ad}(E)$ for a uniquely
determined element $E\in\frak g_0$. The Lie subalgebra ${\mathfrak g}_0$ is
automatically the direct sum of a semisimple part ${\mathfrak g}'_0$ and a
one--dimensional center spanned by $E$. This gives rise to
$E$--eigenspace decompositions for irreducible representations. Again
irreducible representations of ${\mathfrak g}$ may be parametrized by pairs
consisting of an irreducible representation of ${\mathfrak g}'_0$ and an integer
$\geq 1$. Then all the developments of \ref{3.1}--\ref{3.2a} work
without changes.
Next choose a Lie group $G$ with Lie algebra ${\mathfrak g}$ and let $G_0\subset
G$ be the subgroup consisting of those elements whose adjoint action
preserves the grading of ${\mathfrak g}$. Then this action defines an
infinitesimally effective homomorphism $G_0\to GL({\mathfrak g}_{-1})$. In
particular, the semisimple part $G'_0$ of $G_0$ is a (covering of a)
subgroup of $GL({\mathfrak g}_{-1})$, so this defines a type of geometric
structure on manifolds of dimension $\dim({\mathfrak g}_{-1})$. This structure
is linked to representation theory of $G'_0$ in the same way as
Riemannian geometry is linked to representation theory of $O(n)$.
For manifolds endowed with a structure of this type, there is an
analog of the prolongation procedure described in \ref{3.3}--\ref{3.5}
with closely parallel proofs, see \cite{BCEG}. The only change is that
instead of the Levi--Civita connection one uses any linear connection
on $TM$ which is compatible with the reduction of structure group.
There are some minor changes if this connection has torsion. The
systems that this procedure applies to are the following. One chooses
an irreducible representation $\Bbb W_0$ of $G'_0$ and an integer
$r\geq 1$. Denoting by $W_0$ the bundle corresponding to $\Bbb W_0$,
one then can handle systems whose principal symbol is (a multiple of)
the projection from $S^rTM\otimes W_0$ to the subbundle corresponding
to the irreducible component of maximal highest weight in
$S^r{\mathfrak g}_1\otimes\Bbb W_0$.
The simplest example of this situation is ${\mathfrak g}=\frak{sl}(n+1,\Bbb R)$
endowed with the grading $\begin{pmatrix}{\mathfrak g}_0 & {\mathfrak g}_1\\
{\mathfrak g}_{-1}&{\mathfrak g}_0\end{pmatrix}$ with blocks of sizes $1$ and $n$. Then
${\mathfrak g}_{-1}$ has dimension $n$ and ${\mathfrak g}_0\cong\frak{gl}(n,\Bbb R)$. For
the right choice of group, one obtains $G'_0=SL(n,\Bbb R)$, so the
structure is just a volume form on an $n$--manifold.
There is a complete description of $|1|$--gradings of semisimple Lie
algebras in terms of structure theory and hence a complete list of the
other geometries for which the procedure works. One of these is
related to almost quaternionic structures, the others can be described
in terms of identifications of the tangent bundle with a symmetric or
skew symmetric square of an auxiliary bundle or with a tensor product
of two auxiliary bundles.
\subsection{Invariant differential operators for
AHS--structures}\label{5.2}
For a group $G$ with Lie algebra
${\mathfrak g}={\mathfrak g}_{-1}\oplus{\mathfrak g}_0\oplus{\mathfrak g}_1$ as in \ref{5.1}, one defines
$P\subset G$ as the subgroup of those elements whose adjoint action
preserves the subalgebra ${\mathfrak g}_0\oplus{\mathfrak g}_1=:\frak p$. It turns out
that $\frak p$ is the Lie algebra of $P$ and $G_0\subset P$ can also
naturally be viewed as a quotient of $P$.
On manifolds of dimension $\dim({\mathfrak g}_{-1})$ we may consider reductions
of structure group to the group $G_0$. The passage from $G'_0$ as
discussed in \ref{5.1} to $G_0$ is like the passage from Riemannian to
conformal structures. As in \ref{4.2}, one may look at extensions of
the principal $G_0$--bundle defining the structure to a principal
$P$--bundle $\Cal G$ endowed with a normal Cartan connection
$\omega\in\Omega^1(\Cal G,\frak g)$. In the example ${\mathfrak g}=\frak{sl}(n+1,\Bbb
R)$ with ${\mathfrak g}_0=\frak{gl}(n,\Bbb R)$ from \ref{5.1}, the principal
$G_0$--bundle is the full frame bundle, so it contains no information.
One shows that such an extension is equivalent to the choice of a
projective equivalence class of torsion free connections on $TM$. In
all other cases (more precisely, one has to require that no simple
summand has this form) Cartan's result on conformal structures can be
generalized to show that such an extension is uniquely possible for
each given $G_0$--structure, see e.g. \cite{CSS2}.
\noindent
The structures equivalent to such Cartan connections are called
AHS--structures in the literature. Apart from conformal and projective
structures, they also contain almost quaternionic and almost
Grassmannian structures as well as some more exotic examples, see
\cite{CSS2}. For all these structures, the procedure from section
\ref{4} can be carried out without changes to construct differential
operators which are intrinsic to the geometry.
\subsection{More general geometries}\label{5.3}
The construction of invariant differential operators from section
\ref{4} applies to a much larger class of geometric structures. Let
$\frak g$ be a semisimple Lie algebra endowed with a $|k|$--grading,
i.e.~a grading of the form ${\mathfrak g}={\mathfrak g}_{-k}\oplus\dots\oplus{\mathfrak g}_k$ for
some $k\geq 1$, such that $[{\mathfrak g}_i,{\mathfrak g}_j]\subset{\mathfrak g}_{i+j}$ and such
that the Lie subalgebra ${\mathfrak g}_-:={\mathfrak g}_{-k}\oplus\dots\oplus{\mathfrak g}_{-1}$ is
generated by ${\mathfrak g}_{-1}$. For any such grading, the subalgebra $\frak
p:={\mathfrak g}_0\oplus\dots\oplus{\mathfrak g}_k\subset{\mathfrak g}$ is a \textit{parabolic}
subalgebra in the sense of representation theory. Conversely, any
parabolic subalgebra in a semisimple Lie algebra gives rise to a
$|k|$--grading. Therefore, $|k|$--gradings are well understood and can
be completely classified in terms of the structure theory of semisimple
Lie algebras.
Given a Lie group $G$ with Lie algebra ${\mathfrak g}$ one always finds a closed
subgroup $P\subset G$ corresponding to the Lie algebra $\frak p$. The
homogeneous space $G/P$ is a so--called \textit{generalized flag
variety}. Given a smooth manifold $M$ of the same dimension as
$G/P$, a \textit{parabolic geometry} of type $(G,P)$ on $M$ is given
by a principal $P$--bundle $p:\Cal G\to M$ and a Cartan connection
$\omega\in\Omega^1(\Cal G,\frak g)$.
In pioneering work culminating in \cite{Tanaka}, N.~Tanaka has shown
that assuming the conditions of regularity and normality on the
curvature of the Cartan connection, such a parabolic geometry is
equivalent to an underlying geometric structure. These underlying
structures are very diverse, but during the last years a uniform
description has been established, see the overview article
\cite{Srni05}. Examples of these underlying structures include
partially integrable almost CR structures of hypersurface type, path
geometries, as well as generic distributions of rank two in dimension
five, rank three in dimension six, and rank four in dimension seven.
For all these geometries, the problem of constructing differential
operators which are intrinsic to the structure is very difficult.
The BGG machinery developed in \cite{CSS:BGG} and \cite{CD}
offers a uniform approach for this construction, but compared to the
procedure of section \ref{4} some changes have to be made. One again
has a grading element $E$ which leads to an eigenspace decomposition
$\Bbb W=\Bbb W_0\oplus\dots\oplus\Bbb W_N$ of any finite dimensional
irreducible representation of ${\mathfrak g}$. As before, we have
${\mathfrak g}_i\cdot\Bbb W_j\subset \Bbb W_{i+j}$. Consequently, this
decomposition is only invariant under a subgroup $G_0\subset P$ with
Lie algebra ${\mathfrak g}_0$, but each of the subspaces $\Bbb
W_i\oplus\dots\oplus\Bbb W_N$ is $P$--invariant. The theory of tractor
bundles and tractor connections works in this more general setting
without changes, see \cite{TAMS}.
Via the Cartan connection $\omega$, the tangent bundle $TM$ can be
identified with $\Cal G\times_P(\frak g/\frak p)$ and therefore
$T^*M\cong\Cal G\times_P(\frak g/\frak p)^*$. Now the annihilator of
$\frak p$ under the Killing form of ${\mathfrak g}$ is the subalgebra $\frak
p_+:={\mathfrak g}_1\oplus\dots\oplus{\mathfrak g}_k$. For a tractor bundle $\Cal W=\Cal
G\times_P\Bbb W$, the bundles of $\Cal W$--valued forms are therefore
associated to the representations $\Lambda^k\frak p_+\otimes\Bbb W$.
Since we are now working with the nilpotent Lie algebra $\frak p_+$
rather than with an Abelian one, we have to adapt the definition of
$\partial^*$. In order to obtain a differential, we have to add terms
which involve the Lie bracket on $\frak p_+$. The resulting map
$\partial^*$ is $P$--equivariant, and the quotients
$\operatorname{ker}(\partial^*)/\operatorname{im}(\partial^*)$ can be computed as representations
of ${\mathfrak g}_0$ using Kostant's theorem. As far as $\partial$ is concerned,
we have to identify $\frak g/\frak p$ with the nilpotent subalgebra
${\mathfrak g}_-:={\mathfrak g}_{-k}\oplus\dots\oplus{\mathfrak g}_{-1}$. Then we can add terms
involving the Lie bracket on ${\mathfrak g}_-$ to obtain a map $\partial$ which
is a differential. Like the identification of ${\mathfrak g}/\frak p$ with
${\mathfrak g}_-$, the map $\partial$ is not equivariant for the $P$--action but
only for the action of a subgroup $G_0$ of $P$ with Lie algebra ${\mathfrak g}_0$.
The $P$--equivariant map $\partial^*$ again induces vector bundle
homomorphisms on the bundles of $\Cal W$--valued differential forms.
We can extend the normal tractor connection to the covariant exterior
derivative $d^{\Cal W}$. As in \ref{4.4}, the lowest homogeneous
component of $d^{\Cal W}$ is tensorial and induced by $\partial$ which
is all that is needed to get the procedure outlined in \ref{4.4}
going. Also the results for structures which are locally isomorphic
to $G/P$ discussed in \ref{4.4} extend to general parabolic
geometries.
The question of analogs of the prolongation procedure from section
\ref{3} for arbitrary parabolic geometries has not been completely
answered yet. It is clear that some parts generalize without problems.
For other parts, some modifications will be necessary. In particular,
the presence of non--trivial filtrations of the tangent bundle makes
it necessary to use the concept of weighted order rather than the
usual concept of order of a differential operator and so on. Research
in this direction is in progress.
\section{Introduction}
Calorimetric methods are the most accurate way to measure the energy of high-energy nuclei, and they are widely applied both in accelerator experiments and in cosmic-ray studies \cite{1}.
Most calorimetric energy measurements are based on the full absorption of a particle's energy in a certain volume of material.
The technical embodiment of modern ionization calorimeters varies, but the idea remains the same:
the primary particle enters a dense material (for example, iron or lead), in which numerous nuclear and electromagnetic interactions occur.
These interactions give rise to a cascade of secondary particles. If the depth of the material is sufficient, all the kinetic energy of the primary particle is transferred to the cascade of secondary particles, which then lose it through ionization.
To measure the characteristics of the cascade, the dense material is sandwiched with dedicated detectors. From the signals of these detectors the cascade curve is formed: the dependence of the number of particles in the cascade on the penetration depth of the cascade in the calorimeter. If the maximum of the cascade curve is measured, the energy of the primary particle can be determined. The main drawback of such a measurement is the mass of the installation, since the calorimeter must be deep enough for the full cascade curve to develop. This considerably complicates the use of such a device in space applications.
The weight can be reduced by using a thin calorimeter.
To determine the energy of a primary particle in a thin calorimeter, the formation of the whole cascade curve is not required; registering its beginning is sufficient. The energy is determined by measuring the number of particles in the cascade, since the number of particles at a given depth of cascade development is proportional to the energy of the primary particle.
The problem of measuring the energy of a primary particle thus reduces to the solution of an inverse problem, by simulating the development of the cascade process on the basis of modern knowledge of the elementary interaction.
In the NUCLEON project \cite{2}, the weight of the equipment is reduced by using kinematic methods to determine the energy of the primary particle. This technique is based on registering the scattering angles of the secondary particles born in the interaction of the projectile nucleus with a target nucleus.
However, using only kinematic methods leads to significant uncertainties in the energy determination. Therefore a combined approach has been proposed: to measure not only the ``width'' (spatial density) of the cascade of secondary particles, but also their number, i.e.\ to unite the kinematic method with the method of a thin calorimeter.
The authors named this technique KLEM (Kinematic Lightweight Energy Meter).
Calculations and test experiments at the accelerator have shown that the accuracy of the energy determination is about 50$\%$, taking into account an a priori spectrum of cosmic rays.
Such low accuracy is caused by the strong dependence of the energy measurement on fluctuations in the development of the cascade process and on the mass of the primary nucleus.
The influence of these fluctuations on the measured energy can be reduced substantially by using correlation methods in the analysis of the cascade development, which raises the accuracy of the energy measurement considerably.
The approach of correlation curves has proved itself in the analysis of extensive air showers \cite{3}.
A modified version of this technique, exploiting the universality of cascade development, is applied to the thin calorimeter in the present work.
\section{Method of correlation curves}
The method has been developed and tested, on the basis of computer calculations, for the problem of measuring the energy of high-energy particles with a thin calorimeter.
Simulation of the cascade processes initiated by primary particles of various masses and energies has been performed with the software package CORSIKA QGSJET \cite{4}.
In Figure 1 (left), cascade curves of extensive air showers produced by iron nuclei of various primary energies are presented. As seen from the Figure, the uncertainty of energy measurements with a thin calorimeter is first of all connected with fluctuations in cascade development. The cascade curves fluctuate substantially and practically merge (become inseparable) at small values of the penetration depth $d$. This prevents their use for determining the energy of a primary particle with a thin calorimeter, i.e.\ on the basis of a limited number of measurements on the ascending branch of a cascade curve.
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.48\textwidth,angle=0,clip]{fig1a.eps}
\includegraphics*[width=0.48\textwidth,angle=0,clip]{fig1b.eps}
\caption{\label{fig1}
Cascade (left) and correlation (right) curves from interactions of iron nuclei of energies
$10^{14}$eV, $10^{15}$eV, $10^{16}$eV, $10^{17}$eV with nuclei of atoms of air.
}
\end{center}
\end{figure}
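The qualitative behaviour of such cascade curves can be illustrated with a toy longitudinal profile. The sketch below is ours, not part of the paper: it uses the standard Gaisser--Hillas parametrization with invented parameter values rather than the CORSIKA output underlying Figure~1, and shows that the difference $dN$ of particle numbers at two levels separated by an absorber layer changes sign near the shower maximum.

```python
import numpy as np

def gaisser_hillas(X, Nmax, X0, Xmax, lam=70.0):
    """Toy longitudinal shower profile N(X), X in g/cm^2
    (Gaisser-Hillas form; parameter values are illustrative only)."""
    t = (X - X0) / (Xmax - X0)
    return Nmax * t ** ((Xmax - X0) / lam) * np.exp((Xmax - X) / lam)

X = np.linspace(50.0, 1000.0, 500)        # penetration depth, g/cm^2
N = gaisser_hillas(X, Nmax=7e5, X0=0.0, Xmax=600.0)

# Difference of particle numbers across a 40 g/cm^2 absorber layer:
dN = gaisser_hillas(X + 40.0, 7e5, 0.0, 600.0) - N

# dN > 0 on the ascending branch and dN < 0 beyond the maximum,
# so the zero of dN marks the shower maximum independently of Nmax.
```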
In the technique of correlation curves, internal correlations that are independent of fluctuations in cascade development are analyzed.
A study of the various parameters characterizing the development of the cascade process revealed that the accuracy of the energy determination can be increased substantially by using the correlation curve: the dependence of the number of particles at the observation level on the difference of the numbers of particles at two levels separated by a layer of absorber.
In Figure 1 (right), these correlation curves are presented for the same interactions as in Figure 1 (left).
As seen from the Figure, the correlation curves present a much more ordered picture: fluctuations of the ascending branch of a correlation curve are far less pronounced than for the cascade curves.
The second most important factor influencing the accuracy of the energy measurement is the uncertainty of the primary nucleus.
In Figure 2 (left), average cascade curves for the interactions of iron nuclei and protons of fixed energy $10^{16}$eV with nuclei of air atoms are presented.
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.48\textwidth,angle=0,clip]{fig2a.eps}
\includegraphics*[width=0.48\textwidth,angle=0,clip]{fig2b.eps}
\caption{\label{fig2}
Average cascade (left) and correlation (right) curves for interactions of primary iron nuclei and protons at energy
$10^{16}$ eV with nuclei of air atoms.
}
\end{center}
\end{figure}
The cascade curves produced by a proton are shifted towards greater penetration depths in comparison with the Fe cascade curves. This leads to unequal energy determination for different nuclei, for the following reason.
Most experimental groups determine the primary energy $E$ from the measured number of secondary particles $N_e$ at the observation level, using the dependence:
\begin{center}
$N_e=\alpha E^\beta $
\end{center}
where $\alpha$, $\beta$ are parameters which depend on the depth of penetration and the mass of the primary particle. Statistically, the equation works correctly.
However, on the ascending branch of the cascade curve the energy is overestimated for quickly developing cascades and underestimated for slowly developing cascades.
This leads to an underestimated energy for proton cascades and an overestimated energy for Fe cascades.
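As an arithmetic illustration of this dependence, inverting $N_e=\alpha E^\beta$ gives the energy estimate. The calibration constants below are invented for illustration only; in practice $\alpha$ and $\beta$ must be fitted from simulations for a given depth and primary nucleus.

```python
# Hypothetical calibration constants -- for illustration only.
alpha, beta = 1.8e-5, 0.95

def energy_estimate(n_e):
    """Invert N_e = alpha * E**beta for the primary energy E (eV)."""
    return (n_e / alpha) ** (1.0 / beta)

# A shower with N_e = 10^9 particles at the observation level:
print(f"estimated E = {energy_estimate(1e9):.2e} eV")
```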
Using correlation curves allows a substantial reduction of the energy-determination errors connected with the uncertainty of the primary nucleus.
Average correlation curves are presented in Figure 2 (right). As seen from the Figure, the correlation curves for different nuclei practically coincide: the maximum points of the proton and iron correlation curves are both shifted to the single point $dN=0$, which is independent of the penetration depth.
Summing up this section, we underline once again: the accuracy of energy determination with a thin calorimeter can be increased substantially by using the correlation curve, i.e.\ the dependence of the number of particles at the observation level on the difference of the numbers of particles at two levels separated by an absorber layer.
The next important question concerns the choice of an optimum thickness of the absorber layer.
\section{Discretization parameters of cascade curves}
The discretization parameters of cascade curves are important for determining the optimum density of the material and for minimizing the number of layers of a thin calorimeter. In Figure 3, correlation curves for various values of the absorber-layer thickness $dN$ are presented.
\begin{figure}[t]
\begin{center}
\includegraphics*[width=0.48\textwidth,angle=0,clip]{fig3a.eps}
\includegraphics*[width=0.48\textwidth,angle=0,clip]{fig3b.eps}
\includegraphics*[width=0.48\textwidth,angle=0,clip]{fig3c.eps}
\includegraphics*[width=0.48\textwidth,angle=0,clip]{fig3d.eps}
\caption{\label{fig3}
Correlation curves (Fe $10^{14}$eV, $10^{15}$eV, $10^{16}$eV, $10^{17}$eV) for different values of the layer thickness
($dN$ = 20, 40, 60, 100 g/cm$^2$)}
\end{center}
\end{figure}
As seen from the Figure, increasing the thickness of the absorber layer increases the accuracy of the energy determination. However, increasing the thickness also increases the weight of the installation.
Thus, the choice of the layer thickness depends on the conditions of the concrete experiment.
\section{Conclusion}
A technique has been developed for measuring the energy of primary cosmic particles on the basis of a correlation study of the development of the cascade process in consecutive layers of a thin calorimeter.
The technique is based on the correlation analysis of the dependence of the number of secondary particles at the observation level on the difference of the numbers of particles at two levels separated by an absorber layer. It is shown that the use of correlation curves substantially reduces the errors in determining the energy of the primary particle connected with the uncertainty of the primary nucleus and with fluctuations of the cascade development.
For a possible technical realization of a thin calorimeter it will further be necessary to solve problems connected with the calculation of the installation response, the optimization of the information gathering, etc.
However, the basic result has been obtained: on the basis of computer simulation, correlation parameters have been found which allow the energy of a primary nucleus to be determined on the ascending branch of a cascade curve.
This work is supported by MES RK grant N1276/GF2.
\section{The Carmen Sandiego game}
The work described here\footnote{The ideas of the present paper were formulated in Dec. 2013 - Jan. 2014, and discussed (in varying detail) in early 2014 with Scott Aaronson, Laci Babai, Sasha Razborov, and Avi Wigderson.} concerns the following ``game'' involving three parties: Alice, Bob, and Carmen.
Carmen Sandiego is a notorious globe-trotting criminal (created by Br{\o}derbund Software, and now owned by The Learning Company) whose misdeeds inadvertently raise awareness of geography and world cultures. In her latest caper, Carmen makes a single visit to each of $n > 1$ world cities $c_1, \ldots, c_n$, in some order of her choosing. Alice and Bob are two sleuths cooperating to catch Carmen; they know the identities of the $n$ cities but not the order in which they will be visited.
Bob is hot on the trail and follows Carmen directly from city to city but, due to severe funding restrictions, can't communicate directly with Alice as he travels. Instead, he is only able to leave a single 0/1 ``clue-bit'' behind in each city, based on what he has observed of Carmen's travels so far. (He chooses his clue-bits based upon some deterministic algorithm or ``strategy'' STRAT$_B$ agreed upon with Alice in advance.) To make matters worse, when Bob follows Carmen to the final city visited on her tour (let it be $c_{j}$), she captures him and plants the clue-bit for this city herself, with an intention to confuse. Carmen then ``hides out'' with her captive somewhere in $c_j$.
Afterwards Alice, having witnessed none of these events, visits all $n$ cities and observes the clue-bit $b_i$ left in each city $c_i$. Alice then prepares a list of ``suspect'' cities $S \subseteq [n]$ consisting of all cities that could potentially be Carmen's final hiding place, based on Alice's view of $b = (b_1, \ldots, b_n)$ and her knowledge of the algorithm followed by Bob.
The ``cost'' of this outcome is defined as the set size $|S|$---indicating the number of cities that might have to be thoroughly searched to uncover and arrest Carmen. The ``complexity'' of STRAT$_B$ is defined as the maximum cost over all possible outcomes, ranging over possible actions by Carmen.
The above game was independently defined and studied by Gilmer, Kouck\'{y}, and Saks~\cite{GKS_itcs}, who also established further results about the game (not proved by us) and published their results in ITCS `15. (The authors of~\cite{GKS_itcs} describe this game with inessential narrative differences---no ``Carmen'' is present, and their Alice and Bob play different roles. We choose to follow our original description.)
This author conjectured that any STRAT$_B$ used by Bob has complexity at least $\alpha \cdot n^\beta$, for some absolute constants $\alpha, \beta > 0$. This still seems plausible, although as we will discuss,~\cite{GKS_itcs} have disproved some stronger versions of this statement; their results also call the original statement's likelihood into question. We also note that Szegedy~\cite{Sze15} has shown a Bob-strategy with complexity $O(n^{0.4732})$, improving on a $.8\sqrt{n}$ upper bound achieved in~\cite{GKS_itcs}.
\paragraph{A more formal setup:} We will give definitions and then explain them. Fix $n > 1$ as before, and let $S_n$ be the set of all permutations $\pi = (\pi(1), \ldots, \pi(n))$ over $[n]$.
A \emph{Bob-strategy} STRAT$_B$ is defined as a family of functions $F_t: S_n \rightarrow \{0, 1\}$ for $t = 1, 2, \ldots, n-1$. We always require that each $F_t$ is \emph{$t$-restricted}, meaning that $F_t(\pi)$ is a function of the first $t$ values $\pi(1), \ldots, \pi(t)$.
With reference to a fixed Bob-strategy STRAT$_B$, for a permutation $\pi \in S_n$ and a bit $z \in \{0, 1\}$ we define $b(\pi, z) \in \{0, 1\}^n$ as the string whose $j^{th}$ coordinate is given by
$$
b_j \ := \
\begin{cases}
F_t(\pi), \ \text{if } j = \pi(t) \text{ for some $t < n$,} \\
z, \ \text{\ \ \ \ \ \ if } j = \pi(n) .\\
\end{cases}
$$
For $b \in \{0, 1\}^n$, we define the set $S = S(b) \subseteq [n]$ of ``suspect cities'' as
\[ S \ := \ \{i: \exists \rho \in S_n, u \in \{0, 1\} \text{ such that } \rho(n) = i \text{ and } b(\rho, u) = b \} \ . \]
The \emph{cost} of strategy STRAT$_B$ on pair $(\pi, z)$ is defined as $| S(b^\star)| $, where $b^\star := b(\pi, z)$.
The \emph{complexity}, $\mathrm{compl}($STRAT$_B)$, is defined as the maximum cost over all pairs $(\pi, z)$ as above. The complexity of the Carmen Sandiego communication game for $n$ cities, denoted $\mathrm{compl}(\mathcal{G}_{CS, n})$, is defined as
\[ \mathrm{compl}(\mathcal{G}_{CS, n}) \ := \ \min\ \{ \mathrm{compl}(\mathrm{STRAT}_B)\} \ , \]
with the minimum ranging over all Bob-strategies STRAT$_B$ for $n$ cities.
The intended ``interpretation'' of these definitions is as follows:
\begin{itemize}
\item We interpret each permutation $\pi$ as a possible itinerary for Carmen, where $\pi(i)$ indicates the index of the $i^{th}$ city visited.
\item The function $F_t$ tells Bob which clue-bit to leave at the $t^{th}$ city on his travels, the city $c_{\pi(t)}$; this clue must be determined based only on what he has seen of Carmen's itinerary so far---that is, based on the values $\pi(1), \ldots, \pi(t)$. As Bob is captured before he can choose a final clue, his $n^{th}$-step rule becomes irrelevant and is omitted.
\item The bit $b_j$ is the clue that Alice finds in the $j^{th}$ city $c_j$, if Carmen followed the itinerary $\pi$ and if she chose $z$ as the ``confusing'' clue to put in her final hideout-city $c_{\pi(n)}$.
\end{itemize}
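To make these definitions concrete, the following sketch (ours, not part of the original analysis) brute-forces $\mathrm{compl}(\mathcal{G}_{CS, n})$ for $n = 3$ by enumerating all $2^9$ deterministic Bob-strategies; in this computation the minimum comes out to $2$, i.e.\ already with three cities Carmen can always force at least two suspect cities.

```python
from itertools import permutations, product

n = 3
perms = list(permutations(range(n)))

def clue_string(F, pi, z):
    """b(pi, z): F maps each proper prefix of the itinerary to the
    clue-bit Bob leaves in the city he has just reached."""
    b = [None] * n
    for t in range(n - 1):
        b[pi[t]] = F[pi[:t + 1]]
    b[pi[-1]] = z                     # Carmen's confusing final clue
    return tuple(b)

def complexity(F):
    """max |S(b)| over all realizable clue-strings b."""
    suspects = {}
    for pi in perms:
        for z in (0, 1):
            suspects.setdefault(clue_string(F, pi, z), set()).add(pi[-1])
    return max(len(s) for s in suspects.values())

# Every t-restricted Bob-strategy is a 0/1 labelling of itinerary prefixes.
prefixes = sorted({pi[:t] for pi in perms for t in (1, 2)})
best = min(complexity(dict(zip(prefixes, bits)))
           for bits in product((0, 1), repeat=len(prefixes)))
print("compl(G_CS,3) =", best)
```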
\section{Relation to a question on the hypercube}
Next we recall the basic notion of induced subgraphs. If $G = (V, E)$ is an undirected graph and $K \subseteq V$, the \emph{induced subgraph} $G[K]$ is defined as the graph that has vertex set $K$ and edge set $E' := \{(u, v) \in E: u , v \in K\}$. The next conjecture considers the case where $G = H_n = \{0, 1\}^n$ is the Boolean hypercube, whose vertices $x, x'$ are adjacent iff $x, x'$ differ in exactly one coordinate.
Chung, F\"{u}redi, Graham, and Seymour~\cite{CFGS} raised the question of whether the following conjecture holds:
\begin{cfgs_conjecture}\label{cfgs-conj} There are absolute constants $a, b > 0$ such that the following holds for all $n > 0$: if $K \subseteq H_n$ is any set of size $|K| > 2^{n-1}$, then there is a $y \in K$ whose degree within $H_n[K]$ is at least $a \cdot n^b$.
\end{cfgs_conjecture}
The Hypercube Induced-Degree Conjecture was shown by Gotsman and Linial~\cite{GL92} to imply the well-known Sensitivity Conjecture, which asserts that two measures of Boolean function complexity, the \emph{sensitivity} $s(f)$ and \emph{block sensitivity} $bs(f)$ (the latter defined by Nisan in~\cite{Nis91}), are always within a polynomial factor of each other. The question of the Sensitivity Conjecture was raised in the conference version of Nisan's paper~\cite{Nis89}. For background and variations on the Sensitivity Conjecture (including several equivalent conjectures) we recommend the survey of Hatami, Kulkarni, and Pankratov~\cite{HKP11}.
We prove:
\begin{theorem}\label{thm:cs_to_cfgs} Suppose that $K \subseteq H_n$ is of size $|K| > 2^{n-1}$ and yet every $y \in K$ has degree at most $D > 0$ within $H_n[K]$. Then, $\mathrm{compl}(\mathcal{G}_{CS, n}) \leq D$.
\end{theorem}
As an immediate consequence, we see that if $\mathrm{compl}(\mathcal{G}_{CS, n})\geq n^{\Omega(1)}$ then the Hypercube Induced-Degree Conjecture is true. The authors of~\cite{GKS_itcs} showed (independently) that $\mathrm{compl}(\mathcal{G}_{CS, n})\geq n^{\Omega(1)}$ implies the Sensitivity Conjecture. Their proof technique is different and involves analyzing representations of Boolean functions as real polynomials.
\begin{proof}[Proof of Theorem~\ref{thm:cs_to_cfgs}] We will define a Bob-strategy STRAT$_B = (F_1, \ldots, F_{n-1})$ based on the fixed set $K$ from our starting assumption. First, for any $w \in \{0, 1, *\}^n$, we will use
\[ K(w) \ := \ \{y \in K: \text{for every $i$ such that $w_i \in \{0, 1\}$, we have $y_i = w_i$}\} \]
to denote the set of strings in $K$ ``agreeing with'' $w$.
For $t \geq 0$ we will let $w^t = w^t(\pi) \in \{0, 1, *\}^n$ denote the vector of clue-bits left by Bob in $c_1, \ldots, c_n$ after $t$ steps of Carmen's itinerary $\pi$; here, we take $w^t_\ell = *$ if Carmen has not visited $c_\ell$ in the first $t$ steps, that is, if $\pi^{-1}(\ell) > t$. Thus $w^0 = *^n$.
For $w = (w_1, \ldots, w_n) \in \{0, 1, *\}^n$ and $u \in \{0, 1\}$ we define $w[i \leftarrow u]$ as $w$ with the $i^{th}$ coordinate set to $u$. We define $F_t$ inductively for $t \geq 1$ by taking
\[ F_t(\pi) \ = \ F_t(\pi(1), \ldots, \pi(t)) \ := \ \arg \max_{u \in \{0, 1\}} \left| K\left(w^{t - 1}[\pi(t) \leftarrow u]\right) \right| \ . \]
That is, we set the $t^{th}$ clue-bit (placed at position $\pi(t)$) in such a way as to maximize the number of strings in $K$ agreeing with $w^t = w^{t - 1}[\pi(t) \leftarrow u]$, subject to the inductive setting of the previously chosen clue-bits (which determine $w^{t -1}$). Any ties above are broken by an arbitrary fixed rule---in favor of $u = 0$, for concreteness.
This Bob-strategy clearly has the ``$t$-restricted'' property we require for each $t \leq n-1$. Let us analyze its behavior. Fixing a Carmen itinerary $\pi \in S_n$ and a final bit $z \in \{0, 1\}$ determines the strings $w^0, \ldots, w^n$. We have $|K(w^0)| = |K| > 2^{n-1}$ and we claim that $|K(w^t)| > 2^{n - 1 - t}$ for each $t \in [n-1]$. This follows easily by induction on $t$, since by our choice of $t^{th}$ clue-bit we have $|K(w^{t})| \geq .5 |K(w^{t-1})| $.
Thus, $|K(w^{n-1})| > 2^0 = 1$. On the other hand, $K(w^{n-1})$ is contained in a subcube of dimension 1, namely the set of Boolean strings agreeing with $w^{n-1}$. Thus $|K(w^{n-1})| = 2$. It follows that no matter Carmen's choice of final ``confusing clue'' $z$, the fully determined string $w^n = b = b(\pi, z)$ must lie in $K$.
Letting $b^{\oplus i}$ denote $b$ with its $i^{th}$ bit flipped, we conclude that the final index $\ell = \pi(n)$ must be such that $b$ and $b^{\oplus \ell}$ both lie in $K$. It follows that each city index $\ell' $ in the ``suspect'' set $S = S(b)$ defines a distinct neighbor of $b$ within the induced subgraph of $H_n$ on $K$. But by our assumption on $K$, there are at most $D$ such neighbors. Thus we must always have $|S| \leq D$. It follows that $\mathrm{compl}(\mathcal{G}_{CS, n}) \leq \mathrm{compl}(\mathrm{STRAT}_B) \leq D$, as claimed.
\end{proof}
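The halving strategy in the proof is easy to run. The sketch below is our illustration (the set $K$ is just one example with $|K| > 2^{n-1}$): it implements Bob's greedy rule and checks that the final clue-string always lands in $K$, whatever itinerary and confusing bit Carmen chooses.

```python
from itertools import permutations, product

n = 4
# Any K with |K| > 2^(n-1); here: all strings starting with 0, plus one more.
K = {x for x in product((0, 1), repeat=n) if x[0] == 0} | {(1, 1, 1, 1)}
assert len(K) > 2 ** (n - 1)

def count_agreeing(w):
    """|K(w)|: members of K agreeing with w on its non-* coordinates."""
    return sum(all(w[i] == '*' or w[i] == y[i] for i in range(n)) for y in K)

def greedy_clues(pi, z):
    """Bob's halving strategy from the proof, then Carmen's final bit z."""
    w = ['*'] * n
    for t in range(n - 1):
        # choose the clue-bit keeping the larger half of K(w); ties -> 0
        u = max((0, 1), key=lambda v: (count_agreeing(
            w[:pi[t]] + [v] + w[pi[t] + 1:]), -v))
        w[pi[t]] = u
    w[pi[-1]] = z
    return tuple(w)

# |K(w^t)| at least halves at each step, so |K(w^{n-1})| = 2 and the
# completed string lies in K no matter what Carmen does:
for pi in permutations(range(n)):
    for z in (0, 1):
        assert greedy_clues(pi, z) in K
```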
We note that in the strategy STRAT$_B$ described above, Bob's choice of clue-bit at step $t$ is fully determined by the string $w^{t - 1} \in \{0, 1, *\}^n$ describing the locations and values of the clue-bits he has already fixed. Thus to prove the Hypercube Induced-Degree Conjecture, it suffices to lower-bound the complexity of Bob-strategies having this restricted form. The same observation (toward proving the Sensitivity Conjecture) was made independently in~\cite{GKS_itcs}.
\section{An average-case version of the problem}
Now suppose we assume that Carmen's itinerary $\pi \in S_n$ is a uniformly distributed random variable, so that in particular, the value $\pi(n)$ is uniform over $[n]$. Also assume that her final ``confusing clue'' bit $z$ is uniform and independent of $\pi$. In this setting, it is tempting to expect that no matter which strategy STRAT$_B$ Bob uses for his clues, Alice will have significant expected uncertainty about $\pi(n)$, even after seeing the clue-string $b = b(\pi, z)$.
We conjectured that under this experiment, for any fixed STRAT$_B$ the conditional uncertainty of $\pi(n)$ satisfies $H( \pi(n) | b ) \geq c \log n$,
for some absolute constant $c > 0$, where $H(\cdot|\cdot)$ denotes the conditional Shannon entropy. This conjecture would imply and strengthen the conjecture that $\mathrm{compl}(\mathcal{G}_{CS, n}) \geq n^{\Omega(1)}$, and until learning of the work of Gilmer, Kouck\'{y}, and Saks~\cite{GKS_itcs}, this author considered it likely. However, these researchers independently considered this entropic version of the conjecture, and managed to refute it! They also showed that the worst-case complexity $\mathrm{compl}(\mathcal{G}_{CS, n})$ is sub-polynomial in $n$ in a model where Bob is allowed to leave clues from even the slightly larger alphabet $\Sigma = \{0, 1, 2\}$. Their results indicate the delicate nature of the question, and cast some doubt on whether $\mathrm{compl}(\mathcal{G}_{CS, n}) \geq n^{\Omega(1)}$ in the Boolean model. These authors do, however, raise an ``average-cost'' strengthening of this hypothesis that remains open (see~\cite{GKS_itcs}).
\section{The Carmen Sandiego search problem with query bounds}
For the Carmen Sandiego game defined as before, we next consider a different model for Alice's behavior. In this model, Alice doesn't have enough time or money to visit every city. We model this formally by requiring that Alice has only \emph{query-bounded} access to the clue string $b = b(\pi, z) \in \{0, 1\}^n$ defined by $\pi, z$, and by the Bob-strategy STRAT$_B$. Recall that for $0 < t \leq n$, a \emph{randomized $t$-query algorithm} for Alice (which we will denote as $R_A$) is a probability distribution over depth-$t$ decision trees $\{R_{A, r}\}_{r \in \Omega}$ on the $n$ input variables $b_1, \ldots, b_n$.
In the query-bounded setting we also focus on a modified success criterion for Alice, in which she is no longer trying to give a short list of cities where Carmen might be hiding; now her only goal is to \emph{make a query} to the bit $b_{\pi(n)} = z$ planted by Carmen in the final city $c_{\pi(n)}$. (In this variant of the model, we imagine that this is enough for Alice to detect and thwart Carmen.) Formally, let VISITS$_A \subseteq [n]$ denote the random variable giving the $t$-subset of coordinates queried by Alice; we say that an execution of $R_A$ on $b = b(\pi, z)$ is \emph{search-successful} if $\pi(n) \in $ VISITS$_A$. Note that here, there is no requirement for Alice to \emph{identify} which of her queries goes to the $\pi(n)^{th}$ coordinate (or even to decide if success has occurred).
We conjecture:
\begin{conjecture}\label{conj:catch_hard} There are $\alpha, \beta > 0$ such that the following holds for any Bob-strategy STRAT$_B$ and any randomized algorithm $R_A$ for Alice making $t \leq \alpha \cdot n^\beta $ queries.
If $(\pi, z)$ are drawn uniformly from $S_n \times \{0, 1\}$, and $\mathbf{b} = b(\pi, z)$, then
\[ \Pr_{\pi, z, R_A}[R_A \text{ is search-successful on }\mathbf{b}] \ < \ 1/3 \ . \]
\end{conjecture}
By an argument similar to that of Theorem~\ref{thm:cs_to_cfgs}, we show that this conjecture's truth would imply another conjecture of Aaronson, Ambainis, Balodis, and Bavarian~\cite{AABB}, about the query complexity of computing the Parity function with high confidence on an arbitrary set of strictly more than half of all possible inputs.
\begin{wp_conjecture} There are absolute constants $\alpha, \beta > 0$ such that the following holds for all $n > 0$. Suppose $R$ is a randomized algorithm on $n$-bit Boolean input strings making $t \leq \alpha \cdot n^\beta$ queries to its inputs. Let $K \subseteq \{0, 1\}^n$ be the set of inputs on which $R$ computes the Parity function PAR $=$ PAR$_n$ with at least $2/3$ success probability,
\[ K \ := \ \left\{ y: \ \Pr[R(y) = PAR(y)] \ \geq \ 2/3 \right\} \ . \]
Then we must have $|K| \leq 2^{n-1}$.
\end{wp_conjecture}
The authors of~\cite{AABB} show that the Hypercube Induced-Degree Conjecture implies the Weak Parity Conjecture. Our definition and study of the Carmen Sandiego game was initially conceived in an attempt to prove their Weak Parity Conjecture, through the connection given in the next result.
\begin{theorem}\label{thm:connect} Let $0 < t \leq n$. If $R$ is a $t$-query randomized algorithm on $n$ input bits, and $K \subseteq \{0, 1\}^n$ is a set of strictly more than $2^{n-1}$ inputs $y$ on which $\Pr[R(y) = PAR(y)] \geq 2/3$, then there is a choice of Bob-strategy STRAT$_B$ such that, for the Alice query strategy $R_A := R$, the following holds. If $(\pi, z)$ are drawn uniformly from $S_n \times \{0, 1\}$, and $\mathbf{b} = b(\pi, z)$, then we have
\[ \Pr[R_A(\mathbf{b})\text{ is search-successful}] \ \geq \ 1/3 \ . \]
\end{theorem}
As a consequence of Theorem~\ref{thm:connect}, we see that Conjecture~\ref{conj:catch_hard} implies the Weak Parity Conjecture. This work also has a simple takeaway message which can be studied even without talk of Alice, Bob, and Carmen: for any set $K \subseteq \{0, 1\}^n$ of more than half of all strings, we propose the random variable $\mathbf{b} = b(\pi, z)$ used in the proof of Theorem~\ref{thm:connect} as a candidate hard distribution (supported entirely on $K$) for query-bounded algorithms attempting to compute the Parity function. This seems to us a promising approach to the Weak Parity Conjecture.
\begin{proof}[Proof of Theorem~\ref{thm:connect}] We use the same Bob-strategy STRAT$_B$ (defined relative to the set $K$) as in the proof of Theorem~\ref{thm:cs_to_cfgs}. Letting the random variable $\mathbf{b} = b(\pi, z)$ be as above, define the modified string $\mathbf{b}' := b(\pi, \overline{z})$ with flipped final clue-bit $\overline{z}$. Now $\mathbf{b}$ and $\mathbf{b}'$ are identically distributed (since $z$ is uniform and independent of $\pi$), so considering the Boolean output of the query algorithm $R$, we have
\begin{equation}\label{eq:11} \Pr[R(\mathbf{b})= PAR(\mathbf{b})] \ = \ \Pr[R(\mathbf{b}')= PAR(\mathbf{b}')] \ . \end{equation}
Also, for a fixed choice of randomness $r$ of $R$ (determining a $t$-query decision tree $R_r$ applied to the input), the execution of Alice's query algorithm $R_{r}(\mathbf{b})$ is search-successful if and only if $R_{r}(\mathbf{b}') $ is search-successful.
On the other hand, if $R_{r}(\mathbf{b})$ is not search-successful, then its view of $\mathbf{b}$ is identical to that of $\mathbf{b}'$ in the execution $R_r(\mathbf{b}')$, since $\mathbf{b}, \mathbf{b}'$ are identical outside of the $\pi(n)^{th}$ position. Thus the Boolean outputs of these two computations are identical. It also clearly holds that $PAR(\mathbf{b}) \neq PAR(\mathbf{b}')$, so at most one of the computations $R_r(\mathbf{b}), R_r(\mathbf{b}')$ can correctly output the Parity function of its input unless $R_{r}(\mathbf{b})$ is search-successful. Thus in terms of indicator random variables, we have
\begin{equation}\label{eq:22} \mathbf{1}[R(\mathbf{b})= PAR(\mathbf{b})] \ + \ \mathbf{1}[R(\mathbf{b}')= PAR(\mathbf{b}')] \ \leq \ 1 + \mathbf{1}[ R(\mathbf{b})\text{ is search-successful}] \ . \end{equation}
Using Eqs.~(\ref{eq:11}) and~(\ref{eq:22}), we find that
\begin{align*} \Pr[R(\mathbf{b})= PAR(\mathbf{b})] \ &= \ .5\left( \ \mathbb{E}\left[ \mathbf{1}[R(\mathbf{b})= PAR(\mathbf{b})] \ + \ \mathbf{1}[R(\mathbf{b}')= PAR(\mathbf{b}')] \right] \ \right) \\ &\leq \ .5(1 + \Pr[R(\mathbf{b})\text{ is search-successful}]) \ . \end{align*}
Now $[\mathbf{b} \in K]$ always holds, by the design of STRAT$_B$ and our analysis from the proof of Theorem~\ref{thm:cs_to_cfgs} (using the fact that $|K| > 2^{n-1}$). By our other initial assumption on $K$ in the present context, we then have $ \Pr[R(\mathbf{b})= PAR(\mathbf{b})] \geq 2/3$. Combining and rearranging shows that $\Pr[R_A(\mathbf{b})\text{ is search-successful}] \geq 1/3$ as needed, proving Theorem~\ref{thm:connect}.
\end{proof}
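The pairing argument behind Eq.~(\ref{eq:22}) can be illustrated numerically in a toy setting (the deterministic query algorithm and all parameters below are our own, not from the paper): when the flipped position is never queried, the outputs on the pair coincide while the parities differ, so exactly one member of each pair is answered correctly.

```python
import random

def PAR(x):
    return sum(x) % 2

def tree_output(x, queries):
    # a toy deterministic t-query "algorithm": it reads the queried
    # positions and guesses the parity of the bits it has seen
    return sum(x[i] for i in queries) % 2

random.seed(0)
n, t = 8, 3
queries = [0, 1, 2]                 # the t positions the tree reads

pair_correct = []
for _ in range(1000):
    x = [random.randint(0, 1) for _ in range(n)]
    j = random.randrange(t, n)      # a position the tree never queries
    xp = x[:]
    xp[j] ^= 1                      # flip one unqueried bit
    out_x = tree_output(x, queries)
    out_xp = tree_output(xp, queries)
    assert out_x == out_xp          # same view, hence same output
    assert PAR(x) != PAR(xp)        # the parities of the pair differ
    pair_correct.append(int(out_x == PAR(x)) + int(out_xp == PAR(xp)))

# exactly one member of each pair is answered correctly, matching
# the non-search-successful case of Eq. (2)
```

Averaging over the pair and rearranging, $\Pr[\text{correct}] \leq \tfrac12(1+\Pr[\text{search-successful}])$, which is exactly the step that yields the $1/3$ bound in the proof.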
\section*{Acknowledgements}
I thank Laci Babai for encouragement to produce this note. I thank Scott Aaronson for sharing an early draft of~\cite{AABB} and for many interesting conversations about the Sensitivity Conjecture and related topics.
\section{Introduction}
\label{intro}
The problem of the existence of the operator with dimension 2, whose contribution to QCD sum rules~\cite{SVZtf79} is proportional to~$1/Q^2$, and the search for the relevant OPE corrections have been pursued for quite a long time \cite{ShortString, ZakharovOther, BodDominEidel2012}.
In the pioneering paper~\cite{ShortString} the concept of short strings, leading to corrections with dimension 2, was suggested.
In the Cornell potential~\cite{Eichten:1974af}
$$
V(r) \approx -\cfrac{4\alpha_s(r)}{3r}+kr
$$
the term~$kr$ describes the string potential (connected to the phenomenon of confinement) at short distances, leading\footnote{Note that in the OPE the first correction to the $e^+e^-$-annihilation cross-section is~$\sim \left\langle G_{\mu\nu}G^{\mu\nu}\right\rangle / Q^4$.}
to the correction~$\sim k/Q^2$. In the paper~\cite{ZakharovOther} the correlation between short strings and the perturbation theory order was established
\footnote{This is analogous to the interplay of the perturbation theory order and higher twists in deep inelastic scattering~\cite{Kataev:2000wn}.}.
The contribution of the operator with dimension 2 to the $e^+e^-$ data was studied later~\cite{BodDominEidel2012, SPIESBERGER:2013jpa} and was found to be compatible with zero within large errors.
Our analysis is based on the use of the Adler function, a simple two-point
correlator which is convenient to compare with experimental data.
We use a large number of experimental datasets and perform an accurate numerical analysis. It consists of constructing the fitting model, calculating the~$R$-ratio and the $D$-function in dispersional form, applying the Borel transform (BT), constructing the sum rule and extracting the OPE coefficients.
The main purpose is to verify whether the operator with dimension 2 exists.
\section{Fits}
\label{sec-fits}
We restrict ourselves to the data with isospin~$I=1$, thereby avoiding strange particles, which are difficult to measure.
While the data are presented as sets of experimental points, for an accurate analysis it is preferable to obtain analytic fits of the experimental data, which are convenient to integrate.
Since there are data on different annihilation channels obtained from different detectors, we fit every channel separately and find a formula for the full cross-section.
The analysis is performed using the data on~$e^+e^-$-annihilation to pion channels: $e^+e^-\to\pi^+ \pi^-$ (CMD and OLYA detectors)~\cite{Barkov85},
$e^+e^-\to 2\pi^+ 2\pi^-$ (M3N, GG2, DM1, CMD, OLYA, DM2, SND, CMD2, SND, BaBar)~\cite{M3N79,GG280,DM182, CMD88,OLYA88,DM291,SND91,CMD299,SND03,CMD204,BaBar05},
$e^+e^-\to \pi^+ \pi^-2\pi^0$ (OLYA, CMD2, SND, DM2, Frascati-ADONE-GAMMA-GAMMA-2)~\cite{All-4pi0, OLYA4pi0, CMD24pi0,SND4pi0,ORSAY-DCI-DM2-4pi0,FRASCATI-ADONE-4pi0},
$e^+e^-\to 3\pi^+ 3\pi^-$ (BaBar)~\cite{BaBar06},
$e^+e^-\to 2\pi^+ 2\pi^- 2\pi^0$ (BaBar)~\cite{BaBar06}.
To describe the data on the squared pion form-factor~$|F_\pi(s)|^2$ as a function of the energy~$\sqrt{s}$ of the reaction~$e^+e^-\to \pi^+ \pi^-$, we use the three-resonance (Gounaris-Sakurai) model~\cite{Barkov85},~\cite{GouSak68}, where the form-factor of each resonance~$V$ is calculated using the Breit-Wigner formula:
$$
F^\text{BW}(s, m_V,\Gamma_V)
= \frac{m_V^2 (1 + d\cdot\Gamma_V/m_V)}{m_V^2-s+f(s, m_V, \Gamma_V) -i\,m_V\ \Gamma_V(s)}\,,\,\text{where} \quad \Gamma_V(s)
= \Gamma_V\,
\left(\frac {k(s)}{k(m_V^2)}\right)^3,
$$
$$
k(s) = \frac{\sqrt{s-4 m_{\pi}^2}}{2}\,, \quad f(s, m_V, \Gamma_V) = \Gamma_V \frac{m_V^2}{k(
m_V^2)^3} [k^2(s) (h(s) - h(m_V^2)) - (s - m_V^2) k^2(m_V^2) h'(m_V^2)],
$$
$$
h(s) = \frac{2}{\pi} \frac{k(s)}{\sqrt{s}} \ln \left( \frac{\sqrt{s} + 2 m_{\pi}}{2 m_{\pi}} \right), \quad
h'(m_V^2) = \left. h'(s)\right|_{s=m_V^2},\quad
F^\text{BW}(0, m_V, \Gamma_V)=1 \, \text{automatically}.
$$
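As a numerical sketch of the Breit-Wigner building block above (our own code; the charged pion mass value and the finite-difference evaluation of $h'$ are our choices; valid for $s>4m_\pi^2$): on resonance $f(m_V^2)=0$ and $\Gamma_V(m_V^2)=\Gamma_V$, so $|F^\text{BW}(m_V^2)|^2=(m_V/\Gamma_V)^2\,(1+d\,\Gamma_V/m_V)^2$, which the code reproduces.

```python
import math

m_pi = 0.13957  # GeV, charged pion mass (our input value)

def k(s):
    return math.sqrt(s - 4*m_pi**2) / 2           # for s > 4*m_pi^2

def h(s):
    return (2/math.pi) * (k(s)/math.sqrt(s)) \
        * math.log((math.sqrt(s) + 2*m_pi) / (2*m_pi))

def hprime(s, eps=1e-6):
    return (h(s + eps) - h(s - eps)) / (2*eps)    # numerical h'(s)

def f(s, mV, GV):
    kV = k(mV**2)
    return GV * mV**2 / kV**3 * (
        k(s)**2 * (h(s) - h(mV**2))
        - (s - mV**2) * kV**2 * hprime(mV**2))

def F_BW(s, mV, GV, d=0.384):
    Gs = GV * (k(s) / k(mV**2))**3                # energy-dependent width
    return mV**2 * (1 + d*GV/mV) / (mV**2 - s + f(s, mV, GV) - 1j*mV*Gs)

m_rho, G_rho = 0.770, 0.149
peak = abs(F_BW(m_rho**2, m_rho, G_rho))**2       # the rho peak, ~30.8
```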
The full pion form-factor with resonances~$\rho$, $\omega$ and~$\rho'$ has the following expression:
$$
\label{eq:4res}
F_{\pi}\left(s, \{\rho,\omega, \rho' \}\right)
=\frac{F_\pi(s,\{\rho,\omega\})
+ \alpha_{\rho'} F^\text{BW}\left(s,m_{\rho'},\Gamma_{\rho'}\right)}
{1+\alpha_{\rho'}}\,,~~~
$$
$$
\text{where} \quad F_{\pi}\left(s, \{\rho,\omega\}\right)
= F^\text{BW}(s,m_{\rho},\Gamma_{\rho})\,
\frac{1+\alpha_{\omega} F^\text{BW}(s,m_{\omega},\Gamma_{\omega})}
{1+\alpha_{\omega}}\,.
$$
Performing the $\chi^2$-minimisation we get the fit of the experimental data on~$|F_{\pi}|^2(s)$ taken from work~\cite{Barkov85} characterised by
$\chi^2_\text{b.f.}=0.82$
and the following values of the resonance parameters\footnote{We use the values of~$m_{\rho}$, $m_{\omega}$ and~$\Gamma_{\omega}$ taken from the PDG~\cite{PDG}.}:
\begin{eqnarray}
m_{\rho}&=&0.770~\text{GeV~(PDG)}\,,\quad
\Gamma_{\rho}\,=\,0.149 \pm 0.008~\text{GeV}\,;\nonumber\\
m_{\omega}&=&0.782~\text{GeV~(PDG)}\,,\quad
\Gamma_{\omega}\,=\,0.009~\text{GeV~(PDG)}\,,\quad
\alpha_{\omega}\,=\,0.0021 \pm 0.0017\,;\nonumber\\
m_{\rho'}&=&1.354 \pm 0.116~\text{GeV}\,,\quad
\Gamma_{\rho'}\,=\,0.344 \pm 0.157~\text{GeV}\,,\quad
\alpha_{\rho'}\,=\,-0.089 \pm 0.024\,;\nonumber\\
d\,&=&\,0.384 \pm 0.200~\text{GeV}\,.\nonumber
\end{eqnarray}
For cross-sections of the processes
$e^+e^-\to2\pi^+2\pi^-$,
$e^+e^-\to\pi^+\pi^-2\pi^0$,
$e^+e^-\to2\pi^+2\pi^-2\pi^0$,
and
$e^+e^-\to3\pi^+3\pi^-$
we assume a description in the form of a sum of three Gaussian curves, describing wide resonances:
\begin{eqnarray*}
\label{eq:Gauss.3R}
\sigma\left(s, \{M_{i},\sigma_{i},\alpha_{i}\}\right)
= \sum_{i=1}^3\frac{\alpha_i^2}{\alpha_1^2+\alpha_2^2+\alpha_3^2}\,
e^{-(\sqrt{s}-M_i)^2/(2\sigma_i^2)}\,.
\end{eqnarray*}
The results are shown in table~\ref{tab-1}.
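A minimal implementation of this three-Gaussian shape (our own sketch; the numbers are the best-fit parameters for the $e^+e^-\to2\pi^+2\pi^-$ channel from Table~\ref{tab-1}):

```python
import math

def sigma_model(sqrt_s, params):
    # sum of three Gaussians with normalised weights alpha_i^2 / sum_j alpha_j^2
    norm = sum(a*a for (_, _, a) in params)
    return sum((a*a/norm) * math.exp(-(sqrt_s - M)**2 / (2*w**2))
               for (M, w, a) in params)

# (M_i, sigma_i, alpha_i) for e+e- -> 2pi+ 2pi- from Table 1
par_4pi = [(1.289, 0.213, 0.462),
           (1.573, 0.227, 1.522),
           (2.524, 0.356, 2.427)]

value = sigma_model(1.573, par_4pi)   # shape at the second resonance mass
```

Since the weights sum to one and every Gaussian factor is at most one, this shape is bounded by one; an overall normalisation to the measured cross-section is understood.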
\begin{table}
\centering
\caption{The fitting results for particular $e^+e^-$-annihilation channels}
\label{tab-1}
\begin{tabular}{r|llll}
\hline
$e^+e^-\to2\pi^+2\pi^- \quad i$ &$M_i$, GeV & $\sigma_i$, GeV & $\alpha_i$ \\\hline
1&1.289 $\pm$ 0.034 & 0.213 $\pm$ 0.028 & 0.462 $\pm$ 0.058 \\
$\chi^2_\text{b.f.}=0.97\quad$
2&1.573 $\pm$ 0.017 & 0.227 $\pm$ 0.011 & 1.522 $\pm$ 0.121 \\
3&2.524 $\pm$ 0.070 & 0.356 $\pm$ 0.037 & 2.427 $\pm$ 0.791 \\\hline
\hline
$e^+e^-\to\pi^+\pi^-2\pi^0 \quad i$ &$M_i$, GeV & $\sigma_i$, GeV & $\alpha_i$ \\\hline
1&$1.304 \pm 0.250$ & $0.258 \pm 0.066$ & $1.024 \pm 1.064$ \\
$\chi^2_\text{b.f.}=1.12\quad$
2&$1.748 \pm 0.411$ & $0.283 \pm 0.052$ & $1.333 \pm 0.275$ \\
3&$2.322 \pm 0.201$ & $0.194 \pm 1.027$ & $0.817 \pm 2.826$ \\\hline
\hline
$e^+e^-\to 3\pi^+3\pi^- \quad i$ &$M_i$, GeV & $\sigma_i$, GeV & $\alpha_i$ \\\hline
1&$1.811\pm 0.027$ & $0.091\pm 0.023$ & $0.183\pm 0.040$ \\
$\chi^2_\text{b.f.}=0.58\quad$
2&$1.897\pm 0.034$ & $0.237\pm 0.027$ & $\,0.291\pm 0.029$ \\
3&$2.247\pm 0.042$ & $0.134\pm 0.264$ & $0.956\pm 0.534$ \\\hline
\hline
$e^+e^-\to 2\pi^+2\pi^-2\pi^0 \quad i$ &$M_i$, GeV & $\sigma_i$, GeV & $\alpha_i$ \\\hline
1&$1.740\pm 0.023$ & $0.115\pm 0.019$ & $0.529\pm 0.082$ \\
$\chi^2_\text{b.f.}=0.75\quad$
2&$2.027\pm 0.080$ & $0.287\pm 0.062$ & $0.362\pm 0.082$ \\
3&$2.274\pm 0.042$ & $0.248\pm 0.026$ & $1.051\pm 0.255$ \\\hline
\end{tabular}
\end{table}
\section{$R$-ratio, Adler function and sum rule}
\label{sec-RD}
The full $R$-ratio is the sum of the $R$-ratios of the particular processes:
$
R(s)
= \sum\limits_{i} \cfrac{\sigma_{e^+e^-\to\text{hadrons of type $i$}}(s)}
{\sigma_{e^+e^-\to \mu^+\mu^-}(s)}\,.
$
\begin{figure*}
\centering
\includegraphics[width=12cm,clip]{Fig1.png}
\vspace*{0cm}
\caption{The full~$R$-ratio in dependence on energy~$\sqrt{s}$ at~$\sqrt{s}\leq2$~GeV and the theoretical representation of $R$-ratio. The data on particular channels of $e^+e^-$-annihilation are presented by blue dots with errors.}
\label{fig:Rs0}
\end{figure*}
Extracting the current with isospin~$I=1$ ($j_{\mu}=\frac 1 2 (\bar u \gamma_{\mu} u-\bar d \gamma_{\mu} d)$) from the full electromagnetic current, we obtain the formula for the $D$-function with isospin $I=1$ in the framework of the OPE (see, for example, Ref.~\cite{SVZ}, Eq.~3.4):
\begin{eqnarray}
\label{eq:Adler.PT-OPE}
D_\text{PT+OPE}(Q^2)
= \frac 3 2 \,
\left[1 + \frac{\alpha_s(Q^2)}{\pi}
+ \sum_{n\geq 1} \Gamma(n)\,\frac{a_{2n}}{Q^{2n}}
\right],
\end{eqnarray}
where~$a_{2n}$ are the OPE coefficients; we take into account the first three power corrections.
Another form of the~$D$-function, the dispersional one, is the following:
\begin{eqnarray}
\label{eq:D.exp}
D_\text{exp}(Q^2)=
Q^2 \int_{4\,m_\pi^2}^{\infty}\frac{R_\text{exp-th}(s)\,ds}{(s+Q^2)^2}
= Q^2 \int_{4\,m_\pi^2}^{s_0}\frac{R_\text{exp}(s)\,ds}{(s+Q^2)^2}
+ Q^2 \int_{s_0}^\infty\frac{R_\text{th}(s)\,ds}{(s+Q^2)^2}\,,
\end{eqnarray}
$$
\text{where} \quad R_\text{exp-th}(s) = R_\text{exp}(s)\,\theta(s<s_0) + R_\text{th}(s)\,\theta(s>s_0),\quad
R_\text{th}(s)= \frac{3}{2}\,\left[1+\frac{\alpha_s(s)}{\pi}\right];
$$
$R_\text{exp}(s)$ is our fitting result (shown by black solid curve on Fig.~\ref{fig:Rs0}) and~$R_\text{th}(s)$ is the one-loop approximation of perturbative QCD (shown by red dashed curve on Fig.~\ref{fig:Rs0}).
The continuum threshold $\sqrt{s_0} = 1.57$~GeV is chosen to guarantee that $R_\text{exp}(s)$ and $R_\text{th}(s)$, and even their first derivatives, take similar values (within the statistical uncertainties of the experimental data), as seen in Fig.~\ref{fig:Rs0} (vertical dashed line).
The function~$R_\text{exp}(s)$ decreases above~$\sqrt{s_0} = 1.57$~GeV, which is explained by the absence of data on~$e^+e^-$-annihilation into~$8\pi$ channels.
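The dispersional integral~(\ref{eq:D.exp}) is straightforward to evaluate by quadrature. As a sanity check (our own sketch, not the actual analysis), for a constant $R$ the integral is analytic, $D(Q^2)=R\,Q^2\left[1/(s_{\min}+Q^2)-1/(s_{\max}+Q^2)\right]$, and a log-spaced trapezoidal rule reproduces it:

```python
import math

def D_dispersional(Q2, R, s_min, s_max=1.0e6, n=4000):
    # Q^2 * int_{s_min}^{s_max} R(s) ds / (s + Q^2)^2, log-spaced trapezoid
    xs = [s_min * (s_max/s_min)**(i/n) for i in range(n + 1)]
    fs = [R(s) / (s + Q2)**2 for s in xs]
    acc = sum(0.5*(fs[i] + fs[i+1])*(xs[i+1] - xs[i]) for i in range(n))
    return Q2 * acc

m_pi = 0.13957
s_thr = 4 * m_pi**2          # two-pion threshold
Q2 = 2.0
approx = D_dispersional(Q2, lambda s: 1.5, s_thr)
exact = 1.5 * Q2 * (1/(s_thr + Q2) - 1/(1.0e6 + Q2))
```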
Equating both forms of the Adler function, the OPE one~(\ref{eq:Adler.PT-OPE}) and the dispersional one~(\ref{eq:D.exp}), and applying the BT, we get the sum rule:
\begin{eqnarray}
\label{eq:SumRule}
\Phi_\text{exp}(M^2) = \Phi_{\text{PT+OPE}}(M^2),
\end{eqnarray}
\begin{eqnarray*}
\label{eq:Ad.Bor.Exp}
\text{where}\quad \Phi_\text{exp}(M^2)
= \int_0^{\infty}\!\!
R_\text{exp-th}(s)\,\left(1-\frac{s}{M^2}\right)\,e^{-s/M^2}\,
\frac{ds}{M^2},
\end{eqnarray*}
\begin{equation*}
\label{eq:Ad.Bor.PT.OPE}
\Phi_{\text{PT+OPE}}(M^2) =
\frac 3 2 \frac{\hat{B}_{Q^2\to M^2}[ \alpha_s(Q^2)]}{\pi}
+ \frac 3 2 \left( \frac{C_2}{M^{2}}
+ \frac{C_4}{M^{4}}
+ \frac{C_6}{M^{6}} \right), \quad C_{2n}=a_{2n},
\end{equation*}
\begin{eqnarray*}
\label{eq:Bor.abar}
\text{and}\quad \hat{B}_{Q^2\to M^2}\left[\alpha_s(Q^2)\right]
= \frac{4\pi}{b_0}\, \left[\frac{1}{M^2}\,\int \limits_0^{\infty}\!\! \frac{e^{-s/M^2}d s}{\ln^2\left(s/\Lambda^2\right)+\pi^2}
+ \frac{\Lambda^2}{M^2}\,e^{\Lambda^2/M^2}\right]\,,\quad b_0=11-2N_f/3,
\end{eqnarray*}
$\Lambda=\Lambda_{\text{QCD}}=0.25$~GeV is the QCD scale parameter, and
$M^2$ is the Borel mass, appearing in the equation after the Borel transform.
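The property that converts the $\Gamma(n)\,a_{2n}/Q^{2n}$ terms of Eq.~(\ref{eq:Adler.PT-OPE}) into the simple $C_{2n}/M^{2n}$ terms with $C_{2n}=a_{2n}$, namely $\hat{B}_{Q^2\to M^2}\!\left[1/Q^{2k}\right]=1/(\Gamma(k)\,M^{2k})$, can be checked numerically from the limit definition of the Borel operator, $\hat{B}=\lim_{Q^2,n\to\infty,\;Q^2/n=M^2}\frac{(Q^2)^n}{(n-1)!}\bigl(-\frac{d}{dQ^2}\bigr)^n$ (a standard definition; the sketch below is our own):

```python
import math

def borel_of_power(k, M2, n):
    # finite-n approximation of B[(Q^2)^{-k}] at Q^2 = n*M^2, using
    # (-d/dQ^2)^n (Q^2)^{-k} = [Gamma(k+n)/Gamma(k)] (Q^2)^{-k-n};
    # the exact n -> infinity limit is 1 / (Gamma(k) * M^(2k))
    Q2 = n * M2
    logval = (math.lgamma(k + n) - math.lgamma(n)
              - math.lgamma(k) - k*math.log(Q2))
    return math.exp(logval)

M2 = 1.5                               # GeV^2, an arbitrary Borel mass
approx = borel_of_power(2, M2, 10**6)
limit = 1.0 / (math.gamma(2) * M2**2)  # Gamma(2) a_4/Q^4 -> a_4/M^4, C_4 = a_4
```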
\section{The sum rule analysis}
\label{sec-Analysis}
Finally, let us turn to the discussion of the obtained sum rule~(\ref{eq:SumRule}).
The Borel mass~$M^2$ is varied in the interval~$0.75 \div 4$~GeV$^2$, which is divided into 20 points; then the coefficients~$C_2$ and~$C_4$ are determined by~$\chi^2$-minimisation, while $C_6 =-\frac{448\,\pi^3}{27}\,\alpha_s \left\langle \bar{q}q\right\rangle^2 =-0.121$~GeV$^6$, which can be expressed in terms of the quark condensate~\cite{SVZ}, is kept fixed~\cite{GeshIoffeZyabl}.
The coefficient~$C_4$ can be expressed in terms of gluon condensate
$ C_4 = \frac{2\pi^2}{3}\,\left\langle \frac{\alpha_s \, GG}{\pi}\right\rangle$~\cite{SVZtf79}.
Our result is shown in Fig.~\ref{fig:Chi2Region}; the regions corresponding to the 1, 2 and 3~$\sigma$-levels are marked in red, blue and yellow.
The minimal value is~$\chi_{\text{min}}^2=0.483$,
and the corresponding gluon condensate and~$C_2$ are found to be: $\left\langle (\alpha_s/\pi) \,GG\right\rangle = 0.025\,\text{GeV}^4$, $C_2 = -0.086\,\text{GeV}^2$. The allowed intervals for~$C_2$ and the gluon condensate at the one-$\sigma$ level are:
$$
C_2=-0.086 \pm 0.050\,\text{GeV}^2, \quad
\left\langle \frac{\alpha_s GG}{\pi}\right\rangle=0.025 \pm 0.012\,\text{GeV}^4.
$$
At the one-$\sigma$ level our results do not contradict the well-known results~\cite{SVZ, GeshIoffeZyabl, IoffeGG}, but at the same time~$C_2$ is not compatible with zero.
One can see that~$C_2=0$ is possible at the 3~$\sigma$-level.
Furthermore, from the shape of the regions corresponding to our results~(see Fig.~\ref{fig:Chi2Region}) we can suppose that there is some (anti)correlation between~$C_2$ and~$C_4$.
A rough estimate of this connection is expressed by the formula:
$$
C_2 \approx -5\,\text{GeV}^{-2}\, \left\langle \frac{\alpha_s GG}{\pi}\right\rangle +0.025\,\text{GeV}^2.
$$
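The (anti)correlation between the fitted coefficients can already be seen in a synthetic version of the fit (our own toy inputs, not the data): generating $\Phi$ at 20 Borel-mass points from assumed values of $C_2$ and $C_4$ and refitting by linear least squares, the off-diagonal element of the parameter covariance is negative.

```python
import numpy as np

M2 = np.linspace(0.75, 4.0, 20)           # Borel-mass grid, as in the text
C2_true, C4_true = -0.086, 0.164          # toy inputs; 0.164 = (2*pi^2/3)*0.025
rng = np.random.default_rng(1)
phi = 1.5*(C2_true/M2 + C4_true/M2**2) + rng.normal(0.0, 1e-4, M2.size)

A = 1.5 * np.column_stack([1.0/M2, 1.0/M2**2])   # linear model in (C2, C4)
coef, *_ = np.linalg.lstsq(A, phi, rcond=None)
cov = np.linalg.inv(A.T @ A)              # parameter covariance up to sigma^2
# cov[0, 1] < 0: C_2 and the gluon-condensate term C_4 are anticorrelated,
# consistent with the tilted chi^2 regions of Fig. 2
```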
\begin{figure}[h]\center
\includegraphics[height=7cm]{Fig2.png}
\caption{
The allowed regions for~$C_2$ and gluon condensate at 1, 2 and 3~$\sigma$-level, determined by $\chi^2 \leq \chi_{\text{min}}^2+1$ (red), $\chi^2 \leq \chi_{\text{min}}^2+2$ (blue) and $\chi^2 \leq \chi_{\text{min}}^2+3$ (yellow).
The regions of existing data on gluon condensate are shown for comparison.
Central values are marked by solid horizontal lines, errors ranges are marked by dashed lines.}
\label{fig:Chi2Region}
\end{figure}
One can compare the obtained values with the existing values of the gluon condensate~\cite{SVZ,IoffeGG,GeshIoffeZyabl}.
\section{Conclusions and outlook}
\label{sec-Results}
A fitting model for the resonance contributions is developed,
the Adler function with Borelization is obtained, and its precise numerical analysis is performed.
$C_2$ has negative sign and is compatible with zero only at 3 $\sigma$-level. At 1 $\sigma$-level:
$C_2=-0.086 \pm 0.050$~GeV$^2$,
$\left\langle \alpha_s/\pi \, GG\right\rangle=0.025 \pm 0.012$~GeV$^4$.
Strong (anti)correlation between short strings and local gluon condensate is found:
$$
C_2 \approx -5\,\text{GeV}^{-2}\, \left\langle \frac{\alpha_s GG}{\pi}\right\rangle +0.025\,\text{GeV}^2.
$$
We plan to perform an analogous analysis in the framework of the analytic perturbation theory~\cite{Shirkov:1997wi, Bakulev:2005gw, Khandramai:2011zd}.
\textit{Acknowledgements.
\\
We dedicate this paper to the memory of Alexander Petrovich Bakulev (1956-2012) with whom we started this work.
We are grateful to A.~L.~Kataev, S.~V.~Mikhailov,
O.~P.~Solovtsova for useful discussions and comments. M.~K. is thankful to Yu.~L.~Ryzhykau for valuable advice and assistance in the numerical analysis. The work is supported in part by RFBR Grant-140100647.
}
\section{Introduction}
Even though quantum chromodynamics (QCD) is universally accepted as the theory of strong interactions,
color confinement still remains an unsolved problem. Although the well-established phenomenon of the confinement of quarks and gluons inside hadrons is without any doubt encoded in the QCD Lagrangian, our
current understanding relies only on a number of models of the QCD vacuum (for a review, see Refs.~\cite{Greensite:2011zz,Diakonov:2009jq}). A theoretical a priori explanation of the area law for large Wilson loops, resulting from a long-distance linear confining quark-antiquark potential, is still missing.
First-principle numerical Monte Carlo simulations of QCD on a space-time lattice represent the most eligible tool for testing (or ruling out) models of confinement, but can also provide ``phenomenological'' results that can suggest new insights into
the mechanism of confinement. Numerical simulations have clearly established the presence of a linear confining potential between a static quark and antiquark for distances larger than 0.5~fm up to infinite distance in SU(3) pure gauge theory, and up to a distance of about 1.4~fm in the presence of dynamical quarks,
where {\em string breaking} should take place~\cite{Philipsen:1998de,Kratochvila:2002vm,Bali:2005fu,Koch:2015qxr,Koch:2018puh,KnechtlyLat21}.
This long-distance linear potential is naturally associated with the observation of a tube-like structure of the chromoelectric field
produced by a static quark-antiquark pair.
A large amount of numerical investigations of flux tubes has accumulated in SU(2) and SU(3) Yang-Mills theories~\cite{Fukugita:1983du,Kiskis:1984ru,Flower:1985gs,Wosiek:1987kx,DiGiacomo:1989yp,DiGiacomo:1990hc,Cea:1992sd,Matsubara:1993nq,Cea:1994ed,Cea:1995zt,Bali:1994de,Green:1996be,Skala:1996ar,Haymaker:2005py,D'Alessandro:2006ug,Cardaci:2010tb,Cea:2012qw,Cea:2013oba,Cea:2014uja,Cea:2014hma,Cardoso:2013lla,Caselle:2014eka,Cea:2015wjd,Cea:2017ocq,Shuryak:2018ytg,Bonati:2018uwh,Shibata:2019bke} mostly
aimed at studying the shape of the chromoelectric field on the transverse plane at the midpoint of the line connecting the static quark and antiquark, given that the other two components of the chromoelectric field and all three components of the chromomagnetic field are suppressed in that plane. The last few years witnessed several numerical efforts toward a more thorough description of the color fields around static sources. In this regard, measurements of all components of both the chromoelectric and chromomagnetic fields on all transverse planes passing through the line between the quarks were performed~\cite{Baker:2018mhw,Baker:2019gsi}; other contributions come from the study of the spatial distribution of the stress energy momentum tensor~\cite{Yanagihara:2018qqg,Yanagihara:2019foh} and of the flux densities for hybrid static potentials~\cite{Bikudo:2018,Mueller:2019mkh}.
In our numerical studies~\cite{Baker:2018mhw,Baker:2019gsi} of the spatial distribution in three dimensions of all components of the chromoelectric and chromomagnetic fields generated by
a static quark-antiquark pair in pure SU(3) lattice gauge theory, we found that the largest contribution is given by the longitudinal chromoelectric field, but still the transverse chromoelectric components are large enough to be fitted to a Coulomb-like ``perturbative'' field, whereas all the chromomagnetic components of the color field are compatible with zero within the statistical uncertainties.
We also introduced a new procedure to extract this perturbative Coulomb field from the transverse components of the chromoelectric field, which
allows one to isolate the nonperturbative confining field. From the knowledge of the nonperturbative part of the longitudinal chromoelectric field we were able to extract some relevant
parameters of the confining flux tube, such as the mean width and the string tension.
In this paper we present preliminary results obtained in the case of QCD with (2+1) HISQ~\cite{Follana:2006rc,MILC:2009mpl} flavors.
\section{Lattice setup and numerical results}
The lattice operator whose vacuum expectation value gives us access to the components of the color field generated by a static $q \bar q$ pair is the following connected correlator~\cite{DiGiacomo:1989yp,DiGiacomo:1990hc,Kuzmenko:2000bq,DiGiacomo:2000va}:
%
\begin{equation}
\label{rhoW}
\rho_{W,\,\mu\nu}^{\rm conn} = \frac{\left\langle {\rm tr}
\left( W L U_P L^{\dagger} \right) \right\rangle}
{ \left\langle {\rm tr} (W) \right\rangle }
- \frac{1}{N} \,
\frac{\left\langle {\rm tr} (U_P) {\rm tr} (W) \right\rangle}
{ \left\langle {\rm tr} (W) \right\rangle } \; .
\end{equation}
%
\begin{figure}[tb]
\centering
\subfigure[]{\includegraphics[width=0.4\textwidth]{fig_operator_Wilson.pdf}}
\subfigure[]{\includegraphics[width=0.2\textwidth]{qqbar.pdf}}
\caption{
(a) The connected correlator given in Eq.~(\protect\ref{rhoW}) between the plaquette $U_{P}$ and the Wilson loop
(subtraction in $\rho_{W,\,\mu\nu}^{\rm conn}$ not explicitly drawn). (b) The longitudinal
chromoelectric field $E_x(x_t)$ relative to the position of the static sources (represented by the white and black circles),
for a given value of the transverse distance $x_t$ and of the longitudinal distance $x_l$.}
\label{fig:op_W}
\end{figure}
Here $U_P=U_{\mu\nu}(x)$ is the plaquette in the $(\mu,\nu)$ plane, connected to the Wilson loop $W$ by a Schwinger line $L$, and $N$ is the number of colors (see Fig.~\ref{fig:op_W}).
The correlation function defined in Eq.~(\ref{rhoW}) measures the field strength $F_{\mu\nu}$, since in the naive continuum
limit~\cite{DiGiacomo:1990hc}
%
\begin{equation}
\label{rhoWlimcont}
\rho_{W,\,\mu\nu}^{\rm conn}\stackrel{a \rightarrow 0}{\longrightarrow} a^2 g
\left[ \left\langle
F_{\mu\nu}\right\rangle_{q\bar{q}} - \left\langle F_{\mu\nu}
\right\rangle_0 \right] \;,
\end{equation}
where $\langle\quad\rangle_{q \bar q}$ denotes the average in the presence of a static $q \bar q$ pair, and $\langle\quad\rangle_0$ is the vacuum average.
This relation is a necessary consequence of the gauge-invariance of the operator defined in Eq.~(\ref{rhoW}) and of its linear dependence on the color field in the continuum limit (see Ref.~\cite{Cea:2015wjd}).
The lattice definition of the quark-antiquark field-strength tensor
$F_{\mu\nu}$ is then obtained by equating the two sides of
Eq.~(\ref{rhoWlimcont}) for finite lattice spacing (more details in Refs.~\cite{Baker:2018mhw,Baker:2019gsi}).
We carried out numerical simulations for QCD with (2+1) HISQ flavors on lattices $24^4$, $32^4$, and $48^4$ and for
gauge coupling constants $\beta=6.445$, $\beta=7.158$, and $\beta=6.885$, respectively. For each setup, we measured the color fields corresponding to two different quark-antiquark separations. The physical scale is the ``$r_1$-scale'' as defined in Ref.~\cite{Bazavov:2011nk}. We choose the quark mass parameters to be on the line of constant physics, determined by fixing the strange quark mass to its physical value $m_s$ at each value of the gauge coupling $\beta$. The light-quark mass has
been fixed at $m_l=m_s/20$. This corresponds to a pion mass $m_\pi=160$~MeV~\cite{Bazavov:2011nk}.
The operator in Eq.~(\ref{rhoW}) undergoes a non-trivial renormalization~\cite{Battelli:2019lkz}, which depends on $x_t$. As discussed in Refs.~\cite{Baker:2018mhw,Baker:2019gsi},
comparing our results with those in Ref.~\cite{Battelli:2019lkz} we concluded that smearing behaves as an effective renormalization, even though no analysis of the systematics of our approach as compared to theirs was ever done.
The smearing procedure can be validated {\it a posteriori} by the observation of continuum scaling, that is by checking that fields obtained in the same {\em physical} setup, but at different values of the coupling and of the optimal number of smearing steps, are in good agreement in the range of gauge couplings used.
In the present work we smoothed the gauge configurations using one step of HYP~\cite{Hasenfratz:2001hp} smearing on the temporal links.
In Fig.~\ref{fig:smearing} we show an example of the smearing procedure applied to measurements of the longitudinal component of the chromoelectric field on a $32^4$ lattice at $\beta=7.158$, for a distance of 0.75~fm between the static sources at $x_l=4$, with one dataset for each transverse distance $x_t$ at which we measured the field.
\begin{figure}[tb]
\centering
\includegraphics[width=0.7\textwidth]{Ex_smearing_xl=4.pdf}
\caption{The $E_x(x_l,x_t)$ field in lattice units measured after 1 HYPt smearing versus the number of HYP3d smearings.
The values of $E_x(x_l,x_t)$ are displayed at a fixed value $x_l=4a$ for each value of the transverse distance $0\le x_t \le10$.}
\label{fig:smearing}
\end{figure}
We checked the physical scaling for several volumes and lattice spacings by measuring the fields at physical distances $d\simeq0.75{\text{ fm}}$ and $d\simeq1{\text{ fm}}$ between the quark-antiquark pair.
In Fig.~\ref{fig:scaling} the results of this scaling test show that the measured values of the $E_x$ field at different values of the coupling agree well within statistical errors. The scaling test ensures that in the range of parameters used our results satisfy continuum scaling.
As in the case of SU(3) pure gauge theory we find that: (i) the chromomagnetic field is everywhere much smaller than the longitudinal chromoelectric field and is compatible with zero within statistical errors;
(ii) the dominant component of the chromoelectric field is longitudinal; (iii) the transverse components of the chromoelectric field are also smaller than the longitudinal component,
but can be matched to the transverse components of an effective Coulomb-like field $\vec{E}^C(\vec{r})$ satisfying the conditions:
\begin{enumerate}
\item The transverse component $E_y$ of the chromoelectric field is identified with the transverse component $E_y^C$ of the perturbative field
\begin{equation}
E^C_y \equiv E_y.
\label{eq:Ecoulomb}
\end{equation}
\item The perturbative field $\vec{E}^C$ is irrotational
\begin{equation}
\nabla \times \vec{E}^C = 0.
\label{eq:irrotational}
\end{equation}
\end{enumerate}
%
By imposing the condition of being irrotational, we are able to evaluate (for details see Ref.~\cite{Baker:2019gsi}) the perturbative contribution $E^C$ to the longitudinal chromoelectric field and to extract the non-perturbative confining longitudinal chromoelectric field:
\begin{equation}
E_x^{\rm{NP}}= E_x - E_x^C.
\label{eq:monperturbative}
\end{equation}
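A toy check of this reconstruction (our own example, not lattice data): for a genuinely curl-free planar field, here the field of two opposite charges placed below the measurement plane, $E_x$ is recovered from $E_y$ alone by integrating $\partial E_x/\partial y = \partial E_y/\partial x$ away from a boundary slice where $E_x$ is known.

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 401)
ys = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(xs, ys, indexing="ij")

def charge_field(q, x0, y0):
    R2 = (X - x0)**2 + (Y - y0)**2
    return q*(X - x0)/R2**1.5, q*(Y - y0)/R2**1.5

# opposite charges below the plane y >= 0, so the field is smooth on the grid
Ex1, Ey1 = charge_field(+1.0, -0.5, -0.5)
Ex2, Ey2 = charge_field(-1.0, +0.5, -0.5)
Ex, Ey = Ex1 + Ex2, Ey1 + Ey2

dEy_dx = np.gradient(Ey, xs, axis=0, edge_order=2)  # = dEx/dy if curl-free
Ex_rec = np.empty_like(Ex)
Ex_rec[:, 0] = Ex[:, 0]              # anchor on the known boundary slice
for j in range(1, ys.size):          # trapezoidal integration in y
    Ex_rec[:, j] = (Ex_rec[:, j-1]
                    + 0.5*(dEy_dx[:, j] + dEy_dx[:, j-1])*(ys[j] - ys[j-1]))

err = float(np.max(np.abs(Ex_rec - Ex)))   # small: E_x recovered from E_y
```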
\begin{figure}[tb]
\centering
\subfigure[]{\includegraphics[width=0.4\textwidth,clip]{Ex_midpoint_d=0.75approx-crop.pdf}} \hspace{1 cm}
\subfigure[]{\includegraphics[width=0.4\textwidth,clip]{Ex_midpoint_d=0.95approx-crop.pdf}}
\caption{
The value of the $E_x(x_l,x_t)$ field in physical units evaluated at the midpoint of the line connecting the static quark-antiquark pair, placed at the same physical distance $d$, for
several lattice volumes and lattice spacings: (a) $d\simeq0.75$~fm; (b) $d\simeq1$~fm.
}
\label{fig:scaling}
\end{figure}
Fig.~\ref{fig:Exbeta7.158} shows the three-dimensional profile of the chromoelectric field $E_x$ generated by a static quark-antiquark pair at a mutual distance of 0.74~fm.
At each fixed value of $x_l$ along the line connecting the quark sources, the field has been evaluated as a function of the transverse distance $x_t$. The perturbative contribution to $E_x$ can be determined by means of the irrotational condition Eq.~(\ref{eq:irrotational}) and, after subtraction from the full chromoelectric field, we get the nonperturbative confining field $E_x^{\rm{NP}}$.
From the subtracted, nonperturbative part of the longitudinal chromoelectric field we can extract some relevant parameters of the flux tube at the midpoint, such as the root mean square width
\begin{equation}
\sqrt{w^2} = \sqrt{\frac{\int d^2x_t \, x_t^2 E_x(x_t)}{\int d^2x_t \, E_x(x_t)}},
\label{eq:rmswidth}
\end{equation}
and the square root of the string tension
\begin{equation}
\sqrt{\sigma} = \sqrt{ \int d^2x_t
\frac{(E^{\rm NP}_x)^2 (x_t)}{2} } .
\label{eq:stringtension}
\end{equation}
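As a sketch of how Eqs.~(\ref{eq:rmswidth}) and~(\ref{eq:stringtension}) are evaluated in practice (with a toy Gaussian profile and made-up amplitude and width in place of the measured $E_x^{\rm NP}(x_t)$; for a cylindrically symmetric profile, $\int d^2x_t = 2\pi\int_0^\infty x_t\,dx_t$):

```python
import math

A, lam = 0.9, 0.35                  # toy amplitude and width of the profile

def E_NP(xt):                       # Gaussian ansatz for E_x^NP(x_t)
    return A * math.exp(-xt**2 / (2*lam**2))

grid = [i*1e-3 for i in range(5001)]         # x_t up to 5, i.e. >> lam
def trapz(f):
    v = [f(x) for x in grid]
    return sum(0.5*(v[i] + v[i+1])*(grid[i+1] - grid[i])
               for i in range(len(grid) - 1))

num = trapz(lambda x: 2*math.pi * x**3 * E_NP(x))    # int d^2x_t x_t^2 E
den = trapz(lambda x: 2*math.pi * x * E_NP(x))       # int d^2x_t E
w = math.sqrt(num/den)                               # rms width = lam*sqrt(2)
sigma = trapz(lambda x: 2*math.pi * x * E_NP(x)**2 / 2)
# sqrt(sigma) = A*lam*sqrt(pi/2) for this Gaussian ansatz
```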
We stress that the nonperturbative field $E_x^{\rm{NP}}$ was determined by a model-independent procedure and, therefore, all the results obtained from it are model independent as well.
In Fig.~\ref{fig:stringwidth} we present our results for the string tension and the width of the flux tube.
This evaluation has been done for several values of the distance between the static quark-antiquark pair: the results are quite independent of this distance.
A systematic study for several distances between the quark sources is in progress.
\begin{figure}[tb]
\centering
\subfigure[]{\includegraphics[width=0.45\textwidth,clip]{beta=7.158_d=0.74_full_surface-crop.pdf}} \hspace{1 cm}
\subfigure[]{\includegraphics[width=0.45\textwidth,clip]{beta=7.158_d=0.74_nonperturbative_surface-crop.pdf}}
\caption{
Surface and contour plots for the longitudinal components of the full (a) and non-perturbative (b) chromoelectric field obtained by subtracting the perturbative contribution using
the procedure based on the irrotational condition Eq.~(\ref{eq:irrotational}) at $\beta=7.158$ and quark-antiquark pair at distance $d=10a=0.75$~fm.}
\label{fig:Exbeta7.158}
\end{figure}
\begin{figure}[tb]
\centering
\subfigure[]{\includegraphics[width=0.4\textwidth,clip]{sqrt_stringtension-crop.pdf}} \hspace{1 cm}
\subfigure[]{\includegraphics[width=0.4\textwidth,clip]{rms_width-crop.pdf}}
\caption{
The square root of the string tension (a) and the root mean square width of the flux tube (b) obtained in QCD with (2+1) HISQ flavors for several distances between the static quark-antiquark pair.
\label{fig:stringwidth}
\end{figure}
\section*{Acknowledgements}
This work was based in part on the MILC Collaboration's public lattice gauge theory code (\url{http://physics.utah.edu/~detar/milc.html}).
Simulations have been performed using computing facilities at CINECA (INF20$\_$npqcd project under CINECA-INFN agreement).
\section{Introduction}
\label{Sec-Introduction}
Microwave engineering relies on time-consuming electromagnetic simulations to carry out robust electrical designs. The electromagnetic complexity in microwave devices is such that \emph{only} detailed, full-wave simulations, i.e., solving Maxwell's equations directly, can guide an engineer in pursuing the target electrical design. As a result, most of a microwave engineer's working time is spent waiting for an electromagnetic simulation to conclude, which then assists in taking an action to meet the specifications in an electrical design. Industry has long since recognized that one can no longer afford this design methodology. Different efforts in the computational electromagnetics (CEM) community have been carried out to speed up this costly process, and most of them follow the model order reduction (MOR) philosophy \cite{Edlinger2014APosteriori,nicolini2019model,hochman2014reduced,Rewienski2016greedy,codecasa2019exploiting,ChewTAP2014GeneralizedModal,xue2020rapid,mrozowski2020,szypulski2020SSMMM}.
A reduced-order model (ROM) implies replacing a rather complex physical model by a much simpler mathematical one that still maintains certain physical aspects of the original model over a parameter domain. The computational complexity of the ROM should be insignificant in comparison to the high computational cost of the original full-order model (FOM). MOR has demonstrated its robustness in reducing the complexity of parametric systems \cite{morQuaMN16,morHesRS16,morBenGW15}. However, the accuracy of the ROM is sometimes not guaranteed due to the lack of low-cost and computable error estimators. Although the ROM may be valid for a certain parameter range, its validity over the entire parameter domain is not guaranteed. As a result, it cannot be used as a reliable surrogate of the original FOM. This lack of accuracy guarantees precludes the ROM from being used for industrial applications, where no \emph{a priori} knowledge of the parameter range may be available. This is the worst-case scenario for MOR. It is quite usual that the proposition of a corresponding error estimator lags behind new MOR algorithms. To remedy this, a great effort has been carried out in certifying the accuracy of the ROM, where computationally expensive error estimation may be allowed. This is the case for \emph{inf-sup} constant-based error estimators \cite{morHesB13,hess2015estimating,Edlinger2015CertifiedDualCorrected,morSchWH18}. The residual norm divided by this costly \emph{inf-sup} constant \cite{GarRM17} bounds the state error. As already stated in \cite{morFenB19}, keeping the \emph{inf-sup} constant in the denominator of the error estimator poses a potential risk for many problems with small \emph{inf-sup} constants. This is quite common in microwave circuits, where resonances show up in the frequency band of analysis, dropping the \emph{inf-sup} constant down to zero \cite{delaRubia2018CRBM,GarRM17}.
Different strategies for \emph{a posteriori} error estimation should be considered, reducing its computational cost to the same order as that of the ROM, if possible. Residual norm-based error estimation can be carried out with little effort and has often been used as a heuristic error estimator~\cite{delaRubia2018CRBM,de2009reliable, Vouvakis2011FastFrequency, delaRubia2014Reliable, morRewLM15, Edlinger2015ANewMethod, Edlinger2017finite, fotyga2018reliable, MonjedelaRubia2020EFIE}. Going back to state error estimation, recent works have focused on avoiding the \emph{inf-sup} constant evaluation \cite{morSemZP18, feng2020InfSupConstantFree}. There, additional dual or residual systems are solved to obtain the error estimators, overcoming any time-consuming \emph{inf-sup} constant calculation. However, although both approaches avoid computing the expensive \emph{inf-sup} constant, they need to solve additional dual or residual systems, respectively.
In this work, we aim to further reduce the computational cost of the error estimator proposed in \cite{feng2020InfSupConstantFree}, which was shown to be more efficient than that in \cite{morSemZP18}. We study the frequency behavior of both the electric field and the state error when an approximation to the electric field solution is carried out, and detail a Fourier series representation in both cases. Both field quantities share the same orthogonal Fourier series representation basis in frequency-parameter systems. In contrast to what has previously been done for general parametric systems, where a residual system needs to be solved independently, we focus on the efficient determination of the electric field state error and propose a fast evaluation of the ROM state error in the frequency band of analysis, minimizing the number of FOM evaluations, which plays a central role in the efficiency of MOR. This methodology is of paramount importance for carrying out a reliable fast frequency sweep in microwave circuits.
This paper is organized as follows. In Section \ref{Sec-ProblemStatement} we review the time-harmonic Maxwell's equations in variational form, solve for the electromagnetic field in order to show its frequency behavior and detail the \emph{inf-sup} constant-based standard error analysis. Section \ref{Sec-InfSupConstantFree} deals with the proposed state error estimation in frequency-parameter systems avoiding the \emph{inf-sup} constant. Numerical simulations in Section \ref{Sec-NumericalResults} show the performance of the proposed approach for reliable fast frequency sweeps in electromagnetics. Real-life microwave circuits illustrate the capabilities and accuracy of the proposed methodology. Finally, in Section \ref{Sec-Conclusions}, we provide conclusions.
\section{Problem Statement}
\label{Sec-ProblemStatement}
The electromagnetic phenomena in a given device are described by Maxwell's equations. After applying the Fourier transform to these equations, the fields in the transform domain $i\omega$ can be found. They are
\begin{subequations}
\label{eq:Sec-ProblemStatement-MaxwellSystem}
\begin{align}
\nabla \times \mathbf{E} &= -i \omega \mu \mathbf{H} \text{ in } \Omega,\\
\nabla \times \mathbf{H} &= i \omega \varepsilon \mathbf{E} \text{ in } \Omega,\\
\label{eq:Sec-ProblemStatement-MaxwellSystem-PECBoundaryCondition}
\mathbf{n} \times \mathbf{E} &= \mathbf{0} \text{ on } \Gamma_\text{PEC},\\
\mathbf{n} \times \mathbf{H} &= \mathbf{0} \text{ on } \Gamma_\text{PMC},\\
\mathbf{n} \times \mathbf{H} &= \mathbf{J} \text{ on } \Gamma \text{,}
\end{align}
\end{subequations}
where $\Omega \subset \mathbb{R}^{3}$ is a source-free, sufficiently smooth bounded domain, $\mathbf{n}$ is the unit outward normal vector on the boundary $\partial \Omega$ of $\Omega$. The boundary is divided into perfect electric conductor (PEC), perfect magnetic conductor (PMC) and ports, i.e., $\partial \Omega =\Gamma_\text{PEC} \cup \Gamma_\text{PMC}\cup \Gamma $. $\mathbf{E}$ and $\mathbf{H}$ are the electric and magnetic fields, $\varepsilon $ and $\mu $ are, respectively, the permittivity and permeability of the medium, which is assumed to be lossless, and the tangential field $\mathbf{J}$ is the excitation current at the ports. Time-harmonic Maxwell's equations can be written in a classical weak formulation over an appropriate admissible function space $\mathcal{H}$, viz.
\begin{equation}
\label{eq:Sec-ProblemStatement-WeakForm}
\begin{aligned}
\text{find}~&\mathbf{E}\in \mathcal{H}~\text{such that} \\
&a(\mathbf{E},\mathbf{v})=f(\mathbf{v})~\forall \mathbf{v}\in \mathcal{H}\text{.}
\end{aligned}
\end{equation}
The bilinear form is
\begin{equation}
\label{eq:Sec-ProblemStatement-BilinearForm}
a(\mathbf{E}, \mathbf{v}) = \int\limits_{\Omega}\left( \frac{1}{\mu } \nabla \times \mathbf{E} \cdot \nabla \times \mathbf{v} - \omega ^{2} \varepsilon \mathbf{E} \cdot \mathbf{v}\right) dx \text{,}
\end{equation}
and the linear form
\begin{equation}
\label{eq:Sec-ProblemStatement-LinearForm}
f(\mathbf{v})=i\omega \int\limits_{\partial \Omega} \mathbf{J} \cdot \mathbf{v}~ds = i \omega \int\limits_{\Gamma} \mathbf{J} \cdot \mathbf{v}~ds \text{.}
\end{equation}
Here, the admissible space $\mathcal{H}$ is a subspace of the Hilbert space $H(curl,\Omega )$ defined by:
\begin{equation}
\label{eq:Sec-ProblemStatement-Hcurl}
H(curl,\Omega )=\left\{ \mathbf{u}\in L^{2}(\Omega, \mathbb{C}^{3})~|~\nabla \times \mathbf{u}\in L^{2}(\Omega, \mathbb{C}^{3})\right\} \text{,}
\end{equation}
since $\mathcal{H}$ should take the boundary condition \eqref{eq:Sec-ProblemStatement-MaxwellSystem-PECBoundaryCondition} into account, namely,
\begin{equation}
\label{eq:Sec-ProblemStatement-H}
\mathcal{H}=\left\{ \mathbf{u}\in H(curl, \Omega )~|~\mathbf{n} \times \mathbf{u} = \mathbf{0}~\text{on }\Gamma_\text{PEC}\right\} \text{.}
\end{equation}
Let us refer to the trace spaces, namely,
\begin{equation}
\label{eq:Sec-ProblemStatement-TraceSpaces}
\begin{aligned}
H^{-\frac{1}{2}}(div, \partial \Omega)&= \{ \mathbf{n} \times \mathbf{u}~\text{on}~\partial \Omega ~|~\mathbf{u} \in H(curl,\Omega) \} \\
H^{-\frac{1}{2}}(curl, \partial \Omega)&= \{ \mathbf{n} \times \mathbf{u} \times \mathbf{n}~\text{on}~\partial \Omega ~|~\mathbf{u} \in H(curl,\Omega) \} \text{,}
\end{aligned}
\end{equation}
and point out that they are dual to each other with the following duality pairing
\begin{equation}
\label{eq:Sec-ProblemStatement-DualityPairing}
\langle \mathbf{u},\mathbf{v} \rangle = \int\limits_{\partial \Omega} \mathbf{u} \cdot \mathbf{v} ~ds
\end{equation}
for $\mathbf{u} \in H^{-\frac{1}{2}}(div,\partial \Omega)$ and $\mathbf{v} \in H^{-\frac{1}{2}}(curl,\partial \Omega)$. It is now apparent that the excitation current $\mathbf{J}$ belongs to $H^{-\frac{1}{2}}(div, \partial \Omega)$. We refer to \cite{GiraultRaviart,Monk} for a thorough treatment of all these spaces.
\subsection{Field Frequency Dependency in Electromagnetics}
Following \cite{delaRubia2018CRBM,Kirsch}, where some frequency structure is shown in the solution to the variational problem \eqref{eq:Sec-ProblemStatement-WeakForm}, we introduce the Helmholtz decomposition
\begin{equation}
\label{eq:Sec-ProblemStatement-HelmholtzDecomposition}
\mathcal{H}=\mathcal{H}(curl0, \Omega) \oplus \mathcal{V} \text{,}
\end{equation}
where
\begin{subequations}
\label{eq:Sec-ProblemStatement-HelmholtzSpaces}
\begin{align}
\label{eq:Sec-ProblemStatement-HelmholtzSpaces-Hcurl0}
&\mathcal{H}(curl0,\Omega) = \{ \mathbf{u} \in \mathcal{H}~|~ \nabla \times \mathbf{u} = \mathbf{0} \},\\
\label{eq:Sec-ProblemStatement-HelmholtzSpaces-V}
&\mathcal{V} = \{ \mathbf{u} \in \mathcal{H}~|~(\varepsilon \mathbf{u}, \mathbf{v})_{L^{2}(\Omega)} = 0~\forall \mathbf{v} \in \mathcal{H}(curl0,\Omega) \} \text{.}
\end{align}
\end{subequations}
$(\cdot, \cdot)_{L^2(\Omega)}$ is the inner product in $L^{2}(\Omega, \mathbb{C}^{3})$. $\mathcal{H}(curl0, \Omega)$ denotes the nullspace of the curl operator while $\mathcal{V}$ stands for its orthogonal complement within the solution space $\mathcal{H}$ in the following inner product
\begin{equation}
\label{eq:Sec-ProblemStatement-muepsInnerProduct}
(\mathbf{u}, \mathbf{v})_{\mu, \varepsilon} = ( \frac{1}{\mu} \nabla \times \mathbf{u}, \nabla \times \mathbf{v})_{L^{2}(\Omega)}+(\varepsilon \mathbf{u}, \mathbf{v})_{L^{2}(\Omega)} \text{.}
\end{equation}
It should be noted that both $\mathcal{H}(curl0, \Omega)$ and $\mathcal{V}$ spaces satisfy the PEC boundary condition on $\Gamma_\text{PEC}$.
The variational problem \eqref{eq:Sec-ProblemStatement-WeakForm} can be solved by using the splitting $\mathbf{E} = \mathbf{E}_0 + \mathbf{e}$, $\mathbf{E}_0 \in~\mathcal{H}(curl0, \Omega)$, $\mathbf{e} \in \mathcal{V}$. We refer to \cite{delaRubia2018CRBM,Kirsch} for details. As a result, we can make the dependence of the solution to time-harmonic Maxwell's equations on frequency explicit, cf. \cite{Kurokawa,Conciauro}, i.e.,
\begin{equation}
\label{eq:Sec-ProblemStatement-SolutionMaxwellFrequencyDependency}
\begin{aligned}
&\text{if } \omega^2 \neq \omega_n^2\text{,} ~ \mathbf{E} = \mathbf{E}_0 + \mathbf{e} = \frac{1}{i\omega} \mathbf{F}_0 + i \omega \sum \limits_{n=1}^{\infty} \frac{A_n}{1 - \frac{\omega^2}{\omega_n^2} } \mathbf{e}_n,\\
&\text{if } \omega^2 = \omega_n^2\text{,}~\mathbf{E} = \mathbf{E}_0 + \mathbf{e} = \\
&= \frac{1}{i\omega} \mathbf{F}_0 + i \omega \sum \limits_{ \omega_n^2 = \omega^2 } a_n \mathbf{e}_n + i \omega \sum \limits_{\omega_n^2 \neq \omega^2} \frac{A_n}{ 1 - \frac{\omega^2}{\omega_n^2} } \mathbf{e}_n \text{.}
\end{aligned}
\end{equation}
$\mathbf{F}_0 \in \mathcal{H}(curl0, \Omega)$ is related to the Riesz representative for the electric field in statics. The set of eigenmodes $\{\mathbf{e}_n~|~n \in~\mathbb{N}\} \subset \mathcal{V}$ stands for the resonant modes in electrodynamics, along with their corresponding resonant frequencies $\omega_n \in \mathbb{R}$, and forms a complete orthonormal system in $\mathcal{V}$ with respect to the inner product \eqref{eq:Sec-ProblemStatement-muepsInnerProduct} \cite{Kirsch}. It should be pointed out that $\mathcal{H}(curl0, \Omega)$ is orthogonal to $\mathcal{V}$ with respect to the same inner product \eqref{eq:Sec-ProblemStatement-muepsInnerProduct}. Getting to our point, \eqref{eq:Sec-ProblemStatement-SolutionMaxwellFrequencyDependency} details an orthogonal representation, i.e., a Fourier series for the electric field where the frequency dependence is explicit. Further, $A_n$ are coupling coefficients for the excitation current $\mathbf{J}$ to its corresponding resonant mode $\mathbf{e}_n$ and are determined by
\begin{equation}
\label{eq:Sec-ProblemStatement-ExcitationCouplingCoefficient}
A_n = \langle \mathbf{J}, \mathbf{n} \times \overline{\mathbf{e}}_n \times \mathbf{n} \rangle \text{.}
\end{equation}
$\overline{\mathbf{e}}_n$ stands for the complex conjugate of $\mathbf{e}_n$. Finally, $a_n$ are arbitrary coefficients since the electric field is not unique at resonance.
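For illustration only, the off-resonance branch of the modal series \eqref{eq:Sec-ProblemStatement-SolutionMaxwellFrequencyDependency} can be evaluated numerically once truncated. The resonant frequencies $\omega_n$, coupling coefficients $A_n$, eigenmodes $\mathbf{e}_n$, and static term below are made-up stand-ins, not data from any structure considered in this paper; the sketch only shows how the mode closest to the evaluation frequency dominates the response.

```python
import numpy as np

# Truncated Fourier-series (modal) representation of the electric field,
#   E(w) = F0/(i*w) + i*w * sum_n A_n / (1 - w^2/w_n^2) * e_n,
# away from resonance.  All numbers below are made-up stand-ins.
omega_n = np.array([1.0, 2.5, 4.0])   # assumed resonant frequencies w_n
A = np.array([0.8, -0.3, 0.5])        # assumed coupling coefficients A_n
e = np.eye(3)                         # stand-in orthonormal eigenmodes e_n
F0 = np.array([0.1, 0.0, 0.0])        # stand-in static (curl-free) part

def field(omega):
    """Evaluate the truncated series for E(omega), omega^2 != omega_n^2."""
    if np.any(np.isclose(omega**2, omega_n**2)):
        raise ValueError("omega hits a resonance; expansion not unique there")
    coeffs = 1j * omega * A / (1.0 - omega**2 / omega_n**2)
    return F0 / (1j * omega) + coeffs @ e

E_low = field(0.5)    # far below the first resonance
E_near = field(0.99)  # just below omega_1 = 1.0: first mode dominates
```

As $\omega$ approaches $\omega_1$ from below, the term $A_1/(1-\omega^2/\omega_1^2)$ blows up and the first eigenmode dominates the field, which is the behavior exploited throughout the rest of the paper.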
\subsection{Parametric Variational Problem and Standard \emph{A Posteriori} Error Analysis}
\label{Sec-ProblemStatement-Subsec-StandarAPosterioriError}
Taking frequency ($\omega$) as a parameter, the weak formulation for time-harmonic Maxwell's equations \eqref{eq:Sec-ProblemStatement-WeakForm} turns into the following parametric variational problem:
\begin{equation}
\label{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-ParametricWeakForm}
\begin{aligned}
\text{find}~&\mathbf{E}(\omega) \in \mathcal{H}~\text{such that} \\
&a(\mathbf{E}(\omega), \mathbf{v}; \omega) = f(\mathbf{v}; \omega)~\forall \mathbf{v} \in \mathcal{H},~\forall \omega \in \mathcal{B} \text{.}
\end{aligned}
\end{equation}
Here, $\mathcal{B}:=[\omega_\text{min},\omega_\text{max}]\subset \mathbb{R}$ is the frequency band of interest, and the frequency-parameter bilinear and linear forms $a(\cdot,\cdot;\omega)$ and $f(\cdot;\omega)$ are already defined in \eqref{eq:Sec-ProblemStatement-BilinearForm} and \eqref{eq:Sec-ProblemStatement-LinearForm}, respectively. The well-posedness of the parametric problem \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-ParametricWeakForm} relies on the behavior of the so-called \emph{inf-sup} constant $\beta(\omega)$ as a function of frequency:
\begin{equation}
\label{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-InfSupConstant}
\beta (\omega ) = \adjustlimits\inf_{\mathbf{u} \in \mathcal{H}} \sup_{\mathbf{v} \in \mathcal{H}}\, \frac{ |a(\mathbf{u}, \mathbf{v}; \omega)| }{ \| \mathbf{u} \|_\mathcal{H} \| \mathbf{v} \|_\mathcal{H} }.
\end{equation}
For all $\omega \in \mathcal{B}$, $\beta (\omega )\geq \beta _{0} > 0$ ensures the well-posedness of the variational problem \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-ParametricWeakForm} and the uniqueness of its solution \cite{hess2015estimating}.
This result gives rise to a standard \emph{a posteriori} error analysis. Provided an approximate solution $\mathbf{\tilde E}(\omega) \in \mathcal{H}$ to the variational problem \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-ParametricWeakForm} is found, the error in the approximation $\| \mathbf{E}(\omega) - ~\mathbf{\tilde E}(\omega)\|_\mathcal{H}$ can be bounded using the \emph{inf-sup} constant. Indeed, \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-InfSupConstant} can be rewritten as follows:
\begin{equation}
\label{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-InfSupConstant2}
\sup_{\mathbf{v} \in \mathcal{H}} \frac{|a(\mathbf{u}, \mathbf{v};\omega)|}{ \|\mathbf{u}\|_\mathcal{H} \|\mathbf{v}\|_\mathcal{H}}\geq\beta(\omega)~\forall \mathbf{u} \in \mathcal{H}.
\end{equation}
In particular, this inequality still holds when replacing $\mathbf{u} \in \mathcal{H}$ by the field $\mathbf{E}(\omega)-\mathbf{\tilde E}(\omega) \in \mathcal{H}$, which gives rise to an upper bound for the approximation error, namely,
\begin{equation}
\label{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-UpperBoundError}
\| \mathbf{E}(\omega )- \mathbf{ \tilde E}(\omega) \|_\mathcal{H} \leq \frac{ 1 }{ \beta(\omega)} \sup_{ \mathbf{v} \in \mathcal{H} } \frac{ |a(\mathbf{E}(\omega )-\mathbf{\tilde E}(\omega), \mathbf{v}; \omega)| }{ \| \mathbf{v} \|_\mathcal{H} }.
\end{equation}
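In a discretized setting this bound has a simple linear-algebra counterpart that can be checked numerically. The sketch below uses a random stand-in system and Euclidean norms in place of the $\mathcal{H}$-norm, with the smallest singular value of the system matrix playing the role of $\beta(\omega)$; it only illustrates the mechanism, not any particular discretization.

```python
import numpy as np

# Discrete sanity check of the inf-sup error bound (Euclidean norms as
# stand-ins for the H-norms): for a nonsingular system A x = f,
#   ||x - x_tilde|| <= ||f - A x_tilde|| / sigma_min(A),
# where sigma_min(A) plays the role of the inf-sup constant beta(omega).
rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)

x = np.linalg.solve(A, f)                        # "true" field
x_tilde = x + 1e-3 * rng.standard_normal(n)      # a perturbed approximation

beta = np.linalg.svd(A, compute_uv=False).min()  # discrete inf-sup constant
error = np.linalg.norm(x - x_tilde)
bound = np.linalg.norm(f - A @ x_tilde) / beta   # may strongly overestimate
```

The bound always holds, but the gap between `error` and `bound` grows as `beta` shrinks, which anticipates the difficulty near resonances discussed below.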
However, this error bound not only involves the computation of the norm of the residual functional
\begin{equation}
\label{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-Residual}
\begin{aligned}
r(\mathbf{\tilde E}(\omega), \mathbf{v}; \omega) :=& f(\mathbf{v}; \omega )-a(\mathbf{\tilde E}(\omega), \mathbf{v}; \omega) \\
=& a(\mathbf{E}(\omega)-\mathbf{\tilde E}(\omega), \mathbf{v}; \omega),\forall \mathbf{v} \in \mathcal{H} \text{,}
\end{aligned}
\end{equation}
which can be determined in an efficient way as a function of frequency \cite{de2009reliable,Vouvakis2011FastFrequency,morHesB13,Edlinger2015ANewMethod,morRewLM15}, but also the determination of the \emph{inf-sup} constant throughout the frequency band of interest $\mathcal{B}$, which can be time-consuming \cite{hess2015estimating,Edlinger2015CertifiedDualCorrected,GarRM17}.
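To sketch why the residual norm is cheap to evaluate over frequency, consider a stand-in discrete system $A(\omega)\mathbf{x} = \mathbf{f}(\omega)$ with the assumed affine dependence $A(\omega) = K - \omega^2 M$ and $\mathbf{f}(\omega) = i\omega\,\mathbf{b}$, mirroring \eqref{eq:Sec-ProblemStatement-BilinearForm} and \eqref{eq:Sec-ProblemStatement-LinearForm}. The residual of a reduced solution is a fixed set of vectors combined with frequency-dependent scalars, so precomputing their Gram matrix once makes each frequency evaluation independent of the FOM dimension; the matrices and names below are ours.

```python
import numpy as np

# Offline/online residual-norm evaluation over frequency.  With stand-in
# operators A(w) = K - w^2 M, f(w) = i*w*b and a reduced solution V c(w),
#   r(w) = i*w*b - K V c(w) + w^2 M V c(w) = R z(w),
# where R = [b | KV | MV] is frequency-independent.  Precomputing R^T R
# makes each residual-norm evaluation O(m^2) instead of O(n).
rng = np.random.default_rng(1)
n, m = 200, 4
K = rng.standard_normal((n, n))
M = rng.standard_normal((n, n))
b = rng.standard_normal(n)
V, _ = np.linalg.qr(rng.standard_normal((n, m)))  # reduced basis

R = np.hstack([b[:, None], K @ V, M @ V])         # offline: n x (1+2m)
Gram = R.T @ R                                    # offline Gram matrix

def res_norm_online(w, c):
    """O(m^2) residual norm; no n-dimensional vector is formed."""
    z = np.concatenate(([1j * w], -c, (w**2) * c))
    return np.sqrt(np.real(np.conj(z) @ Gram @ z))

def res_norm_direct(w, c):
    """Reference n-dimensional computation, for verification only."""
    r = 1j * w * b - (K - w**2 * M) @ (V @ c)
    return np.linalg.norm(r)

w = 2.3
c = rng.standard_normal(m) + 1j * rng.standard_normal(m)
fast, direct = res_norm_online(w, c), res_norm_direct(w, c)
```

Both evaluations agree to machine precision, while the online one touches only $(1+2m)$-dimensional quantities per frequency point.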
Furthermore, in microwave engineering, resonances appearing in $\mathcal{B}$ are responsible for the target electrical response. As a result, resonant modes arise and the uniqueness of the solution is no longer guaranteed in the band of interest $\mathcal{B}$. The \emph{inf-sup} constant vanishes at the resonance frequencies, giving rise to a near-infinite upper bound for the error in \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-UpperBoundError} near resonances. In addition, the above error estimation leads to unacceptable overestimation of the error even for well-conditioned problems~\cite{morSchWH18}. Given no better choice, the norm of the residual \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-Residual}, which can be straightforwardly computed, has been used as a \emph{heuristic} error estimator \cite{Rewienski2016greedy,delaRubia2014Reliable,morRewLM15,Edlinger2017finite,fotyga2018reliable,MonjedelaRubia2020EFIE,Kouki}.
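The collapse of the \emph{inf-sup} constant at resonance can be reproduced on a small stand-in generalized eigenvalue problem, where the discrete counterpart of $\beta(\omega)$ is the smallest singular value of $K - \omega^2 M$ and the resonances $\omega_n$ are the generalized eigenvalues $K\mathbf{e} = \omega_n^2 M\mathbf{e}$; the matrices below are made-up surrogates, not assembled finite element operators.

```python
import numpy as np

# The discrete inf-sup constant beta(w) = sigma_min(K - w^2 M) collapses
# at the resonances w_n, i.e., the generalized eigenvalues of (K, M).
# K and M are stand-in symmetric positive-definite matrices.
rng = np.random.default_rng(2)
n = 15
Q = rng.standard_normal((n, n))
K = Q @ Q.T + n * np.eye(n)      # stand-in curl-curl stiffness matrix
M = np.eye(n)                    # stand-in mass matrix

eigvals = np.linalg.eigvalsh(K)  # with M = I, the resonances are w_n^2
w_res = np.sqrt(eigvals[0])      # first resonant frequency

def beta(w):
    """Discrete inf-sup constant at frequency w."""
    return np.linalg.svd(K - w**2 * M, compute_uv=False).min()

beta_at_res = beta(w_res)        # essentially zero: bound blows up here
beta_off = beta(0.5 * w_res)     # bounded away from zero off resonance
```

Dividing the residual norm by `beta_at_res` yields an enormous, useless error bound, which is exactly the situation in the frequency bands of practical microwave circuits.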
\section{State Error Estimation Avoiding the \emph{Inf-Sup}-Constant}
\label{Sec-InfSupConstantFree}
The previous section has shown the main role the \emph{inf-sup} constant plays in \emph{a posteriori} error estimation, as well as the inability of \emph{inf-sup} constant-based error estimators to provide a tight error bound near resonant frequencies. Unfortunately, the norm of the residual cannot provide a sharp error estimation at or near resonance frequencies either. We shall elaborate on this later in Section \ref{Sec-Conclusions}. As a result, we are in need of more efficient state error estimators to certify the accuracy of the approximate field solution to the frequency-parameter variational problem \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-ParametricWeakForm}, even in the presence of resonances. Recent efforts have moved towards this goal \cite{morSemZP18,feng2020InfSupConstantFree}. There, instead of computing the \emph{inf-sup} constant, additional dual or residual systems need to be solved to obtain the state error estimator. These additional systems constitute an extra computational effort to certify the accuracy of the approximate field solution. In this work, we focus on fast \emph{a posteriori} state error estimator computation taking advantage of the frequency dependency in the field solution \eqref{eq:Sec-ProblemStatement-SolutionMaxwellFrequencyDependency} for the frequency-parameter problem \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-ParametricWeakForm}.
\subsection{Field Error Frequency Dependency in Electromagnetics}
Given an approximate solution $\mathbf{\tilde E}(\omega) \in \mathcal{H}$ to \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-ParametricWeakForm}, we can study the Fourier series representation of the state error
\begin{equation}
\label{eq:Sec-InfSupConstantFree-StateError}
\boldsymbol{\epsilon}(\omega) := \mathbf{E}(\omega) - \mathbf{\tilde E}(\omega) \in \mathcal{H} \text{,}
\end{equation}
making its frequency dependency explicit. The state error \eqref{eq:Sec-InfSupConstantFree-StateError} satisfies the frequency-parameter variational problem,
\begin{equation}
\label{eq:Sec-InfSupConstantFree-ParametricWeakFormStateError}
\begin{aligned}
\text{find}~&\boldsymbol{\epsilon}(\omega) \in \mathcal{H}~\text{such that} \\
&a(\boldsymbol{\epsilon}(\omega), \mathbf{v}; \omega) = f^{\boldsymbol{\epsilon}}(\mathbf{v}; \omega)~\forall \mathbf{v} \in \mathcal{H},~\forall \omega \in \mathcal{B} \text{.}
\end{aligned}
\end{equation}
The frequency-parameter bilinear form $a(\cdot, \cdot; \omega)$ is already defined in \eqref{eq:Sec-ProblemStatement-BilinearForm}. The frequency-parameter linear form $f^{\boldsymbol{\epsilon}}(\cdot; \omega)$ is the residual functional $r(\mathbf{\tilde E}(\omega), \cdot; \omega) $ detailed in \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-Residual},
\begin{equation}
\label{eq:Sec-InfSupConstantFree-LinearForm}
f^{\boldsymbol{\epsilon}}(\mathbf{v}; \omega) = f(\mathbf{v}; \omega )-a(\mathbf{\tilde E}(\omega), \mathbf{v}; \omega) = i \omega \int\limits_{\partial \Omega} \mathbf{J}^{\boldsymbol{\epsilon}} \cdot \mathbf{v}~ds \text{,}
\end{equation}
where $\mathbf{J}^{\boldsymbol{\epsilon}} \in H^{-\frac{1}{2}}(div, \partial \Omega)$ can be identified as a residual error current. By a reasoning analogous to that in Section \ref{Sec-ProblemStatement}, we get
\begin{equation}
\label{eq:Sec-InfSupConstantFree-SolutionErrorMaxwellFrequencyDependency}
\begin{aligned}
\text{if } \omega^2 &\neq \omega_n^2\text{,} ~ \boldsymbol{\epsilon}(\omega) = \frac{1}{i\omega} \mathbf{F}^{\boldsymbol{\epsilon}}_0 + i \omega \sum \limits_{n=1}^{\infty} \frac{A^{\boldsymbol{\epsilon}}_n}{1 - \frac{\omega^2}{\omega_n^2} } \mathbf{e}_n,\\
\text{if } \omega^2 &= \omega_n^2\text{,} \\
\boldsymbol{\epsilon}(\omega) &= \frac{1}{i\omega} \mathbf{F}^{\boldsymbol{\epsilon}}_0 + i \omega \sum \limits_{ \omega_n^2 = \omega^2 } a^{\boldsymbol{\epsilon}}_n \mathbf{e}_n + i \omega \sum \limits_{\omega_n^2 \neq \omega^2} \frac{A^{\boldsymbol{\epsilon}}_n}{ 1 - \frac{\omega^2}{\omega_n^2} } \mathbf{e}_n \text{.}
\end{aligned}
\end{equation}
$\mathbf{F}^{\boldsymbol{\epsilon}}_0 \in \mathcal{H}(curl0, \Omega)$ is related to the Riesz representative for the stationary error field. The same set of eigenmodes $\{\mathbf{e}_n~|~n \in~\mathbb{N}\} \subset \mathcal{V}$ along with their corresponding resonant frequencies $\omega_n$ as in \eqref{eq:Sec-ProblemStatement-SolutionMaxwellFrequencyDependency} can be used. $A^{\boldsymbol{\epsilon}}_n$ are coupling coefficients for the residual error current $\mathbf{J}^{\boldsymbol{\epsilon}}$ to the corresponding resonant mode $\mathbf{e}_n$, namely,
\begin{equation}
\label{eq:Sec-InfSupConstantFree-ErrorExcitationCouplingCoefficient}
A^{\boldsymbol{\epsilon}}_n = \langle \mathbf{J}^{\boldsymbol{\epsilon}}, \mathbf{n} \times \overline{\mathbf{e}}_n \times \mathbf{n} \rangle \text{.}
\end{equation}
In addition, $a^{\boldsymbol{\epsilon}}_n$ are arbitrary coefficients since there is no unique solution at resonance.
Having a closer look at equations \eqref{eq:Sec-ProblemStatement-SolutionMaxwellFrequencyDependency} and \eqref{eq:Sec-InfSupConstantFree-SolutionErrorMaxwellFrequencyDependency}, we realize that the solutions to both the original and residual variational problems share the same frequency-parameter behavior and admit a similar Fourier series representation with the same frequency pattern. We may then be tempted to use the same representation basis to find an approximate solution to both the original and residual variational problems \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-ParametricWeakForm} and \eqref{eq:Sec-InfSupConstantFree-ParametricWeakFormStateError}. However, this must be done carefully to avoid concluding that the state error $\boldsymbol{\epsilon}(\omega)$ is identically zero even when the approximate field $\mathbf{\tilde E}(\omega)$ may still be far away from the true solution $\mathbf{E}(\omega)$. We will get back to this point later in Section \ref{Sec-InfSupConstantFree-SubSec-InBandEigenmodes}. In order to avoid this misleading scenario, \cite{feng2020InfSupConstantFree} proposes to approximate these variational problems by applying different Galerkin projection spaces $\mathcal{H}_m, \mathcal{H}^{\boldsymbol{\epsilon}}_m:=\mathcal{H}_m+\mathcal{H}^\mathbf{r}_m \subset \mathcal{H}$ to the original and residual problems, respectively, giving rise to corresponding reduced systems which can be solved with ease, namely,
\begin{subequations}
\label{eq:Sec-InfSupConstantFree-GalerkinProjection}
\begin{align}
\label{eq:Sec-InfSupConstantFree-GalerkinProjectionField}
\text{find}~&\mathbf{\tilde E}(\omega) \in \mathcal{H}_m~\text{such that} \\
&a(\mathbf{\tilde E}(\omega), \mathbf{v}; \omega) = f(\mathbf{v}; \omega)~\forall \mathbf{v} \in \mathcal{H}_m,~\forall \omega \in \mathcal{B} \text{,} \nonumber \\
\label{eq:Sec-InfSupConstantFree-GalerkinProjectionError}
\text{and find}~&\boldsymbol{\tilde \epsilon}(\omega) \in \mathcal{H}^{\boldsymbol{\epsilon}}_m~\text{such that} \\
&a(\boldsymbol{\tilde \epsilon}(\omega), \mathbf{v}; \omega) = f^{\boldsymbol{\epsilon}}(\mathbf{v}; \omega)~\forall \mathbf{v} \in \mathcal{H}^{\boldsymbol{\epsilon}}_m,~\forall \omega \in \mathcal{B} \text{.} \nonumber
\end{align}
\end{subequations}
Algorithm \ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator} adaptively builds up the reduced-basis spaces, i.e., the Galerkin projection spaces, in a greedy framework. As the number of iterations in this procedure increases, $\boldsymbol{\tilde \epsilon}(\omega)$ approximates the true state error $\boldsymbol{\epsilon}(\omega)$ better and better, $\boldsymbol{\tilde \epsilon}(\omega) \approx \boldsymbol{\epsilon}(\omega)$, and $\|\boldsymbol{\tilde \epsilon}(\omega)\|$ performs as a sharp \emph{a posteriori} state error estimator. We refer to \cite{feng2020InfSupConstantFree} for the details. However, it should be pointed out that, in Algorithm \ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator}, the dimension of the Galerkin projection space for the residual problem $\mathcal{H}^{\boldsymbol{\epsilon}}_m$ may double that of the Galerkin projection space for the original problem $\mathcal{H}_m$ at each iteration. Further, distinct sets of parameter samples are used to solve the original and residual problems \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-ParametricWeakForm} and \eqref{eq:Sec-InfSupConstantFree-ParametricWeakFormStateError}, which must be done independently, in spite of the same dynamics being observed in both variational problems (see \eqref{eq:Sec-ProblemStatement-SolutionMaxwellFrequencyDependency} and \eqref{eq:Sec-InfSupConstantFree-SolutionErrorMaxwellFrequencyDependency}). This may make the procedure rather time-consuming. In this work, we focus on keeping a low computational effort, taking advantage of the observations in Sections \ref{Sec-ProblemStatement} and \ref{Sec-InfSupConstantFree} to carry out a reliable fast frequency sweep analysis.
\begin{algorithm}[t!]
\caption{Adaptive construction of the Galerkin projection spaces $\mathcal{H}_m, \mathcal{H}^{\boldsymbol{\epsilon}}_m$ for state error estimation \cite{feng2020InfSupConstantFree}.}
\label{alg:Sec-InfSupConstantFree-ROMErrorEstimator}
\begin{algorithmic}[1]
\REQUIRE Frequency band of interest $\mathcal{B}:=[\omega_\text{min}, \omega_\text{max} ]$, tolerance $\texttt{tol} > 0$ as the acceptable state error.
\ENSURE $\mathcal{H}_m$ to ensure $\texttt{tol}$ state error in \eqref{eq:Sec-InfSupConstantFree-GalerkinProjection}.
\STATE Initialize $\mathcal{H}_m = \{\mathbf{0}\}$, $\mathcal{H}^{\mathbf{r}}_m = \{\mathbf{0}\}$, $\xi = \texttt{tol} + 1$. Choose different samples $\omega^*$ and $\omega^*_{\boldsymbol{\epsilon}}$ randomly taken from $\mathcal{B}$.
\WHILE{$\xi > \texttt{tol}$}
\STATE Solve for $\mathbf{E}(\omega^*)$ in \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-ParametricWeakForm}, orthonormalize and enrich $\mathcal{H}_m$: $\mathcal{H}_m = \mathcal{H}_m + \text{span}\{\mathbf{E}(\omega^*)\} $.
\STATE Using $\mathcal{H}_m$, solve for the approximate field $\mathbf{\tilde E}(\omega)$ in \eqref{eq:Sec-InfSupConstantFree-GalerkinProjectionField}.
\STATE Solve for $\boldsymbol{\epsilon}(\omega^*_{\boldsymbol{\epsilon}})$ in \eqref{eq:Sec-InfSupConstantFree-ParametricWeakFormStateError}, orthonormalize and enrich $\mathcal{H}^{\mathbf{r}}_m$: $\mathcal{H}^{\mathbf{r}}_m = \mathcal{H}^{\mathbf{r}}_m + \text{span}\{\boldsymbol{\epsilon}(\omega^*_{\boldsymbol{\epsilon}})\} $.
\STATE Form $\mathcal{H}^{\boldsymbol{\epsilon}}_m$ and orthonormalize: $\mathcal{H}^{\boldsymbol{\epsilon}}_m = \mathcal{H}_m + \mathcal{H}^{\mathbf{r}}_m$.
\STATE Using $\mathcal{H}^{\boldsymbol{\epsilon}}_m$, solve for the state error estimation $\boldsymbol{\tilde \epsilon}(\omega)$ in \eqref{eq:Sec-InfSupConstantFree-GalerkinProjectionError}.
\STATE Choose the next sample $\omega^*$ from $\mathcal{B}$ as $$\omega^*=\arg\max\limits_{\omega\in \mathcal{B}}\|\boldsymbol{\tilde \epsilon}(\omega)\|_{\mathcal{H}}.$$
\STATE Choose the next sample $\omega^*_{\boldsymbol{\epsilon}}$ from $\mathcal{B}$ following $$\omega^*_{\boldsymbol{\epsilon}}=\arg\max\limits_{\omega \in \mathcal{B}} \|r(\boldsymbol{\tilde \epsilon}(\omega), \cdot; \omega)\|_{\mathcal{H}^{\prime}}.$$
Here $r(\boldsymbol{\tilde \epsilon}(\omega), \cdot; \omega)$ is the residual functional introduced by $\boldsymbol{\tilde \epsilon}(\omega)$ in \eqref{eq:Sec-InfSupConstantFree-ParametricWeakFormStateError}, namely,
$$r(\boldsymbol{\tilde \epsilon}(\omega), \mathbf{v}; \omega) := f^{\boldsymbol{\epsilon}}(\mathbf{v}; \omega )-a(\boldsymbol{\tilde \epsilon}(\omega), \mathbf{v}; \omega),\forall \mathbf{v} \in \mathcal{H}.$$
\STATE $\xi = \|\boldsymbol{\tilde \epsilon}(\omega^*)\|_{\mathcal{H}}.$
\label{step:Sec-InfSupConstantFree-Alg-ROMErrorEstimator-StoppingCriterion}
\ENDWHILE
\STATE Use $\mathcal{H}_m$ to solve \eqref{eq:Sec-InfSupConstantFree-GalerkinProjectionField}.
\end{algorithmic}
\end{algorithm}
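The greedy loop of Algorithm \ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator} can be sketched in a few dozen lines on a stand-in discrete system $A(\omega)\mathbf{x} = \mathbf{f}(\omega)$ with $A(\omega) = K - \omega^2 M$ and $\mathbf{f}(\omega) = i\omega\,\mathbf{b}$. The toy matrices and helper names (`enrich`, `reduced_solve`) are ours; only the loop structure mirrors the algorithm.

```python
import numpy as np

# Linear-algebra sketch of the greedy loop in Algorithm 1 on a stand-in
# system A(w) x = f(w), A(w) = K - w^2 M, f(w) = i*w*b.  The matrices are
# made-up; resonances lie above the band, so A(w) stays definite on it.
rng = np.random.default_rng(3)
n = 40
Q = rng.standard_normal((n, n))
K = Q @ Q.T + n * np.eye(n)          # stand-in stiffness matrix
M = np.eye(n)                        # stand-in mass matrix
b = rng.standard_normal(n)
band = np.linspace(1.0, 5.0, 101)    # frequency band B

A = lambda w: K - w**2 * M
f = lambda w: 1j * w * b

def enrich(V, v):
    """Orthonormalize v against the columns of V and append it."""
    v = v - V @ (V.conj().T @ v)
    nv = np.linalg.norm(v)
    return np.hstack([V, (v / nv)[:, None]]) if nv > 1e-12 else V

def reduced_solve(V, w, rhs):
    """Galerkin-projected solve on span(V)."""
    c = np.linalg.solve(V.conj().T @ A(w) @ V, V.conj().T @ rhs)
    return V @ c

tol = 1e-6
V = np.empty((n, 0), dtype=complex)   # H_m
Vr = np.empty((n, 0), dtype=complex)  # H_m^r
w_star, w_eps = band[0], band[-1]
for _ in range(n):
    V = enrich(V, np.linalg.solve(A(w_star), f(w_star)))         # FOM solve
    residual = lambda w: f(w) - A(w) @ reduced_solve(V, w, f(w))
    Vr = enrich(Vr, np.linalg.solve(A(w_eps), residual(w_eps)))  # error solve
    Ve = V                                                       # H_m^eps
    for k in range(Vr.shape[1]):
        Ve = enrich(Ve, Vr[:, k])
    ests = np.array([np.linalg.norm(reduced_solve(Ve, w, residual(w)))
                     for w in band])
    res_norms = np.array([np.linalg.norm(residual(w)) for w in band])
    w_star, w_eps = band[np.argmax(ests)], band[np.argmax(res_norms)]
    est = ests.max()
    if est <= tol:
        break
```

Each iteration pays for two full-order solves (one for the field, one for the error system), which is precisely the cost this paper seeks to reduce.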
\subsection{In-Band Eigenmodes in the Reduced-Basis Space}
\label{Sec-InfSupConstantFree-SubSec-InBandEigenmodes}
The works in \cite{szypulski2020SSMMM,delaRubia2018CRBM} suggest that a good approximation basis for the frequency-parameter problem \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-ParametricWeakForm}, i.e., a good reduced-basis space, should include the resonant modes falling in the band of analysis $\mathcal{B}$. These resonant modes constitute the dominant basis representing the electric field $\mathbf{E}(\omega)$ in $\mathcal{B}$ (see \eqref{eq:Sec-ProblemStatement-SolutionMaxwellFrequencyDependency}). In this work, we use this basis not only for the original problem \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-ParametricWeakForm}, but also for the residual problem \eqref{eq:Sec-InfSupConstantFree-ParametricWeakFormStateError}. For example,
\begin{subequations}
\label{eq:Sec-InfSupConstantFree-SubSec-InBandEigenmodes-EigenBasis}
\begin{align}
\label{eq:Sec-InfSupConstantFree-SubSec-InBandEigenmodes-EigenBasisField}
\mathcal{H}_m &= \text{span}\{\mathbf{e}_n~|~\omega_n \in~\mathcal{B}\}:=\mathcal{V}_\mathcal{B},\\
\label{eq:Sec-InfSupConstantFree-SubSec-InBandEigenmodes-EigenBasisError}
\mathcal{H}^{\boldsymbol{\epsilon}}_m &= \mathcal{H}_m = \mathcal{V}_\mathcal{B} \text{.}
\end{align}
\end{subequations}
In this situation, we can find the solution for the reduced systems in \eqref{eq:Sec-InfSupConstantFree-GalerkinProjection} in closed form and get some further insights, viz.,
\begin{subequations}
\label{eq:Sec-InfSupConstantFree-SubSec-InBandEigenmodes-FieldSolution}
\begin{align}
\label{eq:Sec-InfSupConstantFree-SubSec-InBandEigenmodes-FieldSolution1}
&\mathbf{\tilde E}(\omega) = i \omega \sum \limits_{\omega_n^2 \in \mathcal{B}_2} \frac{A_n}{1 - \frac{\omega^2}{\omega_n^2}} \mathbf{e}_n \text{, if } \omega^2 \neq \omega_n^2 \in \mathcal{B}_2,\\
\label{eq:Sec-InfSupConstantFree-SubSec-InBandEigenmodes-FieldSolution2}
&\mathbf{\tilde E}(\omega) = i \omega \sum \limits_{ \omega_n^2 = \omega^2 } a_n \mathbf{e}_n + i \omega \sum \limits_{\omega_n^2 \in \mathcal{B}_2 \setminus \{\omega^2\}} \frac{A_n}{1 - \frac{\omega^2}{\omega_n^2}} \mathbf{e}_n \text{,} \\ &\text{ if } \omega^2 = \omega_n^2 \in \mathcal{B}_2 \text{,}
\end{align}
\end{subequations}
and
\begin{subequations}
\label{eq:Sec-InfSupConstantFree-SubSec-InBandEigenmodes-ErrorSolution}
\begin{align}
\label{eq:Sec-InfSupConstantFree-SubSec-InBandEigenmodes-ErrorSolution1}
&\boldsymbol{\tilde \epsilon}(\omega) = \mathbf{0} \text{, if } \omega^2 \neq \omega_n^2 \in \mathcal{B}_2,\\
\label{eq:Sec-InfSupConstantFree-SubSec-InBandEigenmodes-ErrorSolution2}
&\boldsymbol{\tilde \epsilon}(\omega) = i \omega \sum \limits_{ \omega_n^2 = \omega^2 } a^{\boldsymbol{\epsilon}}_n \mathbf{e}_n \text{, if } \omega^2 = \omega_n^2 \in \mathcal{B}_2 \text{.}
\end{align}
\end{subequations}
$\mathcal{B}_2$ stands for $[\omega^2_\text{min},\omega^2_\text{max}]$. As \eqref{eq:Sec-InfSupConstantFree-SubSec-InBandEigenmodes-ErrorSolution} shows, this is the worst-case scenario. Although the in-band eigenmode basis spanning the eigenspace $\mathcal{V}_{\mathcal{B}}$ is the best basis to capture the fundamental dynamics of the electric field $\mathbf{E}(\omega)$ in the band of analysis $\mathcal{B}$, and might be expected to work equally well for the residual system, the opposite happens: the approximate state error $\boldsymbol{\tilde \epsilon}(\omega)$ is, apart from the in-band resonances, identically zero throughout the whole electromagnetic spectrum, $\forall \omega \in \mathbb{R}$ (see \eqref{eq:Sec-InfSupConstantFree-SubSec-InBandEigenmodes-ErrorSolution}), even though the actual state error is not zero. This is inherent to how Galerkin approximation works: the residual of the Galerkin solution is orthogonal to the testing space, so no error can be identified within the projection space used in \eqref{eq:Sec-InfSupConstantFree-SubSec-InBandEigenmodes-EigenBasis}; the state error, which is not identically zero, remains entirely outside it. This is the rationale behind the use of different projection spaces for the reduced problems \eqref{eq:Sec-InfSupConstantFree-GalerkinProjection} in \cite{feng2020InfSupConstantFree}, despite the fact that two similar problems have to be solved independently at the same time. Using two different projection spaces for the reduced systems increases the chance to identify the true state error in the approximation. In this work, we aim to achieve this goal while keeping the computational burden much lower.
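This Galerkin-orthogonality mechanism is easy to reproduce on a stand-in discrete system: reusing the trial space of the original problem as the projection space for the error equation returns an identically zero error estimate, even though the true error is far from small. The matrices below are made-up surrogates.

```python
import numpy as np

# Why reusing the same Galerkin space for the error system returns zero:
# the residual of a Galerkin solution is orthogonal to the test space, so
# the projected right-hand side of the error equation vanishes identically.
rng = np.random.default_rng(4)
n, m = 30, 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # stand-in system matrix
f = rng.standard_normal(n)
V, _ = np.linalg.qr(rng.standard_normal((n, m)))  # reduced basis

c = np.linalg.solve(V.T @ A @ V, V.T @ f)         # Galerkin solution
x_tilde = V @ c
r = f - A @ x_tilde                               # residual: NOT zero

true_err = np.linalg.norm(np.linalg.solve(A, f) - x_tilde)
c_eps = np.linalg.solve(V.T @ A @ V, V.T @ r)     # error eq. on the SAME space
est_err = np.linalg.norm(V @ c_eps)               # identically zero estimate
```

The residual `r` is sizable and so is `true_err`, yet `V.T @ r` cancels to machine precision by construction, so the same-space error estimate is useless.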
\subsection{Enhanced Reduced-Basis Space}
\label{Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace}
In-band eigenmodes are not enough to ensure a good approximation to the electric field $\mathbf{E}(\omega)$ in $\mathcal{B}$. In particular, only resonant phenomena are strictly captured by the in-band eigenmode basis in the eigenspace $\mathcal{V}_{\mathcal{B}}$, while other electromagnetic phenomena, such as direct source-to-load couplings, are missing. This has already been shown in Section \ref{Sec-InfSupConstantFree-SubSec-InBandEigenmodes}. As a result, the reduced-basis space has to be enriched with snapshots of the electric field $\mathbf{E}(w_k)$ in the frequency band $\mathcal{B}$, namely,
\begin{subequations}
\label{eq:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-RBS}
\begin{align}
\label{eq:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-RBS1}
\mathcal{H}_m &= \mathcal{V}_\mathcal{B} + \text{span} \{\mathbf{E}(w_k)~|~w_k \in~\mathcal{B}\},\\
\label{eq:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-RBS2}
\mathcal{V}_\mathcal{B} &= \text{span}\{\mathbf{e}_n~|~\omega_n \in~\mathcal{B}\},
\end{align}
\end{subequations}
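In matrix terms, the sum of spaces in \eqref{eq:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-RBS} amounts to appending orthogonalized snapshots to the eigenmode basis. A minimal sketch (a hypothetical NumPy helper using single-pass Gram--Schmidt; not our production code) could read:

```python
import numpy as np

def enrich(V, snapshots, drop_tol=1e-12):
    """Orthonormal basis for V_B + span{E(w_k)}: append each snapshot after
    removing its component in the current space (single-pass Gram-Schmidt)."""
    for E in snapshots:
        e = E / np.linalg.norm(E)
        e = e - V @ (V.T @ e)                 # new-information part of E
        if np.linalg.norm(e) > drop_tol:      # skip numerically dependent ones
            V = np.column_stack([V, e / np.linalg.norm(e)])
    return V
```

Snapshots that are (numerically) contained in the current space contribute no new information and are simply discarded, so the basis stays orthonormal and well conditioned.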
giving rise to an enhanced reduced-basis space that ensures fast convergence to the electric field $\mathbf{E}(\omega)$ in $\mathcal{B}$ \cite{delaRubia2018CRBM}. In our situation, one question remains: which reduced-basis space $\mathcal{H}^{\boldsymbol{\epsilon}}_m$ should be used for the residual problem to obtain an approximation $\boldsymbol{\tilde \epsilon}(\omega)$ to the state error while keeping a low computational effort? We have already shown in Section \ref{Sec-InfSupConstantFree-SubSec-InBandEigenmodes} that reusing the reduced-basis space of the original problem, i.e., $\mathcal{H}^{\boldsymbol{\epsilon}}_m = \mathcal{H}_m$, keeps the computational burden low but is not a good choice (see \eqref{eq:Sec-InfSupConstantFree-SubSec-InBandEigenmodes-EigenBasis}--\eqref{eq:Sec-InfSupConstantFree-SubSec-InBandEigenmodes-ErrorSolution}). Conversely, building a completely different reduced-basis space for the residual system yields good approximation results, but the computational cost increases substantially \cite{feng2020InfSupConstantFree}. We therefore follow a compromise: we allow the reduced-basis space $\mathcal{H}^{\boldsymbol{\epsilon}}_m$ for the residual problem to differ from the reduced-basis space $\mathcal{H}_m$ for the original problem, but by only one additional basis vector. As a result, the computational cost stays low, since only one additional solution needs to be computed in the whole process. This cheap approach may cause $\boldsymbol{\tilde \epsilon}(\omega)$ to underestimate the actual state error $\boldsymbol{\epsilon}(\omega)$. This is not critical, as will become clear later.
What is essential, however, is to choose an additional basis vector that allows us to monitor the state error by means of the reduced residual system without interfering with the greedy algorithm, i.e., without introducing unwanted frequency modulation into the snapshot selection for the reduced original problem. In other words, we must prevent the residual system solution from biasing the greedy procedure used to solve the reduced original problem. Otherwise, the snapshots selected in the greedy process will be a poor choice for approximation purposes.
We propose to add the stationary electric field $\mathbf{F}_0$ as the only additional basis vector in $\mathcal{H}^{\boldsymbol{\epsilon}}_m$. Since $\mathbf{F}_0$ is missing in $\mathcal{H}_m$, it contributes to the error of the ROM, so it is reasonable to add it to the reduced-basis space $\mathcal{H}^{\boldsymbol{\epsilon}}_m$ for the approximate error $\boldsymbol{\tilde \epsilon}(\omega)$. Furthermore, $\mathbf{F}_0$ is not only orthogonal to the eigenspace $\mathcal{V}_{\mathcal{B}}$, but also has an almost flat, smooth influence on the field $\mathbf{E}(\omega)$ throughout the frequency band of interest $\mathcal{B}$, due to its $1/\omega$ frequency behavior, cf. \eqref{eq:Sec-ProblemStatement-SolutionMaxwellFrequencyDependency}. As a consequence, its contribution to the error is small and does not spoil the desired property of $\boldsymbol{\tilde \epsilon}(\omega)$, namely, identifying the information missing in the reduced original system. Thus, we can ensure that we are not modulating the adaptive sampling in the greedy algorithm. In summary, the space obtained by adding the basis vector $\mathbf{F}_0$ to the reduced-basis space $\mathcal{H}_m$ of the original problem is a suitable candidate, since it exhibits the desired behavior as the testing space $\mathcal{H}^{\boldsymbol{\epsilon}}_m$ for the residual problem. Putting everything together, we use the following spaces:
\begin{subequations}
\label{eq:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-ResidualRBS}
\begin{align}
\label{eq:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-ResidualRBS2}
\mathcal{H}_m &= \mathcal{V}_\mathcal{B} + \text{span} \{\mathbf{E}(w_k)~|~w_k \in~\mathcal{B}\},\\
\label{eq:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-ResidualRBS3}
\mathcal{V}_\mathcal{B} &= \text{span}\{\mathbf{e}_n~|~\omega_n \in~\mathcal{B}\},\\
\label{eq:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-ResidualRBS1}
\mathcal{H}^{\boldsymbol{\epsilon}}_m &= \mathcal{H}_m + \text{span} \{ \mathbf{F}_0 \} \text{.}
\end{align}
\end{subequations}
Unfortunately, the approximate state error $\boldsymbol{\tilde \epsilon}(\omega)$ obtained from this cheap statics-based reduced residual problem, while accurately identifying the missing information in the original problem, is \emph{not} a reliable indicator of the true state error $\boldsymbol{\epsilon}(\omega)$, owing to its implicit underestimation; it can only be considered a \emph{rough} estimator compared to the one in Algorithm \ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator}. As a result, a different strategy is needed to estimate the actual state error in the system.
We propose Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator} to carry out a fast \emph{a posteriori} state error estimation for reliable frequency sweep analysis in microwave devices. The reduced-basis space is adaptively built up by means of a greedy algorithm based on this state error estimation. Further, to overcome the lack of reliability of the rough error estimator $\boldsymbol{\tilde \epsilon}(\omega)$ in case it is used as the stopping criterion, a density stopping criterion \cite{delaRubia2018CRBM} is preferred in Step \ref{step:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-Alg-FastROMErrorEstimator-StoppingCriterion}. In contrast to the error estimator $\boldsymbol{\tilde \epsilon}(\omega^*)$ in Step \ref{step:Sec-InfSupConstantFree-Alg-ROMErrorEstimator-StoppingCriterion} of Algorithm \ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator}, the true error at $\omega^*$ is computed in Step \ref{step:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-Alg-FastROMErrorEstimator-StoppingCriterion} of Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator} and is used as the stopping criterion. Note that the true error $\mathbf{E}(\omega^*)- \mathbf{\tilde E}(\omega^*)$ at $\omega^*$ is actually equivalent to the error $\mathbf{e}_\perp(\omega^*)$ defined in Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-NewInformationNorm} and is readily available from Step \ref{step:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-Alg-FastROMErrorEstimator-NewSnapshot} in Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator}, where $\mathbf{E}(\omega^*)$ is orthogonalized against the existing basis vectors in $\mathcal{H}_m$ before being added to $\mathcal{H}_m$. 
In other words, the new error indicator in Step \ref{step:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-Alg-FastROMErrorEstimator-StoppingCriterion} of Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator} measures how much new information the greedy field sample $\mathbf{E}(\omega^*)$ adds to the reduced-basis space. The procedure stops as soon as there is nothing new to add, within a tolerance denoted by \texttt{tol}, at which point the reduced-basis space can be considered a dense enough approximation space that accurately describes the field solution $\mathbf{E}(\omega)$ throughout the band of interest $\mathcal{B}$.
In summary, Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator} makes use of two error indicators: the rough error estimator $\boldsymbol{\tilde \epsilon}(\omega)$ employed in Step \ref{step:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-Alg-FastROMErrorEstimator-RoughErrorEstimator}, based on a fixed $\mathcal{H}^{\mathbf{r}}_m$, and the indicator used in Step \ref{step:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-Alg-FastROMErrorEstimator-StoppingCriterion}, namely, the true error at the sample $\omega^*$ selected by the rough error estimator. It is therefore essential that the adaptive greedy field snapshots be properly selected in the band of analysis; otherwise the procedure stops with no control over the actual error in the ROM. Fortunately, the rough error estimator $\boldsymbol{\tilde \epsilon}(\omega)$ is able to identify the information missing in the system, that is, it captures the trend of the error during the greedy algorithm, even though it may underestimate its actual value. Therefore, we expect good performance from the proposed methodology. Finally, we present some remarks about Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator}.
\begin{algorithm}[tbp]
\caption{Fast \emph{a posteriori} state error estimation for reliable frequency-parameter MOR.}
\label{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator}
\begin{algorithmic}[1]
\REQUIRE Frequency band of interest $\mathcal{B}:=[\omega_\text{min}, \omega_\text{max} ]$, tolerance $\texttt{tol} > 0$ as the acceptable state error.
\ENSURE $\mathcal{H}_m$ to ensure $\texttt{tol}$ state error in \eqref{eq:Sec-InfSupConstantFree-GalerkinProjection}.
\STATE Solve for the in-band eigenmodes $\mathbf{e}_n$ in \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-ParametricWeakForm}. Form the in-band eigenspace $\mathcal{V}_{\mathcal{B}}$: $\mathcal{V}_\mathcal{B} = \text{span}\{\mathbf{e}_n~|~\omega_n \in~\mathcal{B}\}$.
\STATE Solve for the stationary solution $\mathbf{F}_0$ in \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-ParametricWeakForm}.
\STATE Initialize $\mathcal{H}_m = \mathcal{V}_\mathcal{B}$, $\mathcal{H}^{\mathbf{r}}_m = \text{span}\{ \mathbf{F}_0 \}$, $\xi = \texttt{tol} + 1$. Choose a sample $\omega^*$ at random from the end points of $\mathcal{B}$.
\label{step:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-Alg-FastROMErrorEstimator-EndPoints}
\STATE Solve for $\mathbf{E}(\omega^*)$ in \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-ParametricWeakForm} and update $\mathcal{H}_m$: $\mathcal{H}_m = \mathcal{H}_m + \text{span}\{\mathbf{E}(\omega^*)\} $.
\WHILE{$\xi > \texttt{tol}$}
\STATE Compute the approximate field $\mathbf{\tilde E}(\omega)$ in \eqref{eq:Sec-InfSupConstantFree-GalerkinProjectionField}.
\STATE Form $\mathcal{H}^{\boldsymbol{\epsilon}}_m$: $\mathcal{H}^{\boldsymbol{\epsilon}}_m = \mathcal{H}_m + \mathcal{H}^{\mathbf{r}}_m$.
\STATE Compute the state error estimation $\boldsymbol{\tilde \epsilon}(\omega)$ in \eqref{eq:Sec-InfSupConstantFree-GalerkinProjectionError}.
\label{step:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-Alg-FastROMErrorEstimator-RoughErrorEstimator}
\STATE Choose the next sample $\omega^*$ from $\mathcal{B}$ as $$\omega^*=\arg\max\limits_{\omega\in \mathcal{B}}\|\boldsymbol{\tilde \epsilon}(\omega)\|_{\mathcal{H}}.$$
\STATE Compute $\mathbf{E}(\omega^*)$ in \eqref{eq:Sec-ProblemStatement-Subsec-StandarAPosterioriError-ParametricWeakForm} and update $\mathcal{H}_m$: $\mathcal{H}_m = \mathcal{H}_m + \text{span}\{\mathbf{E}(\omega^*)\} $.
\label{step:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-Alg-FastROMErrorEstimator-NewSnapshot}
\STATE $\xi = \|\mathbf{e}_\perp(\omega^*)\|_{\mathcal{H}} = \|\mathbf{E}(\omega^*)\|_{\nu}.$
\label{step:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-Alg-FastROMErrorEstimator-StoppingCriterion}
\ENDWHILE
\STATE Use $\mathcal{H}_m$ to solve \eqref{eq:Sec-InfSupConstantFree-GalerkinProjectionField}.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[tbp]
\caption{Evaluation of the new information added to the reduced-basis space $\mathcal{H}_m$ by the field sample $\mathbf{E}$ \cite{delaRubia2018CRBM}.}
\label{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-NewInformationNorm}
\begin{algorithmic}[1]
\REQUIRE Electric field $\mathbf{E}$ and reduced-basis space $\mathcal{H}_m$.
\ENSURE Norm $\|\mathbf{E}\|_{\nu}$ of the field $\mathbf{E}$ indicating the new information added to the reduced-basis space $\mathcal{H}_m$.
\STATE Normalize $\mathbf{E}$: $\mathbf{e} = \mathbf{E}/ \|\mathbf{E}\|_{\mathcal{H}}$.
\STATE Project $\mathbf{e}$ onto $\mathcal{H}_m$. Decompose $\mathbf{e}$ into its projection onto $\mathcal{H}_m$ and its orthogonal complement: $\mathbf{e} = \mathbf{e}_\parallel + \mathbf{e}_\perp $.
\STATE Set $\|\mathbf{E}\|_{\nu} = \|\mathbf{e} - \mathbf{e}_\parallel\|_{\mathcal{H}} = \|\mathbf{e}_\perp\|_{\mathcal{H}}$.
\end{algorithmic}
\end{algorithm}
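To fix ideas, the overall loop of Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator}, together with the new-information norm of Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-NewInformationNorm}, can be sketched on a generic parametric linear system. All names (\texttt{A\_of}, \texttt{b\_of}, \texttt{f0}, etc.) are hypothetical, dense solves replace the FEM machinery, and the Euclidean norm stands in for $\|\cdot\|_{\mathcal{H}}$; this illustrates the structure of the method, not our implementation:

```python
import numpy as np

def new_information_norm(E, V):
    # Norm of the component of the normalized field outside span(V)
    e = E / np.linalg.norm(E)
    return np.linalg.norm(e - V @ (V.T @ e))

def greedy_sweep(A_of, b_of, freqs, V_eig, f0, tol=1e-7, max_iter=30):
    """Sketch of the proposed greedy MOR loop on a system A(w) x = b(w).

    V_eig: (possibly empty) in-band eigenmode basis; f0: stationary solution
    used to enrich the residual projection space by a single extra vector."""
    V = V_eig
    w_star = freqs[0]                     # sample an end point of the band
    for _ in range(max_iter):
        E = np.linalg.solve(A_of(w_star), b_of(w_star))  # full-order snapshot
        xi = new_information_norm(E, V)                  # stopping indicator
        if xi < tol:
            break
        e = E / np.linalg.norm(E)
        e_perp = e - V @ (V.T @ e)
        V = np.column_stack([V, e_perp / np.linalg.norm(e_perp)])
        # residual projection space: V enriched by ONE extra vector (F0)
        g = f0 - V @ (V.T @ f0)
        W = np.column_stack([V, g / np.linalg.norm(g)])
        # rough state-error estimate over the band; next sample = maximizer
        errs = []
        for w in freqs:
            A = A_of(w)
            x_r = V @ np.linalg.solve(V.T @ A @ V, V.T @ b_of(w))
            r = b_of(w) - A @ x_r
            eps = W @ np.linalg.solve(W.T @ A @ W, W.T @ r)
            errs.append(np.linalg.norm(eps))
        w_star = freqs[int(np.argmax(errs))]
    return V
```

Note that, exactly as in the algorithm, the stopping check uses the true new information carried by the latest snapshot, while the rough enriched-space estimate is used only to pick the next sample.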
\begin{remark}
Once the in-band eigenmode basis is completed, a sample randomly chosen from the end points of the band of analysis $\mathcal{B}$ is taken to enrich the eigenbasis in Step \ref{step:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-Alg-FastROMErrorEstimator-EndPoints}. At this point, neither the residual norm nor the state error norm can indicate the best sample to choose, since both estimators are constant throughout the whole spectrum, $\forall \omega \in \mathbb{R}$. We refer to \cite{delaRubia2018CRBM} for an illustration of the residual norm behavior when the eigenbasis is used. As a result, based on new-information arguments, any of the end points of the frequency band of interest should be sampled (see \eqref{eq:Sec-ProblemStatement-SolutionMaxwellFrequencyDependency}). It should be noted that, in \eqref{eq:Sec-ProblemStatement-SolutionMaxwellFrequencyDependency}, the further we move away from the in-band eigenmodes, the more linearly independent new information is found, until a new out-of-band eigenmode eventually shows up. This is the main reason for sampling at the end points of the frequency band of interest $\mathcal{B}$.
\end{remark}
\begin{remark}
Contrary to what is done in \cite{delaRubia2018CRBM}, the residual norm is not used to guide the greedy algorithm at any step; thus, better results are expected. Residual information is problematic since its behavior is polluted by the in-band eigenresonances: the residual does not vanish at or near resonances, which can mislead the greedy algorithm. This will become apparent in the numerical examples of Section \ref{Sec-NumericalResults}.
\end{remark}
\section{Numerical Results}
\label{Sec-NumericalResults}
In this section, we apply the proposed \emph{a posteriori} state error estimation to reliable fast frequency sweeps of several challenging microwave circuits, namely, a quad-mode dielectric resonator filter, an inline filter with transmission zeros generated by frequency-dependent couplings, an inline dielectric resonator filter and a combline diplexer. The capabilities and reliability of the proposed procedure are demonstrated with these examples. The in-house C++ finite element method (FEM) code uses second-order elements of N\'ed\'elec's first family \cite{Ned80, Ing06}, on meshes generated by \texttt{Gmsh} \cite{GeuR09}. All computations were carried out on a workstation with a 3.00-GHz Intel Xeon E5-2687W v4 processor and 256 GB of RAM.
In our experiments, we define the true error ($\epsilon_{\text{true}}$) of the ROM as the maximal error over the whole frequency band $\mathcal{B}$ using the indicator $\mathbf{e}_\perp$ in Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-NewInformationNorm}, namely,
\begin{equation}
\label{eq:Sec-NumericalResults-TrueError}
\epsilon_{\text{true}} = \max \limits_{ \omega \in \mathcal{B} } \| \mathbf{e}_\perp(\omega) \|_{\mathcal{H}} \text{.}
\end{equation}
Note that the novelty of the proposed Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator} is twofold: 1) a rough error estimator $\boldsymbol{\tilde \epsilon}(\omega)$ to save computational costs as compared with the one in Algorithm \ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator}; 2) the indicator $\| \mathbf{e}_\perp(\omega) \|_{\mathcal{H}}$ to improve the reliability of the rough estimator. To show that the proposed Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator} is more efficient, we compare it with Algorithm \ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator}, as well as with the greedy algorithm using the residual norm-based error estimator. To this end, we define the indicators $\| \mathbf{e}_\perp(\omega) \|_{\mathcal{H}}$ based on the above three different estimators, respectively, i.e.,
\begin{equation}
\label{eq:Sec-NumericalResults-StateErrorEstimator}
\epsilon_{\text{state}} = \|\mathbf{e}_\perp( \arg\max\limits_{\omega\in \mathcal{B}}\|\boldsymbol{\tilde \epsilon}(\omega)\|_{\mathcal{H}} )\|_{\mathcal{H}} \text{,}
\end{equation}
where $\boldsymbol{\tilde \epsilon}(\omega)$ refers to either the one in Algorithm \ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator} or the rough estimator in Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator}. The indicator based on the residual-norm is defined as
\begin{equation}
\label{eq:Sec-NumericalResults-ResidualBasedErrorEstimator}
\epsilon_{\text{res}} = \|\mathbf{e}_\perp( \arg\max\limits_{\omega \in \mathcal{B}} \|r(\mathbf{\tilde E}(\omega), \cdot; \omega)\|_{\mathcal{H}^{\prime}} )\|_{\mathcal{H}} \text{.}
\end{equation}
For a fair comparison, we compare the results of Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator} with Algorithm \ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator}, where $\xi$ in Step \ref{step:Sec-InfSupConstantFree-Alg-ROMErrorEstimator-StoppingCriterion} is replaced by the indicator $\epsilon_{\text{state}}$ in \eqref{eq:Sec-NumericalResults-StateErrorEstimator}. We also compare Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator} with the residual-norm based greedy algorithm, where the indicator in \eqref{eq:Sec-NumericalResults-ResidualBasedErrorEstimator} is used as the stopping criterion. The comparison between Algorithm \ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator} and Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator} is carried out for the first two examples. The indicator $\mathbf{e}_\perp$ plays a role of paramount importance in providing a fair comparison among the different strategies. It should be pointed out that the true error $\epsilon_{\text{true}}$ in \eqref{eq:Sec-NumericalResults-TrueError} implies the computation of the field solution $\mathbf{E}(\omega)$ by means of time-consuming FEM simulations throughout the whole frequency band of interest $\mathcal{B}$. This can \emph{only} be carried out for academic purposes.
Finally, as a figure of merit, we use the effectivity metric to gauge how close the estimated error is to the true error:
\mbox{$ \texttt{eff} := \frac{\epsilon}{\epsilon_{\text{true}}}$,}
where $\epsilon$ refers to either $\epsilon_{\text{state}}$ in \eqref{eq:Sec-NumericalResults-StateErrorEstimator} or $\epsilon_{\text{res}}$ in \eqref{eq:Sec-NumericalResults-ResidualBasedErrorEstimator}. For each of the four microwave circuits considered, we evaluate the performance of the error estimators using the indicators defined in \eqref{eq:Sec-NumericalResults-StateErrorEstimator} and \eqref{eq:Sec-NumericalResults-ResidualBasedErrorEstimator}. A tolerance threshold of $\texttt{tol} = 2\cdot10^{-7}$ is used in all the numerical examples, for all greedy algorithms with their respective error estimators.
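The effectivity metric itself is a plain ratio; as a trivial illustration (with hypothetical numbers, not taken from the tables below):

```python
def effectivity(eps_est, eps_true):
    # eff close to 1: sharp estimator; eff << 1: underestimation, which may
    # stop a greedy loop prematurely; eff >> 1: overly pessimistic estimator.
    return eps_est / eps_true

eff = effectivity(5.0e-4, 1.0e-3)  # hypothetical estimated vs. true error
```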
\subsection{Quad-Mode Dielectric Resonator Filter}
\label{Sec-NumericalResults-Subsec-QuadModeFilter}
A quad-mode dielectric resonator filter in a single cylindrical cavity is proposed in \cite{memarian2009quad}. Fig. \ref{fig:Sec-NumericalResults-Subsec-QuadModeFilter-FilterGeometry} shows the geometry of the filter as well as the mesh used for its analysis. These structures are extremely attractive since multiple resonant modes show up in a single cylindrical cavity due to the dielectric resonator. At the same time, they are difficult to tune since all dominant modes are coupled with each other, requiring multiple full-wave electromagnetic analyses to carry out a good electrical design. Six tuning screws are included in this filter. It is then of paramount importance to accurately predict the electromagnetic behavior in a frequency band in an efficient way. The filter detailed in Fig. \ref{fig:Sec-NumericalResults-Subsec-QuadModeFilter-FilterGeometry} is a quad-mode filter, where a four-pole passband filtering response, shown in Fig. \ref{fig:Sec-NumericalResults-Subsec-QuadModeFilter-FilterResponse}, is obtained. However, at the same time, there are additional resonant modes in the band of analysis, $\mathcal{B}:=[3.4, 4.2]$ GHz, giving rise to a more complicated response, including direct source-to-load coupling that affects the position of the two transmission zeros, rather than a typical four-pole frequency response. As a result, a reliable ROM for fast frequency sweep analysis is essential.
An FEM system with 245,778 degrees of freedom is used to solve for the electric field. Following Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator}, a ROM of dimension 14 is obtained by means of the Reduced-Basis Method (RBM) to compute the frequency response detailed in Fig. \ref{fig:Sec-NumericalResults-Subsec-QuadModeFilter-FilterResponse}. There is clearly no contest ($14 \ll 245{,}778$) between the computational effort of RBM and what FEM would require to obtain the same frequency response at the same frequency samples, even if subsampling were considered: solving a reduced system of dimension 14 many times is effortless, whereas repeatedly solving an FEM system of dimension 245,778 is rather time-consuming. Good agreement is found between the FEM and RBM results in Fig. \ref{fig:Sec-NumericalResults-Subsec-QuadModeFilter-FilterResponse}.
Next, a comparison of the different MOR methodologies is carried out. Table \ref{tab:Sec-NumericalResults-Subsec-QuadModeFilter-GreedySamples} details not only the frequency samples adaptively chosen by each greedy algorithm but also the estimated and true errors at each iteration of the MOR process. Fig. \ref{fig:Sec-NumericalResults-Subsec-QuadModeFilter-Error} depicts the convergence behavior of the different methodologies and details the effectivity metric for the proposed \emph{a posteriori} state error estimator. Good behavior is observed for the proposed methodology in comparison with the true error. On the contrary, the residual norm-based error estimation results in a greedy algorithm that underestimates the error and prematurely stops the iterative procedure. The rationale behind this is shown in Fig. \ref{fig:Sec-NumericalResults-Subsec-QuadModeFilter-ErrorEstimator}. For a ROM of dimension 12 obtained by the proposed approach, both the residual norm $\|r(\mathbf{\tilde E}(\omega), \cdot; \omega)\|_{\mathcal{H}^{\prime}}$ and the approximate state error $\|\boldsymbol{\tilde \epsilon}(\omega)\|_{\mathcal{H}}$ are plotted versus frequency. While the state error estimate has a smooth frequency behavior, the residual norm is polluted by the in-band resonant modes. This eigenfrequency pollution repeatedly misleads the sampling of the residual norm-based greedy algorithm toward the eigenresonances, severely degrading the new information added to the reduced-basis space. This is the key advantage of the state error estimation, which does not exhibit this unwanted behavior.
Finally, Table \ref{tab:Sec-NumericalResults-Subsec-QuadModeFilter-AlgorithmComparison} compares the performance of the proposed approach with Algorithm~\ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator}, where $\xi$ in Step \ref{step:Sec-InfSupConstantFree-Alg-ROMErrorEstimator-StoppingCriterion} is replaced by the indicator in \eqref{eq:Sec-NumericalResults-StateErrorEstimator}. It should be pointed out that, even though the in-band eigenmodes are not imposed in Algorithm~\ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator}, they do show up in the greedy algorithm after the first few iterations. This indicates that Algorithm \ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator} is working correctly, since the in-band eigenmodes were shown in Section \ref{Sec-ProblemStatement} to be the best choice from a theoretical point of view. As far as computational complexity is concerned, Algorithm \ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator} is more expensive than Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator}. As a matter of fact, with the same stopping criterion, the ROM size is $14$ for Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator} and $16$ for Algorithm \ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator}, whereas the size of the residual ROMs defined in \eqref{eq:Sec-InfSupConstantFree-GalerkinProjectionError} is $15$ and $30$, respectively. This illustrates the additional effort required by Algorithm \ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator} compared to Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator}.
As a result, the proposed approach yields a fast \emph{a posteriori} state error estimation.
\begin{table}[tbp]
\centering
\begin{threeparttable}
\renewcommand{\arraystretch}{1.3}
\caption{Greedy Algorithm Frequency Samples for Different MOR Methodologies in the Quad-Mode Filter.}
\label{tab:Sec-NumericalResults-Subsec-QuadModeFilter-GreedySamples}
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
GHz & $\epsilon_{\text{true}}$ & GHz & $\epsilon_{\text{state}}$ & GHz & $\epsilon_{\text{res}}$ \\
\hline
\hline
3.6103 & 1. & 3.6103 & 1. & 3.6103 & 1. \\
3.6245 & 1. & 3.6245 & 1. & 3.6245 & 1. \\
3.6887 & 1. & 3.6887 & 1. & 3.6887 & 1. \\
3.7149 & 1. & 3.7149 & 1. & 3.7149 & 1. \\
3.9299 & 1. & 3.9299 & 1. & 3.9299 & 1. \\
4.0067 & 1. & 4.0067 & 1. & 4.0067 & 1. \\
4.1552 & 1. & 4.1552 & 1. & 4.1552 & 1. \\
4.2000 & $1.7\cdot10^{-1}$ & 3.4000 & $1.6\cdot10^{-1}$ & 3.4000 & $1.6\cdot10^{-1}$ \\
3.4000 & $7.1\cdot10^{-2}$ & 4.2000 & $8.0\cdot10^{-2}$ & 4.0067 & $2.8\cdot10^{-7}$ \\
3.7800 & $1.6\cdot10^{-3}$ & 3.8820 & $8.3\cdot10^{-4}$ & -- & -- \\
4.1000 & $9.4\cdot10^{-5}$ & 4.0940 & $6.7\cdot10^{-5}$ & -- & -- \\
3.4900 & $6.8\cdot10^{-6}$ & 3.5930 & $2.5\cdot10^{-6}$ & -- & -- \\
4.1800 & $1.2\cdot10^{-6}$ & 4.1680 & $5.8\cdot10^{-7}$ & -- & -- \\
3.8600 & $1.9\cdot10^{-7}$ & 3.6930 & $1.5\cdot10^{-7}$ & -- & -- \\
\hline
\end{tabular}
\end{threeparttable}
\end{table}
\begin{table}[tbp]
\centering
\begin{threeparttable}
\renewcommand{\arraystretch}{1.3}
\caption{State Error Algorithm Comparison in the Quad-Mode Filter.}
\label{tab:Sec-NumericalResults-Subsec-QuadModeFilter-AlgorithmComparison}
\centering
\begin{tabular}{c|c|c|c}
\hline
\multicolumn {2} {c|}{Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator}} & \multicolumn {2} {|c}{Algorithm \ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator}} \\
\hline
\hline
GHz & $\epsilon_{\text{state}}$ & GHz & $\epsilon_{\text{state}}$ \\
\hline
\hline
3.6103 & 1. & 3.4000 & $1.6\cdot10^{-3}$ \\
3.6245 & 1. & 3.6470 & $1.7\cdot10^{-3}$ \\
3.6887 & 1. & 4.0810 & $1.6$ \\
3.7149 & 1. & 3.9300 & $2.6\cdot10^{-3}$ \\
3.9299 & 1. & 3.6250 & $1.2\cdot10^{-3}$ \\
4.0067 & 1. & 3.7150 & $3.2\cdot10^{-4}$ \\
4.1552 & 1. & 4.1550 & $9.8\cdot10^{-7}$ \\
3.4000 & $1.6\cdot10^{-1}$ & 4.0070 & $5.5\cdot10^{-8}$ \\
4.2000 & $8.0\cdot10^{-2}$ & -- & -- \\
3.8820 & $8.3\cdot10^{-4}$ & -- & -- \\
4.0940 & $6.7\cdot10^{-5}$ & -- & -- \\
3.5930 & $2.5\cdot10^{-6}$ & -- & -- \\
4.1680 & $5.8\cdot10^{-7}$ & -- & -- \\
3.6930 & $1.5\cdot10^{-7}$ & -- & -- \\
\hline
\hline
ROM size & 14 & ROM size & 16 \\
Residual ROM size & 15 & Residual ROM size & 30 \\
\hline
\end{tabular}
\end{threeparttable}
\end{table}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.7\linewidth]{figures/quadModeFilterGeometry.png}
\caption{Quad-mode dielectric resonator filter proposed in \cite{memarian2009quad}.}
\label{fig:Sec-NumericalResults-Subsec-QuadModeFilter-FilterGeometry}
\end{figure}
\begin{figure}[tbp]
\centering
\input{figures/quadModeFilter_Sparameters_Response.tex}
\caption{Quad-mode dielectric resonator filter scattering parameter response comparison with the proposed approach. (--) RBM. ($\circ$) FEM.}
\label{fig:Sec-NumericalResults-Subsec-QuadModeFilter-FilterResponse}
\end{figure}
\begin{figure}[tbp]
\centering
\subfloat[]{\label{fig:Sec-NumericalResults-Subsec-QuadModeFilter-ErrorConvergence}\input{figures/quadModeFilter_greedy_convergence_estimator.tex}}
\subfloat[]{\label{fig:Sec-NumericalResults-Subsec-QuadModeFilter-ErrorEffectivity}\input{figures/quadModeFilter_effectivity_estimator.tex}}
\caption{Quad-mode dielectric resonator filter error estimator results. (a) Convergence of the greedy algorithm. (b) Effectivity (\texttt{eff}).}
\label{fig:Sec-NumericalResults-Subsec-QuadModeFilter-Error}
\end{figure}
\begin{figure}[tbp]
\centering
\input{figures/quadModeFilter_ErrorEstimator_ROM12.tex}
\caption{Quad-mode dielectric resonator filter error estimation frequency behavior for the ROM of order 12. ({\color{blue}{--}}) Residual error $\|r(\mathbf{\tilde E}(\omega), \cdot; \omega)\|_{\mathcal{H}^{\prime}}$. ({\color{red}{--}}) State error estimator $\|\boldsymbol{\tilde \epsilon}(\omega)\|_{\mathcal{H}}$.}
\label{fig:Sec-NumericalResults-Subsec-QuadModeFilter-ErrorEstimator}
\end{figure}
\subsection{Inline Filter with Frequency-Dependent Couplings}
\label{Sec-NumericalResults-Subsec-MacchiarellaFilter}
The next example is a fourth-order inline combline filter designed in \cite{he2018direct}, where frequency-dependent couplings are taken into account to provide finite transmission zeros, even within an inline coupling route structure. The filter geometry is depicted in Fig. \ref{fig:Sec-NumericalResults-Subsec-MacchiarellaFilter-FilterGeometry}, along with the mesh used for the electromagnetic analysis. Within the four combline resonant cavities, two additional shunt inductors and capacitors are included, which give rise to additional higher-frequency resonances that create two finite transmission zeros near the filter passband. As a result, better rejection properties are achieved in this inline structure. Frequency-dependent coupling filter theory is continuously developing \cite{szydlowski2012coupled, tamiazzo2017synthesis}, and a large number of full-wave analyses are required to design these kinds of filtering responses. It is essential to carry out a reliable fast frequency sweep analysis to meet the target electrical response in the design optimization loop.
The frequency band $\mathcal{B}:=[1.4, 2.9]$ GHz is considered for the analysis. We solve for the electric field in the band of interest $\mathcal{B}$ by means of an FEM system with 105,690 degrees of freedom. Following the procedure proposed in this work, we obtain a ROM of dimension 13 to sweep the frequency response of this filter via RBM. A comparison between the filter responses obtained by FEM and RBM is shown in Fig. \ref{fig:Sec-NumericalResults-Subsec-MacchiarellaFilter-FilterResponse}. Reasonable agreement is achieved.
Next, we carry out numerical tests to compare the different MOR methodologies. Table \ref{tab:Sec-NumericalResults-Subsec-MacchiarellaFilter-GreedySamples} details the frequency samples adaptively chosen by each greedy algorithm, as well as the error estimator and true error at each iteration of the MOR process. Note that, contrary to what Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator} proposes, the greedy algorithm driven by the true error $\epsilon_{\text{true}}$ does not choose an end-point sample after the eigenbasis of 6 in-band resonant modes is built up. Fig. \ref{fig:Sec-NumericalResults-Subsec-MacchiarellaFilter-Error} depicts the convergence behavior of the different methodologies and details the effectivity metric for the proposed \emph{a posteriori} state error estimator. Reasonable performance is observed for the proposed methodology in comparison to the true error. Once again, the residual norm-based greedy algorithm prematurely aborts the iterative MOR procedure.
In addition, Table \ref{tab:Sec-NumericalResults-Subsec-MacchiarellaFilter-AlgorithmComparison} compares the performance of the proposed approach to Algorithm~\ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator}, where $\xi$ in Step \ref{step:Sec-InfSupConstantFree-Alg-ROMErrorEstimator-StoppingCriterion} is replaced by the indicator in \eqref{eq:Sec-NumericalResults-StateErrorEstimator}. Once again, even though the in-band eigenmodes are not included \emph{a priori} in the projection basis of Algorithm~\ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator}, they show up in the greedy algorithm after the first few iterations. This confirms that Algorithm~\ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator} works properly, since the in-band eigenmodes are expected to appear in any good approximation basis, as has been discussed from a theoretical point of view. As far as computational burden is concerned, Algorithm~\ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator} is more time-consuming than Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator}. In fact, with the same stopping criterion, the ROM size is $12$ for Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator} and $14$ for Algorithm \ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator}, whereas the size of the residual ROMs defined in \eqref{eq:Sec-InfSupConstantFree-GalerkinProjectionError} is $13$ and $27$, respectively. This quantifies the additional effort required by Algorithm~\ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator} with respect to Algorithm~\ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator}.
As a result, the proposed approach gives rise to a fast \emph{a posteriori} state error estimation.
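The greedy sampling loops compared in the tables above share one generic structure: estimate the ROM error over a candidate frequency grid, enrich the basis at the worst point, and stop when the indicator falls below a tolerance. The following sketch is illustrative only — all function names are placeholders, the toy "full-order model" is ours, and the paper's Algorithms additionally pre-load the in-band eigenmodes into the basis and use the fast state error estimator as indicator:

```python
import numpy as np

def projection_error(basis, u):
    """Norm of the component of u orthogonal to span(basis) (a true-error indicator)."""
    if not basis:
        return float(np.linalg.norm(u))
    B = np.array(basis).T                        # columns = snapshots
    coef, *_ = np.linalg.lstsq(B, u, rcond=None)
    return float(np.linalg.norm(u - B @ coef))

def greedy_sampling(freqs, solve_fom, indicator, tol=1e-8, max_iter=20):
    basis, chosen = [], []
    for _ in range(max_iter):
        # Estimate the ROM error on the whole grid and pick the worst point.
        errs = [indicator(basis, f) for f in freqs]
        k = int(np.argmax(errs))
        if errs[k] < tol:
            break                                # stopping criterion met
        chosen.append(float(freqs[k]))
        basis.append(solve_fom(freqs[k]))        # enrich with a new snapshot
    return chosen, basis

# Toy "FOM": solutions live in a 4-dimensional space, so the greedy loop
# driven by the true projection error saturates after exactly 4 snapshots.
freqs = np.linspace(1.4, 2.9, 151)
solve = lambda f: np.array([1.0, f, f**2, f**3])
chosen, basis = greedy_sampling(freqs, solve,
                                lambda b, f: projection_error(b, solve(f)))
```

In the paper's setting the indicator is, of course, not the true error but a cheap estimate, which is precisely why its reliability (effectivity close to one) matters.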
\begin{table}[tbp]
\centering
\begin{threeparttable}
\renewcommand{\arraystretch}{1.3}
\caption{Greedy Algorithm Frequency Samples for Different MOR Methodologies in the Inline Filter with Finite Transmission Zeros.}
\label{tab:Sec-NumericalResults-Subsec-MacchiarellaFilter-GreedySamples}
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
GHz & $\epsilon_{\text{true}}$ & GHz & $\epsilon_{\text{state}}$ & GHz & $\epsilon_{\text{res}}$ \\
\hline
\hline
1.6981 & 1. & 1.6981 & 1. & 1.6981 & 1. \\
1.7043 & 1. & 1.7043 & 1. & 1.7043 & 1. \\
1.7799 & 1. & 1.7799 & 1. & 1.7799 & 1. \\
1.8328 & 1. & 1.8328 & 1. & 1.8328 & 1. \\
2.7661 & 1. & 2.7661 & 1. & 2.7661 & 1. \\
2.7856 & 1. & 2.7856 & 1. & 2.7856 & 1. \\
2.3000 & $8.8\cdot10^{-1}$ & 1.4000 & $6.7\cdot10^{-1}$ & 1.4000 & $6.7\cdot10^{-1}$ \\
2.9000 & $7.7\cdot10^{-2}$ & 2.9000 & $1.5\cdot10^{-1}$ & 2.9000 & $1.5\cdot10^{-1}$ \\
1.4000 & $4.6\cdot10^{-3}$ & 2.5156 & $2.8\cdot10^{-3}$ & 2.7661 & $8.4\cdot10^{-8}$ \\
2.6200 & $2.9\cdot10^{-4}$ & 2.0520 & $3.5\cdot10^{-4}$ & -- & -- \\
1.9800 & $9.8\cdot10^{-6}$ & 2.8187 & $7.6\cdot10^{-6}$ & -- & -- \\
2.8500 & $4.9\cdot10^{-7}$ & 1.7007 & $9.4\cdot10^{-9}$ & -- & -- \\
1.7600 & $1.7\cdot10^{-7}$ & -- & -- & -- & -- \\
\hline
\end{tabular}
\end{threeparttable}
\end{table}
\begin{table}[tbp]
\centering
\begin{threeparttable}
\renewcommand{\arraystretch}{1.3}
\caption{State Error Algorithm Comparison in the Inline Filter with Finite Transmission Zeros.}
\label{tab:Sec-NumericalResults-Subsec-MacchiarellaFilter-AlgorithmComparison}
\centering
\begin{tabular}{c|c|c|c}
\hline
\multicolumn {2} {c|}{Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator}} & \multicolumn {2} {|c}{Algorithm \ref{alg:Sec-InfSupConstantFree-ROMErrorEstimator}} \\
\hline
\hline
GHz & $\epsilon_{\text{state}}$ & GHz & $\epsilon_{\text{state}}$ \\
\hline
\hline
1.6981 & 1. & 1.4000 & $3.6\cdot10^{-3}$ \\
1.7043 & 1. & 1.9430 & $1.6\cdot10^{-2}$ \\
1.7799 & 1. & 1.7260 & $6.1$ \\
1.8328 & 1. & 2.7660 & $1.2\cdot10^{-2}$ \\
2.7661 & 1. & 1.6980 & $6.3\cdot10^{-4}$ \\
2.7856 & 1. & 2.7860 & $2.5\cdot10^{-6}$ \\
1.4000 & $6.7\cdot10^{-1}$ & 1.8330 & $3.6\cdot10^{-8}$ \\
2.9000 & $1.5\cdot10^{-1}$ & -- & -- \\
2.5156 & $2.8\cdot10^{-3}$ & -- & -- \\
2.0520 & $3.5\cdot10^{-4}$ & -- & -- \\
2.8187 & $7.6\cdot10^{-6}$ & -- & -- \\
1.7007 & $9.4\cdot10^{-9}$ & -- & -- \\
\hline
\hline
ROM size & 12 & ROM size & 14 \\
Residual ROM size & 13 & Residual ROM size & 27 \\
\hline
\end{tabular}
\end{threeparttable}
\end{table}
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{figures/MacchiarellaFilterGeometry.png}
\caption{Inline filter with finite transmission zeros proposed in \cite{he2018direct}.}
\label{fig:Sec-NumericalResults-Subsec-MacchiarellaFilter-FilterGeometry}
\end{figure}
\begin{figure}[tbp]
\centering
\input{figures/MacchiarellaFilter_Sparameters_Response.tex}
\caption{Inline filter with finite transmission zeros scattering parameter response comparison with the proposed approach. (--) RBM. ($\circ$) FEM.}
\label{fig:Sec-NumericalResults-Subsec-MacchiarellaFilter-FilterResponse}
\end{figure}
\begin{figure}[tbp]
\centering
\subfloat[]{\label{fig:Sec-NumericalResults-Subsec-MacchiarellaFilter-ErrorConvergence}\input{figures/MacchiarellaFilter_greedy_convergence_estimator.tex}}
\subfloat[]{\label{fig:Sec-NumericalResults-Subsec-MacchiarellaFilter-ErrorEffectivity}\input{figures/MacchiarellaFilter_effectivity_estimator.tex}}
\caption{Inline filter with finite transmission zeros error estimator results. (a) Convergence of the greedy algorithm. (b) Effectivity (\texttt{eff}).}
\label{fig:Sec-NumericalResults-Subsec-MacchiarellaFilter-Error}
\end{figure}
\subsection{Inline Dielectric Resonator Filter}
\label{Sec-NumericalResults-Subsec-SnyderFilter}
A sixth-order inline dielectric resonator filter with two transmission zeros is depicted in Fig. \ref{fig:Sec-NumericalResults-Subsec-SnyderFilter-FilterGeometry}. Cross-coupling between nonadjacent dielectric resonators is obtained by appropriately arranging their orientations and exploiting multiple evanescent modes in the inline structure. This filter is proposed in \cite{bastioli2012inline}. The $[2.14, 2.2]$ GHz band is taken into account in the analysis. An FEM discretization, shown in Fig. \ref{fig:Sec-NumericalResults-Subsec-SnyderFilter-FilterGeometry}, with 230,058 degrees of freedom is used. Fig. \ref{fig:Sec-NumericalResults-Subsec-SnyderFilter-FilterResponse} details the filter response in the band of analysis under both time-consuming FEM simulation and fast RBM analysis. A ROM of dimension 10 is used to get the fast frequency sweep results in Fig. \ref{fig:Sec-NumericalResults-Subsec-SnyderFilter-FilterResponse}. Good agreement is obtained between both analyses. It should be pointed out that the FEM solution of the FOM evaluated at a given frequency takes $5.610$ seconds. In the online stage, the ROM resulting from RBM was evaluated for $1201$ different frequency samples requiring $0.085$ seconds in total; this works out to $70$ microseconds to solve a single ROM and amounts to a speedup of nearly $80000$.
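The per-frequency cost and speedup quoted above follow directly from the stated timings; as a quick sanity check (numbers taken verbatim from the text, variable names ours):

```python
fom_time = 5.610           # seconds for one full-order FEM solve
rom_total = 0.085          # seconds for the whole online ROM sweep
n_freqs = 1201             # frequency samples evaluated online

rom_per_freq = rom_total / n_freqs   # seconds per single ROM solve
speedup = fom_time / rom_per_freq

# about 70 microseconds per ROM solve and a speedup near 80000
print(int(rom_per_freq * 1e6), round(speedup))
```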
Next, we compare the different MOR techniques. Table~\ref{tab:Sec-NumericalResults-Subsec-SnyderFilter-GreedySamples} shows the frequency samples adaptively chosen by each greedy algorithm, as well as the error estimator at each iteration of the MOR process. As expected in a sixth-order filter, 6 in-band eigenmodes are found in the frequency band of analysis. Fig. \ref{fig:Sec-NumericalResults-Subsec-SnyderFilter-Error} details the convergence behavior of the different methodologies and shows the effectivity metric for the proposed approach. Good behavior is observed for the proposed \emph{a posteriori} state error estimator.
\begin{table}[tbp]
\centering
\begin{threeparttable}
\renewcommand{\arraystretch}{1.3}
\caption{Greedy Algorithm Frequency Samples for Different MOR Methodologies in the Inline Dielectric Resonator Filter.}
\label{tab:Sec-NumericalResults-Subsec-SnyderFilter-GreedySamples}
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
GHz & $\epsilon_{\text{true}}$ & GHz & $\epsilon_{\text{state}}$ & GHz & $\epsilon_{\text{res}}$ \\
\hline
\hline
2.1635 & 1. & 2.1635 & 1. & 2.1635 & 1. \\
2.1640 & 1. & 2.1640 & 1. & 2.1640 & 1. \\
2.1663 & 1. & 2.1663 & 1. & 2.1663 & 1. \\
2.1709 & 1. & 2.1709 & 1. & 2.1709 & 1. \\
2.1768 & 1. & 2.1768 & 1. & 2.1768 & 1. \\
2.1788 & 1. & 2.1788 & 1. & 2.1788 & 1. \\
2.1400 & $3.4\cdot10^{-1}$ & 2.1400 & $3.4\cdot10^{-1}$ & 2.1400 & $3.4\cdot10^{-1}$ \\
2.2000 & $7.8\cdot10^{-3}$ & 2.2000 & $7.8\cdot10^{-3}$ & 2.1709 & $5.4\cdot10^{-8}$ \\
2.1530 & $2.8\cdot10^{-5}$ & 2.1697 & $5.7\cdot10^{-6}$ & -- & -- \\
2.1915 & $3.6\cdot10^{-7}$ & 2.1874 & $1.7\cdot10^{-7}$ & -- & -- \\
2.1785 & $1.6\cdot10^{-7}$ & -- & -- & -- & -- \\
\hline
\end{tabular}
\end{threeparttable}
\end{table}
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{figures/SnyderFilterGeometry.png}
\caption{Inline dielectric resonator filter designed in \cite{bastioli2012inline}.}
\label{fig:Sec-NumericalResults-Subsec-SnyderFilter-FilterGeometry}
\end{figure}
\begin{figure}[tbp]
\centering
\input{figures/SnyderFilter_Sparameters_Response.tex}
\caption{Inline dielectric resonator filter scattering parameter response comparison with the proposed approach. (--) RBM. ($\circ$) FEM.}
\label{fig:Sec-NumericalResults-Subsec-SnyderFilter-FilterResponse}
\end{figure}
\begin{figure}[tbp]
\centering
\subfloat[]{\label{fig:Sec-NumericalResults-Subsec-SnyderFilter-ErrorConvergence}\input{figures/SnyderFilter_greedy_convergence_estimator.tex}}
\subfloat[]{\label{fig:Sec-NumericalResults-Subsec-SnyderFilter-ErrorEffectivity}\input{figures/SnyderFilter_effectivity_estimator.tex}}
\caption{Inline dielectric resonator filter error estimator results. (a) Convergence of the greedy algorithm. (b) Effectivity (\texttt{eff}).}
\label{fig:Sec-NumericalResults-Subsec-SnyderFilter-Error}
\end{figure}
\subsection{Combline Diplexer}
\label{Sec-NumericalResults-Subsec-ComblineDiplexer}
The last real-life application is an $11^\text{th}$ order combline diplexer with a star junction, designed in \cite{zhao2014iterative}. The geometry of this diplexer is shown in Fig. \ref{fig:Sec-NumericalResults-Subsec-ComblineDiplexer-FilterGeometry}. The frequency band of analysis is $\mathcal{B}:=[2.2, 3.0]$ GHz. The FEM discretization yields a system with 270,446 degrees of freedom, whereas the proposed methodology in Algorithm \ref{alg:Sec-InfSupConstantFree-SubSec-EnhancedReducedBasisSpace-FastROMErrorEstimator} gives rise to a ROM of dimension 20 by means of RBM. The scattering parameter response for this diplexer is detailed in Fig. \ref{fig:Sec-NumericalResults-Subsec-ComblineDiplexer-FilterResponse}. Good agreement is found between FEM and RBM results. It should be pointed out that further tuning is needed to obtain the target equiripple response. The FOM solution for this example requires $4.630$ seconds. In the online stage, the ROM was evaluated at $1601$ different frequencies taking $0.446$ seconds. Thus, the time to solve a single ROM is $278$ microseconds, at a speedup of around $16650$.
As expected in an $11^\text{th}$ order diplexer, 11 in-band eigenmodes are found in the frequency band of interest. Table \ref{tab:Sec-NumericalResults-Subsec-ComblineDiplexer-GreedySamples} details the different frequency samples for each methodology. A comparison of the different MOR strategies is shown in Fig. \ref{fig:Sec-NumericalResults-Subsec-ComblineDiplexer-Error}, where the convergence of the estimated and true errors at each iteration, as well as the effectivity metric, is detailed. Once again, the residual norm-based greedy algorithm stops prematurely, misled by oversampling near the eigenresonances. Reasonable performance is observed for the proposed \emph{a posteriori} state error estimator.
\begin{table}[tbp]
\centering
\begin{threeparttable}
\renewcommand{\arraystretch}{1.3}
\caption{Greedy Algorithm Frequency Samples for Different MOR Methodologies in the Combline Diplexer.}
\label{tab:Sec-NumericalResults-Subsec-ComblineDiplexer-GreedySamples}
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
GHz & $\epsilon_{\text{true}}$ & GHz & $\epsilon_{\text{state}}$ & GHz & $\epsilon_{\text{res}}$ \\
\hline
\hline
2.4623 & 1. & 2.4623 & 1. & 2.4623 & 1. \\
2.4914 & 1. & 2.4914 & 1. & 2.4914 & 1. \\
2.5233 & 1. & 2.5233 & 1. & 2.5233 & 1. \\
2.5592 & 1. & 2.5592 & 1. & 2.5592 & 1. \\
2.5803 & 1. & 2.5803 & 1. & 2.5803 & 1. \\
2.5981 & 1. & 2.5981 & 1. & 2.5981 & 1. \\
2.6163 & 1. & 2.6163 & 1. & 2.6163 & 1. \\
2.6419 & 1. & 2.6419 & 1. & 2.6419 & 1. \\
2.6766 & 1. & 2.6766 & 1. & 2.6766 & 1. \\
2.7102 & 1. & 2.7102 & 1. & 2.7102 & 1. \\
2.7274 & 1. & 2.7274 & 1. & 2.7274 & 1. \\
3.0000 & $8.6\cdot10^{-1}$ & 3.0000 & $8.6\cdot10^{-1}$ & 3.0000 & $8.6\cdot10^{-1}$ \\
2.2000 & $4.5\cdot10^{-1}$ & 2.2000 & $4.5\cdot10^{-1}$ & 2.2000 & $4.5\cdot10^{-1}$ \\
2.8600 & $6.0\cdot10^{-2}$ & 2.5094 & $1.1\cdot10^{-2}$ & 2.6766 & $3.1\cdot10^{-9}$ \\
2.3500 & $7.2\cdot10^{-3}$ & 2.9067 & $1.1\cdot10^{-2}$ & -- & -- \\
2.9500 & $1.1\cdot10^{-3}$ & 2.6060 & $9.2\cdot10^{-5}$ & -- & -- \\
2.2600 & $1.6\cdot10^{-4}$ & 2.2835 & $2.4\cdot10^{-4}$ & -- & -- \\
2.7800 & $1.5\cdot10^{-5}$& 2.9712 & $3.7\cdot10^{-5}$ & -- & -- \\
2.4200 & $1.1\cdot10^{-6}$& 2.2137 & $7.9\cdot10^{-7}$ & -- & -- \\
2.4100 & $1.9\cdot10^{-7}$& 2.6766 & $1.1\cdot10^{-8}$ & -- & -- \\
\hline
\end{tabular}
\end{threeparttable}
\end{table}
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{figures/ZhaoWuDiplexerGeometry.png}
\caption{Combline diplexer designed in \cite{zhao2014iterative}.}
\label{fig:Sec-NumericalResults-Subsec-ComblineDiplexer-FilterGeometry}
\end{figure}
\begin{figure}[tbp]
\centering
\input{figures/ZhaoWuDiplexer_Sparameters_Response.tex}
\caption{Combline diplexer scattering parameter response comparison with the proposed approach. (--)~RBM. ($\circ$)~FEM.}
\label{fig:Sec-NumericalResults-Subsec-ComblineDiplexer-FilterResponse}
\end{figure}
\begin{figure}[tbp]
\centering
\subfloat[]{\label{fig:Sec-NumericalResults-Subsec-ComblineDiplexer-ErrorConvergence}\input{figures/ZhaoWuDiplexer_greedy_convergence_estimator.tex}}
\subfloat[]{\label{fig:Sec-NumericalResults-Subsec-ComblineDiplexer-ErrorEffectivity}\input{figures/ZhaoWuDiplexer_effectivity_estimator.tex}}
\caption{Combline diplexer error estimator results. (a) Convergence of the greedy algorithm. (b) Effectivity (\texttt{eff}).}
\label{fig:Sec-NumericalResults-Subsec-ComblineDiplexer-Error}
\end{figure}
\section{Conclusions}
\label{Sec-Conclusions}
A compact and reliable MOR method for fast frequency sweeps in microwave circuits by means of the reduced-basis method has been detailed. A compact basis including the in-band resonant modes in the frequency band of interest has been proposed, both for the reduced-basis approximation of the electric field and for its corresponding state error. This makes it possible to efficiently solve both the reduced original and residual problems, thus minimizing the additional computational effort. The benefits of using proper state error estimators that avoid time-consuming \emph{inf-sup} constant evaluations have also been highlighted. As a result, a fast \emph{a posteriori} state error estimator for the ROM has been obtained. Real-life microwave devices, including a quad-mode dielectric resonator filter and a combline diplexer, have demonstrated the capabilities and reliability of the proposed methodology.
\section*{Acknowledgments}%
\addcontentsline{toc}{section}{Acknowledgments}
Sridhar Chellappa is supported by the International Max Planck Research School for Advanced Methods in Process and Systems Engineering (IMPRS-ProEng).
\addcontentsline{toc}{section}{References}
\bibliographystyle{plainurl}
In this paper we consider the Cauchy problem for the following log-modified nonlinear Schr\"odinger equation (NLS) on $\mathbb R^2$:
\begin{equation}\label{eq:nls}
\left\{
\begin{aligned}
& i{\partial}_t u +\frac{1}{2}\Delta u =\lambda u|u|^2\ln |u|^2,\quad x\in
\mathbb R^2,\ \lambda>0,\\
& u(0,x) = u_0\in H^1(\mathbb R^2).
\end{aligned}
\right.
\end{equation}
This model is discussed in the physics literature (cf. \cite{MALOMED19, PhysRevLett117, PhysRevLett.123}) as an effective mean-field description of ultra-dilute quantum fluids in two spatial dimensions.
The logarithmic factor stems from the LHY correction (after Lee-Huang-Yang), a series expansion in the mean particle density of
Bose-Einstein condensates with origins in the work of Bogolubov (see, e.g., \cite{LHY, PhysRevLett.115} for more details).
It is argued that the LHY correction should have a stabilizing effect on an otherwise collapsing condensate, allowing for stable soliton-like modes,
which are often called {\it quantum droplets}.
Unfortunately, only a few results are available to date concerning the rigorous mathematical derivation of the LHY correction, the most recent being \cite{Brietzke2020},
which treats second-order corrections to the (mean-field) bosonic ground state energy in three spatial dimensions.
The corresponding problem in 2D, however, remains open.
Nevertheless, the NLS \eqref{eq:nls} has several mathematical
properties which make it an intriguing model to study: It can be seen
as the Hamiltonian evolution equation associated to the
following {\it energy functional}
\begin{equation}\label{eq:energy}
E(u) := \frac{1}{2}\|\nabla u\|_{L^2(\mathbb R^2)}^2 +\frac{\lambda}{2}
\int_{\mathbb R^2}|u|^4\ln\left(\frac{|u|^2}{\sqrt e}\right)\, dx.
\end{equation}
The latter is thus (at least formally) conserved by solutions to \eqref{eq:nls}, as are the total {\it mass} and {\it momentum}, i.e.,
\begin{equation}\label{eq:mass}
M(u):=\int_{\mathbb R^2}|u|^2\, dx,\quad P(u) := \int_{\mathbb R^2}\IM \overline u \nabla u\, dx.
\end{equation}
In view of \eqref{eq:energy}, one sees that the second term in the energy, i.e., the one stemming from the nonlinearity, has no definite sign. Indeed, in terms of the usual classification of NLS (see, e.g. \cite{CazCourant}), the
nonlinearity in \eqref{eq:nls} is seen to be {\it defocusing} (or
repulsive) whenever the density $|u|^2>\sqrt e$ and {\it focusing} (or
attractive) whenever $|u|^2<\sqrt e$. Furthermore, it is well known that
in the case of pure power-law nonlinearities such as $\lambda |u|^{p-1}u$, solutions $u$ to NLS obey the additional scaling symmetry
$$
u(t,x)\mapsto u_\mu (t,x)=\mu^{{2}/{(p-1)}} u(\mu^2 t, \mu x),\ \mu >0.
$$
In two spatial dimensions, this implies that the {\it cubic case} $p=3$ is {\it mass-critical}, since in this case the transformation also
preserves the $L^2(\mathbb R^2)$-norm of $u$. It has been proved
that the corresponding Cauchy problem is globally well-posed in
$L^2(\mathbb R^2)$ in the defocusing case, and also in the focusing case for masses below that of the
ground state (cf. \cite{Dodson15,Dodson16} for more details).
Furthermore, the Cauchy problem becomes ill-posed in spaces less regular than $L^2$ (\cite{KPV01}).
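For the reader's convenience, we recall the standard computation behind this criticality classification: the scaling transformation above satisfies
\[
\|u_\mu(t,\cdot)\|_{L^2(\mathbb R^2)}^2
=\mu^{\frac{4}{p-1}}\int_{\mathbb R^2}|u(\mu^2 t,\mu x)|^2\,dx
=\mu^{\frac{4}{p-1}-2}\,\|u(\mu^2 t,\cdot)\|_{L^2(\mathbb R^2)}^2,
\]
which is independent of $\mu$ precisely when $p=3$.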
Coming back to our model, we first note that due to the appearance of the logarithmic factor, \eqref{eq:nls} does not obey any scaling symmetry. However, since
for all $\varepsilon>0$, we have
\begin{equation*}
\left\lvert u|u|^2\ln |u|^2\right\rvert\lesssim |u|^{3-\varepsilon} + |u|^{3+\varepsilon} ,
\end{equation*}
the log-modified NLS can formally be seen to be {\it inter-critical}, in two different ways: First, its nonlinearity is slightly larger than cubic, and thus mass supercritical,
but still remains energy subcritical. Second, it can be understood as the sum of a slightly $L^2$-subcritical (focusing) nonlinearity and a slightly $L^2$-supercritical (defocusing) nonlinearity. It is therefore similar to the case of NLS with competing cubic-quintic power law nonlinearities, i.e.
\begin{equation}\label{eq:cubicqutinic}
i{\partial}_t u +\frac{1}{2}\Delta u =- |u|^2u + |u|^4u,
\end{equation}
which has been studied in \cite{KOPV17} in 3D, and, more recently, in \cite{CaSp20, LewinRotaNodari-p} in various space dimensions.
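The pointwise bound on the nonlinearity displayed above follows from the elementary inequality
\[
|\ln s| \le \frac{2}{\varepsilon}\left(s^{\varepsilon/2}+s^{-\varepsilon/2}\right),\quad s>0,
\]
applied with $s=|u|^2$: for $s\ge 1$ one has $\ln s=\frac{2}{\varepsilon}\ln s^{\varepsilon/2}\le \frac{2}{\varepsilon}s^{\varepsilon/2}$, and the case $0<s<1$ follows upon replacing $s$ by $1/s$.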
\smallbreak
Our first main result of this work is as follows:
\begin{theorem}[Global well-posedness]\label{thm:gwp}
For any $u_0\in H^1(\mathbb R^2)$, there exists a unique global in-time solution
$u\in C(\mathbb R; H^1(\mathbb R^2))\cap C^1(\mathbb R;H^{-1}(\mathbb R^2))$
to \eqref{eq:nls}, depending continuously on the initial data $u_0$.
Furthermore, the solution $u$ obeys the conservation of mass, energy, and momentum.
\end{theorem}
This result can be interpreted as a rigorous expression of the stabilizing effect of the LHY correction in two spatial dimensions. Recall that the focusing, cubic NLS in
two spatial dimensions, in general, exhibits finite-time blow-up of solutions. The introduction of the logarithmic factor prevents any such blow-up from happening.
\begin{remark}
Since \eqref{eq:nls} is a logarithmic perturbation of the
$L^2$-critical case, local and global well-posedness might even hold in $L^2(\mathbb R^2)$,
in view of the similar case of a (smooth) logarithmic perturbation of
an energy-critical wave-equation considered in \cite{Tao07}.
\end{remark}
Our second main result concerns the properties of {\it solitary waves}, i.e., solutions of the form
\[
u(t,x) = e^{i\omega t} \phi(x), \quad \omega \in \mathbb R,
\]
where $\phi$ solves
\begin{equation}
\label{eq:ground}
-\frac{1}{2}\Delta \phi +\lambda \phi|\phi|^2\ln |\phi|^2 +\omega
\phi=0,\quad x\in \mathbb R^2.
\end{equation}
Clearly, solutions to this equation can only be unique up to translations and phase conjugation,
a fact that, together with the Galilei invariance of \eqref{eq:nls}, allows one to subsequently construct
more general solitary waves, moving with non-zero speed.
In the following we shall denote the
{\it action} associated to \eqref{eq:ground} by
\[
S(\phi) = E(\phi) + \omega M(\phi).
\]
A solution $\phi$ is called a nonlinear \emph{ground state} if it minimizes the action $S(\phi)$ among all possible solutions
$\phi$ of \eqref{eq:ground}. It follows from \cite[Lemma~2.3]{CiJeSe09} and \cite[Proposition~4]{ByJeMa09}
that every minimizer $\varphi$ of the action $S(\phi)$ is of the form
\begin{equation*}
\varphi(x) = e^{i\theta} \phi_\omega(x-x_0),
\end{equation*}
for some constants $\theta\in \mathbb R$, $x_0\in \mathbb R^2$, and where $\phi_\omega$ is a {\it positive} least
action solution to \eqref{eq:ground}. The existence and uniqueness of such positive ground states is the content of our second main result.
\begin{theorem}[Existence and uniqueness of positive ground states]\label{thm:ground}
Suppose that the frequency $\omega \in \mathbb R$ satisfies
\begin{equation*}
0<\omega<\frac{\lambda}{2\sqrt e}.
\end{equation*}
Then \eqref{eq:ground} admits a unique solution $\phi_\omega \in C^2(\mathbb R^2)$ which is radially symmetric and
exponentially decaying as $|x|\to \infty$. Moreover, for all $x\in \mathbb R^2$, it holds
\[
0<\phi_\omega(x)<\sqrt{z_\omega},
\]
for some uniquely defined parameter $z_\omega\in (\tfrac{1}{e},1)$, which satisfies $z_\omega \to 1$ as $\omega \to 0_+$.
\end{theorem}
These ground states can be physically interpreted as quantum droplets with zero vorticity. In numerical simulations,
they are found to have a rather flat top, with nearly constant density in the interior, see \cite{MALOMED19}.
As a final result we shall turn to the question of {\it orbital stability} of solitary waves. To this end we first recall the following notions for constrained energy minimizers.
\begin{definition}\label{def:set-stability}
For $\rho>0$, denote
\begin{equation*}
\Gamma(\rho) = \left\{ u\in H^1(\mathbb R^2),\ M(u)=\rho\right\}.
\end{equation*}
Assuming that the minimization problem
\begin{equation}\label{eq:8.3.5}
u\in \Gamma(\rho),\quad E(u)= \inf \{ E(v)\ ;\ v\in \Gamma(\rho)\}
\end{equation}
has a solution, we shall denote by $\mathcal E(\rho)$ the set of all possible
(constrained) energy minimizers. We call this set {\it orbitally stable} if
for all $\varepsilon>0$, there exists $\delta>0$ such that if
$u_0\in H^1(\mathbb R^2)$ satisfies
\[\inf_{\phi\in \mathcal E(\rho)}\|u_0-\phi\|_{H^1}\le \delta,\]
then the
solution to \eqref{eq:nls} with $u_{\mid t=0}=u_0$ satisfies
\begin{equation*}
\sup_{t\in \mathbb R}\inf_{\phi\in \mathcal E(\rho)}\left\|u(t,
\cdot)-\phi\right\|_{H^1}\le \varepsilon.
\end{equation*}
\end{definition}
\begin{theorem}[Orbital stability of energy minimizers]\label{thm:stab}
Given any $\rho>0$, the set $\mathcal E(\rho)$
is non-empty and orbitally stable.
\end{theorem}
The fact that energy minimizers are orbitally stable is in sharp
contrast to the case of the usual focusing cubic NLS in two spatial dimensions, for which
all solitary waves are known to be {\it strongly unstable} due to the possibility of blow-up, see \cite{CazCourant}.
(In the defocusing case, there is no solitary wave and all solutions scatter.)
Using re-arrangement inequalities, cf. \cite{LiLo}, it is possible to infer that every energy minimizer is radially decreasing and solves \eqref{eq:ground}
for some Lagrange multiplier $\omega>0$. Hence, the energy minimizer equals a nonlinear ground state $\phi_\omega$,
possibly after an appropriate space translation (for a given mass and
for a certain fixed $\omega$, minimizing the action or the energy is equivalent). The difficulty, however, is that several $\omega$'s could, at least in principle, yield the same mass $\rho$.
Thus, uniqueness of solutions to \eqref{eq:ground} at fixed $\omega$
does not imply the uniqueness of energy minimizers. The only cases for which this uniqueness is known to be true
seem to be the
one of a single pure power law nonlinearity $|u|^{p-1}u$, see \cite{CazCourant},
and the one of a purely logarithmic nonlinearity $u\ln|u|^2$, cf. \cite{Ar16}. It is, nevertheless, conjectured that uniqueness holds true
for more general nonlinearities, see e.g. \cite{CaSp20, HajStu, LewinRotaNodari-p} for a more detailed discussion on this.
The fact that there exist energy minimizers with arbitrarily small mass $\rho>0$ (within the set of ground states) also implies
that there is no positive lower bound on the mass of ground states. This is in contrast to the case of the
cubic-quintic NLS \eqref{eq:cubicqutinic} in 2D.
For the latter it is known that all ground states have mass {\it strictly bigger} than the one of the {\it cubic} nonlinear ground state $Q$, see \cite{CaSp20}.
In Section \ref{sec:prop}, we shall present arguments showing that
\[
M(\phi_\omega)\equiv\| \phi_\omega\|^2_{L^2}\to 0, \quad \text{as $\omega \to 0_+$.}
\]
The rest of this paper is devoted to the proof of these theorems, which will be done via a series of technical results given in
Sections \ref{sec:Cauchy}--\ref{sec:stab} below. There, we will
also add further remarks and results on topics such as scattering and
the asymptotic behavior of $\phi_\omega$. Finally, in an appendix, we
address the analogue of \eqref{eq:nls} in 1D: Our Theorem~\ref{theo:stab1D} suggests that
ground states for \eqref{eq:nls} are indeed orbitally stable in the sense of,
e.g., \cite{BGR15}.
\section{Cauchy problem}\label{sec:Cauchy}
\subsection{Global well-posedness} The aim of this subsection is to prove Theorem \ref{thm:gwp}.
To this end, we start by first proving local well-posedness of \eqref{eq:nls}, when rewritten through Duhamel's formula, i.e.
\begin{equation}\label{eq:duhamel}
u(t) = e^{i\frac{t}{2} \Delta } u_0 - i \lambda \int_0^t e^{i\frac{t-s}{2} \Delta} f(u)(s) \, ds,
\end{equation}
where here and in the following, we denote
\[
f(z)= z|z|^2\ln |z|^2, \quad z\in \mathbb C.
\]
A classical fixed point argument, based on the use of Strichartz estimates, then yields the following result.
\begin{proposition}[Local well-posedness]
For any $u_0\in H^1(\mathbb R^2)$ and any $\lambda\in \mathbb R$, there exist a time $T>0$ and a unique solution
\[
u\in C([0, T];H^1(\mathbb R^2))\cap C^1((0, T); H^{-1}(\mathbb R^2)),
\]
to \eqref{eq:duhamel}, depending continuously on $u_0$. Moreover, $u$ conserves mass, energy, and momentum, and
the blow-up alternative holds: if $T<\infty$, then
\[
\lim_{t\to T_-} \| u(t, \cdot)\|_{H^1}=\infty.
\]
\end{proposition}
In view of the fact that \eqref{eq:nls} is time-reversible, we also obtain the analogous statement backward in time.
\begin{proof}
We see that our nonlinearity $f\in C^1(\mathbb R^2;\mathbb R^2)$ satisfies $f(0)=0$,
\[
| f(u)| \lesssim |u|^{3-\varepsilon} + |u|^{3+\varepsilon} , \quad \forall \varepsilon>0,
\]
as well as
\[
|\nabla f(u)|\le (3 |\ln |u|^2 | +2) |u|^2 |\nabla u| \lesssim (|u|^{2+\varepsilon} + |u|^{2-\varepsilon}) |\nabla u|.
\]
We can therefore directly invoke classical results by Kato, in particular
\cite[Theorem~I]{Kato1987} (see also \cite{CazCourant}), to obtain existence and uniqueness of a
strong solution $u(t, \cdot)\in H^1(\mathbb R^2)$ to \eqref{eq:duhamel}, up to some (possibly finite) time $T=T(\|u_0\|_{H^1})>0$.
The proof of the conservation laws for mass, energy and momentum follows along the same lines as in \cite[Theorem~III]{Kato1987}
(see also \cite{Oz06} for an alternative approach which does not require any additional smoothness of the solution $u$).
\end{proof}
\begin{remark}
It is not clear whether the solution is arbitrarily smooth or not, in
general, since one can see that the
third derivative of $f(z)$ becomes singular. See also \cite{CaGa18} in the
case of the (even more singular) nonlinearity $z\ln|z|^2$.
\end{remark}
\begin{corollary}[Global well-posedness]
Let $\lambda \ge 0$. Then, the solution is global, i.e. $T=\infty$.
\end{corollary}
\begin{proof} Using the conservation laws of mass and energy, together with the fact that $\lambda \ge 0$,
the positive part of the energy satisfies
\begin{align*}
E_+(u) &:= \frac{1}{2}\|\nabla u(t, \cdot)\|_{L^2}^2 +\frac{\lambda}{2}
\int_{|u|^2>\sqrt e}|u(t,x)|^4\ln\(\frac{|u(t,x)|^2}{\sqrt e}\)dx\\
&=
E(u_0)+ \frac{\lambda}{2}
\int_{|u|^2<\sqrt e}|u(t,x)|^4\ln\(\frac{\sqrt e}{|u(t,x)|^2}\) dx\\
&\le E(u_0) +\frac{\lambda}{2} \int_{\mathbb R^2} |u(t,x)|^4\(\frac{\sqrt
e}{|u(t,x)|^2}\)dx = E(u_0) +\frac{\lambda}{2} \sqrt e M(u_0).
\end{align*}
This consequently yields a uniform in-time
bound on $\|u(t, \cdot)\|_{H^1}$ and thus, the blow-up alternative implies that $T=\infty$.
\end{proof}
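As a side remark (not part of the original argument), the pointwise estimate used above, $|u|^4\ln(\sqrt e/|u|^2)\le \sqrt e\,|u|^2$ on $\{|u|^2<\sqrt e\}$, is just $\ln t\le t$ with $t=\sqrt e/|u|^2$; a quick numerical sketch confirming it:

```python
import math

# Check of the pointwise estimate used above: ln(t) <= t with t = sqrt(e)/z^2
# gives z^4 * ln(sqrt(e)/z^2) <= sqrt(e) * z^2 on the region z^2 < sqrt(e).
sqrt_e = math.sqrt(math.e)
z_max = math.e ** 0.25                 # boundary of {z^2 < sqrt(e)}
zs = [1e-6 + k * (z_max - 2e-6) / 10_000 for k in range(10_001)]
worst = max(z ** 4 * math.log(sqrt_e / z ** 2) - sqrt_e * z ** 2 for z in zs)
# worst < 0: the estimate holds with strict inequality on the whole region
```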
\subsection{Some scattering results} Let us introduce the conformal space
\[
\Sigma:=\left\{f\in H^1(\mathbb R^2),\ x\mapsto |x|f(x)\in
L^2(\mathbb R^2)\right\},\quad \|f\|_\Sigma =
\|f\|_{H^1(\mathbb R^2)}+\left\||x|f\right\|_{L^2(\mathbb R^2)}.
\]
\begin{lemma} Let $u_0\in \Sigma$ and $\lambda \ge 0$,
then the global in-time solution $u$ obtained above satisfies $u\in C(\mathbb R; \Sigma)$.
\end{lemma}
\begin{proof} We introduce the Galilean operator $J(t)=x+it\nabla$,
which commutes with the free Schr\"odinger equation, i.e.
\begin{equation*}
\big[J,i{\partial}_t+\tfrac{1}{2}\Delta\big]=0.
\end{equation*}
A direct computation then yields the pseudo-conformal conservation law
\begin{equation*}
\frac{d}{dt}\(\frac{1}{2}\|(x+it\nabla )u\|_{L^2}^2 +\frac{\lambda
t^2}{2}\int_{\mathbb R^2} |u(t,x)|^4 \ln \(\frac{|u(t,x)|^2}{\sqrt e}\)dx \)
=-\lambda t \int_{\mathbb R^2}|u(t,x)|^4dx.
\end{equation*}
In particular if $\lambda \ge 0$, the same type of argument as in the proof above yields
that $\|(x+it\nabla )u\|_{L^2}$ is uniformly bounded for all $t\ge 0$. A triangle inequality then implies that $u(t, \cdot)\in \Sigma$.
\end{proof}
\begin{proposition}
\emph{Existence of wave operators:} If $u_-\in \Sigma$, then there exist $u_0\in \Sigma$ and $u\in
C(\mathbb R;\Sigma)$ solving \eqref{eq:nls} such that
\begin{equation*}
\left\| e^{-i\frac{t}{2}\Delta} u(t, \cdot)-
u_-\right\|_\Sigma \Tend t {-\infty} 0.
\end{equation*}
\emph{Small data scattering:} If $u_0\in \Sigma$ and $\|u_0\|_\Sigma$ is sufficiently small, then there exists $u_+\in \Sigma$, such that
\begin{equation*}
\left\| e^{-i\frac{t}{2}\Delta} u(t, \cdot)-
u_+\right\|_\Sigma \Tend t \infty 0.
\end{equation*}
\end{proposition}
\begin{proof}[Sketch of the proof]
Recall that
\begin{equation*}\label{eq:factor}
J(t)u = it\, e^{i|x|^2/(2t)}\nabla\(u e^{-i|x|^2/(2t)}\),
\end{equation*}
which implies that $J(t)u$ can be estimated like $\nabla u$ in $L^p$. Using this, one obtains the Gagliardo--Nirenberg type inequality
adapted to $J(t)$, i.e.
\begin{equation*}
\|u\|_{L^p(\mathbb R^2)}\lesssim
\frac{1}{t^{1-2/p}}\|u\|_{L^2(\mathbb R^2)}^{2/p}
\|(x+it\nabla )u\|_{L^2(\mathbb R^2)}^{1-2/p},\quad 2\le p<\infty.
\end{equation*}
Essentially the same fixed point argument as the one used in solving the Cauchy
problem locally in-time then yields the existence of wave operators
(see e.g. \cite{CazCourant}). Small data scattering then follows directly from \cite[Theorem~2.1]{NakanishiOzawa}.
\end{proof}
\begin{remark}
The existence of wave operators under the mere assumption $u_-\in H^1(\mathbb R^2)$ is very
delicate, since the present nonlinearity can be understood as the
sum of a slightly $L^2$-subcritical (focusing) nonlinearity and a
slightly $L^2$-supercritical (defocusing) nonlinearity. The
existence of wave operators in $H^1$ is known for
$L^2$-supercritical defocusing nonlinearities, but not for
$L^2$-subcritical ones. Also, the smallness in $\Sigma$ is necessary
to have scattering, in the sense that smallness in $H^1(\mathbb R^2)$ is not
enough, see also Remark~\ref{rem:smallness}.
\end{remark}
\section{Nonlinear ground states}\label{sec:ground}
\subsection{Necessary and sufficient conditions for the existence of ground states}
We seek solutions to \eqref{eq:nls} in the form $u(t,x) =e^{i\omega t}\phi(x)$, with $\omega \in \mathbb R$ and $\phi$ sufficiently smooth and localized.
Then $\phi$ solves
\begin{equation}
\label{eq:soliton}
-\Delta \phi =g(\phi),\quad \text{on $\mathbb R^2$},
\end{equation}
where here, and in the following, we shall denote (in agreement with the notations from \cite{BGK83,BL83a}):
\begin{equation}\label{eq:g}
g(\phi) = -2\omega \phi -2\lambda|\phi|^2\phi\ln|\phi|^2,\quad G(z):=
\int_0^z g(s)\, ds.
\end{equation}
We also define the quantities
\begin{equation*}
T(\phi) := \int_{\mathbb R^2}|\phi(x)|^2dx,\quad V(\phi) := \int_{\mathbb R^2}G\(\phi(x)\)dx,
\end{equation*}
which allow us to rewrite the Lagrangian action as
\begin{equation}\label{eq:lagrange}
S(\phi) = \frac{1}{2}T(\phi) -V(\phi).
\end{equation}
In a first step, we shall derive certain necessary conditions for a solution $\phi$ to \eqref{eq:soliton}.
\begin{lemma}[Pohozaev identities] Any solution $\phi \in H^1(\mathbb R^2)$ to \eqref{eq:soliton} satisfies
\begin{equation}
\label{eq:phi1}
\frac{1}{2}\int_{\mathbb R^2}|\nabla \phi|^2\, dx
+\lambda\int_{\mathbb R^2}|\phi|^4\ln|\phi|^2 \, dx+\omega \int_{\mathbb R^2}|\phi|^2 \, dx= 0,
\end{equation}
as well as
\begin{equation} \label{eq:phi2}
\frac{1}{2}\int_{\mathbb R^2}|\nabla
\phi|^2\, dx+\frac{\lambda}{2}\int_{\mathbb R^2}|\phi|^4 \, dx= \omega \int_{\mathbb R^2}|\phi|^2\, dx.
\end{equation}
Moreover, in order to have a nontrivial solution $\phi\not \equiv 0$, a necessary condition on the frequency $\omega \in \mathbb R$ is
\begin{equation*}
0<\omega<\frac{\lambda}{2\sqrt e}.
\end{equation*}
\end{lemma}
\begin{proof}
First, assume that $\phi$ is sufficiently smooth and rapidly decaying as $|x|\to \infty$. Then we directly obtain \eqref{eq:phi1} by multiplying \eqref{eq:soliton} with $\bar \phi$ and integrating w.r.t. $x\in \mathbb R^2$.
To obtain \eqref{eq:phi2}, we instead multiply \eqref{eq:soliton} by $x\cdot \nabla \bar \phi$. Integration in $x$ then yields
\begin{equation}
\label{eq:phi2a}
\frac{\lambda}{2}\int_{\mathbb R^2}|\phi|^4\ln|\phi|^2\, dx
-\frac{\lambda}{4}\int_{\mathbb R^2}|\phi|^4\, dx+\omega \int_{\mathbb R^2}|\phi|^2\, dx= 0,
\end{equation}
or, in other words, $V(\phi)=0$.
By taking \eqref{eq:phi1}$-2\times$\eqref{eq:phi2a} we infer \eqref{eq:phi2} for sufficiently ``nice" $\phi$,
and a limiting argument allows us to extend this result to general $\phi \in H^1(\mathbb R^2)$.
In particular, \eqref{eq:phi2} also implies that $\omega >0$ is necessary for nontrivial $\phi$.
Next, we consider, for $\varepsilon>0$:
\begin{equation*}
c_\varepsilon = \sup_{0<z<1}z^\varepsilon \ln \frac{1}{z}.
\end{equation*}
Introducing $f_\varepsilon(z) =z^\varepsilon \ln \frac{1}{z}$ and computing its
derivative, we find that
\begin{equation*}
c_\varepsilon = f_\varepsilon\( e^{-1/\varepsilon}\) = \frac{1}{\varepsilon e}.
\end{equation*}
Taking $\varepsilon=1$, we infer
\begin{equation*}
0\le \int_{|\phi|^2<\sqrt{e}}|\phi|^4\ln\frac{\sqrt e}{|\phi|^2}\, dx\le
\frac{1}{\sqrt e} \int_{\mathbb R^2} |\phi|^2 \, dx.
\end{equation*}
Using this within the Pohozaev identity \eqref{eq:phi2a}, which can be rewritten as
\[
\frac{\lambda}{2}\int_{\mathbb R^2}|\phi|^4\ln\frac{|\phi|^2}{\sqrt e}\, dx+\omega \int_{\mathbb R^2}|\phi|^2\, dx= 0,
\]
then yields
\begin{equation*}
\frac{\lambda}{2}\int_{|\phi|^2\ge \sqrt e}
|\phi|^4\ln\frac{|\phi|^2}{\sqrt e} \, dx+\omega
\int_{\mathbb R^2}|\phi|^2 \, dx= \frac{\lambda}{2}\int_{|\phi|^2< \sqrt e }
|\phi|^4\ln\frac{\sqrt e}{|\phi|^2}\, dx\le \frac{\lambda}{2\sqrt e}\| \phi\|_{L^2}^2.
\end{equation*}
Since the l.h.s. is the sum of two positive terms (unless $\phi \equiv 0$), this yields the condition that $0<\omega<\tfrac{\lambda}{2\sqrt e}$.
\end{proof}
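The elementary computation of $c_\varepsilon$ above is easy to confirm numerically; the following sketch (not part of the proof) compares a grid maximum of $f_\varepsilon$ with the claimed value $1/(\varepsilon e)$:

```python
import math

# Grid confirmation of the elementary computation above: for
# f_eps(z) = z^eps * ln(1/z) on (0,1), the supremum is c_eps = 1/(eps*e),
# attained at z = exp(-1/eps).
def c_eps_grid(eps, n=100_000):
    return max((k / n) ** eps * math.log(n / k) for k in range(1, n))

def c_eps_closed(eps):
    return 1.0 / (eps * math.e)
```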
Next, we shall show that the necessary condition on $\omega$ obtained above is also sufficient for the existence of positive ground states.
\begin{proposition}[Existence of ground states]\label{prop:ex}
Let $0<\omega<\tfrac{\lambda}{2\sqrt e}$.
Then \eqref{eq:soliton} has a solution $\phi_\omega$, such that:
\begin{enumerate}
\item $\phi_\omega>0$ on $\mathbb R^2$.
\item $\phi_\omega$ is radially symmetric, i.e., $\phi_\omega=\phi_\omega(r)$ with $r=|x|$, and non-increasing.
\item $\phi_\omega\in C^2(\mathbb R^2)$.
\item The derivatives of $\phi_\omega$ up to order two decay
exponentially, i.e.,
\begin{equation*}
\exists \delta>0,\quad |{\partial}^\alpha \phi_\omega(x)|\lesssim
e^{-\delta|x|},\quad |\alpha|\le 2.
\end{equation*}
\item For every solution $\varphi$ to \eqref{eq:soliton}, we have
\begin{equation*}
0< S(\phi_\omega)\le S(\varphi),
\end{equation*}
where $S$ is the Lagrangian defined in \eqref{eq:lagrange}.
\end{enumerate}
\end{proposition}
\begin{proof}
With the exception of the exponential decay asserted in $(4)$, this statement is a direct quotation of \cite[Th\'eor\`eme~1]{BGK83}. We therefore only need to check that
the function $g$, defined in \eqref{eq:g}, satisfies the conditions $(g.0)-(g.3)$ imposed in \cite{BGK83}.
To this end, we first note that the function $g\in C(\mathbb R;\mathbb R)$ is obviously odd, and that
\begin{equation*}
\lim_{s\to 0} \frac{g(s)}{s}=-2\omega<0,\quad \text{since }\omega>0.
\end{equation*}
Thus $(g.0)$ and $(g.1)$ are indeed satisfied. In addition, we see that $g$ is sub-exponential at infinity, hence satisfying condition $(g.3)$. It remains to check $(g.2)$: an integration
by parts yields, for $z>0$,
\begin{equation*}
G(z) = -\omega z^2 -4\lambda\int_0^z s^3\ln s \, ds = -\omega z^2
-\lambda z^4\ln z
+\lambda \frac{z^4}{4}=-\omega z^2 -\frac{\lambda}{2} z^4\ln
\frac{z^2}{\sqrt e}.
\end{equation*}
The map $z\mapsto \tfrac{z^2}{4}- z^2\ln z$ reaches its maximum at $z^*=
e^{-1/4}$, and
\begin{equation*}
G\(e^{-1/4}\) =\frac{1}{\sqrt e}\( -\omega +\frac{\lambda}{2\sqrt e}\)>0,
\end{equation*}
by our assumption on $\omega$. Therefore, also $(g.2)$ is satisfied and we obtain our result.
Finally, the exponential decay of the solution (together with its derivatives) follows from standard arguments for ordinary differential equations, see, e.g., \cite[Section 4.2]{BL83a}.
\end{proof}
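For the reader who wants to double-check the threshold numerically, the following sketch (illustration only, with $\lambda=1$) evaluates $G$ at $z^*=e^{-1/4}$ and confirms that $G(e^{-1/4})$ changes sign exactly at $\omega=\lambda/(2\sqrt e)$:

```python
import math

# Numerical check of the threshold: with G as computed above,
#   G(e^{-1/4}) = (1/sqrt(e)) * (-w + lam/(2*sqrt(e))),
# so G takes positive values iff 0 < w < lam/(2*sqrt(e)).
def G(z, w, lam=1.0):
    return -w * z ** 2 - 0.5 * lam * z ** 4 * math.log(z ** 2 / math.sqrt(math.e))

lam = 1.0
w_crit = lam / (2.0 * math.sqrt(math.e))
z_star = math.exp(-0.25)
below = G(z_star, 0.5 * w_crit, lam)   # w below the threshold: G(z*) > 0
above = G(z_star, 1.5 * w_crit, lam)   # w above the threshold: G(z*) < 0
closed = (-0.5 * w_crit + lam / (2.0 * math.sqrt(math.e))) / math.sqrt(math.e)
```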
\subsection{Uniqueness and further properties}\label{sec:prop}
Having obtained existence of nonlinear ground states, we shall now derive further properties for them.
\begin{lemma}[$L^\infty$-bound]
Let $\phi_\omega$ be a nonlinear ground state.
Then there exists a unique $z_\omega \in (\tfrac{1}{e}, 1)$, satisfying $z_\omega \to 1$ as $\omega\to 0_+$, such that
\[
0<\phi_\omega(x)<\sqrt{z_\omega}, \ \text{for all $x\in \mathbb R^2$.}
\]
\end{lemma}
\begin{proof} In view of Proposition \ref{prop:ex}, we know that $\phi_\omega=\phi_\omega(r)>0$ reaches its
maximum at the origin, hence $\Delta \phi_\omega(0)\le 0$, and thus
\begin{equation*}
\big(\lambda \phi_\omega^3\ln \phi_\omega^2 +\omega \phi_\omega\big)\big|_{r=0}\le 0.
\end{equation*}
Therefore, since $\phi_\omega(0)>0$,
\begin{equation*}
z\ln z \le -\frac{\omega}{\lambda},\quad \text{where} \quad z=\phi_\omega(0)^2.
\end{equation*}
The map $z\mapsto z\ln z$ is negative exactly on $(0,1)$, and
reaches its minimum value $-\tfrac{1}{e}$ at $z_\ast=\tfrac{1}{e}$. Since $\omega \in (0,\tfrac{\lambda}{2\sqrt e})$
by assumption, there exists a unique $z_\omega\in (\tfrac{1}{e},1)$
such that
\begin{equation*}
z_\omega\ln z_\omega = -\frac{\omega}{\lambda},
\end{equation*}
and $z_\omega\to 1$ as $\omega\to 0$.
\end{proof}
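A minimal numerical illustration of the defining equation for $z_\omega$ (not part of the proof; the function name and the bisection tolerance are ad hoc choices):

```python
import math

# Bisection for the unique root z_omega of z*ln(z) = -w/lam on (1/e, 1);
# requires 0 < w/lam < 1/(2*sqrt(e)) so that -w/lam > -1/e.
def z_omega(w, lam=1.0, tol=1e-12):
    f = lambda z: z * math.log(z) + w / lam    # increasing on (1/e, 1)
    lo, hi = 1.0 / math.e, 1.0 - 1e-15
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As a consistency check, $z_\omega\to 1$ as $\omega\to 0_+$, matching the statement of the lemma.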
\begin{remark} The proof can be generalized to any sufficiently smooth solution $\phi$ to \eqref{eq:soliton}, not necessarily radial and decreasing.
Indeed, the same argument as above shows that, at any point $x_0\in \mathbb R^2$
where $|\phi|$ reaches its maximum, $|\phi(x_0)|\le \sqrt{z_\omega}$. Hence,
the above estimate generalizes to
\begin{equation*}
|\phi(x)|\le \sqrt{z_\omega},\quad \forall x\in \mathbb R^2,
\end{equation*}
as soon as $\phi\in C^2(\mathbb R^2)$ solves \eqref{eq:soliton}.
In particular, $|\phi(x)|<1$ for all $x\in \mathbb R^2$, hence $\ln
|\phi|^2<0$, i.e., the nonlinearity can be considered fully focusing.
\end{remark}
We now turn to the question of uniqueness of nonlinear ground states.
\begin{lemma}[Uniqueness]
There exists at most one positive solution $\phi_\omega$ to \eqref{eq:soliton}.
\end{lemma}
\begin{proof}
This result follows from \cite[Theorem 1.1]{JANG2010} provided we can check the conditions $(f1)-(f3)$ imposed on $g$. In view of \eqref{eq:g}, we
see that $g(0)=0$ and that $g$ is continuous on $[0, \infty)$. Recall that its anti-derivative is
\[
G(z) = \lambda z^2\left(\frac{z^2}{4}
- z^2\ln z - \tfrac{\omega}{\lambda}\right) \equiv \lambda z^2 \tilde g(z).
\]
A straightforward calculation shows that
$\tilde g$ is strictly increasing on $[0, e^{-1/4})$ and strictly decreasing on $(e^{-1/4}, \infty)$.
In addition, we know that
\[
\tilde g(0)=-\tfrac{\omega}{\lambda}<0,\quad
\tilde g\(e^{-1/4}\)>0, \quad \text{and $\tilde g(z)\to -\infty$, as $z\to +\infty$.}
\]
Thus, we can choose $u_1$ as the unique zero of $\tilde g$ on the interval $[0, e^{-1/4})$.
Furthermore we claim that we can choose $\bar u = \sqrt{z_\omega}$. To this end, one first checks that there exists a unique $\alpha \in [0, e^{-1/2})$,
such that $g(0)=g(\alpha)=g(\sqrt{z_\omega})=0$ and
\[
g(z)< 0 \ \text{on $[0, \alpha) \cup (\sqrt{z_\omega}, \infty)$, while} \ g(z)>0 \ \text{on $(\alpha , \sqrt{z_\omega})$.}
\]
By the choice of $u_1$, we have that
\[
G(u_1)=\int_0^{u_1} g(z) \, dz = 0,
\]
and hence $\alpha < u_1$. In particular, since $g(z)>0$ on $(\alpha , \sqrt{z_\omega})$, this implies that $G(z)>0$ on $(u_1, \sqrt{z_\omega})$.
Finally, to satisfy condition $(f3)$, one needs to check that $s(z)= \frac{z g'(z)}{g(z)}$ is decreasing on $[u_1, \sqrt{z_\omega})$. This follows from a
lengthy calculation which shows that
\[
g(z)^2s'(z) = 16\lambda z^3\left(2{\omega} (1+\ln z)-\lambda z^2\right)<0,
\]
on $(u_1 , \sqrt{z_\omega})$. We therefore have all the necessary ingredients to conclude uniqueness of the ground state.
\end{proof}
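The monotonicity of $s$ claimed in the last step can be spot-checked numerically; the following sketch (illustration only, for the sample values $\lambda=1$ and $\omega=0.2<\tfrac{1}{2\sqrt e}$) samples $s$ on a grid inside $(u_1,\sqrt{z_\omega})$, which for these values is roughly $(0.43,0.88)$:

```python
import math

# Spot-check (lam = 1, w = 0.2 < 1/(2*sqrt(e)) ~ 0.303) that s(z) = z g'(z)/g(z)
# is decreasing on a grid inside (u_1, sqrt(z_omega)), where g > 0.
lam, w = 1.0, 0.2
g  = lambda z: -2 * w * z - 2 * lam * z ** 3 * math.log(z ** 2)
gp = lambda z: -2 * w - 2 * lam * (3 * z ** 2 * math.log(z ** 2) + 2 * z ** 2)
s  = lambda z: z * gp(z) / g(z)
zs = [0.45 + k * (0.85 - 0.45) / 200 for k in range(201)]
vals = [s(z) for z in zs]
decreasing = all(a > b for a, b in zip(vals, vals[1:]))
```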
The proof of Theorem \ref{thm:ground} is now complete.
\smallbreak
{\bf Asymptotics for $\omega \to 0$.}
To show that $M(\phi_\omega)\to 0$, as $\omega \to 0$,
one can follow the ideas in \cite{KOPV17} for the cubic-quintic case
(see also \cite{MorozMuratov2014}). In there, the asymptotic regime
$\omega\to 0$ is analyzed through the rescaling
\begin{equation*}
\psi_\omega(x) =\frac{1}{\sqrt\omega}\phi_\omega\(\frac{x}{\sqrt\omega}\),
\end{equation*}
which is $L^2(\mathbb R^2)$-unitary. One finds that $\psi_\omega$ solves
\begin{equation*}
-\Delta \psi_\omega +\omega \lambda \psi_\omega^5 -\lambda \psi_\omega^3+\psi_\omega=0,
\end{equation*}
and thus, the limit $\omega\to 0$ is no longer singular. Moreover, in
the 2D case,
\[
M(\psi_\omega) = M(\phi_\omega) \Tend \omega 0 M(Q),
\]
where $Q$ is the cubic ground state solution to
\begin{equation*}
-\frac{1}{2}\Delta Q -\lambda Q^3 +Q=0.
\end{equation*}
In our case, the logarithm is not
compatible with such a rescaling.
Instead, we define
\begin{equation*}
\psi_\omega(x)
=\sqrt{\frac{\ln\frac{1}{\omega}}{\omega}}\phi_\omega\(\frac{x}{\sqrt\omega}\),
\end{equation*}
and a computation shows that $\psi_\omega$ solves
\begin{equation*}
-\frac{1}{2}\Delta \psi_\omega -\lambda \psi_\omega^3+
\psi_\omega = \lambda \frac{\ln \ln
\frac{1}{\omega}}{\ln\frac{1}{\omega}} \psi_\omega^3
-\frac{\lambda}{\ln\frac{1}{\omega}} \psi_\omega^3\ln \psi_\omega^2 .
\end{equation*}
Recalling that, as $\omega \to 0$
\[
1\ll \ln \ln\frac{1}{\omega}\ll \ln\frac{1}{\omega} ,
\]
and using the analyticity of $\psi_\omega$ in $\omega$, we have $\psi_\omega \Eq \omega 0 Q$, and thus, in terms of $\phi_\omega$,
\begin{equation*}
\phi_\omega(x)\Eq \omega 0 \sqrt{\frac{\omega}{\ln
\frac{1}{\omega}}}Q(x\sqrt\omega).
\end{equation*}
In turn, this implies that
\[
M(\phi_\omega) = \frac{1}{\ln
\frac{1}{\omega}} M(Q) \Tend \omega 0 0.
\]
These formal arguments can be made rigorous by following the steps in \cite{KOPV17}, which are based on the use of the linearized operator
\[L: f \mapsto -\frac{1}{2}\Delta f-3\lambda Q^2f+f,\]
which is known to be an isomorphism $L: H^1_{\rm rad}\to H^{-1}_{\rm rad}$, cf. \cite{Weinstein85}. The implicit function theorem then allows one to write
$\psi_\omega$ in terms of $Q$ plus lower order corrections involving
$L^{-1}$. In the present case, the situation is similar, since the
spectral analysis presented in \cite{KOPV17} is readily
adapted. Details are left to the interested reader.
\begin{remark}\label{rem:smallness}
This computation also shows that the $L^\infty$-bound derived before is
far from being sharp for small $\omega$.
The fact that the $L^2$-norm of $\phi_\omega$
can be arbitrarily small, is in sharp contrast with the cubic-quintic
case. Also, \eqref{eq:phi2} shows that the $H^1$-norm of
$\phi_\omega$
can be arbitrarily small: smallness in $H^1$ does not guarantee
scattering. The smallness of the momentum in
\cite[Theorem~2.1]{NakanishiOzawa} (and thus $\|u_0\|_\Sigma$ sufficiently
small) must be considered as necessary (since $\phi_\omega$ decays
exponentially).
\end{remark}
\section{Orbital Stability}\label{sec:stab}
We start by recalling that for $\rho>0$:
\begin{equation*}
\Gamma(\rho) = \left\{ u\in H^1(\mathbb R^d),\ M(u)=\rho\right\},
\end{equation*}
and first prove that the constrained energy is bounded below.
\begin{lemma}[Bound on the energy]\label{lem:min}
For any $\rho>0$,
\begin{equation*}
\inf\left\{E(u)\, ;\, u\in \Gamma(\rho)\right\} =-\nu,
\end{equation*}
for some finite $\nu>0$. \end{lemma}
\begin{proof} We can estimate
\begin{align*}
E(u) \ge & \, \frac{1}{2}\|\nabla u\|_{L^2}^2 -\frac{\lambda}{2}
\int_{|u|^2<\sqrt e}|u|^4\ln\(\frac{\sqrt e}{|u|^2}\) dx\\
\ge & \, \frac{1}{2}\|\nabla u\|_{L^2}^2 -\frac{\lambda \sqrt{e}}{2}
\int_{\mathbb R^2} {|u|^2}\, dx \ge \, \frac{1}{2}\| u\|_{H^1}^2-K,
\end{align*}
where $K= \tfrac{\rho}{2}(1+ {\lambda \sqrt{e}})>0$. Thus, all (constrained) energy-minimizing sequences are bounded in $H^1(\mathbb R^2)$
and $-\nu \ge -K>-\infty$. Moreover, for $\mu>0$, let
\[
u_\mu (x):= \mu u(\mu x) \quad \text{such that $\| u_\mu\|_{L^2(\mathbb R^2)}= \| u\|_{L^2(\mathbb R^2)}$.}
\]
Then, one finds that
\[
E(u_\mu) = \mu^2 E(u) - \frac{\lambda\mu^2}{2} \ln \( \frac{1}{\mu^2}\) \int_{\mathbb R^2} |u|^4 \, dx.
\]
Hence, $E(u_\mu)<0$ for $\mu>0$ sufficiently small, and thus $\nu >0$.
\end{proof}
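As a sanity check of the scaling computation (not part of the proof): with the normalization $E(u)=\tfrac12\|\nabla u\|_{L^2}^2+\tfrac{\lambda}{2}\int|u|^4\ln(|u|^2/\sqrt e)$ used for $E_+$ above, the identity reads $E(u_\mu)=\mu^2E(u)-\lambda\mu^2\ln(\tfrac1\mu)\int|u|^4$, which can be verified numerically on a Gaussian profile:

```python
import math

# Sanity check on a radial Gaussian u(r) = exp(-r^2/2) in 2D, using
#   int_{R^2} F(|x|) dx = 2*pi * int_0^inf F(r) r dr.
# With E(u) = 1/2 ||grad u||^2 + (lam/2) int |u|^4 ln(|u|^2/sqrt(e)),
# the rescaling u_mu(x) = mu*u(mu*x) satisfies
#   E(u_mu) = mu^2 E(u) - lam*mu^2*ln(1/mu) * int |u|^4.
def trapz(f, a, b, n=20_000):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

lam, sqrt_e = 1.0, math.sqrt(math.e)

def energy(mu):
    v = lambda r: mu * math.exp(-(mu * r) ** 2 / 2)
    dv = lambda r: -mu ** 3 * r * math.exp(-(mu * r) ** 2 / 2)
    kin = 0.5 * 2 * math.pi * trapz(lambda r: dv(r) ** 2 * r, 0.0, 20.0)
    pot = 0.5 * lam * 2 * math.pi * trapz(
        lambda r: v(r) ** 4 * math.log(v(r) ** 2 / sqrt_e) * r, 0.0, 20.0)
    return kin + pot

E1 = energy(1.0)                                     # analytically pi/4 for lam = 1
L4 = 2 * math.pi * trapz(lambda r: math.exp(-2 * r ** 2) * r, 0.0, 20.0)  # = pi/2
mu = 0.5
lhs = energy(mu)
rhs = mu ** 2 * E1 - lam * mu ** 2 * math.log(1.0 / mu) * L4
# lhs ≈ rhs, and lhs < 0: negative energy at fixed mass for small mu
```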
We shall now show that energy minimizers indeed exist, and that they are orbitally stable (as a set),
by invoking the concentration-compactness method of
\cite{PL284a} (see also \cite[Proposition~1.7.6]{CazCourant}).
\begin{proof}[Proof of Theorem \ref{thm:stab}]
We proceed in several steps:
\smallbreak
\noindent {\bf Step 1.} Let $(u_n)_{n\ge 0}\subset H^1(\mathbb R^2)$ be a minimizing sequence to \eqref{eq:8.3.5}. In view of \cite{PL284a}, we have the standard
trichotomy of concentration compactness. To rule out vanishing of the sequence, we first note that for
$n$ sufficiently large, Lemma~\ref{lem:min} implies that $E(u_n)\le
-\tfrac{\nu}{2}$, and hence, from the proof of Lemma~\ref{lem:min},
\begin{equation*}
\int_{|u_n|^2<\sqrt e}|u_n|^4\ln\(\frac{\sqrt e}{|u_n|^2}\) dx\ge \frac{\nu}{\lambda}>0.
\end{equation*}
In addition,
\[
\int_{|u_n|^2<\sqrt e}|u_n|^3\, dx \gtrsim \int_{|u_n|^2<\sqrt e}|u_n|^4\ln\(\frac{\sqrt e}{|u_n|^2}\) dx,
\]
and, thus, any minimizing sequence is bounded away
from zero in $L^3(\mathbb R^2)$.
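The comparison of the two integrals in Step 1 rests on an elementary pointwise bound: on $\{z^2<\sqrt e\}$ one has $z\ln(\sqrt e/z^2)\le 2e^{-3/4}<1$, hence $z^4\ln(\sqrt e/z^2)\le z^3$ there. A quick numerical confirmation (not part of the proof):

```python
import math

# On {z^2 < sqrt(e)}:  z*ln(sqrt(e)/z^2) = z*(1/2 - 2*ln z) is maximized at
# z = exp(-3/4), with value C = 2*exp(-3/4) < 1, so z^4*ln(sqrt(e)/z^2) <= C*z^3.
C = 2.0 * math.exp(-0.75)
z_max = math.e ** 0.25                 # boundary of the region z^2 < sqrt(e)
zs = [1e-9 + k * (z_max - 2e-9) / 20_000 for k in range(20_001)]
worst = max(z ** 4 * math.log(math.sqrt(math.e) / z ** 2) - C * z ** 3 for z in zs)
```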
\smallbreak
\noindent {\bf Step 2.} Next, we need to rule out dichotomy, in order to conclude
compactness. Arguing by contradiction, suppose that, after the extraction of some suitable subsequences,
there exist $(v_k)_{k\ge 0}$, $(w_k)_{k\ge 0}$ in $H^1(\mathbb R^2)$, such
that
\begin{equation*}
\operatorname{supp} v_k\cap \operatorname{supp} w_k=\emptyset,
\end{equation*}
as well as the following properties:
\begin{align*}
\|v_k\|_{L^2}^2\Tend k \infty \theta\rho,\quad
\|w_k\|_{L^2}^2\Tend k \infty (1-\theta)\rho,\quad\text{for some
}\theta\in (0,1),
\end{align*}
\begin{equation}\label{eq:kinetic}
\liminf_{k\to \infty} \(\int |\nabla u_{n_k}|^2 - \int|\nabla
v_k|^2 -\int |\nabla w_k|^2 \)\ge 0,
\end{equation}
and the remainder $r_k:= u_{n_k} -v_k -w_k$ satisfies
\[
\| r_k\|_{L^p}\Tend k \infty 0,
\]
for all $2\le p<\infty$. Note that this also implies
\[
\left|\int |u_{n_k}|^p -\int|v_k|^p- \int|w_k|^p\right|\Tend k \infty0,
\]
since $v_k$ and $w_k$ have disjoint support.
Denote $h(y)= y^4 \ln y$ for $y>0$. A Taylor expansion of $h(y+z)-h(y)-h(z)$, combined with an induction step, shows that
for every $\varepsilon>0$ and $N\ge 1$ there exists a $C_{\varepsilon, N}>0$, such that
\[
\left| h\( \sum^N_{n =1} y_n \) - \sum^N_{n =1} h(y_n)\right| \le C_{\varepsilon, N} \sum_{\ell \not = k}^N |y_\ell| \( |y_k|^{3-\varepsilon} + |y_k|^{3+\varepsilon}\).
\]
Applying this with $\varepsilon =1$ and $N=3$ to $v_k, w_k, r_k$, and integrating over $\mathbb R^2$, yields
\begin{align*}
& \left | \int h(u_{n_k}) - \int h(v_k) -\int h(w_k) \right| \lesssim \int h(r_k) \, +\\
& + \int |r_k| \( |v_k|^{2} + |v_k|^{4} +|w_k|^{2} + |w_k|^{4}\)
+ \int (|v_k| + |w_k| )\( |r_k|^{2} + |r_k|^{4}\),
\end{align*}
where in the second line we have used the fact that $v_k$ and $w_k$ have disjoint supports. Applying H\"older's inequality
and recalling that $\|r_k\|_{L^p}\to 0$, as $k \to \infty$, shows that all the integrals on the right hand side tend to zero in the limit $k\to \infty$, hence
\[
\left | \int h(u_{n_k}) - \int h(v_k) -\int h(w_k) \right| \Tend k
\infty 0.
\]
Recalling that
\[
\int |u_{n_k}|^2 -\int|v_k|^2- \int|w_k|^2\Tend k \infty 0,
\]
we obtain
\begin{equation*}
\int |u_{n_k}|^4\ln\(\frac{|u_{n_k}|^2}{\sqrt e}\)
-\int|v_k|^4\ln\(\frac{|v_{k}|^2}{\sqrt e}\) -
\int|w_k|^4\ln\(\frac{|w_{k}|^2}{\sqrt e}\)\Tend k \infty 0.
\end{equation*}
We consequently infer from \eqref{eq:kinetic} that
\begin{equation*}
\liminf_{k\to \infty}\(E\(u_{n_k}\)-E(v_k)-E(w_k)\)\ge 0,
\end{equation*}
and thus
\begin{equation}\label{eq:lower}
\limsup_{k\to \infty} \(E(v_k)+E(w_k)\)\le -\nu.
\end{equation}
Following an idea from \cite{CoJeSq10}, we now use a scaling argument and set
\begin{align*}
\tilde v_k(x) & = v_k\(\sigma_k^{-1/2}x\),\quad \sigma_k =
\frac{\rho}{\|v_k\|_{L^2}^2} \\
\tilde w_k(x)& = w_k\(\mu_k^{-1/2}x\),\quad \mu_k =
\frac{\rho}{\|w_k\|_{L^2}^2} .
\end{align*}
We have $M( \tilde v_k)=M(\tilde w_k)=\rho $, and hence
$E(\tilde v_k), E(\tilde w_k)\ge -\nu.$ We also find that
\begin{equation*}
E(\tilde v_k) = \sigma_k\(\frac{1}{2\sigma_k}\int |\nabla v_k|^2
+\frac{\lambda}{2}\int |v_k|^4 \ln \( \frac{|v_k|^2}{\sqrt{e}}\)\),
\end{equation*}
and so
\begin{equation*}
E(v_k) = \frac{1}{\sigma_k}E(\tilde v_k)
+\frac{1-\sigma_k^{-1}}{2}\int |\nabla v_k|^2\ge
\frac{-\nu}{\sigma_k}+\frac{1-\sigma_k^{-1}}{2}\int |\nabla
v_k|^2.
\end{equation*}
Doing the same for $E(w_k)$, yields
\begin{align*}
E(v_k) + E(w_k) &\ge -\nu\( \frac{1}{\sigma_k}+\frac{1}{\mu_k}\)
+\frac{1-\sigma_k^{-1}}{2}\int |\nabla v_k|^2
+\frac{1-\mu_k^{-1}}{2}\int |\nabla w_k|^2\\
&\ge -\nu\( \frac{1}{\sigma_k}+\frac{1}{\mu_k}\)
+\frac{1-\sigma_k^{-1}}{2\|v_k\|_{L^2}^2}\| v_k\|_{L^4}^4
+\frac{1-\mu_k^{-1}}{2\|w_k\|_{L^2}^2}\| w_k\|_{L^4}^4,
\end{align*}
where in the second step, we have used the Gagliardo-Nirenberg inequality. Passing to the
limit, we infer
\begin{align*}
\liminf_{k\to \infty} \( E(v_k) + E(w_k) \)\ge -\nu
+\frac{1}{2}\min\(\frac{1-\theta}{\theta\rho},
\frac{\theta}{(1-\theta)\rho}\) \liminf_{k\to \infty} \|u_{n_k}\|_{L^4}^4,
\end{align*}
for any $\theta\in (0,1)$. By H\"older's inequality $\| u \|_{L^3}^3 \le \|u\|_{L^4}^2 \| u \|_{L^2}$ and thus,
in view of Step 1 and the fact that $\|u_{n_k}\|_{L^2}^2=\rho>0$, we infer $$\liminf_{k\to \infty}\|u_{n_k}\|_{L^4}^4>0.$$
This is in contradiction to \eqref{eq:lower} and consequently rules out dichotomy.
\smallbreak
\noindent{\bf Step 3.} We can now invoke \cite[Proposition 1.7.6(i)]{CazCourant} to deduce that there exist $u\in H^1(\mathbb R^2)$ and $(y_k)\subset \mathbb R^2$ such that
$u_{n_k}(\cdot - y_k)\to u$ in $L^p(\mathbb R^2)$ for all $2\le p<\infty$. Together with the weak lower semicontinuity of the $H^1$ norm
and the usual bound on the nonlinear potential energy, this implies
\[
E(u) \le \lim_{k\to \infty}E(u_{n_k})=-\nu,
\]
and thus, the existence of a constrained energy minimizer.
\smallbreak
\noindent{\bf Step 4.} The orbital stability now follows by invoking classical arguments of \cite{CaLi82} (see also \cite{CazCourant}):
Assume, by contradiction, that
there exists a sequence of initial data $(u_{0,n})_{n\in \mathbb N}\subset H^1(\mathbb R^2)$, such that
\begin{equation}\label{eq:8.3.17}
\|u_{0,n}-\phi\|_{H^1}\Tend n \infty 0,
\end{equation}
and a sequence $(t_n)_{n\in \mathbb N}\subset \mathbb R$, such that the sequence of solutions $u_n$ to \eqref{eq:nls} associated to the initial data $u_{0,n}$ satisfies
\begin{equation}\label{eq:8.3.18}
\inf_{\varphi\in \mathcal E(\rho)}\left\|u_n(t_n, \cdot) -
\varphi\right\|_{H^1}>\varepsilon,
\end{equation}
for some $\varepsilon>0$.
Denoting $v_n=u_n(t_n, \cdot)$, the above inequality reads
\begin{equation*}
\inf_{\varphi\in \mathcal E(\rho)}\|v_n-\varphi\|_{H^1}>\varepsilon.
\end{equation*}
In view of \eqref{eq:8.3.17}, we find that, on the one hand:
\begin{equation*}
\int_{\mathbb R^2}|u_{0,n}|^2 \Tend n \infty \int_{\mathbb R^2}|\phi|^2,\quad
E\(u_{0,n}\)\Tend n \infty E(\phi)=\inf_{v\in \Gamma(\rho)}E(v).
\end{equation*}
On the other hand, the conservation laws for mass and energy imply
\begin{equation*}
M(v_n) \Tend n \infty M(\phi),\quad
E\(v_{n}\)\Tend n \infty E(\phi),
\end{equation*}
and thus, $(v_n)_n$ is a minimizing sequence for the problem
\eqref{eq:8.3.5}. From the previous steps, there exist a
subsequence, still denoted by $(v_n)_{n\in \mathbb N}$, and a sequence of points
$(y_n)_{n\in \mathbb N}\subset \mathbb R^2$, such that $v_n(\cdot -y_n)$ has a strong limit $u$
in $H^1(\mathbb R^2)$. In particular, $u$ is a minimizer for \eqref{eq:8.3.5},
i.e. $u\in \mathcal E(\rho)$, which contradicts \eqref{eq:8.3.18}.
\end{proof}
\section{Introduction}
The estimation of treatment effects is one of the core problems in causal inference. A treatment effect is a measure used to compare interventions in randomized experiments, policy analysis, and medical trials. The treatment effect measures the difference in outcomes between units assigned to the treatment versus those assigned to the control. There have been a variety of related approaches for estimating treatment effects including those based on graphical models \citep{pearl2009causal} and the potential outcomes framework \citep{rubin1978bayesian}. In this paper, we develop methodology that builds on the potential outcomes framework as defined in \cite{rubin2005causal} to estimate treatment effects.
In the potential outcomes framework we compare the observed outcome to the outcome under the counterfactual, that is, what the outcome would be under a different set of treatment conditions. If the counterfactual outcome were known, then the treatment effect on an individual unit would be the difference between the outcome under the observed and counterfactual interventions. The \emph{fundamental problem of causal inference} is that, in general, for any unit one can only observe the outcome under a single treatment condition. As a consequence, unit-level causal effects are not identifiable. However, population-level causal effects can be identified under some standard assumptions (see Section \ref{former}). An estimator of population-level effects is the average treatment effect (ATE), which is a measure of the difference in the mean outcomes between units assigned to the treatment and units assigned to the control. If treatment effects are homogeneous across individuals, then estimators such as the ATE that consider causal effects at an aggregate level are reasonable; however, such estimators will overlook subgroup- or covariate-level heterogeneity in treatment effects. There is evidence that heterogeneity in treatment effects is more the rule than the exception \citep{heckman2006understanding, green2012modeling, xie2012estimating}.
A central quantity in addressing heterogeneous treatment effects is the conditional average treatment effect (CATE), which is the average treatment effect conditional on the covariate level of a unit of observation. One can consider the CATE as a difference of two regression functions -- the average response given the treatment at a set of covariate levels minus the average response under the control condition at the same set of covariate levels. One can estimate the ATE by marginalizing the CATE over the joint distribution of the covariates. There are a number of approaches for estimating the two aforementioned causal estimands. The main approach for modeling heterogeneous treatment effects based on the CATE is conditional mean regression. Under this approach, we model the CATE as the difference between the conditional mean outcome given the treatment at particular covariate levels and the mean outcome given the control at the same covariate levels \citep{ding2017causal}. The implementation of these models can be approached both parametrically and non-parametrically.
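To make the conditional mean regression view of the CATE concrete, the following Python sketch (purely illustrative -- the data-generating process, the linear CATE $\tau(x)=1+2x$, and all names are hypothetical and not taken from the cited works) fits the treated and control conditional means separately and recovers the ATE by marginalizing the estimated CATE over the covariates:

```python
import random

random.seed(1)

# Hypothetical randomized experiment with linear CATE tau(x) = 1 + 2x:
#   Y = X + W * tau(X) + noise,  W ~ Bernoulli(1/2),  X ~ Uniform(-1, 1).
def simulate(n):
    rows = []
    for _ in range(n):
        x = random.uniform(-1.0, 1.0)
        w = 1 if random.random() < 0.5 else 0
        y = x + w * (1.0 + 2.0 * x) + random.gauss(0.0, 0.1)
        rows.append((x, w, y))
    return rows

def ols(pairs):
    # closed-form least-squares fit y ~ a + b*x
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

data = simulate(50_000)
a1, b1 = ols([(x, y) for x, w, y in data if w == 1])    # treated conditional mean
a0, b0 = ols([(x, y) for x, w, y in data if w == 0])    # control conditional mean
cate = lambda x: (a1 + b1 * x) - (a0 + b0 * x)          # estimated CATE
ate_hat = sum(cate(x) for x, _, _ in data) / len(data)  # ATE: marginalize over X
```

In this simulation the true ATE is $E[1+2X]=1$, and the difference of the two fitted regression lines recovers $\tau(x)$ up to sampling noise.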
The most popular parametric methods for estimating the difference between the conditional mean outcomes include linear and polynomial regression \citep{pearl2009causal}, along with penalized regression approaches such as least absolute subset selection operator and ridge regression \citep{tibshirani1996regression}. At the other end of the spectrum are non-parametric regression models to estimate the difference between the conditional means. Examples include boosting \citep{powers2017some}, Bayesian additive regression trees (BART) \citep{hill2011bayesian, hahn2017bayesian, chipman2010bart} as well as classical regression trees \citep{athey2015machine, breiman1984classification} and random forests \citep{wager2017estimation, foster2011subgroup, breiman2001random}. These methods have some limitations to their use and we provide a brief discussion of these.
The use of random forests for CATE estimation as defined in \cite{wager2017estimation} provides some interesting theoretical results that allow for probabilistically valid statistical inference. These methods are theorized to outperform classical methods particularly in the presence of irrelevant covariates. This technique, however, has been demonstrated to be outperformed in application \citep{hahn2017bayesian}. In addition, without a procedure for imposing a degree of regularization, random forests are difficult to actually deploy for heterogeneous treatment effect estimation \citep{wendling2018comparing}. BART and its variants \citep{hahn2017bayesian, hill2011bayesian} present a persuasive argument for their use in application, but there is limited work on their formal inferential properties \citep{wager2017estimation} for learning heterogeneous treatment effects. Specifically for BART, formal statistical analysis is hindered by the lack of theory establishing posterior concentration around the true conditional mean function -- the key quantity of interest in heterogeneous treatment effect estimation via conditional mean regression.
An alternative to modeling the difference in conditional mean outcomes is the use of transformed responses or outcome variables (TRV) \citep{dudik2011doubly, beygelzimer2009offset} that is ideologically similar to concepts of \emph{inverse probability weighting} (IPW) \citep{hirano2003efficient}. The TRV approach introduces a transformation for the outcome and the treatment indicator variable for which the conditional expectation given a covariate level is equivalent to the CATE. This allows it to be used with \emph{off-the-shelf} machine learning techniques and has been applied to optimal treatment policy estimation in the same vein as ideas of \emph{double-robustness} as reviewed in \cite{ding2017causal} that combine regression adjustment with weighting. More recent work on the TRV has attempted to model it as a function of the observed covariates via regression trees \citep{athey2015machine} and boosting \citep{powers2017some}. This has raised questions of estimation quality of the approach given the high variance of the procedure. We assert that this is a consequence of the properties of the TRV that have not been explicitly accounted for in the model since past work has relied on using it as a benchmark for other methods \citep{athey2016recursive}.
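A minimal sketch of the TRV idea (illustrative only; the simulation below is hypothetical): with a known propensity score $e(x)$ -- here $e\equiv 1/2$, i.e. randomized assignment -- the transformed outcome $Y^{*}=Y\,(W-e)/\{e(1-e)\}$ satisfies $E[Y^{*}\mid X]=\mathrm{CATE}(X)$, so plain averages of $Y^{*}$ are unbiased for treatment effects; the snippet also exhibits the large variance of $Y^{*}$ alluded to above:

```python
import random

random.seed(2)

# Transformed outcome with known propensity e(x) (here e = 1/2, randomized
# assignment):  Y* = Y*(W - e) / (e*(1-e)),  so that E[Y* | X] = CATE(X).
# The simulated model (a hypothetical choice) has true CATE(x) = 1 + 2x,
# hence ATE = E[1 + 2X] = 1 for X ~ Uniform(-1, 1).
def transformed(w, y, e=0.5):
    return y * (w - e) / (e * (1.0 - e))

n, ystar = 200_000, []
for _ in range(n):
    x = random.uniform(-1.0, 1.0)
    w = 1 if random.random() < 0.5 else 0
    y = x + w * (1.0 + 2.0 * x) + random.gauss(0.0, 0.1)
    ystar.append(transformed(w, y))

ate_hat = sum(ystar) / n                              # unbiased for the ATE
var_ystar = sum((t - ate_hat) ** 2 for t in ystar) / n
# var_ystar greatly exceeds the outcome noise variance: the price of the transformation
```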
In this paper we introduce a novel non-parametric Bayesian model based on Gaussian process regression \citep{singh2016gaussian, rasmussen2006Gaussian} for inference of the TRV that allows us to infer a posterior distribution on the CATE. The model we propose is a finite mixture of Gaussian-processes \citep{rasmussen2000infinite} that leverages the distribution implied by the transformation. This formulation is aimed at improving the overall quality of inference on the treatment effects with a correctly specified model.
This approach has benefits over both conditional mean regression and other TRV based techniques. In practice, we never estimate either the treatment nor the control function perfectly and different covariate distributions for the treatment and control groups can lead to biases in the treatment effect estimation \citep{powers2017some}. The TRV allows for the joint modeling of information from both the treated and control groups which can help circumvent the aforementioned estimation challenge which for instance has been discussed as a specific limitation of conditional mean regression with random forests \citep{wager2017estimation}. This joint modeling is also an improvement over Bayesian techniques that place individual vague priors on the treatment and control outcome models since the prior on the treatment effect as the difference of the two is possibly \emph{doubly} vague \citep{hahn2017bayesian}. This can make inference a challenge since it is difficult to control the degree of heterogeneity that the model adapts to. Furthermore the TRV generates unbiased estimates for the CATE \citep{powers2017some}.
In addition to its benefits over conditional mean regression methods, the model we introduce offers three advantages over other TRV modeling approaches. First, we significantly improve the accuracy of point estimation by explicitly modeling the distribution of the transformed outcome. Second, by modeling the distribution of the transformed outcome specifically, we are able to greatly reduce the variance of causal estimands, i.e. the average treatment effect and the conditional average treatment effect. Reducing the variance of the estimators is crucial since this has been the main criticism of the TRV approach \citep{athey2015machine, powers2017some}. This provides tighter uncertainty intervals relative to the approaches discussed in \cite{athey2015machine} and \cite{wager2017estimation}. Third, our approach is well suited for instances when the treated and control groups share information, since our proposed mechanism jointly models the behavior of both via the transformation.
The methodology we introduce makes a number of significant contributions to the estimation of heterogeneous treatment effects. Our main contribution is that we improve the overall quality of inference by improving the point estimation with a correctly specified model. In addition, the proposed framework is flexible in that we do not assume a functional form for how heterogeneity of treatment effects are driven by the levels of the observed covariates. Finally, our proposed framework is easily adapted to studies where the mechanism by which individuals receive the treatment is unknown. For this problem, past work has relied on a two-stage procedure for learning this treatment assignment mechanism first and then utilizing this in the model. We instead propose an approach whereby the treatment assignment mechanism and the treatment effects are jointly learnt in a unified framework. By working under this paradigm we have a twofold gain. First, the uncertainty quantification from our proposed model reflects uncertainty from all stages of inference including the learning of the assignment mechanism, and the treatment effects. Second as a by-product of the \emph{feedback} in between the two estimation stages, the assignment mechanism makes more complete use of the data, which can improve estimation of causal effects.
The remainder of this paper is organized as follows: in Section \ref{former}, we introduce the TRV, the relevant notation and the assumptions inherent to the TRV approach. We state our new model in Section \ref{model}. Our approach is benchmarked against TRV regression trees and random forests, along with non-TRV weighted tree methods as discussed in \cite{athey2016recursive}, as well as Bayesian tree models \citep{hahn2017bayesian, hill2011bayesian} on both simulated and real data in Section \ref{data}. We close with a summary of our findings and possible areas of future work.
\section{Transformed Response Variables Framework}\label{former}
In order to formulate the approach of transformed outcomes, we first define some notation that we will use throughout this paper. The observed data $\mathcal{D}$ consists of a sample of size $n$ where for each unit of observation we are given a response variable $Y_{i} \in \mathbb{R}$ and a covariate vector $X_{i} \in \mathbb{R}^{p}$. In addition to the observed data, we denote as $W_{i} \in \{0, 1\}$ the treatment assignment. The corresponding treatment assignment probability is denoted as $e_{i} = \mathbb{P}(W_{i} = 1)$. Finally, the potential outcome is denoted as $Y_{i}(W_{i} = w)$.
Under the potential outcomes framework, in order to estimate treatment effects from observational data certain assumptions about the treatment assignment mechanism need to be satisfied. Briefly, these assumptions are that the treatment assignment is \emph{individualistic} (A1), \emph{probabilistic} (A2) and \emph{ignorable} (A3). Details of these assumptions can be found in \cite{imbens2015causal}. A1 and A2 are implied under the assumption that the units of observation are a simple random sample from the target population that are independent and identically distributed.
Assumptions A2 and A3 are together known as the \emph{strong ignorability} assumption and grant the identifying equivalence between the potential outcome and the causal conditioning,
$Y(W = w) \stackrel{P}{=} Y \mid W = w$. All three of the assumptions summarized here are always satisfied in randomized trials; in observational studies the assumptions may hold to varying degrees.
For instance, A2, which is also sometimes referred to as the overlap condition can be directly assessed. However, by comparison A3 is untestable and therefore indirect techniques are needed to determine the degree to which it is satisfied most commonly via sensitivity analyses \citep{rosenbaum1982assessing}. These assumptions are necessary for the formal results in the transformed response variable framework to hold.
Beyond these, we make one additional assumption that allows us to simplify the statistical model we specify in this paper: the Stable Unit Treatment Value Assumption (SUTVA) --- this condition assumes no interference between observations, and that there are no multiple versions of the treatment (A4). In its absence, we would need to define a different potential outcome for each unit of observation, not just for each treatment received by that unit but for each combination of treatments received by every other observation in the experiment. Relaxing this assumption will be discussed in Section \ref{futurework} as an avenue that our future work will aim to explore.
The causal estimands considered are the conditional average treatment effect (CATE), that we denote as $\tau^{CATE}$ and the average treatment effect (ATE) that we denote as $\tau^{ATE}$. $\tau^{CATE}$ is the primary estimate of interest in modeling heterogeneous treatment effects and is defined as,
\begin{equation}
\label{eq:cate}
\tau^{CATE} = \mathbb{E}_{Y}[Y(1) - Y(0) \mid X = x],
\end{equation}
the ATE can be derived by integrating over the joint distribution of the covariates,
\begin{equation}
\label{eq:ate}
\tau^{ATE} = \mathbb{E}_{X}\Big[ \mathbb{E}_{Y}[Y(1) - Y(0) \mid X = x ] \Big] = \mathbb{E}_{X}[\tau^{CATE}].
\end{equation}
The idea behind the transformed response variable approach is to define a variable $Y_i^{*}$ whose conditional expectation given the covariates recovers the CATE under A3 (see Appendix \ref{app:A} for a proof of this result). A transformation that satisfies the above condition is,
\begin{equation}
\label{eq:trv}
Y^{*}_{i} = f(W_{i}, Y_{i}, e_{i})= \frac{W_{i} - e_{i}}{e_{i}(1-e_{i})} Y_{i}.
\end{equation}
The transformation requires knowledge of the probability of receiving the treatment. We assume that the treatment assignment probability depends on the observed covariate levels, i.e. $e_{i} = e_{i}(X = x_{i})$ is a propensity score. A trivial example is when the propensity score is a fixed, covariate-independent value, $e_{i} = e$. This setting is rarely seen in real observational causal inference problems and is therefore not considered as part of the model presented here, although \cite{athey2015machine, Athey:2015:MLC:2783258.2785466, athey2016recursive} consider it as a means of model validation.
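As a concrete illustration of the transformation, the following Monte Carlo sketch (our own, not code from the paper; the mean functions $f_1$, $f_0$ and the logistic propensity score are assumptions chosen for illustration) checks that the conditional mean of $Y^{*}$ recovers the CATE.

```python
# Monte Carlo check (our sketch, not the paper's code) that the transformed
# response Y* has conditional expectation equal to the CATE.  The mean
# functions f1, f0 and the logistic propensity e(x) are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.uniform(-1, 1, size=n)                 # one covariate for illustration
f1 = lambda t: 1.0 + 2.0 * t                   # treated conditional mean
f0 = lambda t: 0.5 - 1.0 * t                   # control conditional mean
tau = lambda t: f1(t) - f0(t)                  # true CATE: 0.5 + 3t
e = 1.0 / (1.0 + np.exp(-x))                   # propensity score e(x)

W = rng.binomial(1, e)
y = np.where(W == 1, f1(x), f0(x)) + rng.normal(0.0, 1.0, size=n)
y_star = (W - e) / (e * (1.0 - e)) * y         # the TRV

# E[Y* | x near 0.3] should be close to tau(0.3) = 1.4, despite the high
# variance of the individual transformed responses.
sel = np.abs(x - 0.3) < 0.05
print(y_star[sel].mean(), tau(0.3))
```

Averaging $Y^{*}$ within a narrow covariate bin approximates its conditional expectation, which matches $\tau^{CATE}$ at that covariate level even though individual transformed responses are very noisy.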
\subsection{Strengths and Weaknesses of Past Work in TRV Modeling}\label{LMS}
TRV modeling offers three main advantages when used for estimating treatment effects as demonstrated in prior studies. Foremost amongst these is that the TRV can easily be modeled with any supervised learning method. For instance, regression trees and random forests have been used \citep{athey2015machine,Athey:2015:MLC:2783258.2785466,wager2017estimation} as has boosting \citep{powers2017some}. This is not an exhaustive list, and there are a myriad of other methods that can be used in conjunction with the TRV to estimate heterogeneous treatment effects. Furthermore, relative to conditional mean regression, this method does not ignore the propensity score which explicitly enters the estimation via the transformation. Finally, based on the modeling approach used, we can address treatment effect heterogeneity flexibly and therefore avoid issues arising from model misspecification since it is likely that there are complex relationships between the covariates and heterogeneity of the treatment effects. Despite their usefulness, the TRVs have some key weaknesses.
First, as mentioned in \cite{athey2015machine} and \cite{powers2017some}, using TRVs as CATE estimators results in high variance estimates of the causal estimands. By construction, the treatment assignment probability and the assignment itself only enter the model implicitly via the transformation and are therefore only accounted for indirectly. In addition, the treatment assignment probability appears in the denominator, and if it is close to zero or one, the variance can spike. Similar difficulties have been seen with IPW estimators \citep{hirano2003efficient}, which, like this transformation, grant more weight to unlikely (tail) observations. Combining supervised learning techniques with inverse-probability weighting gives rise to double-robust estimators, which in spirit is also similar to our modeling of the transformed outcome.
\cite{ding2017causal} summarize that the instability of the estimator due to extreme treatment assignment probabilities is even worse in this case than in inverse-probability weighting, since there are potentially two sources of model misspecification. While we can address concerns of model misspecification using flexible machine-learning models, this flexibility is a double-edged sword. When the model generates predictions that are inherently high variance such as those of regression trees, this means that the method suffers in terms of efficiency and the quality of inference is degraded.
Second, uncertainty quantification using methods built atop inverse-probability weighting in general, and transformed outcomes in particular, is difficult. As discussed at length earlier, there are theoretical concerns due to the impact of extreme weights, which is a limitation of the transformation. There are also practical concerns with uncertainty quantification under specific models for the TRV as it relates to generating intervals. For single regression trees, as well as the other ensemble learning methods which have been used for TRV modeling, intervals have been generated using the bootstrap. Prior work \citep{wager2014confidence} has suggested that in certain applications the Monte Carlo error can dominate the uncertainty quantification produced. In conjunction with the high variance inherent to the aforementioned approaches, we might be unable to gather useful insights. If treatment effects are small (near zero), the conflation of the Monte Carlo noise with the underlying sampling noise may lead us to overstate the variance and therefore lower the power of our analysis. In addition, note that when the sample size is small, \cite{powers2017some} demonstrate that the variance of the TRV is small as well -- it increases with increasing sample size. Hence, in situations where bootstrapping is likely to do well for the uncertainty in the model, i.e. in large samples, the high variance of the TRV is even more of an issue.
Based on these limitations, we propose the Gaussian process mixture model in Section \ref{model}. Our proposed model attempts to overcome the aforementioned limitations by leveraging the mixture distribution implied by the transformation. In addition, we still aim to model the TRV flexibly and capture the complexity of treatment effect heterogeneity. We achieve gains in the quality of inference by constructing a likelihood that reflects the error structure imposed by the TRV under some basic assumptions that earlier work with this technique has ignored. The details of these findings will be discussed in greater depth in Section \ref{data} where these approaches are applied to real and simulated data.
\section{The Gaussian Process Mixture Model}\label{model}
We specify a non-parametric Bayesian model based on a mixture of Gaussian processes to model heterogeneous treatment effects. Our model is based on the transformed response variable framework. It is motivated by three objectives: (1) to explicitly model the distribution implied by the transformed outcome, with the goal of reducing the variance of the TRV generated estimates that have hitherto been produced using non-probabilistic models; (2) to model the two treatment groups jointly so we can borrow strength and thereby improve inference even relative to non-TRV based methods for estimating treatment effects; and (3) to make more complete use of the data by jointly modeling the transformed response as well as the treatment assignment probabilities in a one-step model. The \emph{feedback} between the two stages in joint modeling can improve the point estimation of treatment effects and the propensity scores \citep{zigler2013model}. Throughout this section we assume A1-A4 are satisfied.
\subsection{Model Specification}
\label{understanding}
A natural starting point is to consider two non-parametric regression functions for the response under treatment and control, respectively
\begin{eqnarray*}
Y_{i}(1) &=& f_{1}(x_{i}) + \varepsilon_{i}(1), \quad \varepsilon_{i}(1) \stackrel{\mathrm{iid}}{\sim} \mathrm{N}(0, \sigma^{2}), \\
Y_{i}(0) &=& f_{0}(x_{i}) + \varepsilon_{i}(0), \quad \varepsilon_{i}(0) \stackrel{\mathrm{iid}}{\sim} \mathrm{N}(0, \sigma^{2}).
\end{eqnarray*}
In expectation, the difference of these two non-parametric functions is the conditional average treatment effect. Substituting these non-parametric regression functions under the treatment and control cases in the definition of the TRV in \eqref{eq:trv} yields the following mixture model,
\begin{equation}
\label{eq:newmodel}
Y_{i}^{*} = g(x_{i}) + \varepsilon_{i}^{*},
\end{equation}
$$
\varepsilon_{i}^{*} \sim e_{i} \mathrm{N}\bigg((1-e_{i}) h(x_{i}), \frac{1}{e_{i}^{2}}\sigma^{2}\bigg) + (1-e_{i})\mathrm{N}\bigg(-e_{i} h(x_{i}), \frac{1}{(1-e_{i})^{2}}\sigma^{2}\bigg).
$$
where $g(\cdot)$ is interpreted as the conditional average treatment effect,
$$g(x_{i}) = f_{1}(x_{i}) - f_{0}(x_{i}),$$
while the function $h(\cdot)$ helps express the multi-modal nature of the error distribution that is implied by the transformation,
$$h(x_{i}) = \frac{f_{1}(x_{i})}{e_{i}} + \frac{f_{0}(x_{i})}{1-e_{i}}.$$
A detailed derivation of this model is given in Appendix \ref{app:B}.
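As a numerical sanity check on this derivation, the following sketch (ours, under assumed values of $f_1$, $f_0$, $e$ and $\sigma$ at a single covariate level) draws $\varepsilon^{*}$ both directly from the definition of the TRV and from the two-component mixture above, and compares their first two moments.

```python
# Sanity check on the mixture representation of eps* (a sketch with assumed
# values of f1, f0, e and sigma at a single covariate level x).
import numpy as np

rng = np.random.default_rng(1)
n = 400_000
f1, f0, e, sigma = 2.0, 0.5, 0.4, 1.0
g = f1 - f0                          # g(x) = f1(x) - f0(x)
h = f1 / e + f0 / (1.0 - e)          # h(x) = f1(x)/e + f0(x)/(1-e)

# Direct construction: eps* = Y* - g(x) from the definition of the TRV.
W = rng.binomial(1, e, size=n)
y = np.where(W == 1, f1, f0) + rng.normal(0.0, sigma, size=n)
eps_direct = (W - e) / (e * (1.0 - e)) * y - g

# Mixture construction from the model statement above.
comp = rng.binomial(1, e, size=n)    # component indicator, P(treated) = e
eps_mix = np.where(
    comp == 1,
    rng.normal((1.0 - e) * h, sigma / e, size=n),
    rng.normal(-e * h, sigma / (1.0 - e), size=n),
)

# Both versions should be mean zero with matching variances.
print(eps_direct.mean(), eps_mix.mean())
print(eps_direct.var(), eps_mix.var())
```

The direct and mixture constructions agree because, conditional on $W=1$ (probability $e$), $Y^{*} - g = (1-e)h + \varepsilon/e$, and conditional on $W=0$, $Y^{*} - g = -e h - \varepsilon/(1-e)$.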
The argument for specifying the TRV mixture model rather than individual models for the treatment and control is that the conditionals $Y_i \mid X_i, W_i =1$ and
$Y_i \mid X_i, W_i =0$ may not be perfectly estimable. Past work has indicated that ignoring shared information between the treated and untreated groups is a potential source of bias in the treatment effect estimation \citep{powers2017some}. Under the Bayesian paradigm, methods that place individual vague priors on the aforementioned conditionals make it challenging to control the degree of heterogeneity the model adapts to, since the implied prior on their difference is potentially extremely vague \citep{hahn2017bayesian}.
Our model formulation can be considered under two specifications -- when the treatment assignment probabilities are known and when they need to be inferred from the data. The details of each specification are given in Sections \ref{simspec} and \ref{compspec1} for the two cases respectively.
\subsubsection{Model specification with known assignment probabilities}
\label{simspec}
We will place Gaussian process priors on both $g$ and $h$ and will specify an inverse gamma prior on $\sigma^2$ to leverage conjugacy. Therefore, for the case where the treatment probabilities are known we specify the following model,
\begin{equation}
\label{basicmodel}
\begin{aligned}
Y_{i}^{*} &= g(x_{i}) + \varepsilon_{i}^{*},\\
\varepsilon_{i}^{*} \sim e_{i} \mathrm{N}\bigg((1-e_{i}) h(x_{i}), \frac{1}{e_{i}^{2}}\sigma^{2}\bigg) &+ (1-e_{i})\mathrm{N}\bigg(-e_{i} h(x_{i}), \frac{1}{(1-e_{i})^{2}}\sigma^{2}\bigg),\\
g & \sim\mathrm{GP}(0, \kappa_{g}), \\
h & \sim\mathrm{GP}(0, \kappa_{h}), \\
\sigma^2 & \sim \mathrm{IG}(a,b).
\end{aligned}
\end{equation}
Here $\mathrm{IG}(a, b)$ is the inverse gamma distribution with hyper-parameters $a$ and $b$, and $\mathrm{GP}(\textbf{0}, \boldsymbol{\kappa})$ denotes the Gaussian process priors on the functions $g$ and $h$. Both priors are zero mean and have covariance kernels specified as (1) a non-stationary linear kernel $\kappa_{g}(u, v) = s^{2}_{0} + \sum_{i=1}^{p}s^{2}_{i}(u_{i}-c_{i})(v_{i}-c_{i})$, with hyper-parameters $s^{2}_{0}, \ldots, s^{2}_{p}$, on $g$, and (2) a squared exponential kernel, $\kappa_{h}(u, v) = s_{h}^{2}\exp\{- \tau^2 \| u-v\|^2\}$, with hyper-parameters $\tau, s_{h}^{2}$, on $h$. These kernels rely on a notion of similarity between data points: if the inputs are close together, then the target values of the response -- in this case the TRV -- are also likely to be close together. Under the Gaussian process prior, the kernel functions described above formally define what is near or similar.
The hyper-parameters $s^{2}_{0}, \ldots, s^{2}_{p}$ can be interpreted in the context of linear regression with $\mathrm{N}(0, s^{2}_{j})$, $j = 0, \ldots, p$, priors on the $p+1$ regression coefficients, including the intercept. The offset $\{c_{i}\}_{i=1}^{p}$ determines the $x$ coordinate of the point that all the lines in the posterior are meant to pass through. This provides some insight into how these can be set for applied modeling problems. In cases where there is a large number of covariates, many of which are thought to share information, the prior variance for those dimensions can be made small, with a higher degree of mass concentrated near zero to induce more shrinkage. In contrast, where there is a small number of important covariates, the prior variance can be set to make the prior more diffuse. The offset can be set to the average of each covariate's observed values. This is a general overview of the strategy that we have employed.
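The two kernels can be sketched as covariance-matrix builders as follows (our illustrative implementation; the hyper-parameter values are placeholders, not recommended settings):

```python
# Sketch of the two covariance kernels used for the GP priors on g and h
# (our implementation; hyper-parameter values below are placeholders).
import numpy as np

def linear_kernel(X, s2, c):
    """Non-stationary linear kernel: s2[0] + sum_i s2[i](u_i-c_i)(v_i-c_i)."""
    Xc = X - c                       # center each covariate at its offset c_i
    return s2[0] + (Xc * s2[1:]) @ Xc.T

def sq_exp_kernel(X, s2_h, tau):
    """Squared-exponential kernel: s2_h * exp(-tau^2 ||u - v||^2)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return s2_h * np.exp(-(tau**2) * d2)

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
# Offsets set to the covariate means, as suggested in the text.
Kg = linear_kernel(X, s2=np.array([1.0, 0.5, 0.5, 0.5]), c=X.mean(axis=0))
Kh = sq_exp_kernel(X, s2_h=1.0, tau=1.0)
```

Both builders return symmetric positive semi-definite $n \times n$ matrices; the linear kernel yields straight-line posterior draws for $g$, while the squared exponential yields smooth nonlinear draws for $h$.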
\subsubsection{Model specification with unknown assignment probabilities}
\label{compspec1}
Computing the TRV requires knowledge of the treatment assignment probabilities $\{e_i\}_{i=1}^{n}$. In the case where these are unknown, we consider them as latent variables and add extra levels to the hierarchical model specified in \eqref{basicmodel} to model the treatment assignment probabilities. We model the assignment probabilities individually; for notational ease, later in this paper we write $\pmb{e} = \{e_i\}_{i=1}^{n}$. Our specification assumes, \emph{a priori}, that the assignment mechanism and the outcome model are independent.
\paragraph{Modeling the Propensity Score}
In order to learn the treatment assignment probabilities, we specify a probit regression model that is layered onto the model defined in \eqref{basicmodel}.
\begin{equation}
\label{indassign}
\begin{aligned}
W_{i} &\sim\mathrm{Ber}(e_{i}),\\
e_{i} &= \Phi(X_{i} \boldsymbol{\beta}), \\
\boldsymbol{\beta} & \sim\mathrm{N}_{p+1}(0, \Psi_{p+1 \times p + 1}).
\end{aligned}
\end{equation}
Here $\Phi$ denotes the standard Normal cumulative distribution function. In this paper we will only consider the above Gaussian prior on $\boldsymbol \beta$ with prior covariance $\Psi$. However, additional complexity can be added by allowing the coefficient vector $\boldsymbol \beta$ to vary via a hierarchical prior structure, as may be motivated by more complex multi-stage clustered data.
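For concreteness, a minimal sketch of this assignment stage (with coefficients $\boldsymbol\beta$ assumed for illustration): the probit link maps each unit's linear score to a propensity score, and the observed assignment is a Bernoulli draw.

```python
# Minimal sketch of the probit assignment stage with assumed coefficients.
import numpy as np
from math import erf

# Standard normal CDF via the error function.
Phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / np.sqrt(2.0))))

rng = np.random.default_rng(3)
n, p = 5000, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # intercept + covariates
beta = np.array([0.2, 1.0, -0.5])                           # assumed coefficients
e = Phi(X @ beta)                                           # propensity scores
W = rng.binomial(1, e)                                      # treatment assignments
print(W.mean(), e.mean())
```

By construction the marginal treatment rate matches the average propensity score, and every $e_i$ lies strictly between zero and one, consistent with assumption A2.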
\subsection{Posterior Sampling with Known Assignment Probabilities}
\label{inference}
Inference for the model specified in Section \ref{simspec} involves sampling from a posterior distribution via straightforward Gibbs-sampling.
We define $\mathbf{g} = (g(x_{1}), \ldots, g(x_{n}))$ and $\mathbf{h} = (h(x_{1}), \ldots, h(x_{n}))$ as the values of the two regression functions on the training data.
We denote the TRV as $\textbf{Y}^{*} = (Y_{1}^{*}, \ldots, Y_{n}^{*})$ . In this case the target joint posterior distribution is
\begin{equation}
\label{eq:jd1}
\pi(\mathbf{g}, \mathbf{h}, \sigma^{2} \mid \mathcal{D}).
\end{equation}
Due to prior conjugacy the conditional distributions $\pi(\mathbf{g} \mid \mathbf{h}, \sigma^{2}, \mathcal{D})$, $\pi(\mathbf{h} \mid \mathbf{g}, \sigma^{2}, \mathcal{D})$ and $\pi(\sigma^{2} \mid \mathbf{h}, \mathbf{g}, \mathcal{D})$ all have simple forms that we can easily sample from. We first state some matrices and vectors that will enter our calculations: $\mathbf{D}$ is an $n\times n$ diagonal matrix with entries $\mathbf{D}_{ii} = \bigg( \frac{W_i }{e_{i}^{2}} + \frac{1-W_i}{(1-e_{i})^{2}}\bigg)\sigma^{2}$, $\boldsymbol{\Lambda}$ is also an $n\times n$ diagonal matrix with entries $\boldsymbol{\Lambda}_{ii} = W_{i}(1-e_{i}) - (1-W_{i})e_{i}$, $\mathbf{K}$ is the $n\times n$ diagonal matrix with entries $\mathbf{K}_{ii} = \mathbf{D}_{ii}/\sigma^{2}$ (so that $\mathbf{K}$ does not depend on $\sigma^{2}$), and $\mathbf{m} = \boldsymbol{\Lambda} \mathbf{h}$. We also denote by $\pmb{\kappa}_{g}$ the covariance matrix with $ij$-th entry $\kappa_g(x_i,x_j)$, and similarly $\pmb{\kappa}_{h}$ is the matrix with $ij$-th entry $\kappa_h(x_i,x_j)$. We now state the conditional distributions that will enter our Gibbs sampler,
\begin{equation}
\label{eq:fcg}
\begin{aligned}
\pi(\mathbf{g} \mid \mathbf{h}, \sigma^{2}, \mathcal{D}) &\sim \mathrm{N}\big((\pmb{\kappa}_{g}^{-1}+\mathbf{D}^{-1})^{-1}\mathbf{D}^{-1}(\textbf{Y}^{*} - \mathbf{m}), (\pmb{\kappa}_{g}^{-1}+\mathbf{D}^{-1})^{-1}\big), \\
\pi(\mathbf{h} \mid \mathbf{g}, \sigma^{2}, \mathcal{D}) &\sim \mathrm{N}\bigg((\pmb{\kappa}_{h}^{-1} + \boldsymbol{\Lambda}^{T} \mathbf{D}^{-1} \boldsymbol{\Lambda})^{-1} \boldsymbol{\Lambda}^{T} \mathbf{D}^{-1} (\textbf{Y}^{*}-\mathbf{g}), (\pmb{\kappa}_{h}^{-1} + \boldsymbol{\Lambda}^{T} \mathbf{D}^{-1} \boldsymbol{\Lambda})^{-1} \bigg), \\
\pi(\sigma^{2} \mid \mathbf{h}, \mathbf{g}, \mathcal{D}) & \sim \mathrm{IG} \bigg(a+\frac{n}{2}, b + \frac{(\textbf{Y}^{*}-\mathbf{g} - \mathbf{m})^{T}\mathbf{K}^{-1}(\textbf{Y}^{*}-\mathbf{g}- \mathbf{m})}{2}\bigg).
\end{aligned}
\end{equation}
The Gibbs steps used to sample from these full conditional distributions are given in Appendix \ref{app:D}.
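A single sweep of this Gibbs sampler can be sketched as follows (an illustrative implementation of ours, not the paper's code; the $\sigma^{2}$-free weights $d_i = W_i/e_i^2 + (1-W_i)/(1-e_i)^2$ are kept explicit so that the inverse-gamma update remains conjugate):

```python
# Illustrative Gibbs sweep for the known-propensity model (a sketch, not the
# paper's code).  d_i = W_i/e_i^2 + (1-W_i)/(1-e_i)^2 is the sigma^2-free
# weight, so D = sigma^2 * diag(d).
import numpy as np

def gibbs_sweep(y_star, W, e, Kg, Kh, g, h, sig2, a, b, rng):
    n = len(y_star)
    d = W / e**2 + (1 - W) / (1 - e)**2          # sigma^2-free weights
    lam = W * (1 - e) - (1 - W) * e              # diagonal of Lambda
    Dinv = np.diag(1.0 / (sig2 * d))             # D^{-1}

    # g | h, sig2 ~ N((Kg^{-1}+D^{-1})^{-1} D^{-1}(Y*-m), (Kg^{-1}+D^{-1})^{-1})
    m = lam * h
    Sg = np.linalg.inv(np.linalg.inv(Kg) + Dinv)
    g = rng.multivariate_normal(Sg @ (Dinv @ (y_star - m)), Sg)

    # h | g, sig2 ~ N((Kh^{-1}+Lam D^{-1} Lam)^{-1} Lam D^{-1}(Y*-g), ...)
    Sh = np.linalg.inv(np.linalg.inv(Kh) + np.diag(lam**2 / (sig2 * d)))
    h = rng.multivariate_normal(Sh @ (lam / (sig2 * d) * (y_star - g)), Sh)

    # sig2 | g, h ~ IG(a + n/2, b + sum(resid^2 / d) / 2)
    resid = y_star - g - lam * h
    sig2 = 1.0 / rng.gamma(a + n / 2.0, 1.0 / (b + 0.5 * (resid**2 / d).sum()))
    return g, h, sig2

# A tiny run on synthetic inputs to show the shapes involved.
rng = np.random.default_rng(4)
n = 30
e = np.full(n, 0.5)
W = rng.binomial(1, e).astype(float)
y_star = rng.normal(size=n)
Kg, Kh = np.eye(n), np.eye(n)
g, h, sig2 = np.zeros(n), np.zeros(n), 1.0
for _ in range(5):
    g, h, sig2 = gibbs_sweep(y_star, W, e, Kg, Kh, g, h, sig2,
                             a=2.0, b=2.0, rng=rng)
```

Each update is a standard Gaussian-likelihood conjugate draw; in a real run the identity kernels above would be replaced by the $\pmb{\kappa}_{g}$ and $\pmb{\kappa}_{h}$ matrices built from the kernels of Section \ref{simspec}.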
\subsection{Posterior Sampling with Unknown Assignment Probabilities}
\label{complexinference}
There are two additional problems with respect to inference when the assignment probabilities are unknown: one needs to estimate the assignment probabilities $\pmb{e}$ and use these to compute the TRV $\textbf{Y}^{*}$. The following target posterior distribution corresponds to the model when the treatment probabilities are modeled as specified by the probit augmentation to the model in \eqref{indassign}.
\begin{eqnarray}
&\pi(\mathbf{g}, \mathbf{h}, \textbf{Y}^{*}, \sigma^{2},\boldsymbol{e}, \boldsymbol{\beta} \mid \mathcal{D}). \label{post3}
\end{eqnarray}
In this setting the joint posterior is more complicated than equation \eqref{eq:jd1} and is harder to sample from since it cannot be completely decomposed into Gibbs steps. Generating samples requires incorporating the full conditional distributions from the previous section, along with additional steps to sample the treatment assignment probability by using a Metropolis-within-Gibbs step and constructing the transformed outcome.
The Metropolis-Hastings step consists of specifying a proposal distribution $q(\boldsymbol{\beta})$; a candidate value $\pmb{\beta}^* \sim q(\pmb{\beta})$ is then accepted with acceptance probability,
\begin{equation}
\label{eq:mhratio}
\alpha = \min\left(1, \frac{\pi(\mathbf{g}, \mathbf{h}, \textbf{Y}^{*},\sigma^{2}, \boldsymbol{\beta}^{*}, \pmb{e} \mid \mathcal{D}) \, q(\pmb{\beta})}{\pi(\mathbf{g}, \mathbf{h}, \textbf{Y}^{*}, \sigma^{2}, \boldsymbol{\beta}, \pmb{e} \mid \mathcal{D}) \, q(\pmb{\beta}^{*})}\right),
\end{equation}
where the posterior for evaluation is given in \eqref{post3}. We have used symmetric random walk proposals\footnote{We generate proposals as $\pmb{\beta}^{*}\sim\mathrm{N}(\mu = \pmb{\beta}^{j-1}, \sigma^{2})$, i.e. from a Gaussian distribution centered at the last accepted value. The variance controls the step size of the proposals and needs to be tuned for the application.} in order to reduce the overall computational burden. Once we have sampled the coefficients for the probit model, we can deterministically compute the treatment assignment probability and the TRV. The complete algorithm for this sampling scheme is detailed in Appendix \ref{app:D}.
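The random-walk update can be sketched as follows (illustrative; for brevity this sketch evaluates only the $\boldsymbol\beta$-dependent factor of the joint posterior, i.e. the probit likelihood times the Gaussian prior, whereas in the full sampler $\boldsymbol\beta$ also enters the outcome stage through $\pmb{e}$ and $\textbf{Y}^{*}$):

```python
# Illustrative Metropolis-within-Gibbs update for the probit coefficients.
# Only the beta-dependent probit-likelihood-times-prior factor is evaluated
# here, a simplification relative to the full joint posterior.
import numpy as np
from math import erf

Phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / np.sqrt(2.0))))  # probit link

def log_beta_post(beta, X, W, Psi_inv):
    e = np.clip(Phi(X @ beta), 1e-12, 1 - 1e-12)
    loglik = np.sum(W * np.log(e) + (1 - W) * np.log(1 - e))
    return loglik - 0.5 * beta @ Psi_inv @ beta   # N(0, Psi) prior on beta

def mh_step(beta, logp, X, W, Psi_inv, step, rng):
    prop = beta + step * rng.normal(size=beta.shape)   # symmetric random walk
    logp_prop = log_beta_post(prop, X, W, Psi_inv)
    if np.log(rng.uniform()) < logp_prop - logp:       # q cancels by symmetry
        return prop, logp_prop, True
    return beta, logp, False

# A short chain on synthetic data (assumed true coefficients below).
rng = np.random.default_rng(5)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([0.3, 1.0, -0.7])
W = rng.binomial(1, Phi(X @ beta_true)).astype(float)
Psi_inv = np.eye(3) / 10.0
beta, accepted = np.zeros(3), 0
logp = log_beta_post(beta, X, W, Psi_inv)
for _ in range(500):
    beta, logp, acc = mh_step(beta, logp, X, W, Psi_inv, step=0.05, rng=rng)
    accepted += acc
```

Because the Gaussian random walk is symmetric, the proposal densities cancel from the acceptance ratio, leaving only the ratio of posterior evaluations.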
\paragraph{Joint Bayesian modeling and the feedback problem:} The joint Bayesian model specified in this paper for learning the assignment mechanism $\pmb{e}$ and the transformed outcome $\textbf{Y}^{*}$ leads to a feedback problem of the type described in \cite{zigler2013model}. The treatment assignment probability $\pmb{e}$ appears in the joint posterior distribution both as a part of the transformed outcome model through \eqref{eq:fcg} as well as its own model in \eqref{indassign}. Therefore its posterior samples involve information from both. In the specific context of the assignment model, this means that the posterior samples of parameters in learning $\pmb{e}$ are informed by information from the outcome stage.
Under the classical method of using $\pmb{e}$ as a dimension-reduced covariate representation in the outcome stage model (an analog to our transformed outcome), \cite{zigler2013model} demonstrate that the estimation of causal effects is poor. There is a possibility of considerable bias due to the distortion of the causal effects. Furthermore, the usefulness of the propensity score adjustment as a replacement for the covariates is also compromised.
However, this is not a concern in the modeling scheme proposed in this paper. \cite{zigler2013model} show that the nature of the feedback between the two stages is altered when the outcome stage model is augmented with adjustment for the individual covariates, and that this method can recover causal effects akin to when a classical two-stage procedure is used. Our approach, via the kernels of the Gaussian processes, provides individual covariate adjustment, therefore alleviating concerns created by the feedback. Thus we reap the benefits of the joint estimation, but by means of suitably elicited priors and individually controlled covariates, we bypass the concerns of feedback. In fact, by making more complete use of the data, we are arguably able to improve the overall quality of estimation.
\section{Results on Simulated and Real Data}
\label{data}
In this section we validate our Gaussian process based TRV model on simulated and real data. We use the simulations to show that our approach outperforms other techniques (both TRV and conditional mean regression type methods). This holds true both when the treatment assignment probabilities are known and when they need to be inferred from the data. We also observe on the simulated data that our model does in fact recover the causal effects in the TRV framework in the presence of feedback, as theorized earlier. Our assertion is based on comparisons of mean squared error, bias and point-wise coverage of the uncertainty intervals generated by the model.
The real data analyzed here comes from a study of the causal effects of debit card ownership on household spending in Italy \citep{mercatanti2014debit} -- we will refer to these data as the SHIW data. In the analysis of the SHIW data we jointly infer treatment effects as well as the treatment assignment probability for each individual, as these are not observed.
The most interesting aspect of our analysis of the SHIW data is that we are able to identify heterogeneity in the treatment effects. We find that the impact of debit card usage on aggregate household spending varies with income, and that this variability is highest at the lowest levels of income -- a notion supported by behavioral economic theory, which further lends credibility to our proposed model.
\subsection{Estimands Used and Modelling Approaches Compared}
In this section we state the estimands that we will use for comparing our method to other non-parametric methods. We will also state in detail how we compute the relevant estimand for both our method and the other techniques considered. The analysis is focused on the estimation of the CATE.
\paragraph{Gaussian process mixture model:} We first specify the procedure we use to estimate the CATE for our model. The model is trained on data $(x_1,...,x_n)$ and the values of the two functions are
\begin{eqnarray*}
{\mathbf{g}} = (g(x_{1}), \ldots, g(x_{n})), \\
{\mathbf{h}} = (h(x_{1}), \ldots, h(x_{n})).
\end{eqnarray*}
We will use the function values to evaluate the accuracy of our estimators.
Depending on whether the treatment assignment probabilities are observed or not we obtain posterior samples $\left(\mathbf{g}^{(j)},\mathbf{h}^{(j)}\right)_{j=1}^K$
or $\left(\mathbf{g}^{(j)},\mathbf{h}^{(j)}, \boldsymbol{e}^{(j)}\right)_{j=1}^K$, respectively, using which we can compute posterior samples for the conditional average treatment effect at each location $x_i$, $i=1,...,n$ as
\begin{equation*}
{\tau_{i}^{CATE}}^{(j)}(x_i) = g^{(j)}(x_i).
\end{equation*}
Given the posterior samples we can compute a posterior mean as a point estimate,
$\widehat{\tau_{i}^{CATE}}$ along with its corresponding credible intervals. Where applicable, marginalizing over the values $x_{i}$ allows us to compute posterior estimates of the average treatment effect $\widehat{\tau^{ATE}}$.
Based on the quantities that we have specified above, the mean squared error, bias and coverage used for model validation are specified as follows,
\begin{equation*}
\mathrm{Mean\ Squared\ Error} = \frac{1}{n}\sum_{i=1}^{n}\big(\tau_{i}^{CATE}-\widehat{\tau_{i}^{CATE}}\big)^{2},
\end{equation*}
\begin{equation*}
\mathrm{Bias} = \frac{1}{n}\sum_{i=1}^{n}\big(\tau_{i}^{CATE}-\widehat{\tau_{i}^{CATE}}\big),
\end{equation*}
\begin{equation*}
\mathrm{Coverage} = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\big(\tau_{i}^{CATE}\in[\tau_{i}^{CATE,\mathrm{lwr}}, \tau_{i}^{CATE,\mathrm{upr}}]\big).
\end{equation*}
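Given posterior CATE draws, these three criteria can be computed as in the following sketch (ours; the toy ``posterior'' generated below is a placeholder standing in for actual sampler output):

```python
# Sketch of the validation criteria computed from posterior CATE draws
# (the toy "posterior" below is a placeholder, not real sampler output).
import numpy as np

def evaluate(tau_true, tau_draws, level=0.95):
    """MSE, bias and pointwise interval coverage; tau_draws is (num_draws, n)."""
    tau_hat = tau_draws.mean(axis=0)                     # posterior mean estimate
    lo, hi = np.quantile(tau_draws, [(1 - level) / 2, (1 + level) / 2], axis=0)
    mse = np.mean((tau_true - tau_hat) ** 2)
    bias = np.mean(tau_true - tau_hat)
    coverage = np.mean((tau_true >= lo) & (tau_true <= hi))
    return mse, bias, coverage

rng = np.random.default_rng(6)
n, num_draws = 100, 2000
tau_true = rng.normal(size=n)
tau_draws = tau_true + rng.normal(0.0, 0.5, size=(num_draws, n))
mse, bias, coverage = evaluate(tau_true, tau_draws)
print(mse, bias, coverage)
```

The interval endpoints $\tau_{i}^{CATE,\mathrm{lwr}}$ and $\tau_{i}^{CATE,\mathrm{upr}}$ correspond to the pointwise posterior quantiles computed above.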
\paragraph{Summary of alternative methods used:} We will compare our proposed Gaussian process mixture model approach to other regression based methods for estimating treatment effects. We have considered random forests and single regression trees for treatment effect estimation via TRV modeling, as well as \emph{fit based trees}, \emph{causal trees} \citep{athey2016recursive}, and BART \citep{hahn2017bayesian, hill2011bayesian} as non-TRV alternatives\footnote{We use the implementations of these methods in the \texttt{R} packages \texttt{causalTree} \citep{causal2016tree}, \texttt{rpart} \citep{rpart2002}, \texttt{randomForest} \citep{rf2018} and \texttt{BART} \citep{bart2018}.}.
None of the aforementioned methods has an obvious framework for learning the treatment assignment probabilities internally. This is a crucial step in computing the CATE and ATE via both TRV and non-TRV based estimation techniques. In the case of regression trees and random forests for TRV modeling, the TRV must be computed from the learned propensity score before any modeling can commence. The BART model uses the propensity score as an additional covariate, while causal and fit based trees use the propensity score as a weighting mechanism.
Therefore, we will use a two-step procedure where we first use the data to infer the treatment assignment probabilities and then given these estimates, apply the aforementioned regression methods to estimate the treatment effect. The treatment assignment probability vector $\pmb{e}$ is estimated via logistic regression \citep{rubin1996matching}, a standard approach for estimating propensity scores in the causal inference literature.
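As an illustrative sketch of this two-step procedure (not the exact implementation used in the comparisons), the propensity scores can be fit by Newton-Raphson logistic regression and then plugged into the TRV. The helper names and the small ridge term stabilizing the Newton step are our own assumptions:

```python
import numpy as np

def fit_propensity(X, W, n_iter=25):
    """Step 1: logistic-regression propensity scores via Newton-Raphson (IRLS).
    X: (n, p) covariate matrix, W: (n,) binary treatment indicator."""
    Xd = np.column_stack([np.ones(len(X)), X])          # add intercept column
    beta = np.zeros(Xd.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))
        wgt = p * (1 - p)                               # IRLS weights
        H = Xd.T @ (wgt[:, None] * Xd) + 1e-8 * np.eye(Xd.shape[1])
        beta += np.linalg.solve(H, Xd.T @ (W - p))      # Newton step
    return 1.0 / (1.0 + np.exp(-Xd @ beta))

def transformed_outcome(Y, W, e_hat):
    """Step 2: TRV Y* = (W - e)Y / (e(1 - e)) using the estimated scores."""
    return (W - e_hat) / (e_hat * (1 - e_hat)) * Y
```

Any of the regression methods above can then be fit to the pairs $(X_i, Y^{*}_i)$.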
\subsection{Results on Simulated Data}
The objective of the simulation studies presented in this section is to compare the performance of the Gaussian process mixture model to BART, causal trees, fit based trees, random forests and single regression trees. We consider two criteria in our comparison. The first criterion is the accuracy of the CATE estimates, in terms of mean squared error and bias. The second criterion is how well the methods quantify uncertainty, assessed through the coverage of the intervals produced by each model.
\subsubsection{Simulated Data Model}
\label{simstudy}
In order to evaluate the proposed model as well as the other aforementioned approaches, we consider two simulation settings -- one high dimensional case (with 40 covariates) and one low dimensional case (with 5 covariates) each with its own covariate level heterogeneity and a sample size of $n = 250$. For the remainder of this analysis, the high dimensional case is referred to as Case A, and the low dimensional case is referred to as Case B. By design neither of these simulation cases has a meaningful average treatment effect. We start with a detailed description of Case A.
In this framework, covariates $X_{1}, \ldots, X_{30}$ are independent, $X_{31}, \ldots, X_{35}$ depend on pairs of covariates, and $X_{36}, \ldots, X_{40}$ depend on groups of three, as follows,
\begin{align*}
X_{k} &\sim \mathrm{Normal}(0, 1); \>\>\> k = 1, \ldots, 15,\\
X_{k} &\sim \mathrm{Uniform}(0, 1); \>\>\> k = 16, \ldots, 30,\\
X_{k} &\sim \mathrm{Bernoulli}(q_{k}); \>\>\> q_{k} = \mathrm{logit}^{-1}(X_{k-30} - X_{k-15}); \>\>\> k = 31,\ldots, 35,\\
X_{k} &\sim \mathrm{Poisson}(\lambda_{k}); \>\>\> \lambda_{k} = 5 + 0.75 X_{k-35}(X_{k-20} + X_{k-5}); \>\>\> k = 36,\ldots, 40.
\end{align*}
Next, we simulate the propensity score and the corresponding treatment assignments. We use a simple logit-linear transformation, since the focus of the paper is CATE modeling rather than propensity score modeling. The propensity scores and the treatment effects of interest for Case A are shown in figure \ref{fig:simLarge}.
\begin{align*}
p_{i} &= \mathrm{logit}^{-1}(0.3 \sum_{k = 1}^{5}X_{k} -0.5 \sum_{k = 21}^{25}X_{k} -0.0001 \sum_{k = 26}^{35}X_{k} + 0.055 \sum_{k = 36}^{40}X_{k} ),\\
W &\sim \mathrm{Bernoulli}(p_{i}).
\end{align*}
Finally, we generate the potential outcomes and the observed outcomes.
\begin{align*}
f(\mathbf{X}) &= \frac{\sum_{k = 16}^{19} X_{k}\exp(X_{k+14})}{1+\sum_{k = 16}^{19}X_{k} \exp(X_{k+14})},\\
Y(0) &= 0.15 \sum_{k=1}^{5}X_{k} + 1.5 \exp(1+1.5 f(\mathbf{X}))+ \epsilon_{i},\\
Y(1) &= \sum_{k = 1}^{5}\{ 2.15 X_{k} + 2.75 X_{k}^{2} + 10 X_{k}^{3}\} + 1.25 \sqrt{0.5 + 1.5\sum_{k = 36}^{40}X_{k}} + \epsilon_{i},\\
Y &= WY(1) + (1-W)Y(0);\>\>\> \epsilon_{i} \stackrel{\mathrm{IID}}{\sim} \mathrm{Normal}(0, 0.0001).
\end{align*}
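The Case A generative process above can be sketched as follows. This is our own code, not the authors'; we read $\mathrm{Normal}(0, 0.0001)$ as a variance (standard deviation $0.01$), and we clamp the Poisson rate at zero as a guard, since the linear form for $\lambda_{k}$ can go slightly negative for extreme Normal draws:

```python
import numpy as np

def simulate_case_a(n=250, seed=0):
    rng = np.random.default_rng(seed)
    X = np.empty((n, 40))
    X[:, :15] = rng.normal(0.0, 1.0, (n, 15))            # X_1..X_15
    X[:, 15:30] = rng.uniform(0.0, 1.0, (n, 15))         # X_16..X_30
    for i in range(30, 35):                              # X_31..X_35 (0-based index i)
        q = 1.0 / (1.0 + np.exp(-(X[:, i - 30] - X[:, i - 15])))
        X[:, i] = rng.binomial(1, q)
    for i in range(35, 40):                              # X_36..X_40
        lam = 5.0 + 0.75 * X[:, i - 35] * (X[:, i - 20] + X[:, i - 5])
        X[:, i] = rng.poisson(np.maximum(lam, 0.0))      # clamp guards rare lam < 0
    lin = (0.3 * X[:, :5].sum(1) - 0.5 * X[:, 20:25].sum(1)
           - 1e-4 * X[:, 25:35].sum(1) + 0.055 * X[:, 35:40].sum(1))
    W = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))      # treatment assignment
    num = (X[:, 15:19] * np.exp(X[:, 29:33])).sum(1)     # sum_{k=16..19} X_k exp(X_{k+14})
    f = num / (1.0 + num)
    Y0 = 0.15 * X[:, :5].sum(1) + 1.5 * np.exp(1.0 + 1.5 * f) + rng.normal(0, 0.01, n)
    Z = X[:, :5]
    Y1 = ((2.15 * Z + 2.75 * Z**2 + 10.0 * Z**3).sum(1)
          + 1.25 * np.sqrt(0.5 + 1.5 * X[:, 35:40].sum(1)) + rng.normal(0, 0.01, n))
    Y = W * Y1 + (1 - W) * Y0
    return X, W, Y, Y1 - Y0                              # last entry: realized CATE
```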
\begin{figure}[htb]
\centering
\makebox[\textwidth]{
\includegraphics[scale = 0.33, page = 1]{./Results/JMLR/"simPlotsScoreLarge".pdf}\hspace{-0.2cm}
\includegraphics[scale = 0.33, page = 3]{./Results/JMLR/"simPlotsScoreLarge".pdf}
}
\\ \vspace{-.25cm}
\makebox[\textwidth]{ (a) \hspace{2.2in} (b)
}
\caption{Summary plots of Case A: (a) histogram of the true propensity scores for each of the two treatment groups; (b) treatment effects. The simulation was generated with $n = 250$.}
\label{fig:simLarge}
\end{figure}
The lower dimensional case, which we have adapted from the simulation study in \cite{hahn2017bayesian}, is presented similarly. We start by simulating the following 5 covariates,
\begin{align*}
X_{k} &\sim \mathrm{Normal}(0, 1); \>\>\> k = 1, \ldots, 3,\\
X_{4} &\sim \mathrm{Bernoulli}(p = 0.25), \\
X_{5} &\sim \mathrm{Binomial}(n = 2, p = 0.5).
\end{align*}
In this scheme, unlike Case A, all the covariates are independent. The propensity score model, analogous to the previous case, is a logit-linear transformation of the covariates.
\begin{align*}
p_{i} &= \mathrm{logit}^{-1}(0.1X_{1}-0.001X_{2}+0.275X_{3}-0.03X_{4}),\\
W &\sim \mathrm{Bernoulli}(p_{i}).
\end{align*}
\noindent Finally, we generate the potential outcomes and the observed outcomes. The results of this simulation are presented in figure \ref{fig:simSmall}.
\begin{align*}
f(\mathbf{X}) &= -6 + h(X_{5}) + |X_{3}-1|,\\
h(0) &= 2, \>\>\> h(1) = -1, \>\>\> h(2) = -4,\\
Y(0) &= f(X) - 15 X_{3} + \epsilon_{i},\\
Y(1) &= f(X)+ (1 + 2X_{2}X_{3}) + \epsilon_{i},\\
Y&= WY(1) + (1-W)Y(0); \>\>\> \epsilon_{i} \stackrel{\mathrm{IID}}{\sim} \mathrm{Normal}(0, 0.0001).
\end{align*}
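A compact sketch of the Case B generative process (again our own code, with $\mathrm{Normal}(0, 0.0001)$ read as a variance):

```python
import numpy as np

def simulate_case_b(n=250, seed=0):
    rng = np.random.default_rng(seed)
    X = np.column_stack([
        rng.normal(0.0, 1.0, (n, 3)),                    # X_1..X_3
        rng.binomial(1, 0.25, n),                        # X_4
        rng.binomial(2, 0.5, n),                         # X_5
    ])
    lin = 0.1 * X[:, 0] - 0.001 * X[:, 1] + 0.275 * X[:, 2] - 0.03 * X[:, 3]
    W = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))      # treatment assignment
    h = np.select([X[:, 4] == 0, X[:, 4] == 1], [2.0, -1.0], default=-4.0)
    f = -6.0 + h + np.abs(X[:, 2] - 1.0)
    Y0 = f - 15.0 * X[:, 2] + rng.normal(0, 0.01, n)
    Y1 = f + 1.0 + 2.0 * X[:, 1] * X[:, 2] + rng.normal(0, 0.01, n)
    Y = W * Y1 + (1 - W) * Y0
    return X, W, Y, Y1 - Y0                              # last entry: realized CATE
```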
\begin{figure}[htb]
\centering
\makebox[\textwidth]{
\includegraphics[scale = 0.33, page = 1]{./Results/JMLR/"simPlotsScoreSmall".pdf}\hspace{-0.2cm}
\includegraphics[scale = 0.33, page = 3]{./Results/JMLR/"simPlotsScoreSmall".pdf}
}
\\ \vspace{-.25cm}
\makebox[\textwidth]{ (a) \hspace{2.2in} (b)
}
\caption{Summary plots of Case B: (a) histogram of the true propensity scores for each of the two treatment groups; (b) treatment effects. The simulation was generated with $n = 250$.}
\label{fig:simSmall}
\end{figure}
\subsubsection{Comparison of Methods}
\label{ressim}
The first stage of our analysis compares CATE estimation when the treatment assignment probability is assumed to be known. We focus on the mean squared error, bias and coverage of the CATE under Case A and Case B, along with visual analyses of model adaptability to gauge fit quality. For the proposed model the samplers were run for $K = 6{,}000$ steps, with the first $1{,}000$ steps discarded as burn-in; no thinning of the samples was needed. Similarly, for the non-Bayesian methods, $K = 5{,}000$ bootstrap replications were generated. The comparison of point estimates of the CATE when the treatment assignment mechanism is known is presented in figure \ref{fig:caseAComparisonKnown} for Case A and figure \ref{fig:caseBComparisonKnown} for Case B; the corresponding diagnostic measures are presented in tables \ref{tb:caseASummaryKnown} and \ref{tb:caseBSummaryKnown}.
\begin{figure}[htb]
\centering
\makebox[\textwidth]{
\includegraphics[scale = 0.35]{"./Results/JMLR/model_large_cate_known".pdf}\hspace{-0.4cm}
\includegraphics[scale = 0.35, page = 1]{"./Results/JMLR/tot_cate_known".pdf}\hspace{-0.4cm}
\includegraphics[scale = 0.35, page = 1]{"./Results/JMLR/rf_cate_known".pdf}\hspace{-0.4cm}
} \\
\makebox[\textwidth]{ (a) \hspace{1.8in} (b) \hspace{1.8in} (c)
} \\
\makebox[\textwidth]{
\includegraphics[scale = 0.35, page = 1]{"./Results/JMLR/ct_cate_known".pdf}\hspace{-0.4cm}
\includegraphics[scale = 0.35, page = 1]{"./Results/JMLR/fit_cate_known".pdf}\hspace{-0.4cm}
\includegraphics[scale = 0.35, page = 1]{"./Results/JMLR/bart_cate_known".pdf}
}\\
\makebox[\textwidth]{ (d) \hspace{1.8in} (e) \hspace{1.8in} (f) }
\caption{Comparison of the CATE estimates when the treatment probabilities are known for Case A: (a) the GP mixture model; (b) the transformed outcome regression tree; (c) the transformed outcome random forest; (d) the causal tree; (e) the fit based tree; (f) BART.}
\label{fig:caseAComparisonKnown}
\end{figure}
In Case A, when the treatment assignment is known, the proposed model is the overall winner in terms of both point estimation and uncertainty quantification. It adapts well to the heterogeneity of the treatment effects in the data and recovers the effects to a high degree, as observed in figure \ref{fig:caseAComparisonKnown}(a). It also has the lowest mean squared error of the models presented, and the point-wise coverage of its uncertainty intervals, while low relative to the tree based methods, is better than that of BART (see table \ref{tb:caseASummaryKnown}). Furthermore, the bias of the model is generally lower than that of causal trees, fit based trees and transformed outcome trees.
It warrants mention that BART adapts to heterogeneity only minimally. We can attribute this to the over-regularization induced by its shrinkage prior in causal inference problems \citep{hahn2017bayesian}, as well as to poor mixing of the MCMC used for BART in high dimensions \citep{pratola2016efficient}. We see similar behavior from transformed outcome trees, where post-estimation \emph{pruning} can likewise lead to regularization-induced bias. A detailed discussion of bias in causal inference applications from regularized models originally designed for prediction is given in \cite{hahn2017bayesian} and \cite{hahn2018regularization}.
\begin{figure}[htb]
\centering
\makebox[\textwidth]{
\includegraphics[scale = 0.35]{"./Results/JMLR/model_small_cate_known".pdf}\hspace{-0.4cm}
\includegraphics[scale = 0.35, page = 2]{"./Results/JMLR/tot_cate_known".pdf}\hspace{-0.4cm}
\includegraphics[scale = 0.35, page = 2]{"./Results/JMLR/rf_cate_known".pdf}\hspace{-0.4cm}
} \\
\makebox[\textwidth]{ (a) \hspace{1.8in} (b) \hspace{1.8in} (c)
} \\
\makebox[\textwidth]{
\includegraphics[scale = 0.35, page = 2]{"./Results/JMLR/ct_cate_known".pdf}\hspace{-0.4cm}
\includegraphics[scale = 0.35, page = 2]{"./Results/JMLR/fit_cate_known".pdf}\hspace{-0.4cm}
\includegraphics[scale = 0.35, page = 2]{"./Results/JMLR/bart_cate_known".pdf}
}\\
\makebox[\textwidth]{ (d) \hspace{1.8in} (e) \hspace{1.8in} (f) }
\caption{Comparison of the CATE estimates when the treatment probabilities are known for Case B: (a) the GP mixture model; (b) the transformed outcome regression tree; (c) the transformed outcome random forest; (d) the causal tree; (e) the fit based tree; (f) BART.}
\label{fig:caseBComparisonKnown}
\end{figure}
In Case B, the model recovers the high degree of heterogeneity well but suffers in terms of mean squared error and bias. The model still adapts well to the heterogeneity inherent in the data and is able to recover the effects, as observed in figure \ref{fig:caseBComparisonKnown}(a), albeit with a higher degree of overall noise. This noisiness translates into high mean squared error and bias, where the alternative models perform better, with one minor caveat: due to their piece-wise nature, the tree based models do not adapt to the heterogeneity as well as the proposed model and BART do. Furthermore, the model has the highest degree of point-wise uncertainty interval coverage (see table \ref{tb:caseBSummaryKnown}).
\begin{table}[ht]
\centering
\begin{tabular}{rlrrr}
\hline
& Model Type & Mean Squared Error & Bias & 95\% CI Coverage \\
\hline
1& Gaussian Process Mixture & 4191.665 & 13.207 & 0.780 \\
2& Bayesian Additive Regression Tree & 5856.135 & -5.351 & 0.596 \\
3& Transformed Outcome Tree & 7769.077 & 14.374 & 0.876 \\
4& Fit Based Tree & 6154.396 & 15.633 & 0.928 \\
5& Causal Tree & 8390.039 & 21.923 & 0.964 \\
6& Transformed Outcome Random Forest & 4993.576 & 0.317 & 0.932 \\
\hline
\end{tabular}
\caption{Case A - Conditional Average Treatment Effect Summary (Known)}
\label{tb:caseASummaryKnown}
\end{table}
We also compare CATE estimation for both cases when the treatment assignment probabilities are unknown and must be inferred from the data. The comparison of the point estimates is given in figures \ref{fig:caseAComparisonUnknown} and \ref{fig:caseBComparisonUnknown} respectively for the two cases, with the corresponding summary measures of fit in tables \ref{tb:caseASummaryUnknown} and \ref{tb:caseBSummaryUnknown}.
\begin{table}[ht]
\centering
\begin{tabular}{rlrrr}
\hline
& Model Type & Mean Squared Error & Bias & 95\% CI Coverage \\
\hline
1 & Gaussian Process Mixture & 50.262 & 3.174 & 0.988 \\
2 & Bayesian Additive Regression Tree & 5.498 & 0.229 & 0.808 \\
3 & Transformed Outcome Tree & 16.421 & 0.202 & 0.900 \\
4 & Fit Based Tree & 15.620 & 0.282 & 0.952 \\
5 & Causal Tree & 21.143 & 0.974 & 0.972 \\
6 & Transformed Outcome Random Forest & 118.745 & -0.582 & 0.816 \\
\hline
\end{tabular}
\caption{Case B - Conditional Average Treatment Effect Summary (Known)}
\label{tb:caseBSummaryKnown}
\end{table}
For Case A, the performance of the model is far superior in terms of adapting to the heterogeneity, as indicated in figure \ref{fig:caseAComparisonUnknown}(a), in particular compared to the transformed outcome random forest and BART shown in figures \ref{fig:caseAComparisonUnknown}(c) and \ref{fig:caseAComparisonUnknown}(f). The deterioration in the quality of the estimates from BART is particularly noticeable and can be attributed to the same over-regularization observed before, which is even more of a concern here given the additional uncertainty from learning the assignment mechanism. Furthermore, while the point-wise coverage of its uncertainty intervals is lower relative to the other models, the Gaussian process mixture is the clear winner in terms of mean squared error. The proposed model also outperforms the tree based models (causal and fit based trees as well as transformed outcome trees) in terms of bias (see table \ref{tb:caseASummaryUnknown}), and its point-wise interval coverage is stable relative to BART, which speaks to the model's overall robustness despite the added layer of complexity from learning the assignment mechanism.
\begin{figure}[htb]
\centering
\makebox[\textwidth]{
\includegraphics[scale = 0.35]{"./Results/JMLR/model_large_cate_unknown".pdf}\hspace{-0.4cm}
\includegraphics[scale = 0.35, page = 1]{"./Results/JMLR/tot_cate_unknown".pdf}\hspace{-0.4cm}
\includegraphics[scale = 0.35, page = 1]{"./Results/JMLR/rf_cate_unknown".pdf}\hspace{-0.4cm}
} \\
\makebox[\textwidth]{ (a) \hspace{1.8in} (b) \hspace{1.8in} (c)
} \\
\makebox[\textwidth]{
\includegraphics[scale = 0.35, page = 1]{"./Results/JMLR/ct_cate_unknown".pdf}\hspace{-0.4cm}
\includegraphics[scale = 0.35, page = 1]{"./Results/JMLR/fit_cate_unknown".pdf}\hspace{-0.4cm}
\includegraphics[scale = 0.35, page = 1]{"./Results/JMLR/bart_cate_unknown".pdf}
}\\
\makebox[\textwidth]{ (d) \hspace{1.8in} (e) \hspace{1.8in} (f) }
\caption{Comparison of the CATE estimates when the treatment probabilities are unknown for Case A: (a) the GP mixture model; (b) the transformed outcome regression tree; (c) the transformed outcome random forest; (d) the causal tree; (e) the fit based tree; (f) BART.}
\label{fig:caseAComparisonUnknown}
\end{figure}
We see that for Case B, the results are similar to when the treatment assignment was known. The performance of the model in adapting to the heterogeneity is comparable to the other models, as indicated in figure \ref{fig:caseBComparisonUnknown}(a), albeit again with a similar degree of noisiness as earlier. However, we again outperform transformed outcome random forests in terms of point estimation, with lower mean squared error. The only aspect in which the model outperforms all the other methods considered is point-wise interval coverage.
\begin{figure}[htb]
\centering
\makebox[\textwidth]{
\includegraphics[scale = 0.35]{"./Results/JMLR/model_small_cate_unknown".pdf}\hspace{-0.4cm}
\includegraphics[scale = 0.35, page = 2]{"./Results/JMLR/tot_cate_unknown".pdf}\hspace{-0.4cm}
\includegraphics[scale = 0.35, page = 2]{"./Results/JMLR/rf_cate_unknown".pdf}\hspace{-0.4cm}
} \\
\makebox[\textwidth]{ (a) \hspace{1.8in} (b) \hspace{1.8in} (c)
} \\
\makebox[\textwidth]{
\includegraphics[scale = 0.35, page = 2]{"./Results/JMLR/ct_cate_unknown".pdf}\hspace{-0.4cm}
\includegraphics[scale = 0.35, page = 2]{"./Results/JMLR/fit_cate_unknown".pdf}\hspace{-0.4cm}
\includegraphics[scale = 0.35, page = 2]{"./Results/JMLR/bart_cate_unknown".pdf}
}\\
\makebox[\textwidth]{ (d) \hspace{1.8in} (e) \hspace{1.8in} (f) }
\caption{Comparison of the CATE estimates when the treatment probabilities are unknown for Case B: (a) the GP mixture model; (b) the transformed outcome regression tree; (c) the transformed outcome random forest; (d) the causal tree; (e) the fit based tree; (f) BART.}
\label{fig:caseBComparisonUnknown}
\end{figure}
Our conclusion is that the model performs well when a large number of covariates is present and the degree of heterogeneity in the treatment effects is high. The flexibility of the mixture of Gaussian processes ensures adaptability where tree based models fail, particularly when there is shared information in the covariates (as is true in Case A), since the prior provides some degree of built-in regularization that is not as excessive as that of BART. However, when the number of covariates is small, the flexibility of the model hurts its overall performance, since our estimates are then generally noisier. These limitations of the model are discussed as avenues for future work in the last section of this paper.
\begin{table}[ht]
\centering
\begin{tabular}{rlrrr}
\hline
& Model Type & Mean Squared Error & Bias & 95\% CI Coverage \\
\hline
1 & Gaussian Process Mixture & 3916.562 & 13.207 & 0.780\\
2 & Bayesian Additive Regression Tree & 6754.058 & -5.569 & 0.624 \\
3 & Transformed Outcome Tree & 6289.891 & 7.061 & 0.880 \\
4 & Fit Based Tree & 6154.396 & 15.633 & 0.932 \\
5 & Causal Tree & 8390.039 & 21.923 & 0.968 \\
6 & Transformed Outcome Random Forest & 12124.426 & -21.958 & 0.960 \\
\hline
\end{tabular}
\caption{Case A - Conditional Average Treatment Effect Summary (Unknown)}
\label{tb:caseASummaryUnknown}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{rlrrr}
\hline
& Model Type & Mean Squared Error & Bias & 95\% CI Coverage \\
\hline
1 & Gaussian Process Mixture & 31.517 & 1.898 & 1.000 \\
2 & Bayesian Additive Regression Tree & 6.259 & 0.118 & 0.776\\
3 & Transformed Outcome Tree & 16.421 & 0.202 & 0.892 \\
4 & Fit Based Tree & 15.620 & 0.282 & 0.956\\
5 & Causal Tree & 19.652 & 0.876 & 0.972 \\
6 & Transformed Outcome Random Forest & 115.329 & -0.349 & 0.820 \\
\hline
\end{tabular}
\caption{Case B - Conditional Average Treatment Effect Summary (Unknown)}
\label{tb:caseBSummaryUnknown}
\end{table}
\subsection{Results on the Italy Survey on Household Income and Wealth (SHIW)}
\label{real}
We apply the GP mixture model to real data, aiming to estimate the causal effects of debit card ownership on household spending. A causal analysis of this question was developed in \cite{mercatanti2014debit} using data from the Italy Survey on Household Income and Wealth (SHIW) to estimate the population average treatment effect for the treated (PATT). The SHIW is a biennial, nationally representative survey run by the Bank of Italy. The subset of the SHIW data we consider consists of $n = 564$ observations, with 385 untreated and 179 treated. The outcome variable is the monthly average spending of the household on all consumer goods. The treatment condition is that the household possesses one and only one debit card, and the control condition is that the household does not possess \emph{any} debit cards. The covariates we use include: cash inventory held by the household, household income, average interest rate in the province where the household resides, a measurement of wealth, and the number of banks in the province in which the household resides. See \cite{mercatanti2014debit} for more details about the data. Our analysis of these data consists of comparing estimates of the ATE and CATE (with respect to household income) under our GP mixture model to the same alternative models as in the previous section.
\begin{table}[ht]
\centering
\begin{tabular}{rrrrr}
\hline
Decile & Mean Income & $\widehat{\tau^{CATE}}$ & $\widehat{\tau^{CATE}_{lwr}}$ & $\widehat{\tau^{CATE}_{upr}}$ \\
\hline
1 & -1.137 & 0.629 & 0.404 & 0.857 \\
2 & -0.831 & 0.567 & 0.374 & 0.761 \\
3 & -0.638 & 0.558 & 0.381 & 0.734 \\
4 & -0.472 & 0.459 & 0.298 & 0.620 \\
5 & -0.310 & 0.425 & 0.270 & 0.578 \\
6 & -0.114 & 0.396 & 0.245 & 0.546 \\
7 & 0.103 & 0.343 & 0.190 & 0.490 \\
8 & 0.397 & 0.272 & 0.097 & 0.441 \\
9 & 0.848 & 0.172 & -0.050 & 0.389 \\
10 & 2.143 & -0.125 & -0.513 & 0.251 \\
\hline
\end{tabular}
\caption{Conditional average treatment effect with average income by decile}
\label{tb:cateIncomeRealModel}
\end{table}
We start with a presentation of the CATE under our model against income in figure \ref{fig:realComparison}(a). The proposed model estimates an overall downward trend in the effect of owning a debit card: as the level of income increases, the effect of owning a debit card declines. To summarize this effect, we consider the CATE for binned deciles of income for the proposed model in figure \ref{fig:realComparison}(b) and for the alternative models in figure \ref{fig:realComparison}(c). We find that the proposed model detects a statistically meaningful effect for the first eight deciles of income, and this effect is estimated to decline in size. For the final two deciles, the model concludes that there is no statistically meaningful effect of owning a debit card. These results are summarized in table \ref{tb:cateIncomeRealModel}. By comparison, the inference from the alternative approaches is not as clear. BART and transformed outcome trees detect minimal heterogeneity. With BART, this flattening can be attributed to over-regularization by the prior, as seen in the simulated data case, while for transformed outcome trees, the axis-parallel splits used to estimate the model are not always suitable for partitioning the covariates. By contrast, transformed outcome random forests and causal trees demonstrate the most heterogeneity at the highest two deciles of income. These results are summarized in table \ref{tb:comparisonTable} in Appendix \ref{app:C}.
To be comprehensive and comparable to past work, we also produce estimates of the average treatment effect in table \ref{tb:ateReal}. The proposed Gaussian process mixture detects a statistically meaningful ATE, consistent with the findings of \cite{mercatanti2014debit}. Furthermore, the uncertainty interval for the Gaussian process mixture is the tightest of the methods used here, all of which, with the exception of BART, generate similar inference. This result is consistent with the findings on simulated data presented in the last section, since the BART model does not adapt well to heterogeneity when the number of covariates is high and those covariates contribute substantially to the variation in the treatment effects. Again this argues that the GP mixture model may be outperforming the other methods.
\begin{table}[ht]
\centering
\begin{tabular}{rlrrr}
\hline
& Model Type & $\widehat{\tau^{ATE}}$ & $\widehat{\tau^{ATE}_{lwr}}$ & $\widehat{\tau^{ATE}_{upr}}$ \\
\hline
1& Gaussian Process Mixture & 0.369 & 0.220 & 0.518 \\
2 & Transformed Outcome Tree & 0.470 & 0.210 & 0.555 \\
3 & Fit Based Tree & 0.378 & 0.214 & 0.608 \\
4 & Causal Tree & 0.475 & 0.360 & 0.939 \\
5& Bayesian Additive Regression Tree & 0.115 & -1.129 & 1.397 \\
6& Transformed Outcome Random Forest & 0.414 & 0.229 & 0.604 \\
\hline
\end{tabular}
\caption{Comparison of average treatment effects.}
\label{tb:ateReal}
\end{table}
Based on the economic concepts of \emph{income elasticity of demand}, \emph{consumer choice} and \emph{substitution effects} \citep{varian2014intermediate}, the heterogeneity identified at the lowest levels of standardized income is a more sensible result than the implications of the other approaches. At the lowest levels of income, economic agents are more likely to substitute debit card use for cash in an effort to maximize spending. Debit cards act as an inflator of perceived financial resources, and this effect is expected to diminish as overall income grows. Therefore, the GP mixture model makes a more convincing case for capturing the true nature of how holding a debit card influences spending.
\begin{figure}[htb]
\centering
\makebox[\textwidth]{
\includegraphics[scale = 0.33, page = 1]{./Results/JMLR/"real_data_model".pdf}\hspace{-0.2cm}
\includegraphics[scale = 0.33, page = 2]{./Results/JMLR/"real_data_model".pdf}\hspace{-0.2cm}
\includegraphics[scale = 0.33]{./Results/JMLR/"real_data_comparison".pdf}
}
\\ \vspace{-.25cm}
\makebox[\textwidth]{ (a) \hspace{2.2in} (b) \hspace{2.2in} (c)
}
\caption{Estimated CATE: (a) against income; (b) binned effects against deciles of income; (c) binned effects against deciles of income for the alternative methods.}
\label{fig:realComparison}
\end{figure}
\clearpage
\section{Discussion and Future Work}
\label{futurework}
We have proposed a novel non-parametric Bayesian model to estimate heterogeneous treatment effects. Our approach combines the \emph{transformed response variable} framework with a mixture of Gaussian processes. The motivation for the GP mixture model was to improve the accuracy of our point estimates as well as to better quantify uncertainty relative to other models, particularly those from the Bayesian non-parametrics literature. We compared the performance of our technique to a single regression tree and a random forest within the TRV framework, as well as to two conditional mean regression type weighted tree based methods and BART. We used simulation studies to show instances where our approach is a better estimator with respect to both point estimation and uncertainty quantification. Furthermore, our approach has the advantage that the case of unknown treatment assignment probabilities can be addressed within the model itself; the other methods require a two-stage process in which a separate model infers the treatment assignment probabilities. This tandem estimation provides better insight into the data generating process and captures uncertainty from all levels of inference.
In addition, a Bayesian model of treatment effects with a single likelihood for the design and analysis stages raises concerns about feedback, since the TRV depends on the assignment mechanism. We demonstrate that our model is robust to this feedback due to both our prior specification and individual covariate adjustment via the Gaussian process covariance functions. However, this raises the theoretical question of whether a weaker condition can be satisfied that still leads to effective inference of treatment effects; this is the first area we aim to explore in future work.
There are several ways we can extend our model to be more robust and flexible. In the context of robustness, the GP prior covariance functions impose smoothness assumptions on the treatment effects that may not be realistic in many applied settings. Relaxing these smoothness assumptions, for example by using non-parametric models developed for dose-response curves, may result in richer and more reliable inference. Furthermore, as noted earlier, inference using the TRV is sensitive to the probability of receiving the treatment and can suffer bias and instability when the assignment probabilities are close to their extremes. While we have addressed instability in the estimation of effects using a correctly specified model and indirectly improved propensity score estimation, we have not directly curbed the susceptibility of the method to extreme weights: the variance of the mixture model is still influenced by the reciprocal of the treatment assignment probability (as is the case generally with IPW estimators). Extending our model to be less sensitive to these extreme cases is vital in application.
Under the theme of model flexibility, we currently fix the hyper-parameter values within the kernels of the Gaussian processes, since attempting to learn these from the data creates two problems that require careful study. First, learning these parameters is difficult from a sampling perspective, since the target distributions are often extremely multi-modal. A promising avenue for addressing this is a combination of sampling and optimization \citep{levine2001implementations}; this is particularly important because Bayesian non-parametric methods are known to be sensitive to prior calibration, and crucial in instances where the degree of heterogeneity in treatment effects is small, as we have seen via simulation study. Second, the scalability of Gaussian processes is very limited \citep{johndrow2015approximations}, and increasing the number of parameters to be learned hurts scalability even more. This broadly summarizes the areas that we will explore in future work.
\begin{appendices}
\section{Proof of Equivalence}\label{app:A}
We now show that the transformation presented in section \ref{former} recovers the CATE in expectation, i.e.
$$\mathbb{E}_{Y}[Y^{*} \mid X = x] = \tau^{CATE}.$$
\noindent
\begin{proof}
First observe that $Y_{i} = Y_{i}(W_{i}) = W_{i}Y_{i}(1) + (1 - W_{i})Y_{i}(0).$
By the definition of the TRV
\begin{eqnarray*}
A = \mathbb{E}_{Y}[Y^{*} \mid X = x, \mathcal{D}] &=& \mathbb{E}_{Y} \left[\frac{W-e_{i}}{e_{i}(1-e_{i})}Y \mid X = x, \mathcal{D}\right], \\
&=& \frac{1}{e_{i}(1-e_{i})}\left( \mathbb{E}_{Y}[YW \mid X = x, \mathcal{D}] - e_{i} \mathbb{E}_{Y}[Y \mid X=x, \mathcal{D}]\right).
\end{eqnarray*}
\noindent Due to the ignorability of the treatment assignment the following holds
\begin{eqnarray*}
A & = & \frac{1}{e_{i}(1-e_{i})}(e_{i}\mathbb{E}_{Y}[Y \mid W = 1, X=x, \mathcal{D}] - e_{i} \mathbb{E}_{Y}[Y \mid X = x, \mathcal{D}]) \\
&=&\frac{1}{1-e_{i}}\mathbb{E}_{Y}[Y \mid X = x, W =1,\mathcal{D}] - \frac{1}{1-e_{i}}\mathbb{E}_{Y}[Y \mid X=x, \mathcal{D}].
\end{eqnarray*}
By iterating expectations the following holds:
\begin{eqnarray*}
A &=&\frac{1}{1-e_{i}}\mathbb{E}_{Y}[Y \mid W=1, X=x, \mathcal{D}] - \frac{1}{1-e_{i}}\mathbb{E}_{W}[\mathbb{E}_{Y}[Y \mid W, X=x, \mathcal{D}]], \\
&=&\frac{1}{1-e_{i}}\mathbb{E}_{Y}[Y \mid W = 1, X=x, \mathcal{D}] - \frac{e_{i}}{1-e_{i}}\mathbb{E}_{Y}[Y \mid W= 1,X=x, \mathcal{D}] - \\
& &\mathbb{E}_{Y}[Y \mid W = 0, X=x, \mathcal{D}].
\end{eqnarray*}
Collecting the first two terms provides the desired result
$$A=\mathbb{E}_{Y}[Y \mid W = 1, X= x, \mathcal{D}] - \mathbb{E}_{Y}[Y \mid W =0, X=x, \mathcal{D}].$$
\end{proof}
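The identity can also be checked numerically. The following Monte Carlo sketch (our own, with arbitrary illustrative values of $e$, $f_1$, $f_0$ at a fixed $x$) verifies that the sample mean of $Y^{*}$ approaches $f_1(x) - f_0(x)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
e, f1, f0 = 0.3, 2.0, 0.5               # e(x), f_1(x), f_0(x) at a fixed x
W = rng.binomial(1, e, n)               # ignorable treatment assignment
Y = W * f1 + (1 - W) * f0 + rng.normal(0.0, 1.0, n)
Y_star = (W - e) / (e * (1 - e)) * Y    # transformed response variable
tau_mc = Y_star.mean()                  # should approach f1 - f0 = 1.5
```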
\section{Derivation of Model}\label{app:B}
The derivation of the model presented in the paper begins with the transformation of interest, with $Y_{i}$ denoting the observed response, $W_{i}$ the assigned treatment and $e_{i} = P(W_{i} = 1)$,
\begin{equation*}
Y_{i}^{*} = \frac{W_{i} - e_{i}}{e_{i}(1-e_{i})}Y_{i}.
\end{equation*}
In addition, we define the two regression functions for the outcome, one under the treatment and one under the control,
\begin{align*}
(Y_{i}|W_{i} = 0) = f_{0}(X_{i})+\epsilon_{i}(0),\\
(Y_{i}|W_{i} = 1) = f_{1}(X_{i})+\epsilon_{i}(1).
\end{align*}
Using the transformation, substituting the regression functions under the two cases ($W_{i} = 1$ and $W_{i} = 0$), and assuming further that $\epsilon(1), \epsilon(0) \stackrel{\mathrm{IID}}{\sim}\mathrm{N}(0, \sigma^{2})$, we obtain, with probability $e_{i}$,
\begin{align*}
(Y_{i}^{*}|W_{i} = 1) &= \frac{f_{1}(X_{i}) - e_{i}f_{1}(X_{i})+e_{i}f_{0}(X_{i})}{e_{i}} + f_{1}(X_{i})-f_{0}(X_{i}) + \frac{\epsilon_{i}(1)}{e_{i}},\\
&= f_{1}(X_{i})-f_{0}(X_{i}) + (1-e_{i})\bigg(\frac{ f_{1}(X_{i})}{e_{i}}+\frac{f_{0}(X_{i})}{1-e_{i}} \bigg)+\frac{\epsilon_{i}(1)}{e_{i}},\\
&= g(X_{i})+ (1-e_{i})h(X_{i})+\frac{\epsilon_{i}(1)}{e_{i}}.
\end{align*}
and similarly, with probability $1-e_{i}$ that,
\begin{align*}
(Y_{i}^{*}|W_{i} = 0) &= \frac{-(1-e_{i})f_{1}(X_{i}) +(1- e_{i})f_{0}(X_{i})-f_{0}(X_{i})}{1-e_{i}} + f_{1}(X_{i})-f_{0}(X_{i}) - \frac{\epsilon_{i}(0)}{1-e_{i}},\\
&= f_{1}(X_{i})-f_{0}(X_{i}) + (-e_{i})\bigg(\frac{ f_{1}(X_{i})}{e_{i}}+\frac{f_{0}(X_{i})}{1-e_{i}} \bigg)-\frac{\epsilon_{i}(0)}{1-e_{i}},\\
&= g(X_{i})+ (-e_{i})h(X_{i})-\frac{\epsilon_{i}(0)}{1-e_{i}}.
\end{align*}
This yields the mixture model that we have presented in the paper,
\begin{align*}
Y_{i}^{*} &= g(X_{i}) + \varepsilon_{i},\\
\varepsilon_{i}&\sim e_{i}\,\mathrm{Normal}\Big((1-e_{i})h(X_{i}), \frac{\sigma^{2}}{e_{i}^{2}}\Big)+(1-e_{i})\,\mathrm{Normal}\Big(-e_{i}h(X_{i}), \frac{\sigma^{2}}{(1-e_{i})^{2}}\Big).
\end{align*}
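The algebra above can be verified numerically: for every unit, the directly transformed outcome must coincide exactly with the corresponding mixture component. A minimal check, with hypothetical outcome surfaces and a hypothetical propensity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
e = 0.4                                    # assumed constant propensity
x = rng.normal(size=n)
f0, f1 = np.sin(x), np.sin(x) + 1.5        # hypothetical outcome surfaces
w = rng.binomial(1, e, size=n)
eps = rng.normal(scale=0.7, size=n)
y = np.where(w == 1, f1, f0) + eps

y_star = (w - e) / (e * (1 - e)) * y       # direct transformation

g = f1 - f0
h = f1 / e + f0 / (1 - e)
# per the derivation: with W=1 the value is g+(1-e)h+eps/e, with W=0 it is g-e*h-eps/(1-e)
y_star_model = np.where(w == 1,
                        g + (1 - e) * h + eps / e,
                        g - e * h - eps / (1 - e))
print(np.allclose(y_star, y_star_model))   # True: the algebra checks out
```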
\section{Comparison of SHIW Data}
\label{app:C}
This section presents a comparative analysis of CATE estimates for the SHIW data obtained with various methods, alongside the Gaussian process mixture of section \ref{real}. Point estimates of the CATE, with $95\%$ uncertainty intervals, for each decile of income, together with the average income in that decile, are presented in table \ref{tb:comparisonTable}.
\section{Sampling Algorithms for Model Specifications}
\label{app:D}
\paragraph{Algorithm for inference with known assignment probabilities:}
For the full conditional distributions specified in \eqref{eq:fcg} we can run the following Gibbs sampling procedure to generate a sequence $(\mathbf{g}^{(j)}, \mathbf{h}^{(j)}, \sigma^{(j)})_{j=1}^{K}$:
\begin{enumerate}
\item[a)] Initialize ${\mathbf h}^{(0)}$, $\sigma^{(0)}$, and ${\mathbf g}^{(0)}$;
\item[b)] For $j= 1,...,K$
\begin{enumerate}
\item[1)] $\mathbf{g}^{(j)} \sim \pi(\mathbf{g} \mid \mathbf{h}^{(j-1)}, \sigma^{(j-1)}, \mathcal{D})$;
\item[2)] $\mathbf{h}^{(j)} \sim \pi(\mathbf{h} \mid \mathbf{g}^{(j)}, \sigma^{(j-1)}, \mathcal{D})$ ;
\item[3)] $ \sigma^{(j)} \sim \pi(\sigma \mid \mathbf{h}^{(j)}, \mathbf{g}^{(j)}, \mathcal{D}).$
\end{enumerate}
\end{enumerate}
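The loop above can be sketched in code. The conditional samplers below are placeholders (the actual full conditionals are the Gaussian process and variance conditionals of \eqref{eq:fcg}); only the control flow, i.e. initialisation, sweep, burn-in, and thinning, mirrors the procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins for the full conditionals pi(g|...), pi(h|...), pi(sigma|...);
# in the paper these are GP and variance conditionals, not the draws below.
def sample_g(h, sigma):  return rng.normal(size=h.shape)
def sample_h(g, sigma):  return rng.normal(size=g.shape)
def sample_sigma(g, h):  return float(1.0 / rng.gamma(shape=2.0, scale=1.0))

def gibbs(n_obs, K=500, K0=100, thin=5):
    g, h, sigma = np.zeros(n_obs), np.zeros(n_obs), 1.0   # step a) initialise
    draws = []
    for j in range(K):                                    # step b) sweep
        g = sample_g(h, sigma)                            # 1)
        h = sample_h(g, sigma)                            # 2)
        sigma = sample_sigma(g, h)                        # 3)
        draws.append((g, h, sigma))
    return draws[K0::thin]                                # discard burn-in, then thin

samples = gibbs(n_obs=50)
print(len(samples))                                       # 80 retained draws
```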
Given the sequence $(\mathbf{g}^{(j)}, \mathbf{h}^{(j)}, \sigma^{(j)})_{j=1}^{K}$, we discard an initial $K_0$ of the samples to account for burn-in of the chain and thin the remaining samples by a small factor $\gamma$ to obtain approximately independent samples from the joint posterior distribution in section \ref{simspec} and in equation \eqref{eq:jd1}. We specify the burn-in and thinning settings whenever we discuss applications of the method.
\paragraph{Algorithm for inference with unknown assignment probabilities:} For the full posterior stated in equation \eqref{post3}, a standard Gibbs sampling procedure of the type specified above cannot be used to sample the treatment assignment probabilities. We therefore use a na\"ive approach, sampling the assignment probabilities alongside the other model parameters with an additional Metropolis-within-Gibbs step. This results in the following procedure:
\begin{enumerate}
\item[a)] Initialize ${\mathbf h}^{(0)}$, $\sigma^{(0)}$, ${\mathbf g}^{(0)}$, and $\boldsymbol{\beta}^{(0)}$. Use $\boldsymbol{\beta}^{(0)}$ to compute $\boldsymbol{e}^{(0)}$;
\item[b)] Compute $\mathbf{Y}^*$ from the initial $\boldsymbol{e}^{(0)}$ and data;
\item[c)] For $j= 1,...,K$
\begin{enumerate}
\item[1)] $\mathbf{g}^{(j)} \sim \pi(\mathbf{g} \mid \mathbf{h}^{(j-1)}, \sigma^{(j-1)}, \boldsymbol{e}^{(j-1)}, \mathbf{Y}^*, \mathcal{D})$;
\item[2)] $\mathbf{h}^{(j)} \sim \pi(\mathbf{h} \mid \mathbf{g}^{(j)}, \sigma^{(j-1)}, \boldsymbol{e}^{(j-1)}, \mathbf{Y}^*, \mathcal{D})$ ;
\item[3)] $ \sigma^{(j)} \sim \pi(\sigma \mid \mathbf{h}^{(j)}, \mathbf{g}^{(j)}, \boldsymbol{e}^{(j-1)}, \mathbf{Y}^*, \mathcal{D})$;
\item[4)] Use Metropolis-Hastings step to sample $\boldsymbol{\beta}^{(j)}$;
\item[5)] Compute $\boldsymbol{e}^{(j)}$ from $\boldsymbol{\beta}^{(j)}$ and data;
\item[6)] Compute $\mathbf{Y}^*$ from $\boldsymbol{e}^{(j)}$ and data.
\end{enumerate}
\end{enumerate}
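Step 4) above is a standard random-walk Metropolis-Hastings update. In the sketch below the log conditional posterior of $\boldsymbol{\beta}$ is a stand-in (a standard normal, so the code runs end to end); only the accept/reject mechanics correspond to the procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post_beta(beta):
    # Stand-in for the log conditional posterior of beta from the full
    # posterior (post3); here a standard normal purely for illustration.
    return -0.5 * np.sum(beta ** 2)

def mh_step(beta, step=0.5):
    """One random-walk Metropolis-Hastings update, as in step 4)."""
    prop = beta + rng.normal(scale=step, size=beta.shape)
    log_ratio = log_post_beta(prop) - log_post_beta(beta)
    if np.log(rng.uniform()) < log_ratio:
        return prop, True      # accept the proposal
    return beta, False         # reject, keep the current state

beta = np.zeros(3)
accepts = 0
for _ in range(2000):
    beta, ok = mh_step(beta)
    accepts += ok
print(accepts / 2000)          # acceptance rate of the chain
```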
Using the steps in the algorithm above, we simulate a sequence $(\mathbf{g}^{(j)}, \mathbf{h}^{(j)}, \sigma^{(j)}, \boldsymbol{\beta}^{(j)},$ $\boldsymbol{e}^{(j)},\mathbf{Y}^{*(j)})_{j=1}^{K}$ that, after the same burn-in and thinning considerations as before, reflects draws from the joint distribution of section \ref{compspec1}, equation \eqref{post3}.
\end{appendices}
\clearpage
\begin{sidewaystable}
\begin{center}
\tiny
\begin{tabular}{rrrrrrrrrrrrrrrrrr}
\hline
Decile & Mean income & tot & fit & ct & BART & RF & lwr$_{\rm tot}$ & upr$_{\rm tot}$ & lwr$_{\rm fit}$ & upr$_{\rm fit}$ & lwr$_{\rm ct}$ & upr$_{\rm ct}$ & lwr$_{\rm BART}$ & upr$_{\rm BART}$ & lwr$_{\rm RF}$ & upr$_{\rm RF}$ \\
\hline
1 & -1.137 & 0.470 & 0.234 & 0.637 & 0.118 & 0.420 & 0.089 & 0.678 & 0.087 & 0.741 & 0.202 & 1.103 & -1.166 & 1.394 & 0.082 & 0.772 \\
2 & -0.831 & 0.470 & 0.417 & 0.500 & 0.117 & 0.383 & 0.087 & 0.672 & 0.096 & 0.730 & 0.191 & 1.117 & -1.211 & 1.397 & 0.083 & 0.770 \\
3 & -0.638 & 0.470 & 0.538 & 0.515 & 0.105 & 0.461 & 0.067 & 0.679 & 0.083 & 0.733 & 0.183 & 1.094 & -1.204 & 1.398 & 0.082 & 0.772 \\
4 & -0.472 & 0.470 & 0.258 & 0.430 & 0.089 & 0.376 & 0.076 & 0.676 & 0.094 & 0.725 & 0.193 & 1.105 & -1.224 & 1.393 & 0.085 & 0.744 \\
5 & -0.310 & 0.470 & 0.093 & 0.307 & 0.095 & 0.276 & 0.077 & 0.670 & 0.082 & 0.740 & 0.198 & 1.112 & -1.191 & 1.398 & 0.075 & 0.766 \\
6 & -0.114 & 0.470 & 0.598 & 0.681 & 0.116 & 0.391 & 0.074 & 0.676 & 0.097 & 0.732 & 0.204 & 1.103 & -1.107 & 1.397 & 0.083 & 0.772 \\
7 & 0.103 & 0.470 & 0.471 & 0.552 & 0.096 & 0.407 & 0.081 & 0.660 & 0.086 & 0.748 & 0.212 & 1.112 & -1.136 & 1.370 & 0.087 & 0.760 \\
8 & 0.397 & 0.470 & 0.414 & -0.103 & 0.122 & 0.269 & 0.086 & 0.682 & 0.092 & 0.744 & 0.192 & 1.104 & -1.120 & 1.385 & 0.082 & 0.760 \\
9 & 0.848 & 0.470 & 0.363 & 0.384 & 0.134 & 0.442 & 0.082 & 0.667 & 0.094 & 0.744 & 0.178 & 1.108 & -1.119 & 1.400 & 0.087 & 0.776 \\
10 & 2.143 & 0.470 & 0.396 & 0.835 & 0.156 & 0.706 & 0.075 & 0.673 & 0.107 & 0.735 & 0.190 & 1.108 & -1.068 & 1.406 & 0.081 & 0.760 \\
\hline
\end{tabular}
\end{center}
\caption{Comparison of conditional average treatment effects by decile of standardized income, along with 95\% uncertainty intervals using alternative models.}
\label{tb:comparisonTable}
\end{sidewaystable}
\clearpage
\section{Acknowledgements}
The authors gratefully acknowledge the support of Andrea Mercatanti (Department of Statistics, Bank of Italy) for providing data for this paper and sincerely thank Elizabeth Lorenzi (Duke University) for providing insightful commentary and expertise on the topic of causal inference.
\clearpage
Studying edge-on galaxies opens up the possibility to observe something that is not feasible with face-on galaxies: the halo of the galaxy. It is now clear that sufficiently deep and well-resolved radio observations of edge-on galaxies show emission high above their galactic discs \citep{changes_dr1}. Often, the vertical extent of the radio halo is greater than its radial extent, as observations of a large number of edge-on galaxies show. This can provide insight into the mechanisms that lead to the formation of radio haloes, in particular galactic winds, which are a key factor
in galaxy evolution \citep{Veilleux2005}. Stellar feedback is thought to play
a significant role in galaxies below halo masses of $10^{12}~\rm M_\sun$, whereas active galactic nuclei (AGN) dominate the feedback process in more massive galaxies (e.g. \citealt{Silk2013, Li2018}). The details of stellar feedback are important, and the question of how much mass, heavier elements (metals), and angular momentum are removed over time is crucial to explain the observed characteristics of galaxies \citep{Scannapieco2002, Scannapieco2008}. In this context, cosmic-ray driven winds are important as they facilitate winds in normal (i.e. non-starbursting) late-type galaxies such as our own Milky Way.
Cosmic-ray driven winds cannot only occur in conditions of low star formation rate (SFR) surface densities where purely thermally driven winds fail \citep{Everett2008}, but can also drive slower, cooler winds that are much denser than the hot, fast thermally driven winds \citep{Girichidis2018}. These winds shape in particular the lower mass galaxies, such as dwarf irregular galaxies \citep{Tremonti2004}, so it is important to understand how they work.
Observationally, cosmic rays outside of our Milky Way can be studied via their electron component, the cosmic-ray electrons (CRe). These GeV electrons spiral around magnetic field lines and emit synchrotron radiation that has a characteristic spectral index of $\alpha \approx -0.7$ ($I_\nu\propto \nu^{\alpha}$), which can be distinguished from a thermal radio continuum spectrum with a spectral index of $\alpha\approx -0.1$.
Lower radio frequencies are beneficial to study radio haloes for two reasons. First, the influence of the thermal radio continuum can be neglected, which makes it possible to study the uncontaminated synchrotron emission component. Second, as a result of spectral ageing, lower frequencies are more sensitive to older emission, which allows
us to study the emission farther away from the star formation regions in the disc, where young CRe are injected. These low-frequency studies have become possible with the advent of the LOw Frequency ARray \citep[LOFAR;][]{lofar2013}, which gives us a detailed view on galaxies at a frequency of about 140~MHz.
This work is the third study of a radio halo with LOFAR.
\cite{Heesen2018} have observed the Local Group starburst dwarf irregular galaxy IC\,10 at 140~MHz and showed that the galaxy likely possesses a slow wind starting at only $20~\rm km\,s^{-1}$, possibly accelerating further into the halo and exceeding the escape velocity within 1~kpc from the disc. \cite{Mulcahy2018} have investigated the propagation of cosmic rays in the late-type spiral edge-on galaxy NGC\,891. Their 146 MHz map showed intensity scale heights that were 70\% larger than at $1.5$~GHz, which shows the promise of low-frequency observations in studying the halo structure of spiral galaxies. Without detailed modelling of cosmic-ray propagation, these authors could not decide whether diffusion or advection in a wind is responsible for the CRe in the halo. This is something we would like to address in this study of NGC\,3556.
NGC\,3556 (M\,108, UGC\,06225) is a star-forming, nearby edge-on spiral galaxy at a distance of 14~Mpc with an inclination of $81\degr$ \citep{irwin2012}. The galaxy has an SFR of $2.17~\rm M_\odot\,yr^{-1}$ and an SFR surface density (SFRD) of $4.4\times 10^{-3}~\textrm{M}_\sun\,\textrm{yr}^{-1}\,\textrm{kpc}^{-2}$, which is typical for late-type edge-on galaxies \citep{changes_dr1}.
Most studies of this galaxy have focussed on localised objects such as \ion{H}{I} shells \citep{King1997} or the disc--halo interaction \citep{wang2003}. An early radio continuum study showed that the spectral index of the galaxy steepens towards the halo \citep{debruyn1979}, which is often seen in other edge-on galaxies
(i.e. NGC\,5775, \citealt{duric1998}; NGC\,4631, \citealt{Hummel1990}; NGC\,253, \citealt{heesen2009_253}). The favoured explanations of the steepening are either a cosmic-ray ageing effect, limiting the lifetime of the electrons to about $10^7$ years, or that the magnetic field is weaker in the halo than in the disc. The authors also note that a single-component fit cannot reproduce the vertical profile of the surface brightness of the radio continuum emission.
The low-frequency capabilities of LOFAR make it possible to observe different components of the galaxy, such as low-energy CRe. These CRe are believed to originate in supernova remnants from diffusive shock acceleration \citep{Bell1978, Blandford1978, Caprioli2012}. This implies that a galaxy with a higher SFR and, consequently, a higher rate of exploding supernovae (SNe), produces more low-energy CRe that can be detected at low radio frequencies.
Various energy-loss processes and transport mechanisms can change the expected vertical profile of a galaxy. A faster wind increases the scale height by driving the CRe out into, or even out of, the halo.
This has been seen in extreme star-forming galaxies such as M\,82 \citep{adebahr2013}. On the other hand, strong energy losses may lead to the non-detection of CRe far out in the halo \citep{Beck2015}.
The possibility to detect CRe also depends on the presence of a galactic magnetic field, without which the electrons would not radiate synchrotron radiation. Studying the distribution of radio synchrotron emission and
the magnetic field helps us to constrain the parameters and the effects responsible for the observed properties of the halo.
In this paper, we present deep LOFAR observations of NGC\,3556. These data are combined with observations from the Continuum Haloes in Nearby Galaxies -- an EVLA Survey \citep[CHANG-ES;][]{irwin2012} to study the distribution of CRe and magnetic fields. This galaxy is particularly interesting to test whether late-type galaxies with SFRs similar to the Milky Way have outflows, in addition to the starburst galaxies mentioned earlier. This paper is organised as follows. In Sect.~\ref{sec:data}, we present the data used in the analysis. Section~\ref{sec:results} contains our results.
The results are discussed in Sect.~\ref{sec:discussion}, and conclusions are presented in Sect.~\ref{sec:conclusions}.
\section{Data}
\label{sec:data}
\subsection{LoTSS}
\label{lotss}
The low-frequency images were taken from the LOFAR Two-metre Sky Survey \citep[LoTSS;][]{Shimwell2017}. This survey aims to cover the entire northern sky with a total of 3170 pointings, each being observed for eight hours. This produces fields with a resolution of $6\arcsec$ and a noise level of about $100~\mu\rm Jy\,beam^{-1}$.
The LoTSS first data release covers a 424-square-degree region of the HETDEX Spring Field (right ascension 10h45m00s to 15h30m00s and declination $45\degr 00\arcmin 00\arcsec$ to $57\degr 00\arcmin 00\arcsec$), which includes NGC~3556.
After the initial amplitude and phase calibration, time deviations due to clock drifts and changes in the ionosphere were corrected. These first steps were performed using \textsc{prefactor} \citep{degasperin_18a}.\footnote{Available and described at \url{https://github.com/lofar-astron/prefactor}} After that, an initial sky model was constructed using \textsc{PyBDSF} \citep{pybdsf2015}.\footnote{Available at \url{https://github.com/lofar-astron/PyBDSF}}
There are two major data calibration techniques for LOFAR data.
Both are designed as pipelines but use different approaches to data reduction. The first, \textsc{FACTOR}, splits the entire field into facets using a Voronoi tessellation \citep{okabe2000} and performs a traditional self-calibration on each facet separately, effectively employing a direction-dependent self-calibration.\footnote{Available at \url{https://github.com/lofar-astron/factor}} Individual phase solutions of the facets are then combined to allow for the imaging of the large field of view \citep{weeren2016,williams2016}. The second method, \textsc{KillMS}, applies Kalman filtering and solves for every term in the radio interferometric measurement equation \citep{tasse2014}.
The LoTSS data were calibrated and imaged using \textsc{KillMS}. \cite{Shimwell2017} found that the resulting flux densities of compact and bright ($> 100$~mJy) sources compare very well to values in the TGSS-ADR1 catalogue (TIFR GMRT Sky Survey Alternative Data Release; \citealt{2017A&A...598A..78I})
based on observations with the Giant Metrewave Radio Telescope (GMRT).
However, at high contrast levels, all bright objects reveal a bowl-like artefact. In addition, the calibration scheme employed by \textsc{KillMS} is insensitive to some faint diffuse emission, which might result in flux densities that are underestimated by up to 20\%. These issues are being rectified for the LoTSS data release 2 (DR2; Shimwell, T. 2018, private communication), but these data were not yet available for NGC\,3556.
\subsection{CHANG-ES}
The CHANG-ES project used the newly upgraded Karl G. Jansky Very Large Array (VLA) to observe a sample of 35 galaxies. The science cases of the project included the analysis of the disc--halo interface and star formation properties in general \citep{irwin2012}. The sample was chosen by the following criteria. The inclination of the galaxies had to be larger than $75\degr$ in order to be able to distinguish the disc from the halo. The size of the galaxies had to be between $4\arcmin$, owing to the desired spatial resolution, and $25\arcmin$, owing to the field of view of the VLA at the desired frequency and array configuration.
The observations were conducted in two bands: $1.5$~GHz ($L$ band) and 6~GHz ($C$ band). The different array configurations that were used, B, C, and D, accommodate different resolutions, ranging from about $1\arcsec$
in the B array/$C$ band, to about $46\arcsec$
in the D-array/$L$-band configuration. The calibration of the CHANG-ES data was conducted with the Common Astronomy Software Application package \citep[{\small CASA};][]{McMullin2007} and a standard calibration technique was used\footnote{Available at \url{http://casa.nrao.edu}}. Because of the available resolutions of the LoTSS data, $\approx 6\arcsec$ and $\approx 20\arcsec$, and the aim of looking at the halo of the galaxy, the $C$-band and $L$-band CHANG-ES data were used for the analysis presented in this work.
\subsection{H$\alpha$ imaging}
We retrieved H$\alpha$ and $R$-band images and the corresponding flat-field and bias
images from the Isaac Newton Group (ING) archive. The data were taken on 5 February 2008 with the
Isaac Newton $2.5$ m telescope (INT) and the Wide Field Prime Focus CCD mosaic camera (WFC). Three exposures of 250~s through the WFCH6568 H$\alpha$ filter (central wavelength
$656.8$~nm, filter band Full Width Half Maximum (FWHM) $\rm = 9.5~nm$) and one 200 s exposure in the
Harris $R$-band filter (central wavelength $\rm 638.0~nm$, filter band $\rm FWHM = 152.0~nm$)
were taken. Since NGC\,3556 fits fully into one CCD chip, we only reduced this
part of the data using the standard CCD reductions tools in
IRAF\footnote{IRAF is distributed by the National Optical Astronomy
Observatory, which is operated by the Association of Universities for
Research in Astronomy (AURA) under a cooperative agreement with the National
Science Foundation.}.
Pixels that were affected by cosmic rays have been corrected using the Image Reduction and Analysis Facility (IRAF) version of the
{\small LACOSMIC} routine \citep{vanDokkum2001}. We then shifted the images to
a common reference frame using 15 stars present on all four images using {\small IMEXA},
{\small IMCENTROID}, and {\small IMSHIFT} tasks in {\small IRAF} and co-added the three H$\alpha$ images. We
then subtracted the scaled $R$-band image from the combined H$\alpha$ image to
produce a continuum corrected pure emission line image. This image contains mostly the flux of the H$\alpha$ line with a small contribution of [\ion{N}{II}], which has two emission lines also covered by the narrow-band filter used.
A typical value for the ratio of the stronger [\ion{N}{II}] line at $\lambda$ =
$658.3$~nm
to H$\alpha$ is in the range of $0.3$ to $0.4$ in integrated spectra
of star-forming galaxies \citep[e.g.][]{Lehnert1994}, and therefore
somewhat higher
than in \ion{H}{II} regions because of the presence of diffuse ionised gas (DIG) in and
above the galaxies.
This makes the contribution of [\ion{N}{II}] emission to our line image
uncertain, but should be
of the order of 40$\%$ for the filter used in view of the visible
emission from the diffuse ionised medium \citep[see also][]{Collins2000}.
This contribution is different in \ion{H}{ii} regions and the DIG
\citep[e.g.][]{Dettmar1992}. The use of a broad-band filter as continuum
filter is less exact than the use of a dedicated narrow-band continuum
filter, but is a standard method yielding good results
\citep[e.g.][]{Gildepaz2003}. Since we are only interested in relative fluxes
and especially the morphology of the thermal gas, we did not perform a
detailed flux calibration. Following basic reduction the $R$-band image was
astrometrically calibrated using the {\small ASTROMETRY.NET} \citep{lang2010} routines
and the astrometric solution transferred to the H$\alpha$ image and the
continuum-corrected H$\alpha$ image.
Finally, we measured the seeing on the H$\alpha$
and $R$-band images by fitting Gaussian and Moffat functions to several bright, non-saturated stars. The resulting angular resolution of our set of images is $1\farcs5$.
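The continuum subtraction described above amounts to removing a scaled $R$-band frame from the narrow-band frame. A toy illustration with synthetic frames; the image values and the scale factor are invented, and in practice the scale would be set from field stars:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy frames standing in for the co-added H-alpha and R-band images.
r_band = rng.normal(100.0, 5.0, size=(64, 64))        # broad-band continuum frame
scale = 0.05                                          # assumed filter scale factor
line = np.zeros((64, 64)); line[30:34, 30:34] = 50.0  # a synthetic "H II region"
h_alpha = scale * r_band + line                       # narrow-band = continuum + line

h_alpha_pure = h_alpha - scale * r_band               # continuum subtraction
print(h_alpha_pure[32, 32], h_alpha_pure[0, 0])       # line flux kept, background removed
```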
\begin{table}[!htbp]
\caption{Characteristics of the maps used for the main part of the analysis.}
\label{tableData}
\centering
\begin{tabular}{cccc}
\hline\hline
Survey & Frequency & Sensitivity & Resolution\cr
& (GHz) & ($\mu\rm Jy\,beam^{-1}$) & ($\arcsec$)\cr
\hline
LoTSS & $0.144$ & 400 &21 \cr
CHANG-ES$^{a}$ & $1.5$ & 57 &21 \cr
CHANG-ES$^{b}$ & 6 & 22 &21 \cr
\hline
\end{tabular}
\tablefoot{
$^a$ Combined VLA C and D arrays.\\
$^b$ VLA D array.
}
\end{table}
\subsection{Imaging}
In Sect.~\ref{sec:results} we present total power and spectral index maps from the CHANG-ES and LOFAR data and a map of the magnetic field strength. For the CHANG-ES data, the image deconvolution with the {\small CLEAN} algorithm was performed via {\small CASA}. For deconvolution, the robust value of the Briggs weighting scheme \citep{Briggs1995} was set to 2 (close to natural weighting) to emphasise faint halo emission. Subsequent images were smoothed, if not indicated otherwise, to a circular synthesised Gaussian beam of $21\arcsec$ FWHM in order to have a uniform set of images while emphasising faint radio haloes. Table~\ref{tableData} summarises the properties of the images.
\section{Results}
\label{sec:results}
This section is organised as follows:
In Sect.~\ref{sectResultsTotalPower}, we present the total power maps from LOFAR and CHANG-ES;
in Sect.~\ref{sectResultsSpectralIndex}, we calculate the radio spectral index for the whole galaxy from the LOFAR and CHANG-ES flux density measurements and published values at other frequencies, and then we present spectral index maps from the CHANG-ES $1.5$ and 6 GHz maps and from the LOFAR 144 MHz and the CHANG-ES $1.5$ GHz images;
in Sect.~\ref{sectResultsMF},
we present a map of the magnetic field strength derived from the LOFAR 144 MHz image and the spectral index between 144~MHz and $1.5$~GHz and assuming equipartition;
in Sect.~\ref{sectResultsPI}, CHANG-ES polarisation images and a map of the Faraday rotation measure are presented;
in Sect.~\ref{subsec:scale_heights}, scale heights are derived from fits to the total power maps and the map of the equipartition magnetic field;
in Sect.~\ref{sectCRprop}, the data are used to investigate the transport of cosmic rays from the disc into the halo of NGC\,3556; and
finally, in Sect.~\ref{sectResultsSmallScaleStruct}, we
examine small-scale structures in the non-thermal and thermal surface brightness distributions traced by
the high-resolution ($6\arcsec$) LoTSS image and a continuum-subtracted H$\alpha$ image of NGC\,3556.
\begin{figure}[!htbp]
\includegraphics[width=\columnwidth]{N3556_LOF_RGB.png}
\includegraphics[width=\columnwidth]{N3556_C+D-L_RGB.png}
\includegraphics[width=\columnwidth]{N3556_DC_RGB.png}
\caption{Radio images in contours superimposed on a three-colour optical image from SDSS.
{\it Top panel:} LOFAR 144-MHz image in contours ranging from 3$\sigma = 1.2~\rm mJy\,beam^{-1}$ and increasing by a factor of two.
{\it Middle panel:} 1--2-GHz ($L$ band, combined C and D array) image in contours starting at 3$\sigma = 171~\mu\rm Jy\,beam^{-1}$ and increasing by a factor of two.
{\it Bottom panel:} 4--8-GHz ($C$ band, D array) image in contours starting at 3$\sigma = 66~\mu\rm Jy\,beam^{-1}$ and increasing by a factor of two. Some {\scriptsize CLEAN} artefacts are visible in the form of symmetric features on the south side of the galaxy.
In all panels, the synthesised beam of $21\arcsec$ FWHM is shown as a filled white circle in the bottom left corner.
}
\label{figsLOFARVLA}
\end{figure}
\subsection{Total power}
\label{sectResultsTotalPower}
In Fig.~\ref{figsLOFARVLA} (top panel), we show the LOFAR 144 MHz image in contours superimposed on a three-colour optical image from SDSS DR14 \citep{sdss_dr14}. The radio synchrotron halo extends up to 10~kpc into the halo, while the radial extent is comparable to the extent of the galaxy at optical wavelengths. The total flux density measured within the first contour level amounts to $1.43 \pm 0.36$~Jy. This is in agreement with the 151 MHz measurement from the 6C survey \citep{6c_p3} within the uncertainties.
In the middle and bottom panels of Fig.~\ref{figsLOFARVLA}, we show the CHANG-ES VLA images in contours superimposed on the same optical image from the SDSS. The $1.5$ GHz image ($L$ band) is presented in the middle panel
and the 6 GHz image ($C$ band, D array) is presented in the bottom panel. The higher frequency CHANG-ES maps show a lesser vertical extent of the halo than the LOFAR map, although the extent of the total intensity along the major axis barely changes.
In $L$ band, data from the C and D arrays have been combined to attain the extended emission found in compact arrays, while keeping a resolution close to that of the extended array. Within the first contour, the total power flux density at $1.5$~GHz amounts to $280 \pm 10$~mJy, where the first contour starts at a noise level of 3$\sigma$ ($171~\mu\rm Jy\,beam^{-1}$). In $C$ band (6~GHz), the total flux density measured within the first contour amounts to $92 \pm 3$~mJy, where the first contour starts at a level of 3$\sigma$ ($66~\mu \rm Jy\,beam^{-1}$). At $1.5$~GHz, the flux density agrees within the uncertainties with that of \citet{changes_dr1}, who used only the D array data. At 6~GHz, our flux density is slightly higher than that of \citet{changes_dr1}, which we ascribe to our lower angular resolution, helping us to detect faint emission in the halo.
\begin{figure}[!thbp]
\includegraphics[width=\columnwidth]{N3556_SpI_lit.eps}
\caption{Flux density measurements of NGC\,3556 from the published literature and from this work. A linear fit to the logarithmic values results in a spectral index of $-0.67 \pm 0.06$.}
\label{allfluxes}
\end{figure}
\begin{figure}[!htbp]
\includegraphics[width=\columnwidth]{N3556_CL_SPI.png}
\includegraphics[width=\columnwidth]{N3556_Lhba_SPI.png}
\caption{Radio spectral index maps.
{\it Top panel:} From the CHANG-ES data calculated between
$1.5$ and 6~GHz.
{\it Bottom panel:} Between the 144 MHz LOFAR and the $1.5$ GHz VLA maps.
Red contours show the surface brightness distribution of the $1.5$ GHz map (top) and 144 MHz map (bottom) with the same contour levels as in Fig.~\ref{figsLOFARVLA}.
}
\label{figSPIX}
\end{figure}
\subsection{Spectral index}
\label{sectResultsSpectralIndex}
Comparing the obtained flux densities to known values from the literature at different frequencies allows us to calculate the global radio continuum spectrum \citep{Heeschen1964, Sramek1975, Gioia1980, Gregory1991, White1992, Condon1998, Rengelink1997, Hales1990, Israel1990}. In Fig.~\ref{allfluxes}, we show a logarithmic plot of the literature values and a fit to them, resulting in a spectral index of $\alpha = -0.67\pm 0.06$. This fits well with the synchrotron spectral index of $\alpha=-0.7$ typically found in spiral galaxies \citep{Beck2015}.
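A fit of this kind is a linear regression in log-log space. The flux densities below are hypothetical stand-ins for the literature compilation; only the method is as described.

```python
import numpy as np

# Hypothetical integrated flux densities (Jy) at several frequencies (GHz),
# standing in for the literature values compiled for the global spectrum.
nu = np.array([0.144, 0.408, 1.4, 4.85])
s = np.array([1.43, 0.72, 0.31, 0.13])

# S_nu is proportional to nu^alpha, so log S = alpha * log nu + const.
alpha, log_s0 = np.polyfit(np.log10(nu), np.log10(s), 1)
print(round(alpha, 2))   # a spectral index close to -0.7 for these values
```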
We also looked at the spatially resolved spectral index distribution, both between $C$ and $L$ band and between $L$ band and LOFAR 144~MHz. In Fig.~\ref{figSPIX}, we show both spectral index maps.
The spectral index map between $C$ and $L$ band shows a thin disc with a spectral index of about $-0.7$, except for the central and western star-forming regions, where the spectral index is $\approx -0.5$.
In the spectral index map between $144$~MHz and $L$ band,
no clear disc structure is seen and the disc and halo appear to have a very similar spectral index ($\alpha \approx -0.5$). At a distance of about $8$~kpc from the disc, the spectrum steepens to values of $\alpha \approx -1$ and lower. The steepening in the halo is expected if spectral ageing plays an important role in the halo.
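Pixel-wise, a two-point spectral index map is computed as $\alpha = \ln(I_1/I_2)/\ln(\nu_1/\nu_2)$. A minimal sketch with synthetic maps at the two frequencies used here:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two toy intensity maps at matched resolution; the real inputs would be the
# 144 MHz LOFAR and 1.5 GHz VLA images convolved to the same beam.
i_low = rng.uniform(1.0, 2.0, size=(32, 32))      # 144 MHz map (arbitrary units)
alpha_true = -0.7
i_high = i_low * (1.5 / 0.144) ** alpha_true      # 1.5 GHz map, by construction

# Two-point spectral index evaluated per pixel
alpha_map = np.log(i_low / i_high) / np.log(0.144 / 1.5)
print(float(alpha_map.mean()))                    # recovers -0.7 by construction
```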
\subsection{Magnetic field}
\label{sectResultsMF}
We calculated magnetic field maps from the total intensity maps using the equipartition assumption, which states that the energy densities of cosmic rays and magnetic fields are equal in galaxies. Following \cite{Beck2005}, the magnetic field strength is calculated using
\begin{equation}
\label{beckeq}
B_{\rm eq} = \left[\dfrac{4\pi (1-2\alpha) (K_0 + 1) I_\nu E_p^{1+2\alpha}
(\nu / 2c_1)^{-\alpha}}{(-2\alpha -1)\, c_2(-\alpha)\, l\, c_4(i)}\right]^{1/(3 - \alpha)} \,
,\end{equation}
where $\alpha$ is the synchrotron spectral index, defined as $I_\nu \propto \nu^\alpha$, where $I_\nu$ is the synchrotron intensity at frequency $\nu$. The value $K_0$ is a constant factor representing the ratio of the proton to electron number density. This constant is usually assumed to be $\approx 100$. This value, however, increases towards the halo of the galaxy, which makes the derived strengths lower limits \citep{Beck2005}. \cite{Lacki2013} also show that $K_0$ remains the same to within an order of magnitude even in starburst galaxies such as M\,82. Furthermore, $l$ represents the path length along the line of sight through the galaxy, assumed to be $20$~kpc, $i$ is the inclination angle of the galaxy, $E_p$ is the proton rest energy, and $c_{1}$--$c_{4}$ are constants.
To produce a map of the equipartition magnetic field strength, we used the $144$~MHz map and the spectral index map
between $144$~MHz and $1.5$~GHz (bottom panel of Fig.~\ref{figSPIX}), but blanked pixels with $\alpha > -0.5$ because such regions are likely contaminated by free-free absorption, which would result in significantly underestimated values of the magnetic field strength.
The resulting magnetic field strength map is presented in Fig.~\ref{mag_L}.
The galaxy exhibits magnetic field strengths around 10
$\mu$G within the disc and smaller field strengths around 5~$\mu$G in the halo. This is in the same range as found in other spiral galaxies (e.g. \citealt{Beck2015}). The average uncertainty of the estimates of the magnetic field strength is about 10\%.
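Equation~(\ref{beckeq}) can be evaluated per pixel once the constants are fixed. In the sketch below, $c_2$ and $c_4$ are placeholders rather than the tabulated values of \cite{Beck2005}, so the printed number is purely illustrative; only the structure of the formula and the scaling $B_{\rm eq} \propto I_\nu^{1/(3-\alpha)}$ are meaningful.

```python
import numpy as np

def b_eq(i_nu, alpha, l, c1, c2, c4, k0=100.0, e_p=1.5033e-3, nu=144e6):
    """Equipartition field from the equation above (cgs-like inputs assumed).
    e_p is the proton rest energy in erg; c2 and c4 must be supplied from
    tables, so the call below uses placeholder values."""
    num = 4 * np.pi * (1 - 2 * alpha) * (k0 + 1) * i_nu \
        * e_p ** (1 + 2 * alpha) * (nu / (2 * c1)) ** (-alpha)
    den = (-2 * alpha - 1) * c2 * l * c4
    return (num / den) ** (1.0 / (3 - alpha))

# Illustrative call: l = 20 kpc in cm; c2 and c4 are NOT physical values.
b = b_eq(i_nu=1e-3, alpha=-0.7, l=6.2e22, c1=6.3e18, c2=1e-3, c4=1.0)
print(b)
```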
\begin{figure}[!htbp]
\includegraphics[width=\columnwidth]{N3556_Lhba_B.png}
\caption{Magnetic field strength map derived through the equipartition assumption using the spectral index between $L$ band and $144$~MHz and synchrotron flux density at $144$~MHz. Red contours show the surface brightness distribution of the $144$~MHz map as shown in Fig.~\ref{figsLOFARVLA}.
The central region and a region to the west (both in white) were blanked because the spectral index $\alpha$ is higher than $-0.5$.
}
\label{mag_L}
\end{figure}
\subsection{Polarised intensity}
\label{sectResultsPI}
\begin{figure}[!htbp]
\includegraphics[width=\columnwidth]{N3556_RGB_DL_MAG.png}
\includegraphics[width=\columnwidth]{N3556_RGB_DC_MAG.png}
\includegraphics[width=\columnwidth]{N3556_RM_DL.png}
\caption{
{\it Top panel:} Magnetic field vectors (red) and PI contours from the CHANG-ES D-array/$L$-band data overlaid on the same optical image as in Fig.~\ref{figsLOFARVLA} at $57\arcsec$ FWHM resolution.
Contours start at $3\sigma = 90~\mu\rm Jy\, beam^{-1}$, increasing by a factor of two, and red lines indicate magnetic field vectors.
{\it Middle panel:} Magnetic field vectors (red) and PI contours from the D-array/$C$-band CHANG-ES data overlaid onto the image as in Fig.~\ref{figsLOFARVLA} at FWHM 21\arcsec resolution. Contours start at a PI level of $3\sigma=27~\mu\rm Jy\,beam^{-1}$, increasing by a factor of two.
{\it Bottom panel:} Rotation measure map from the CHANG-ES D-array/$L$-band data with values within the 3$\sigma$ contour of the PI map from the $L$-band data (top panel).
The white cross indicates the position of the centre of the galaxy. Filled red circles indicate the synthesised beam size.
}
\label{figPIBRM}
\end{figure}
The CHANG-ES data provide full Stokes information, which we used in combination with rotation measure synthesis \citep[RM synthesis;][]{brentjens2005} to obtain the polarised intensity (PI) and the orientation of the ordered magnetic field component in NGC\,3556. To trace the full extent of the magnetic field we used the D-array/$L$-band data because of their sensitivity to large angular scales.
In Fig.~\ref{figPIBRM}, we show magnetic field vectors (in red) in regions where the PI in the $L$-band CHANG-ES image is larger than 3$\sigma$ ($90~\mu\rm Jy\,beam^{-1}$), overlaid on the SDSS optical image. The contours show the PI, starting at the 3$\sigma$ level. The polarised emission at $1.5$~GHz is limited to the eastern side of the galaxy. The total polarised flux density is $1.2 \pm 0.2$~mJy.
The middle panel of Fig.~\ref{figPIBRM}
shows the PI map and the corresponding magnetic field vectors from the D-array/$C$-band map. The polarisation is no longer limited to a single patch, but is observed across the whole galaxy.
The large bandwidth combined with the high spectral resolution allows us to study the Faraday rotation measure (RM) of the galaxy. We used the RM synthesis technique developed by \citet{brentjens2005} to obtain the RM map shown in the bottom panel of Fig.~\ref{figPIBRM}.
The RM map from the D-array/$L$-band data shows values between $-30$ and $10~\rm rad\,m^{-2}$ within the first (3$\sigma$) $1.5$ GHz PI contour (Fig.~\ref{figPIBRM}, top panel), with positive RM values found only outside the visible optical extent of the galaxy as shown by the SDSS image. The foreground RM value of the Milky Way, measured from a map provided by \cite{Oppermann2015}, is approximately $6 \pm 5~\rm rad\,m^{-2}$.
\subsection{Scale heights}
\label{subsec:scale_heights}
To find the scale heights of the total power profile, exponential functions were fitted to the data. Following \cite{Dumke1995}, an intrinsic exponential profile,
\begin{equation}
w(z) = w_0\,\textnormal{exp}(-z/z_0) \, ,
\end{equation}
is convolved with the beam of the telescope which, for a map that has been deconvolved with the {\small CLEAN} algorithm, is
a Gaussian profile,
\begin{equation}
g(z) = \frac{1}{\sqrt{2\pi\sigma^2}}\,\textnormal{exp}(-z^2/2\sigma^2 ) \, ,
\end{equation}
resulting in the expected distribution of the galaxy which is described by
\begin{multline}
\label{eq:doubleexp}
W_{\textnormal{exp}}(z) = \frac{w_0}{2}\,\textnormal{exp}\left(-\frac{z^2}{2\sigma^2}\right)\left[\textnormal{exp}\left[\left( \frac{\sigma^2 - zz_0}{\sqrt{2}\,\sigma z_0} \right)^{2}\right] \textnormal{erfc}\left(\frac{\sigma^2-zz_0}{\sqrt{2}\,\sigma z_0} \right) \right. \\ + \left. \textnormal{exp}\left[\left( \frac{\sigma^2 + zz_0}{\sqrt{2}\,\sigma z_0} \right)^{2}\right] \textnormal{erfc}\left(\frac{\sigma^2+zz_0}{\sqrt{2}\,\sigma z_0} \right)\right]\, ,
\end{multline}
where $w_0$ is the amplitude and $z_0$ the scale height of the distribution, and erfc is the complementary error function
\begin{equation}
\textnormal{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_{x}^{\infty} \textnormal{exp}(-r^2){\rm d}r \, .
\end{equation}
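As a sanity check, Eq.~(\ref{eq:doubleexp}) can be evaluated numerically and compared with a direct convolution of the exponential profile with the Gaussian beam. The sketch below (Python with SciPy, illustrative parameter values) uses the scaled complementary error function $\mathrm{erfcx}(x)=\exp(x^2)\,\mathrm{erfc}(x)$ to avoid overflow at large $|z|$:

```python
import numpy as np
from scipy.special import erfcx  # erfcx(x) = exp(x**2) * erfc(x)

def w_exp(z, w0, z0, sigma):
    """Exponential disc profile convolved with a Gaussian beam."""
    a_minus = (sigma**2 - z * z0) / (np.sqrt(2.0) * sigma * z0)
    a_plus = (sigma**2 + z * z0) / (np.sqrt(2.0) * sigma * z0)
    return 0.5 * w0 * np.exp(-z**2 / (2.0 * sigma**2)) * (
        erfcx(a_minus) + erfcx(a_plus))

# Cross-check against a brute-force numerical convolution.
z = np.linspace(-15.0, 15.0, 3001)    # kpc, illustrative grid
dz = z[1] - z[0]
w0, z0, sigma = 1.0, 1.5, 0.8         # illustrative values
gauss = np.exp(-z**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)
numerical = np.convolve(w0 * np.exp(-np.abs(z) / z0), gauss, mode="same") * dz
analytic = w_exp(z, w0, z0, sigma)
```

Away from the grid edges the analytic and numerical profiles agree to within about $10^{-3}$ of the peak amplitude.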
Since most edge-on spiral galaxies show two disc components \citep{thick_2006}, the so-called thin and thick discs, a two-component exponential function is fitted by using Equation~(\ref{eq:doubleexp}) twice. These two components are characterised by their amplitudes $w_1$ and $w_2$ and scale heights $z_1$ and $z_2$, which replace $w_0$ and $z_0$ in Eq.~(\ref{eq:doubleexp}). The sum of the two components is then fitted to the inclination-corrected profile. Because of the high inclination of $81\degr$, a simple $\sin(i)$ correction was sufficient. The quality of each fit is assessed with a reduced $\chi^2$ analysis. The profile was computed from box integrations over the galaxy, with 21 boxes across the minor axis, each of size $21\arcsec \times 300\arcsec$. The large extent along the major axis ensures a global average.
\subsubsection{Total power}
In Fig.~\ref{scales_TP}, we present the results from the scale height analysis of the $1.5$ GHz and 144 MHz maps with the best-fit parameters listed in Table~\ref{TP_scaleheiths_table}.
The fits are of good quality and reveal
two distinct components with scale heights that are in good agreement with measurements for other edge-on galaxies \citep[e.g.][]{changes9}. As expected from previous measurements and from the effects of spectral ageing, the extent of the halo of the galaxy is larger at $144$~MHz than at $1.5$~GHz.
\begin{figure}[!htbp]
\includegraphics[width=\columnwidth]{C+D_TP_scaleheight.eps}
\includegraphics[width=\columnwidth]{HBA_TP_scaleheight.eps}
\caption{Total power profile plots of NGC 3556 derived from the VLA $1.5$ GHz map (top) and LOFAR 144 MHz map (bottom). Blue points represent measured data and red dashed lines represent fits to it.}
\label{scales_TP}
\end{figure}
\begin{table}[!htbp]
\caption{
Best-fit flux density amplitudes and scale heights of thin and thick discs for the VLA $1.5$ GHz map and the LOFAR 144 MHz map.}
\label{TP_scaleheiths_table}
\centering
\begin{tabular}{cccc}
\hline\hline
Parameter & $1.5$~GHz & 144~MHz & Unit\\
\hline
$w_1$ & $7.8 \pm 1.0$ & $22.6 \pm 3.7$ & $\rm mJy\,beam^{-1}$\\
$z_1$ & $1.4 \pm 0.2$ & $1.9 \pm 0.5$ & kpc\\
\hline
$w_2$ & $1.3 \pm 1.1$ & $6.2 \pm 4.3$ & $\rm mJy\,beam^{-1}$\\
$z_2$ & $3.3 \pm 0.8$ & $5.9 \pm 1.9$ & kpc\\
\hline
$\chi_{\rm red}^ 2$ & $1.23$ & $1.47$ &\\
\hline
\end{tabular}
\end{table}
\subsubsection{Magnetic field}
We analysed the magnetic field strength profile in the same way as the profiles of the total power maps.
The resulting amplitudes and scale heights of the thin and thick disc are listed in Table~\ref{Mag_scaleheiths_table}.
Figure~\ref{magscaleheight} shows the vertical profile of the magnetic field strength derived from the 144~MHz magnetic field map. As for the radio continuum intensities, the magnetic field profile shows two distinct components. While the scale height of the thick magnetic field component appears to be very large ($23.7 \pm 4.1$~kpc), the value is consistent with the results of \citet{Pakmor2017}, who analysed magnetic field simulations of Milky Way-like galaxies.
\begin{figure}[h]
\includegraphics[width=\columnwidth]{HBA_BField_capped_scaleheight.eps}
\caption{Profile of the magnetic field strength in NGC 3556 derived from the LOFAR $144$~MHz map. The blue points represent measured data and the red dashed line the fit to the data.
}
\label{magscaleheight}
\end{figure}
\begin{table}[!htbp]
\caption{List of magnetic field strengths and scale heights in the thin and thick discs as derived from the LOFAR $144$~MHz map.
}
\label{Mag_scaleheiths_table}
\centering
\begin{tabular}{ccc}
\hline\hline
Parameter & Value & Unit\\
\hline
$B_1$ & $8.2 \pm 0.9$ & $\mu$G\\
$h_{B1}$ & $1.5 \pm 0.3$ & kpc\\
\hline
$B_2$ & $4.9 \pm 0.4$ & $\mu$G\\
$h_{B2}$ & $23.7 \pm 4.1$ & kpc\\
\hline
$\chi^2_{\rm red}$ & $1.56$ &\\
\hline
\end{tabular}
\end{table}
\subsection{Cosmic-ray propagation}
\label{sectCRprop}
We now use our data to investigate the transport of cosmic rays from the disc into the halo, using the electrons as proxies for GeV protons and heavier nuclei. Our model assumes that the electrons are injected in the galactic midplane at $z=0~\rm kpc$ with a power-law spectrum with injection index $\gamma$, so that the CRe number density is $N(E,z=0)\propto E^{-\gamma}$, where $E$ is the CRe energy. The CRe number density is then evolved as a function of distance from the disc using equations for pure diffusion and advection \citep{heesen2016}. The energy losses taken into account are synchrotron and inverse Compton (IC) radiation as well as adiabatic losses. The model neglects other types of losses, such as ionisation and bremsstrahlung losses; at frequencies below and around 1~GHz, these are only important in dense gaseous regions in the disc plane (e.g. \citealt{Basu2015}).
The CRe number density is then convolved in frequency space with the synchrotron emission spectrum of an individual CRe to calculate synchrotron intensities. Finally, the vertical intensity profile is convolved with the effective beam (Section~\ref{subsec:scale_heights}), so that it can be directly compared with the observations. These steps are carried out in the SPectral INdex Numerical Analysis of K(c)osmic-ray Electron Radio-Emission \citep[{\small SPINNAKER;}][]{heesen2016} computer program, for which we now provide a graphical user interface, {\small SPINTERACTIVE}.\footnote{SPINNAKER and SPINTERACTIVE can be obtained from: \href{www.github.com/vheesen/Spinnaker}{www.github.com/vheesen/Spinnaker}}
We model the vertical profile of the magnetic field strength as
\begin{equation}
B(z) = B_1\exp\left (-\frac{z}{h_{B1}}\right ) + (B_0 - B_1)\exp\left (-\frac{z}{h_{B2}}\right ),
\end{equation}
where $B_1$ is the magnetic field strength of the thin disc component and $h_{B1}$ and $h_{B2}$ are the magnetic field scale heights of the thin and thick disc components, respectively. The value $B_0$ is the total magnetic field strength in the disc, which we fix at $B_0=9~\mu\rm G$. We fit the free parameters of the magnetic field model simultaneously with either the diffusion coefficient or the advection speed, depending on which model is fitted. The CRe injection index $\gamma$ is another free parameter. The global quality of the fit was determined by calculating the average reduced $\chi^2$ from the values for the individual profiles.
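For reference, the field model with the best-fitting values of the linearly accelerating wind model (Table~\ref{spintable}) can be written as a short sketch; by construction $B(0)=B_0$:

```python
import numpy as np

def b_field(z, b0=9.0, b1=8.0, h_b1=5.5, h_b2=21.0):
    """Two-exponential vertical field model; strengths in muG, heights in kpc.

    Defaults are the best-fitting values of the linearly accelerating
    wind model; note B(0) = b1 + (b0 - b1) = b0 by construction."""
    return b1 * np.exp(-z / h_b1) + (b0 - b1) * np.exp(-z / h_b2)
```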
For a diffusion-type propagation, the model cannot reproduce the observations. The best result that can be achieved with the parameters from Table~\ref{Mag_scaleheiths_table} is presented in Fig.~\ref{spinplot_diffusion}; the resulting reduced $\chi^2$ of $4.4$ does not constitute an acceptable fit.
The observed spectral index steepens gradually, and approximately linearly, from the disc into the halo. Furthermore, the discovery of a very extended ($\approx$20~kpc) exponential component also hints at advection rather than diffusion, since the latter leads to Gaussian intensity profiles. This is supported by the results of our fitting routine. We also tested energy-dependent diffusion coefficients, parametrised as $D=D_0(E/1~{\rm GeV})^{\mu}$, but found that they made the fits worse rather than better.
\begin{figure*}[!h]
\includegraphics[width=17cm]{diffusion.eps}
\caption{Resulting profiles of the diffusive cosmic-ray propagation simulation. Blue dots represent normalised measured intensities; the top plot shows the LOFAR 144 MHz measurements and middle plot shows the VLA $1.5$ GHz measurements. The bottom panel shows the radio spectral index profile. In all panels, the red lines represent the simulated quantity. The scale for the intensity profiles is logarithmic, whereas the spectral index is shown in linear scale.}
\label{spinplot_diffusion}
\end{figure*}
As the next step, we explored an advection model with a constant wind speed. This provides us with a much improved fit with a reduced $\chi^2$ of $1.9$. The resulting best-fitting model is presented in Fig.~\ref{spinplot_advection}, where we find a best-fitting advection speed of $145~\rm km\,s^{-1}$.
This can be compared with the escape velocity, which, for a truncated isothermal sphere, can be calculated following \cite{Veilleux2005}:
\begin{equation}
\varv_{\rm esc} = \sqrt{2}\, \varv_{\textnormal{rot}}\, \sqrt{1+\ln\left ( \frac{R_{\rm max}}{r}\right )},
\end{equation}
where $\varv_{\textnormal{rot}}$ is the rotational velocity of the galaxy ($154~\rm km\,s^{-1}$ for NGC~3556; \citealt{King1997}), $r$ is the spherical radial coordinate, and $R_{\rm max}$ is the outer radius of the truncated isothermal sphere.
We calculated escape velocities assuming three different values for the outer radius (10, 30, and 60~kpc) and radial distances in the galactic disc ranging from 1 to 10~kpc from the centre of the galaxy. The resulting velocities are presented in Fig.~\ref{windspeed_fig}. The green area corresponds to escape velocities for $R_{\rm max}=10$~kpc, the red area to $R_{\rm max} = 30$~kpc, and the grey area to $R_{\rm max} = 60$~kpc. The lower and upper bounds of each area represent the limits of the radial distance: $1$~kpc at the top and $10$~kpc at the bottom.
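The escape velocity calculation can be reproduced with a few lines; at the truncation radius $r=R_{\rm max}$ the expression reduces to $\sqrt{2}\,\varv_{\rm rot}$:

```python
import numpy as np

def v_esc(r, r_max, v_rot=154.0):
    """Escape velocity (km/s) of a truncated isothermal sphere
    (Veilleux et al. 2005); r and r_max in the same units, r <= r_max."""
    return np.sqrt(2.0) * v_rot * np.sqrt(1.0 + np.log(r_max / r))

print(round(v_esc(10.0, 10.0), 1))  # at r = R_max = 10 kpc -> 217.8
print(round(v_esc(1.0, 10.0), 1))   # upper bound of the R_max = 10 kpc band
```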
\begin{figure*}[!htbp]
\includegraphics[width=17cm]{adv_const.eps}
\caption{Resulting profiles of the advective cosmic-ray propagation simulation with a constant wind speed. Blue dots represent normalised measured intensities; the top plot shows the LOFAR $144$~MHz measurements and middle plot shows the VLA $1.5$ GHz measurements. The bottom panel shows the radio spectral index profile. In all panels, the red lines represent the simulated quantity. The scale for the intensity profiles is logarithmic, whereas the spectral index is shown in linear scale.}
\label{spinplot_advection}
\end{figure*}
The fit can be improved if we assume the advection velocity to increase in the halo. We parametrised the advection velocity as
\begin{equation}
\varv(z) = \varv_0 \left (1 + \left (\frac{z}{h_{\varv}}\right )^{\beta}\right ).
\end{equation}
The acceleration parameter, $\beta$, determines the shape of the velocity profile, and $h_{\varv}$ is the velocity scale height. We found that $\beta=1$, that is, a linearly increasing wind speed, gave the best-fitting model; square-root ($\beta = 0.5$) and quadratic ($\beta=2$) velocity profiles gave poorer fits. In Fig.~\ref{spinplot_best_advection}, we present the best-fitting accelerated wind model with a reduced $\chi^2$ of $1.25$; the corresponding advection velocity profile is presented in Fig.~\ref{windspeed_fig}. This model starts with a slow advection speed of $123~\rm km\,s^{-1}$ in the midplane, but the wind accelerates away from the disc and reaches our lowest estimate of the escape velocity at a height of about $5$~kpc. It accelerates further, so that the advection speed at the outer limit of the observable halo at $z=15$~kpc is $>350~\rm km\,s^{-1}$, reaching the escape velocity of even our largest assumed isothermal sphere
($R_{\rm max} = 60$~kpc). Figure~\ref{windspeed_fig} shows the simulated wind speed as a function of distance from the disc for the accelerated wind model, together with the escape velocities for the different isothermal sphere radii.
For this model, the cosmic rays and the magnetic field are in approximate energy equipartition even away from the disc in the halo; the cosmic rays dominate the energy density by a factor of up to five. This appears to be a reasonable assumption if the cosmic rays are able to drive the wind. The best-fitting parameters for our models are presented in Table~\ref{spintable}.
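The best-fitting wind profile implies an advective travel time to the edge of the observable halo; a sketch integrating ${\rm d}t={\rm d}z/\varv(z)$ with the Table~\ref{spintable} values:

```python
import numpy as np

KPC_KM = 3.0857e16  # kilometres per kiloparsec
MYR_S = 3.1557e13   # seconds per megayear

def v_wind(z, v0=123.0, h_v=8.0, beta=1.0):
    """Advection velocity (km/s) at height z (kpc); defaults are the
    best-fitting linear-acceleration values."""
    return v0 * (1.0 + (z / h_v)**beta)

# Travel time to z = 15 kpc, the edge of the observable halo (midpoint rule).
edges = np.linspace(0.0, 15.0, 150001)
mid = 0.5 * (edges[:-1] + edges[1:])
dz = edges[1] - edges[0]
t_myr = np.sum(dz / v_wind(mid)) * KPC_KM / MYR_S
print(round(t_myr))  # -> 67 (Myr)
```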
\begin{table}[!htbp]
\caption{Best-fitting parameters for the cosmic-ray transport models.}
\centering
\begin{tabular}{ccc}
\hline\hline
Parameter & Value & Unit\\
\hline
\multicolumn{3}{c}{Diffusion model}\\
\hline
$B_0$ & $9.0$ & $\mu$G\\
$B_1$ & $5.0$ & $\mu$G\\
$h_{B1}$ & $4.0$ & kpc\\
$h_{B2}$ & $33.0$ &kpc\\
$D_0$ & $21.7$ & $10^{28}~\rm cm^2\,s^{-1}$\\
$\mu$ & $0$\\
$\gamma$ & $2.4$\\
$\chi^2_{\rm red}$ & $4.4$\\
\hline
\multicolumn{3}{c}{Advection model (constant speed)}\\
\hline
$B_0$ & $9.0$ & $\mu$G\\
$B_1$ & $8.0$ & $\mu$G\\
$h_{B1}$ & $3.5$ & kpc\\
$h_{B2}$ & $11.5$ &kpc\\
$\varv_0$ & $145$ & $\rm km\,s^{-1}$\\
$\gamma$ & $2.2$\\
$\chi^2_{\rm red}$ & $1.9$\\
\hline
\multicolumn{3}{c}{Advection model (linearly increasing speed)}\\
\hline
$B_0$ & $9.0$ & $\mu$G\\
$B_1$ & $8.0$ & $\mu$G\\
$h_{B1}$ & $5.5$ & kpc\\
$h_{B2}$ & $21.0$ &kpc\\
$\varv_0$ & $123$ & $\rm km\,s^{-1}$\\
$h_v$ & $8.0$ & kpc\\
$\gamma$ & $2.2$\\
$\chi^2_{\rm red}$ & $1.3$\\
\hline
\label{spintable}
\end{tabular}
\end{table}
\begin{figure*}[!htbp]
\includegraphics[width=17cm]{adv_linear.eps}
\caption{Resulting profiles of the advective cosmic-ray propagation simulation using a linearly accelerating wind speed. The blue dots represent normalised measured intensities; the top plot shows the LOFAR 144 MHz measurements, and middle plot shows the VLA $1.5$ GHz measurements. The bottom panel shows the radio spectral index profile. In all panels, the red lines represent the simulated quantity. The scale for the intensity profiles is logarithmic, whereas the spectral index is shown in linear scale.}
\label{spinplot_best_advection}
\end{figure*}
\begin{figure}[!htbp]
\includegraphics[width=\columnwidth]{windspeed3.png}
\caption{Wind speed profile of the different wind models derived from the simulations shown in
Figs~\ref{spinplot_advection} and
\ref{spinplot_best_advection} (blue lines; constant or linearly increasing speed)
and ranges of calculated escape velocities for this galaxy.
The coloured areas show the escape velocities calculated for different values of the outer radius of the galaxy
($R_{\textrm{max}} = 60$~kpc in grey;
$R_{\textrm{max}} = 30$~kpc in red; and
$R_{\textrm{max}} = 10$~kpc in green). In each coloured area plot, the radial distance $R$ from the centre of the galaxy in the galactic plane ($R = (r^2-z^2)^{1/2}$, where $r$ is the radius in spherical coordinates and $z$ is the distance above the disc)
increases from 1~kpc (top) to 10~kpc (bottom).}
\label{windspeed_fig}
\end{figure}
\subsection{Small-scale structures in thermal and non-thermal emission}
\label{sectResultsSmallScaleStruct}
Since the problems of the \textsc{KillMS} pipeline, namely negative bowls and possibly missing flux, were not noticeable for high-surface-brightness features, we used the high-resolution $5\arcsec$ LoTSS image of NGC\,3556 to compare the structure of the non-thermal radio continuum with the thermal emission traced by the continuum-subtracted H$\alpha$ image.
For this, we overlaid a pseudo-colour version of the continuum-subtracted H$\alpha$ image on the greyscale LoTSS high-resolution image. The result is shown in Fig.~\ref{HA-LoTSS}.
Several aspects are worth noting. Our H$\alpha$ image shows some extended diffuse
H$\alpha$ emission
in addition to the \ion{H}{ii} regions, as already noted by
\citet{Collins2000}.
While there is a general correlation between H$\alpha$ and radio continuum, there is no good one-to-one correspondence between the brightest \ion{H}{ii} regions and the brightest LOFAR emission, either spatially or in intensity. This may be an effect of dust absorption, which attenuates the H$\alpha$ line flux in the disc; the \ion{H}{ii} regions that do coincide with LOFAR emission are therefore likely located on the front side of the galaxy. The H$\alpha$-bright central region is also the brightest region in the radio continuum, implying efficient dust removal, probably due to a wind from this central region.
There are several kiloparsec-sized radial filaments visible in the LoTSS image extending into the halo; see the arrow marks in Fig.~\ref{HA-LoTSS}. These filaments either connect back to large \ion{H}{ii} regions in the disc of NGC\,3556 (\#2, 3, 4, and 7), or, in the case of the broader filaments (\#1, 5, 6, and 8), connect to large, bright regions containing DIG.
It is tempting to identify these filaments as magnetised chimneys rooted in giant \ion{H}{ii} regions and in more evolved, bright regions of DIG with still significant SNe activity. Despite the lower resolution and smaller signal-to-noise ratio, the overall morphology of the radial filaments extending into the halo resembles that of the starburst galaxy M82; see, for example, the unpublished 5~GHz map (NRAO News Release, 2014 February 3).
The non-detection of a diffuse large-scale outflow in our H$\alpha$ data is most probably due to the relatively low sensitivity of our optical
data.
Still, the clear connection of non-thermal radio continuum filaments to large star-forming regions and to regions of bright diffuse H$\alpha$ emission implies an outflow or wind driven by the combined pressure of cosmic rays and hot thermal gas as a result of SNe activity.
\begin{figure}[!htbp]
{\resizebox{\hsize}{!}{\includegraphics{NGC3556_lofar_ha_label.pdf}
}}
\caption{
Sub-image measuring $9\arcmin \times 6\arcmin$ of the high-resolution LoTSS
tile containing NGC\,3556 plotted as greyscale image; our continuum-subtracted H$\alpha$ image is overlaid as a pseudo-colour image (intensity scale
rainbow: from low intensity = violet and blue to high intensity = red).
North is up, east is left in the image. Several radio structures are marked
and numbered.}
\label{HA-LoTSS}
\end{figure}
\section{Discussion}
\label{sec:discussion}
As shown in Sect.~\ref{sec:results}, the extent of the faint halo increases from high to low frequency, whereas the radial extent does not increase. This has been observed in many edge-on spiral galaxies (e.g.\ in the CHANG-ES sample) and can be interpreted as a sign of a cosmic-ray-driven galactic wind \citep{Butsky2018}.
Furthermore, the extent in the CHANG-ES maps is generally smaller than in the LOFAR map. This can be a result of the missing-spacing problem, where an interferometer does not `see' emission on angular scales above a certain size. For the D-array/$C$-band combination, however, the largest angular scale is about $4\arcmin$ in size.\footnote{\url{https://science.nrao.edu/facilities/vla/docs/manuals/oss/performance/resolution}}
This is comparable to the extent seen in the LOFAR map, so it is reasonable to assume that the halo would be detected in its entirety. Hence, we conclude that the smaller halo extent observed at GHz frequencies, compared with the $144$~MHz map, is a consequence of spectral ageing: the CRe lose their energy before they can propagate any further.
As the top panel of Fig.~\ref{figPIBRM}
shows, the galaxy exhibits polarised $1.5$~GHz emission in only one patch, localised on the eastern side of the galaxy. Such localised regions of PI have been observed before, for example in NGC\,5055 \citep{Heald2009}. This side of the galaxy is the approaching side \citep{Wiegert2011}. \cite{Braun2010} argued that such an azimuthal asymmetry of the PI can result from a combination of Faraday depolarisation and a field projection in which a spiral disc field is combined with a quadrupolar halo field. Since this depends on the presence of Faraday depolarisation, the PI should be found across the galaxy's disc in higher-frequency observations. We can test this with our 6~GHz observations.
The middle panel of Fig.~\ref{figPIBRM} shows that polarisation can indeed be found across the disc at $6$~GHz, suggesting that the magnetic topology proposed by \cite{Braun2010} may be present in this galaxy. An argument for this can also be found in the polarisation vectors found in both maps. While the $L$-band map only shows a small patch with a curvature that resembles a quadrupolar field, the higher resolution $C$-band map shows more locations, which resemble the outline of a quadrupolar field. First, there are vertical field lines in the centre of the galaxy, and second, on both sides of the galaxy there are plane parallel field lines that curve upwards and downwards, as one expects from a quadrupolar field. The reason why the parallel part of the field is missing in the $L$-band map is that the polarised emission comes from a foreground layer of the galaxy, while emission from a higher depth along the line of sight is depolarised.
Figure~\ref{figPIBRM} (bottom panel)
indicates that the magnetic field on the southern side of the galaxy points away from the observer, whereas on the northern side it points towards the observer. This would contradict a quadrupolar field, since the magnetic field in such a configuration would have the same direction above and below the centre of the field. Unfortunately, the Faraday resolution in $C$ band is very poor ($\approx 1000~\rm rad\,m^{-2}$) because of the observing set-up of the $C$-band observations used by CHANG-ES, so the values obtained from the $C$-band map do not provide any further useful information.
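The poor $C$-band Faraday resolution follows from the small $\lambda^2$ coverage at high frequency; a sketch using $\delta\phi \approx 2\sqrt{3}/\Delta\lambda^2$ \citep{brentjens2005}, with assumed, illustrative band edges rather than the exact CHANG-ES set-up:

```python
import numpy as np

C_LIGHT = 2.998e8  # speed of light, m/s

def rm_resolution(nu_lo, nu_hi):
    """Approximate RM resolution (rad/m^2) of a band from nu_lo to nu_hi (Hz):
    delta_phi ~ 2*sqrt(3) / delta(lambda^2) (Brentjens & de Bruyn 2005)."""
    lam2_max = (C_LIGHT / nu_lo)**2  # lowest frequency -> largest lambda^2
    lam2_min = (C_LIGHT / nu_hi)**2
    return 2.0 * np.sqrt(3.0) / (lam2_max - lam2_min)

print(round(rm_resolution(1.25e9, 1.75e9)))  # assumed L band: -> 123 rad/m^2
print(round(rm_resolution(5.0e9, 7.0e9)))    # assumed C band: order 10^3 rad/m^2
```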
The fitted scale heights at the two frequencies can be used to constrain the type of CRe propagation: the ratio of the scale heights is expected to scale as a power of the frequency ratio. The relevant relations for the different propagation types, and their derivations, can be found, for example, in \cite{changes9}:
\begin{eqnarray}
\textnormal{Diffusion:~} \dfrac{h_1}{h_2} = \left(\dfrac{\nu_1}{\nu_2}\right)^{-1/8}\\
\textnormal{Energy-dependent diffusion ($\mu=0.5$):~} \dfrac{h_1}{h_2} = \left(\dfrac{\nu_1}{\nu_2}\right)^{-1/4}\\
\textnormal{Advection:~} \dfrac{h_1}{h_2} = \left(\dfrac{\nu_1}{\nu_2}\right)^{-1/2} \, .
\end{eqnarray}
These relations all assume a synchrotron-loss-dominated halo, in which the CRe lose their energy through synchrotron radiation before they can escape the galaxy; the three ratios correspond to energy-independent diffusion, energy-dependent diffusion, and advection, respectively. For an escape-dominated halo, in which the CRe propagate fast enough to escape the galaxy before radiating away their energy, the ratio can be smaller and even approach unity.
For this work, the expected ratios for $\nu_1/\nu_2 = 144~{\rm MHz} / 1.5~{\rm GHz}$ are given in Table~\ref{ratiotable}.
\begin{table}[!htbp]
\caption{Expected scale height ratios given the frequencies used in this work for different propagation types, based on \cite{changes9}.}
\begin{center}
\begin{tabular}{ccc}
\hline\hline
Diffusion & Diffusion ($\mu=0.5$) & Advection\\
\hline
$1.34$ & $1.80$ & $3.24$ \\
\hline
\label{ratiotable}
\end{tabular}
\end{center}
\end{table}
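The tabulated values follow directly from the relations above:

```python
nu1, nu2 = 144e6, 1.5e9  # Hz

ratios = {
    "diffusion (mu = 0)":   (nu1 / nu2) ** (-1.0 / 8.0),
    "diffusion (mu = 0.5)": (nu1 / nu2) ** (-1.0 / 4.0),
    "advection":            (nu1 / nu2) ** (-1.0 / 2.0),
}
for name, value in ratios.items():
    print(f"{name}: {value:.2f}")  # -> 1.34, 1.80, 3.23
```

(The last value differs from the tabulated $3.24$ only through the rounding of the effective frequencies.)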
The measured ratio of the scale heights at 144~MHz to those at $1.5$~GHz is $1.3 \pm 0.41$ for the thin disc and $1.81\pm 0.71$ for the thick disc. The large uncertainties result from the propagated scale height errors. This calculation is therefore only conclusive in the sense that advection-type propagation in a synchrotron-loss-dominated halo can be excluded.
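The quoted uncertainties follow from standard error propagation applied to the scale heights of Table~\ref{TP_scaleheiths_table}:

```python
import math

def ratio_with_error(z_low, dz_low, z_high, dz_high):
    """Ratio of the 144 MHz to the 1.5 GHz scale height with its
    propagated standard error."""
    ratio = z_low / z_high
    error = ratio * math.sqrt((dz_low / z_low)**2 + (dz_high / z_high)**2)
    return ratio, error

print(ratio_with_error(1.9, 0.5, 1.4, 0.2))  # thin disc:  ~(1.36, 0.41)
print(ratio_with_error(5.9, 1.9, 3.3, 0.8))  # thick disc: ~(1.79, 0.72)
```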
The CRe propagation simulations, however, are more conclusive and show that only advection results in a good fit, such as the model with a constant wind speed of $145~\rm km\,s^{-1}$. An even better fit was achieved using a linearly accelerated wind, which yields an initial wind speed of $\varv_0 = 123~\rm km\,s^{-1}$. In this model, the advective timescale for the wind profile shown in Fig.~\ref{windspeed_fig} is $67~\rm Myr$ at the edge of the observable halo, that is, the time the CRe need to reach a height of $15$~kpc. Depending on the assumed outer radius of the isothermal sphere, the escape velocity is reached at heights between $5$ and $15$~kpc. This is similar to the values obtained by \mbox{\cite{Everett2008}} and \cite{Mao2018}, who simulated cosmic-ray driven outflows.
To assess whether the CRe can escape the galaxy, we calculated the energy of the CRe observed at LOFAR frequencies and their synchrotron lifetime. The CRe energy can be approximated by
\begin{equation}
\frac{E}{\rm GeV} = \left(\frac{\nu}{16.1\,{\rm MHz}} \right)^{0.5}
\left(\frac{B_\perp}{\mu\rm G}\right)^{-0.5},
\end{equation}
since the radiation of the CRe is mostly synchrotron radiation, which depends on their energy and on the strength of the magnetic field \citep{changes9}. Evaluating this relation at the LOFAR observing frequency of 144~MHz and for a magnetic field strength of $9~\mu$G, the lower limit of the values found in the centre of the galaxy, yields an energy of approximately 1~GeV.
Using equations from \cite{heesen2009} and \cite{changes9}, the synchrotron lifetime can now be calculated using
\begin{equation}
\frac{t_{\rm syn}}{{\rm yr}} = 8.35 \times 10^{9}
\left( \frac{E}{{\rm GeV}} \right)^{-1}
\left( \frac{B_\perp}{\mu\rm G} \right)^{-2} \, .
\end{equation}
With a magnetic field strength of $9~\mu$G, the field strength we assumed for the cosmic-ray propagation simulations in the disc plane, and a CRe energy of 1~GeV, the resulting synchrotron lifetime is of the order of 100~Myr. Since this is longer than the advective timescale obtained from the accelerated wind model, we conclude that the CRe can indeed escape from the galaxy before they radiate away their energy.
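Both estimates can be reproduced from the two relations above:

```python
def cre_energy_gev(nu_mhz, b_perp_mug):
    """CRe energy (GeV) emitting at nu (MHz) in a field B_perp (muG)."""
    return (nu_mhz / 16.1)**0.5 * b_perp_mug**(-0.5)

def t_syn_yr(e_gev, b_perp_mug):
    """Synchrotron lifetime (yr) of a CRe of energy E (GeV) in B_perp (muG)."""
    return 8.35e9 * e_gev**(-1.0) * b_perp_mug**(-2.0)

energy = cre_energy_gev(144.0, 9.0)  # ~1.0 GeV at 144 MHz in a 9 muG field
lifetime = t_syn_yr(energy, 9.0)     # ~1e8 yr, i.e. of the order of 100 Myr
```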
Another sign of a wind can be found in the \emph{Suzaku} X-ray observations of this galaxy, as shown in Fig.~\ref{fig:suzaku}.
\begin{figure}[!htbp]
\includegraphics[width=\columnwidth]{Suzaku_lofar.png}
\caption{LOFAR 144 MHz radio continuum intensity contours overlaid onto a \emph{Suzaku} soft X-ray ($0.3$--2~keV) image. The contours start at $600~\mu\rm Jy\,beam^{-1}$ and increase by factors of two. The filled red circle represents the 21\arcsec FWHM synthesised beam.}
\label{fig:suzaku}
\end{figure}
This image is constructed in the $0.3$--2~keV band with a $67.5$ ks observation of the X-ray Imaging Spectrometer, XIS-3, aboard the observatory. The map was smoothed to a resolution of $21\arcsec$ FWHM.
The lack of a detailed correlation between the X-ray and radio emission, however, indicates that the outflows driven by the hot gas and by the cosmic rays may have very different dynamics, which remain to be investigated. These results lead us to conclude that the measured magnetic field strengths and the fitted propagation parameters are consistent with simulations of this type of galaxy, which indeed predict the existence of a galactic wind.
\section{Summary and conclusions}
\label{sec:conclusions}
In this paper, we have utilised deep LOFAR observations from the LoTSS survey at $144~\rm MHz$ to study the nearby late-type spiral galaxy NGC~3556. Its high inclination angle means that we can study the galaxy in an edge-on position, giving us a good view of its halo. We used further VLA observations from the CHANG-ES survey at effective frequencies of $1.5$ and 6~GHz to calculate radio spectral indices. We performed RM synthesis to measure the linearly polarised emission to study the structure of the magnetic field. Furthermore, we used 1D cosmic-ray transport models applied to the electrons, assuming a steady state with a balance between injection and energy losses. These are our main results:
\begin{itemize}
\item We obtained polarisation and rotation measure maps through RM synthesis, which show that the PI is only present on the eastern, approaching side of the galaxy.
Polarisation maps in $C$ band, however, show the outline of what could be a quadrupolar magnetic field. This would explain the missing polarisation in $L$ band via a magnetic topology that has been proposed by \cite{Braun2010}.
\item We analysed the total power maps using box-integrated vertical profiles and found a scale height of $1.43$~kpc for the thin disc at $1.5$~GHz and $1.86$~kpc at 144~MHz. For the thick disc, the scale heights are $3.28$ and $5.93$~kpc, respectively.
\item Using the equipartition assumption and exponential fits to the magnetic field profile, we calculated the magnetic field strength in the galaxy to be $9~\mu$G, with a magnetic field scale height of $1.5$~kpc for the thin disc and $23.7$~kpc for the thick disc.
\item We simulated cosmic-ray propagation using 1D transport models for pure diffusion and advection. We found that we can rule out diffusion as the dominant transport process since advection gives much better fits to the data. Our best-fitting model is a linearly accelerating wind with an initial wind speed of $123~\rm km\,s^{-1}$ that reaches the escape velocity at a height between 5 and $8$~kpc if a truncation radius of $10$~kpc is assumed, between $10$ and $12$~kpc if a truncation radius of $30$~kpc is assumed, and finally at $\approx 15$~kpc if a radius of $60$~kpc is assumed. In such a model, the cosmic rays and the magnetic field are close to energy equipartition in the halo, with the cosmic rays dominating by a factor of a few.
\item We estimate the CRe lifetime at 1~GeV as seen by LOFAR to be approximately 100~Myr before they radiate their energy away. From the cosmic-ray propagation model we calculated that the CRe advective timescale is 67~Myr to reach a height of 15~kpc, the sensitivity limit of our observations. Hence, this model is consistent with the CRe lifetime estimate.
\end{itemize}
Our work demonstrates the potential that LOFAR has in studying the radio haloes of nearby galaxies. Our discovery of a very extended halo magnetic field component with a scale height of at least 20~kpc shows that we have access to components in galaxies that have thus far evaded detection at GHz frequencies. Our best-fitting model requires an accelerating wind, which, to our knowledge, is also a first for the modelling of radio haloes in external galaxies. Our advection solution starts with a modest advection speed of $123~\rm km\,s^{-1}$ and accelerates to reach the escape velocity a few kiloparsec away from the disc. Such behaviour is predicted by many of the cosmic-ray-driven wind models that have recently gained traction. Radio continuum observations offer opportunities to measure the transport of cosmic rays outside of the Milky Way and thus to test these models. Since LoTSS will observe the entire northern hemisphere, many more opportunities will arise to study this subject further. In particular, the variation of cosmic-ray-driven wind properties with the underlying SFRD or other galaxy properties can be explored, which provides a unique and powerful way to constrain these models.
\begin{acknowledgement}
The data used in this work were in part processed on the Dutch national e-infrastructure with the support of the SURF Cooperative through grant e-infra 160022 \& 160152.\\
\newline
BNW acknowledges support from the Polish National Centre of Sciences (NCN), grant no. UMO-2016/23/D/ST9/00386.\\
\newline
This paper is based (in part) on data obtained with the International LOFAR
Telescope (ILT) under project code LC3\_008. LOFAR (van Haarlem et al. 2013) is the LOw
Frequency ARray designed and constructed by ASTRON. It has observing, data
processing, and data storage facilities in several countries, which are owned by
various parties (each with their own funding sources) and are collectively
operated by the ILT foundation under a joint scientific policy. The ILT resources
have benefitted from the following recent major funding sources: CNRS-INSU,
Observatoire de Paris, and Université d'Orléans, France; BMBF, MIWF-NRW, MPG,
Germany; Science Foundation Ireland (SFI), Department of Business, Enterprise and
Innovation (DBEI), Ireland; NWO, The Netherlands; The Science and Technology
Facilities Council, UK; Ministry of Science and Higher Education, Poland.\\
\newline
The LOFAR software and dedicated reduction packages on \url{https://github.com/apmechev/GRID\_LRT} were deployed on the e-infrastructure by the LOFAR e-infragroup, consisting of J. B. R. Oonk (ASTRON \& Leiden Observatory), A. P. Mechev (Leiden Observatory) and T. Shimwell (ASTRON) with support from N. Danezi (SURFsara) and C. Schrijvers (SURFsara).\\
\newline
This research has made use of data analysed using the University of
Hertfordshire high-performance computing facility
(\url{http://uhhpc.herts.ac.uk/}) and the LOFAR-UK computing facility
located at the University of Hertfordshire and supported by STFC
[ST/P000096/1].\\
\newline
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS website is \url{www.sdss.org}.
SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatório Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.\\
\newline
The work at Ruhr-University Bochum is supported by BMBF Verbundforschung
under D-LOFAR IV - FKZ: 05A17PC1.\\
\newline
The project has in part also benefitted from the exchange programme between Jagiellonian University Krakow and Ruhr-University Bochum.\\
\newline
We thank Olaf Wucknitz for his useful comments.
\newline
We thank the anonymous referee for constructive and helpful comments.
\end{acknowledgement}
\bibliographystyle{aa}
\section{Introduction}
Rapid theoretical and experimental development of quantum computers has led to a productive crossover of ideas between the fields of many-body condensed matter physics and of quantum information and computation \cite{Augusiak2012,Zeng2019}. On the one hand, a principal application of quantum devices is the simulation of quantum many-body systems that are not amenable to classical computational methods \cite{Preskill2018,McClean2016,Kandala2017}. However, the relationship is not merely one-way: concepts from many-body physics can also be useful in designing new quantum devices with improved information processing capabilities. This direction is exemplified by recent work on many-body localization, time crystals, and fractons \cite{Else2016,Yao2017,Abanin2019,Else2020,Khemani2020,Khemani2019}, which have been variously proposed for robust storage of quantum information \cite{Yao2015,Santos2020}.
Studies of discrete time crystals (DTCs) in spin systems have largely employed single-spin rotations as the driving terms that are needed to realize the DTC phase \cite{Else2016,Yao2017,Zhang2017,Choi2017}. Such driving can be achieved in quantum dots (QDs), for instance, by electric dipole spin resonance (EDSR) via an embedded micromagnet \cite{PioroLadriere2008,Watson2018,Sigillito2019,Takeda2020}. But gate-defined QDs also afford exquisite control over spin interactions, whether by detuning or symmetric barrier gates \cite{Petta2005,Reed2016,Martins2016}. This motivates the exploration of novel driving protocols in which the spin interactions are periodically modulated. Driving the interactions also allows one to implement important operations, such as a \textsc{swap}\ between the states of neighboring QD spins, which is useful for measuring states in the middle of an array by shuttling the desired state to the edge for readout. A recent paper has developed a \textsc{swap}\ DTC driving protocol in which exchange driving of spin pairs by \textsc{swap}\ operations, followed by periods of weak interaction, produces time-crystal-like signatures in a four spin QD array \cite{Qiao2020}.
In this paper, we explore the preservation and manipulation of entanglement in QD spin chains via the \textsc{swap}\ DTC protocol. We show that arbitrary states in the $S_z=0$ subspace of two neighboring spins can be preserved for long times, with marked improvement over the undriven interacting system. This result, obtained for finite chains, is reminiscent of DTC physics in the thermodynamic limit, due to the crucial role played by interactions in stabilizing the state. It also suggests the application of the \textsc{swap}\ DTC protocol as a form of dynamic quantum memory, protecting the state of the two entangled spins. One may further consider such pairs of neighboring spins as forming singlet-triplet (ST) qubits \cite{Levy2002,Petta2005}. For this case, we design a universal gate set, which includes a high-fidelity \textsc{cz}\ gate through the modification of the \textsc{swap}\ DTC protocol. Taken together, these results show that DTC-based physics offers a promising route for developing quantum information processing systems in solid-state spin arrays.
The paper is structured as follows. Section~\ref{sec:model} introduces the model and the driving protocol for the \textsc{swap}\ DTC. Section~\ref{sec:PDs} presents phase diagrams that demonstrate the robustness of the DTC phase to the presence of driving errors, a key requirement for the \textsc{swap}\ DTC to constitute a genuine phase of matter and to be of practical use. In Section~\ref{sec:retprob}, we investigate the time dependence of the return probability and uncover the existence of $4T$ periodic oscillations for initial entangled spin states, in contrast with the usual $2T$ time translation symmetry breaking found in earlier studies. Section~\ref{sec:undriven} compares the return probabilities for different driving protocols and for the undriven Heisenberg spin chain, illustrating the importance of driving for preserving entangled states of the two spins in an ST qubit. Section~\ref{sec:switchstate} demonstrates the single-qubit gate allowing for coherent switching of the preserved state. Section~\ref{sec:twoqubitgates} describes the \textsc{cz}\ gate inspired by the \textsc{swap}\ DTC protocol and presents numerical calculations of its fidelity. Finally, the results are summarized in Section~\ref{sec:conclusion}.
\section{Model of a \textsc{swap}\ Time Crystal \label{sec:model}}
We consider a one-dimensional chain of spin-1/2 degrees of freedom consisting of $L = 2N_q$ sites. The Hamiltonian for this system is given by
\begin{align}
H = \sum_{\langle ij \rangle,\alpha} \frac{J_{ij}}{4} \sigma^\alpha_i \sigma^\alpha_j + \sum_i \frac{1}{2}(B_0 + \delta B_i ) \sigma^z_i, \label{eq:heisham}
\end{align}
where $\alpha = \{x,y,z\}$ and $\langle ij \rangle$ indicates nearest-neighbors. $J_{ij}$ is the exchange interaction, $B_0$ is an externally applied uniform magnetic field, and $\delta B_i$ is a random Gaussian-distributed contribution to the total field with standard deviation $\sigma_B$, due to nuclear spin noise (as in GaAs, for instance).
Although the principles we discuss apply to generic spin-1/2 Heisenberg chains, we find it helpful to think of the system as an array of coupled ST qubits \cite{Levy2002}. An ST qubit consists of a pair of electron spins on neighboring QDs subject to a large magnetic field that separates out the polarized states, $|T_+ \rangle = | \! \!\uparrow \uparrow \rangle$ and $|T_- \rangle = | \! \!\downarrow \downarrow \rangle$, leaving behind the computational subspace $\{ |S \rangle, | T_0 \rangle \}$ of the singlet ($|S \rangle = (| \! \!\uparrow \downarrow \rangle - |\! \!\downarrow \uparrow \rangle )/\sqrt{2}$) and $S_z = 0$ triplet ($|T_0 \rangle = (|\! \!\uparrow \downarrow \rangle + |\! \!\downarrow \uparrow \rangle )/\sqrt{2}$) states. The resulting two-level system admits a Bloch sphere representation, as shown in Fig. \ref{fig:schematic}, where the basis $\{ {|\! \uparrow \downarrow \rangle}, {|\! \downarrow \uparrow \rangle} \}$ is chosen for the $\hat{z}$ direction. ST qubits are actively being studied as an encoding for qubits that are naturally insensitive to uniform magnetic field fluctuations \cite{Petta2005,Shulman2012,Wang2012,Calderon-Vargas2015,Nichol2017,Buterakos2019,Colmenar2019,Cerfontaine2020a}. $N_q$ is the number of ST qubits in the chain, which are comprised of pairs of neighboring sites $(2q-1,2q)$, with $q=1,2,...$ (Fig. \ref{fig:schematic}).
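The exchange symmetry of this computational subspace can be checked directly. A minimal numpy sketch (the basis ordering and state constructions here are illustrative choices, not tied to any particular simulation package):

```python
import numpy as np

# Two-spin computational basis ordering: |uu>, |ud>, |du>, |dd>
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ket = lambda a, b: np.kron(a, b)

S  = (ket(up, dn) - ket(dn, up)) / np.sqrt(2)   # singlet |S>
T0 = (ket(up, dn) + ket(dn, up)) / np.sqrt(2)   # triplet |T0>

# SWAP = (1 + sigma.sigma)/2 exchanges the two spins
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

sz = np.diag([0.5, -0.5])
Sz_tot = np.kron(sz, np.eye(2)) + np.kron(np.eye(2), sz)

print(np.allclose(SWAP @ S, -S))        # True: singlet is odd under exchange
print(np.allclose(SWAP @ T0, T0))       # True: |T0> is even under exchange
print(np.allclose(Sz_tot @ S, 0 * S))   # True: both lie in the S_z = 0 subspace
```

The opposite exchange parities of $|S\rangle$ and $|T_0\rangle$ are what make a \textsc{swap}\ pulse act nontrivially within the ST-qubit Bloch sphere.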
\begin{figure}[h]
\includegraphics[scale=0.8]{spinchain_schematic_STbloch3.pdf}
\caption{ Schematic of an $L=4$ Heisenberg spin chain with variable exchange interactions $J_{12}$, $J_{23}$, and $J_{34}$. One can think of this system as a pair of coupled ST qubits (with leakage), as indicated by the purple ovals. $J_{12}$ and $J_{34}$ are used to execute \textsc{swap}\ operations on the spins defining these qubits, while $J_{23}$ yields an interaction between them. The ST qubit Bloch sphere is also shown. \label{fig:schematic}}
\end{figure}
Time crystalline phases were previously discovered in driven Heisenberg chains by applying tailored ``H2I'' pulse sequences or magnetic field gradients that convert the Heisenberg interactions into effective Ising ones \cite{Barnes2019,Li2020}. In both approaches, the periodic driving consisted of single-particle terms that rotate the spins by $\pi$, whether by idealized $\delta$-function pulses or realistic EDSR methods. Notably, it was found to be necessary to apply H2I pulses or field gradients in order to stabilize a DTC for the levels of magnetic field noise present in experiment (e.g. 18 MHz in GaAs, such that $T_2^* \approx 10$ ns).
Here, we consider a driving protocol based on varying the exchange interactions in a QD array, instead of single-spin manipulations. This approach has several advantages. For one, it can be performed in systems that lack the micromagnet needed for EDSR. More importantly, the timescales for modifying the nearest-neighbor exchange are very fast (a few nanoseconds), whereas EDSR is slower for the weak to moderate field gradients typically used in experiment \cite{PioroLadriere2008}. The fundamental idea of our approach is to drive the system periodically by fast \textsc{swap}\ operations within each ST qubit, followed by long evolution times during which neighboring ST qubits interact \cite{Qiao2020}. Both of these operations are implemented by the same underlying physical mechanism, namely, the nearest-neighbor exchange coupling between QD spins. More specifically, we consider the following unitary evolution over one drive period:
\begin{align}
U = U_{SWAP}(T_{S}) U_{evo}(T_{e}) . \label{eq:swapDTCprotocol}
\end{align}
The two parts of this protocol are piecewise constant, with the \textsc{swap}\ piece given by $U_{SWAP}(T_{S}) = e^{-i H_S T_S}$, where
\begin{align}
H_S = \frac{J_{S}}{4}(1-\epsilon) \sum_{i=1,\alpha}^{L/2} \sigma^\alpha_{2i-1} \sigma^\alpha_{2i} + \sum_{i=1}^L \frac{1}{2}(B_0 + \delta B_i ) \sigma^z_i
\end{align}
is applied for time $T_S$ such that $J_S T_S = \pi$, thus interchanging the spin states of sites $2i-1$ and $2i$. $\epsilon$ introduces a fractional error in the \textsc{swap}\ pulse, corresponding to an underrotation for $\epsilon > 0$. For the $L=4$ chain, the \textsc{swap}\ interactions are illustrated by the light blue dashed lines in Fig. \ref{fig:schematic}, such that $J_{12} = J_{34} = J_S$. The evolution piece $U_{evo}(T_{e}) = e^{-i H_e T_e}$ is generated by the Hamiltonian
\begin{align}
H_e= \frac{J_{e}}{4} \sum_{i=1,\alpha}^{L/2-1} \sigma^\alpha_{2i} \sigma^\alpha_{2i+1} + \sum_{i=1}^L \frac{1}{2}(B_0 + \delta B_i ) \sigma^z_i. \label{eq:Hevo}
\end{align}
These interactions are indicated by the light green dashed line in Fig. \ref{fig:schematic}, with $J_{23} = J_e$. In the following sections, we explore the consequences of this driving protocol for the stabilization of quantum information. Unless otherwise stated, we assume an $L=4$ chain in our numerical calculations. The calculations were performed using the QuSpin Python package for exact diagonalization of quantum many-body systems \cite{Weinberg2017}.
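A minimal numpy/scipy sketch of one Floquet period of Eq.~\eqref{eq:swapDTCprotocol} for $L=4$, taking zero field and $\epsilon=0$ for simplicity (an illustration of the construction, not the QuSpin implementation used for the results below; the parameter values follow the text):

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and helpers for an L-site chain
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

def op(single, site, L):
    """Embed a single-site operator at `site` (0-indexed) in an L-site chain."""
    mats = [np.eye(2, dtype=complex)] * L
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heis_bond(i, j, L):
    """Heisenberg coupling sigma_i . sigma_j."""
    return sum(op(s, i, L) @ op(s, j, L) for s in (sx, sy, sz))

L = 4
T_S, T_e = 2e-3, 1.4                   # microseconds (2 ns and 1.4 us)
J_S, J_e = np.pi / T_S, np.pi / T_e    # rad/us, tuned so that J*T = pi

# SWAP piece: intra-qubit bonds (1,2) and (3,4); evolution piece: bond (2,3)
H_S = (J_S / 4) * (heis_bond(0, 1, L) + heis_bond(2, 3, L))
H_e = (J_e / 4) * heis_bond(1, 2, L)

U_swap, U_evo = expm(-1j * H_S * T_S), expm(-1j * H_e * T_e)
U = U_swap @ U_evo                     # one Floquet period, Eq. (2)
```

With $J_S T_S = \pi$ and zero field, `U_swap` maps the basis state $|\!\uparrow\downarrow\uparrow\downarrow\rangle$ to $|\!\downarrow\uparrow\downarrow\uparrow\rangle$ up to a global phase, i.e. it performs exact \textsc{swap} s within each ST qubit.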
\section{Phase Diagrams \label{sec:PDs}}
One of the defining features of a time crystal is its stability to perturbations due to the presence of non-zero interactions in the system. Earlier work on both Ising model and Heisenberg model DTCs has shown that sufficiently weak driving pulse errors (i.e. over- or under-rotation of the spins relative to $\pi$ radians) do not destroy the phase. Here we examine the corresponding errors in performing an incomplete \textsc{swap}\ operation. Fig.~\ref{fig:PD}(a) shows the subsystem return probability for qubit 1 (sites 1 and 2) of an $L=4$ spin chain, after four periods of the protocol ($n_T=4$). The system is initialized in the product state in which each ST qubit is in its individual non-interacting ground state, the latter being determined by the local magnetic field gradient across the double QD. Thus, the initial state chosen varies over the field noise disorder realizations. This scenario is naturally realized in experiments with gate-defined QD arrays. In our calculations, we fix the evolution time to $T_e = 1.4$ $\mu$s, and we vary the interaction strength $J_e$ and the fractional error in performing a \textsc{swap}, i.e. an error of $\epsilon = 0.5$ corresponds to a $\sqrt{\textsc{swap}}$, while for $\epsilon=1$ no operation is performed at all. We find that typical levels of charge noise have little effect on the results, so we neglect this here.
The wedge-shaped regions of high return probability for small $\epsilon$ and increasing $J_e$ illustrate that interactions are crucial for preserving the quantum state of qubit 1 in the presence of driving errors. We note that not driving the system at all ($\epsilon = 1$) is also very effective for preserving the state of qubit 1 (though of course in this case there is no time translation symmetry breaking). We examine this further in Section~\ref{sec:undriven}.
\begin{figure}[h]
\includegraphics[scale=0.8]{plot_retprobPD_DTCST_swaperr.pdf}
\caption{ (a) Phase diagram of the return probability for an initial ${|\! \uparrow \downarrow \rangle}$ state on qubit 1 as a function of inter-qubit coupling $J_e$ and pulse error $\epsilon$. (b) Phase diagram of the return probability for an initial singlet state of qubit 1. Parameters are $L=4$, $B_0 = 3075$ MHz, $\sigma_B = 18$ MHz, $T_{e}=1.4$ $\mu$s, $T_S=2$ ns, $J_S = \pi/T_S$, $n_T$=4. Here we have chosen parameters similar to those of Ref. \cite{Qiao2020}. The initial state of qubit 2 is the product state that minimizes the field gradient energy for a given disorder realization. Results are averaged over 192 disorder realizations. \label{fig:PD}}
\end{figure}
In contrast, Fig. \ref{fig:PD}(b) reveals that when qubit 1 is initialized in a singlet state, \textsc{swap}\ driving is required to produce a high return probability after four periods of evolution. Here, the initial state of qubit 2 is still the product state determined by the local field gradient. While $J_e=0$ yields a high singlet return probability for a perfect \textsc{swap}, the presence of finite interactions does increase the value of the return probability, as seen in Fig. \ref{fig:PD1D}. The singlet return probability peaks when $J_e T_e = \pi n$ (for $J_e$ measured in rad/$\mu$s). In weak magnetic field gradients, these values correspond to performing $n$ \textsc{swap}\ operations on sites belonging to different neighboring qubits (e.g. sites 2 and 3 in the $L=4$ chain). An even $n$ yields a net trivial operation (for perfect \textsc{swap} s), while odd $n$ causes the initial singlet on sites 1 and 2, $S_{12}$, to be transferred to sites 1 and 3 during the evolution piece of the protocol, which is then undone after three additional periods in the $L=4$ case. The low values of $S_{12}$ in between the peaks can be understood as arising from the monogamy of entanglement, since an incomplete \textsc{swap}\ leads to site 1 remaining partially entangled with the rest of the chain after four periods, and thus less entangled with site 2. When the initial state is the product state ${|\! \uparrow \downarrow \rangle} {|\! \uparrow \downarrow \rangle}$, the \textsc{swap}\ on 2 and 3 produces a spin echo-like effect that accounts for the maxima when $n$ is odd.
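The transfer $S_{12}\rightarrow S_{13}$ at $J_e T_e = \pi$ can be verified directly for $L=4$; a minimal sketch with zero field (illustrative values only):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

def op(single, site, L=4):
    mats = [np.eye(2, dtype=complex)] * L
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heis_bond(i, j):
    return sum(op(s, i) @ op(s, j) for s in (sx, sy, sz))

def singlet_on(i, j, L=4):
    """Singlet on sites i,j (0-indexed), all other spins up."""
    up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    def prod(spins):
        out = spins[0]
        for s in spins[1:]:
            out = np.kron(out, s)
        return out
    a, b = [up] * L, [up] * L
    a[i], a[j] = up, dn
    b[i], b[j] = dn, up
    return (prod(a) - prod(b)) / np.sqrt(2)

# Evolution piece tuned to J_e T_e = pi acts as a SWAP of sites 2 and 3
T_e = 1.4
J_e = np.pi / T_e
U_evo = expm(-1j * (J_e / 4) * heis_bond(1, 2) * T_e)

psi = singlet_on(0, 1)                       # S_12
out = U_evo @ psi
print(abs(np.vdot(singlet_on(0, 2), out)))   # ~1: the singlet now sits on sites 1 and 3
```

In the presence of the field gradients used in the main calculations this transfer is only approximate, which is why the peaks in Fig. \ref{fig:PD1D} fall below unity.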
\begin{figure}[h]
\includegraphics[scale=0.4]{plot_DTCSTqubits_PD1DvsJinter.pdf}
\caption{ Return probability for qubit 1 as a function of $J_e T_e$, for the initial states ${|\! \uparrow \downarrow \rangle}$ (blue line) and the singlet (orange line). Parameters are $L=4$, $B_0 = 3075$ MHz, $\sigma_B = 18$ MHz, $J_S = 250$ MHz, $T_{e}=1.4$ $\mu$s, $T_S=2$ ns, $n_T$=4, $\epsilon = 0$. The initial state of qubit 2 is ${|\! \uparrow \downarrow \rangle}$. Results are averaged over 960 disorder realizations. \label{fig:PD1D}}
\end{figure}
\section{Return Probability Dynamics \label{sec:retprob}}
The dynamics are also different depending on whether the initial state is a product or singlet state. Fig. \ref{fig:retprobvst_udS} illustrates the $2T$ periodicity of the return probability for qubit 1 when the system is initialized in ${|\! \uparrow \downarrow \rangle} {|\! \uparrow \downarrow \rangle}$ and $J_e T_e = \pi$. The results agree with those for a chain driven by single-spin $\pi$ rotations, as both operations have the same effect: ${|\! \uparrow \downarrow \rangle} {|\! \uparrow \downarrow \rangle} \rightarrow {|\! \downarrow \uparrow \rangle} {|\! \downarrow \uparrow \rangle}$.
\begin{figure}[h]
\includegraphics[scale=0.89]{plot_returnprob_ud_S.pdf}
\caption{ Time dependence of the return probabilities for qubit 1, given an initial state of ${|\! \uparrow \downarrow \rangle}$ for qubit 2. The blue line shows the return probability for ${|\! \uparrow \downarrow \rangle}$, given the initial product state ${|\! \uparrow \downarrow \rangle} {|\! \uparrow \downarrow \rangle}$. The orange line shows the return probability for the singlet state $|S \rangle$, given the initial product state $|S \rangle {|\! \uparrow \downarrow \rangle}$. Parameters are $L=4$, $B_0 = 3075$ MHz, $\sigma_B = 18$ MHz, $T_e = 1.4$ $\mu$s, $J_e=\pi/T_e$, $T_S=2$ ns, $J_S=\pi/T_S$, $\epsilon = 0$. Results are averaged over 6000 disorder realizations. \label{fig:retprobvst_udS}}
\end{figure}
On the other hand, the $L=4$ chain shows a $4T$ periodicity for the singlet return probability of qubit 1. This is in striking contrast with previous work on discrete time crystals, which generally found a $2T$ periodicity for spin-1/2 degrees of freedom \cite{Else2016,Yao2017,Zhang2017,Choi2017}. In fact, for $\sigma_B \ll J_e$ we find that an $L$ site chain has a singlet return probability with $LT$ periodicity. This can be easily understood as arising from successive applications of \textsc{swap} s, coming from both the explicit driving part of the protocol and the evolution part tuned to $J_e T_e = \pi$. For instance, when $L=6$ we have the following steps that transfer the singlet state down the chain, where it is ``reflected'' off the right edge and returns back to its initial position:
\begin{align}
S_{12} &\xrightarrow{\textsc{swap}} S_{12} \xrightarrow{evo} S_{13} \xrightarrow{\textsc{swap}} S_{24} \xrightarrow{evo} S_{35} \notag\\
&\xrightarrow{\textsc{swap}} S_{46} \xrightarrow{evo} S_{56} \xrightarrow{\textsc{swap}} S_{56} \xrightarrow{evo} S_{46}\notag\\
&\xrightarrow{\textsc{swap}} S_{35} \xrightarrow{evo} S_{24} \xrightarrow{\textsc{swap}} S_{13} \xrightarrow{evo} S_{12}
\end{align}
However, the experimentally relevant interaction strength needed to perform a single \textsc{swap}\ over $1.4$ $\mu$s is $\sim 350$ kHz, which is much smaller than the magnetic field noise $\sim 18$ MHz in GaAs QDs. For realistic levels of field noise, the singlet return probability displays a $4T$ periodicity regardless of chain length. Moreover, we find that when the disorder starts at small values and increases toward 18 MHz, the transition between $6T$ and $4T$ periodicity is smooth, with the return probability at $6T$ gradually decreasing while that at $4T$ increases (as opposed to a shift in the peak from $6T$ to $4T$ through intermediate values).
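In the ideal-\textsc{swap}\ limit (zero disorder, $J_e T_e = \pi$), the sequence above reduces to bookkeeping of disjoint transpositions, since each half-period swaps non-overlapping pairs of sites. A short sketch for $L=6$ that tracks the positions of the two spins sharing the singlet:

```python
# Ideal-swap bookkeeping for the L = 6 chain (zero field noise assumed):
# each half-period applies disjoint site transpositions, so it suffices to
# follow where the two members of the singlet end up.

def apply_transpositions(pos, pairs):
    """Map each tracked position through a set of disjoint site swaps."""
    m = {}
    for a, b in pairs:
        m[a], m[b] = b, a
    return tuple(sorted(m.get(p, p) for p in pos))

swap_pairs = [(1, 2), (3, 4), (5, 6)]   # intra-qubit SWAPs
evo_pairs  = [(2, 3), (4, 5)]           # inter-qubit bonds at J_e*T_e = pi

pos, history = (1, 2), []
for _ in range(6):                      # six full periods
    pos = apply_transpositions(pos, swap_pairs)
    pos = apply_transpositions(pos, evo_pairs)
    history.append(pos)

print(history)   # [(1, 3), (3, 5), (5, 6), (4, 6), (2, 4), (1, 2)]
```

The singlet walks down the chain, reflects off the right edge, and returns to sites $(1,2)$ after exactly $L=6$ periods, reproducing the sequence displayed above.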
The $4T$ periodicity observed at sufficiently strong disorder can be explained as follows. First, note that each part of the protocol involves interactions only between disjoint pairs of spins. Thus, we may consider the Hamiltonian, Eq.~\eqref{eq:heisham}, restricted to two sites $a$ and $b$,
\begin{align}
H_{ab} = \frac{J}{4} ( \sigma^x_a \sigma^x_b + \sigma^y_a \sigma^y_b + \sigma^z_a \sigma^z_b) + \frac{1}{2} ( B_a \sigma^z_a + B_b \sigma^z_b ),
\end{align}
where $B_{a,b}$ is the total field at site $a,b$. In general, the two spins coupled in a given part of the protocol can have parallel or antiparallel orientations. Within the $\{ {|\! \uparrow \downarrow \rangle}, {|\! \downarrow \uparrow \rangle} \}$ subspace the evolution operator $U = e^{-it H_{ab}}$ is
\begin{align}
U_1 = e^{iJt/2} \begin{pmatrix}
\cos ( \frac{\alpha t}{2} )+ \frac{i \Delta}{\alpha} \sin(\frac{\alpha t}{2}) & -\frac{i J}{\alpha} \sin (\frac{\alpha t}{2}) \\
-\frac{i J}{\alpha} \sin (\frac{\alpha t}{2}) & \cos (\frac{\alpha t}{2} ) - \frac{i \Delta}{\alpha} \sin(\frac{\alpha t}{2})
\end{pmatrix},
\end{align}
with $\alpha = \sqrt{J^2 + \Delta^2}$ and $\Delta = B_b - B_a$ the field gradient across the pair. We have multiplied $U$ (and hence $U_1$) by a global phase, $e^{iJt/4}$, to simplify the following analysis. The \textsc{swap}\ part of the protocol is performed in 2 ns, so that $J_S \gg \Delta$ and we may neglect errors in the transition ${|\! \uparrow \downarrow \rangle} \xrightarrow{\textsc{swap}} {|\! \downarrow \uparrow \rangle}$. For the evolution part of the protocol we use perturbation theory in $(J_e/\Delta)$ to obtain the approximate evolution
\begin{align}
U_1' = e^{iJ_et/2} \begin{pmatrix}
e^{i \Delta t /2} & 0 \\
0 & e^{-i \Delta t /2}
\end{pmatrix}. \label{eq:udduevo}
\end{align}
On the other hand, the evolution in the $\{ {|\! \uparrow \uparrow \rangle}, {|\! \downarrow \downarrow \rangle} \}$ subspace is given by
\begin{align}
U_2 = \begin{pmatrix}
e^{-i B_{tot} t/2} & 0 \\
0 & e^{i B_{tot} t/2}
\end{pmatrix}, \label{eq:uuddevo}
\end{align}
where $B_{tot} = B_a + B_b$. Now starting from the initial state $|\psi_0 \rangle = ({|\! \uparrow \downarrow \rangle} - {|\! \downarrow \uparrow \rangle}) | \! \uparrow\downarrow \cdots \rangle$ (suppressing the normalization of the state) and successively applying \textsc{swap} s and the evolutions in Eq.~\eqref{eq:udduevo} and Eq.~\eqref{eq:uuddevo}, we find
\begin{align}
| \psi_0 \rangle \rightarrow | \psi_1 \rangle = i e^{i \Delta T_e /2} | \! \downarrow \uparrow \downarrow\uparrow \cdots \rangle - e^{i B_{tot} T_{e}/2} | \! \uparrow \downarrow \downarrow\uparrow \cdots \rangle
\end{align}
after the first period, where we used that $e^{iJ_eT_e/2}=i$, and we ignored accumulated phases coming from spins other than the first three. The second period of the protocol yields
\begin{align}
| \psi_1 \rangle \rightarrow | \psi_2 \rangle = - ( {|\! \uparrow \downarrow \rangle} + {|\! \downarrow \uparrow \rangle} )| \! \uparrow\downarrow \cdots \rangle,
\end{align}
so that the first qubit is in the state $| T_0 \rangle$. Two further periods then recover the initial state on sites 1 and 2, explaining the $4T$ periodicity of the singlet return probability.
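The closed form $U_1$ used in this argument can be checked against direct exponentiation of $H_{ab}$ restricted to the $\{ {|\! \uparrow \downarrow \rangle}, {|\! \downarrow \uparrow \rangle} \}$ subspace. A quick numerical verification (the parameter values are arbitrary test values, $\hbar = 1$):

```python
import numpy as np
from scipy.linalg import expm

# H_ab restricted to the {|ud>, |du>} basis:
#   diag: -J/4 -+ Delta/2,  off-diag: J/2,  with Delta = B_b - B_a
J, Ba, Bb, t = 0.7, 2.0, 3.3, 1.9
Delta = Bb - Ba
alpha = np.sqrt(J**2 + Delta**2)

H = np.array([[-J/4 - Delta/2, J/2],
              [J/2, -J/4 + Delta/2]], dtype=complex)

# Closed form U_1; U_1 = e^{iJt/4} U, as in the text
c, s = np.cos(alpha*t/2), np.sin(alpha*t/2)
U1 = np.exp(1j*J*t/2) * np.array([[c + 1j*Delta/alpha*s, -1j*J/alpha*s],
                                  [-1j*J/alpha*s,         c - 1j*Delta/alpha*s]])

print(np.allclose(np.exp(1j*J*t/4) * expm(-1j*H*t), U1))   # True
```

In the limit $\Delta \gg J$ the off-diagonal entries are suppressed as $J/\alpha \to 0$, recovering the approximate diagonal evolution of Eq.~\eqref{eq:udduevo}.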
\begin{figure}[h]
\includegraphics[scale=0.82]{plot_returnprob_nearpairs_varyJTandspinconfig2.pdf}
\caption{ Singlet return probability for the cases in which the total phase accumulation of the evolution part of the protocol is $J_e T_e = 2 \pi/3$ (blue line) and in which the initial state is $|S\rangle|\! \uparrow \uparrow \rangle |\! \downarrow \downarrow \rangle$ (orange line). In the first case, the initial state is the singlet state for qubit 1 and the product states minimizing the field gradient energies for the other qubits. In the second case, $J_e T_e = \pi$. Other parameters are $L=6$, $B_0 = 3075$ MHz, $\sigma_B = 18$ MHz, $T_e = 1.4$ $\mu$s, $T_S= 2$ ns, $J_S = \pi/T_S$, $\epsilon=0$. Results are averaged over 6000 disorder realizations. \label{fig:retprobvst_extracases}}
\end{figure}
To provide further support for this simple physical picture, we consider two extensions of the idea. We note that the $4T$ periodicity fundamentally arises from the phase factor $e^{i J_e t/2}$ in Eq.~\eqref{eq:udduevo} becoming trivial after four periods, when $J_e T_e = \pi$ (here $J_e$ is measured in rad/$\mu$s and $\hbar = 1$). Thus, one should obtain a different periodicity when $J_e T_e$ is chosen such that the relative phase winding occurs at another rate. That this is indeed the case is shown in Fig.~\ref{fig:retprobvst_extracases}, where $J_e T_e = 2\pi/3$ and the resulting periodicity of the singlet return probability maxima is $6T$. Alternatively, one may consider initializing the second qubit in the state ${|\! \uparrow \uparrow \rangle}$ (with the first qubit still initialized in $|S\rangle$). A similar argument as above shows that the first qubit returns to the singlet state after $2T$, in agreement with the orange curve in Fig.~\ref{fig:retprobvst_extracases}. In longer chains, a singlet state prepared in the bulk experiences $4T$ periodicity of the return probability at an interaction strength $J_e T_e = \pi/2$, half the value for an ST qubit on the edge. This is essentially due to the increased number of neighbors, and mirrors the case of the single-spin return probability, for which the phase diagram of a bulk spin has half the period compared to that for an edge spin \cite{Li2020}.
\section{Comparison with the Undriven System \label{sec:undriven}}
As noted in Section~\ref{sec:PDs}, the product state ${|\! \uparrow \downarrow \rangle}$ on qubit 1 is well-preserved even in the absence of \textsc{swap}\ driving. In Fig.~\ref{fig:retprobvst_compareproductsinglet}(a) we study the return probability as a function of time, for several different driving protocols. Two different undriven cases are presented. In the first, the Heisenberg interactions are equal throughout the chain and set to the same value as used for the \textsc{swap}\ driving evolution: $J_{12}=J_{23}=J_{34}=\pi/T_{e}$. However, since the \textsc{swap}\ DTC evolution piece only involves inter-qubit $J_e$, the second undriven case mirrors this by setting $J_{23} = \pi/T_e$ and $J_{12} = J_{34} = 0$. In either case, while the undriven and \textsc{swap} -driven cases perform similarly up to ten periods, in the long-time limit the undriven cases are clearly superior. The saturation value of the return probability for the undriven cases tends to grow with increasing field noise strength \cite{Barnes2016}. We note, however, that it does not ultimately approach 1 in the large noise limit. This is due to the fact that disorder averaging mixes in unfavorable field configurations, which limits the overall return probability. On the other hand, applying a uniform linear field gradient (not shown) does tend to increase the return probability towards 1, as the gradient strength increases.
We also compare the \textsc{swap}\ protocol to more traditional single-spin driving. Thus, we consider an idealized instantaneous $\pi$ rotation of all the spins (i.e. a delta-pulse in time):
\begin{align}
V_\delta(t) = \frac{\pi}{2} \sum_{s=1}^\infty \delta (t - s T ) \sum_{j=1}^L \sigma^x_j.
\end{align}
In this case, all nearest-neighbor exchange interactions are turned on, as in the first undriven case. The period of the delta-pulses is adjusted to coincide with the total period of \textsc{swap}\ driving cases, $T_\delta = T_e + T_S$. Fig.~\ref{fig:retprobvst_compareproductsinglet}(a) shows that for an initial product state, the \textsc{swap}\ driving is preferable to the single-spin rotations of the delta-pulse case for experimentally relevant levels of magnetic field noise.
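Because the single-spin terms in $V_\delta$ commute with one another, each kick factorizes into on-site $\pi$ rotations, $e^{-i(\pi/2)\sum_j \sigma^x_j} = \prod_j (-i\sigma^x_j)$. A short numerical check of this factorization:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)

# An instantaneous pi rotation about x: exp(-i*pi*sigma_x/2) = -i*sigma_x
print(np.allclose(expm(-1j * np.pi / 2 * sx), -1j * sx))   # True

# For L spins the kick is a global phase times prod_j sigma^x_j
L = 4
kick = expm(-1j * np.pi / 2 * sum(
    np.kron(np.kron(np.eye(2**i, dtype=complex), sx), np.eye(2**(L - 1 - i)))
    for i in range(L)))
prod_sx = np.eye(1, dtype=complex)
for _ in range(L):
    prod_sx = np.kron(prod_sx, sx)
print(np.allclose(kick, (-1j)**L * prod_sx))               # True
```

The kick therefore flips every spin simultaneously, which is the operation being compared against the \textsc{swap}\ driving in Fig.~\ref{fig:retprobvst_compareproductsinglet}.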
\begin{figure}[h]
\includegraphics[scale=0.48]{plot_returnprob_comparedriving_product_and_singletstatesv2.pdf}
\caption{ (a) Comparison between the undriven system and driving protocols, for an initial product state that minimizes the field gradient energy of qubit 1. (b) Comparison between the undriven system and driving protocols, for an initial singlet state of qubit 1. Other parameters are $L=4$, $B_0 = 3075$ MHz, $\sigma_B = 18$ MHz, $T_e = 1.4$ $\mu$s, $J_e = \pi/T_e$. For the \textsc{swap}\ driving case $T_S = 2$ ns, $J_S = \pi/T_S$, and for both driven cases $\epsilon=0$. The initial state of qubit 2 is the one minimizing the field gradient energy. Results are plotted stroboscopically for every $2T$ and averaged over 6000 disorder realizations. \label{fig:retprobvst_compareproductsinglet}}
\end{figure}
Turning to the case where qubit 1 is initially in an entangled state, it is apparent from Fig.~\ref{fig:retprobvst_compareproductsinglet}(b) that an initial singlet state is not at all preserved for the undriven protocols, whereas the \textsc{swap}\ case leads to a high return probability every four periods, in accordance with the results above. In the given parameter regime, we again see that delta-pulse single-spin rotations are inferior to \textsc{swap}\ pulses for preserving the initial state.
We have seen that the product states ${|\! \uparrow \downarrow \rangle}$ and ${|\! \downarrow \uparrow \rangle}$ survive longer in the absence of \textsc{swap}\ driving, whereas $|S\rangle$ and $|T_0\rangle$ are preserved better when the system is driven. This suggests that if we consider ``unbalanced'' superpositions $\cos(\theta/2) {|\! \uparrow \downarrow \rangle} - \sin(\theta/2) {|\! \downarrow \uparrow \rangle}$ where $0<\theta<\pi/2$, there should exist some value $\theta_*$ such that for $\theta>\theta_*$, driving is beneficial for state preservation. The value of $\theta_*$ in fact depends on how long one wishes to preserve the state, as shown in Fig.~\ref{fig:retprobvst_unbalanced}. The undriven system return probabilities depend strongly on $\theta$, but are essentially time-independent after an initial decay. Here we have considered the first type of undriven system, in which all nearest-neighbor exchange interactions are nonzero and equal. In contrast, \textsc{swap}\ driving leads to a steady decay of the return probability as the number of driving periods is increased; this decay is relatively insensitive to $\theta$. The intersection of the return probability curves for the undriven and \textsc{swap}-driven cases yields the time below which \textsc{swap}\ driving enhances the attainable return probability for a given initial state parameterized by $\theta$. Conversely, we may fix the time scale at a desired value and then read off the value of $\theta_*$ by adjusting $\theta$ until the undriven return probability curve intersects the \textsc{swap}-driving curve at that time. Similar results are obtained for states with complex coefficients (not shown). Averaging over 88 states approximately uniformly distributed across the Bloch sphere, the undriven system yields a return probability of $0.65$ after 40 periods, compared to $0.90$ for the \textsc{swap}\ driven case.
This indicates that a generic state is much better preserved by driving the system with the \textsc{swap}\ DTC protocol.
\begin{figure}[h]
\includegraphics[scale=0.4]{plot_returnprob_comparedriving_unbalanced_superpositionv2.pdf}
\caption{ Comparison between the undriven (first case; all nearest neighbor interactions on) (solid lines) and \textsc{swap}-driven (dashed lines) systems when qubit 1 is initialized in $\cos(\theta/2) {|\! \uparrow \downarrow \rangle} - \sin(\theta/2) {|\! \downarrow \uparrow \rangle}$. Results are shown for $\theta=0,\pi/16,\pi/8$. Results are plotted stroboscopically every $4T$. Other parameters are $L=4$, $B_0 = 3075$ MHz, $\sigma_B = 18$ MHz, $T_e = 1.4$ $\mu$s, $J_e = \pi/T_e$. For the driven case, $T_S=2$ ns, $J_S = \pi/T_S$, $\epsilon=0$. The initial state of qubit 2 is ${|\! \uparrow \downarrow \rangle}$. Results are averaged over 6000 disorder realizations. \label{fig:retprobvst_unbalanced}}
\end{figure}
\section{Switching preserved states \label{sec:switchstate}}
In the course of an information processing task, it is necessary to be able to change which state is stored in the memory. In Fig.~\ref{fig:switchstateandvaryalpha}(a) we show that an initial $|S\rangle$ state, preserved for 20 periods, can be switched to $|T_0\rangle$ and subsequently preserved to a similar degree. The switching operation is performed simply by inserting an additional two periods with $J_e = 0$ halfway through the experimental run.
\begin{figure}[h]
\includegraphics[scale=0.5]{plot_returnprob_switchstate_varyalpha_combined.pdf}
\caption{ (a) Return probabilities for qubit 1 for the singlet (blue line) and triplet (orange line) states, for a system initialized in the singlet state for qubit 1 and subject to the switching protocol half way through the total evolution. (b) End time return probability for the state $|\psi \rangle = {|\! \uparrow \downarrow \rangle} + e^{-i\alpha} {|\! \downarrow \uparrow \rangle}$ for qubit 1, when it is initialized in the singlet state and subject to the switching protocol half way through the evolution. Other parameters are $L=4$, $B_0 = 3075$ MHz, $\sigma_B = 18$ MHz, $T_e = 1.4$ $\mu$s, $J_e = \pi/T_e$. The initial state of qubit 2 is ${|\! \uparrow \downarrow \rangle}$. Results are averaged over 6000 disorder realizations.\label{fig:switchstateandvaryalpha}}
\end{figure}
More generally, one can switch from $|S\rangle$ to an arbitrary state of the form ${|\! \uparrow \downarrow \rangle} + e^{i \alpha} {|\! \downarrow \uparrow \rangle}$ by adjusting the value of $J_e$ during the two extra periods, such that $J_e T_e = \alpha$. Fig.~\ref{fig:switchstateandvaryalpha}(b) shows that the return probability for the new state after $\sim 40$ total periods of evolution remains large, regardless of the choice of $\alpha$.
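The kinematics of this switching can be illustrated with a schematic calculation (ours, ignoring gradient corrections): if the effective inter-qubit coupling imprints a relative phase $\alpha = J_e T_e$ between the logical states $|0\rangle = {|\!\uparrow\downarrow\rangle}$ and $|1\rangle = {|\!\downarrow\uparrow\rangle}$ of qubit 1, then the singlet is carried into the one-parameter family of states above:

```python
import numpy as np

# Logical basis of one ST qubit: |0> = |up,down>, |1> = |down,up>
S = np.array([1, -1], complex) / np.sqrt(2)    # singlet = (|0> - |1>)/sqrt(2)

def phase_accum(alpha):
    """Relative phase alpha between |0> and |1>, as accumulated (up to a
    global phase) when J_e * T_e = alpha.  Schematic: gradient effects
    and the dynamics of qubit 2 are ignored."""
    return np.diag([1, np.exp(1j * alpha)]).astype(complex)

for alpha in (0.0, np.pi / 3, np.pi):
    psi = phase_accum(alpha) @ S
    target = np.array([1, -np.exp(1j * alpha)], complex) / np.sqrt(2)
    # unit-modulus overlap: psi equals |0> - e^{i alpha}|1> up to a phase
    assert np.isclose(abs(np.vdot(target, psi)), 1.0)
```

In this picture the case $\alpha=\pi$ recovers $|S\rangle$ itself (up to phase), while intermediate $\alpha$ sweeps out the equal-weight superpositions discussed above.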
\section{Implementing Two-qubit Gates \label{sec:twoqubitgates}}
While the preservation of quantum states is an important task for quantum computing, it is also necessary to manipulate states and execute various logical gates. Here we explore the possibility of using the \textsc{swap}\ driving protocol to realize two-qubit gates in a chain of ST qubits. We first note that when qubit 1 is initialized in a singlet state, the return probability oscillates with period $4T$ ($2T$) if qubit 2 is in state ${|\! \uparrow \downarrow \rangle}$ (${|\! \uparrow \uparrow \rangle}$). This implies that the evolution after two periods is equivalent (up to single-qubit rotations) to a \textsc{cnot} gate, where qubit 1 is the target, and qubit 2 is the control, since qubit 1 flips from $|S\rangle$ to $|T_0\rangle$ depending on whether the spins in qubit 2 are parallel or antiparallel. However, this approach suffers from the disadvantage that parallel spin states are not part of the computational subspace of ST qubits. Conditional control of individual spins using ESR or EDSR would alleviate this issue by allowing one to temporarily map ${|\! \downarrow \uparrow \rangle} \rightarrow {|\! \downarrow \downarrow \rangle}$ to execute the \textsc{cnot}, before restoring the ${|\! \downarrow \uparrow \rangle}$ state of the control bit.
Another approach is based on the effective Ising Hamiltonian between exchange-coupled ST qubits in a linear array \cite{Wardrop2014}. An Ising interaction of the appropriate duration can be converted to a \textsc{cz} gate by applying additional single-qubit rotations \cite{Jones2001}:
\begin{align}
\textsc{cz} = e^{-i \pi/4}e^{i \pi \sigma^z_1/4}e^{i \pi \sigma^z_2/4}e^{-i \pi \sigma^z_1 \sigma^z_2/4}.
\end{align}
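Since all four factors in this decomposition are diagonal in the computational basis, the identity can be verified directly; the following NumPy snippet (an illustration, not taken from the paper) confirms it:

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
z1 = np.kron(sz, I2)        # sigma^z_1
z2 = np.kron(I2, sz)        # sigma^z_2
zz = np.kron(sz, sz)        # sigma^z_1 sigma^z_2

def exp_diag(M, a):
    """exp(i * a * M) for a diagonal matrix M (acts entrywise)."""
    return np.diag(np.exp(1j * a * np.diag(M)))

U = (np.exp(-1j * np.pi / 4)
     * exp_diag(z1, np.pi / 4)
     @ exp_diag(z2, np.pi / 4)
     @ exp_diag(zz, -np.pi / 4))

CZ = np.diag([1, 1, 1, -1]).astype(complex)
assert np.allclose(U, CZ)   # the decomposition reproduces CZ exactly
```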
This suggests viewing the protocol for the \textsc{swap}\ time crystal not only as a means of state preservation, but also as a way to generate two-qubit gates. Indeed, whereas two periods of the protocol $U$ of Eq.~\eqref{eq:swapDTCprotocol} yield the best state preservation when $J_eT_e = \pi$ (for product states of a single qubit), setting $J_eT_e = \pi/2$ produces a \textsc{cz} gate when followed by single-qubit rotations on each ST qubit, due to the effective Ising interaction between the ST qubits. Later, we compare this two-period gate to one that uses a single period of \textsc{swap}\ DTC evolution. We numerically study the \textsc{cz}\ protocol in the $L=4$ spin chain, configured as two ST qubits. The accuracy of the proposed gate can be assessed by looking at the probability of finding the evolved spins in the state that would be obtained from an ideal \textsc{cz} gate: $p_{\textsc{cz}} = | \langle \textsc{cz}_{ideal,i} | \textsc{cz}_{actual,i} \rangle |^2$. Here, $| \textsc{cz}_{actual,i} \rangle = U_{\textsc{cz}} | \psi_i \rangle$ and $| \textsc{cz}_{ideal,i} \rangle = \textsc{cz} | \psi_i \rangle$, where truncation of the state to the logical subspace is implicit. The physically implemented gate is given by
\begin{align}
U_{\textsc{cz}} = \mathcal{R}^{(1,2)}_{z,\pi/2} [U_{\textsc{swap}}(T_S) U_{evo}(T_e)]^2, \label{eq:UCZ}
\end{align}
where the exchange coupling $J_e$ in $U_{evo}$ is such that $J_e T_e = \pi/2$, while $J_S$ in $U_{\textsc{swap}}(T_S)$ retains the value required for a \textsc{swap}\ operation: $J_S T_S = \pi$. The operation $\mathcal{R}^{(1,2)}_{z,\pi/2}$ implements a simultaneous $z$ rotation by $\pi/2$ on each qubit.
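The requirement $J_S T_S = \pi$ can be made explicit: assuming the intra-qubit exchange has the isotropic Heisenberg form $(J_S/4)\,\vec\sigma_1\cdot\vec\sigma_2$ (our normalization convention), the identity $\vec\sigma_1\cdot\vec\sigma_2 = 2\,\mathrm{SWAP} - \mathbb{1}$ implies that such a pulse equals \textsc{swap}\ up to a global phase:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)

def heis(J):
    """Heisenberg coupling (J/4) sigma_1 . sigma_2 between two spins
    (assumed form of the intra-qubit exchange term)."""
    return (J / 4) * sum(np.kron(s, s) for s in (sx, sy, sz))

def evolve(H, t):
    """exp(-i H t) for Hermitian H, via eigendecomposition (NumPy only)."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

T_S = 2e-3                              # units arbitrary
U = evolve(heis(np.pi / T_S), T_S)      # pulse with J_S * T_S = pi

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], complex)
# Equals SWAP up to the global phase exp(-i pi/4),
# since sigma_1.sigma_2 = 2 SWAP - 1.
assert np.allclose(U, np.exp(-1j * np.pi / 4) * SWAP)
```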
The fact that $U_{\textsc{cz}}$ approximates a $\textsc{cz}$ gate can be seen by noticing that in the physically relevant parameter regime $J_S\gg\Delta$ and $J_e\ll\Delta$, with $\Delta$ the magnetic field gradient across neighboring QDs, the evolution (truncated to the logical subspace) after two periods is approximately given by
\begin{align}
[U_{\textsc{swap}}(T_S) U_{evo}(T_e)]^2 \approx \begin{pmatrix} i & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & i
\end{pmatrix}\label{eq:2periodEvol}
\end{align}
in the basis $\{|0\rangle |0\rangle,|0\rangle |1\rangle ,|1\rangle |0\rangle,|1\rangle |1\rangle\}$, with $|0\rangle = {|\! \uparrow \downarrow \rangle}$ and $|1\rangle = {|\! \downarrow \uparrow \rangle}$ forming the logical basis of the ST qubits. This result can be obtained using the approximate expressions for each piece of the evolution given in Sec. \ref{sec:retprob}. The subsequent application of the $z$ rotations on each ST qubit as indicated in Eq.~\eqref{eq:UCZ} converts the right-hand side of Eq.~\eqref{eq:2periodEvol} into a \textsc{cz} gate. Below, we show that the discrepancy between $U_{\textsc{cz}}$ and $\textsc{cz}$ is mostly due to additional single-qubit gates that arise from terms of order $\Delta/J_S$ and $J_e/\Delta$. Thus, $U_{\textsc{cz}}$ remains locally equivalent to a $\textsc{cz}$ gate even when these higher-order effects are included.
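This conversion can be checked in a few lines; here we assume the logical rotations have the standard form $\mathcal{R}_z(\theta) = e^{-i\theta\sigma^z/2}$ (our convention for the paper's $\mathcal{R}^{(1,2)}_{z,\pi/2}$), under which the diagonal matrix above is carried exactly onto \textsc{cz}:

```python
import numpy as np

# Truncated two-period evolution, diag(i, 1, 1, i), in the logical basis
U2 = np.diag([1j, 1, 1, 1j]).astype(complex)

def Rz(theta):
    """Logical z rotation exp(-i theta sigma_z / 2) on one ST qubit
    (our assumed convention for the paper's R_{z,pi/2})."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

R = np.kron(Rz(np.pi / 2), Rz(np.pi / 2))   # simultaneous pi/2 z rotations
CZ = np.diag([1, 1, 1, -1]).astype(complex)
assert np.allclose(R @ U2, CZ)              # exact, no leftover global phase
```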
In Fig.~\ref{fig:CZretprobmakhlin}(a) we present numerical results for the $\textsc{cz}$ gate probability, $p_{\textsc{cz}}$, for 100 randomly selected initial product states of the ST qubits: $| \psi_i \rangle = | \psi^{(1)}_i \rangle | \psi^{(2)}_i \rangle$. Despite the spurious single-qubit gates caused by finite $\Delta/J_S$ and $J_e/\Delta$, the mean probability is high: $\overline{p_{\textsc{cz}}} = 0.991$. The use of more sophisticated pulse shaping techniques that effectively remove these extra local gates can be expected to improve this result further \cite{Wang2012,Barnes2015,Zeng2019a}. Unless noted otherwise, calculations are performed with fixed field gradients across each ST qubit, without any ``noise'' component. Corrective pulse shaping can be designed using the knowledge of these gradients to produce a pure $\textsc{cz}$ gate. In our simulations, the $\mathcal{R}^{(1,2)}_{z,\pi/2}$ operation is implemented by allowing each ST qubit to precess freely under its respective field gradients for a time $(T_{g} - t_r - 2T_S)/2$. Here $T_{g}=1$ $\mu$s is the total gate time, while
\begin{align}
t_r = \begin{cases}
\pi/(2\Delta) & \mathrm{if}\; \Delta > 0\\
3\pi/(2\Delta) & \mathrm{if}\; \Delta <0
\end{cases}.
\end{align}
After this precession, a \textsc{swap}\ pulse is applied and the qubit is allowed to precess again until $T_g - T_S$, at which time a final \textsc{swap}\ is applied. This process allows for the rotation of the single-qubit state, along with an additional spin-echo-like part that keeps the different qubits in sync. Below, we also consider the noisy situation in which the true values of the gradients deviate from the ones assumed by the experimentalist implementing the gate.
\begin{figure}[h]
\includegraphics[scale=0.6]{plot_STtwoqubitgate_makhlin_vs_Jinter_retprob_evonT2.pdf}
\caption{(a) Probability $p_{\textsc{cz}} = | \langle \textsc{cz}_{ideal} | \textsc{cz}_{actual} \rangle |^2$ of finding the spin chain in the state that would be produced by an ideal \textsc{cz} gate after the sequence in Eq.~\eqref{eq:UCZ} is applied. $J_e = \pi/(2T_e)$, with $T_e=1.4$ $\mu$s, as indicated by the vertical gray line in panel (b). Initial states are random product states in the ST qubit logical subspace. The red dashed line indicates the mean \textsc{cz}\ gate probability $\overline{p_{\textsc{cz}}}=0.991$. (b) Makhlin invariants $G_1$, $G_2$, and $G_3$, as functions of the inter-qubit coupling $J_e$. Other parameters are $L=4$, $\Delta B_1=18$ MHz, $\Delta B_2=7$ MHz, $T_e=1.4$ $\mu$s, $T_S=2$ ns, $J_S = \pi/T_S$, $T_{g}=1$ $\mu$s. \label{fig:CZretprobmakhlin}}
\end{figure}
To assess the intrinsic entangling properties of the physical two-qubit \textsc{cz}, we compute the Makhlin invariants $G_1$, $G_2$, and $G_3$, which characterize a given two-qubit gate up to arbitrary single-qubit rotations \cite{Makhlin2002,Zhang2003}. The Makhlin invariants for an ideal \textsc{cz}\ are $G_1 = G_2 = 0$ and $G_3 = 1$. Fig.~\ref{fig:CZretprobmakhlin}(b) shows the Makhlin invariants for the physical \textsc{cz}\ as functions of the inter-qubit coupling $J_e$. For the optimal value $J_e = \pi/(2T_e)$, the values of $G_{1,2,3}$ are given in Table~\ref{tab:makhlin}. One sees that the invariants of the physical gate closely approximate those of the ideal one. This suggests that errors in the single-qubit rotations are the main factor leading to the imperfect \textsc{cz}\ probabilities shown in Fig.~\ref{fig:CZretprobmakhlin}(a). We also note that $G_3$ is necessarily real for any two-qubit gate. Thus, the small imaginary part in the numerical calculation must arise from leakage out of the computational subspace. Fig.~\ref{fig:CZretprobmakhlin}(b) indicates that significant departures from the optimal $J_e$ lead to non-negligible errors in $G_1$ and $G_3$. Thus, precise experimental control over the magnitude of $J_e$ is important for realizing the desired gate. For a value of $J_e$ that is 1\% larger than the optimal one, however, $G_3$ remains well within 0.01\% of its ideal value.
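The invariants are straightforward to compute numerically. The sketch below (our own, using the magic-basis construction of Refs.~\cite{Makhlin2002,Zhang2003}; normalization conventions vary between references) takes $G_1 + iG_2 = \mathrm{tr}(m)^2/(16\det U)$ and $G_3 = [\mathrm{tr}(m)^2 - \mathrm{tr}(m^2)]/(4\det U)$, with $m = M^T M$ and $M = Q^\dagger U Q$, and reproduces $G_1=G_2=0$, $G_3=1$ for an ideal \textsc{cz}:

```python
import numpy as np

# Magic (Bell) basis transformation
Q = np.array([[1, 0, 0, 1j],
              [0, 1j, 1, 0],
              [0, 1j, -1, 0],
              [1, 0, 0, -1j]], complex) / np.sqrt(2)

def makhlin(U):
    """Local invariants (G1, G2, G3) of a two-qubit gate U.
    Dividing by det(U) removes the overall U(1) phase of U."""
    M = Q.conj().T @ U @ Q
    m = M.T @ M
    detU = np.linalg.det(U)
    g = np.trace(m) ** 2 / (16 * detU)
    G3 = (np.trace(m) ** 2 - np.trace(m @ m)) / (4 * detU)
    return g.real, g.imag, G3

CZ = np.diag([1, 1, 1, -1]).astype(complex)
G1, G2, G3 = makhlin(CZ)
assert np.allclose([G1, G2], [0, 0]) and np.isclose(G3, 1)       # ideal CZ
assert np.allclose(makhlin(np.eye(4, dtype=complex)), (1, 0, 3)) # identity
```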
\begin{table}[h]
\begin{tabular}{|c | c | c | p{6em} |}
\hline
& $G_1$ & $G_2$ & $G_3$ \\ [0.5ex]
\hline\hline
Actual \textsc{cz}\ & $3.5 \times 10^{-5}$ & $-4.1 \times 10^{-7}$ & $1 + 3.9 \times 10^{-5} - 9.0 \times 10^{-7}\, i$ \\
\hline
Ideal \textsc{cz}\ & 0 & 0 & 1 \\
\hline
\end{tabular}
\caption{Makhlin invariants for the \textsc{swap}-DTC two-qubit \textsc{cz}\ gate. Parameters are the same as in Fig.~\ref{fig:CZretprobmakhlin}. }
\label{tab:makhlin}
\end{table}
One should also consider variations in the magnetic field gradients across the two qubits. While these can be controlled to some extent, for instance, by micromagnet design, there are also contributions due to nuclear spin noise. Fig.~\ref{fig:makhlinvsgrads} shows the Makhlin invariants for the physical \textsc{cz}\ gate as functions of the magnetic field gradients across qubits 1 and 2, respectively (the left spins of each qubit are assumed to have the same field value). In this figure, the axes give the nominal field gradients that are assumed in order to determine the pulse sequences that execute the necessary $z$ rotations. The actual magnetic fields used in the calculation are modified, however, by the addition of Gaussian random field noise with standard deviation $\sigma_B = 1$ MHz. The difference between the nominal and actual field values leads to errors in the single-qubit rotations of Eq.~\eqref{eq:UCZ}. As the Makhlin invariants are unaffected by single-qubit rotations, the results are essentially the same as for $\sigma_B=0$ (not shown). Nevertheless, we find that large values ($\sim 100$ MHz) of the field gradients lead to sizable departures from the ideal \textsc{cz}\ gate, due to errors in the \textsc{swap}\ gates induced by the gradients. But for $\Delta B_1$, $\Delta B_2 < 50$ MHz, the Makhlin invariants remain close to the ideal ones. Use of composite pulse shaping is expected to allow for successful operation in the larger gradient regime as well.
\begin{figure}[h]
\includegraphics[scale=0.55]{plot_STtwoqubitgate_makhlin_sigB1.pdf}
\caption{ Makhlin invariants $G_1$, $G_2$, and $G_3$ as functions of the nominal magnetic field gradients across each qubit. The true magnetic field for each data point is modified by the addition of Gaussian random field noise with standard deviation $\sigma_B = 1$ MHz. Other parameters are $L=4$, $J_e=\pi/(2T_e)$, $T_e=1.4$ $\mu$s, $n_T=2$, $T_S=2$ ns, $J_S=\pi/T_S$, $T_{g}=1$ $\mu$s. Results are averaged over 40 disorder realizations. \label{fig:makhlinvsgrads}}
\end{figure}
Unlike the Makhlin invariants, the \textsc{cz}\ gate probabilities are reduced by inaccurate $z$ rotations, and thus by differences between the nominal and actual magnetic field gradients in the system. Fig.~\ref{fig:retprobBnoise} shows the \textsc{cz}\ gate probabilities in the presence of $\sigma_B = 1$ MHz Gaussian field noise when the nominal gradients are $\Delta B_1 = 18$ MHz and $\Delta B_2 = 7$ MHz. We find that the mean gate probability is lowered from 0.991 in the noiseless case to 0.968 in the presence of noise. This suggests that reliable knowledge of the field gradients is crucial for obtaining accurate ST qubit gates.
\begin{figure}[h]
\includegraphics[scale=0.5]{plot_STtwoqubitgate_retprob_init_states_rand_evonT2_sigB1.pdf}
\caption{\textsc{cz}\ gate probabilities for random initial product states of the ST qubits, for which the true magnetic field for each data point is modified by the addition of Gaussian random field noise with standard deviation $\sigma_B = 1$ MHz. The red dashed line shows the mean value $\overline{p_{\textsc{cz}}}=0.968$. Other parameters are $L=4$, $\Delta B_1=18$ MHz, $\Delta B_2=7$ MHz, $J_e=\pi/(2T_e)$, $T_e=1.4$ $\mu$s, $n_T=2$, $T_S=2$ ns, $J_S = \pi/T_S$, $T_{g}=1$ $\mu$s. Results are averaged over 20 disorder realizations. \label{fig:retprobBnoise}}
\end{figure}
An alternative metric for the quality of the physical \textsc{cz}\ gate is given by the fidelity \cite{Pedersen2007,Economou2015}:
\begin{align}
f = \frac{1}{20} (\Tr [U_{CZ,p} U_{CZ,p}^\dagger] + | \Tr [U_{CZ,p}^\dagger CZ^*]|^2), \label{eq:unitaryfidelity}
\end{align}
where $CZ^* = U_4 U_3 CZ U_2 U_1$ is the generalized \textsc{cz}\ consisting of the ordinary \textsc{cz}\ preceded by arbitrary one-qubit unitaries $U_{1,2}$ on the two qubits and followed by arbitrary unitaries $U_{3,4}$. Furthermore, $U_{CZ,p}$ is the DTC part ($[U_{\textsc{swap}}(T_S) U_{evo}(T_e)]^2$) of the physical \textsc{cz}\ gate projected down to the computational subspace, and $CZ^*$ is optimized over the parameters $\alpha_i, \beta_i, \gamma_i, \delta_i$ defining the one-qubit unitaries $U_i = e^{i \alpha_i} \mathcal{R}_z(\beta_i) \mathcal{R}_y (\gamma_i) \mathcal{R}_z (\delta_i)$. With this definition, the optimized fidelity of the physical \textsc{cz}\ gate is shown as a function of the magnetic field gradients in Fig.~\ref{fig:optimizedfidelity}. For gradients below $50$ MHz, the optimized fidelity reaches values in excess of $0.995$, indicating that single-qubit rotations are the limiting factor in achieving an accurate gate in this case. While $z$ rotations can be performed by turning off the intra-qubit exchange coupling $J_S$ for the appropriate length of time, thereby allowing the system to evolve in the ``always on'' field gradients, perfect $x$ rotations cannot be similarly achieved by applying a single value of $J_S$ for a given time, as the axis of rotation is tilted due to the gradients. This again highlights the need for pulse shaping methods to improve single-qubit rotations.
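As a sketch of how the fidelity of Eq.~\eqref{eq:unitaryfidelity} behaves, the snippet below (our own; it fixes the target to the plain \textsc{cz}\ instead of optimizing over the local unitaries $U_{1,\ldots,4}$) checks that a perfect gate gives $f=1$ and that a small over-rotation of the controlled phase reduces $f$:

```python
import numpy as np

def gate_fidelity(U, target):
    """Two-qubit unitary fidelity of the form
    f = (Tr[U U^dag] + |Tr[U^dag target]|^2) / 20,
    for a (possibly projected, non-unitary) 4x4 operator U."""
    d = 4
    return (np.trace(U @ U.conj().T).real
            + abs(np.trace(U.conj().T @ target)) ** 2) / (d * (d + 1))

CZ = np.diag([1, 1, 1, -1]).astype(complex)
assert np.isclose(gate_fidelity(CZ, CZ), 1.0)   # perfect gate: f = 20/20 = 1

# A small over-rotation of the controlled phase lowers f below 1
U_err = np.diag([1, 1, 1, -np.exp(1j * 0.05)]).astype(complex)
assert gate_fidelity(U_err, CZ) < 1.0
```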
\begin{figure}[h]
\includegraphics[scale=0.55]{plot_unitaryfidelity_optimized_varyBgrads.pdf}
\caption{ Optimized unitary fidelity of Eq.~\eqref{eq:unitaryfidelity} as a function of the magnetic field gradients across each qubit. Other parameters are $L=4$, $J_e=\pi/(2T_e)$, $T_e=1.4$ $\mu$s, $n_T=2$, $T_S=2$ ns, $J_S = \pi/T_S$, $T_{g}=1$ $\mu$s. \label{fig:optimizedfidelity}}
\end{figure}
Thus far we have considered a two-qubit \textsc{cz}\ gate that requires two periods of the \textsc{swap}\ DTC driving protocol, with a modified value of $J_e$ that maximizes the gate performance instead of preserving the initial state. It is natural to ask whether a \textsc{cz}\ gate could also be executed using a single period of inter-qubit evolution. That is indeed the case, as illustrated in Fig.~\ref{fig:CZretprobmakhlin_nT1}(b), which shows that for a single evolution period such that $J_e T_e = \pi$, the Makhlin invariants are close to their ideal values. Here, the evolution is not followed by the subsequent intra-qubit \textsc{swap}\ pulses of the DTC protocol, as these amount to unnecessary additional single-qubit rotations. However, the corresponding \textsc{cz}\ gate probabilities for the optimal value of $J_e$ are very poor [Fig.~\ref{fig:CZretprobmakhlin_nT1}(a)]. This is due to the fact that the one-period protocol lacks the spin-echo behavior of the two-period version discussed above, which cancels the continuous $z$ rotations of ST qubits with finite field gradients. Nevertheless, one can still achieve high \textsc{cz}\ gate probabilities by selectively rotating each qubit through different angles $\theta_{z,1}$, $\theta_{z,2}$, such that the total rotation for each qubit at the end of the gate is the required $\mathcal{R}_{z,\pi/2}$. This is seen in Fig.~\ref{fig:CZvaryangle_nT1}, which displays the \textsc{cz}\ gate probability as a function of single-qubit rotation angles applied to each qubit after the inter-qubit evolution part of the gate. The optimal choices of rotation angles depend on the field gradients across each qubit; in Fig.~\ref{fig:CZvaryangle_nT1} the highest gate probability attained is 0.980, comparable to that of the two-period \textsc{cz}\ protocol.
\begin{figure}[h]
\includegraphics[scale=0.6]{plot_STtwoqubitgate_makhlin_vs_Jinter_retprob_evonT1.pdf}
\caption{ (a) \textsc{cz}\ gate probability for the $n_T = 1$ protocol, using the optimal $J_e=\pi/T_e$, indicated by the vertical gray line in panel (b). The red dashed line indicates the mean \textsc{cz}\ gate probability $\overline{p_{\textsc{cz}}}=0.675$. (b) Makhlin invariants for the $n_T=1$ protocol for the \textsc{cz}\ gate. Other parameters are $L=4$, $\Delta B_1=18$ MHz, $\Delta B_2=7$ MHz, $\sigma_B=0$, $T_e=1.4$ $\mu$s, $T_{g}=1$ $\mu$s. \label{fig:CZretprobmakhlin_nT1}}
\end{figure}
\begin{figure}[h]
\includegraphics[scale=0.55]{plot_STtwoqubitgate_retprob_varyzrotangles_Bgrad18_7.pdf}
\caption{ Mean \textsc{cz}\ gate probability for the $n_T=1$ protocol, varying the single-qubit rotation angles applied after the two-qubit evolution. Other parameters are $L=4$, $\Delta B_1=18$ MHz, $\Delta B_2=7$ MHz, $\sigma_B=0$, $J_e=\pi/T_e$, $T_e=1.4$ $\mu$s, $T_{g}=1$ $\mu$s. Results are averaged over 100 randomly selected initial states, which are product states of generic ST qubit states.\label{fig:CZvaryangle_nT1}}
\end{figure}
The advantage of the one-period protocol (apart from the two-fold reduction in gate time) can be seen by considering the Makhlin invariants as functions of the magnetic field gradients [Fig.~\ref{fig:makhlinvsgradsnT1}]. The invariants remain within $10^{-5}$ of their ideal values throughout the range considered, thus showing considerable improvement over the two-period case at large gradients. This suggests that optimizing over arbitrary single-qubit operations before and after an ideal \textsc{cz}\ gate, in the manner of Eq.~\eqref{eq:unitaryfidelity}, should lead to very high fidelities. We confirm this expectation, as shown in Fig.~\ref{fig:optimizedfidelitynT1}, where the lowest infidelity over the range of gradients considered is only $\sim 5 \times 10^{-7}$. Infidelities obtained in experiments will likely be higher due to single-qubit rotation errors. Despite the significantly improved fidelities of the one-period protocol over the two-period version, the fact that the required $z$ rotations are gradient-dependent may present further experimental challenges. This would necessitate adaptive control of the pulse sequence in response to a previously measured value of the field gradient. The two-period sequence, on the other hand, always involves $z$ rotations of $\pi/2$ for each qubit, regardless of the gradient strength, such that the pulse sequence does not need to be changed ``on the fly.''
\begin{figure}[h]
\includegraphics[scale=0.55]{plot_STtwoqubitgate_makhlin_sigB1_nT1.pdf}
\caption{ Makhlin invariants $G_1$, $G_2$, and $G_3$ as functions of the nominal magnetic field gradients across each qubit. (Note that $1 - \mathrm{Re}[G_3]$ is plotted in (c)). The true magnetic field for each data point is modified by the addition of Gaussian random field noise with standard deviation $\sigma_B = 1$ MHz. Other parameters are $L=4$, $J_e=\pi/T_e$, $T_e=1.4$ $\mu$s, $n_T=1$, $T_{g}=1$ $\mu$s. Results are averaged over 40 disorder realizations. \label{fig:makhlinvsgradsnT1}}
\end{figure}
\begin{figure}[h]
\includegraphics[scale=0.55]{plot_unitaryinfidelity_optimized_varyBgrads_nT1.pdf}
\caption{ \textsc{cz}\ gate infidelity for the one-period protocol ($n_T = 1$) as a function of the magnetic field gradients across each qubit. The infidelity at each point is optimized over single-qubit gate parameters. Other parameters are $L=4$, $J_e=\pi/T_e$, $T_e=1.4$ $\mu$s, $T_{g}=1$ $\mu$s. \label{fig:optimizedfidelitynT1}}
\end{figure}
\section{Conclusions \label{sec:conclusion}}
We have shown that driving exchange interactions, as opposed to performing single-spin rotations, in QD spin chains provides an alternative route to time crystal physics that can be used for the preservation and manipulation of quantum states. We demonstrated that such driving is particularly useful for preserving the entangled singlet and triplet spin states often used as logical qubit states for quantum computation, and that on average it preserves arbitrary states on the Bloch sphere better than the undriven case. We also uncovered additional signatures of the exchange-driven time crystal phase, including a $4T$ periodicity of the singlet return probability that runs counter to the $2T$ periodicity normally encountered in such systems. In addition, we considered applications of this time crystal physics to the design of exchange-driven quantum gates for singlet-triplet qubits. In particular, we showed that a simple modification of the \textsc{swap}-DTC protocol yields a high-fidelity \textsc{cz}\ gate, up to single-qubit operations. These results suggest that time crystal physics may be beneficial to quantum information applications based on QD spin qubits.
\begin{acknowledgments}
We thank Bikun Li and Fernando Calderon-Vargas for helpful discussions. This work is supported by DARPA Grant No. D18AC00025.
\end{acknowledgments}
\section{Introduction}
\setcounter{equation}{0}
\hskip 1em
In this paper, we study the following free boundary problem modeling
tumor growth in the necrotic phase:
\begin{equation}\label{1.1}
\left\{
\begin{array}{rll}
\Delta\sigma\,&=\sigma \chi_{\Omega^+(t)}\qquad \qquad & \mbox{in}\;\;
\Omega^+(t)\cup\Omega^-(t),\; t>0,
\\ [0.2 cm]
\Delta p\,&=-\mu(\sigma-\tilde{\sigma}) \chi_{\Omega^+(t)}+ \nu \chi_{\Omega^-(t)}
\quad \qquad & \mbox {in}\;\;
\Omega^+(t)\cup\Omega^-(t), \; t>0,
\\ [0.2 cm]
\sigma\,&=\bar{\sigma}, \quad p=\gamma\kappa
\qquad & \mbox{on}\;\; \Gamma^+(t), \; t>0,
\\ [0.2 cm]
\sigma\,&=\hat\sigma,\quad [\![\partial_n\sigma]\!]=0 & \hbox{on}\;\; \Gamma^-(t), \; t>0,
\\ [0.2 cm]
[\![ p ]\!]\,&=0,\quad \, [\![\partial_n p]\!]=0\quad
& \hbox{on}\;\; \Gamma^-(t), \; t>0,
\\ [0.2 cm]
\partial_y \sigma\,&=0, \quad \,\partial_y p=0 & \hbox{on}\;\; \Gamma_0, \; t>0,
\\ [0.2 cm]
V\,&=-\partial_{n} p \qquad \qquad \qquad & \hbox{on}\;\; \Gamma^+(t), \; t>0,
\\ [0.2 cm]
\Gamma^+(0)\,&=\Gamma^+_0 & \hbox{at}\;\; t=0,
\end{array}
\right.
\end{equation}
where $\sigma=\sigma(x,y,t)$ and $p=p(x,y,t)$ are
unknown functions representing the concentration of nutrient
and the internal pressure
within the tumor, respectively, $\Omega^+(t)$ and $\Omega^-(t)$ are
unknown domains occupied by proliferating tumor cells and necrotic cells
at time $t>0$,
respectively, and
$$
\Omega^+(t):=\{(x,y)\in \Bbb R^{n-1}\times \Bbb R:\eta(x,t)<y<\rho(x,t)\},
$$
$$
\Omega^-(t):=\{(x,y)\in \Bbb R^{n-1}\times \Bbb R: 0<y<\eta(x,t)\},
$$
where $\eta(x,t)$ and $\rho(x,t)$ are unknown functions satisfying
$0<\eta(x,t)<\rho(x,t)$ for $x\in \Bbb R^{n-1}$ and $t>0$,
$\Gamma^+(t)$ and $\Gamma^-(t)$ are free boundaries,
and
$$
\Gamma^+(t):={\rm graph}(\rho(x, t))=
\{(x,y)\in\Bbb R^{n-1}\times \Bbb R: y=\rho(x,t)\},
$$
$$
\Gamma^-(t):={\rm graph}(\eta(x,t))=\{
(x,y)\in\Bbb R^{n-1}\times \Bbb R: y=\eta(x,t)\},
$$
$\Gamma_0:={\rm graph}(0)$ is the fixed bottom boundary,
$\kappa$ is the mean curvature, and $V$
is the outward normal velocity of
the upper tumor surface $\Gamma^+(t)$; $\partial_n$ denotes
the outward normal derivative with respect to $\Omega^{+}(t)$, and
$\bar{\sigma}$, $\tilde{\sigma}$, $\hat\sigma$, $\mu$, $\nu$ and
$\gamma$ are positive constants: $\bar{\sigma}$ represents the constant
external nutrient supply, $\tilde{\sigma}$ is a critical value for
the balance of cell apoptosis and mitosis, $\hat\sigma$ is the nutrient
level for tumor cell necrosis, $\mu$ is the proliferation rate of
proliferating tumor cells,
$\nu$ is the dissolution rate of necrotic cells,
and $\gamma$ is the cell-to-cell adhesiveness. We assume
that $0<\hat\sigma<\tilde\sigma<\bar\sigma$, and $\chi_{\Omega^\pm(t)}$ denotes the
characteristic function of $\Omega^\pm(t)$.
The notation $[\![p]\!]$
denotes the jump of $p$ across $\Gamma^-(t)$, and
$$
[\![p]\!]:=\Upsilon p^+-\Upsilon p^- \qquad\mbox{for}\quad
p^+=p\big|_{\Omega^+(t)}\;\hbox{\; and\; }\;
p^-=p\big|_{\Omega^-(t)},
$$
where $\Upsilon$ is the trace operator on $\Gamma^-(t)$.
Similarly, $[\![\partial_n p]\!]$ and $[\![\partial_n \sigma]\!]$
denote the jump of the normal derivatives of $p$ and $\sigma$
across $\Gamma^-(t)$, respectively.
Problem (\ref{1.1}) originates from a mathematical model proposed by
Byrne and Chaplain [\ref{byr-cha-96}] for the growth of a necrotic,
multilayered tumor cultivated in vitro on an impermeable support membrane.
The first equation describes the diffusion and consumption of nutrient
in the tumor region; the second equation is based on Darcy's law and the
conservation of mass;
the third equation states that the nutrient supply is constant and that
the pressure is continuous
across the upper tumor surface, taking cell-to-cell adhesiveness into account;
the next three lines of equations state that the nutrient concentration,
the pressure and
their normal derivatives are continuous across the lower free boundary,
and that neither nutrient nor tumor cells can pass
through the bottom boundary. For
more details we refer to [\ref{byr-cha-96}].
In the non-necrotic case, i.e., $\Omega^-(t)=\emptyset$, the corresponding
version of problem (\ref{1.1}) has been well studied. Cui and Escher [\ref{cui-esc-09}]
established local well-posedness and asymptotic stability of the unique flat
equilibrium (independent of $x$).
Zhou et al. [\ref{zhou-esc-cui}] proved that there exist infinitely many bifurcating
stationary solutions. It is worth mentioning that
another extensively studied model is the solid tumor spheroid model, where
the tumor region is sphere-shaped. For similar solid tumor spheroid models,
many illuminating results, such as global
well-posedness, existence of bifurcating stationary solutions, Hopf
bifurcations, and asymptotic stability of the radially symmetric equilibrium,
have been established; we refer to [\ref{cui-16},
\ref{cui-esc-07}, \ref{cui-esc-08}, \ref{fri-07}, \ref{fri-hu-06-0}, \ref{fri-hu-06},
\ref{fri-hu-08}, \ref{fri-rei-99}, \ref{hao-12}, \ref{wu-16},
\ref{wu-cui-10}, \ref{wu-zhou-17}]
and the references therein.
In the necrotic case, we observe that problem (\ref{1.1}) has two free boundaries.
The evolution of the upper free boundary $\Gamma^+(t)$ is
governed by the equation $V=-\partial_n p$, but the evolution of the
lower free boundary $\Gamma^-(t)$ is implicit. This is a remarkable
feature and makes the analysis of problem (\ref{1.1}) in high dimensions
much more difficult than in the non-necrotic case.
By the maximum principle, since $0<\hat\sigma<\bar\sigma$,
we have $\sigma(x,y,t)\equiv \hat\sigma$ in $\Omega^-(t)$,
and $\sigma(x,y,t)> \hat\sigma$ in $\Omega^+(t)$ at each
$t>0$. Let $\Omega(t):= \{(x,y)\in\Bbb R^{n-1}\times
\Bbb R: 0<y<\rho(x,t)\}$ for a given function $\rho(x, t)$.
Then we can rewrite $\Omega^+(t)=\{(x,y)
\in\Omega(t): \sigma(x,y,t) >\hat\sigma\}$,
$\Omega^-(t)=\mbox{int}\{ (x,y)\in \Omega(t):
\sigma(x,y,t)=\hat\sigma\}$ and
$\Gamma^-(t)=\partial\Omega^+(t)\cap
\partial \Omega^-(t)$. We see that
$(\sigma(x,y,t), \Gamma^-(t))$ satisfies
an obstacle problem:
\begin{equation}\label{1.2}
\left\{
\begin{array}{l}
-\Delta \sigma +\sigma\ge0,\qquad \sigma \ge \hat \sigma, \qquad
(-\Delta\sigma+\sigma)(\sigma-\hat\sigma)=0\qquad\mbox{in }\;\; \Omega(t),
\\ [0.2 cm]
\sigma=\bar\sigma \qquad
\mbox{on }\;\; \Gamma^+(t),
\qquad\qquad
\partial_y\sigma=0 \qquad \mbox{on }\;\; \Gamma_0.
\end{array}
\right.
\end{equation}
The regularity of the free boundary of obstacle problems in high dimensions is
very difficult to study (cf. [\ref{caf-77}, \ref{kin-sta}]). Even for a
smooth domain $\Omega(t)$, in general the solution $\sigma\notin C^2(\Omega(t))$.
This is the main new difficulty of the necrotic tumor model
compared with the non-necrotic case. To overcome this difficulty,
we first show that there is a unique flat stationary solution
(see Section 2), which allows us to consider the obstacle problem (\ref{1.2})
in a small neighborhood of this flat stationary solution. Motivated by
Cui [\ref{cui-16}] and Hamilton [\ref{ham-82}],
using the Nash-Moser implicit function theorem,
we prove that for any given $\Gamma^+(t)={\rm graph}(\rho(x,t))$ close to
the flat equilibrium, the solution $(\sigma(x,y,t), \Gamma^-(t))$
depends smoothly on $\rho(x,t)$, and $\Gamma^-(t)$
is actually smooth in the space variables (see Lemma 3.2).
Then we further solve the first six lines of equations of problem $(\ref{1.1})$ and
get the solution $p(x,y,t)$, which also depends smoothly on $\rho(x,t)$;
finally, by the second to last equation
$V=-\partial_n p$, we reduce problem (\ref{1.1}) to an abstract differential
equation $\partial_t\rho+\Psi(\rho)=0$ involving only the
function $\rho(x,t)$. In suitable Banach spaces, we show that this abstract
differential equation is of parabolic type, and the local well-posedness follows
from the geometric theory of parabolic differential equations. By a delicate analysis and
computation, we study the spectrum of the linearized operator at the
flat stationary solution, and by the linearized stability principle we
obtain asymptotic stability of the flat stationary solution.
To give a precise statement
of our main results, we introduce some notation.
In this paper, we only consider the case $n=2$;
the higher-dimensional case can be treated similarly.
We denote the solution of problem (\ref{1.1}) by
$(\sigma, p, \eta, \rho)$, with
$\Gamma^-(t)={\rm graph}(\eta(x,t))$
and $\Gamma^+(t)={\rm graph}(\rho(x,t))$.
For the sake of simplicity, we impose that
\begin{equation}\label{1.3}
\sigma(x,y,t),\; p(x,y,t), \; \eta(x,t),\; \rho(x,t) \mbox{ are }
2\pi\mbox{-periodic in } x\in\mathbb{R}.
\end{equation}
We identify $\mathbb {S}=\mathbb{R}/{2\pi}\mathbb{Z}$ and
identify the space of continuous
$2\pi$-periodic functions $C_{per}(\mathbb{R})$ with $C(\mathbb{S})$.
Given $s>0$,
we denote by $BUC^{s}(\Bbb S)$ the space of all bounded and uniformly
H\"older continuous functions on $\Bbb S$ of order $s$.
Let $h^{s}(\Bbb S)$ denote the
little H\"older space, i.e., the closure of $BUC^\infty(\Bbb S)$
in $BUC^{s}(\Bbb S)$. Similarly, we denote by $h^{s}(\Omega)$
the closure of $BUC^{\infty}(\Omega)$ in $BUC^{s}(\Omega)$ for
a bounded open domain $\Omega$ in $\Bbb R^2$.
Our first main result is stated as follows:
\medskip
{\bf Theorem 1.1} \ \ {\em Let $\tilde\sigma>\hat\sigma>0$ be given.
There exists a positive constant $\sigma_*>\tilde\sigma$
depending only on $\hat\sigma$ and $\tilde\sigma$, such that
the free boundary problem $(\ref{1.1})$ has a unique flat stationary solution
$(\sigma_s, p_s, \eta_s, \rho_s)$ if and only if $\bar{\sigma}>
\sigma_*$.
}
\medskip
We shall prove this result in Section 2. Recall that in the non-necrotic case
(see Theorem 2 of [\ref{cui-esc-09}]), there exists a unique
non-necrotic flat stationary solution for all $\bar\sigma>\tilde\sigma$.
It is an interesting difference that the necrotic flat stationary solution
does not exist for $\tilde\sigma<\bar\sigma\le \sigma_*$.
\medskip
Our second main result is about asymptotic stability of the flat stationary solution.
\medskip
{\bf Theorem 1.2} \ \ {\em
$(i)$ There exists a positive threshold value $\gamma_*$
of cell-to-cell adhesiveness such that for any $\gamma>\gamma_*$,
the flat stationary solution $(\sigma_s, p_s, \eta_s, \rho_s)$ is asymptotically
stable in the following sense: There exists a constant $\epsilon>0$ such that
if $\rho_0\in h^{4+\alpha}(\mathbb{S})$,
$\|\rho_0\|_{h^{4+\alpha}(\mathbb{S})}<\epsilon$ and $\Gamma^+_0=
{\rm graph}(\rho_s+\rho_0)$,
then
the solution $(\sigma, p, \eta, \rho)$ of problem $(\ref{1.1})$ exists for
all $t > 0$ and converges to $(\sigma_s, p_s, \eta_s, \rho_s)$
exponentially fast as $t\to+\infty$.
$(ii)$
If $0<\gamma<\gamma_*$, then $(\sigma_s, p_s, \eta_s, \rho_s)$
is unstable.}
\medskip
The above result implies that the cell-to-cell
adhesiveness $\gamma$ plays an important role in the tumor's
stability. A smaller value of $\gamma$ may make the tumor
more aggressive. The threshold value $\gamma_*$ of
cell-to-cell adhesiveness is given by (\ref{4.19})
and (\ref{4.21}), and $\gamma_*$ can be regarded as a
function of the dissolution rate $\nu$. Since
$\displaystyle {d \gamma_*/ d \nu}\le0$,
we see that a
smaller value of $\nu$ may make the
tumor less aggressive. In the limiting case $\nu=0$, however,
the flat stationary solution is not asymptotically stable
for any $\gamma>0$ (see Remark 5.2).
\medskip
The rest of this paper is organized as follows.
In the next section, we study the existence and uniqueness
of the flat stationary solution. In Section 3, by using the implicit
function theorem and the classical theory of elliptic equations,
we reduce the free boundary
problem (\ref{1.1}) to a Cauchy problem in little H\"older
spaces, and establish the local well-posedness.
Section 4 is devoted to studying the
linearized problem at the flat stationary solution and
computing its eigenvalues. In the
last section we carry out the stability analysis and give a proof of
Theorem 1.2.
\medskip
\hskip 1em
\section{Flat stationary solution}
\setcounter{equation}{0}
\hskip 1em
In this section, we study the existence and uniqueness
of the flat stationary solution of the free boundary problem
(\ref{1.1}).
We denote the flat stationary solution by $(\sigma_s(y), p_s(y), \eta_s, \rho_s)$ with
$0<\eta_s<\rho_s$. It satisfies the following problem
\begin{equation}\label{2.1}
\left\{
\begin{array}{rll}
\sigma_s''(y)\,&=\sigma_s(y), \qquad \quad
p_s''(y)=-\mu(\sigma_s(y)-\tilde{\sigma}) \qquad \qquad & \hbox{ for }
\eta_s<y<\rho_s,
\\ [0.2 cm]
\sigma_s''(y)\, &= 0, \qquad\quad\quad\;\;\; p_s''(y)=\nu
\quad \qquad &\hbox{ for }\; 0<y<\eta_s,
\\ [0.2 cm]
\sigma_s(\rho_s)\,&=\bar{\sigma}, \qquad\qquad\;\; p_s(\rho_s)=0,
\\ [0.2 cm]
\sigma_s(\eta_s)\,&=\hat\sigma,\qquad\qquad \;\;\sigma_s'(\eta_s)=0 ,
\\ [0.2 cm]
p_s(\eta_s^+)\,&=p_s(\eta_s^-),\quad\quad\; p_s'(\eta_s^+)=p_s'(\eta_s^-),
\\ [0.2 cm]
\sigma_s'(0)\,&=0, \qquad p_s'(0)=0 ,\qquad p_s'(\rho_s)=0.
\end{array}
\right.
\end{equation}
We easily get that
\begin{equation}\label{2.2}
\sigma_s(y)=
\left\{
\begin{array}{ll}
\displaystyle {\bar\sigma \sinh (y-\eta_s)
+\hat\sigma\sinh(\rho_s-y)\over\sinh(\rho_s-\eta_s)}
\qquad\qquad\qquad\qquad\quad\;\; & \mbox{for}\;\;\eta_s\le y\le \rho_s,
\\ [0.2 cm]
\hat\sigma\qquad
& \mbox{for}\;\; 0<y<\eta_s,
\end{array}
\right.
\end{equation}
\begin{equation}\label{2.3}
p_s(y)=
\left\{
\begin{array}{ll}
\displaystyle {\mu\over 2}\tilde\sigma(y^2-\rho_s^2)
+(\nu-\mu\tilde\sigma)(y-\rho_s)\eta_s+\mu(\bar\sigma-\sigma_s(y))
\qquad & \mbox{for}\;\;\eta_s\le y\le \rho_s,
\\ [0.3 cm]
\displaystyle {\nu\over 2} (y^2-\eta_s^2)+p_0 \qquad
& \mbox{for}\;\; 0<y<\eta_s,
\end{array}
\right.
\end{equation}
where $p_0=
{\mu\over 2}\tilde\sigma(\eta_s^2-\rho_s^2)+(\nu-\mu\tilde\sigma)
(\eta_s-\rho_s)\eta_s
+\mu(\bar\sigma-\hat\sigma)$.
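As a quick sanity check, the closed forms $(\ref{2.2})$ and $(\ref{2.3})$ can be verified symbolically against system $(\ref{2.1})$. The sketch below is a minimal symbolic check with all parameters kept as free symbols; it does not use the constraint $(\ref{2.4})$, which is only needed for the derivative matching $p_s'(\eta_s^+)=p_s'(\eta_s^-)$.

```python
import sympy as sp

# All parameters are kept symbolic; names mirror those in (2.1)-(2.3).
y, mu, nu, ts, hs, bs, eta, rho = sp.symbols(
    'y mu nu tilde_sigma hat_sigma bar_sigma eta_s rho_s', positive=True)

# (2.2) on [eta_s, rho_s]
sig = (bs * sp.sinh(y - eta) + hs * sp.sinh(rho - y)) / sp.sinh(rho - eta)
assert sp.simplify(sp.diff(sig, y, 2) - sig) == 0      # sigma_s'' = sigma_s
assert sp.simplify(sig.subs(y, rho) - bs) == 0         # sigma_s(rho_s) = bar_sigma
assert sp.simplify(sig.subs(y, eta) - hs) == 0         # sigma_s(eta_s) = hat_sigma

# (2.3) on [eta_s, rho_s]
p_plus = mu/2 * ts * (y**2 - rho**2) + (nu - mu*ts) * (y - rho) * eta + mu * (bs - sig)
assert sp.simplify(sp.diff(p_plus, y, 2) + mu * (sig - ts)) == 0  # p_s'' = -mu(sigma_s - tilde_sigma)
assert sp.simplify(p_plus.subs(y, rho)) == 0                      # p_s(rho_s) = 0

# (2.3) on [0, eta_s], with p_0 as defined after (2.3)
p0 = mu/2 * ts * (eta**2 - rho**2) + (nu - mu*ts) * (eta - rho) * eta + mu * (bs - hs)
p_minus = nu/2 * (y**2 - eta**2) + p0
assert sp.diff(p_minus, y, 2) == nu                               # p_s'' = nu
assert sp.simplify((p_plus - p_minus).subs(y, eta)) == 0          # [[p_s]] = 0 at eta_s
```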
By $\sigma_s'(\eta_s)=0$, there holds
\begin{equation}\label{2.4}
\cosh(\rho_s-\eta_s)={\bar\sigma\over\hat\sigma}.
\end{equation}
Using this formula, we get
\begin{equation}\label{2.5}
\sigma_s'(\rho_s)={\bar\sigma\cosh(\rho_s-\eta_s)-\hat\sigma\over
\sinh(\rho_s-\eta_s)}
= \sqrt{\bar\sigma^2-\hat\sigma^2}.
\end{equation}
By $p_s'(\rho_s)=0$ we have $ \mu\tilde\sigma\rho_s+
(\nu-\mu\tilde\sigma)\eta_s-\mu\sigma_s'(\rho_s)=0$.
It implies that
\begin{equation}\label{2.6}
(\nu-\mu\tilde\sigma)\eta_s+\mu\tilde\sigma\rho_s=\mu\sqrt{\bar\sigma^2-\hat\sigma^2}.
\end{equation}
Then from (\ref{2.4}) and (\ref{2.6}), we obtain
\begin{equation}\label{2.7}
\eta_s={\mu\over\nu}\big( \sqrt{\bar\sigma^2-\hat\sigma^2} - \tilde\sigma
\ln (\bar\sigma+\sqrt{\bar\sigma^2-
\hat\sigma^2}) +\tilde\sigma\ln \hat\sigma
\big),
\end{equation}
\begin{equation}\label{2.8}
\rho_s=\eta_s+\ln (\bar\sigma+\sqrt{\bar\sigma^2-
\hat\sigma^2}) -\ln \hat\sigma.
\end{equation}
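The resulting closed forms can also be checked numerically. In the sketch below, the parameter values $\mu=\nu=1$, $\hat\sigma=1$, $\tilde\sigma=1.2$, $\bar\sigma=5$ are illustrative assumptions (chosen so that $\eta_s>0$), not values taken from the paper.

```python
import math

# Illustrative parameters (assumed), chosen so that eta_s > 0.
mu, nu = 1.0, 1.0
hat_s, tilde_s, bar_s = 1.0, 1.2, 5.0

root = math.sqrt(bar_s**2 - hat_s**2)

# (2.7)-(2.8): closed forms for eta_s and rho_s
eta_s = (mu/nu) * (root - tilde_s*math.log(bar_s + root) + tilde_s*math.log(hat_s))
rho_s = eta_s + math.log(bar_s + root) - math.log(hat_s)
assert 0 < eta_s < rho_s

# (2.4): cosh(rho_s - eta_s) = bar_sigma / hat_sigma
assert abs(math.cosh(rho_s - eta_s) - bar_s/hat_s) < 1e-12

# (2.5): sigma_s'(rho_s) = sqrt(bar_sigma^2 - hat_sigma^2)
dsig_rho = (bar_s*math.cosh(rho_s - eta_s) - hat_s) / math.sinh(rho_s - eta_s)
assert abs(dsig_rho - root) < 1e-10

# free-boundary condition sigma_s'(eta_s) = 0
dsig_eta = (bar_s - hat_s*math.cosh(rho_s - eta_s)) / math.sinh(rho_s - eta_s)
assert abs(dsig_eta) < 1e-12

# (2.6): (nu - mu*tilde_s)*eta_s + mu*tilde_s*rho_s = mu*sqrt(bar_s^2 - hat_s^2)
assert abs((nu - mu*tilde_s)*eta_s + mu*tilde_s*rho_s - mu*root) < 1e-10
```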
Clearly, $\rho_s>\eta_s$ if and only if $0<\hat\sigma<\bar\sigma$.
It remains to determine when $\eta_s>0$ under the assumption
$0<\hat\sigma<\min\{\tilde\sigma,\bar\sigma\}$.
Define a function
$$
f(a,r):=\sqrt{r^2-1}-a \ln (r+\sqrt{r^2-1})\qquad\mbox{for}\;\;
r>1,\; a>1.
$$
Note that
$$
\partial_r f(a,r) = {r-a\over \sqrt{r^2-1}},\;\qquad\; f(a,a)<0\qquad\;
\mbox{for}\;\; r>1,\; a>1.
$$
It follows that for any $a>1$, there exists a positive constant $a_*>a$
such that
$$
f(a,r)\left\{
\begin{array}{l}
<0,\qquad 1< r<a_*,
\\ [0.2 cm]
=0, \qquad r=a_*,
\\ [0.2 cm]
>0,\qquad r>a_*.
\end{array}
\right.
$$
By (\ref{2.7}) we see that $\displaystyle \eta_s = {\mu\hat\sigma\over \nu}
f( {\tilde\sigma\over \hat\sigma}, {\bar\sigma\over\hat\sigma})$.
Recall that $0<\hat\sigma<\min\{\tilde\sigma,\bar\sigma\}$. We immediately
obtain that there exists a positive constant $\sigma_*>\tilde\sigma$
depending only on $\tilde\sigma$ and $\hat\sigma$,
such that $\eta_s>0$ for $\bar\sigma>\sigma_*$ and $\eta_s\le 0$ for
$\bar\sigma\le \sigma_*$.
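The sign analysis of $f(a,\cdot)$ can likewise be illustrated numerically. The sketch below assumes the value $a=1.2$ for the ratio $\tilde\sigma/\hat\sigma$, locates the root $a_*$ by bisection, and confirms the sign pattern.

```python
import math

def f(a, r):
    # f(a, r) = sqrt(r^2 - 1) - a ln(r + sqrt(r^2 - 1)), defined for r > 1, a > 1
    return math.sqrt(r * r - 1) - a * math.log(r + math.sqrt(r * r - 1))

a = 1.2  # plays the role of tilde_sigma / hat_sigma > 1 (assumed value)

# f(a, a) < 0, and partial_r f = (r - a)/sqrt(r^2 - 1) > 0 for r > a,
# so f(a, .) has a unique zero a_* > a; locate it by bisection.
assert f(a, a) < 0
lo, hi = a, 2 * a
while f(a, hi) <= 0:
    hi *= 2
for _ in range(200):
    mid = (lo + hi) / 2
    if f(a, mid) < 0:
        lo = mid
    else:
        hi = mid
a_star = (lo + hi) / 2

# sign pattern: negative below a_*, positive above;
# hence eta_s > 0 if and only if bar_sigma > sigma_* = hat_sigma * a_*
assert a_star > a
assert f(a, 0.99 * a_star) < 0 < f(a, 1.01 * a_star)
```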
In conclusion, we have
\medskip
{\bf Theorem 2.1}\ \ {\em Assume $0<\hat\sigma<\tilde\sigma$.
There exists a positive constant $\sigma_*>\tilde\sigma$
depending only on $\tilde\sigma$ and $\hat\sigma$, such that for
$\bar\sigma>\sigma_*$, problem $(\ref{1.1})$ has a unique flat stationary solution
$(\sigma_s, p_s,\eta_s,\rho_s)$ given by $(\ref{2.2})$, $(\ref{2.3})$,
$(\ref{2.7})$ and $(\ref{2.8})$.
If $\bar\sigma\le\sigma_*$, problem $(\ref{1.1})$ has no flat stationary solution.
}
\medskip
It is interesting to compare this result with the non-necrotic case.
From Theorem 2 of [\ref{cui-esc-09}], we see that there exists a unique
non-necrotic flat stationary solution whenever $0<\tilde\sigma<\bar\sigma$.
In the necrotic case, however, the flat stationary solution does not exist
for $\tilde\sigma<\bar\sigma\le \sigma_*$.
\medskip
\hskip 1em
\section{Reduction and Well-posedness}
\setcounter{equation}{0}
\hskip 1em
In this section, we reduce the free boundary problem (\ref{1.1})
to a Cauchy problem in little H\"older spaces, and study the
local well-posedness.
First, we transform the free boundary problem (\ref{1.1})
into an equivalent problem on a fixed domain. From now on,
we always assume $0<\hat\sigma<\tilde\sigma<\sigma_*<\bar\sigma$.
By Theorem 2.1, problem (\ref{1.1}) has a unique flat stationary solution
$(\sigma_s, p_s,\eta_s,\rho_s)$. Denote
$$
\Omega_s=\{(x,y)\in\Bbb S\times\Bbb R: 0<y<\rho_s\},\qquad
\Bbb D_s=\{(x,y)\in\Bbb S\times\Bbb R: 0<y<\eta_s\},
$$
$$
\Gamma_s=\Bbb S\times\{\rho_s\},\qquad
J_s=\Bbb S\times\{\eta_s\},\qquad
\Gamma_0=\Bbb S\times\{0\},\qquad
\Bbb E_s=\Omega_s\backslash \overline{\Bbb D}_s.
$$
Let $r_0:=(\rho_s-\eta_s)/8$,
$\delta\in (0, r_0)$ and $\alpha\in(0,1)$, and set
\begin{equation}\label{3.1}
\mathcal{O}_\delta:=\{\rho\in h^{4+\alpha}(\Bbb S):
\,\|\rho\|_{h^{4+\alpha}(\Bbb S)}<\delta \}.
\end{equation}
For $\rho,\eta\in \mathcal{O}_\delta$, we denote
$$
\begin{array}{rl}
\Omega_\rho=\{(x,y)\in \Bbb S\times \Bbb R: 0<y<\rho_s+\rho(x)\},
\qquad
& \Bbb D_\eta=\{(x,y)\in \Bbb S\times \Bbb R: 0<y<\eta_s+\eta(x)\},
\\[0.2 cm]
\Gamma_\rho=\{(x,y)\in\Bbb S\times\Bbb R: y=\rho_s+\rho(x)\},
\qquad
& J_\eta=\{(x,y)\in\Bbb S\times\Bbb R: y=\eta_s+\eta(x)\},
\end{array}
$$
$$
\Omega_{\rho,\eta}=\Omega_\rho\backslash \overline {\Bbb D}_\eta
\qquad\quad\mbox{and}\qquad\quad
\Bbb E_\eta=\Omega_s\backslash \overline {\Bbb D}_\eta.
\qquad\qquad\qquad
$$
Choose a function $\varphi \in C^\infty(\Bbb R)$ such that
\begin{equation}\label{3.2}
0\le\varphi(y)\le1,
\qquad
\varphi(y)=
\left\{
\begin{array}{l}
1,\quad \mbox{for}\;|y| \le \delta,
\\ [0.2 cm]
0,\quad \mbox{for}\;|y| \ge 3\delta,
\end{array}
\right.
\qquad
\sup|\varphi'(y)|< {1/ \delta}.
\end{equation}
Given $\rho\in\mathcal{O}_\delta$, we introduce
a mapping
$$
\Phi_\rho: \Omega_s \to \Omega_\rho,
\qquad
(x,y)\to \big(x, y+\varphi(y-\rho_s)\rho(x)\big).
$$
Clearly, $\Phi_\rho(\Omega_s)=\Omega_\rho$,
$\Phi_\rho(\Gamma_s)=\Gamma_\rho$
and $\Phi_\rho$ is a $h^{4+\alpha}$ diffeomorphism
from $\Omega_s$ onto $\Omega_\rho$.
Moreover, for any $\eta\in\mathcal O_\delta$,
$\Phi_\rho$ is the identity mapping on $\Bbb D_\eta$.
Define the induced push-forward operator $\Phi^\rho_*$,
and pull-back operator $\Phi^*_\rho$ by
\begin{equation}\label{3.3}
\Phi^\rho_* u=u\circ \Phi_\rho^{-1} \qquad \mbox{for}\;\;
u\in C(\Omega_s),
\qquad\;\;
\Phi^*_\rho v=v \circ \Phi_\rho \qquad \mbox{for}\;\;
v\in C(\Omega_\rho).
\end{equation}
Next, we introduce the following transformed operators:
\begin{equation}\label{3.4}
\;\;\; \mathcal A(\rho)u:=\Phi_{\rho}^*\Delta(\Phi_*^{\rho}u),
\qquad\;
\mathcal B(\rho)u:=\langle\nabla(\Phi_*^\rho u)|_{\Gamma_\rho},
{\bf n}_\rho\rangle
\qquad\;\mbox{for}\;\; u\in H^2(\Omega_s),
\end{equation}
where ${\bf n}_\rho=(-\rho_x,1)$ is the outward normal on
$\Gamma_\rho$, $\langle\cdot,\cdot \rangle$ denotes the
Euclidean inner product, and
$\displaystyle H^2(\Omega_s)$ stands for Sobolev space.
By Lemma 2.2 of [\ref{esc-sim-97-1}],
we have
\begin{equation}\label{3.5}
\left\{
\begin{array}{l}
\mathcal A\in C^\infty\big(\mathcal O_\delta,L(h^{k+2+\alpha}(\Omega_s),
h^{k+\alpha}(\Omega_s))\big),\;\;\;\; 0\le k\le 2,
\\ [0.2 cm]
\mathcal B \in C^\infty\big(\mathcal O_\delta,L(h^{k+1+\alpha}(\Omega_s),
h^{k+\alpha}(\Bbb S))\big),\quad \;\;\; 0\le k\le 3.
\end{array}
\right.
\end{equation}
Denote by $\mathcal K(\rho)$ the transformed mean curvature
on $\Gamma_{\rho}$, given by
\begin{equation}\label{3.6}
\displaystyle\mathcal K(\rho)=-(1+\rho_x^2)^{-{3\over2}}\rho_{xx}.
\end{equation}
For some $T>0$, and a function
$\rho\in C([0,T), \mathcal O_\delta)\cap C^1([0,T),
h^{1+\alpha}(\mathbb{S}))$, we identify $\rho(x,t)=\rho(t)(x)$
for $t\in [0,T)$ and $x\in\mathbb{S}$.
By an elementary analysis, the outward normal velocity $V$
of the tumor surface $\Gamma_{\rho(t)}$ is given by
$$
V={\rho_t/ \sqrt{1+\rho_x^2}}.
$$
Let $\chi_{\Bbb D_\eta}$ and $\chi_{\Bbb E_\eta}$ be the
characteristic functions of $\Bbb D_\eta$ and $\Bbb E_\eta$,
respectively.
Rewrite
$$
\Gamma^+_0={\rm graph}(\rho_s+\rho_0)\qquad \mbox{for\; some }\;\;
\rho_0\in\mathcal O_\delta,
$$
and
$$
u(x,y,t)=\Phi^*_\rho\sigma (x,y, t), \quad
v(x,y,t)=\Phi^*_\rho\, p(x,y,t).
$$
One can easily check that the free boundary problem (\ref{1.1}) is
transformed into the following problem:
\begin{equation}\label{3.7}
\left\{
\begin{array}{rll}
\mathcal A(\rho)u\,&=u \chi_{\Bbb E_\eta} \qquad & \hbox{in}
\;\;\Omega_s,\; t>0,
\\ [0.2 cm]
\mathcal A(\rho)v\,&=-\mu(u-\tilde{\sigma}) \chi_{\Bbb E_\eta}
+\nu \chi_{\Bbb D_\eta}
\quad\;\; \qquad &\hbox{in}\;\;
\Omega_s, \; t>0,
\\ [0.2 cm]
u\,&=\bar{\sigma}, \quad v=\gamma\mathcal K(\rho)
\qquad & \hbox{on}\;\; \Gamma_s, \; t>0,
\\ [0.2 cm]
u\,&=\hat\sigma,\quad [\![\partial_n u]\!]=0 & \hbox{on}\;\; J_{\eta}, \; t>0,
\\ [0.2 cm]
[\![v]\!]\,&=0,\quad \,[\![\partial_n v]\!]=0\quad
& \hbox{on}\;\; J_{\eta}, \; t>0,
\\ [0.2 cm]
\partial_y u\,&=0, \quad \,\partial_y v=0 & \hbox{on}\;\; \Gamma_0, \; t>0,
\\ [0.2 cm]
\partial_t \rho\,&=-\mathcal B(\rho) v \qquad \qquad \qquad & \hbox{on}\;\; \Bbb S,
\;\;\, t>0,
\\ [0.2 cm]
\rho(0)\,&=\rho_0 & \hbox{on}\;\; \Bbb S,\;\;\, t=0.
\end{array}
\right.
\end{equation}
By the above transformation, we have
\medskip
{\bf Lemma 3.1} \ \ {\em A quadruple $(u, v, \eta, \rho)$
is a solution of problem $(\ref{3.7})$ if and only if
the quadruple $(\sigma, p,\eta_s+\eta,\rho_s+\rho)$
is a solution of problem $(\ref{1.1})$ in the neighborhood
of $(\sigma_s, p_s,\eta_s,\rho_s)$,
with $\sigma=\Phi_*^{\rho}u$ and
$p=\Phi_*^{\rho} v$.
}
\medskip
Next, we further reduce problem (\ref{3.7}) to a Cauchy problem
in little H\"older spaces for $\rho$ only.
Given $\rho\in \mathcal O_\delta$, we consider the following
problem:
\begin{equation}\label{3.8}
\left\{
\begin{array}{l}
\mathcal A(\rho)u=u \chi_{\Bbb E_\eta} \qquad \hbox{in}\;\;
\Omega_s,
\\ [0.2 cm]
u\big|_{\Gamma_s}=\bar{\sigma},
\qquad \partial_y u\big|_{\Gamma_0}=0,
\\ [0.2 cm]
u=\hat\sigma,\quad [\![\partial_n u]\!]=0 \quad \hbox{on}\;\; J_{\eta}.
\end{array}
\right.
\end{equation}
For any $\eta\in\mathcal O_\delta$, by the maximum principle,
$u\equiv \hat\sigma$ in $\Bbb D_\eta$,
and $u>\hat\sigma$ in $\Bbb E_\eta$. On the other hand, since
$u=\hat\sigma$ on $J_\eta$, we have
$$
\partial_n u={u_x\eta_x-u_y\over\sqrt {1+\eta_x^2}}
=-u_y\sqrt{1+\eta_x^2} \qquad\quad \mbox{on}\;\;J_\eta.
$$
This implies that $[\![\partial_n u]\!]=0$ is equivalent to
$\partial_y u = 0$ on $J_\eta$.
Hence for problem (\ref{3.8}), we only need to solve
\begin{equation}\label{3.9}
\left\{
\begin{array}{rlr}
\mathcal A(\rho)u\,&=u \qquad\;\; & \hbox{in}\;\;
\Bbb E_\eta,
\\ [0.2 cm]
u \, &=\bar{\sigma}\qquad & \hbox{on}\;\; \Gamma_s,
\\ [0.2 cm]
\partial_y u\,& =0\qquad & \hbox{on}\;\; J_{\eta},
\\ [0.2 cm]
u\, &=\hat\sigma \qquad & \hbox{on}\;\; J_{\eta}.
\end{array}
\right.
\end{equation}
Recently, Cui [\ref{cui-16}] studied a similar obstacle
problem based on the Nash-Moser implicit
function theorem.
Motivated by this method, and with some
modifications to the proof of Theorem 5.2 of
[\ref{cui-16}],
we have the following result:
\medskip
{\bf Lemma 3.2} \ \ {\em There exists a constant $\delta_1\in(0,r_0)$,
such that
for any $\rho\in\mathcal O_{\delta_1}$, problem $(\ref{3.9})$ has a unique solution
$(u,\eta)$ satisfying $u\in h^{4+\alpha}(\Bbb E_\eta)$ and
$\eta\in C^\infty(\Bbb S)$. Moreover, the mapping $\rho\mapsto (u,\eta)$
from $\mathcal O_{\delta_1}$ to $h^{4+\alpha}(\Bbb E_\eta)\times C^\infty(\Bbb S)$
is smooth.}
\medskip
{\bf Proof.} \ \ Denote
$$
\Bbb H:=\{(x,y)\in\Bbb S\times\Bbb R:
{\rho_s+\eta_s\over 2}<y<\rho_s\},
\qquad
\Bbb K:= \{(x,y)\in\Bbb S\times\Bbb R:
\eta_s<y<{\rho_s+\eta_s\over 2}\}.
$$
Let $r_1:=\min\{(\rho_s-\eta_s)/8,\eta_s/8\}\le r_0$ and $\delta\in(0,r_1)$.
For any $m\in\Bbb N$, $m\ge4$ and $\alpha\in(0,1)$, denote
$$
\widetilde{\mathcal O}_\delta^{m+\alpha}:=\{\eta\in h^{m+\alpha}(\Bbb S):
\|\eta\|_{h^{4+\alpha}(\Bbb S)}<\delta\}.
$$
For
any $\eta\in\widetilde{\mathcal O}_\delta^{m+\alpha}$, we introduce
a mapping
$$
\widetilde\Phi_\eta: \Bbb R^2 \to \Bbb R^2,
\qquad
(x,y)\to \big(x, y+\varphi(y-\eta_s)\eta(x)\big),
$$
where $\varphi$ is a smooth function given by (\ref{3.2}).
Then
$\widetilde\Phi_\eta$ is an $h^{m+\alpha}$ diffeomorphism
from $\Bbb E_s$ onto $\Bbb E_\eta$, and
$\widetilde\Phi_\eta$ is the identity mapping on $\Bbb H$.
Similarly as (\ref{3.3}), we can define the
push-forward operator $\widetilde\Phi^\eta_*$,
and pull-back operator $\widetilde\Phi^*_\eta$
induced by $\widetilde\Phi_\eta$, and for any
$\rho\in\mathcal O_\delta$ and
$\eta\in\widetilde{\mathcal O}_\delta^{m+\alpha}$,
we define an operator
$$
\mathscr A(\rho,\eta)v:=
\widetilde\Phi_{\eta}^*\mathcal A(\rho)
(\widetilde\Phi_*^{\eta}v)
\qquad\mbox{for}\;\; v\in BUC^2(\Bbb E_s).
$$
Notice that $\mathcal A(\rho)\equiv\Delta$ in $\Bbb K$
for any $\rho\in\mathcal O_\delta$; hence for any
$\eta\in\widetilde{\mathcal O}_\delta^{m+\alpha}$,
$\mathscr A(\rho,\eta)$ is independent of $\eta$ on $\Bbb H$,
and independent of $\rho$ on $\Bbb K$. Moreover,
$\mathscr A(\rho,\eta)$ is uniformly elliptic, and
by Lemma 2.2 in [\ref{esc-sim-97-1}] we have
$$
\mathscr A\in C^\infty\big(\mathcal O_\delta\times
\widetilde{\mathcal O}_\delta
^{m+\alpha},L(h^{4+\alpha}(\Bbb E_s)\cap h^{m+\alpha}(\Bbb K),
h^{2+\alpha}(\Bbb E_s)\cap h^{m-2+\alpha}(\Bbb K))\big).
$$
Set $\tilde u =u\circ \widetilde\Phi_\eta$. The first three
equations of $(\ref{3.9})$ are equivalent to
\begin{equation}\label{3.10}
\mathscr A(\rho,\eta)\tilde u=\tilde u \quad \hbox{in}\;\;
\Bbb E_s,
\qquad\;
\tilde u =\bar{\sigma}\quad \hbox{on}\;\; \Gamma_s,
\qquad\;
\partial_y \tilde u =0 \quad \hbox{on}\;\; J_{s}.
\end{equation}
By well-known regularity theory of second-order elliptic differential equations,
problem $(\ref{3.10})$ has a unique solution $\tilde u:=\widetilde{\mathcal U}(\rho,\eta)
\in h^{4+\alpha}(\Bbb E_s)$, and
by Lemma 2.3 in [\ref{esc-sim-97-1}], for $m\ge 4$,
\begin{equation}\label{3.11}
\widetilde{\mathcal U}\in C^\infty(\mathcal O_\delta\times \widetilde{\mathcal O}_\delta^{m+\alpha},
h^{4+\alpha}(\Bbb E_s)).
\end{equation}
Next, we establish some deeper properties of
$\;\widetilde{\mathcal U}$.
Recall from Part II.1 of Hamilton [\ref{ham-82}] that the Banach space
$h^{4+\alpha}(\Bbb S)$ can be regarded as a tame Fr\'echet space,
and that $C^\infty(\Bbb S)$ with the collection of seminorms $\{\| \cdot \|_{h^{m+\alpha}(\Bbb S)},
m=0,1,2,\cdots\}$ and $BUC^\infty(\Bbb K)$ with the collection of seminorms
$\{\| \cdot \|_{h^{m+\alpha}(\Bbb K)}, m=0,1,2,\cdots\}$ are both
tame Fr\'echet spaces. Denote
$$
\widetilde{\mathcal O}_\delta^\infty =\{\eta\in C^\infty(\Bbb S):
\|\eta\|_{h^{4+\alpha}(\Bbb S)}<\delta\}.
$$
By Theorem 3.3.5
in Part II of [\ref{ham-82}], we have
\begin{equation}\label{3.12}
\widetilde{\mathcal U} \; \mbox{ is a smooth tame mapping from }
\; \mathcal O_\delta\times
\widetilde{\mathcal O}_\delta^\infty \; \mbox{ to } \;
h^{4+\alpha}(\Bbb E_s)\cap BUC^\infty (\Bbb K),
\end{equation}
which means that for all $m\in \Bbb N$, $m\ge 4$,
\begin{equation}\label{3.13}
\widetilde{\mathcal U}\in C^\infty(\mathcal O_\delta\times
\widetilde{\mathcal O}_\delta^{m+\alpha},
h^{4+\alpha}(\Bbb E_s)\cap h^{m+\alpha}(\Bbb K)),
\end{equation}
and
\begin{equation}\label{3.14}
\| \widetilde{\mathcal U}(\rho,\eta)\|_{h^{4+\alpha}(\Bbb E_s)}+
\| \widetilde{\mathcal U}(\rho,\eta)\|_{h^{m+\alpha}(\Bbb K)} \le C_m (1+
\|\rho\|_{h^{4+\alpha}(\Bbb S)}+\|\eta\|_{h^{m+\alpha}(\Bbb S)}),
\end{equation}
where $C_m$ is a positive constant depending on $m$.
We define a mapping $F:\mathcal O_\delta\times\widetilde{\mathcal O}_\delta^{m+\alpha}
\to h^{m+\alpha}(\Bbb S)$ by
$$
F(\rho,\eta)= \widetilde{\mathcal U}(\rho,\eta) \Big|_{J_s}-\hat\sigma.
$$
It is easy to see that $F\in C^\infty(\mathcal O_\delta\times
\widetilde{\mathcal O}_\delta^{m+\alpha}, h^{m+\alpha}(\Bbb S))$.
Moreover, by (\ref{3.12}) we have
\begin{equation}\label{3.15}
F \; \mbox{ is a smooth tame mapping from }
\; \mathcal O_\delta\times
\widetilde{\mathcal O}_\delta^\infty \; \mbox{ to } \;
C^\infty (\Bbb S).
\end{equation}
Clearly, $F(0,0)=0$ and problem (\ref{3.9}) is equivalent to the equation
$F(\rho,\eta)=0$.
Next we compute the Fr\'echet derivative of $F$ with respect to $\eta$
at $(\rho,\eta)\in \mathcal O_\delta\times \widetilde{\mathcal O}_\delta^\infty$,
which is denoted by $D_\eta F(\rho,\eta)$.
Let $\mathcal U(\rho,\eta)$ be the solution of the first three equations of problem (\ref{3.9}).
For any $\zeta\in C^\infty(\Bbb S)$, we easily verify that
$$
D_\eta F(\rho,\eta)\zeta= \mathcal Z(\rho,\eta, \zeta)\big|_{J_\eta},
$$
where
$z=\mathcal Z(\rho,\eta,\zeta)$
is the solution of the following problem
\begin{equation}\label{3.16}
\mathcal A(\rho) z=z \quad \mbox{in}\;\;
\Bbb E_\eta,
\qquad
z=0 \quad \mbox{on}\;\; \Gamma_s,
\qquad
\partial_ y z=-\partial_{yy}\, \mathcal U(\rho,\eta) \zeta
\quad \mbox{on}\;\; J_\eta.
\end{equation}
Since $\mathcal U(0,0)=\sigma_s\big|_{\Bbb E_s}$, we have
$\partial_{yy}\,\mathcal U(0,0)\big|_{J_s}=\sigma_s''(\eta_s^+)=\hat\sigma>0$.
Thus for sufficiently small $\delta>0$, we have
$\partial_{yy}\,\mathcal U(\rho,\eta)\big|_{J_\eta}>\hat\sigma/2$
for $(\rho,\eta)\in \mathcal O_\delta\times \widetilde{\mathcal O}_\delta^\infty$.
By (\ref{3.16}), for any $\xi\in C^\infty(\Bbb S)$ we have
$$
[D_\eta F(\rho,\eta)]^{-1}\xi=-{\partial_y {\mathcal T}(\rho,\eta,\xi) \over
\partial_{yy}\, \mathcal U(\rho,\eta) }\Big|_{J_\eta},
$$
where $z=\mathcal T(\rho,\eta,\xi)$ is the solution of the problem
$$
\mathcal A(\rho) z=z \quad \mbox{in}\;\;
\Bbb E_\eta,
\qquad\;
z =0 \quad \mbox{on}\;\; \Gamma_s,
\qquad\;
z=\xi
\quad \mbox{on}\;\; J_\eta.
$$
Notice that $D_\eta F(0,0)$ is an isomorphism from $h^{m+\alpha}(\Bbb S)$
onto $h^{m+1+\alpha}(\Bbb S)$ for all $m\in\Bbb N$, so the classical
implicit function theorem in Banach spaces is not applicable here.
On the other hand,
similarly to (\ref{3.15}), we can show that the mapping
$$
(\rho,\eta,\xi)\mapsto [D_\eta F(\rho,\eta)]^{-1}\xi \; \mbox{ is a smooth tame mapping from }
\; \mathcal O_\delta\times
\widetilde{\mathcal O}_\delta^\infty\times C^\infty(\Bbb S) \; \mbox{ to } \;
C^\infty (\Bbb S).
$$
Thus by the Nash-Moser implicit function theorem (see Theorem 3.3.1 in Part III of
[\ref{ham-82}]), there exist sufficiently small
$\delta_1, \delta'_1\in(0,r_0)$,
and a unique smooth tame mapping $\mathcal S$ from $\mathcal O_{\delta_1}$
to $\widetilde{\mathcal O}_{\delta'_1}^\infty$
such that
$$
\mathcal S(0)=0\qquad
\mbox{and} \qquad
F(\rho,\mathcal S(\rho))=0.
$$
By letting $u=\mathcal U(\rho,\mathcal S(\rho))$ and $\eta=\mathcal S(\rho)$,
we see that $(u,\eta)$ is the solution of problem (\ref{3.9}), and the mapping
$\rho\mapsto(u,\eta)$ is smooth.
The proof is complete. $\qquad\Box$
\medskip
\medskip
By the proof of Lemma 3.2, for any $\rho\in \mathcal O_{\delta_1}$,
problem (\ref{3.8}) has a unique solution
\begin{equation}\label{3.17}
u=\left\{
\begin{array}{ll}
\mathcal U(\rho,\mathcal S(\rho))\quad\;& \mbox{in}
\;\; \Bbb E_{\mathcal S(\rho)},
\\ [0.2 cm]
\hat\sigma\qquad& \mbox{in}\;\;
\Bbb D_{\mathcal S(\rho)},
\end{array}
\right.
\qquad\quad\mbox{and}\qquad\quad
\eta=\mathcal S(\rho).
\end{equation}
Next we consider the following problem
\begin{equation}\label{3.18}
\left\{
\begin{array}{rll}
\mathcal A(\rho)v\,&=-\mu(u-\tilde{\sigma})\chi_{\Bbb E_\eta}
+\nu \chi_{\Bbb D_\eta}
\quad \qquad \; &\hbox{in}\;\;
\Omega_s,
\\ [0.2 cm]
v\,&=\gamma\mathcal K(\rho)
\qquad & \hbox{on}\;\; \Gamma_s,
\\ [0.2 cm]
[\![v]\!]\,&=0,\quad \,[\![\partial_n v]\!]=0\quad
& \hbox{on}\;\; J_{\eta},
\\ [0.2 cm]
\partial_y v\,&=0 & \hbox{on}\;\; \Gamma_0,
\end{array}
\right.
\end{equation}
where $u$ and $\eta$ are given by (\ref{3.17}).
For the sake of simplicity,
we first study
\begin{equation}\label{3.19}
\quad\;\,
\left\{
\begin{array}{rll}
\mathcal A(\rho)w^+\,&=-\mu(\mathcal U(\rho,\mathcal S(\rho))-\tilde\sigma)
\qquad\quad\;\; & \mbox{in} \;\; \Bbb E_{\mathcal S(\rho)},
\\ [0.2 cm]
\mathcal A(\rho) w^-\,& = \nu \qquad &\mbox{in}\;\;\Bbb D_{\mathcal S(\rho)},
\\ [0.2 cm]
w^+\,&=0
\qquad & \hbox{on}\;\; \Gamma_s,
\\ [0.2 cm]
w^+\,&=w^-
& \hbox{on}\;\; J_{\mathcal S(\rho)},
\\ [0.2 cm]
\partial_n w^+\,&=\partial_n w^-\quad
& \hbox{on}\;\; J_{\mathcal S(\rho)},
\\ [0.2 cm]
\partial_y w^-\,&=0 & \hbox{on}\;\; \Gamma_0.
\end{array}
\right.
\end{equation}
\medskip
{\bf Lemma 3.4} \ \ {\em There exists a positive constant $\delta_2\in (0,\delta_1)$
such that for any $\rho\in \mathcal O_{\delta_2}$, problem $(\ref{3.19})$ has
a unique solution $(w^+,w^-)\in h^{4+\alpha}(\Bbb E_{\mathcal S(\rho)})\times
h^{4+\alpha}(\Bbb D_{\mathcal S(\rho)})$, and the mapping
$\rho\mapsto (w^+,w^-)$ is smooth in $\mathcal O_{\delta_2}$.
}
\medskip
{\bf Proof.} \ \ For given $\rho\in\mathcal O_{\delta_1}$ and $\zeta\in h^{4+\alpha}(\Bbb S)$,
we consider
\begin{equation}\label{3.20}
\left\{
\begin{array}{rll}
\mathcal A(\rho)w^+\,& =-\mu(\mathcal U(\rho,\mathcal S(\rho))-\tilde\sigma)
\quad & \mbox{in}\;\; \Bbb E_{\mathcal S(\rho)},
\\ [0.2 cm]
w^+\,& =\zeta \qquad
& \mbox{on}\;\; J_{\mathcal S(\rho)},
\\ [0.2 cm]
w^+\,& =0 \qquad& \mbox{on}\;\; \Gamma_s,
\end{array}
\right.
\qquad\quad
\left\{
\begin{array}{rll}
\mathcal A(\rho)w^-\,& =\nu
\quad & \mbox{in}\;\; \Bbb D_{\mathcal S(\rho)},
\\ [0.2 cm]
w^-\,& =\zeta \quad& \mbox{on}\;\; J_{\mathcal S(\rho)},
\\ [0.2 cm]
\partial_y w^-\,& =0 \quad
& \mbox{on}\;\; \Gamma_0.
\end{array}
\right.
\end{equation}
From Lemma 3.2, we see $\mathcal S(\rho)\in C^{\infty}(\Bbb S)$
and $\mathcal U(\rho,\mathcal S(\rho))\in h^{4+\alpha}(\Bbb E_{\mathcal S(\rho)})$.
By classical regularity theory of elliptic differential equations,
problem (\ref{3.20}) has a unique
solution $(w^+,w^-)$ such that
\begin{equation}\label{3.21}
w^+:=\mathcal W^+(\rho,\zeta)
\in h^{4+\alpha}(\Bbb E_{\mathcal S(\rho)})\qquad
\mbox{ and }\qquad
w^-:=\mathcal W^-(\rho,\zeta)\in h^{4+\alpha}(\Bbb D_{\mathcal S(\rho)}).
\end{equation}
Since the mappings $\mathcal S$ and $\mathcal U$ are both
smooth in $\mathcal O_{\delta_1}$, the mappings $\mathcal W^+$
and $\mathcal W^-$
are also smooth in $\mathcal O_{\delta_1}\times h^{4+\alpha}(\Bbb S)$.
Recall that $\partial_n$ is the outward normal derivative on $J_{\mathcal S(\rho)}$
with respect to $\Bbb E_{\mathcal S(\rho)}$.
Define a mapping
$G: \mathcal O_{\delta_1}\times h^{4+\alpha}(\Bbb S)\to h^{3+\alpha}(\Bbb S)$ by
\begin{equation}\label{3.22}
G(\rho,\zeta)=
\partial_n \mathcal W^+(\rho,\zeta)\Big |_{J_{\mathcal S(\rho)}}
-\partial_n \mathcal W^-(\rho,\zeta)\Big |_{J_{\mathcal S(\rho)}}\qquad
\mbox{for}\;\;\rho\in\mathcal O_{\delta_1},\;\;
\zeta\in h^{4+\alpha}(\Bbb S).
\end{equation}
It is easy to see that problem (\ref{3.19}) is equivalent to the equation
$G(\rho,\zeta)=0$.
Since $\mathcal W^+$ and $\mathcal W^-$ are smooth,
we have
\begin{equation}\label{3.23}
G\in C^\infty\big(\mathcal O_{\delta_1}\times h^{4+\alpha}(\Bbb S),
h^{3+\alpha}(\Bbb S)\big).
\end{equation}
By (\ref{2.1})--(\ref{2.3}), we see $G(0, p_0)=0$, where $p_0=p_s(\eta_s)$.
Note that
$$
\mathcal S(0)=0, \quad
\mathcal U(0,\mathcal S(0))=\sigma_s,
\quad
\mathcal W^+(0,p_0)=p_s\big|_{\Bbb E_s},
\quad
\mathcal W^-(0,p_0)=p_s\big|_{\Bbb D_s}.
$$
By a direct computation,
we have
$$
D_\zeta G(0,p_0)\xi =- \partial_y z^+\big|_{J_s}+\partial_y z^-\big|_{J_s}
\qquad
\mbox{for}\quad \xi\in h^{4+\alpha}(\Bbb S),
$$
where $z^+$ and $z^-$ are the solutions to the following two problems, respectively,
\begin{equation}\label{3.24}
\begin{array}{l}
\Delta z^+ =0
\quad \mbox{in}\;\; \Bbb E_s,
\qquad
z^+ =\xi \quad \mbox{on}\;\; J_s,
\qquad
z^+ =0 \quad \mbox{on}\;\; \Gamma_s,
\\ [0.3 cm]
\Delta z^- =0 \quad \mbox{in}\;\; \Bbb D_s,
\qquad
z^-=\xi \quad \mbox{on}\;\; J_s,
\qquad
\partial_y z^- =0 \quad\mbox{on}\;\; \Gamma_0.
\end{array}
\end{equation}
For any $\xi\in C^\infty(\Bbb S)$ with the expression
$\displaystyle \xi(x)=\sum_{k\in \Bbb Z} \xi_k e^{ikx}$,
we obtain
\begin{equation}\label{3.25}
D_\zeta G(0,p_0)\xi = \sum_{k\in \Bbb Z} \tau_k \xi_k e^{ikx},
\end{equation}
where $\tau_0=(\rho_s-\eta_s)^{-1}$ and
$\tau_k=k(\coth k(\rho_s-\eta_s)+\tanh k\eta_s )$ for
$k\neq 0$, $k\in\Bbb Z$.
Obviously, there exist two positive constants $C_1$ and $C_2$ such that
$$
C_1\sqrt{k^2+1}\le \tau_k\le C_2 \sqrt{k^2+1}.
$$
This implies that
\begin{equation}\label{3.26}
D_\zeta G(0,p_0) \mbox{ is an isomorphism from }
H^{r+1}(\Bbb S) \mbox{ onto } H^r(\Bbb S)
\mbox{ for }r>0,
\end{equation}
where
$\displaystyle H^r(\Bbb S)=\{f\in L^2(\Bbb S):
\sum_{k\in\Bbb Z} (k^2+1)^r |\widehat{f}(k)|^2< +\infty\}$.
From (\ref{3.25}), we easily obtain that for $\xi\in C^\infty(\Bbb S)$
with $\displaystyle \xi(x)=\sum_{k\in \Bbb Z} \xi_k e^{ikx}$,
$$
[D_\zeta G(0,p_0)]^{-1} \xi = \sum_{k\in \Bbb Z} \tau_k^{-1} \xi_k e^{ikx}.
$$
Define a function $\tau(x)=x(\coth (\rho_s-\eta_s)x+\tanh \eta_sx )$
for $|x|\ge1$. It is easy to verify that
$$
\tau_k=\tau(k)\quad \mbox {for }\; k\neq 0\quad \mbox{and}\quad
\sup_{|x|\ge1} |\tau'(x)|+|x||\tau''(x)|<+ \infty.
$$
Using the above relations, one can prove that
$$
\left\{
\begin{array}{l}
\displaystyle \sup_{k\in \Bbb Z} | k |\big|{1\over \tau_k}\big| < +\infty,
\\ [0.4 cm]
\displaystyle \sup_{k\in \Bbb Z} | k |^2\big|{1\over \tau_{k+1}}
-{1\over \tau_k} \big| < +\infty,
\\ [0.4 cm]
\displaystyle \sup_{k\in \Bbb Z} | k |^3\big|
{1\over \tau_{k+2}}-{2\over \tau_{k+1}}+
{1\over \tau_k}\big| < +\infty.
\end{array}
\right.
$$
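The three suprema above can likewise be checked numerically over a finite range of $k$. The sketch below is a sanity check, not a proof of the uniform bounds, and again uses the assumed values $\rho_s-\eta_s=0.7$, $\eta_s=0.5$:

```python
import numpy as np

a, b = 0.7, 0.5   # assumed illustrative values of rho_s - eta_s and eta_s

def tau(k):
    return 1.0 / a if k == 0 else k * (1.0 / np.tanh(k * a) + np.tanh(k * b))

def inv(k):
    return 1.0 / tau(k)

ks = range(-500, 501)
# finite-range versions of the three suprema
s1 = max(abs(k) * abs(inv(k)) for k in ks)
s2 = max(abs(k)**2 * abs(inv(k + 1) - inv(k)) for k in ks)
s3 = max(abs(k)**3 * abs(inv(k + 2) - 2 * inv(k + 1) + inv(k)) for k in ks)
```

All three quantities stay bounded over the sampled range, consistent with the claimed multiplier conditions.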
Then by Theorem 4.5 of [\ref{are-bu-04}] (or [\ref{sch-tri-87}]),
we have
\begin{equation}\label{3.27}
[D_\zeta G(0,p_0)]^{-1}\in L(C^r(\Bbb S), C^{r+1}(\Bbb S))\qquad
\mbox{for }\;\; r>0.
\end{equation}
By the Sobolev embedding theorem, $ H^{4+r}(\mathbb{S})\,
{\hookrightarrow}\, C^{4+\alpha}(\mathbb{S})$ for
$r>{3/2}$.
Notice that $h^{4+\alpha}(\mathbb{S})$ is the closure of
$H^{4+r}(\mathbb{S})$ in $C^{4+\alpha} (\mathbb{S})$ for
$r>{3/2}$.
By (\ref{3.26}) and (\ref{3.27}), we obtain that
$[D_\zeta G(0,p_0)]^{-1}\in L(h^{3+\alpha}(\Bbb S), h^{4+\alpha}(\Bbb S))$
and
$$
D_\zeta G(0,p_0) \mbox{ is an isomorphism from }
h^{4+\alpha}(\Bbb S) \mbox{ onto } h^{3+\alpha}(\Bbb S).
$$
Hence, by the classical implicit function theorem
in Banach spaces, there exist sufficiently small
constants $\delta_2, \delta_2'\in(0,\delta_1)$,
and a unique mapping $\mathcal R\in C^\infty(
\mathcal O_{\delta_2}, h^{4+\alpha}(\Bbb S))$
such that
$$
\mathcal R(0)=p_0,\qquad
\|\mathcal R(\rho)-p_0\|_{h^{4+\alpha}(\Bbb S)}\le \delta'_2
\qquad \mbox{and} \qquad
G(\rho,\mathcal R(\rho))=0.
$$
By letting $(w^+,w^-)=(\mathcal W^+(\rho,\mathcal R(\rho)),
\mathcal W^-(\rho,\mathcal R(\rho)))$, we see that $(w^+,w^-)$
is the solution of problem
(\ref{3.19}), and the desired result follows immediately.
$\qquad \Box$
\medskip
\medskip
\medskip
In view of the proof of Lemma 3.4, we define
\begin{equation}\label{3.28}
\mathcal W(\rho)=\left\{
\begin{array}{l}
\mathcal W^+(\rho,\mathcal R(\rho))\qquad\mbox{in}\;\; \Bbb E_{\mathcal S(\rho)},
\\ [0.2 cm]
\mathcal W^-(\rho,\mathcal R(\rho))\qquad\mbox{in}\;\; \Bbb D_{\mathcal S(\rho)},
\end{array}
\right.
\qquad\;\;\mbox{for}\;\;\;\; \rho\in\mathcal O_{\delta_2}.
\end{equation}
Consider the problem
\begin{equation}\label{3.29}
\mathcal A(\rho) v_0=0
\quad \hbox{in}\;\;
\Omega_s,
\qquad
v_0 =\gamma\mathcal K(\rho)
\quad \hbox{on}\;\; \Gamma_s,
\qquad
\partial_y v_0 =0\quad \hbox{on}\;\; \Gamma_0.
\end{equation}
Note that by (\ref{3.3}), we have
\begin{equation}\label{3.30}
\mathcal K \in C^\infty(\mathcal O_{\delta_2},h^{2+\alpha}(\Bbb S)).
\end{equation}
By the classical regularity theory of elliptic differential equations,
problem (\ref{3.29}) has a unique solution
$v_0:= \mathcal V(\rho)\in h^{2+\alpha}(\Omega_s)$. Moreover,
by (\ref{3.5}), (\ref{3.30}) and Lemma 2.3 in [\ref{esc-sim-97-1}],
\begin{equation}\label{3.31}
\mathcal V \in C^\infty (\mathcal O_{\delta_2}, h^{2+\alpha}(\Omega_s)).
\end{equation}
From (\ref{3.28}), (\ref{3.29}) and Lemma 3.4, for any $\rho\in\mathcal O_{\delta_2}$,
we see that problem (\ref{3.18})
has a unique solution
\begin{equation}\label{3.32}
v= \mathcal V(\rho)+ \mathcal W(\rho).
\end{equation}
Later on, we always fix $0<\delta\le\delta_2$.
Define a mapping $\Psi: \mathcal O_\delta\to h^{1+\alpha}(\Bbb S)$ by
\begin{equation}\label{3.33}
\Psi(\rho):=\mathcal B(\rho)\mathcal V(\rho) +\mathcal B(\rho)\mathcal W(\rho)
\qquad\mbox{for}\;\; \rho\in\mathcal O_\delta.
\end{equation}
It follows from $(\ref{3.5})$, (\ref{3.31}) and Lemma 3.4 that
\begin{equation}\label{3.34}
\Psi \in C^\infty(\mathcal O_\delta, h^{1+\alpha}(\Bbb S)).
\end{equation}
With all the above reductions, we see that problem (\ref{3.7}) is equivalent to
the following Cauchy problem
\begin{equation}\label{3.35}
\left\{
\begin{array}{ll}
\partial_t \rho + \Psi(\rho)=0
\qquad\;\; & \mbox{on}\;\; \Bbb S, \;\; t>0,
\\ [0.2 cm]
\rho(0)=\rho_0 \qquad & \mbox{on}\;\; \Bbb S.
\end{array}
\right.
\end{equation}
More precisely, we have
\medskip
{\bf Lemma 3.5} \ \ {\em The function $\rho$ is the
solution of problem $(\ref{3.35})$ if and only if
$(u, v, \eta, \rho)$ is the solution of problem $(\ref{3.7})$
with $(u, v,\eta)$ given by $(\ref{3.17})$ and $(\ref{3.32})$.
}
\medskip
Next we study the local well-posedness of problem (\ref{3.35}).
For any $\rho\in\mathcal O_\delta$, we define the Fr\'echet
derivative of the nonlinear operator $\Psi$ at $\rho$ by
$$
\displaystyle D \Psi(\rho)\zeta:=\lim_{\varepsilon\to0}
{\Psi(\rho+\varepsilon\zeta)-\Psi(\rho)\over\varepsilon}
\qquad\quad \mbox{for}\;\; \zeta\in h^{4+\alpha}(\Bbb S).
$$
Let $E_0$ and $E_1$ be two Banach spaces such that $E_1$
is densely and continuously embedded into $E_0$. Denote by
$\mathcal H(E_1,E_0)$ the subspace of all linear operators
$A\in L(E_1,E_0)$ such that $-A$ generates a strongly continuous
analytic semigroup on $E_0$. We have the following result:
\medskip
{\bf Lemma 3.6} \ \ {\em
$D \Psi (\rho)\in \mathcal H(h^{4+\alpha}(\Bbb S),h^{1+\alpha}(\Bbb S))$
for $\rho\in\mathcal O_\delta$. }
\medskip
{\bf Proof.} \ \ Let $\Psi_1(\rho):=\mathcal B(\rho)\mathcal V(\rho)$
and $\Psi_2(\rho):=\mathcal B(\rho)\mathcal W(\rho)$; then
$$
\Psi(\rho)=\Psi_1(\rho)+\Psi_2(\rho)\qquad \mbox{for}\;\; \rho\in\mathcal O_\delta.
$$
Notice that the following problem is the corresponding transformed
periodic Hele-Shaw model with surface tension:
$$
\left\{
\begin{array}{rll}
\mathcal A(\rho) v_0\,&=0
\quad \qquad &\hbox{ in }\;
\Omega_s, \;t>0,
\\ [0.2 cm]
\partial_y v_0\,&=0 & \hbox{ on }\; \Gamma_0,\;t>0,
\\ [0.2 cm]
v_0 \,&=\gamma\mathcal K(\rho)
\qquad & \hbox{ on }\; \Gamma_s,\; t>0,
\\ [0.2 cm]
\rho_t\,&=-\mathcal B(\rho) v_0 \qquad \qquad
\qquad & \hbox{ on }\; \Bbb S, \; \; \, t>0,
\end{array}
\right.
$$
and similarly, it can be reduced to $\partial_t\rho+\Psi_1(\rho)=0$ for $t>0$.
Thus by well-known results of Hele-Shaw models (cf. [\ref{esc-sim-97-1}]),
we have $D\Psi_1(\rho)\in \mathcal H(h^{4+\alpha}(\Bbb S),h^{1+\alpha}(\Bbb S))$,
for any $\rho\in\mathcal O_\delta$.
On the other hand, by Lemma 3.4, (\ref{3.5}) and (\ref{3.28}), we have
$\Psi_2\in C^\infty(\mathcal O_\delta,h^{3+\alpha}(\Bbb S))$ and
$D\Psi_2(\rho)\in L(h^{4+\alpha}(\Bbb S),h^{3+\alpha}(\Bbb S))$. Since
$h^{3+\alpha}(\Bbb S)$ is compactly embedded into $h^{1+\alpha}(\Bbb S)$,
by the well-known perturbation result (cf. Theorem I.1.5.1 in [\ref{amann}], or
Proposition 2.4.3 in [\ref{lunardi}]), we get the desired result. $\qquad\Box$
\medskip
The above result implies that problem (\ref{3.35}) is of parabolic type
in $\mathcal O_\delta$.
Thus by using analytic semigroup theory and applications to parabolic
differential problems (see [\ref{amann}] and [\ref{lunardi}]), we get the local
well-posedness.
\medskip
{\bf Theorem 3.7} \ \ {\em For any given $\rho_0\in \mathcal O_\delta$, there exists
a maximal $T>0$ such that problem $(\ref{3.35})$ has a unique solution
$\rho\in C([0,T),\mathcal O_\delta)\cap C^1([0,T), h^{1+\alpha}(\Bbb S))$.
}
\medskip
From Theorem 3.7, combined with Lemma 3.1 and Lemma 3.5, we see that the free
boundary problem (\ref{1.1}) is locally well-posed: given $\rho_0\in \mathcal O_\delta$,
there exists a unique solution $(\sigma, p,\eta, \rho)$ of problem (\ref{1.1}).
\medskip
\hskip 1em
\section{Linearization and Eigenvalues}
\setcounter{equation}{0}
\hskip 1em
In this section we study linearization of problem (\ref{3.35})
at the stationary solution $\rho=0$, and compute all eigenvalues
of $D\Psi(0)$.
First, we study the linearization
of the free boundary problem (\ref{1.1}) at the flat stationary
solution $(\sigma_s, p_s, \eta_s, \rho_s)$.
Let
\begin{equation}\label{4.1}
\sigma=\sigma_s+\epsilon \phi (x,y,t),\quad
p=p_s+\epsilon \psi(x,y,t),\quad
\eta=\eta_s+\epsilon \xi(x,t), \quad
\rho=\rho_s+\epsilon \zeta(x,t),
\end{equation}
where $\phi$, $\psi$, $\xi$ and $\zeta$ are unknown functions.
At each time $t>0$,
by (\ref{3.6}), the mean curvature of the curve
$y=\rho_s+\epsilon\zeta$ can be expressed by
\begin{equation}\label{4.2}
\mathcal K(\epsilon\zeta)=-\epsilon\zeta_{xx} +O(\epsilon^2).
\end{equation}
Let ${\bf n}_{\epsilon\zeta}=(-\epsilon\zeta_x,1)$ be the
outward normal
direction on $y=\rho_s+\epsilon\zeta$.
We compute
\begin{equation}\label{4.3}
\begin{array}{rl}
\langle\nabla p, {\bf n}_{\epsilon\zeta}\rangle \big|_{y=\rho_s+\epsilon\zeta}
&=(-\epsilon p_x \zeta_x + p_y )\big|_{y=\rho_s+\epsilon\zeta}
= \partial_y(p_s+\epsilon\psi)\big|_{y=\rho_s+\epsilon\zeta}
+O(\epsilon^2)
\\ [0.3 cm]
&=\epsilon p_s''(\rho_s)\zeta+\epsilon \partial_y\psi\big|_{y=\rho_s}+O(\epsilon^2)
\\ [0.3 cm]
& =-\epsilon\big[\mu(\bar\sigma-\tilde\sigma)\zeta-\partial_y\psi\big|_{y=\rho_s}
\big]+O(\epsilon^2).
\end{array}
\end{equation}
By substituting (\ref{4.1}) into problem (\ref{1.1}), collecting all
first order $\epsilon$-terms and with the aid of (\ref{4.2}), (\ref{4.3})
and the fact that
$$
\sigma_s''(\eta_s^+)=\hat\sigma,
\qquad
\sigma_s''(\eta_s^-)=0,
\qquad
\sigma_s'(\rho_s)=\sqrt{\bar\sigma^2-\hat\sigma^2},
$$
$$
p_s''(\eta_s^+)=-\mu(\hat\sigma-\tilde\sigma),
\qquad
p_s''(\eta_s^-)=\nu,
\qquad
p_s'(\rho_s)=0,
$$
we obtain the linearization of problem (\ref{1.1}) at
$(\sigma_s, p_s, \eta_s, \rho_s)$
is given by
\begin{equation}\label{4.4}
\left\{
\begin{array}{ll}
\Delta \phi=\phi \chi_{ \Bbb E_s}
\qquad & \mbox{in}\;\; \Omega_s,\;t>0,
\\ [0.3 cm]
\Delta \psi=-\mu\phi \chi_{\Bbb E_s}
\qquad & \mbox{in}\;\; \Omega_s,\;t>0,
\\ [0.3 cm]
\phi=-\sqrt{\bar\sigma^2-\hat\sigma^2}\zeta,\qquad
\psi =-\gamma\zeta_{xx}
\qquad\qquad & \mbox{on}\;\; \Gamma_s,\;t>0,
\\ [0.3 cm]
\phi=0,\qquad [\![\partial_y \phi]\!]=-\hat\sigma\xi
\qquad & \mbox{on}\;\; J_s,\;t>0,
\\ [0.3 cm]
[\![\psi]\!]=0,\quad\; [\![\partial_y \psi]\!]=
\big(\mu(\hat\sigma-\tilde\sigma)+\nu\big)\xi
\qquad & \mbox{on}\;\; J_s,\;t>0,
\\ [0.3 cm]
\partial_y\phi=0 ,\quad \partial_y\psi=0 & \mbox{on}\;\; \Gamma_0,\;t>0,
\\ [0.3 cm]
\partial_t\zeta=-\partial_y \psi\big|_{y=\rho_s} +\mu(\bar\sigma-\tilde\sigma)\zeta
& \mbox{on}\;\; \Bbb S,\;\,\;t>0.
\end{array}
\right.
\end{equation}
For any given $\zeta\in h^{4+\alpha}(\Bbb S)$, by solving problem
$(4.4)_1$--$(4.4)_6$, we get a unique solution $(\phi,\psi,\xi)$.
Since problem (\ref{1.1}) is equivalent to problem (\ref{3.35}), their corresponding
linearizations at the flat stationary solution are also equivalent. It implies that
\begin{equation}\label{4.5}
D\Psi(0)\zeta = \partial_y \psi\big|_{y=\rho_s} - \mu(\bar\sigma-\tilde\sigma)\zeta
\qquad\mbox{for}\;\; \zeta\in h^{4+\alpha}(\Bbb S).
\end{equation}
Next, we give an explicit expression of $D\Psi(0)$ and study its eigenvalues.
For any given
\begin{equation}\label{4.6}
\zeta(x)=\displaystyle \sum_{k\in\Bbb Z} c_k {\bf e}^{ikx}
\in C^\infty(\Bbb S),
\end{equation}
set
\begin{equation}\label{4.7}
\phi(x,y)=\sum_{k\in\Bbb Z} a_k(y) {\bf e}^{ikx},
\qquad
\psi(x,y)=\sum_{k\in\Bbb Z} b_k(y) {\bf e}^{ikx},
\qquad
\xi(x)=\displaystyle \sum_{k\in\Bbb Z} d_k {\bf e}^{ikx},
\end{equation}
where $a_k(y)$ and $b_k(y)$ are unknown functions and $d_k$ is an unknown
coefficient for each $k\in\Bbb Z$.
Substituting (\ref{4.6}) and (\ref{4.7}) into (\ref{4.4}), we see that
for each $k\in\Bbb Z$, there hold
\begin{equation}\label{4.8}
\left\{
\begin{array}{ll}
\displaystyle a_{k}''-k^2 a_{k}=a_{k} \qquad
\qquad \qquad\, & \mbox{for}\;\; \eta_s< y <\rho_s,
\\ [0.4 cm]
a_k(y)=0 \qquad
& \mbox{for}\;\; 0< y \le \eta_s,
\\ [0.4 cm]
\displaystyle a_k' (\eta_s^+)=-\hat\sigma d_k,
\\ [0.4 cm]
a_k(\rho_s)=-\sqrt{\bar\sigma^2-\hat\sigma^2}c_k,
\end{array}
\right.
\end{equation}
and
\begin{equation}\label{4.9}
\left\{
\begin{array}{l}
\displaystyle b_{k}''-k^2 b_{k} =- \mu a_{k}
\qquad\qquad\quad\;\;\;\;
\mbox{for}\;\; \eta_s< y <\rho_s,
\\ [0.4 cm]
\displaystyle b_{k}''-k^2 b_{k} = 0
\qquad\qquad\qquad\qquad
\mbox{for}\;\; 0< y <\eta_s,
\\ [0.4 cm]
\displaystyle b_k'(\eta_s^+)
= b_k'(\eta_s^-)+\big(\mu(\hat\sigma-\tilde\sigma)+\nu\big) d_k,
\\ [0.4 cm]
b_k(\eta_s^+)=b_k(\eta_s^-),
\\ [0.4 cm]
b_k (\rho_s)=\gamma k^2 c_k,
\\ [0.4 cm]
\displaystyle b_k'(0)=0.
\end{array}
\right. \;
\end{equation}
By solving problem (\ref{4.8}), we obtain that for each $k\in\Bbb Z$,
\begin{equation}\label{4.10}
a_k(y)=\left\{
\begin{array}{ll}
\displaystyle
-{\sinh \sqrt{k^2+1}(y-\eta_s)\over
\sinh \sqrt{k^2+1}(\rho_s-\eta_s)}\sqrt{\bar\sigma^2-\hat\sigma^2}c_k \qquad
& \mbox{for}\;\; \eta_s\le y\le \rho_s,
\\[ 0.4 cm]
0 \qquad\quad & \mbox{for}\;\; 0< y< \eta_s,
\end{array}
\right.
\end{equation}
and
\begin{equation}\label{4.11}
d_k=
{ \sqrt{k^2+1}\sqrt{\bar\sigma^2-\hat\sigma^2}c_k
\over \hat\sigma
\sinh \sqrt{k^2+1}(\rho_s-\eta_s)}.
\end{equation}
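The formulas~(\ref{4.10}) and~(\ref{4.11}) can be verified symbolically. The following sketch (an illustration, not part of the analysis; the symbol names are chosen here only for readability) checks the ODE, the boundary condition at $y=\rho_s$, and the derivative condition at $y=\eta_s^+$ from~(\ref{4.8}):

```python
import sympy as sp

y, k = sp.symbols('y k', real=True)
eta, rho, sbar, shat, c = sp.symbols('eta_s rho_s sigma_bar sigma_hat c_k',
                                     positive=True)

m = sp.sqrt(k**2 + 1)
S = sp.sqrt(sbar**2 - shat**2)

# (4.10), upper branch, and (4.11)
a = -sp.sinh(m*(y - eta)) / sp.sinh(m*(rho - eta)) * S * c
d = m*S*c / (shat*sp.sinh(m*(rho - eta)))

ode  = sp.simplify(sp.diff(a, y, 2) - k**2*a - a)        # a_k'' - k^2 a_k = a_k
bc   = sp.simplify(a.subs(y, rho) + S*c)                 # a_k(rho_s) = -sqrt(...) c_k
jump = sp.simplify(sp.diff(a, y).subs(y, eta) + shat*d)  # a_k'(eta_s^+) = -shat d_k
```

All three residuals simplify to zero, confirming (4.10)--(4.11).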
Then by solving problem (\ref{4.9}), an elementary computation shows that,
for each $k\neq0$, $k\in\Bbb Z$,
\begin{equation}\label{4.12}
b_k(y)=\left\{
\begin{array}{ll}
\displaystyle
-\mu a_k(y)+(\gamma k^2-\mu\sqrt{\bar\sigma^2-\hat\sigma^2})c_k
{\cosh ky\over \cosh k\rho_s}+ e_k {\sinh k(\rho_s-y)\over
\sinh k(\rho_s-\eta_s)}\quad& \mbox{for}\;\; \eta_s\le y\le \rho_s,
\\ [0.6 cm]
\displaystyle
(\gamma k^2-\mu\sqrt{\bar\sigma^2-\hat\sigma^2})c_k
{\cosh ky\over \cosh k\rho_s}+ e_k {\cosh ky\over
\cosh k\eta_s}\qquad& \mbox{for}\;\; 0<y<\eta_s,
\end{array}
\right.
\end{equation}
where
\begin{equation}\label{4.13}
e_k={(\mu\tilde\sigma-\nu) d_k\over
k[\coth k(\rho_s-\eta_s)+\tanh k\eta_s]}\qquad
\qquad \mbox{for}\;\ k\neq0,\; k\in\Bbb Z.
\end{equation}
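One can likewise check that (\ref{4.12})--(\ref{4.13}) solve problem~(\ref{4.9}). The sketch below builds the two branches symbolically and evaluates the residuals at assumed numerical parameter values chosen only for this check:

```python
import sympy as sp

y, k = sp.symbols('y k')
eta, rho, mu, nu, gma, sbar, shat, stil, c = sp.symbols(
    'eta rho mu nu gma sbar shat stil c')

m = sp.sqrt(k**2 + 1)
S = sp.sqrt(sbar**2 - shat**2)
a  = -sp.sinh(m*(y - eta))/sp.sinh(m*(rho - eta))*S*c                 # (4.10)
d  = m*S*c/(shat*sp.sinh(m*(rho - eta)))                              # (4.11)
e  = (mu*stil - nu)*d/(k*(sp.coth(k*(rho - eta)) + sp.tanh(k*eta)))   # (4.13)
A  = (gma*k**2 - mu*S)*c
bp = -mu*a + A*sp.cosh(k*y)/sp.cosh(k*rho) \
     + e*sp.sinh(k*(rho - y))/sp.sinh(k*(rho - eta))                  # (4.12), upper
bm = A*sp.cosh(k*y)/sp.cosh(k*rho) + e*sp.cosh(k*y)/sp.cosh(k*eta)    # (4.12), lower

# Assumed numerical values (for illustration only)
num = {k: 3, eta: 0.5, rho: 1.2, mu: 1.0, nu: 0.3,
       gma: 0.8, sbar: 2.0, shat: 1.0, stil: 0.4, c: 1.0}
ode_p = float((sp.diff(bp, y, 2) - k**2*bp + mu*a).subs(num).subs(y, 0.9))
ode_m = float((sp.diff(bm, y, 2) - k**2*bm).subs(num).subs(y, 0.3))
match = float((bp - bm).subs(num).subs(y, 0.5))                       # continuity at eta_s
jump  = float((sp.diff(bp - bm, y).subs(num).subs(y, 0.5)
               - ((mu*(shat - stil) + nu)*d).subs(num)))              # derivative jump
```

All four residuals vanish to machine precision, which is consistent with (4.9).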
By using (\ref{2.4}) and (\ref{4.11}), we have $d_0=c_0$, then
\begin{equation}\label{4.14}
b_0(y)=\left\{
\begin{array}{ll}
\displaystyle
\Big[\mu\sqrt{\bar\sigma^2-\hat\sigma^2}
\Big({\sinh(y-\eta_s)\over \sinh(\rho_s-\eta_s)}-1\Big)
+(\mu\tilde\sigma-\nu)
(\rho_s-y)\Big]c_0
\quad\;\; \; & \mbox{for}\;\; \eta_s\le y\le \rho_s,
\\ [0.5 cm]
\displaystyle
\Big[ -\mu\sqrt{\bar\sigma^2-\hat\sigma^2}+(\mu\tilde\sigma-\nu)
(\rho_s-\eta_s)\Big]c_0 \qquad& \mbox{for}\;\; 0<y<\eta_s.
\end{array}
\right.
\end{equation}
By (\ref{4.10})--(\ref{4.13}), for $k\neq 0$, we compute
\begin{equation}\label{4.15}
\begin{array}{rl}
&b_k'(\rho_s)-\mu (\bar\sigma-\tilde\sigma)c_k
\\ [0.3 cm]
= \,& \displaystyle
-\mu a_k'(\rho_s)+(\gamma k^2-\mu
\sqrt{\bar\sigma^2-\hat\sigma^2})c_k k\tanh k\rho_s
- {k\, e_k\over\sinh k(\rho_s-\eta_s)}-\mu (\bar\sigma-\tilde\sigma)c_k
\\ [0.3 cm]
= \, & \lambda_k(\gamma) c_k,
\end{array}
\end{equation}
where
\begin{equation}\label{4.16}
\begin{array}{rl}
\lambda_k(\gamma)
\,& = \gamma k^3 \tanh k\rho_s + \mu\sqrt{\bar\sigma^2-\hat\sigma^2}\big[
\sqrt{k^2+1}\coth \sqrt{k^2+1}(\rho_s-\eta_s)- k\tanh k\rho_s\big]
\\ [0.3 cm]
& \displaystyle
+{(-\mu\tilde\sigma+\nu)\sqrt{k^2+1} \sqrt{\bar\sigma^2-\hat\sigma^2} \over
\hat\sigma\sinh k(\rho_s-\eta_s)\sinh \sqrt{k^2+1} (\rho_s-\eta_s)
\big[ \coth k(\rho_s-\eta_s)+\tanh k\eta_s\big]}-\mu (\bar\sigma-\tilde\sigma),
\end{array}
\end{equation}
for $k\neq 0$ and $\gamma>0$.
Note that (\ref{2.4}) implies $\coth (\rho_s-\eta_s)=\bar\sigma/
\sqrt{\bar\sigma^2-\hat\sigma^2}$.
Then from (\ref{4.14}) we compute
\begin{equation}\label{4.17}
\begin{array}{rl}
b_0'(\rho_s)-\mu(\bar\sigma-\tilde\sigma)c_0
\, & = \mu c_0 \sqrt{\bar\sigma^2-\hat\sigma^2}
\coth (\rho_s-\eta_s) -(\mu\tilde\sigma- \nu)c_0
-\mu(\bar\sigma-\tilde\sigma)c_0
\\ [0.3 cm]
\, &= \mu \bar\sigma c_0- (\mu\tilde\sigma-\nu)c_0
-\mu(\bar\sigma-\tilde\sigma)c_0
\\ [0.3 cm]
\, & = \nu c_0.
\end{array}
\end{equation}
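The computation~(\ref{4.17}) can be double-checked symbolically, using $b_0$ from~(\ref{4.14}) and the relation $\coth(\rho_s-\eta_s)=\bar\sigma/\sqrt{\bar\sigma^2-\hat\sigma^2}$ implied by~(\ref{2.4}); the symbol names below are chosen only for this sketch:

```python
import sympy as sp

y = sp.symbols('y', real=True)
eta, rho, mu, nu, sbar, shat, stil, c0 = sp.symbols(
    'eta_s rho_s mu nu sigma_bar sigma_hat sigma_tilde c_0', positive=True)

S = sp.sqrt(sbar**2 - shat**2)
# (4.14), upper branch
b0 = (mu*S*(sp.sinh(y - eta)/sp.sinh(rho - eta) - 1)
      + (mu*stil - nu)*(rho - y)) * c0

lam0 = sp.diff(b0, y).subs(y, rho) - mu*(sbar - stil)*c0
# impose (2.4): coth(rho_s - eta_s) = sigma_bar / sqrt(sigma_bar^2 - sigma_hat^2)
lam0 = sp.simplify(lam0.subs(sp.cosh(rho - eta), sbar/S*sp.sinh(rho - eta)))
```

The result reduces to $\nu c_0$, in agreement with (4.17).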
By (\ref{4.5})--(\ref{4.7}) and (\ref{4.15})--(\ref{4.17}),
we have
\medskip
\medskip
{\bf Lemma 4.1} \ \ {\em For any $\zeta\in
C^\infty(\Bbb S)$ given by $\displaystyle\zeta=\sum_{k\in\Bbb Z}
c_{k}{\bf e}^{ikx}$, there holds
\begin{equation}\label{4.18}
D\Psi(0)\zeta=\sum_{k\in\Bbb Z}
\lambda_k(\gamma)c_{k}{\bf e}^{ikx},
\end{equation}
where $\lambda_k(\gamma)$ is given by $(\ref{4.16})$ for $k\neq0$,
and $\lambda_0(\gamma) \equiv \nu$.
}
\medskip
\medskip
Obviously, for each $k\in\Bbb Z$ and $\gamma>0$,
$\lambda_k(\gamma)$ is an eigenvalue of the linearized
operator $D\Psi(0)$. We have the following properties:
\medskip
{\bf Lemma 4.2} \ \ {\em $(i)$ For any $\gamma>0$,
$\lim_{k\to\infty}\lambda_k(\gamma)=+\infty$.
$(ii)$ There exists a constant $\gamma_*>0$, such that
if $\gamma>\gamma_*$, we have $\lambda_k(\gamma)>0$ for
all $k\in \Bbb Z$; and if $0<\gamma<\gamma_*$, there exists
at least an integer $k_0\in\Bbb Z$ such that $\lambda_{k_0}(\gamma)<0$.
}
\medskip
{\bf Proof}. \ \ $(i)$ By a direct analysis, we have
$$
\lim_{k\to+\infty}\tanh k\rho_s=\lim_{k\to+\infty}\coth k(\rho_s-\eta_s)=1,
$$
$$
\lim_{k\to-\infty}\tanh k\rho_s=\lim_{k\to-\infty}\coth k(\rho_s-\eta_s)=-1,
$$
$$
\lim_{k\to \infty} \big(\sqrt{k^2+1}\coth\sqrt{k^2+1}(\rho_s-\eta_s)- k\tanh k\rho_s
\big)=0.
$$
Hence by (\ref{4.16}), we immediately obtain $\lim_{k\to\infty}\lambda_k(\gamma)
=+\infty$ for any $\gamma>0$.
$(ii)$ Define a sequence $\{\gamma_k\}_{k\neq0}$ by
\begin{equation}\label{4.19}
\begin{array}{rl}
\gamma_k
\,&\displaystyle := {1\over k^3 \tanh k\rho_s }\Big\{ \mu\sqrt{\bar\sigma^2-\hat\sigma^2}
\Big[k\tanh k\rho_s -\sqrt{k^2+1}\coth \sqrt{k^2+1}(\rho_s-\eta_s)\Big]
\\ [0.3 cm]
& \displaystyle
+{(\mu\tilde\sigma-\nu)\sqrt{k^2+1} \sqrt{\bar\sigma^2-\hat\sigma^2} \over
\hat\sigma\sinh k(\rho_s-\eta_s)\sinh \sqrt{k^2+1} (\rho_s-\eta_s)
\big[ \coth k(\rho_s-\eta_s)+\tanh k\eta_s\big]}+\mu (\bar\sigma-\tilde\sigma)
\Big\}.
\end{array}
\end{equation}
Clearly, we have
\begin{equation}\label{4.20}
\lim_{k\to\infty}\gamma_k=0
\qquad \mbox{and}\qquad
\lim_{k\to\infty} k^3 \tanh k\rho_s \gamma_k =\mu(\bar\sigma-\tilde\sigma)>0.
\end{equation}
Let
\begin{equation}\label{4.21}
\gamma_*:=\sup_{k\neq0}\{\gamma_k\}.
\end{equation}
By (\ref{4.20}) we see that $\gamma_*$ is well-defined and $\gamma_*>0$.
By (\ref{4.19}), we rewrite (\ref{4.16}) as
\begin{equation}\label{4.22}
\lambda_k(\gamma)=k^3 \tanh k\rho_s\, \big(\gamma-\gamma_k\big)\qquad
\mbox{for}\;\; k\neq 0,\;k\in\Bbb Z.
\end{equation}
Then the desired result follows from (\ref{4.20}) and (\ref{4.21}).
$\qquad\Box$
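The identity~(\ref{4.22}) and the decay of $\gamma_k$ in~(\ref{4.20}) can be illustrated numerically. In the sketch below, (\ref{4.16}) and (\ref{4.19}) are coded independently and compared; the parameter values are assumptions made only for this check, and (\ref{4.22}) holds as an algebraic identity regardless of whether the stationarity relation (\ref{2.4}) is satisfied:

```python
import numpy as np

# Assumed illustrative parameters
rho_s, eta_s = 1.2, 0.5
mu, nu, sbar, shat, stil = 1.0, 0.3, 2.0, 1.0, 0.4
S = np.sqrt(sbar**2 - shat**2)
dd = rho_s - eta_s
coth = lambda x: 1.0 / np.tanh(x)

def lam(k, gamma):                    # eigenvalue formula (4.16)
    m = np.sqrt(k**2 + 1)
    return (gamma*k**3*np.tanh(k*rho_s)
            + mu*S*(m*coth(m*dd) - k*np.tanh(k*rho_s))
            + (nu - mu*stil)*m*S
              / (shat*np.sinh(k*dd)*np.sinh(m*dd)*(coth(k*dd) + np.tanh(k*eta_s)))
            - mu*(sbar - stil))

def gam(k):                           # threshold sequence (4.19)
    m = np.sqrt(k**2 + 1)
    return (mu*S*(k*np.tanh(k*rho_s) - m*coth(m*dd))
            + (mu*stil - nu)*m*S
              / (shat*np.sinh(k*dd)*np.sinh(m*dd)*(coth(k*dd) + np.tanh(k*eta_s)))
            + mu*(sbar - stil)) / (k**3*np.tanh(k*rho_s))

gamma = 0.8
ks = [k for k in range(-30, 31) if k != 0]
err = max(abs(lam(k, gamma) - k**3*np.tanh(k*rho_s)*(gamma - gam(k))) for k in ks)
```

The two expressions agree to machine precision, and $\gamma_k$ is already small at moderate $|k|$, consistent with (\ref{4.20}).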
\medskip
Denote by $\sigma(D\Psi(0))$
the spectrum of $D\Psi(0)$.
By Lemma 4.1 and
Lemma 4.2, we have
\medskip
{\bf Corollary 4.3} \ \ {\em $(i)$ If $\gamma>\gamma_*$, there exists
a constant $\varpi>0$ such that
$$
\sigma(D\Psi(0))\subset \{\lambda\in \Bbb C: {\rm Re} \lambda \ge \varpi\}.
$$
$(ii)$ If $0<\gamma<\gamma_*$, then $\sigma(D\Psi(0))\cap
\{\lambda\in \Bbb C: {\rm Re}\lambda<0\}\neq \emptyset$.
}
\medskip
{\bf Proof}. \ \ Since $D\Psi(0)\in L(h^{4+\alpha}(\Bbb S),
h^{1+\alpha}(\Bbb S))$, and $h^{4+\alpha}(\Bbb S)$ is
compactly embedded into $h^{1+\alpha}(\Bbb S)$,
we see that $\sigma(D\Psi(0))$
consists of eigenvalues only. By Lemma 4.1, we easily see that
all eigenvalues
of the restriction of $D\Psi(0)$ to $H^{4+r}(\Bbb S)$
are given by $\lambda_k(\gamma)$ for $k\in\Bbb Z$.
Since $h^{4+\alpha}(\Bbb S)$
is the closure of $H^{4+r}(\Bbb S)$ in $C^{4+\alpha}(\Bbb S)$ for
$r>3/2$, we have
$$
\sigma(D\Psi(0))=\{\lambda_k(\gamma);k\in \Bbb Z\}.
$$
Let $\gamma>\gamma_*$. By (\ref{4.21}) and (\ref{4.22}),
we see that
$$
\lambda_k(\gamma)\ge k^3\tanh k\rho_s \,(\gamma-\gamma_*)
\ge \tanh \rho_s\, (\gamma-\gamma_*)>0 \qquad
\mbox{for}\;\; k\neq 0.
$$
Notice that $\lambda_0(\gamma)\equiv \nu>0$. Take
$\varpi\in (0,\min\{\tanh \rho_s\, (\gamma-\gamma_*), \nu\})$, then
$\lambda_k(\gamma)>\varpi$ for all $k\in\Bbb Z$. It implies
that the assertion $(i)$ holds. The assertion $(ii)$
directly follows from Lemma 4.2 $(ii)$. The proof is complete.
\qquad $\Box$
\medskip
\medskip
\hskip 1em
\section{Asymptotic stability}
\setcounter{equation}{0}
\hskip 1em
In this section we study asymptotic stability of the
stationary solution $\rho=0$ of problem (\ref{3.35})
and give a proof of our main result Theorem 1.2.
Since problem (\ref{3.35}) is of parabolic type in
$h^{1+\alpha}(\Bbb S)$, by using geometric theory
of parabolic equations in Banach spaces, we have
\medskip
{\bf Theorem 5.1}\ \ {\em $(i)$ If $\gamma>\gamma_*$,
then the stationary solution $0$ of problem $(\ref{3.35})$
is asymptotically stable. More precisely, there exists
a positive constant $\epsilon$ such that for any given $\rho_0
\in \mathcal O_\delta$ with
$\|\rho_0\|_{h^{4+\alpha}(\Bbb S)}<\epsilon$,
problem $(\ref{3.35})$ has a unique solution
$\rho(t)\in C([0,+\infty),\mathcal O_\delta)
\cap C^1([0, +\infty)$,
$h^{1+\alpha}(\Bbb S))$, which converges exponentially fast
to $0$ as $t\to +\infty$.
$(ii)$ If $0<\gamma<\gamma_*$,
then the stationary solution $0$ is unstable.
}
\medskip
{\bf Proof}. \ \ $(i)$ Let $\gamma >\gamma_*$.
Recall that $h^{4+\alpha}(\Bbb S)$ is densely and compactly
embedded into $h^{1+\alpha}(\Bbb S)$. Set $A:= -D \Psi(0)$
and
$$
G(\rho):=-\Psi(\rho)+D\Psi(0)\rho \qquad\mbox{ for}\;\;
\rho\in\mathcal O_\delta.
$$
Clearly,
we have $G(0)=0$ and $D G(0)=0$.
Problem (\ref{3.35}) is equivalent to the following problem
\begin{equation}\label{5.1}
\rho'(t)=A\rho(t)+G(\rho(t)) \qquad\mbox{for}\;\;t>0,\qquad\quad
\rho(0)=\rho_0.
\end{equation}
By Lemma 3.6, $A$ generates a strongly continuous
analytic semigroup on $h^{1+\alpha}(\Bbb S)$. By Corollary 4.3 $(i)$,
we have $\sup \{ \mbox {Re} \lambda: \lambda\in\sigma(A)\}
<-\varpi<0$. Thus by Theorem 9.1.2 of [\ref{lunardi}],
there are positive constants $\omega, \epsilon$ and $M$
such that if the initial value $\rho_0\in \mathcal{O}_\delta$
and $\|\rho_0\|_{h^{4+\alpha}(\Bbb S)}<\epsilon$, then
the solution $\rho(t)$
of problem (\ref{3.35}) exists globally and
\begin{equation}\label{5.2}
\|\rho(t)\|_{h^{4+\alpha}(\mathbb{S})}+\|\rho'(t)\|_{h^{1+\alpha}(\Bbb S)}
\leq M e^{-\omega t}\|\rho_0\|_{h^{4+\alpha}
(\mathbb{S})} \qquad \mbox {for}\;\; t\geq 0.
\end{equation}
$(ii)$ If $0<\gamma<\gamma_*$, by Corollary 4.3 $(ii)$ we have
$\sigma_+(A)= \sigma(A)\cap \{\lambda\in \Bbb C: {\rm Re}\lambda>0\}\neq\emptyset$
and $\inf \{ {\rm Re} \lambda: \lambda\in\sigma_+(A)\}>0$.
Thus by Theorem 9.1.3 in [\ref{lunardi}], the stationary solution $\rho=0$ is unstable.
The proof is complete.\qquad $\Box$
\medskip
{\bf Proof of Theorem 1.2.} \ \ By Lemma 3.1, Lemma 3.5 and Theorem 5.1 $(i)$,
we see that the flat stationary solution $(\sigma_s, p_s, \eta_s, \rho_s)$
is asymptotically stable for $\gamma>\gamma_*$. More precisely,
there is a constant $\epsilon>0$ such that for any
$\rho_0\in \mathcal O_\delta$ satisfying
$\|\rho_0\|_{h^{4+\alpha}(\Bbb S)}<\epsilon$,
problem $(\ref{1.1})$ has a unique
global solution $(\sigma(t), p(t), \eta(t), \rho(t))$
with the form of
$$
\sigma(t)=\Phi_*^{\tilde\rho(t)} u(t),\qquad
p(t)=\Phi_*^{\tilde\rho(t)} v(t), \qquad \eta(t)=\eta_s+\mathcal S(\tilde\rho(t)),
\qquad \rho(t)=\rho_s+\tilde\rho(t),
$$
where $\tilde\rho(t)$ is the solution of problem (\ref{3.35}) with
$\tilde\rho(0)=\rho_0$, and $u(t)$, $v(t)$, $\mathcal S(\tilde\rho(t))$
are given by (\ref{3.17}) and (\ref{3.32}). By (\ref{5.2}) and the reduction in
Section 3, we see that $(\sigma(t),p(t), \eta(t), \rho(t))$
converges exponentially fast to $(\sigma_s, p_s, \eta_s, \rho_s)$
in $h^{4+\alpha}(\Omega_{\tilde\rho(t)}\backslash J_{\mathcal S(\tilde\rho(t))} )
\times h^{2+\alpha}
(\Omega_{\tilde\rho(t)}\backslash J_{\mathcal S(\tilde\rho(t))} )
\times h^{4+\alpha}(\Bbb S) \times h^{4+\alpha}(\Bbb S)$,
as time goes to infinity.
Similarly, by Lemma 3.1, Lemma 3.5 and
Theorem 5.1 $(ii)$,
the flat stationary solution $(\sigma_s, p_s, \eta_s, \rho_s)$
is unstable for $0<\gamma<\gamma_*$.
The proof is complete.
\qquad $\Box$
\medskip
{\bf Remark 5.2} \ \ From (\ref{4.19}) and (\ref{4.21}), we easily obtain
$d\gamma_k/d\nu<0$ for each $k\neq0$, $k\in \Bbb Z$. Thus we have
$d\gamma_*/d\nu\le0$. This suggests that a smaller value of $\nu$ may make
the tumor less aggressive. In the limiting case $\nu=0$,
since $\lambda_0(\gamma)= \nu=0$, we have
$0\in\sigma(D\Psi(0))$. It implies that the flat stationary
solution is no longer asymptotically stable for any $\gamma>0$.
\medskip
\hskip 1em
\vspace{.05in}
{\small
\section{Introduction}
\Ma{The Chiral Magnetic Effect (CME) is the generation of an electric current $\bm J$ in the presence of an external magnetic field $\bm B$~\cite{FKW08}}:
\begin{equation}
\braket{\bm{J}}=\frac{e^2}{2\pi^2}\mu_5\bm{B}.\label{CMEdef}
\end{equation}
\Ma{The current~\eq{CMEdef} appears naturally in a particular set of physical systems characterized by a broken invariance under the spatial reflection $\mathcal P$. The broken $\mathcal P$--invariance may be realized, for example, in condensed matter systems with massless fermionic quasiparticles of a Weyl or Dirac type. Such fermions are characterized by different (left and right) chiralities, which are often said to be ascribed to two different ``Weyl nodes''. If the numbers of fermions with different chiralities are not equal to each other, then the system is $\mathcal P$--broken. The chiral imbalance is conveniently characterized by the difference $\mu_5=\mu_L-\mu_R$ between the chemical potentials in the left ($\mu_L$) and right ($\mu_R$) Weyl nodes. This difference in the chemical potentials determines the magnitude of the CME current in Eq.~\eq{CMEdef}.}
\Al{The CME is a relevant transport phenomenon that has its roots in the physics of quantum anomalies. A theory is said to be anomalous if there exists a quantity which is conserved at the classical level but fails to be conserved in the quantum realm. In particular, the CME stems from the axial anomaly, which leads to non-conservation of the chiral current in Weyl systems described by the Weyl Hamiltonian of a massless particle with the wavefunction~$\psi_s$:}
\begin{equation}
H_s = s v_F \psi^{+}_s\bm{\sigma}\cdot\bm{k}\psi_s.\label{Weyl1}
\end{equation}
\Ma{Here the parameter $s=\pm1$ denotes the chirality of the particle propagating with the velocity~$v_F$ and momentum $\bm k$ and $\bm \sigma$ is the vector of the Pauli matrices. In relativistic systems, Lorentz invariance forces $v_F=c$, while in condensed matter systems, the velocity $v_F$ is not constrained to any particular value. It is often said that the parameter $s=\pm1$ labels two distinct ``Weyl points''.}
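As a minimal illustration of the spectrum of~\eq{Weyl1}, one can diagonalize the one-particle Hamiltonian $h_s(\bm k)=s\,v_F\,\bm\sigma\cdot\bm k$ numerically. In the sketch below the momentum components and $v_F=1$ are arbitrary choices made only for this illustration; both chiralities yield the linear dispersion $E=\pm v_F|\bm k|$:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

v_F = 1.0                       # arbitrary units (illustration only)
k = np.array([0.3, -0.4, 1.2])  # arbitrary momentum
absk = np.linalg.norm(k)

energies = {}
for s in (+1, -1):              # the two chiralities (Weyl points)
    h = s * v_F * (k[0]*sx + k[1]*sy + k[2]*sz)
    energies[s] = np.sort(np.linalg.eigvalsh(h))
```

Each chirality gives the pair of energies $\mp v_F|\bm k|$, i.e. a single Dirac cone per Weyl point.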
\Al{The Weyl systems~(\ref{Weyl1}) possess two quantities that are conserved at the level of classical equations of motion. These are the electric charge $Q$ and the axial charge $Q_5$ that are described, respectively, by electric four-current $J^\mu$ and the chiral current\footnote{\Ma{We reserve the notation $\bm j$ for another current to be defined below~\eq{conservedcurrent}.}} $j^\mu_5$:
\begin{equation}
J^\mu \equiv (Q,{\bm J}) = {\bar \Psi} \gamma^\mu \Psi,
\qquad
j^{\mu}_5 \equiv (q_5,{\bm j}_5) = {\bar \Psi} \gamma_5\gamma^{\mu} \Psi.
\label{eq:j:5}
\end{equation}
Mathematically, the conservation implies that the four-divergence of both currents is identically zero, $\partial_\mu J^\mu = \partial_\mu j^\mu_5= 0$, provided the wave function $\Psi = (\psi_R, \psi_L)^T$ satisfies the classical equation of motion $H \Psi = 0$, where the Hamiltonian $H = {\mathrm{diag}}\,(H_R,H_L)$ incorporates both chiral modes~\eq{Weyl1}. We use the conventional nomenclature of $\gamma$ matrices in the chiral basis:
\begin{equation}
\gamma^0 = \left(\begin{matrix} 0 & {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \\ {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} & 0 \end{matrix}\right),
\qquad
{\bm \gamma} = \left(\begin{matrix} 0 & {\bm \sigma} \\ - {\bm \sigma} & 0 \end{matrix}\right),
\qquad
\gamma^5 = \left(\begin{matrix} - {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} & 0 \\ 0 & {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}} \end{matrix}\right).
\label{eq:gamma}
\end{equation}
In the presence of conventional electromagnetic fields, quantum fluctuations lead to the non-conservation of the chiral current. Technically, the loss of conservation appears as a result of the so-called triangle diagrams of virtual fermions, which lead to~\cite{A69,BJ69}:}
\begin{equation}
\partial_{\mu}j^{\mu}_5=\frac{1}{2\pi^2}\bm{E}\cdot\bm{B}.\label{chiralanom}
\end{equation}
\Al{The triangle diagrams that give rise to the non-conserved current~(\ref{chiralanom}) are also responsible for the generation of the current in Eq.~(\ref{CMEdef}). The form of the anomaly can be partially fixed by algebraic constraints on the effective action of the theory that leads to the right-hand side of Eq.~(\ref{chiralanom}). These constraints are imposed by the algebraic structure of the symmetry that is anomalously broken~\cite{Kharzeev:2013ffa}. Technically, relating the chiral (triangle) anomaly to the generation of the CME current~\eq{CMEdef} requires a rigorous derivation which takes into account the Wess-Zumino consistency conditions~\cite{L16}.} \Ma{In the presence of the chiral gauge fields $A^{5}_{\mu}$ that are coupled to the chiral current $j^{\mu}_5$, the anomalous effects become more subtle and the currents~\eq{eq:j:5} have to be modified consistently. In our work $A^5_\mu \equiv 0$, so we will use the straightforward definition~\eq{eq:j:5} for the vector and axial currents.}
\Ma{In general, two ways have been proposed to create an environment that is able to generate the anomalous current~\eq{CMEdef}. The first approach is to drive the system out of equilibrium in order to reach a stationary regime where $\mu_5\neq0$. This regime may be achieved by applying an electric field $\bm{E}$ parallel to $\bm{B}$, so that a chiral charge imbalance is created via the chiral anomaly~\eq{chiralanom} and a nonzero chiral chemical potential $\mu_5$ is generated. The system then produces a non-equilibrium electric current via the CME mechanism~\eq{CMEdef}. Notice that the chiral imbalance $\mu_5 \neq 0$ does not exist in an equilibrium regime, as the populations of the left-handed and right-handed particles mix with each other due to interactions, and the system relaxes towards the $\mu_5 = 0$ equilibrium.
Therefore, the anomalous electric current~\eq{CMEdef} is zero in thermal equilibrium.}
\Ma{The second approach consists in shifting the positions of the Weyl nodes in energy without driving the system out of equilibrium}:
\begin{equation}
\braket{\bm{J}}=\frac{e^2}{4\pi^2}(\epsilon_{R}-\epsilon_{L})\bm{B}=0,\label{CMEwrong}
\end{equation}
\Ma{where $\epsilon_{L}$ and $\epsilon_{R}$ are the positions of the left and right Weyl points in energy. The equilibrium current~\eq{CMEwrong}, however, vanishes in Hermitian systems. Physically, the current is zero because the difference in energies of the right- and left-handed chiral fermions does not create a true chiral imbalance. Mathematically, the current~\eq{CMEwrong} does not exist because the difference in energies $(\epsilon_{R}-\epsilon_{L})$ is sensitive to the chirality of the fermion and is, therefore, nothing but the zeroth component of a chiral gauge field, $A^5_0$. Once the chiral gauge field appears, the definition of the physical electric current starts to differ from the naive covariant version~\eq{eq:j:5} by an extra term coming from the so-called Bardeen polynomials. This term cancels the energy difference precisely, and the physical (so-called ``consistent'') version of the current vanishes in thermal equilibrium~\eq{CMEwrong}. For further details and technicalities we refer the interested reader to Ref.~\cite{L16}.
}
\Ma{Although the axial chemical potential vanishes in thermodynamic equilibrium, $\mu_5 = 0$, the vector chemical potential $\mu = \mu_R + \mu_L$ of a generic fermionic system may still be nonzero. In the presence of the background magnetic field $\bm B$, the system generates -- via the same chiral anomaly -- the chiral current:
\begin{equation}
\braket{\bm{J}_5}=\frac{e^2}{2\pi^2}\mu\bm{B},\label{CSEdef}
\end{equation}
which is the direct analogue of the CME~\eq{CMEdef}, but now in the chiral sector. Equation~\eq{CSEdef} describes the Chiral Separation Effect (CSE), which will play a role in our derivations below along with the CME.}
\begin{figure}
\begin{tabular}{cc}
\includegraphics[scale=0.2]{bands.pdf} & \\[-50mm]
& \includegraphics[scale=0.2]{LLwithmu.pdf} \\[10mm]
(a) & (b)
\end{tabular}
\caption{(color online) (a) Band structure of $\mathcal{H}$ at zero magnetic field but finite chemical potential. Contrary to Hermitian systems, the presence of a chemical potential may strongly modify the spectrum. (b) Landau level spectrum of the non-Hermitian model at finite chemical potential $\mu$. A finite value of $\mu$ does not merely shift the LLL spectrum (red) upwards or downwards; it also shifts it laterally. The lateral shift makes the LLL contribution to the CME non-zero.}
\label{fig:bands}
\end{figure}
While the non-equilibrium situation has been explored extensively in the literature, leading, for instance, to the celebrated negative quadratic magnetoresistivity in Weyl metals, the equilibrium scenario appears to be impossible, and to date there is consensus that the CME cannot occur in equilibrium~\cite{MP15,Y15,Z16}.
The statement of the absence of the CME in equilibrium can be seen as an extension of a no-go theorem given by Bloch concerning the existence of equilibrium currents in solids in the thermodynamic limit~\cite{B49}. This theorem has been extended to chiral matter in Ref.~\cite{Y15} and refined in Ref.~\cite{Z16} (the absence of the CME in equilibrium was also obtained within the chiral kinetic formalism in Ref.~\cite{MP15}). Three elements are usually associated with this theorem in chiral matter: the existence of Weyl nodes, which always come in pairs~\cite{NN181,NN281}, (local) gauge invariance and, of course, the assumption that the system is in an equilibrium state. As we have mentioned, it is known how to break the second condition and drive the system out of equilibrium. Recently, it has been proposed that the first assumption, of having pairs of Weyl nodes, can be broken in Weyl superconductors, where an external magnetic field induces a gap in one of the Weyl nodes (and its particle-hole conjugate), leaving effectively a single Weyl node~\cite{OBA17}. However, we stress here that the presence of Weyl nodes is not a strict requirement for the absence (or presence) of the CME~\cite{Y15,Z16}.
Interestingly, non-Hermitian fermionic systems appear to be a
\Ma{promising physical environment that can be realized in real experiments.}
Nowadays, there is a surge of interest in non-Hermitian systems for many different reasons, ranging from fundamental questions in the quantum (and statistical) theory of fields and the role of topology in non-Hermitian systems\cite{B05,C17,GAK18} to applied science. Especially interesting among them are the non-Hermitian systems that display a real spectrum, such as $\mathcal{PT}$-symmetric or quasi-Hermitian systems. Although non-Hermitian, they display a unitary evolution, and it is possible to define a consistent thermodynamics for them\cite{GDS16}.
\section{The model} The model we will study is a non-Hermitian extension of the massive Dirac model in $(3+1)$ dimensions, where, together with the usual mass term $m$, an anti-Hermitian mass $m_5$ is introduced~\cite{AB15,ABM15,AMS17}:
\begin{equation}
H=\bm{\alpha}\cdot\bm{k}+m\hat{\beta}+m_5\hat{\beta}\gamma_5. \label{NHWeylHam}
\end{equation}
\Ma{Here we use the original Dirac notation ${\bm \alpha} \equiv \gamma^0 {\bm \gamma}$ and $\hat{\beta} \equiv \gamma^0$, where the Dirac gamma matrices are given in Eq.~\eq{eq:gamma}, and $\bm k$ is the momentum of the particle.} The advantage of this model is that the first two terms on the right hand side of Eq.(\ref{NHWeylHam}) are Hermitian by themselves, so the only non-Hermitian (anti-Hermitian) term is $m_5\hat{\beta}\gamma_5$. \Al{When $m_5=0$, the model Eq.(\ref{NHWeylHam}) reduces to the usual Dirac model for relativistic fermions. It also constitutes the low-energy model for the bulk states of topological insulators, and when $m=0$, or when $m=m(\bm{k})$ becomes a function with nodal points in momentum space, this model describes Weyl fermions \cite{AMV18}.}
\Al{To date, there is no known experimental realization of an electronic system with the non-Hermitian mass term $m_5\hat{\beta}\gamma_5$.} \Ma{However, we are not aware of any no-go theorem that would forbid this term from appearing in open systems. Therefore, we consider the model~\eq{NHWeylHam} as a generic system that captures the essential properties of the non-Hermitian mass. Our aim is to show, conceptually, that the equilibrium Chiral Magnetic Effect is, in principle, possible in a generic non-Hermitian system.}
It has already been stated in the literature that non-Hermitian Hamiltonians are not gauge-invariant in general. This can be viewed as the fact that the Noether theorem, relating continuous symmetries and conserved currents in field theories, does not hold in non-Hermitian systems\cite{ABM15,AMS17,M18}. For this reason, there is some arbitrariness when defining a coupling to electromagnetic fields in the Hamiltonian (\ref{NHWeylHam}). In the present work we are interested in comparing our results with those in Hermitian systems, so we couple the electromagnetic fields to the model (\ref{NHWeylHam}) with $m_5=0$, which is Hermitian (and where the principle of local gauge invariance holds), and only later switch on the non-Hermitian term $m_5\hat{\beta}\gamma_5$.
Also, the fact that we cannot apply the Noether theorem for Hermitian systems to non-Hermitian ones does not imply that conserved currents cannot exist for the latter.
\Al{In conventional Hermitian quantum mechanics, the time dependence of any operator $\mathcal{O}$ can be determined in the Heisenberg picture:}
\begin{equation}
\mathcal{O}(t)=e^{iH^{+}t}\mathcal{O}e^{-iH t}.\label{Heis1}
\end{equation}
\Al{If we naively try to proceed following the same steps for a non-Hermitian (NH) system, we get an unconventional expression for the time variation of the operator $\mathcal{O}(t)$\cite{GHK10,SZ13}:}
\begin{equation}
\dot{\mathcal{O}} \equiv \frac{ d\mathcal{O}(t)}{d t}=ie^{iH^{+}t}\left(H^+\mathcal{O}-\mathcal{O}H\right)e^{-iH t}.\label{TevolNH}
\end{equation}
For Hermitian systems, $H^+=H$, and we recognize commutation with $H$ as the condition any operator must satisfy to be a conserved quantity. For non-Hermitian systems, we immediately see that an operator is a conserved quantity if, instead of commuting with the Hamiltonian, it fulfills the quasi-Hermiticity condition $H^+\mathcal{O}=\mathcal{O}H$, so that $\dot{\mathcal{O}}=0$. In the case of the $U(1)$ charge symmetry, it is clear that the generator of this symmetry in Hermitian systems commutes with $H$ but does not satisfy the quasi-Hermiticity condition, so it is not a conserved quantity for non-Hermitian systems. As we will see in the next paragraphs, it is possible to find operators that, while not commuting with $H$, satisfy the quasi-Hermiticity condition, thus defining conserved quantities.
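To make the criterion concrete, here is a small numerical sketch (our illustration, not part of the derivation): for any $H=S^{-1}hS$ with $h$ Hermitian and $S$ invertible, the metric $\eta=S^{+}S$ fulfills $H^{+}\eta=\eta H$, and the operator evolved as in Eq.(\ref{TevolNH}) stays constant in time.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary quasi-Hermitian Hamiltonian: H = S^{-1} h S with h Hermitian,
# so that eta = S^dagger S obeys the quasi-Hermiticity condition H^+ eta = eta H.
S = rng.normal(size=(4, 4)) + 4.0 * np.eye(4)   # well-conditioned, invertible
h = rng.normal(size=(4, 4))
h = h + h.T                                      # Hermitian (real symmetric)
H = np.linalg.inv(S) @ h @ S                     # non-Hermitian, real spectrum
eta = S.T @ S                                    # positive definite metric

assert not np.allclose(H, H.conj().T)            # H is indeed non-Hermitian
assert np.allclose(H.conj().T @ eta, eta @ H)    # quasi-Hermiticity holds

def expm(A):
    """Matrix exponential via eigendecomposition (A assumed diagonalizable)."""
    w, V = np.linalg.eig(A)
    return (V * np.exp(w)) @ np.linalg.inv(V)

def heisenberg(O, t):
    """O(t) = exp(i H^+ t) O exp(-i H t), as in the equation above."""
    return expm(1j * H.conj().T * t) @ O @ expm(-1j * H * t)

# eta fulfills H^+ O = O H, hence dO/dt = 0: eta(t) stays constant.
assert np.allclose(heisenberg(eta, 0.0), heisenberg(eta, 0.7))
```

The same check with $\mathcal{O}$ replaced by a generic matrix commuting with $H$ fails, mirroring the discussion of the $U(1)$ generator above.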
Before computing the CME and CSE for the system at hand, it is convenient to understand the physical meaning of the conserved currents associated with the Hamiltonian (\ref{NHWeylHam}). The most convenient way is the following\cite{M08}: the Hamiltonian (\ref{NHWeylHam}) is a quasi-Hermitian Hamiltonian that satisfies the relation $\eta H=H^{+}\eta$, where $\eta$ is a positive definite operator called the \emph{metric operator}. The positive definiteness of the metric operator allows us, among other things, to define a non-unitary similarity transformation $S$ ($\eta=S^{+}S$) that maps the NH Hamiltonian $H$ in Eq.(\ref{NHWeylHam}) onto a \emph{Hermitian} Hamiltonian $\hat{H}$ (for further details we refer to Appendix \ref{App:equilibrium}). Then, we can find the conserved currents of the auxiliary Hermitian Hamiltonian and use the mapping $S$ on these conserved currents to find the corresponding conserved quantities in the non-Hermitian model.
\Al{In certain systems, where the similarity matrix $S$ cannot be constructed explicitly, it is still possible to identify certain conserved quantities. These quantities are associated with the operators that are symmetries of the system. Namely, given an operator $\mathcal{O}$, we can construct another operator $\mathcal{O}^{\prime}=\eta\mathcal{O}$, whose time evolution is described, according to Eqs.(\ref{Heis1},\ref{TevolNH}), as follows:}
\begin{equation}
\frac{d\mathcal{O^{\prime}}(t)}{d t}=i \eta e^{iH t}[H,\mathcal{O}]e^{-iH t}.\label{Heis2}
\end{equation}
\Al{We see that this new operator $\mathcal{O^{\prime}}$ now possesses a conventional time evolution of the quantum mechanics in the Heisenberg picture. Also, if the original operator $\mathcal{O}$ is a symmetry of the problem (that is $[H,\mathcal{O}]=0$), then the new operator $\mathcal{O^{\prime}}=\eta\mathcal{O}$ defines a conserved quantity as well. This discussion allows us to motivate the use of a bi-orthogonal formulation in our paper. It is clear, indeed, that the expression in Eq.(\ref{Heis2}), constructed with the help of the operator $\mathcal{O^{\prime}}=\eta\mathcal{O}$, follows the standard time evolution in terms of the conjugate wavefunctions $(\bra{\psi},\ket{\psi})$.}
\Al{Alternatively, instead of using the modified operator $\mathcal{O^{\prime}}$, we could have kept the operator $\mathcal{O}$ and defined a modified conjugate wavefunction $\bra{\psi}\eta$. The pair $(\bra{\psi}\eta,\ket{\psi})$ is called a bi-orthogonal set. We will make use of this formulation in the next sections.}
\Al{Let us now apply these results to the model defined in Eq.(\ref{NHWeylHam}). The corresponding procedure, developed in Ref.~\cite{AMS17}, utilizes the metric operator $\eta=\mathbf{1}+\frac{m_5}{m}\gamma_5$. It turns out that the Hermitian model associated with the non-Hermitian Hamiltonian $H$ corresponds to a massive Dirac spinor $\chi$ with mass $M=\sqrt{m^2-m^2_5}$. We construct the $U(1)$ conserved current $\hat{j}^{\mu}=\overline{\chi}\gamma^{\mu}\chi$ associated with the spinor $\chi$. After applying the inverse mapping $S$, we get the corresponding current in the non-Hermitian system in terms of the fields $\psi^{+}$ and $\psi$:}
\begin{equation}
j^{\mu}=\psi^{+}\gamma^{0}(\mathbf{1}+\frac{m_5}{m}\gamma_5)\gamma^{\mu}\psi=\psi^{+}\gamma^{0}\eta\gamma^{\mu}\psi=\bar{\psi}\eta\gamma^{\mu}\psi.\label{conservedcurrent}
\end{equation}
\Al{Since $\partial_{\mu}\hat{j}^{\mu}=0$, one can trivially show that the current $j^{\mu}$ is conserved as well, $\partial_{\mu}j^{\mu}=0$. We thus see that $j^{\mu}=\eta J^{\mu}$ is a conserved current, with $J^{\mu}=\gamma^\mu$ being a symmetry of the Hamiltonian in Eq.(\ref{NHWeylHam}).}
As will be discussed in the next section, the most important consequence of having the conserved current $j^{\mu}$ is that we can define a chemical potential $\mu$ associated with $j^0=\eta$.
We immediately see that the current consists of a piece proportional to the identity, as for an abelian current in the Hermitian case, together with a chiral current weighted by $m_5/m$ that implies a chiral imbalance. We will show in the next Section and in Appendix \ref{App:equilibrium} that a system defined by the Hamiltonian (\ref{NHWeylHam}) that exchanges particles in a manner defined by this precise chemical potential $\mu$ is in a truly equilibrium thermal state with non-vanishing CME and CSE.
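The algebra above can be verified directly. The following sketch (our own check; it uses the standard Dirac representation of the gamma matrices, which may differ from the convention of Eq.~\eq{eq:gamma}) confirms that $\eta=\mathbf{1}+(m_5/m)\gamma_5$ intertwines $H$ and $H^{+}$, that $\eta$ is positive definite for $|m_5|<|m|$, and that the spectrum is real with the gap of the auxiliary Hermitian model, $M=\sqrt{m^2-m_5^2}$.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac representation: beta = gamma^0 = sz x I, alpha^i = sx x sigma^i,
# gamma_5 = sx x I (our convention for this check).
beta = np.kron(sz, I2)
alpha = [np.kron(sx, s) for s in (sx, sy, sz)]
gamma5 = np.kron(sx, I2)

m, m5 = 1.0, 0.4
k = np.array([0.3, -0.7, 1.1])

H = sum(ki * ai for ki, ai in zip(k, alpha)) + m * beta + m5 * beta @ gamma5
eta = np.eye(4) + (m5 / m) * gamma5

# H is non-Hermitian, but eta H = H^+ eta (quasi-Hermiticity) ...
assert not np.allclose(H, H.conj().T)
assert np.allclose(eta @ H, H.conj().T @ eta)
# ... eta is positive definite for |m5| < |m| ...
assert np.linalg.eigvalsh(eta).min() > 0
# ... and the spectrum is real: +-sqrt(k^2 + M^2) with M^2 = m^2 - m5^2.
w = np.linalg.eigvals(H)
assert np.allclose(w.imag, 0, atol=1e-8)
E = np.sqrt(k @ k + m**2 - m5**2)
assert np.allclose(np.sort(w.real), [-E, -E, E, E])
```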
\section{Computation of CSE and CME with biorthogonal quantum mechanics}
Here we will tackle the problem using the biorthogonal quantum mechanics formalism\cite{Br14}. Within this formalism, we distinguish between the eigenstates of $H$: $H\psi_s=\varepsilon^s_k\psi_s$, their complex conjugates: $\psi^{+}_sH^+=\psi^{+}_s\varepsilon^s_k$, the bi-orthogonal states $\phi_s$: $H^+\phi_s=\varepsilon^s_k\phi_s$, and their complex conjugates, $\phi^{+}_sH=\varepsilon^s_k\phi^+_s$. The point is that, because $H$ is not Hermitian, $\psi_s\neq \phi_s$ and $\psi^+_s\neq\phi^+_s$. Also, due to the same lack of Hermiticity, the states are not orthogonal, $\braket{\psi^+_s|\psi_{s^{\prime}}}\neq \delta_{ss^{\prime}}$, where $\braket{\cdot|\cdot}$ is the standard scalar product in the corresponding Hilbert space. However, the state sets $\psi_s$ and $\phi_s$ form a bi-orthogonal basis:
\begin{equation}
\braket{\psi^+_s|\phi_{s^{\prime}}}=\braket{\phi^+_s|\psi_{s^{\prime}}}\propto\delta_{ss^{\prime}}.
\end{equation}
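These relations are easy to illustrate numerically. In the toy sketch below (an arbitrary $2\times2$ non-Hermitian matrix with a real spectrum, chosen by us purely for illustration), the right eigenvectors of $H$ are not mutually orthogonal, while the left/right pairs $(\phi_s,\psi_{s^{\prime}})$ are bi-orthonormal.

```python
import numpy as np

H = np.array([[1.0, 2.0], [0.5, -1.0]])   # non-Hermitian; eigenvalues +-sqrt(2)

w, psi = np.linalg.eig(H)                  # right eigenvectors: H psi_s = eps_s psi_s
phi = np.linalg.inv(psi).conj().T          # left states: H^+ phi_s = eps_s phi_s

assert np.allclose(w.imag, 0)              # the spectrum is real
assert np.allclose(H.conj().T @ phi, phi * w)

# The psi_s alone are NOT orthogonal ...
assert not np.allclose(psi.conj().T @ psi, np.eye(2))
# ... but the bi-orthogonal pairs satisfy <phi_s|psi_s'> = delta_ss'.
assert np.allclose(phi.conj().T @ psi, np.eye(2))
```

Here the left states are obtained from the inverse of the right-eigenvector matrix, which automatically fixes the normalization $\braket{\phi^+_s|\psi_{s^{\prime}}}=\delta_{ss^{\prime}}$.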
For the model (\ref{NHWeylHam}) we can define a metric operator $\eta$ that not only fulfills the quasi-Hermiticity condition $\eta H=H^{+}\eta$ but is also positive definite. The existence of such an operator simplifies the construction of the bi-orthogonal basis sets, since the two bases are related to each other through $\eta$:
\begin{equation}
\phi_s=\frac{1}{\braket{\psi^+_s|\eta\psi_s}}\eta\psi_s.
\end{equation}
With this particular normalization, we have $\braket{\psi^+_s|\phi_{s^{\prime}}}=\braket{\phi^+_s|\psi_{s^{\prime}}}=\delta_{ss^{\prime}}$. For the Hamiltonian at hand (\ref{NHWeylHam}), such a metric operator is $\eta=\mathbf{1}+\frac{m_5}{m}\gamma_5$\cite{AMS17}. The existence of a metric operator, which allows us to define a well-defined inner product in the corresponding Hilbert space, implies a unitary time evolution of the states as long as the spectrum is real, so a consistent quantum-mechanical description of the system is possible despite its being non-Hermitian. \Al{Also, it is now easy to see that, within the bi-orthogonal formalism, any operator will evolve in time according to the conventional Heisenberg picture.}
Another relevant consequence of the existence of the metric operator is that $\eta$ is a conserved quantity, since, as we mentioned, the matrix $\eta$ fulfills the pseudo-Hermiticity condition \Al{(remember Eq.(\ref{TevolNH}))}. Although $\eta$ does not commute with the non-Hermitian Hamiltonian $H$\cite{SBM18}, it allows for the construction of a unitary evolution. \Al{The existence of a conserved quantity makes it possible to define a Lagrange multiplier $\mu$ associated with the operator $\eta$. Since $\eta$ is a conserved quantity, this Lagrange multiplier $\mu$ plays the role of a chemical potential. Consequently, we can define a new Hamiltonian}
\begin{equation}
\mathcal{H}=H-\mu\eta,\label{HwithB}
\end{equation}
\Al{as is done in standard Hermitian statistical mechanics. Of course, due to the non-Hermitian nature of the problem, the conserved quantity does not need to commute with $H$. Instead, to be conserved, the corresponding operator should satisfy the aforementioned pseudo-Hermiticity condition.
}
\Al{However, the existence of a common eigenbasis of $H$ and any operator $\mathcal O$ is possible if and only if the operator $\mathcal O$ commutes with the Hamiltonian $H$, irrespective of the Hermiticity of $H$. This means that we will not be able to find a common basis for $\eta$ and $H$ in terms of the eigenstates of the number operator, as happens in conventional Hermitian quantum mechanics. This problem may be circumvented by building the bi-orthogonal basis, which is, in turn, constructed by diagonalizing the new Hamiltonian~\eq{HwithB}, $\mathcal{H}$, instead of the original Hamiltonian $H$. }
\Al{
In order to compute the non-Hermitian version of the Chiral Magnetic Effect, we consider the model Eq.(\ref{HwithB}) in a classical background of an external constant magnetic field $\bm{B}$ that points along the third direction. As it is a straightforward exercise to obtain the Landau levels for this model, we do not present the details of the derivation. However, we wish to highlight two properties of these Landau levels.}
\Al{
First, the algebraic wavefunction structure in the non-Hermitian system does not differ from the Hermitian case: the system is translationally invariant along the direction of the magnetic field $\bm{B}$, so the momentum along this direction is a conserved integral of motion. Thus, we may use a standard Fourier transformation of the wavefunction along this direction. The dispersion relation of $\mathcal{H}$ for a non-zero chemical potential is presented in Fig.~\ref{fig:bands}(a).}
\Al{
Second, the wavefunctions are highly degenerate, as in the Hermitian case, with the same Landau degeneracy. The current along the magnetic field becomes diagonal. Therefore, following the standard procedure used in the Hermitian case, we can integrate over the transverse spatial directions when computing quantum averages. This approach greatly simplifies the calculations and allows us to treat the problem as quasi-one-dimensional.}
Although from the perspective of constructing a thermal equilibrium ensemble the lack of Hermiticity in the system might pose a problem\cite{RB15}, here we argue on general grounds that this is actually not the case in our model, since the one-particle correlation function built from the bi-orthogonal basis satisfies the \Al{Kubo-Martin-Schwinger (KMS) periodicity condition: $\braket{\Psi^+(0)\Psi(\tau^{\prime})}=-\braket{\Psi^{+}(\beta)\Psi(\tau^{\prime})}$ \cite{K57,MS59,HHW67}. It is known in the context of quantum statistical mechanics that states satisfying the KMS boundary condition extremize the von Neumann entropy $S=-Tr[\rho \log \rho]$, where $\rho$ is the density matrix, so they describe equilibrium states\cite{HHW67}.} The point is to notice that, for non-zero $\mu$, the time evolution of any field operator is generated by the exponential of $\mathcal{H}$ (and not of $H$), so the one-particle correlation function satisfies the KMS boundary condition and we are able to build an equilibrium ensemble\cite{FW71}. \Al{In Appendix \ref{App:equilibrium} we provide an explicit proof that the correlation functions of the non-Hermitian system considered in the present work can be mapped to the correlation functions of an equilibrium thermal state, thus satisfying the KMS condition.} This fact has also been pointed out in the existing literature on non-Hermitian systems\cite{J07,BPP20}.
\Al{In our particular case, using the effective one-dimensional model, we will focus only on the lowest Landau level (LLL); after integrating over the perpendicular coordinates and explicitly writing the Landau level degeneracy $\rho=2\pi eB_3$,} the equilibrium thermal average of any observable $\mathcal{O}$ will be
\begin{equation}
\braket{\mathcal{O}}=\frac{e^2 B_3}{4\pi^2}\sum_{\omega_n}\int^{\infty}_{-\infty} d k_3 Tr[\mathcal{O}G_0(i\omega_n,k_3)],
\end{equation}
where $G_0(i\omega_n,k_3)$ is the single-particle propagator in imaginary time:
\begin{equation}
G_0(i\omega_n,k_3)=\sum_{s,n}\frac{\ket{\psi_s}\bra{\phi_s}}{i\omega_n-\varepsilon^{s}_{\mu}(k_3)},\label{Gfunction}
\end{equation}
and $(\psi_s,\phi_s)$ are the bi-orthogonal sets of single-particle eigenstates of the model (\ref{HwithB}) in the presence of an external magnetic field $\bm{B}=B_3\hat{\bm{z}}$: $\mathcal{H}\psi_s=\varepsilon^{s}_{\mu}(k_3)\psi_s$, $\mathcal{H}^+\phi_s=\varepsilon^{s}_{\mu}(k_3)\phi_s$. The generic label $s$ comprises the band label, the spin index $\tau$, and the \Al{Landau level index $N$}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.23]{CMEdelta.pdf}
\end{center}
\caption{(color online) CME as a function of $\mu/m$ for three values of $\delta=m_5/m$. The vanishing CME of the Hermitian case, $\delta=0$, is recovered.}
\label{fig:CMEdelta}
\end{figure}
For operators defined as $\mathcal{O}=\frac{\partial \mathcal{H}}{\partial \lambda}$, we can generalize the Feynman-Hellmann theorem to the bi-orthogonal basis (see Appendix \ref{App:HFtheorem}), provided the eigenvalues are real:
\begin{eqnarray}
\braket{\phi^+_s|\mathcal{O}\psi_s}=\braket{\phi^+_s|\frac{\partial \mathcal{H}}{\partial \lambda}\psi_s}=\frac{\partial \varepsilon_{s}}{\partial \lambda},
\end{eqnarray}
obtaining, after performing the Matsubara summation,
\begin{equation}
\braket{\mathcal{O}}=\frac{e^2 B_3}{4\pi^2}\sum_{s,n}\int^{\infty}_{-\infty} d k_3\frac{\partial \varepsilon^{s}_{\mu}(k_3)}{\partial \lambda}n_F(\varepsilon^{s}_{\mu}(k_3)),\label{tevO}
\end{equation}
where $n_F(x)$ is the Fermi distribution function \emph{in the absence of the chemical potential}; the chemical potential is part of the spectrum.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[scale=0.2]{CSEdelta.pdf} &
\includegraphics[scale=0.2]{CSEm.pdf} \\
(a) & (b)
\end{tabular}
\caption{(color online) (a): Regularized CSE as a function of $\mu$ for three values of $\delta=m_5/m$. We fix the mass parameter to be $m=0.3$. (b): CSE as a function of $\mu$ for three values of $m$ and fixed $\delta=0.6$.}
\label{fig:CSE}
\end{figure}
For the case of CME, $J_3=\frac{\partial \varepsilon^s_{\mu}(k_3)}{\partial k_3}$, so
\begin{equation}
\braket{J^3}=\frac{e^2B_3}{4\pi^2}\sum_{s,N}\int^{\infty}_{-\infty} d k_3\frac{\partial \varepsilon^{s,N}_{\mu}(k_3)}{\partial k_3}n_F(\varepsilon^{s,N}_{\mu}(k_3)).\label{CMEBiorthodef}
\end{equation}
The dispersion relation for the LLL ($N=0$) sector is (see Fig.(\ref{fig:bands}(b))):
\begin{equation}
\varepsilon^{s,0}_{\mu}(k_3)=-\mu+s\sqrt{(k_3-\delta \mu)^2+m^2(1-\delta^2)},
\end{equation}
where $\delta=\frac{m_5}{m}$ and $s=\pm1$, while for $N>0$, we have
\begin{equation}
\varepsilon^{s,\tau,N}_{\mu}(k_3)=-\mu+s\sqrt{(\sqrt{k^2_3+\omega^2_c N}+\tau\delta \mu)^2+m^2(1-\delta^2)}.
\end{equation}
For the $N>0$ Landau levels, the spin degree of freedom $\tau=\pm1$ appears explicitly. In Fig.(\ref{fig:bands}(b)) we have plotted the Landau level spectrum for $N=0$ and $N>0$. The all-important difference between the eigen-energies for $N=0$ and $N>0$ is that, while $\varepsilon^{s,\tau,N}_{\mu}(k_3)$ with $N>0$ is an even function of $k_3$ for any value of $m$, $\delta=m_5/m$, and $\mu$, the energy $\varepsilon^{s,0}_{\mu}(k_3)$ with $N=0$ is not. This means that, when taking the derivative with respect to $k_3$ and integrating over a symmetric interval, the $N>0$ Landau levels do not contribute to the integral in (\ref{CMEBiorthodef}), but the $N=0$ level does.
The result turns out to be (see Fig.(\ref{fig:CMEdelta})):
\begin{equation}
\braket{J^3}=\frac{e^2B_3}{2\pi^2}\frac{m_5}{m}\mu.\label{CMEBiortho}
\end{equation}
This is the principal result of this Letter. For non-zero values of the mass $m_5$, which is the parameter that controls the non-Hermiticity of $\mathcal{H}$, there is a \emph{non-vanishing CME in equilibrium}.
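This result can be reproduced numerically from the LLL dispersion alone. The sketch below (our own check) evaluates Eq.(\ref{CMEBiorthodef}) restricted to the $N=0$ level at $T\to0$, with the Fermi function replaced by a step and a symmetric momentum cutoff, in units where $e^2 B_3/4\pi^2=1$; it recovers the value $2\delta\mu$, i.e., Eq.(\ref{CMEBiortho}).

```python
import numpy as np

def cme_lll(mu, m, delta, cutoff=200.0, n=400_001):
    """T -> 0 LLL contribution to Eq. (CMEBiorthodef), symmetric cutoff
    |k3| < cutoff, in units e^2 B_3 / (4 pi^2) = 1 (n_F replaced by a step)."""
    k3 = np.linspace(-cutoff, cutoff, n)
    dk = k3[1] - k3[0]
    M2 = m**2 * (1.0 - delta**2)
    total = 0.0
    for s in (+1.0, -1.0):
        root = np.sqrt((k3 - delta * mu)**2 + M2)
        eps = -mu + s * root                      # N = 0 dispersion
        deps = s * (k3 - delta * mu) / root       # d(eps)/d(k3)
        total += np.sum(deps * (eps < 0)) * dk    # occupied states only
    return total

mu, m, delta = 0.5, 0.3, 0.6
# Eq. (CMEBiortho): <J^3> = (e^2 B_3 / 2 pi^2) delta mu, i.e. 2*delta*mu here.
assert abs(cme_lll(mu, m, delta) - 2 * delta * mu) < 1e-2
```

As anticipated in the text, the occupied part of the upper ($s=+1$) branch is symmetric around $k_3=\delta\mu$ and integrates to zero, so the whole effect comes from the laterally shifted lower branch.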
The chiral separation effect (CSE) is obtained by computing the average value of the chiral current, represented by the operator $J^{i}_5=e\alpha^i\gamma_5$. We can follow the same route as in the case of the CME. We add a term $b_3\alpha_3\gamma_5$ to the Hamiltonian (\ref{HwithB}) and compute the spectrum in the presence of the parameter $b_3$. Then, we apply the Feynman-Hellmann theorem, taking the derivative with respect to $b_3$ and constructing the expectation value for each Landau level. We send the parameter $b_3$ to zero after the calculation.
It is a lengthy but straightforward calculation to check that, for the $N>0$ sector, $\partial \varepsilon^{s,N}_\mu(k_3,b_3)/\partial b_3$ is an odd function of $k_3$ in the limit $b_3\to 0$ for all values of $m$, $m_5$, and $\mu$. This implies that the integral over $k_3$ is zero, so these levels do not contribute to the CSE. In contrast, for the $N=0$ sector, we simply have
$\partial \varepsilon^{s}_{\mu}(k_3,b_3)/\partial b_3=1$,
so
\begin{equation}
\braket{J^3_5}=\frac{e^2 B_3}{4\pi^2}\sum_{s}\int^{\infty}_{-\infty} d k_3 n_F(\varepsilon^{s,0}_{\mu}(k_3,b_3=0)).\label{CSEBiorthodef}
\end{equation}
We have plotted $\braket{J^3_5}$ in Figs.(\ref{fig:CSE}a,\ref{fig:CSE}b) as a function of $\mu$ for fixed $m$ and several values of $\delta=m_5/m$, and as a function of $\mu$ for fixed $\delta$ and several values of $m$.
Performing the integral we finally have
\begin{equation}
\braket{J^3_5}=\frac{e^2 B_3}{2\pi^2}\left(\frac{1}{\epsilon}+\Theta[\mu-m]\sqrt{\mu^2-m^2(1-\delta^2)}\right),\label{CSEBiortho}
\end{equation}
with $\epsilon\ll1$. We note that there is a divergent contribution to the CSE. It is a particular feature of $(1+1)$ dimensions that there is a duality between the chiral and charge currents: the charge current operator representing the CME is the chiral density, while the chiral current operator $j^5_1$, relevant for the study of the CSE, is the same operator as the charge density. Having the same origin as the standard charge density, we regularize it in the same way.
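The cutoff-independent part of this result can be checked in the same way as the CME. In the sketch below (our own check, in units $e^2 B_3/4\pi^2=1$), the $s=-1$ LLL branch is dropped, as it produces the divergent $1/\epsilon$ piece, while the $s=+1$ branch is occupied at $T\to 0$ on an interval of length $2\sqrt{\mu^2-m^2(1-\delta^2)}$, reproducing the finite term of Eq.(\ref{CSEBiortho}) for $\mu>m$.

```python
import numpy as np

def cse_lll_finite(mu, m, delta, cutoff=5.0, n=200_001):
    """Finite (s = +1) part of Eq. (CSEBiorthodef) at T -> 0, in units
    e^2 B_3 / (4 pi^2) = 1; the s = -1 branch (the divergent 1/epsilon
    term of Eq. (CSEBiortho)) is omitted."""
    k3 = np.linspace(-cutoff, cutoff, n)
    eps_plus = -mu + np.sqrt((k3 - delta * mu)**2 + m**2 * (1 - delta**2))
    # length of the occupied region of the upper LLL branch
    return np.sum(eps_plus < 0) * (k3[1] - k3[0])

mu, m, delta = 0.5, 0.3, 0.6
expected = 2 * np.sqrt(mu**2 - m**2 * (1 - delta**2))
assert abs(cse_lll_finite(mu, m, delta) - expected) < 1e-3
```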
\section{Conclusions}
In the present Letter we have demonstrated that the CME in equilibrium is possible when non-Hermitian systems are considered. The key ingredient is to realize that the CME is zero if charge conservation is imposed on the system. However, charge conservation, associated with the $U(1)$ symmetry, is not fulfilled in non-Hermitian systems in the same way as in conventional Hermitian ones.
Another fact deserving attention is that there is not a unique metric operator associated with the non-Hermitian Hamiltonian fulfilling the pseudo-Hermiticity condition. While this has no practical consequence for the construction of a bi-orthogonal basis (the averages of observables do not depend on any particular choice of the metric operator), the observation is relevant because we can associate different chemical potentials to different metric operators understood as conserved quantities in the non-Hermitian sense. Interestingly, all the metric operators are related to each other by a similarity transformation\cite{M08}, so we can generalize the results obtained here to other chemical potentials by modifying the spectrum correspondingly.
Finally, to the best of our knowledge, there are no experimental realizations of fermionic non-Hermitian systems with a real spectrum to test our predictions. However, there are impressive experimental advances in the area of non-Hermitian $\mathcal{PT}$-symmetric photonic systems and other condensed matter analogs\cite{ZVP14,H18}. In fact, the experimental observation of the CME employing superconducting quantum circuit technology and synthetic magnetic fields has recently been proposed\cite{TZL18}. We suggest the same experimental setup to test our theory, extended with balanced gain and loss\cite{QNO18}.
Besides, other topological equilibrium effects similar to the CME have been proposed to occur in electromagnetism\cite{AS17,Y17,ARN17,CCL18}, with the optical helicity and the optical chirality playing the role that the chiral symmetry plays in ultrarelativistic fermionic systems. There, the biorthogonal formalism has proved useful for handling the effects of dissipation and loss in electromagnetism\cite{ABN18,VEM18}. The natural question is then how the topologically-related responses associated with these symmetries are modified by the presence of non-Hermitian effects.
\section{Acknowledgements}
We kindly acknowledge inspiring conversations with K. Landsteiner about non-Hermitian systems and the physics underlying the CME.
A.C. acknowledges financial support through the MINECO/AEI/FEDER, UE Grant No. FIS2015-73454-JIN. and the Comunidad de Madrid MAD2D-CM Program (S2013/MIT-3007), and the Ramon y Cajal program through the grant RYC2018-023938-I. The research of M.C. was partially supported by Grant No. 0657-2020-0015 of the Ministry of Science and Higher Education of Russia.
Motivated by the limited communication capacity of physical systems, control with quantized feedback \cite{elia2001stabilization,fu2005sector,you2011attainability,kang2015coarsest,zhou2019adaptive} has been an active research area for decades, dating back to an early work by Kalman \citep{kalman1956nonlinear}. By coarsely compressing the system input and output into sectionalized levels through a quantizer, the dynamical system can be stabilized, possibly with $\mathcal{H}_2$ and $\mathcal{H}_{\infty}$ performance guarantees, under a low communication load. The main line of the existing literature is to understand and mitigate the side effects of quantization. However, an explicit dynamical model is required, which might be impossible to obtain in practical scenarios.
Recent years have witnessed a renewed surge in data-driven control for unknown linear systems \cite{de2019formulas,de2021low,van2020data,van2020noisy}. Instead of identifying a descriptive model via system identification, the direct data-driven framework computes a control law from sampled system trajectories. A key feature of this approach is that it does not require the well-known persistent excitation condition, hence shedding light on the case where data is insufficient for system identification. By invoking a data-based uncertainty representation, it has been shown in \cite{van2020noisy} that for informative data, a stabilizing controller can be efficiently found via a Linear Matrix Inequality (LMI) \citep{boyd1994linear}.
In this paper, we take an initial step towards quantized control in the direct data-driven approach and focus on the quantized stabilization problem for unknown noisy linear systems with a single input. In particular, we limit the input quantizer to be logarithmic (i.e., the quantization levels are linear in logarithmic scale) with a linear state feedback input, which has been proved \citep{elia2001stabilization} to be able to attain the coarsest quantization density for stabilization of deterministic linear systems. By leveraging a recently developed matrix S-lemma \citep{van2020noisy}, we prove a sufficient and necessary condition for the existence of a common stabilizing controller for all possible dynamics that reflect the data, in terms of an LMI. Moreover, we propose a Semi-definite Programming (SDP) to solve a uniform lower bound for the quantization density. By establishing its connections to unstable eigenvalues of the state matrix, we further prove a necessary rank condition on the data for quantized feedback stabilization.
\subsection{Related work}
\textbf{Quantized control.} Control using quantized feedback has drawn increasing attention of the control community from the seminal work \citep{elia2001stabilization}. Its major contribution is to show that for linear systems with a single input, the coarsest quantizer is logarithmic, and the associated quantization density can be computed using the unstable eigenvalues of the state matrix. \cite{fu2005sector} interpret the quantized stabilization problem as an $\mathcal{H}_{\infty}$ control problem based on the sector bound method, and extend it to multiple-input multiple-output (MIMO) systems. Inspired by \cite{fu2005sector}, our work solves a minimax $\mathcal{H}_{\infty}$ control problem to find the coarsest density. More pertinent to our work is the quantized control for linear uncertain systems. \cite{hayakawa2009adaptive} proposes a time-varying adaptive control law for asymptotic stabilization of uncertain noiseless linear systems by solving a series of Riccati equations. However, we aim to stabilize a set of systems consistent with the data using a common quantized state feedback controller. \cite{kang2015coarsest} considers the element uncertainty in the state matrix of a controllable canonical system, and addresses the stabilization problem of systems with two blocks of uncertainties (due to the plant and the quantization) as done in our work. In contrast, we consider a more natural form of uncertainty which is reflected by the data. Other works include \cite{gao2008new,fu2009finite,coutinho2010input,shen2017quantized}, and \cite{corradini2008robust,liu2012sector,yu2016adaptive,zhou2019adaptive} for nonlinear systems.
\textbf{Direct data-driven control.} This line of work originates from \textit{Willems et al.'s fundamental lemma} proposed by \cite{willems2005note}, which states that the dynamical model can be replaced with data from sufficiently excited systems. Motivated by it, \cite{de2019formulas} represents the dynamics using historical trajectories under the Persistent Excitation (PE) condition and formulates an SDP to compute a stabilizing controller for deterministic linear systems. This data-driven framework is then applied to model predictive control to deal with safety constraints \citep{coulson2019data, berberich2020data}. When the data is corrupted by noise, \cite{de2021low} proposes an SDP with robustness to the noise for stabilization in an ad-hoc way. For the case where data is insufficient to satisfy the PE condition, the seminal work \cite{van2020data} establishes sufficient and necessary conditions on the informativity of the data for several fundamental control problems. \cite{van2020noisy, bisoffi2021trade} compute a stabilizing controller via an LMI from noisy data by leveraging a matrix S-lemma. Other data-driven work includes \cite{guo2021data, xu2021data, rotulo2021online, berberich2020combining}. Our work extends the framework in \cite{van2020noisy} to address quantized stabilization problems and answer fundamental questions regarding the coarsest quantization density and the condition on the noisy data.
\section{Problem formulation}
In this paper, we consider the following discrete-time linear time-invariant system
\begin{equation}\label{equ:sys}
x(k+1) = Ax(k) + Bu(k)+w(k),
\end{equation}
where $x(k) \in \mathbb{R}^n$ denotes the state, $u(k) \in \mathbb{R}$ is the control input, and $\{w(k) \in \mathbb{R}^n\}$ is an uncorrelated noise sequence. As an initial attempt, we focus on the setting where $A \in \mathbb{R}^{n \times n}$ is an unknown state matrix but the input matrix $B \in \mathbb{R}^{n}$ is known \textit{a priori}, and the pair $(A,B)$ is stabilizable.
\begin{figure}[t]
\centering
\includegraphics[height=45mm]{loga}
\caption{The logarithmic quantizer with density $\rho$.}
\label{pic:quantizer}
\end{figure}
Since the coarsest quantizer that quadratically stabilizes linear time-invariant systems is logarithmic \citep{elia2001stabilization}, we aim to stabilize (\ref{equ:sys}) via logarithmically quantized state feedback, provided only with a finite-length input-state trajectory $\{x(0), u(0), x(1), u(1), \dots, x(T)\}$. Define the quantization levels $u_i = \rho^{-i} u_0, ~i \in \{1,2,\dots\}$, with density $0<\rho<1$. Then, the quantized controller takes the form
\begin{equation}\label{def:quantizer}
u(k) = f(v(k)), ~~ v(k) = Kx(k),
\end{equation}
where a logarithmic quantizer $f(\cdot): \mathbb{R} \rightarrow \mathbb{R}$ (see Fig. \ref{pic:quantizer}) is given by
\begin{equation}\label{def:delta}
f(v)= \begin{cases}u_{i}, & \text { if } \frac{1}{1+\delta} u_{i}<v \leq \frac{1}{1-\delta} u_{i}, v>0 \\ 0, & \text { if } v=0 \\ -f(-v), & \text { if } v<0\end{cases} ~~\text{with}~~ \delta = \frac{1-\rho}{1+\rho},
\end{equation}
and $K \in \mathbb{R}^{1 \times n}$ is the feedback gain. We make a standard technical assumption \citep{de2019formulas,van2020noisy} on the noise.
\begin{assum}\label{assum}
The sequence $W = \{w(0), w(1), \dots, w(T-1)\}$ is unknown, yet satisfies a quadratic bound
\begin{equation}\label{assumption}
\begin{bmatrix}
I \\
W^{\top}
\end{bmatrix}^{\top}\begin{bmatrix}
\Phi_{11} & \Phi_{12} \\
\Phi_{12}^{\top} & \Phi_{22}
\end{bmatrix}\begin{bmatrix}
I \\
W^{\top}
\end{bmatrix} \geq 0
\end{equation}
with a symmetric matrix $\Phi_{11}$ and a negative definite matrix $\Phi_{22}<0$.
\end{assum}
Assumption \ref{assum} is in the form of a quadratic matrix inequality, which is able to capture important prior knowledge on the system such as energy and sample covariance bounds over a finite horizon. For example, letting $\Phi_{12} = 0$ and $\Phi_{22} = - I$, it reduces to $WW^{\top} = \sum_{k=0}^{T-1}w(k)w(k)^{\top} \leq \Phi_{11}$.
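This reduction can be checked numerically. The sketch below (with illustrative sizes $n=3$, $T=20$ and a hypothetical noise level $\omega_{\max}$; all values are assumptions for demonstration) draws noise columns with $\|w(k)\|_2^2 \leq \omega_{\max}$ and evaluates the quadratic bound of Assumption \ref{assum} with $\Phi_{11} = T\omega_{\max} I$, $\Phi_{12}=0$, $\Phi_{22}=-I$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, w_max = 3, 20, 0.1   # illustrative sizes and noise level (assumptions)
# Draw each noise column with ||w(k)||_2^2 <= w_max; the scaling below is a
# simple illustrative sampler, not an exact uniform draw from the ball.
W = rng.normal(size=(n, T))
W *= np.sqrt(w_max) * rng.uniform(size=T) / np.linalg.norm(W, axis=0)
Phi11, Phi12, Phi22 = T * w_max * np.eye(n), np.zeros((n, T)), -np.eye(T)
# Quadratic bound of Assumption 1: Phi11 + Phi12 W^T + W Phi12^T + W Phi22 W^T >= 0,
# which with this choice of Phi reduces to W W^T <= Phi11.
Q = Phi11 + Phi12 @ W.T + W @ Phi12.T + W @ Phi22 @ W.T
min_eig = np.min(np.linalg.eigvalsh(Q))   # nonnegative when the bound holds
```

The minimum eigenvalue of $Q$ is nonnegative, confirming that this column-norm bound on the noise implies the quadratic bound.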
A main challenge for quantized stabilization is that the unknown matrix $A$ cannot be deduced uniquely from data, which instead constitutes an uncertainty set. In fact, our approach relies on a data-based representation of the uncertainty~\citep{van2020noisy}. Define the data matrices $X_{-} = \{x(0), x(1) , \dots, x(T-1)\}$, $U = \{u(0), u(1), \dots, u(T-1)\}$, $X_{+} = \{x(1), x(2) , \dots, x(T)\}$, which are constrained by system dynamics
$
X_{+} = AX_{-}+ BU + W.
$
Substituting $W$ into (\ref{assumption}), it follows that all possible $A$ consistent with the data must satisfy a quadratic matrix inequality
\begin{equation}\label{equ:newset}
\begin{bmatrix}
I \\
A^{\top}
\end{bmatrix}^{\top}
\begin{bmatrix}
I & X_{U} \\
0 & -X_{-} \\
\end{bmatrix}\begin{bmatrix}
\Phi_{11} & \Phi_{12} \\
\Phi_{12}^{\top} & \Phi_{22}
\end{bmatrix}\begin{bmatrix}
I & X_{U} \\
0 & -X_{-} \\
\end{bmatrix}^{\top}
\begin{bmatrix}
I \\
A^{\top}
\end{bmatrix} \geq 0,
\end{equation}
with $X_{U} = X_{+}-BU$, which defines the uncertainty set
$\Sigma = \{ A | (\ref{equ:newset})~\text{holds} \}.$
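As a sanity check, membership in $\Sigma$ can be tested in a few lines of Python. The system, horizon, and noise level below are hypothetical illustrations (not those of our experiments); the true state matrix always passes the test when the noise satisfies Assumption \ref{assum}:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, w_max = 3, 20, 0.05              # hypothetical sizes and noise level
A_true = rng.uniform(-1, 1, (n, n))    # plays the role of the unknown state matrix
B = rng.uniform(-1, 1, (n, 1))
U = rng.normal(size=(1, T))
W = rng.normal(size=(n, T))
W *= np.sqrt(w_max) / np.linalg.norm(W, axis=0)   # ||w(k)||_2^2 <= w_max
X = np.zeros((n, T + 1))
X[:, 0] = rng.normal(size=n)
for k in range(T):                     # simulate x(k+1) = A x(k) + B u(k) + w(k)
    X[:, k + 1] = A_true @ X[:, k] + B[:, 0] * U[0, k] + W[:, k]
Xm, Xp = X[:, :T], X[:, 1:]
XU = Xp - B @ U

Phi11, Phi12, Phi22 = T * w_max * np.eye(n), np.zeros((n, T)), -np.eye(T)

def in_Sigma(A, tol=1e-9):
    # Membership in Sigma: the residual V = X_U - A X_- plays the role of the
    # noise and must satisfy the quadratic bound of Assumption 1.
    V = XU - A @ Xm
    Q = Phi11 + Phi12 @ V.T + V @ Phi12.T + V @ Phi22 @ V.T
    return np.min(np.linalg.eigvalsh(Q)) >= -tol
```

Here `in_Sigma(A_true)` returns `True`, while a matrix far from the data-consistent set fails the test.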
Our goal is now to design a controller in the form of (\ref{def:quantizer}) to stabilize all systems with $A \in \Sigma$. This naturally raises the following fundamental question: can we characterize the conditions for such a controller to exist? Clearly, the data must be sufficiently informative such that the uncertainty set $\Sigma$ is small enough. Moreover, the quantizer $f(\cdot)$ cannot be too coarse, which implies that the quantization density $\rho$ has a uniform lower bound.
In this paper, we provide an affirmative answer to the question. By converting the stabilization problem to an $\mathcal{H}_{\infty}$ control problem and applying a recently developed matrix S-lemma \citep{van2020noisy}, we derive a sufficient and necessary condition on the data matrices as well as the quantization density in terms of an LMI. Moreover, we show that the coarsest quantizer and the associated controller can be found by solving a minimax $\mathcal{H}_{\infty}$ norm optimization problem, which can be formulated as an efficient SDP. By relating the minimax problem to the unstable eigenvalues of the state matrix, we further prove an explicit rank condition on the data matrices, which is necessary for quantized stabilization.
\section{Sufficient and necessary condition for quantized stabilization}
In this section, we establish a sufficient and necessary condition for (\ref{equ:sys}) to be stabilizable with logarithmically quantized linear feedback. We first convert the stabilization problem to an $\mathcal{H}_{\infty}$ control problem over the uncertainty set $\Sigma$.
It has been shown in \cite{fu2005sector} that (\ref{equ:sys}) is stabilizable via quantized linear state feedback with quantization density $\rho$ if and only if the uncertain system
\begin{equation}\label{equ:uncertain}
x(k+1) = Ax(k) + B(1+\Delta)v(k), ~~\Delta \in [-\delta, \delta]
\end{equation}
is quadratically stabilizable via linear state feedback, where $\delta$ is defined by (\ref{def:delta}). Fig. \ref{pic:diagram} illustrates the control diagram of (\ref{equ:uncertain}) by viewing $\Delta$ as the input uncertainty.
\begin{figure}[t]
\centering
\includegraphics[height=30mm]{control}
\caption{The control diagram of the uncertain system (\ref{equ:uncertain}).}
\label{pic:diagram}
\end{figure}
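The sector-bound property underlying this equivalence, $f(v) = (1+\Delta)v$ with $|\Delta| \leq \delta$, can be checked on a sketch of the logarithmic quantizer. Here the levels are taken over all integer indices $i$ so that the whole positive axis is covered, and $u_0$, $\rho$ are free choices:

```python
import numpy as np

def log_quantize(v, u0=1.0, rho=0.5):
    # Sketch of the logarithmic quantizer: levels u_i = rho^{-i} u0 (all
    # integer i), and f(v) = u_i whenever u_i/(1+delta) < v <= u_i/(1-delta).
    delta = (1 - rho) / (1 + rho)
    if v == 0:
        return 0.0
    if v < 0:
        return -log_quantize(-v, u0, rho)
    # The sector [v(1-delta), v(1+delta)] spans a ratio of exactly 1/rho,
    # the same as the level grid, so it contains exactly one level.
    i = int(np.ceil(np.log(v * (1 - delta) / u0) / np.log(1.0 / rho)))
    return rho ** (-i) * u0
```

For every nonzero $v$, the output satisfies $|f(v) - v| \leq \delta |v|$, i.e., $f(v) = (1+\Delta)v$ with $|\Delta| \leq \delta$.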
By the well-known small-gain theorem~\citep{zhou1998essentials}, the stabilization of (\ref{equ:uncertain}) is equivalent to an $\mathcal{H}_{\infty}$ control problem. Define the transfer function from signal $y$ to $v$ as
$$G_{A,K}(z) = \left[\begin{array}{c|c}
A +BK & B \\
\hline K & 0
\end{array}\right]= K(zI-A-BK)^{-1}B.$$
We use $\| G_{A,K}(z)\|_{\infty}$ to denote its $\mathcal{H}_{\infty}$ norm. Then, the uncertain system (\ref{equ:uncertain}) is stabilizable via $v = Kx$ if and only if
\begin{equation}\label{equ:h_inf}
\delta^{-1} >\| G_{A,K}(z)\|_{\infty}.
\end{equation}
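For intuition, $\|G_{A,K}(z)\|_{\infty}$ can be approximated by gridding the unit circle; the sketch below is illustrative only (an exact computation would use, e.g., a bisection method):

```python
import numpy as np

def hinf_norm(A, B, K, n_grid=4000):
    # Approximate H-infinity norm of G(z) = K (zI - A - BK)^{-1} B by
    # sampling |z| = 1; assumes A + BK is Schur stable.
    Acl = A + B @ K
    n = Acl.shape[0]
    vals = []
    for w in np.linspace(0.0, np.pi, n_grid):
        z = np.exp(1j * w)
        G = K @ np.linalg.solve(z * np.eye(n) - Acl, B)
        vals.append(np.linalg.svd(G, compute_uv=False)[0])
    return max(vals)
```

For example, with the scalar system $A = 0$, $B = 1$, $K = 0.5$, we have $G(z) = 0.5/(z-0.5)$, whose norm (attained at $z = 1$) equals one.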
Since the condition (\ref{equ:h_inf}) is expressed in the frequency domain, we convert it to an explicit algebraic inequality by applying the bounded real lemma in robust control theory~\citep{zhou1996robust}.
\begin{lemma}[Bounded real lemma]\label{lem:bounded}
$\|G_{A,K}(z)\|_{\infty} < 1/\delta$ if and only if there exists $P>0$ such that
\begin{equation}\label{equ:brl}
I-\delta^2B^{\top}PB >0, ~~\text{and}~~ K^{\top}K + ( A + BK)^{\top}(P^{-1} - \delta^2 BB^{\top})^{-1}( A + BK) < P .
\end{equation}
\end{lemma}
By Lemma \ref{lem:bounded}, the stabilization problem is now equivalent to solving the $\mathcal{H}_{\infty}$ control problem in (\ref{equ:brl}) subject to $A \in \Sigma$. We show that (\ref{equ:brl}) can be rewritten as a quadratic inequality by a standard change of variables. Let $Y = P^{-1}$, $X = KY$. Pre- and post-multiplying the second inequality in (\ref{equ:brl}) by $P^{-1}$, it holds that
$$
-Y + (AY+BX)^{\top}(Y- \delta^2 BB^{\top})^{-1}(AY+BX) + X^{\top}X < 0.
$$
A Schur complement argument further yields that
$$
\begin{bmatrix}
Y-X^{\top}X & (AY+BX)^{\top} \\
AY+BX & Y-\delta^2 BB^{\top}
\end{bmatrix} > 0,
$$
which again by Schur complement is equivalent to
$$
Y-\delta^2 BB^{\top} - (AY+BX)(Y-X^{\top}X)^{-1} (AY+BX)^{\top} >0, ~~\text{and}~~
Y-X^{\top}X >0.
$$
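The Schur complement fact used twice above states that, for a symmetric block matrix whose lower-right block is positive definite, positive definiteness of the whole matrix is equivalent to positive definiteness of the Schur complement. A small numerical illustration (with arbitrary matrices $P$, $Q$, $R$, unrelated to the symbols above):

```python
import numpy as np

rng = np.random.default_rng(3)
# Schur complement fact: for symmetric [[P, Q], [Q.T, R]] with R > 0, the
# block matrix is positive definite iff P - Q R^{-1} Q.T > 0.
R = np.eye(2) + 0.1 * np.ones((2, 2))              # positive definite
Q = rng.normal(size=(2, 2))
P = Q @ np.linalg.solve(R, Q.T) + 0.5 * np.eye(2)  # Schur complement = 0.5 I > 0
M = np.block([[P, Q], [Q.T, R]])
min_eig = np.min(np.linalg.eigvalsh(M))            # positive, by the fact above
```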
Reorganizing it in a quadratic form of $A$ with $Z = Y-X^{\top}X>0$, we have that
\begin{equation}\label{equ:performance}
\begin{bmatrix}
I \\
A^{\top}
\end{bmatrix}^{\top}
\begin{bmatrix}
Y- \delta^2 BB^{\top} -BXZ^{-1}X^{\top}B^{\top} & -BXZ^{-1}Y^{\top} \\
-YZ^{-1}X^{\top}B^{\top} & -YZ^{-1}Y^{\top}
\end{bmatrix}
\begin{bmatrix}
I \\
A^{\top}
\end{bmatrix}>0.
\end{equation}
Then, the quantized stabilization problem leads to the following question: when does (\ref{equ:performance}) hold for all $A$ satisfying (\ref{equ:newset})? Since both (\ref{equ:performance}) and (\ref{equ:newset}) are quadratic inequalities in $[I ~~A^{\top}]$, we apply the matrix-valued S-lemma~\citep{van2020noisy} to derive an LMI condition.
\begin{lemma}[Matrix S-lemma]\label{lem:matrixS}
Let $M = \begin{bmatrix}
M_{11} & M_{12} \\
M_{12}^{\top} & M_{22}
\end{bmatrix} \in \mathbb{R}^{(n+n)\times(n+n)}$ and $N = \begin{bmatrix}
N_{11} & N_{12} \\
N_{12}^{\top} & N_{22}
\end{bmatrix} \in \mathbb{R}^{(n+n)\times(n+n)}$ be symmetric matrices with $M_{22}\leq 0$ and $N_{22} \leq 0$. Suppose that $\text{ker}(N_{22}) \subset \text{ker}(N_{12})$ and there exists some matrix $A$ satisfying
$
\begin{bmatrix}
I \\
A^{\top}
\end{bmatrix}^{\top}
N
\begin{bmatrix}
I \\
A^{\top}
\end{bmatrix} > 0
$ (the so-called generalized Slater condition).
Then, it follows that $\begin{bmatrix}
I \\
A^{\top}
\end{bmatrix}^{\top}
M
\begin{bmatrix}
I \\
A^{\top}
\end{bmatrix} > 0$ for all $A$ satisfying $\begin{bmatrix}
I \\
A^{\top}
\end{bmatrix}^{\top}
N
\begin{bmatrix}
I \\
A^{\top}
\end{bmatrix} \geq 0$ if and only if there exists $\alpha \geq 0$ and $\beta \geq 0$ such that
$
M-\alpha N \geq \begin{bmatrix}
\beta I & 0 \\
0 & 0
\end{bmatrix}.
$
\end{lemma}
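The easy (``if'') direction can be illustrated numerically in the scalar case $n = 1$: whenever $M - \alpha N \succeq \mathrm{diag}(\beta, 0)$, the $M$-form is at least $\beta$ wherever the $N$-form is nonnegative. The matrices below are arbitrary illustrations:

```python
import numpy as np

# Scalar (n = 1) illustration of the "if" direction of the matrix S-lemma.
N = np.array([[1.0, 0.0], [0.0, -1.0]])     # N-form >= 0  <=>  a^2 <= 1
alpha, beta = 0.5, 0.1
slack = np.array([[0.3, 0.1], [0.1, 0.1]])  # positive semidefinite extra term
M = alpha * N + np.diag([beta, 0.0]) + slack  # so M - alpha*N >= diag(beta, 0)
# Scan all scalar "A = a" with nonnegative N-form and record the smallest M-form.
worst = min(
    np.array([1.0, a]) @ M @ np.array([1.0, a])
    for a in np.linspace(-1.0, 1.0, 201)
)
```

The minimum value `worst` stays above $\beta$, exactly as the lemma guarantees.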
In our setting, we define the partitioned matrices
\begin{align}
M & =\begin{bmatrix}
M_{11} & M_{12} \\
M_{12}^{\top} & M_{22}
\end{bmatrix}= \begin{bmatrix}
Y- \delta^2 BB^{\top} -BXZ^{-1}X^{\top}B^{\top} & -BXZ^{-1}Y^{\top} \\
-YZ^{-1}X^{\top}B^{\top} & -YZ^{-1}Y^{\top}
\end{bmatrix}, \\
N & = \begin{bmatrix}
N_{11} & N_{12} \\
N_{12}^{\top} & N_{22}
\end{bmatrix} =
\begin{bmatrix}
I & X_{+}-BU \\
0 & -X_{-} \\
\end{bmatrix}\begin{bmatrix}
\Phi_{11} & \Phi_{12} \\
\Phi_{12}^{\top} & \Phi_{22}
\end{bmatrix}\begin{bmatrix}
I & X_{+}-BU \\
0 & -X_{-} \\
\end{bmatrix}^{\top}\label{equ:N}.
\end{align}
Now, we verify that $M$ and $N$ satisfy the kernel assumption in Lemma \ref{lem:matrixS}. Note that $M_{22}\leq 0$ and $N_{22} \leq 0$ trivially hold since $Z= Y-X^{\top}X > 0$ and $\Phi_{22} <0$. Clearly, $\text{ker}(N_{12}) = \text{ker}\big((\Phi_{12} + (X_{+}-BU)\Phi_{22})X_{-}^{\top}\big)$, and $\text{ker}(N_{22}) = \text{ker}(X_{-}^{\top})$ as $\Phi_{22}<0$. Thus, $\text{ker}(N_{22}) \subset \text{ker}(N_{12})$. The generalized Slater condition implies that the set $\Sigma$ has at least one interior point, which is a mild assumption in our problem. Then, we have the following main result under the Slater condition.
\begin{theorem}\label{theorem:Lmi}
Assume that there exists some matrix $A$ such that
$
\begin{bmatrix}
I \\
A^{\top}
\end{bmatrix}^{\top}
N
\begin{bmatrix}
I \\
A^{\top}
\end{bmatrix} > 0.
$
Then, the system is stabilizable via logarithmically quantized linear feedback with density $\rho = (1-\delta)/(1+\delta)$ for all $A \in \Sigma$ if and only if there exist $Y>0$, $X$, and scalars $\alpha \geq 0, \beta >0, \delta > 0$ such that the following LMI holds
\begin{equation}\label{equ:LMI}
\hspace{-0.085cm}\begin{bmatrix}
Y-\delta^2BB^{\top}-\beta I & 0 & BX & 0 \\
0 & 0& Y & 0\\
X^{\top}B^{\top} & Y^{\top} & Y & X^{\top} \\
0 & 0 & X & I
\end{bmatrix}
-\alpha
\begin{bmatrix}
I & X_{U} \\
0 & -X_{-} \\
0 & 0\\
0 & 0
\end{bmatrix}\begin{bmatrix}
\Phi_{11} & \Phi_{12} \\
\Phi_{12}^{\top} & \Phi_{22}
\end{bmatrix}\begin{bmatrix}
I & X_{U} \\
0 & -X_{-} \\
0 & 0\\
0 & 0
\end{bmatrix}^{\top} \geq 0,
~~\begin{bmatrix}
Y & X^{\top} \\
X & I
\end{bmatrix} > 0.
\end{equation}
Moreover, if (\ref{equ:LMI}) is feasible for some $Y$ and $X$, then a stabilizing controller is given by $u(k) = f(v(k)) $ with quantization density $\rho= (1-\delta)/(1+\delta)$ and $v(k)= Kx(k)$ with $K = XY^{-1}$.
\end{theorem}
\begin{proof}
To prove the ``if'' statement, suppose that (\ref{equ:LMI}) is feasible. We take the Schur complement of the first LMI in (\ref{equ:LMI}) with respect to the identity block $I$ and obtain
$$
\begin{bmatrix}
Y-\delta^2BB^{\top}-\beta I & 0 & BX \\
0 & 0& Y \\
X^{\top}B^{\top} & Y^{\top} & Z \\
\end{bmatrix}
-\alpha
\begin{bmatrix}
I & X_{U} \\
0 & -X_{-} \\
0 & 0\\
\end{bmatrix}\begin{bmatrix}
\Phi_{11} & \Phi_{12} \\
\Phi_{12}^{\top} & \Phi_{22}
\end{bmatrix}\begin{bmatrix}
I & X_{U} \\
0 & -X_{-} \\
0 & 0\\
\end{bmatrix}^{\top} \geq 0,
~~\begin{bmatrix}
Y & X^{\top} \\
X & I
\end{bmatrix} > 0.
$$
Then, we again compute the Schur complement with respect to $Z$ to yield $M-\alpha N \geq \begin{bmatrix}
\beta I & 0 \\
0 & 0
\end{bmatrix}$, where we have used the fact that $Z = Y - X^{\top}X >0$ by the second inequality of (\ref{equ:LMI}). Thus, by the matrix S-lemma in Lemma \ref{lem:matrixS}, the inequality (\ref{equ:performance}) holds for all $A\in \Sigma$. According to our change of variables, a stabilizing controller $K$ can be solved by $K = XY^{-1}$.
To prove the ``only if'' statement, suppose that the system is stabilizable with quantization density $\rho$ for all $A \in \Sigma$.
Then, by the sector bound equivalence and Lemma \ref{lem:bounded}, the inequality (\ref{equ:performance}) holds for all $A\in \Sigma$ with $Z = Y- X^{\top}X > 0$. Thus, the first inequality in (\ref{equ:LMI}) follows from Lemma \ref{lem:matrixS} and Schur complement arguments. The second inequality in (\ref{equ:LMI}) holds since $Z = Y- X^{\top}X > 0$. This completes the proof.
\end{proof}
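For readers implementing Theorem \ref{theorem:Lmi}, the left-hand side of the first LMI in (\ref{equ:LMI}) can be assembled as below. This is a structural sketch only: feasibility of candidate values must still be certified by an SDP solver, and all dimensions and inputs are placeholders:

```python
import numpy as np

def lmi_lhs(Y, X, B, Xm, XU, Phi11, Phi12, Phi22, alpha, beta, delta):
    # Assemble the left-hand side of the first LMI in Theorem 1 for given
    # candidate decision variables (Y, X, alpha, beta, delta) and data.
    n = Y.shape[0]
    Z0, z0 = np.zeros((n, n)), np.zeros((n, 1))
    top = np.block([
        [Y - delta**2 * (B @ B.T) - beta * np.eye(n), Z0, B @ X, z0],
        [Z0, Z0, Y, z0],
        [(B @ X).T, Y.T, Y, X.T],
        [z0.T, z0.T, X, np.eye(1)],
    ])
    E = np.block([
        [np.eye(n), XU],
        [Z0, -Xm],
        [Z0, np.zeros_like(Xm)],
        [z0.T, np.zeros((1, Xm.shape[1]))],
    ])
    Phi = np.block([[Phi11, Phi12], [Phi12.T, Phi22]])
    return top - alpha * (E @ Phi @ E.T)
```

The returned matrix is symmetric of size $(3n+1)\times(3n+1)$; the LMI requires it (and the second, smaller block matrix) to be positive semidefinite.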
It can be clearly observed from the first LMI in (\ref{equ:LMI}) that as $\delta$ increases, the system becomes harder to stabilize. Hence, there exists a coarsest quantization density such that the system cannot be stabilized for any lower density.
\section{Coarsest quantization density}
In this section, we derive a tight lower bound of the quantization density $\rho$ for the system to be stabilizable, provided with the data matrices $\{X_{-}, X_{+}, U\}$.
Consider the following minimax $\mathcal{H}_{\infty}$ optimization problem
\begin{equation}\label{prob:minmax}
\min \limits_{K} \max \limits_{A \in \Sigma} ~~\|G_{A,K}(z) \|_{\infty}.
\end{equation}
Let $\mathcal{K}$ be the set of optimal controllers, $\mathcal{S}$ the set of optimal state matrices, and $\gamma^*$ the optimal value of (\ref{prob:minmax}). Then, we show that $\delta =(1-\rho)/(1+\rho)$ is in fact upper bounded by $1/\gamma^*$.
\begin{theorem}
Suppose that (\ref{prob:minmax}) is feasible. Then, there exists a stabilizing logarithmically quantized controller with density $\rho=(1-\delta)/(1+\delta)$ for all systems with $A \in \Sigma$ if and only if $\delta < 1/ \gamma^*$.
\end{theorem}
\begin{proof}
To prove the ``if'' statement, suppose that $\delta < 1/\gamma^*$. Let $K^* \in \mathcal{K}$ be an optimal controller to (\ref{prob:minmax}). Then, for any $A \in \Sigma$, it follows that
$$\|G_{A,K^*}(z)\|_{\infty} \leq \max \limits_{A\in \Sigma} \|G_{A,K^*}(z)\|_{\infty} = \gamma^*.$$
Multiplying both sides of the above inequality by $\delta$ yields $\delta\|G_{A,K^*}(z)\|_{\infty}< 1$, since $\delta < 1/\gamma^*$. Hence, by (\ref{equ:h_inf}), the feedback gain $K^*$ stabilizes (\ref{equ:sys}) with quantization density $\rho = (1-\delta)/(1+\delta)$.
To prove the ``only if'' statement, suppose that a controller $K$ with quantization density $\rho$ stabilizes all systems with $A\in \Sigma$. Let $A^* \in \mathcal{S}$ be an optimal state matrix for (\ref{prob:minmax}). Then, it follows that
$$
\delta \gamma^* = \delta \min \limits_{K'} \|G_{A^*,K'}(z)\|_{\infty} \leq \delta \|G_{A^*,K}(z)\|_{\infty} <1.
$$
Therefore, we must have that $\delta < 1/\gamma^*$. The proof is now completed.
\end{proof}
Note that (\ref{prob:minmax}) may not have a solution if $\Sigma$ is unbounded. In fact, the boundedness of $\Sigma$ is a necessary condition for (\ref{prob:minmax}) to be feasible, as will be shown in the next section. Now, we propose an efficient SDP to solve (\ref{prob:minmax}). Clearly, (\ref{prob:minmax}) is equivalent to
$$
\min \limits_{K,\delta>0} 1/\delta, ~~~~~ \text{subject to} ~~\|G_{A,K}(z) \|_{\infty} < 1/\delta, ~~ \forall A \in \Sigma.
$$
The $\mathcal{H}_{\infty}$ norm constraint in the above minimization problem can be formulated as an LMI by virtue of Theorem \ref{theorem:Lmi}, which leads to the following maximization problem
\begin{equation}\label{equ:SDP}
\max \limits_{Y,X,\alpha,\beta,\delta} \delta, ~~~~~\text{subject to} ~~(\ref{equ:LMI}) ~~\text{and}~~\delta>0, Y>0, \alpha \geq 0, \beta >0,
\end{equation}
which is an SDP and can be efficiently solved by modern solvers, e.g., CVX~\citep{cvx}.
\section{Necessary condition for quantized stabilization}
So far, we have established both sufficient and necessary conditions for quantized stabilization and have proposed an SDP to compute the coarsest quantization density, all based on the LMI in (\ref{equ:LMI}). To provide further insight, we derive a necessary and explicit rank condition on the data matrices by leveraging classical quantized control results.
Consider the following max-min problem, obtained by interchanging the order of minimization and maximization in (\ref{prob:minmax}):
\begin{equation}\label{prob:maxmin}
\max \limits_{A\in \Sigma} \min \limits_{K} \|G_{A,K}(z) \|_{\infty}.
\end{equation}
According to \cite{fu2005sector}, the inner minimization can be expressed as
$
\min \limits_{K} \|G_{A,K}(z) \|_{\infty} = \prod_{i}|\lambda_i|
$,
where $\lambda_i$ denotes the $i$-th unstable eigenvalue of $A$. Thus, (\ref{prob:maxmin}) is equivalent to a maximization problem
$$
\max \limits_{A \in \Sigma} \prod_{i}|\lambda_i|.
$$
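This inner minimum is easy to evaluate directly from the spectrum. A sketch (the diagonal matrix below is an arbitrary illustration, not one of our experimental systems):

```python
import numpy as np

def min_hinf_over_K(A):
    # min_K ||G_{A,K}(z)||_inf equals the product of |lambda_i| over the
    # eigenvalues of A outside the unit circle (Fu & Xie); for a Schur
    # stable A the minimum is 0 (take K = 0).
    lam = np.linalg.eigvals(A)
    unstable = np.abs(lam)[np.abs(lam) > 1]
    return float(np.prod(unstable)) if unstable.size else 0.0

A = np.diag([1.2910, -1.3228, 0.0528])   # arbitrary example matrix
gamma = min_hinf_over_K(A)               # product of the two unstable magnitudes
```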
By the well-known max-min inequality~\citep{boyd2004convex}, it follows that
$$
\max \limits_{A \in \Sigma} \prod_{i}|\lambda_i| \leq \min \limits_{K} \max \limits_{A \in \Sigma} \|G_{A,K}(z) \|_{\infty} = \gamma^*.
$$
Hence, the maximum $\max \limits_{A \in \Sigma} \prod_{i}|\lambda_i|$ must be bounded for $\gamma^*$ to be finite. Since $\Sigma$ depends only on the data, we then ask the following question: under which conditions on the data matrices are the eigenvalues of $A$ bounded for all $A \in \Sigma$?
A straightforward sufficient condition is that $\Sigma$ is bounded, which is equivalent to $\text{rank}(X_{-}) = n$. However, it is non-trivial to prove that the rank condition is also necessary for the eigenvalues to be bounded. The following example shows that when $\Sigma$ is unbounded, there may still exist a nilpotent matrix $A \in \Sigma$ with unbounded elements yet zero eigenvalues.
\begin{example}
Consider a nilpotent matrix $
A = \begin{bmatrix}
0 & k \\
0 & 0
\end{bmatrix}
$ and input matrix $B = I$. The data matrices are given by $X_{-} = \begin{bmatrix}
1 & 1\\
0 & 0
\end{bmatrix}$, $U = I$, $X_{+} = I$. It can be easily verified that $X_{+} = AX_{-}+ BU$. Set $\Phi_{11} = I, \Phi_{12}=0, \Phi_{22}=-I$. Then, $A \in \Sigma$ for any $k \in \mathbb{R}$. Hence, $A$ can be unbounded while its eigenvalues remain bounded.
\end{example}
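The example can be verified in a few lines; the matrices are exactly those stated above:

```python
import numpy as np

B = np.eye(2)
Xm = np.array([[1.0, 1.0], [0.0, 0.0]])   # rank(X_-) = 1 < n, so Sigma is unbounded
U = np.eye(2)
Xp = np.eye(2)
Phi11, Phi12, Phi22 = np.eye(2), np.zeros((2, 2)), -np.eye(2)
for k in [0.0, 10.0, 1e6]:
    A = np.array([[0.0, k], [0.0, 0.0]])
    W = Xp - A @ Xm - B @ U                  # residual noise implied by A (here 0)
    Q = Phi11 + Phi12 @ W.T + W @ Phi12.T + W @ Phi22 @ W.T
    ok = np.min(np.linalg.eigvalsh(Q)) >= -1e-12   # A is in Sigma
    eigs = np.linalg.eigvals(A)              # both zero: A is nilpotent
```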
Nevertheless, we show by contradiction that the boundedness of $\Sigma$ is both sufficient and necessary for the eigenvalues of $A$ to be bounded for all $A \in \Sigma$, which leads to the following result.
\begin{theorem}
If there exists a stabilizing logarithmically quantized controller, then
$
\text{rank}(X_{-}) = n.
$
\end{theorem}
\begin{proof}
We prove by contradiction. Suppose that $\text{rank}(X_{-}) = r < n$. We construct a matrix $\bar{A} \in \Sigma$ which has unbounded eigenvalues. First, we define an invertible linear transformation $E = \begin{bmatrix}
E_{11} & E_{12} \\
E_{21} & E_{22}
\end{bmatrix} \in \mathbb{R}^{n \times n} $ such that $EX_{-} = \begin{bmatrix}
X_{r} \\
0
\end{bmatrix}
$ with $X_r \in \mathbb{R}^{r \times T}$, and a diagonal matrix $\Lambda = \text{diag}(0,\dots,0,1,\dots,1) \in \mathbb{R}^{n \times n} $ whose first $r$ diagonal elements are zero.
Let $A_0 \in \Sigma$, and $\bar{A} = A_0 + k \Lambda E$. Then, we prove that $\bar{A} \in \Sigma$. To see this, note that $\Lambda E X_{-} = 0$ by definition, and $\Sigma$ can be written as
$$
\begin{bmatrix}
I & X_{U}-AX_{-}
\end{bmatrix}\begin{bmatrix}
\Phi_{11} & \Phi_{12} \\
\Phi_{12}^{\top} & \Phi_{22}
\end{bmatrix}\begin{bmatrix}
I & X_{U}-AX_{-}
\end{bmatrix}^{\top} \geq 0,
$$
so that $\bar{A} \in \Sigma$ whenever $A_0 \in \Sigma$, since $\bar{A}X_{-} = A_0X_{-}$.
Next, we show that $\Lambda E$ has a non-zero eigenvalue. By construction, $E_{22}$ has full rank. Let $\lambda_{E}$ be an eigenvalue of $E_{22}$ and $x_{E}$ the associated eigenvector. Denote $x = \begin{bmatrix}
0 \\
x_E
\end{bmatrix} \in \mathbb{R}^{n}$. Then, it is straightforward to verify that $\Lambda E x = \lambda_E x$, which implies that $\lambda_E \neq 0$ is an eigenvalue of $\Lambda E$.
Hence, letting $k$ tend to infinity, we conclude that $\bar{A}$ has an unbounded eigenvalue. Thus, we have that $\max \limits_{A \in \Sigma} \prod_{i}|\lambda_i| = \infty$, which leads to a contradiction.
\end{proof}
Though the condition $\text{rank}(X_{-}) = n$ is only necessary, it provides a simple way to check whether a solution to (\ref{equ:LMI}) can exist. Moreover, it also holds in the limit of vanishing quantization ($\delta \to 0$), i.e., for general linear systems without quantization. Hence, our results extend \cite{van2020noisy} in that we provide a necessary and explicit rank condition for quadratic stabilization.
\section{Numerical examples}
\begin{figure}[t]
\centering
\includegraphics[height=50mm]{noise_exist}
\includegraphics[height=50mm]{noise_delta}
\caption{Left: the percentage of data sets for which (\ref{equ:LMI}) has a solution; Right: the mean of $\delta^2$ which is obtained by solving (\ref{equ:SDP}). The x-axis is set to be logarithmic for better exposition.}
\label{pic:noise}
\end{figure}
In this section, we present numerical examples to verify our theoretical results. The simulations are carried out using MATLAB 2020b on a laptop with a 2.8GHz CPU. The code is available at \url{https://github.com/lixc21/Data-driven-Quantized-Control}.
We randomly generate an open-loop unstable linear dynamical model
$$
A=\begin{bmatrix}
-0.192 & -0.936 & -0.814 \\
-0.918 & 0.729 & -0.724 \\
-0.412 & 0.735 & -0.516
\end{bmatrix},~
B=\begin{bmatrix}
-0.554 \\ 0.735 \\ 0.528
\end{bmatrix}
$$
by sampling its elements uniformly from the interval $[-1,1]$. The eigenvalues of $A$ are $1.2910$, $-1.3228$, $0.0528$. The time horizon of the trajectory is set to $T = 20$. We assume that the noise is independently sampled from a uniform distribution on a three-dimensional ball $\{w\in \mathbb{R}^3|~\|w\|_2^2\leq \omega_{\text{max}}\}$ with noise level $\omega_{\text{max}}$. The prior bound in (\ref{assumption}) on the noise is then given by $\Phi_{11}=T\omega_{\text{max}}I$, $\Phi_{12}=0$, $\Phi_{22}=-I$. It can be easily verified that the sampled noise satisfies the quadratic bound. The initial state and the control input at each time instant are sampled from the standard normal distribution.
First, we investigate the effect of the noise level on quantized stabilization. We let $\omega_{\text{max}}$ vary from $0$ to $1$. For each noise level, we generate $1000$ independent data sets, solve the LMI in (\ref{equ:LMI}), and plot the percentage of data sets for which (\ref{equ:LMI}) has a feasible solution, i.e., a stabilizing controller exists. We check that the generalized Slater condition required by Theorem \ref{theorem:Lmi} holds for all the data sets by verifying that $N$ in (\ref{equ:N}) has three positive eigenvalues. The result is displayed on the left of Fig. \ref{pic:noise}. Consistent with intuition, the percentage decreases as the noise level grows. This is because the uncertainty set of $A$ becomes larger, making it harder to find a quantized stabilizing controller. We obtain similar results for the coarsest quantization density, shown on the right of Fig. \ref{pic:noise}.
\begin{figure}[t]
\centering
\includegraphics[height=50mm]{bound_exist}
\includegraphics[height=50mm]{bound_delta}
\caption{Left: the percentage of data sets for which (\ref{equ:LMI}) has a solution; Right: the mean of $\delta^2$ which is obtained by solving (\ref{equ:SDP}). The x-axis is set to be logarithmic.}
\label{pic:bound}
\end{figure}
Then, we note that the prior bound on the noise in (\ref{assumption}) can also have a significant impact on the performance. To see this, we let the prior bound be larger than the true bound by setting $\Phi_{11}=\zeta \times T\omega_{\text{max}}I$, $\Phi_{12}=0$, $\Phi_{22}=-I$, where $\omega_{\text{max}}=0.005$ and the parameter $\zeta$ varies from 1 to 50. For each $\zeta$, we generate 1000 data sets independently and compute the percentage of data sets for which (\ref{equ:LMI}) has a solution. For the cases where a stabilizing controller exists, we further solve for the coarsest quantization density via (\ref{equ:SDP}). The generalized Slater condition is also verified to hold for all data sets. The results are shown in Fig. \ref{pic:bound}. Clearly, if we have a more accurate prior bound, we are more likely to stabilize the system via a lower quantization density.
\section{Conclusion}
In this paper, we have addressed the stabilization problem using logarithmically quantized feedback for partially unknown linear systems from noisy data. In particular, we have provided a sufficient and necessary condition for quantized stabilization in the form of an LMI. Moreover, we have solved for the coarsest quantization density as well as a stabilizing feedback gain via an SDP.
We believe that our results raise a range of interesting directions. For example, one may extend our results to the stabilization of general MIMO systems. Instead of solving a stabilizing controller, we can also study $\mathcal{H}_2$ and $\mathcal{H}_{\infty}$ performance guarantees of the closed-loop systems. For online control with quantized feedback, one can design an adaptive quantizer with a varying density as more data is collected per round.
\section{Acknowledgement}
We gratefully acknowledge support from the National Natural Science Foundation of China under Grant no. 62033006. We also would like to thank the anonymous reviewers for their helpful suggestions.
\section{Introduction}
Artificial neural networks (ANNs) give unprecedented results in machine learning tasks \cite{AlexNet2012, resnet2015, InceptionResNetV4, ViTOG, EffNetV2OG}, though they remain poorly understood. Theoretical studies focus on the equivalence between ANNs trained with (full-batch) gradient descent and kernel methods \cite{Jacot_NTK2018_Convergence, CNTK_Sanjeev, jaxgithub, Domingos21}. Specifically, it was shown that in the infinite-width limit, neural networks trained by gradient descent are equivalent to a kernel machine using the NTK \cite{Jacot_NTK2018_Convergence}. It was then shown that the necessary condition for the equivalence of a neural network model to kernel regression is being in a so-called `lazy-training' regime, where the weights remain approximately static \cite{LazyTraining}. This is equivalent to using a linearization of the neural network model about its initial parameterization \cite{LazyTraining, WideNNevolveaslinear}. Critics argue that being within this `lazy-training' regime gives results inconsistent with phenomena that we observe, such as feature learning \cite{TensorPrograms4}, and that using the linearization of a model around its initialization can degrade performance significantly \cite{LazyTraining}. Following this, many teams evaluated the differences and similarities between kernel regression using NTKs and neural networks across a variety of tasks \cite{CNTK_Sanjeev, FinitevsEmpirical2020, HarnessingThePowerArora2020}, but generally found that the performance differences depended on task and architecture \cite{DisentanglingFeatureLazy}.
Our team became interested in calculating these kernels for neural network models used in practice. A python library, neural-tangents, has been developed to dramatically increase the ease of calculating the tangent kernels of both finite- and infinite-width networks \cite{neural_tangents}, but this library has a few limitations. First, it is built upon the jax framework \cite{jaxgithub}. Jax is not currently natively supported on windows machines, and is still in pre-release. Jax has not yet developed as large a user-base as other popular frameworks like TensorFlow \cite{tensorflow} or PyTorch \cite{PyTorch} (see for example: \cite{kaggle_2021}), and until it does, using neural-tangents will require users to overcome the hurdle of mastering another deep learning framework. A different code-base that uses cupy \cite{cupy} was built to efficiently calculate the infinite convolutional NTK \cite{CNTK_Sanjeev}, but not empirical NTKs. Therefore, there is a niche in the community for software that calculates the tangent kernel within a framework that is widely used in the field: PyTorch.
We present torchNTK, a python library built on the PyTorch framework that calculates the empirical NTK using PyTorch autograd, and explicitly for multilayer perceptron networks. We developed torchNTK to study the time evolution of the NTK for models that practical users of AI systems interface with. In the remainder of this paper, we describe the algorithms used to calculate the NTK, benchmark our software to compare implementations, detail an initial experiment that demonstrates our software, and discuss plans for improving it.
\section{The Additive Components of the Neural Tangent Kernel} \label{AdditiveComponents}
Study of the NTK of finite networks of large size trained on large numbers of datapoints has been difficult due to the sizes of the matrices involved. If a neural network parameterized by $\theta$ acting on a dataset $X$ is denoted by $f_{\theta}(X)$, then the NTK matrix is the Gram matrix of the Jacobian of the network \cite{Jacot_NTK2018_Convergence} as follows:
\begin{equation}
J = (\nabla_{\theta} f_{\theta}(X)),\quad K^{NTK} = J^{\top}J.
\end{equation}
For a dataset of size $N$ and a network with $P$ parameters, the Jacobian is a $P\times N$ matrix. Considering that deep learning is applied to problems that generally have many datapoints, and that models have increased in size over time, the full Jacobian matrix is often too costly to hold in memory on consumer workstations. As a concrete example, for a dataset of size 60,000 and a network with 100,000 parameters represented in fp32, the Jacobian is a 24 gigabyte matrix, which is larger than the total available VRAM on most GPUs. Note that these sizes are typical for common toy problems like digit classification, but that modern architectures might have $10$ to $10^6$ times the number of parameters \cite{GPT3, resnet2015}. While the Jacobian is large due to the number of parameters, the NTK is of size $N\times N$, and for modestly sized datasets the NTK is more realistic to hold in memory.
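The arithmetic behind the 24-gigabyte figure, wrapped in a small helper (the function name is ours, not part of torchNTK):

```python
def jacobian_gigabytes(n_params, n_data, bytes_per_element=4):
    # Dense Jacobian storage: a P x N matrix of fp32 entries by default.
    return n_params * n_data * bytes_per_element / 1e9

gb = jacobian_gigabytes(100_000, 60_000)   # the example from the text: 24.0 GB
```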
In the over-parameterized regime, we can lower peak memory requirements by transforming the problem from holding a $P\times N$ matrix into holding many $N\times N$ matrices using a layerwise approach. These additive components have already been pointed out directly in works that derive algorithms for the calculation of the NTK \cite{Zhichaos_paper, CNTK_Sanjeev, Lee2019_empricalNTK}, and can be most explicitly represented as a sum over the layers $L$:
\begin{equation}
K^{\mathrm{NTK}} = \sum_{l=0}^L (\nabla_{\theta^l} f_{\theta}(X))^{\top} (\nabla_{\theta^l} f_{\theta}(X)).
\end{equation}
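Because the full Jacobian simply stacks the per-layer blocks, this identity holds exactly; a quick numerical check with random stand-in blocks (sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                                  # number of datapoints
layer_sizes = [7, 11, 4]               # parameters per layer (illustrative)
# Per-layer Jacobian blocks J_l (each P_l x N), stacked into the full J (P x N)
J_blocks = [rng.normal(size=(P_l, N)) for P_l in layer_sizes]
J = np.vstack(J_blocks)
K_full = J.T @ J                                   # N x N NTK from the full Jacobian
K_layerwise = sum(Jl.T @ Jl for Jl in J_blocks)    # sum of the additive components
```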
We hypothesize that the additive components representing the layers contain more specific information about the operations they represent and may be getting 'lost' in the full NTK, though we leave demonstrating that to future work. For that reason, these components are worthy of additional study, and in fact, there has been recent work on a spectral analysis of these layerwise kernels \cite{LayerwiseSpectral}.
We end this section by pointing out that contemporary works on weight matrices \cite{MartinMahoneyNature}, the Hessian \cite{LayerwiseHessian,Layerwise_Hessian}, and Fisher information matrices \cite{Karakida3_crossentropy} have all examined a 'layerwise' approach. The additive components of the NTK can be thought of as a natural extension that complements these modes of inquiry.
\section{Algorithmic Details}
TorchNTK is an accumulation of different methods to calculate the NTK, which can be broadly classified as either autograd or explicit differentiation. While autograd methods can handle any model, the explicit differentiation technique was benchmarked to be much faster on the MLP architectures to which it is limited.
In the following sections we derive the formulas used to recursively calculate the NTK for MLP architectures:
\subsection{Derivation of the NTK: MLP without bias}
Consider the following representation of a neural network with parameters $\theta$ and input $X \in \mathbb{R}^{d_0 \times n}$, where $d_0$ is the dimensionality of the input vector, or equivalently the width of the input layer of our neural network. Let us first consider neural networks composed of a series of matrix multiplications interrupted by non-linear activation functions; these networks are referred to as multilayer perceptrons. Below, we also use the convention that $X_{l}$ is the output of layer $l$, that $\sigma$ is some activation function, and that $W_l$ is the weight matrix of layer $l$.
$$f_{\theta}(X) = \mathbf{w}^{\top} \frac{1}{\sqrt{d_L}}\sigma(W^{\top}_L \frac{1}{\sqrt{d_{L-1}}} \sigma (... \frac{1}{\sqrt{d_2}} \sigma(W^{\top}_2 \frac{1}{\sqrt{d_1}} \sigma(W^{\top}_1 \mathbf{X}))))$$
where we have adopted the practice of dividing by the square root of the width of each layer, which is necessary to place ourselves in the kernel parameterization.
Let $S_l$ be the matrix whose $\alpha$th column is:
$$s_{\alpha}^l = D_{\alpha}^l \frac{W_{l+1}^\top}{\sqrt{d_l}} D_{\alpha}^{l+1} \frac{W_{l+2}^\top}{\sqrt{d_{l+1}}} D_{\alpha}^{l+2} ... \frac{W_{L}^\top}{\sqrt{d_{L-1}}} D_{\alpha}^{L} \frac{\mathbf{w}}{\sqrt{d_L}},$$
where $D$ is defined by
$$ D_{\alpha}^{k} \equiv \mathrm{diag}( \sigma^\prime(W_{k}^{\top} x_{\alpha}^{k-1} ) ). $$
Then one can show (see Appendix G.2 of \cite{Zhichaos_paper}):
$$K^{NTK} = X^\top_L X_L + \sum_{l=1}^L (S_l^\top S_l) \odot (X_{l-1}^\top X_{l-1})$$
The NTK is therefore a sum of components, each being the element-wise product of a covariance matrix of the features preceding the layer and a term related to the propagation of gradients inside the network.
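As a sanity check of this formula, the sketch below implements the forward pass and the usual backward recursion for the columns of $S_l$ for a tiny bias-free tanh MLP in NumPy (the architecture, data, and seed are arbitrary), and compares the resulting kernel against an NTK assembled from a finite-difference Jacobian over all parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
dims = [3, 4, 5]        # [d_0 (input), d_1, d_2 = d_L]; all values arbitrary
L = len(dims) - 1       # number of weight matrices before the readout w
n = 6                   # number of datapoints

X0 = rng.standard_normal((dims[0], n))
Ws = [rng.standard_normal((dims[l], dims[l + 1])) for l in range(L)]
w = rng.standard_normal(dims[-1])
sigma, dsigma = np.tanh, lambda z: 1.0 - np.tanh(z) ** 2

# forward pass, storing every layer output X_l and pre-activation
Xs, pres = [X0], []
for l in range(L):
    pres.append(Ws[l].T @ Xs[-1])
    Xs.append(sigma(pres[-1]) / np.sqrt(dims[l + 1]))

# backward recursion: column alpha of S[l] is df(x_alpha)/d(pre-activation_l)
S = [None] * L
S[L - 1] = dsigma(pres[-1]) * w[:, None] / np.sqrt(dims[-1])
for l in range(L - 2, -1, -1):
    S[l] = dsigma(pres[l]) * (Ws[l + 1] @ S[l + 1]) / np.sqrt(dims[l + 1])

K = Xs[-1].T @ Xs[-1]                         # readout-weight contribution
for l in range(L):
    K += (S[l].T @ S[l]) * (Xs[l].T @ Xs[l])  # note Xs[l] plays X_{l-1} here

# reference NTK from a central finite-difference Jacobian
def fwd(theta):
    i, h = 0, X0
    for l in range(L):
        sz = dims[l] * dims[l + 1]
        W = theta[i:i + sz].reshape(dims[l], dims[l + 1]); i += sz
        h = sigma(W.T @ h) / np.sqrt(dims[l + 1])
    return theta[i:] @ h

theta0 = np.concatenate([W.ravel() for W in Ws] + [w])
eps = 1e-5
Jac = np.array([(fwd(theta0 + eps * e) - fwd(theta0 - eps * e)) / (2 * eps)
                for e in np.eye(theta0.size)])
K_ref = Jac.T @ Jac        # K and K_ref agree up to finite-difference error
```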
\subsection{Derivation: MLP with bias}
We extend these results to include a bias vector:
$$f_{\theta}(x) = \mathbf{w}^{\top} \frac{1}{\sqrt{d_L}}\sigma(W^{\top}_L \frac{1}{\sqrt{d_{L-1}}} \sigma (... \frac{1}{\sqrt{d_2}} \sigma(W^{\top}_2 \frac{1}{\sqrt{d_1}} \sigma(W^{\top}_1 \mathbf{x} + B_1) +B_2) ... +B_{L-1}) +B_L) + \mathbf{B}$$
We need to update our terms as well, so that each layer's output is now:
$$X_{l} = \frac{1}{\sqrt{d_l}} \sigma(W^{\top}_{l} X_{l-1} + B_{l})$$
And update our definition of $D_l$:
$$D_{l} = \mathrm{diag}(\sigma^{\prime}(W_l^{\top} X_{l-1} + B_{l}))$$
We can now derive the bias vectors' contribution to the NTK for the bias of any layer, $B_{l}$. Taking the gradient with respect to each bias reveals that the bias vectors contribute components equal to the matrix $S_l$ described above, taken in an element-wise product with a matrix of all ones, $J_{d_l}$, which we describe below.
$$ \frac{\partial f_{\theta}(x)}{\partial B_{l}} = \mathbf{w}^{\top} \frac{1}{\sqrt{d_{L}}} \sigma^{\prime}(W_{L}^\top X_{L-1} + B_{L}) W_{L}^{\top} \sigma^{\prime}(W_{L-1}^\top X_{L-2} + B_{L-1}) \cdots \sigma^{\prime}(W_{l}^\top X_{l-1} + B_{l}) \frac{\partial B_{l}}{\partial B_{l}} $$
Substituting in the definition of $S_l$ from above (with our new definition of $D$):
$$ \frac{\partial f_{\theta}(x)}{\partial B_{l}} = S_l \frac{\partial B_{l}}{\partial B_{l}}$$
Considering that $B_{l}$ is a matrix in $\mathbb{R}^{d_{l} \times n}$, and that $S_{l}$ is the same for every element inside that matrix, we can represent the computation for the entire matrix of bias parameters as an element-wise product with a matrix of ones, $J$, which also lies in $\mathbb{R}^{d_{l} \times n}$. We notate the first dimension as a subscript and leave the second dimension understood; thus, $J_{d_{l}}$ is the matrix of all ones of shape $d_{l} \times n$. This allows us to write:
$$ \frac{\partial f_{\theta}(x)}{\partial B_{l}} = S_l \odot J_{d_{l}}$$
The NTK component from the bias at layer $l$ is therefore:
$$K^{NTK}_{B_l} = (S_l \odot J_{d_{l}})^{\top} (S_l \odot J_{d_{l}})$$
The total NTK can therefore be expressed as:
$$ K^{NTK} = X^\top_L X_L + J_{d_L}^\top J_{d_L} + \sum_{l=1}^L (S_l^\top S_l) \odot (X_{l-1}^\top X_{l-1}) + \sum_{l=1}^L (S_l \odot J_{d_l} )^\top (S_l \odot J_{d_l} ) $$
\subsection{Autograd Algorithms}
In addition to the algorithm described above that explicitly calculates the NTK for MLP architectures, there are additional algorithms included in torchNTK that calculate the NTK using autograd methods. Autograd methods, while slower than our explicit NTK calculation, extend to other PyTorch architectures and largely work 'out-of-the-box', reducing the user's margin of error:
\begin{itemize}
\item[1.] The first alternative makes a call to \textit{torch.autograd.functional.jacobian} across the dataset, stacks the resulting list of tensors from each datapoint, then simply constructs the NTK as:
$$K^{NTK} = \nabla f(x,\theta)^{\top} \nabla f(x,\theta)$$
\item[2.] A second alternative calls autograd on the model iteratively across each layer for each datapoint, and was adapted from the work of \cite{TENAS}.
\item[3.] A third alternative computes each row of the Jacobian vector product for each operation, then outputs each operation to a dictionary. This represents our 'layerwise' autograd method.
\item[4.] With PyTorch 1.11, a new \textit{torch.vmap} function was created to parallelize computations across the batch dimension. One specific use case is to speed up the computation of the Jacobian. This can also be applied to speed up the computation of the layerwise autograd method, and we have included it as a piece of experimental software.
\end{itemize}
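The first alternative can be sketched in a few lines. The tiny model, data, and shapes below are placeholders, and for readability the forward pass is written out functionally rather than through a \textit{Module}:

```python
import torch
from torch.autograd.functional import jacobian

torch.manual_seed(0)
N = 5
X = torch.randn(N, 3)                       # toy dataset
W1, b1 = torch.randn(4, 3), torch.randn(4)  # toy parameters
W2, b2 = torch.randn(1, 4), torch.randn(1)

def f(W1, b1, W2, b2):
    # scalar-output MLP evaluated on the whole dataset
    return (torch.tanh(X @ W1.T + b1) @ W2.T + b2).squeeze(-1)

# one Jacobian tensor per parameter group, each of shape (N, *param.shape)
jacs = jacobian(f, (W1, b1, W2, b2))
J = torch.cat([j.reshape(N, -1) for j in jacs], dim=1)   # N x P
K = J @ J.T                                              # N x N empirical NTK
```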
\section{Software Performance}
In this section we detail the performance differences and trade-offs between the various algorithms for two classes of models, an MLP and a CNN, at two different widths. All algorithms were benchmarked for their time to completion and maximum GPU memory allocation when calculating the final NTK of the same model for the same data on the same hardware inside an IPython kernel. We tested the algorithms on a local computer cluster equipped with an A100 DGX node, from which we requested 4 CPU cores and a single A100 GPU. All algorithms were tested using GPU tensors and models, except the \textbf{Full Jacobian} implementation, which is CPU only.
For each of the 'MLP' benchmarks, we created a neural network represented by a Module object with 4 fully-connected layers. Each hidden layer had a width of 100 neurons. The input data was a vector of length 100 drawn from the standard normal distribution. The network terminated in a single output neuron. Each hidden layer used the tanh activation function, while the output neuron was not passed through an activation function before calculating the NTK. As is common in the NTK literature, we used the NTK parameterization by dividing each layer's output by the square root of the width of that layer. The weights were kept the same between benchmarks using a random number generator seed, and were themselves drawn from the standard normal distribution.
As a check of our claim that layerwise computations are more memory efficient in the deep and narrow regime, we also calculated a memory benchmark for a high-parameter MLP, called 'MLP-h'. This model has 8 layers, each hidden layer with a width of 1000, an input shape of 1000, and tanh activation functions, and terminates in a single output neuron. All memory benchmarks were calculated from the maximum allocated memory on the GPU (\textit{torch.cuda.max\_memory\_allocated()}).
\begin{table}[!htbp]
\begin{center}
\title{MLP Benchmark Time}
\resizebox{\textwidth}{!}{%
\begin{tabular}{| c | c | c | c | c | c |}
\hline
\textbf{N Datapoints} & \multicolumn{5}{ c |}{\textbf{Time [sec]}} \\
\cline{2-6}
& \textbf{Full Jacobian} & \textbf{Autograd} & \textbf{L. Autograd} & \textbf{L. Autograd w vmap} & \textbf{Explicit Differentiation} \\
\hline
10 & 3.3e-3 & 5.95e-3 & 18.2e-3 & 3.97e-3 & 1.01e-3 \\ \hline
100 & 39e-3 & 47.8e-3 & 174e-3 & 10.3e-3 & 0.98e-3 \\ \hline
1000 & 3.71 & 467e-3 & 1.97 & 90.4e-3 & 1.06e-3 \\ \hline
10000 & 359 & 4.77 & 19.8 & 869e-3 & 9.03e-3 \\ \hline
30000 & OOM & 15.7 & 63.0 & 2.89 & 74.7e-3 \\ \hline
40000 & OOM & 21.7 & 84.1 & 4.06 & OOM \\ \hline
\end{tabular}}
\caption{For a range of dataset sizes, we ran our empirical NTK calculation algorithms on the same MLP model and measured the time to completion using the IPython magic function \%timeit, reporting the mean time here. For function calls longer than 10 seconds, a single function call's time is reported, calculated from the difference in system clock time. When an algorithm failed due to running out of system memory, we report 'OOM' instead. Generally we see that among autograd methods, the algorithms making use of DataLoader objects to feed the calculation (Autograd and L. Autograd with vmap) are faster, with torch.vmap enabling greater parallelization. With any appreciable amount of data, the method that calculates the full Jacobian becomes appreciably slower; so much so that for the next set of benchmarks on a larger model we exclude the algorithm from analysis (Tables \ref{T:MLP-h Benchmark Time} and \ref{T:MLP-h Benchmark Memory}). Explicit Differentiation is faster than the alternatives, owing to the engineering made to parallelize the computation across the entire dataset. With greater parallelization generally comes higher memory cost, which is explored further in Table \ref{T:MLP Benchmark Memory}.}
\label{T:MLP Benchmark Time}
\end{center}
\end{table}
\begin{table}[!htbp]
\begin{center}
\title{MLP Benchmark Memory}
\resizebox{\textwidth}{!}{%
\begin{tabular}{| c | c | c | c | c |}
\hline
\textbf{N Datapoints} & \multicolumn{4}{ c |}{\textbf{Memory [MB]}} \\
\cline{2-5}
& \textbf{Autograd} & \textbf{L. Autograd} & \textbf{L. Autograd w vmap} & \textbf{Explicit Differentiation} \\
\hline
10 & 2.82 & 1.61 & 1.24 & 0.45 \\ \hline
100 & 25.39 & 12.89 & 13.31 & 1.3 \\ \hline
1000 & 242 & 132 & 94.27 & 32.15 \\ \hline
10000 & 2416 & 2042 & 2471 & 1659 \\ \hline
30000 & 7250 & 14514 & 21809 & 14572 \\ \hline
40000 & 11233 & 25751 & 25731 & OOM \\ \hline
\end{tabular}}
\caption{Each algorithm (except Full Jacobian, which was implemented only on the CPU, so our GPU benchmark technique does not apply) was run once on the GPU and the peak memory allocated on the GPU device was recorded. If the device ran out of memory, 'OOM' was recorded instead. Several coupled factors determine the memory usage: whether the computation is batched over the number of datapoints, whether the computation parallelizes over the number of datapoints, whether the computation is layerwise or not, the architecture, and the number of datapoints. Due to this complexity, we suggest trying each method on a subset of the data first to search for a suitable choice.}
\label{T:MLP Benchmark Memory}
\end{center}
\end{table}
\begin{table}[!htbp]
\begin{center}
\title{MLP-h Benchmark Time}
\resizebox{\textwidth}{!}{%
\begin{tabular}{| c | c | c | c | c |}
\hline
\textbf{N Datapoints} & \multicolumn{4}{ c |}{\textbf{Time [sec]}} \\
\cline{2-5}
& \textbf{Autograd} & \textbf{L. Autograd} & \textbf{L. Autograd w vmap} & \textbf{Explicit Differentiation} \\
\hline
10 & 25.3e-3 & 70.5e-3 & 30.8e-3 & 2.43e-3 \\ \hline
100 & 234e-3 & 690e-3 & 184e-3 & 2.36e-3 \\ \hline
500 & 1.31 & 3.38 & 1.3 & 2.08e-3 \\ \hline
1000 & OOM & 6.89 & 2.29 & 2.13e-3 \\ \hline
10000 & OOM & OOM & OOM & 44.3e-3 \\ \hline
20000 & OOM & OOM & OOM & 161e-3 \\ \hline
\end{tabular}}
\caption{Given our belief that a layerwise computation will be memory efficient in the deeper and narrower regime, we re-ran our benchmarks on a deeper model for comparison. Each benchmark was calculated using the IPython magic function \%timeit, with mean times reported here. If the calculation took longer than 10 seconds, a single calculation was used, with the time computed from the difference in system clock times. Because MLP-h has many more parameters, the Jacobians needed to calculate the NTK more quickly saturate the available GPU memory. Whenever the calculation used all the available GPU memory we report 'OOM'. We see here that in the deeper regime, our layerwise approaches extend the calculation as we expected.}
\label{T:MLP-h Benchmark Time}
\end{center}
\end{table}
\begin{table}[!htbp]
\begin{center}
\title{MLP-h Benchmark Memory}
\resizebox{\textwidth}{!}{%
\begin{tabular}{| c | c | c | c | c |}
\hline
\textbf{N Datapoints} & \multicolumn{4}{ c |}{\textbf{Memory [MB]}} \\
\cline{2-5}
& \textbf{Autograd} & \textbf{L. Autograd} & \textbf{L. Autograd w vmap} & \textbf{Explicit Differentiation} \\
\hline
10 & 717 & 277 & 238 & 158 \\ \hline
100 & 5695 & 1314 & 903 & 101 \\ \hline
500 & 28101 & 6227 & 4160 & 175 \\ \hline
1000 & OOM & 12378 & 8238 & 284 \\ \hline
10000 & OOM & OOM & OOM & 4983 \\ \hline
20000 & OOM & OOM & OOM & 17899 \\ \hline
\end{tabular}}
\caption{We also include memory benchmarks for each algorithm with an increasing number of datapoints, where we can see that in the deeper and narrower regime this model probes, and especially with many datapoints, layerwise calculations use less peak GPU memory than non-layerwise approaches. Our Explicit Differentiation implementation extended the calculation of the finite NTK to 10000 datapoints in under 5GB of peak allocated memory, demonstrating that our code extends the study of the finite NTK with many datapoints to modest workstations; many consumer-level GPUs have more than 5GB of memory.}
\label{T:MLP-h Benchmark Memory}
\end{center}
\end{table}
The results of these tables demonstrate that for very deep MLP networks, our explicit differentiation technique is more memory efficient and many times faster than autograd methods. In addition, we show there is a regime of model architectures where layerwise computations are more memory efficient than full Jacobian computation with autograd. Because the 'best choice' of algorithm depends on the specific goal of the researcher, we emphasize that individual researchers should benchmark their own architectures on their own systems to make an informed decision about which NTK algorithm suits them. Another key takeaway is that more effort should be placed into developing and investigating highly optimized explicit differentiation techniques for other model architectures. It is clear that tremendous benefits exist in doing so: our MLP-h model benchmark shows a speedup of over 1000x compared to the nearest autograd technique on 1000 datapoints. While tedious, in scenarios where limited hardware is available or has a high cost, or where many of these NTKs will need to be calculated (for instance, see our experiments in Section \ref{VisNTKoverTraining} below), the benefits of explicit differentiation can outweigh the up-front development costs. Finally, other neural tangent libraries may benefit in reducing peak memory use with a layerwise approach.
\section{Experiments and Use Cases}
\subsection{Fisher Information Matrix}
As pointed out in contemporary work on the Fisher Information Matrix, the NTK shares its non-zero eigenvalues with its dual matrix \cite{Karakida2_mse,Karakida3_crossentropy}:
$$F = \left(\frac{\partial f_{\theta}(x)}{\partial \theta}\right)\left( \frac{\partial f_{\theta}(x)}{\partial \theta}\right)^\top$$
The Fisher information matrix (FIM) is of interest because at convergence, with training loss zero, the Hessian of the mean squared loss function is equal to the FIM. In the following equation $t$ indexes the datapoint inside a dataset of size $T$; see equation 9 of \cite{Karakida2_mse}.
$$H = F - \frac{1}{T} \sum_{t}^{T} (y(x_t) - f(x_t)) \nabla_{\theta} \nabla_{\theta} f(x_t)$$
This makes the FIM useful in studying the geometry of the loss landscape. Authors have suggested studying the eigenvalues of the FIM to uncover what they refer to as "pathological sharpness", the gap between the mean and the maximum of the eigenvalues of the FIM \cite{Karakida2_mse, Karakida3_crossentropy}. Models whose loss landscape has low sharpness in the local neighborhood with respect to the parameters have been observed to generalize better \cite{SAM}, so it is plausible that the FIM provides correlative information on model generalizability. In the layerwise setting, each operation of the neural network can also be used to create a $p_l \times p_l$ layerwise Fisher information matrix. Given that we know $p_l$ from our architecture, once we calculate the layerwise NTK we in fact know the entire spectrum of each layerwise FIM.
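The shared-spectrum relationship between the NTK ($J^{\top}J$) and the FIM ($JJ^{\top}$) is easy to verify numerically; here a random matrix stands in for the Jacobian:

```python
import numpy as np

rng = np.random.default_rng(0)
P, N = 12, 5
J = rng.standard_normal((P, N))     # stand-in for a P x N Jacobian

ntk_eigs = np.sort(np.linalg.eigvalsh(J.T @ J))[::-1]   # N eigenvalues
fim_eigs = np.sort(np.linalg.eigvalsh(J @ J.T))[::-1]   # P eigenvalues
# fim_eigs[:N] coincides with ntk_eigs; the remaining P - N eigenvalues are 0
```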
\subsection{Visualizing the NTK over training}
\label{VisNTKoverTraining}
In this experiment, we calculated the NTK and each layerwise NTK additive component for every training step of an MLP trained by vanilla gradient descent to classify MNIST-2, where we have randomly sampled handwritten digits of class 6 and 9. By collecting the NTK at every training step, we can reconstruct a video of how the NTK changes, which you can view \href{https://youtu.be/Og796gMShcE}{here}. A more detailed explanation of our experimental setup is available in appendix \ref{ExperimentDetails}.
In the plots below, we have sorted our training data such that the first 5000 indices are all class 6 and the next 5000 indices are all class 9. This makes visualization more interpretable and does not impact learning because our gradient updates are averaged over the entire training dataset.
In Fig. \ref{fig:MNIST2_ntk} we plot the initial and final NTK matrices over training. The kernel has discriminatory ability, meaning that the same-class blocks in general have higher NTK values than the blocks in the upper right and lower left quadrants. Note that as training progresses the diagonal blocks become darker and the off-diagonal blocks become lighter, reflecting that the NTK is capturing information about how the neural network differentiates between classes. This is consistent with the intuition that the NTK represents a similarity score between datapoints, measured by a dot product between the network function's gradients. One can use this kernel to do binary classification by computing the similarity of some training point $x$ with the dataset $X$. The kernel machine describing binary classification is:
$$ y_{i} = \mathrm{sgn}( \sum_{k} w_{k} Y_{k} K(x_{i},X_{k}) )$$
where the result is mapped onto $\{-1,1\}$. Because our training data is balanced, and for simplicity, we set all $w_{k}$ to 1 as a quick approximation of the accuracy of a kernel machine that could utilize each NTK. We compute these accuracies at the start and end of training and report them in Table \ref{T:layerwise_kernel_performance}.
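A minimal sketch of this kernel machine with all $w_k = 1$; here a linear kernel on two synthetic, well-separated Gaussian classes stands in for the NTK:

```python
import numpy as np

rng = np.random.default_rng(0)
n_per = 20
# two well-separated toy classes in R^2 with labels in {-1, +1}
X = np.vstack([rng.normal(-2.0, 1.0, (n_per, 2)),
               rng.normal(+2.0, 1.0, (n_per, 2))])
Y = np.concatenate([-np.ones(n_per), np.ones(n_per)])

K = X @ X.T                      # linear kernel standing in for the NTK
pred = np.sign(K @ Y)            # kernel-machine prediction with w_k = 1
accuracy = np.mean(pred == Y)    # evaluated on the training points
```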
\begin{figure}[h]
\centering
\includegraphics[width=8cm, height=8cm]{images/MNIST2_final/ntk/00001.png}
\includegraphics[width=8cm,height=8cm]{images/MNIST2_final/ntk/20000.png}
\caption{The initial (left) and final (right) NTK matrix values; the final values were calculated after training the neural network for 20k epochs on a binary classification task. The dataset was sorted before training, with all indices 0-5k from MNIST class 6 and 5k-10k from MNIST class 9. Therefore, the darker squares in the top left and bottom right quadrants are expected, given that datapoints within each block share a class and that the NTK represents a similarity metric. This also explains the high values along the diagonal, where the NTK similarity of a datapoint with itself should be expected to be high. The full NTK shown in these images is constructed by summing the additive components from the parameterized operations of the network, plotted in the appendices as figures \ref{fig:MNIST2_l1} through \ref{fig:MNIST2_l4}.}
\label{fig:MNIST2_ntk}
\end{figure}
\begin{table}[!htbp]
\begin{center}
\begin{tabular}{| c | c | c |}
\hline
\textbf{Kernel/Method} & \textbf{initialization} & \textbf{training end} \\
\cline{1-3}
layer 1 & 95.0 +/- 0.2 & 98.37 +/- 0.04 \\ \hline
layer 2 & 96.5 +/- 0.1 & 98.54 +/- 0.04 \\ \hline
layer 3 & 98.0 +/- 0.1 & 98.91 +/- 0.03 \\ \hline
layer 4 & 96.2 +/- 0.1 & 97.2 +/- 0.1 \\ \hline
NTK & 98.56 +/- 0.04 & 98.96 +/- 0.03 \\ \hline \hline
Neural Network (train) & 51 +/- 1 & 98.81 +/- 0.02 \\ \hline
Neural Network (test) & 50 +/- 1 & 97.83 +/- 0.05 \\ \hline
\end{tabular}
\end{center}
\caption{The test accuracies of the simplified kernel machine on a holdout dataset are shown before and after training. We note that all accuracies increased as the underlying network specialized to our training task, and that the final accuracy of the underlying ANN model exceeded the performance of any of the kernels. Another interesting observation is that not all layers have the same accuracy. While the gap between the layer 4 and layer 1 NTKs is possibly explained simply by how many parameters are used to measure the similarity, layers 2 and 3 have exactly the same number of parameters but still differ in performance. Future work will systematically measure these performances across a variety of MLP architectures using this software package.}
\label{T:layerwise_kernel_performance}
\end{table}
While not theoretically precise, it is possible that quantities of the finite NTK give insight into properties of the neural network; in fact, there is preliminary evidence showing that these finite kernels (non-linearly) correlate with the performance of their infinite-width counterparts in CNNs, which in turn correlate with the ANN's performance (compare tables 2 and 1 of \cite{CNTK_Sanjeev}). Using this observation, one might initialize a neural architecture search by searching for architectures or parameterizations whose initial NTK gives better performance.
\section{Future Work}
\subsection{Future Improvements and Known Issues}
We are releasing our software as an open-source alpha with a pledge to continue to improve and update it. We welcome the contributions of the community and look forward to seeing how other groups might use or be inspired by the software. There are specific improvements needed to make the software complete that we briefly touch on in this section.
Currently, each algorithm expects a single neuron output. This is a significant limitation, as common practice for even basic multi-class classification is to have a number of output neurons representing each class. We believe that our autograd techniques could be extended to multiple output neurons with additional effort.
Motivated by our explicit-derivative success on MLPs, we could add explicit derivatives for other architectures. Initial attempts at extending an explicit derivative to fully convolutional networks became memory inefficient by relying on large matrices to describe derivatives of the convolution operation. However, additional effort could be placed toward achieving a fast and memory efficient form.
The software lacks multi-GPU support. Note that the neural-tangents library includes native GPU parallelization. Multiple GPU support would be a large boon; it targets two core issues with NTKs for larger models: memory constraints and time costs. Even the calculation of the NTK for a modest multilayer perceptron on a subset of MNIST requires the full memory of a single A100 GPU (see Table \ref{T:MLP Benchmark Memory}). This means that on more common workstations and consumer-level GPUs, researchers are still severely limited to small models and small datasets.
\section{Conclusion}
This technical report has described the theoretical background and functional performance of torchNTK. This software has the capability to efficiently calculate the tangent kernel for MLP architectures in PyTorch, but through autograd methods we have extended the utility to arbitrary architectures. This work is impactful because neural kernels are objects of interest to the theoretical community, and with PyTorch support we can extend the number of researchers who have access to compute them. Our software enables teams to more easily calculate the kernels, which we hope will give way for further research and application.
A key takeaway from our work is that teams should consider the performance needs of their research and determine whether calculating the explicit derivative of the network with respect to the parameters is worthwhile. We have shown that, at least for MLP architectures, explicit differentiation is more efficient in both time and memory than autograd methods. Furthermore, teams must evaluate honestly whether they have access to the software expertise to implement the calculation they derive in the most parallelized or efficient manner. Converting the derived equations to efficient code is a skill set that should not be underestimated.
This software enables researchers to calculate the NTK in PyTorch faster than ever before and exposes the user to what we have called the layerwise components of the NTK, each representing a parameterized operation inside the neural network. Our future work will wield this software package to explore the NTK and these components to search for use-cases for practical A.I. end users and interpretation of such models.
\printbibliography
\pagebreak
Diffusion phenomena arise from Markovian stochastic modeling and as solutions of SDEs with or without jumps in many areas of applied mathematics. Their investigation concerns different mathematical branches, and research interest in questions such as existence and regularity of solutions of stochastic differential equations has therefore grown constantly over the past years. \\
The study of the statistical properties of diffusion models has emerged since such models are widely used for applications in finance and biology. Diffusion processes with jumps, in particular, have been used in neuroscience for instance in \cite{Neuro} while in finance they have been introduced to model the dynamic of asset prices \cite{Kou}, \cite{Merton}, exchange rates \cite{Bates}, or volatility processes \cite{BarShe}.
In this work, we aim at estimating adaptively the invariant density $\mu$ associated to the process $(X_t)_{t \ge 0}$, solution of the following multivariate stochastic
differential equation with L\'evy-type jumps:
\begin{equation}
X_t= X_0 + \int_0^t b( X_s)ds + \int_0^t a(X_s)dW_s + \int_0^t \int_{\mathbb{R}^d \backslash \left \{0 \right \} }
\gamma(X_{s^-})z \tilde{\mu}(ds,dz),
\label{eq: model intro}
\end{equation}
where $W$ is a $d$-dimensional Brownian motion and $\tilde{\mu}$ a compensated Poisson random measure with a possibly infinite jump activity. We assume that a continuous record of observations $X^T = (X_t)_{0 \le t \le T}$ is available.
Practical concerns raise new questions such as the dependence of statistical features on the observation scheme: it is, for the applications, a subject of interest
to consider basic questions in different observation scenarios.
From a theoretical point of view, it is however also of substantial interest to work under the assumption that a continuous record of the diffusion considered is available. \\
In this framework, it belongs to the folklore of the statistics of stochastic processes without jumps that the invariant density can be estimated under standard nonparametric assumptions with a parametric rate (cf. Chapter 4.2 in \cite{Kut}). The proof relies on the existence of the diffusion local time and its properties, and such a result is therefore restricted to the one-dimensional setting.
Regarding the literature on statistical properties of multidimensional diffusion processes in the continuous case, an important reference is given by Reiss and Dalalyan in \cite{RD}, where they show an asymptotic statistical equivalence for inference on the drift in the multidimensional diffusion case. As a by-product of the study they prove, under isotropic H{\"o}lder smoothness constraints, convergence rates of invariant density estimators for pointwise estimation which are faster than those known from standard multivariate density estimation. Their result relies on upper bounds on the variance of additive diffusion functionals, proven by an application of the spectral gap inequality in combination with a bound on the transition density of the process. \\
Still in the continuous case, in a recent paper, Strauch \cite{Strauch} has extended their work by building adaptive estimators in the multidimensional diffusion case which achieve fast rates of convergence over anisotropic H{\"o}lder balls. \\
The notion of anisotropy plays an important role. Indeed, the smoothness properties of elements of a function space may depend on the chosen direction of $\mathbb{R}^d$. \\
The Russian school considered anisotropic spaces from the beginning of the theory of function spaces in the 1950s-1960s (in \cite{Nik} the author gives an account of the developments). However, results on minimax rates of convergence in classical statistical models were rare for a long time. \\
The question of optimal bandwidth selection based on i.i.d. observations for density estimation with respect to sup-norm risk was not completely solved until the fairly recent developments gathered in \cite{Lep}. The methodology detailed in Goldenshluger and Lepski \cite{Adaptive} inspired the data-driven bandwidth selection procedure for the kernel estimator proposed by many authors, such as Strauch in \cite{Strauch} and Comte, Prieur and Samson in \cite{Main adapt}, and provides the starting point for the study of our adaptive procedure as well.
In this paper, we provide a non-parametric estimator of the invariant density $\mu$ with a fully data-driven bandwidth selection procedure.
We propose to estimate the invariant density $\mu$ by means of a kernel estimator; we therefore introduce some kernel function $K: \mathbb{R} \rightarrow \mathbb{R}$. A natural estimator of $\mu$ at $x \in \mathbb{R}^d$ in the anisotropic context is given by
$$\hat{\mu}_{h,T}(x) = \frac{1}{T \prod_{l = 1}^d h_l} \int_0^T \prod_{m = 1}^d K(\frac{x_m - X_u^m}{h_m}) du,$$
where $h = (h_1, \ldots , h_d)$ is a multi-index bandwidth, which will be chosen through the data-driven selection procedure.
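Purely as a numerical illustration of this estimator (and not part of the theory developed here), the sketch below evaluates the anisotropic product-kernel estimator with a Gaussian kernel, with i.i.d. standard normal draws standing in for a discretized record of observations, so that the time integral becomes an average:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 2, 20000
obs = rng.standard_normal((n, d))   # stand-in for discretized observations

def gauss(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def mu_hat(x, h, obs):
    # product-kernel estimator with one bandwidth per coordinate
    u = (x[None, :] - obs) / h[None, :]
    return np.mean(np.prod(gauss(u), axis=1)) / np.prod(h)

est = mu_hat(np.zeros(d), np.array([0.2, 0.2]), obs)
# est should be close to the true density 1 / (2 pi) at the origin
```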
We first prove some bounds on the transition semigroup and on the transition density that will be useful to find sharp upper bounds on the variance of integral functionals of the diffusion $X$. Through them, we find the following convergence rates for the pointwise estimation of the invariant density of our diffusion with jumps:
$$\mathbb{E} [|\hat{\mu}_{h,T}(x) - \mu (x)|^2] \underset{\sim}{<}
\begin{cases}
\frac{(\log T)^{(2 - \frac{(1 + \alpha)}{2}) \lor 1}}{T} \qquad \mbox{for } d = 1, \\
\frac{\log T}{T} \qquad \mbox{for } d = 2, \\
T^{- \frac{2\bar{\beta}}{2\bar{\beta}+ d - 2}} \qquad \mbox{for } d \ge 3,
\end{cases}
$$
where $\alpha \in (0, 2)$ is the degree of jumps activity of the L\'evy process and $\bar{\beta}$ is the harmonic mean smoothness of the invariant density over the $d$ different dimensions. \\
We remark that the rate we find for $d \ge 3$ is the same as the one Strauch found in \cite{Strauch} in the absence of jumps, which is also the rate gathered in \cite{RD}, up to replacing the mean smoothness with $\beta$, the common smoothness over the $d$ dimensions. \\
The case $d = 1$ highlights the main difference between what happens with and without jumps. Indeed, while in the continuous case the optimal convergence rate was $\frac{1}{T}$, the rate we find now lies between $\frac{\log T}{T}$ and $\frac{(\log T)^\frac{3}{2}}{T}$. It is worth noting here that such a convergence rate is not necessarily the optimal one in the jump framework. As a matter of fact, in the continuous case different approaches, such as the use of the diffusion local time, have been employed to get the rate $\frac{1}{T}$; we do not exclude the possibility that, also in the presence of jumps, the implementation of other methods could lead to a convergence rate faster than the one presented above for the one-dimensional setting. \\
To complete the comparison with the continuous framework, we recall that in both \cite{RD} and \cite{Strauch} the convergence rate found in the case $d = 2$ was $\frac{(\log T)^4}{T}$, so the convergence of the estimator seems to be faster in the presence of jumps than without them. The reason is that, to find the convergence rate, the transition density $(p_t)_{t \in \mathbb{R}^+}$ needs to be upper bounded. In \cite{RD} the authors assume that $p_t(x,y) \le c (t^{- \frac{d}{2}} + t^{ \frac{3d}{2}})$, and in \cite{Strauch} Nash and Poincar\'e inequalities lead Strauch to a bound analogous to the one presented in \cite{RD}; Lemma \ref{lemma: bound transition} below provides us with a different bound, which guides us to a different rate. However, in the absence of the term $t^{ \frac{3d}{2}}$ in the assumption above, which is the case for example when the drift is bounded, the convergence rate in the continuous setting also turns out to be, as in the jump-diffusion case, equal to $\frac{\log T}{T}$. \\
It is moreover worth noting here that, whereas in \cite{RD} and \cite{Strauch} the existence of the transition density and a bound on it had to be assumed, we derive them through Lemma \ref{lemma: bound transition}: all the assumptions we need are directly on the model \eqref{eq: model intro}. \\
We also no longer need to assume that the drift is of the form $b = - \nabla V$ (where $V \in \mathcal{C}^2$ is referred to as the potential), as was the case in both \cite{RD} and \cite{Strauch}.
After having provided the rates of convergence of the estimator, we finally propose, in the case $d \ge 3$, a fully data-driven selection procedure for the bandwidth of the kernel estimator, inspired by the methodology detailed in Goldenshluger and Lepski \cite{Adaptive}. The method has the decisive advantage of being anisotropic: the bandwidths selected in each direction are in general different, which is coherent with the possibly different regularities with respect to each variable. Finally, we prove that for the selected bandwidth the following estimate holds:
\begin{equation}
\mathbb{E}[\left \| \hat{\mu}_{\tilde{h}} - \mu \right \|^2_A] \le c_1 \inf_{h \in \mathcal{H}_T} (B(h) + V(h)) + c_1 e^{ - c_2 (\log T)^2},
\label{eq: tesi adatt}
\end{equation}
where we have denoted by $\left \| \cdot \right \|_A$ the $L^2$ norm on $A$, a compact subset of $\mathbb{R}^d$, and by $\mathcal{H}_T$ the set of candidate bandwidths; $B(h)$ is a bias term and $V(h)$ an estimate of the variance bound. We remark that the estimator leads to an automatic trade-off between the bias and the variance: the second term on the right hand side of \eqref{eq: tesi adatt} is indeed negligible compared to the first one. \\
Moreover, as the rate optimal choice $h(T)$ belongs to the set of candidate bandwidths $\mathcal{H}_T$, \eqref{eq: tesi adatt} becomes
$$\mathbb{E}[\left \| \hat{\mu}_{\tilde{h}} - \mu \right \|^2_A] \le c_1 T^{- \frac{2\bar{\beta}}{2\bar{\beta}+ d - 2}} + c_1 e^{ - c_2 (\log T)^2},$$
where $\bar{\beta}$ is the mean smoothness of the invariant density.
The paper is organised as follows. We give in Section 2 the assumptions on the process $X$. In Section \ref{S: Construction_estimator} we define the anisotropic H{\"o}lder balls and we construct our estimator. Section \ref{S: Main_results} is devoted to the statements of our main results, which will be proven in the two following sections. In particular, we show how we get the convergence rates for the invariant density estimation in Section \ref{S: Proof_Main}, while in Section \ref{S: proof adaptive} we prove that the estimator we obtain through our bandwidth selection procedure is adaptive. Some technical results are moreover presented in the Appendix.
\section{Model Assumptions}
We consider the question of nonparametric estimation of the invariant density of a $d$-dimensional diffusion process $X$, assuming that a continuous record $X^T = \left \{ X_t, 0 \le t \le T \right \}$ up to time $T$ is observed. This diffusion is given as the strong solution of the following stochastic differential equation with jumps:
\begin{equation}
X_t= X_0 + \int_0^t b( X_s)ds + \int_0^t a(X_s)dW_s + \int_0^t \int_{\mathbb{R}^d \backslash \left \{0 \right \} }
\gamma(X_{s^-})z \tilde{\mu}(ds,dz), \quad t \in [0,T],
\label{eq: model}
\end{equation}
where $b : \mathbb{R}^d \rightarrow \mathbb{R}^d$, $a : \mathbb{R}^d \rightarrow \mathbb{R}^{d \times d}$ and $\gamma : \mathbb{R}^d \rightarrow \mathbb{R}^{d \times d}$; $W = (W_t, t \ge 0)$ is a $d$-dimensional Brownian motion and $\mu$ is a Poisson random measure on $[0, T] \times \mathbb{R}^d$ associated to the L\'evy process $L=(L_t)_{t \in [0,T]}$, with $L_t:= \int_0^t \int_{\mathbb{R}^d} z \tilde{\mu} (ds, dz)$. The compensated measure is $\tilde{\mu}= \mu - \bar{\mu}$; we suppose that the compensator has the following form: $\bar{\mu}(dt,dz): = F(dz) dt$, where conditions on the L\'evy measure $F$ will be given later. \\
The initial condition $X_0$, $W$ and $L$ are independent. \\ \\
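For intuition, here is a minimal simulation sketch of a one-dimensional instance of \eqref{eq: model}. All coefficients below are hypothetical choices compatible with the boundedness and Lipschitz requirements of the assumptions that follow, and the jump part is replaced by a finite-activity compound Poisson process with centered jump sizes, a simplification of the L\'evy measures allowed by the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coefficients: bounded Lipschitz drift pulling toward the
# origin, uniformly elliptic (constant) diffusion, bounded jump coefficient.
def b(x): return -np.tanh(x)
def a(x): return 1.0
def gamma(x): return 0.5

def euler_jump_path(T=50.0, dt=1e-2, jump_rate=1.0):
    """Euler scheme for dX = b(X)dt + a(X)dW + gamma(X-)dL, where L is a
    compound Poisson process with centered N(0,1) jump sizes (a
    finite-activity stand-in for the jump part of the model)."""
    n = int(round(T / dt))
    x = np.empty(n + 1)
    x[0] = 0.0
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        nj = rng.poisson(jump_rate * dt)            # jumps in [t, t + dt]
        dl = rng.normal(0.0, 1.0, size=nj).sum()    # sum of the jump sizes
        x[k + 1] = x[k] + b(x[k]) * dt + a(x[k]) * dw + gamma(x[k]) * dl
    return x

path = euler_jump_path()
```

The inward-pointing bounded drift plays the role of the drift condition A2 below: it keeps the simulated path recurrent around the origin, which is what makes estimation of an invariant density meaningful.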
In what follows, we suppose the following assumptions hold: \\ \\
\textbf{A1}: \textit{The functions $b(x)$, $\gamma(x)$ and $a(x)$ are globally Lipschitz and, for some $c \ge 1$,
$$c^{-1} \mathbb{I}_{d \times d} \le a(x) \le c \mathbb{I}_{d \times d}, $$
where $\mathbb{I}_{d \times d}$ denotes the $d \times d$ identity matrix. \\
Denoting by $|\cdot|$ and $\langle \cdot, \cdot \rangle$ respectively the Euclidean norm and the scalar product in $\mathbb{R}^d$, we suppose moreover that there exists a constant $c > 0$ such that $|b(x)| \le c$ for all $x \in \mathbb{R}^d$.} \\
\\
Under Assumption 1, equation (\ref{eq: model}) admits a unique non-explosive c\`adl\`ag adapted solution possessing the strong Markov property, cf.\ \cite{Applebaum} (Theorems 6.2.9 and 6.4.6). \\
\\
\textbf{A2 (Drift condition) }: \textit{ \\
There exist $\tilde{C} > 0$ and $\tilde{\rho} > 0$ such that $\langle x, b(x)\rangle \le -\tilde{C}|x|$, $\forall x : |x| \ge \tilde{\rho}$.
} \\
\\
We furthermore need the following assumptions on the jumps: \\
\\
\textbf{A3 (Jumps) }: \textit{1. The L\'evy measure $F$ is absolutely continuous with respect to the Lebesgue measure and we denote $F(z) = \frac{F(dz)}{dz}$. \\
2. We suppose that there exists $c > 0$ such that, for all $z \in \mathbb{R}^d$, $F(z) \le \frac{c}{|z|^{d + \alpha}}$, with $\alpha \in (0,2)$, and that $\mathrm{supp}(F) = \mathbb{R}^d$. \\
3. The jump coefficient $\gamma$ is upper bounded, i.e. $\sup_{x \in \mathbb{R}^d}|\gamma(x)| =: \gamma_{\max} < \infty$. We suppose moreover that, $\forall x \in \mathbb{R}^d$, $\det(\gamma(x)) \neq 0$.\\
4. If $\alpha =1$, we require that $\int_{r <| z |< R} z F(z) dz =0$ for any $0 < r < R < \infty$. \\
5. There exist $\epsilon > 0$ and a constant $c > 0$ such that $\int_{\mathbb{R}^d}|z|^2 e^{\epsilon |z|} F(z) dz \le c$.} \\
\\
As we will see in Lemma \ref{lemma: beta mixing} below, Assumption 2 ensures, together with the last points of Assumption 3, the existence of a Lyapunov function. The process $X$ admits therefore a unique invariant distribution $\pi$ and the ergodic theorem holds. \\
We assume the invariant probability measure $\pi$ of $X$ to be absolutely continuous with respect to the Lebesgue measure and from now on we will denote its density by $\mu$: $ d\pi = \mu \, dx$. \\
For any set $S \subset \mathbb{R}^d$ we define $\mu (S) := \int_S \mu(x) dx$ and, by abuse of notation, we will write $\mu(f) : = \mathbb{E}[f (X_0)] = \int_{\mathbb{R}^d} f(x) \mu (x) dx$ for functions $f : \mathbb{R}^d \rightarrow \mathbb{R}$. \\
We define moreover $L^2 (\mu) := \left \{ f : \mathbb{R}^d \rightarrow \mathbb{R} : \int_{\mathbb{R}^d} |f(x)|^2 \mu (x) dx < \infty \right \}$ and \\
$L^1 (\mu) := \left \{ f : \mathbb{R}^d \rightarrow \mathbb{R} : \int_{\mathbb{R}^d} |f(x)| \mu (x) dx < \infty \right \}$. \\
For each $g\in L^1 (\mu)$ we denote as $\left \| g \right \|_{L^1 (\mu)} := \mu (|g|) $ the $L^1$ norm with respect to $\mu$ on $\mathbb{R}^d$. \\
The transition semigroup of the process $X$ on $L^1 (\mu)$ is $P_{t} f(x) := \mathbb{E} [f(X_t) | X_0 = x ]$. \\
The transition density is denoted by $p_{t}$ and it is such that $P_{t} f(x) = \int_{\mathbb{R}^d} f(y) p_{t} (x,y) dy$; we will see in Lemma \ref{lemma: bound transition} that it exists. \\ \\
The process $X$ is called $\beta$-mixing if $\beta_X (t) = o(1)$ for $t \rightarrow \infty$ and exponentially $\beta$-mixing if there exists a constant $\gamma > 0$ such that $\beta_X (t) = O(e^{- \gamma t})$ for $t \rightarrow \infty$, where $\beta_X$ is the $\beta$-mixing coefficient of the process $X$ as defined in Section 1.3.2 of \cite{Mixing}. We recall that, for a Markov process $X$ with transition semigroup $(P_t)_{t \in \mathbb{R}^+}$ and $\mathcal{L}(X_0) = \eta$, the $\beta$-mixing coefficient of $X$ is given by
\begin{equation}
\beta_X (t) := \sup_{s \in \mathbb{R}^+} \int_{\mathbb{R}^d} \left \| P_t (x, \cdot) - \eta P_{s + t} \right \| \, \eta P_s (dx),
\label{eq: def beta}
\end{equation}
where $\eta P_t = \mathcal{L}(X_t)$ and $\left \| \lambda \right \|$ stands for the total variation norm of a signed measure $\lambda$. \\
For the exponential mixing property of general multidimensional diffusions, the reader may consult Theorem 3 of Kusuoka and Yoshida \cite{Kus_Yos} for the $\alpha$-mixing, and Meyn and Tweedie \cite{May_Twe}, Stramer and Tweedie \cite{Str_Twe} and Veretennikov \cite{Ver} for the $\beta$-mixing. The mixing property for general diffusions with jumps has been investigated by Masuda in \cite{18 GLM}. \\
Now we mention the notion of exponential ergodicity in the sense of \cite{May_Twe}. \\
\begin{definition}
We say that $X$ is exponentially ergodic if it admits a unique invariant distribution $\pi$ and additionally if there exist positive constants $c$ and $\rho$ such that, for each $f$ centered under $\mu$,
$$\left \| P_{t} f \right \|_{L^1 (\mu)} \le c e^{- \rho t} \left \| f \right \|_{\infty}. $$
\label{def: exp ergodic}
\end{definition}
We will see in Lemma \ref{lemma: beta mixing} that both the exponential ergodicity and the exponential $\beta$-mixing can be derived from our assumptions. \\
\\
In Lemmas \ref{lemma: beta mixing} and \ref{lemma: bound transition} below we will prove some bounds on the transition semigroup and on the transition density that will be useful to establish tight upper bounds on the variance
$$Var (\int_0^T f(X_s) ds), \qquad f \in L^2(\mu),$$
of integral functionals of the diffusion $X$. \\
Bounds of this type were proven before, in \cite{RD} (cf. their Proposition 1), by combining estimates based on the spectral gap inequality and on upper bounds on the transition densities of $X$.
Through them they prove, under isotropic H{\"o}lder smoothness constraints, convergence rates of invariant density estimators for pointwise estimation which are considerably faster than those known from standard multivariate density estimation. \\
We replace the spectral gap inequality with a control from $L^1$ to $L^\infty$ given by the exponential ergodicity. Moreover, contrary to \cite{RD}, we do not need to assume that such controls hold true, since we obtain them as a consequence of Lemmas \ref{lemma: bound transition} and \ref{lemma: beta mixing} below, having imposed assumptions only directly on the model \eqref{eq: model}. \\
In the next section we will construct adaptive estimators for the density in the multidimensional diffusion case with jumps, which achieve `fast' rates of convergence over anisotropic H{\"o}lder balls.
\section{Construction of the estimator}\label{S: Construction_estimator}
In several cases, the regularity of some function $g: \mathbb{R}^d \rightarrow \mathbb{R}$ depends on the direction in $\mathbb{R}^d$ chosen. We thus work under the following anisotropic smoothness constraints.
\begin{definition}
Let $\beta = (\beta_1, ... , \beta_d)$, $\beta_i > 0$, $\mathcal{L} =(\mathcal{L}_1, ... , \mathcal{L}_d)$, $\mathcal{L}_i > 0$. A function $g : \mathbb{R}^d \rightarrow \mathbb{R}$ is said to belong to the anisotropic H{\"o}lder class $\mathcal{H}_d (\beta, \mathcal{L})$ of functions if, for all $i \in \left \{ 1, ... , d \right \}$,
$$\left \| D_i^k g \right \|_\infty \le \mathcal{L}_i \qquad \forall k = 0,1, ... , \lfloor \beta_i \rfloor, $$
$$\left \| D_i^{\lfloor \beta_i \rfloor} g(. + t e_i) - D_i^{\lfloor \beta_i \rfloor} g(.) \right \|_\infty \le \mathcal{L}_i |t|^{\beta_i - \lfloor \beta_i \rfloor} \qquad \forall t \in \mathbb{R},$$
for $D_i^k g$ denoting the $k$-th order partial derivative of $g$ with respect to the $i$-th component, $\lfloor \beta_i \rfloor$ denoting the largest integer strictly smaller than $\beta_i$ and $e_1, ... , e_d$ denoting the canonical basis in $\mathbb{R}^d$.
\end{definition}
From now on we deal with the estimation of the density $\mu$ belonging to the anisotropic H{\"o}lder class $\mathcal{H}_d (\beta, \mathcal{L})$. \\
\\
Given the observation $X^T$ of a diffusion $X$, solution of \eqref{eq: model}, we propose to estimate the invariant density $\mu$ by means of a kernel estimator.
To estimate some $\mu \in \mathcal{H}_d (\beta, \mathcal{L})$ we therefore introduce some kernel function $K: \mathbb{R} \rightarrow \mathbb{R}$ satisfying
$$\int_\mathbb{R} K(x) dx = 1, \quad \left \| K \right \|_\infty < \infty, \quad \mbox{supp}(K) \subset [-1, 1], \quad \int_\mathbb{R} K(x) x^l dx = 0,$$
for all $l \in \left \{ 1, ... , M \right \}$ with $M \ge \max_i \beta_i$. \\
Denoting by $X_t^j$, $j \in \left \{ 1, ... , d \right \}$ the $j$-th component of $X_t$, $t \ge 0$, a natural estimator of $\mu$ at $x= (x_1, ... , x_d)^T \in \mathbb{R}^d$ in the anisotropic context is given by
\begin{equation}
\hat{\mu}_{h,T}(x) = \frac{1}{T \prod_{l = 1}^d h_l} \int_0^T \prod_{m = 1}^d K(\frac{x_m - X_u^m}{h_m}) du.
\label{eq: def estimator}
\end{equation}
As we will see in Section \ref{S: Adaptive}, a main question concerns the choice of the multi-index bandwidth $h = (h_1, ... , h_d)^T$.
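To fix ideas, here is a minimal numerical sketch of \eqref{eq: def estimator}, with the time integral replaced by a Riemann sum over a discretized trajectory. The compactly supported Epanechnikov kernel used below is only an illustrative choice (it has order $1$, so it is admissible only for $\max_i \beta_i \le 1$), and the i.i.d. Gaussian sample stands in for the observed path.

```python
import numpy as np

def K(u):
    # Epanechnikov kernel: integrates to 1, supported on [-1, 1]
    return 0.75 * (1.0 - u**2) * (np.abs(u) <= 1.0)

def mu_hat(x, X, h, dt):
    """Riemann-sum version of the estimator: for a path X of shape (n, d)
    observed on a grid of step dt and a bandwidth vector h, approximates
    (1/(T prod_l h_l)) int_0^T prod_m K((x_m - X_u^m)/h_m) du."""
    T = X.shape[0] * dt
    w = np.prod(K((x[None, :] - X) / h[None, :]), axis=1)
    return w.sum() * dt / (T * np.prod(h))

# Illustration on i.i.d. standard Gaussian "observations" (d = 3), whose
# density at the origin is (2*pi)**(-3/2) ~ 0.0635.
rng = np.random.default_rng(1)
X = rng.normal(size=(50000, 3))
est = mu_hat(np.zeros(3), X, np.array([0.5, 0.5, 0.5]), dt=1.0)
```

Since the step $dt$ cancels against $T = n\,dt$, the Riemann-sum estimator reduces to the usual sample-mean kernel estimator in this i.i.d. illustration.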
\section{Main results}\label{S: Main_results}
\subsection{Convergence rates for invariant density estimation}
We want to investigate the convergence rates for invariant density estimation. In order to determine the asymptotic behaviour of our estimator for $T \rightarrow \infty$, we study the variance of general additive functionals of $X$ in $d$ dimensions. To do so, we need some properties such as the exponential ergodicity of the process and a bound on the transition density. Such properties will be derived from our assumptions through the following lemmas, which we will prove in the appendix. \\
The following bounds on the transition density and on the transition semigroup hold true.
\begin{lemma}
Suppose that A1 - A3 hold. Then, for $T \ge 0$, there exists a transition density $p_{t} (x,y)$ such that, for any $t \in (0, T]$, there exist $c_0 > 0$ and $\lambda_0 > 0$ such that, for any pair of points $x, y \in \mathbb{R}^d$, we have
$$p_{t} (x,y) \le c_0 (t^{- \frac{d}{2}} e^{- \lambda_0 \frac{|y - x|^2}{t}} + \frac{t}{(t^\frac{1}{2} + |y - x|)^{d + \alpha}}).$$
\label{lemma: bound transition}
\end{lemma}
\begin{lemma}
Suppose that A1 - A3 hold. Then the process $X$ is exponentially ergodic and exponentially $\beta$ - mixing.
\label{lemma: beta mixing}
\end{lemma}
On the basis of the two previous lemmas we can prove the following bound on the variance, which is the heart of the study on the convergence rate.
\begin{proposition}
Suppose that A1 - A3 hold and let $f: \mathbb{R}^d \rightarrow \mathbb{R}$ be a bounded, measurable function with support $\mathcal{S}$ satisfying $|\mathcal{S}| < 1$. Then, there exists a constant $C$, independent of $f$, such that
\begin{itemize}
\item[$\bullet$] $Var (\int_0^T f (X_t) dt ) \le C T \left \| f \right \|_\infty^2 |\mathcal{S}|^2(1 + (\log(\frac{1}{|\mathcal{S}|}))^{2 - \frac{(1 + \alpha)}{2}} + \log(\frac{1}{|\mathcal{S}|}))\quad$ for $d=1$,
\item[$\bullet$] $Var (\int_0^T f (X_t) dt ) \le C T\left \| f \right \|^2_{\infty}|\mathcal{S}|^2 (1 + \log(\frac{1}{|\mathcal{S}|})) \quad$ for $d=2$,
\item[$\bullet$] $Var (\int_0^T f (X_t) dt ) \le C T \left \| f \right \|_\infty^2 |\mathcal{S}|^{1 + \frac{2}{d}} \quad$ for $d \ge 3$.
\end{itemize}
\label{prop: variance bound}
\end{proposition}
From the bias-variance decomposition in the anisotropic case (see Proposition 1 in \cite{Decomp}) we get the following bound:
$$\mathbb{E} [|\hat{\mu}_{h,T}(x) - \mu (x)|^2] \underset{\sim}{<} \sum_{l = 1}^d h_l^{2\beta_l} + T^{- 2} Var ( \frac{1}{\prod_{l=1}^d h_l} \int_0^T \prod_{m = 1}^d K (\frac{x_m - X_t^m}{h_m}) dt).$$
We want to bound the variance here above using Proposition \ref{prop: variance bound} on the function \\ $f(y) := \frac{1}{\prod_{l=1}^d h_l} \prod_{m = 1}^d K (\frac{x_m - y_m}{h_m})$.
As will be explained in the proof of Proposition \ref{prop: main result conv d>3} in Section \ref{S: Proof_Main}, for $d \ge 3$ we have
$$Var ( \frac{1}{\prod_{l=1}^d h_l} \int_0^T \prod_{m = 1}^d K (\frac{x_m - X_t^m}{h_m}) dt) \le c T(\prod_{l = 1}^d h_l)^{\frac{2}{d} - 1}, $$
which leads us to the following convergence rate.
\begin{proposition}
Suppose that A1 - A3 hold. If $\mu \in \mathcal{H}_d (\beta, \mathcal{L})$, then the estimator given in \eqref{eq: def estimator} satisfies, for $d \ge 3$, the following risk estimates:
\begin{equation}
\mathbb{E}[|\hat{\mu}_{h,T}(x) - \mu (x)|^2] \underset{\sim}{<} \sum_{l = 1}^d h_l^{2\beta_l} + T^{-1}(\prod_{l = 1}^d h_l)^{\frac{2}{d} - 1}.
\label{eq: rischio d ge 3}
\end{equation}
Setting $\frac{1}{\bar{\beta}} := \frac{1}{d} \sum_{l = 1}^d \frac{1}{\beta_l}$, the rate optimal choice $h_l = h_l (T) = (\frac{1}{T})^{\frac{\bar{\beta}}{\beta_l(2 \bar{\beta} + d -2)}}$ yields the convergence rate
$$\mathbb{E}[|\hat{\mu}_{h,T}(x) - \mu (x)|^2] \underset{\sim}{<} T^{- \frac{2\bar{\beta}}{2\bar{\beta}+ d - 2}}.$$
\label{prop: main result conv d>3}
\end{proposition}
We underline that, in the continuous case, the convergence rate found by Strauch in \cite{Strauch} for the estimation of the invariant density $\mu$ belonging to the anisotropic H{\"o}lder class $\mathcal{H}_d (\beta + 1, \mathcal{L})$ is $T^{- \frac{2(\overline{\beta + 1})}{2(\overline{\beta + 1})+ d - 2}}$, for $d \ge 3$. \\
In Proposition \ref{prop: main result conv d>3} we estimate $\mu$ over the anisotropic H{\"o}lder class $\mathcal{H}_d (\beta, \mathcal{L})$ and we therefore extend \cite{Strauch} to the jump-diffusion case: the convergence rate we obtain is the same as in the case without jumps, and it is also analogous to the rate first obtained by Reiss and Dalalyan in \cite{RD} for the estimation of the invariant density $\mu$ over the isotropic H{\"o}lder class $\mathcal{H}_d (\beta + 1, \mathcal{L})$, up to replacing the mean smoothness $\overline{\beta + 1}$ with $\beta + 1$, the common smoothness over the $d$ different dimensions. \\
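As an illustration of the rate optimal calibration in Proposition \ref{prop: main result conv d>3}, the following snippet computes the harmonic mean smoothness $\bar{\beta}$, the bandwidths $h_l(T)$ and the resulting rate for a hypothetical smoothness vector.

```python
import numpy as np

def rate_optimal_bandwidths(beta, T):
    """h_l(T) = T**(-beta_bar / (beta_l * (2*beta_bar + d - 2))) for d >= 3,
    where 1/beta_bar = (1/d) * sum_l 1/beta_l (harmonic mean smoothness)."""
    beta = np.asarray(beta, dtype=float)
    d = beta.size
    beta_bar = d / np.sum(1.0 / beta)
    h = T ** (-beta_bar / (beta * (2.0 * beta_bar + d - 2.0)))
    rate = T ** (-2.0 * beta_bar / (2.0 * beta_bar + d - 2.0))
    return h, beta_bar, rate

# Hypothetical anisotropic smoothness (beta_1, beta_2, beta_3) = (1, 2, 2)
h, beta_bar, rate = rate_optimal_bandwidths([1.0, 2.0, 2.0], T=1.0e4)
```

Note that the roughest direction ($\beta_1 = 1$ here) receives the smallest bandwidth, as needed to keep its bias contribution $h_1^{2\beta_1}$ of the same order as the others.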
\\
For $d =1$ and $d=2$ the bound on the variance changes. Therefore, the rate optimal choice of $h$ will be different as well, as explained in the following two propositions.
\begin{proposition}
Suppose that A1 - A3 hold. If $\mu \in \mathcal{H}_d (\beta, \mathcal{L})$, then the estimator given in \eqref{eq: def estimator} satisfies, for $d = 1$, the following risk estimates:
\begin{equation}
\mathbb{E} [|\hat{\mu}_{h,T}(x) - \mu(x)|^2] \underset{\sim}{<} h^{2 \beta} + \frac{1}{T}(1 + (\log(\frac{1}{h}))^{2 - \frac{(1 + \alpha)}{2}} + \log(\frac{1}{h})).
\label{eq: rischio d = 1}
\end{equation}
The rate optimal choice for $h$ yields the convergence rate
$$\mathbb{E} [|\hat{\mu}_{h,T}(x) - \mu (x)|^2] \underset{\sim}{<} \frac{(\log T)^{(2 - \frac{(1 + \alpha)}{2}) \lor 1}}{T}.$$
\label{prop: main result conv d=1}
\end{proposition}
It is worth remarking that Proposition \ref{prop: main result conv d=1} states the main difference between the cases with and without jumps. Indeed, while in the continuous case the convergence rate was $\frac{1}{T}$, it now depends on the degree of jump activity $\alpha$ and lies between $\frac{\log T}{T}$ and $\frac{(\log T)^\frac{3}{2}}{T}$. \\
We must point out that the convergence rate found above for the estimation of the invariant density of a stochastic differential equation with jumps in the one-dimensional setting is not necessarily the optimal one. In the continuous case other methods have been explored for such an estimation when $d=1$, such as the use of the diffusion local time to get the optimal rate $\frac{1}{T}$. We do not rule out the possibility of getting a sharper bound through the exploitation of other approaches in the jump case as well, finding therefore a convergence rate faster than the one presented in the previous proposition.
\begin{proposition}
Suppose that A1 - A3 hold. If $\mu \in \mathcal{H}_d (\beta, \mathcal{L})$, then the estimator given in \eqref{eq: def estimator} satisfies, for $d = 2$, the following risk estimates:
\begin{equation}
\mathbb{E} [|\hat{\mu}_{h,T}(x) - \mu (x)|^2] \underset{\sim}{<} h_1^{2 \beta_1} + h_2^{2 \beta_2} + \frac{1}{T} (1 + \log(\frac{1}{h_1 h_2})).
\label{eq: rischio d = 2}
\end{equation}
The rate optimal choice for $h$ yields the convergence rate
$$\mathbb{E} [|\hat{\mu}_{h,T}(x) - \mu (x)|^2] \underset{\sim}{<}\frac{\log T}{T}.$$
\label{prop: main result conv d=2}
\end{proposition}
Comparing our result with the convergence rate obtained in the continuous case over the isotropic H{\"o}lder class $\mathcal{H}_d (\beta + 1, \mathcal{L})$ in \cite{RD} and the anisotropic H{\"o}lder class $\mathcal{H}_d (\beta + 1, \mathcal{L})$ in \cite{Strauch}, which is $\frac{(\log T)^4}{T}$ in both works, one can observe that the convergence rate seems to be faster in the presence of jumps. \\
The reason is that in \cite{RD} the transition density is assumed to be upper bounded by $C (t^{- \frac{d}{2}} + t^{\frac{3d}{2}})$, which is a bound different from the one we get from Lemma \ref{lemma: bound transition}. \\
Had the term $t^{\frac{3d}{2}}$ been absent from their assumption, e.g. for a bounded drift, the convergence rate in the continuous case could have been improved to $\frac{\log T}{T}$, which is also what we get in the jump-diffusion case. \\
In \cite{Strauch}, Nash and Poincar\'e inequalities lead the author to an upper bound on the transition density which is analogous to the one found in \cite{RD} (see Remark 2.4 of \cite{Strauch}). \\
\\
From the pointwise estimation of the invariant density gathered in the three previous propositions we move to the estimation in $L^2(A)$, where $A$ is a compact set of $\mathbb{R}^d$. \\
In the sequel, for $A \subset \mathbb{R}^d$ compact and for $g \in L^2(A)$, $\left \| g \right \|^2_A := \int_A |g(x)|^2 dx$ denotes the squared $L^2$ norm with respect to the Lebesgue measure on $A$. \\
As a consequence of Propositions \ref{prop: main result conv d>3}, \ref{prop: main result conv d=1} and \ref{prop: main result conv d=2}, and of the fact that the constants which turn up in the proofs do not depend on $x$, the following corollary holds true:
\begin{corollary}
If $\mu \in \mathcal{H}_d (\beta, \mathcal{L})$, then for the rate optimal choice $h = h(T)$ provided in Propositions \ref{prop: main result conv d>3}, \ref{prop: main result conv d=1} and \ref{prop: main result conv d=2} we have the following risk estimates:
\begin{equation}
\mathbb{E}[\left \| \hat{\mu}_{h,T} - \mu \right \|^2_A] \underset{\sim}{<} V_d(T) :=
\begin{cases}
\frac{(\log T)^{(2 - \frac{(1 + \alpha)}{2}) \lor 1}}{T} \qquad \mbox{for } d=1 \\
\frac{\log T}{T} \qquad \mbox{for } d=2 \\
T^{- \frac{2\bar{\beta}}{2\bar{\beta}+ d - 2}} \qquad \mbox{for } d \ge 3.
\end{cases}
\label{eq: rates L2}
\end{equation}
\label{cor: rate L2}
\end{corollary}
The proof of Corollary \ref{cor: rate L2} will be given in Section \ref{S: Proof_Main}.
\subsection{Adaptive procedure}\label{S: Adaptive}
The question of density estimation belongs to the canonical framework of nonparametric statistics. \\
As detailed in Propositions \ref{prop: main result conv d=1} and \ref{prop: main result conv d=2}, neither the bandwidth nor the upper bound on the rate of convergence appearing on the right hand side of \eqref{eq: rischio d = 1} and \eqref{eq: rischio d = 2} depends on the unknown smoothness of the invariant density $\mu$; there is therefore no gain in implementing a data-driven bandwidth selection procedure for density estimation in the framework of continuous observations of a one or two dimensional diffusion process with jumps. Hence, throughout the sequel we restrict to the case $d \ge 3$. \\
It is clear from the previous section that for $d\ge 3$, instead, the proposed bandwidth choice depends on the regularity of the density $\mu$, which is unknown. This is why we study a data-driven bandwidth selection device. \\
We emphasize that the $d$ selected bandwidths are different, and this anisotropy property is important in our setting: the regularity can vary with the direction, and the bandwidth selection procedure has to be able to provide correspondingly different choices for $h_1$, $h_2$, ... , $h_d$. \\
To select $h$ adequately, we propose the following method, inspired by Goldenshluger and Lepski \cite{Adaptive}. \\
We define the set of candidate bandwidths $\mathcal{H}_T$ as
\begin{equation}
\mathcal{H}_T \subset \left \{ h = (h_1, ... , h_d)^T \in (0,1]^d : \, \frac{(\log T)^{2d}}{T^\frac{d}{3}} \le \prod_{l = 1}^d h_l \le ( \frac{1}{\log T})^{\frac{3 d}{d - 2}} \right \}.
\label{eq: def mathcal H}
\end{equation}
The conditions on $\prod_{l = 1}^d h_l$ we have just given are needed in order to apply the Talagrand inequality, on the basis of which we show our adaptive result. \\
We suppose moreover that the growth of $|\mathcal{H}_T|$ is at most polynomial in $T$, that is, there exists $c >0$ for which $|\mathcal{H}_T | \le c T^c$. \\
An example of $\mathcal{H}_T$ is the following set of candidate bandwidths:
\begin{equation}
\mathcal{H}_T := \left \{ h = (h_1, ... , h_d)^T \in (0,1]^d : \,h_i = \frac{1}{k_i} \, \mbox{ with } k_i \in \mathbb{N}, \, \frac{(\log T)^{2d}}{T^\frac{d}{3}} \le \prod_{l = 1}^d \frac{1}{k_l} \le ( \frac{1}{\log T})^{\frac{3 d}{d - 2}} \right \}.
\label{eq: example HT}
\end{equation}
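As a concrete illustration, the snippet below enumerates a dyadic subfamily of \eqref{eq: example HT}, taking $k_i = 2^{j_i}$ so that it remains an admissible set of the form \eqref{eq: def mathcal H}; the horizon $T$ and the cap on $j_i$ are arbitrary illustrative choices.

```python
import itertools
import numpy as np

def dyadic_candidates(T, d, j_max=63):
    """Dyadic bandwidths h_i = 2**(-j_i) whose product obeys
    (log T)**(2d) / T**(d/3) <= prod_l h_l <= (1/log T)**(3d/(d-2))."""
    lo = np.log(T) ** (2 * d) / T ** (d / 3.0)
    hi = (1.0 / np.log(T)) ** (3.0 * d / (d - 2.0))
    H = [js for js in itertools.product(range(j_max + 1), repeat=d)
         if lo <= 2.0 ** (-sum(js)) <= hi]
    return H, lo, hi

H, lo, hi = dyadic_candidates(T=1.0e30, d=3)
```

Note that the constraint is only nontrivial for rather large horizons (for $d = 3$ it requires roughly $T \ge (\log T)^{15}$), and that the cardinality of the resulting family grows only polynomially in $T$, in line with the requirement $|\mathcal{H}_T| \le c T^c$.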
As $h$ varies in $\mathcal{H}_T$, we obtain the following family of estimators, defined as in \eqref{eq: def estimator}:
$$\mathcal{F}(\mathcal{H}_T) := \left \{ \hat{\mu}_h (x) := \frac{1}{T} \int_0^T \mathbb{K}_h (X_u - x) du: \quad x \in \mathbb{R}^d, \quad h \in \mathcal{H}_T \right \}$$
where, for $y \in \mathbb{R}^d$, we set
\begin{equation}
\mathbb{K}_h (y) := \prod_{l = 1}^d \frac{1}{h_l} \prod_{m =1}^d K (\frac{y_m}{h_m}).
\label{eq: def mathbb K h}
\end{equation}
We aim at selecting an estimator from the family $\mathcal{F}(\mathcal{H}_T)$ in a completely data-driven way, based only on the observation of the continuous trajectory of the process X solution of \eqref{eq: model}. \\
We now turn to describing the selection procedure from $\mathcal{F}(\mathcal{H}_T)$, which is based on auxiliary estimators relying on the convolution operator. To the best of our knowledge, this device was introduced in \cite{Lep99} to circumvent the lack of ordering among a set of estimators in the anisotropic case, where an increase of the variance of an estimator does not imply a decrease of its bias. \\
For any bandwidths $h = (h_1, ... , h_d)^T$, $\eta = (\eta_1, ... , \eta_d)^T$ $\in \mathcal{H}_T$ and $x \in \mathbb{R}^d$, we define
$$\mathbb{K}_h * \mathbb{K}_\eta (x) := \prod_{j = 1}^d (K_{h_j} * K_{\eta_j}) (x_j) = \prod_{j = 1}^d \int_{\mathbb{R}} K_{h_j} (u - x_j) K_{\eta_j } (u) du.$$
We moreover define the kernel estimators
$$\hat{\mu}_{h, \eta} (x) := \frac{1}{T} \int_0^T (\mathbb{K}_h * \mathbb{K}_{\eta}) (X_u - x) du, \quad x \in \mathbb{R}^d.$$
We remark that, since the convolution is commutative, it holds that $\hat{\mu}_{h, \eta} = \hat{\mu}_{ \eta, h }$. \\
The proposed selection procedure relies on comparing the differences $\hat{\mu}_{h, \eta} - \hat{\mu}_{\eta}$. \\
We define
\begin{equation}
A(h) := \sup_{\eta \in \mathcal{H}_T} (\left \| \hat{\mu}_{h, \eta} - \hat{\mu}_{\eta} \right \|^2_A - V(\eta))_+,
\label{eq: def A(h)}
\end{equation}
with $$V(h) := \frac{k}{T} \, (\prod_{l = 1}^d h_l)^{\frac{2}{d} - 1}, $$
where $k$ is a sufficiently large numerical constant. In particular, it is sufficient to choose it larger than the constants $2 k_0^*$ and $2 k_0$ which appear in Lemma \ref{lemma: Talagrand}. Even if $k$ is not explicit, it can be calibrated by simulations, as done for example in Section 5 of \cite{Main adapt} through the implementation of a method inspired by Goldenshluger and Lepski \cite{Adaptive} and revisited more recently by Lacour, Massart and Rivoirard in \cite{Lacour et al}. \\ Heuristically, $A(h)$ is an estimate of the squared bias and $V(h)$ of the variance bound. It is worth noticing that the penalty term $V(h)$ used here comes from Proposition \ref{prop: variance bound}, applied with $f$ equal to the kernel function. \\
Thus, the selection is done by setting
\begin{equation}
\tilde{h} := \mbox{arg}\min_{ h \in \mathcal{H}_T} (A (h) + V(h)).
\label{eq: def h tilde}
\end{equation}
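A schematic implementation of the selection rule may help. The sketch below works on an i.i.d. sample in place of the time integral, uses a Gaussian kernel so that the convolutions $K_{h_j} * K_{\eta_j}$ are available in closed form (whereas the paper's kernel is compactly supported), discretizes the $L^2$ norm on $A$ over a grid, and takes a hypothetical value for the constant $k$.

```python
import numpy as np

def gauss(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def mu_hat(x_grid, X, h):
    """Kernel estimator evaluated on x_grid (m, d) from a sample X (n, d)."""
    u = (x_grid[:, None, :] - X[None, :, :]) / h
    return np.prod(gauss(u) / h, axis=2).mean(axis=1)

def select_bandwidth(X, H, x_grid, cell_vol, k=0.1):
    """Goldenshluger-Lepski-type rule: h_tilde = argmin_h A(h) + V(h), with
    A(h) = max_eta (||mu_hat_{h,eta} - mu_hat_eta||_A^2 - V(eta))_+ and the
    L^2 norm on A approximated by a Riemann sum over x_grid."""
    n, d = X.shape
    V = {tuple(h): k / n * np.prod(h) ** (2.0 / d - 1.0) for h in H}
    base = {tuple(h): mu_hat(x_grid, X, h) for h in H}

    def A(h):
        out = 0.0
        for eta in H:
            # Gaussian kernels: K_h * K_eta has scale sqrt(h^2 + eta^2)
            conv = mu_hat(x_grid, X, np.sqrt(h**2 + np.asarray(eta)**2))
            diff2 = np.sum((conv - base[tuple(eta)]) ** 2) * cell_vol
            out = max(out, max(diff2 - V[tuple(eta)], 0.0))
        return out

    return min(H, key=lambda h: A(np.asarray(h)) + V[tuple(h)])

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 3))
H = [np.full(3, s) for s in (0.3, 0.5, 0.8)]
g = np.linspace(-1.0, 1.0, 5)
x_grid = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
h_sel = select_bandwidth(X, H, x_grid, cell_vol=0.5**3)
```

As a sanity check on the trade-off: taking $k$ very large makes every bias proxy $A(h)$ vanish, so the rule degenerates to minimizing $V(h)$ alone and picks the largest candidate bandwidth.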
We introduce the following notation: $\mu_h := \mathbb{K}_h * \mu$, which is the function estimated without bias by $\hat{\mu}_h$, i.e. $\mathbb{E}[\hat{\mu}_h(x)] = \mu_h(x)$. Moreover, we define $\mu_{h, \eta} := \mathbb{K}_h * \mathbb{K}_\eta * \mu $ and a bias term
$B(h) := \left \| \mu_h - \mu \right \|^2_{\tilde{A}}$, where we have denoted by $\left \| \cdot \right \|_{\tilde{A}}$ the $L^2$ norm on the compact set $\tilde{A} := \left \{ \zeta \in \mathbb{R}^d : d(\zeta, A) \le 2 \sqrt{d} \right \}$. \\
The following result holds.
\begin{theorem}
Suppose that assumptions A1 - A3 hold. Then, we have
$$\mathbb{E}[\left \| \hat{\mu}_{\tilde{h}} - \mu \right \|^2_A] \le c_1 \inf_{h \in \mathcal{H}_T} (B(h) + V(h)) + c_1 e^{ - c_2 (\log T)^2},$$
for $c_1$ and $c_2$ positive constants.
\label{th: adaptive}
\end{theorem}
The bound stated in Theorem \ref{th: adaptive} shows that the estimator leads to an automatic trade-off between the bias $\left \| \mu_h - \mu \right \|^2_{\tilde{A}}$ and the variance $V(h)$, up to a multiplicative constant $c_1$. The last term is indeed negligible.
The proof of Theorem \ref{th: adaptive} is postponed to Section \ref{S: proof adaptive}. \\
\\
We recall that Proposition \ref{prop: main result conv d>3} provides us the rate optimal choice $h(T)$ for $d \ge 3$, which is $h_l(T) = (\frac{1}{T})^{\frac{\bar{\beta}}{\beta_l ( 2 \bar{\beta} + d - 2)}}$. \\
Using such a bandwidth we will prove in Section \ref{S: proof adaptive} the following theorem.
\begin{theorem}
Suppose that assumptions A1 - A3 hold and let $\mathcal{H}_T$ be defined by \eqref{eq: example HT}. Then, we have
$$\mathbb{E}[\left \| \hat{\mu}_{\tilde{h}} - \mu \right \|^2_A] \le c_1 (\frac{1}{T})^{\frac{2 \bar{\beta}}{ 2 \bar{\beta} + d - 2}} + c_1 e^{ - c_2 (\log T)^2},$$
for $c_1$ and $c_2$ positive constants.
\label{th: adaptive optimal}
\end{theorem}
Underlining once again that the second term on the right hand side of the equation above is negligible compared to the first, we see that the risk of the estimator built with the bandwidth provided by our selection procedure converges to zero fast. In particular, its convergence rate coincides with the optimal one provided by both \cite{RD} and \cite{Strauch} in the case without jumps.
\section{Proof of the convergence rates for invariant density estimation}\label{S: Proof_Main}
In this section we prove Propositions \ref{prop: main result conv d>3}, \ref{prop: main result conv d=1} and \ref{prop: main result conv d=2}, which give us the convergence rate for the estimation of the invariant density $\mu \in \mathcal{H}_d(\beta, \mathcal{L})$ in the three different situations: $d=1$, $d=2$ and $d \ge 3$. \\
We emphasize that all the constants that will appear in the proofs do not depend on the point $x$ considered. \\
We start by showing the bound on the variance stated in Proposition \ref{prop: variance bound}.
\subsection{Proof of Proposition \ref{prop: variance bound}}
\begin{proof}
We first consider the case $d \ge 3$.
We define the function $f_c := f - \mu (f)$. From the symmetry and the stationarity we have
$$Var(\int_0^T f(X_s) ds) = 2 \int_0^T \int_0^s \mathbb{E} [f_c(X_s)f_c(X_t)]dt ds = 2 \int_0^T \int_0^s \mathbb{E} [f_c(X_0)f_c(X_{s -t})]dt ds. $$
Applying the change of variable $u := s - t$, using Fubini's theorem and computing the integral, we obtain that the quantity above is equal to $2 \int_0^T (T - u) \mathbb{E} [f_c(X_0)f_c(X_u)] du$.
Let now $0 < \delta < D \le T$, where the specific choice of $\delta$ and $D$ will be given later. The idea is to treat the integral differently according to the interval in which $u$ lies. For this reason we decompose $\int_0^T (T - u) \mathbb{E} [f_c(X_0)f_c(X_u)] du$ as
\begin{equation}
\int_0^{\delta} (T - u) \mathbb{E} [f_c(X_0)f_c(X_u)] du + \int_{\delta}^D (T - u) \mathbb{E} [f_c(X_0)f_c(X_u)] du + \int_D^T (T - u) \mathbb{E} [f_c(X_0)f_c(X_u)] du.
\label{eq: spezzo integrale varianza}
\end{equation}
We now observe that
\begin{equation}
\int_0^\delta (T - u) \mathbb{E} [f_c(X_0)f_c(X_u)] du = \int_0^\delta (T - u) (\mathbb{E} [f(X_0)f(X_u)] - (\mu (f))^2) du \le cT \int_0^\delta | < P_{u} f , f >_{\mu} | du,
\label{eq: inizio rate}
\end{equation}
where we have denoted by $< ., . >_{\mu}$ the scalar product associated with the measure $\mu$, that is $< g, h >_{\mu}:= \int_{\mathbb{R}^d}g(x) h (x) \mu (x) dx$ for all $g, h \in L^2(\mu)$. In the last inequality we have moreover used that $(\mu (f))^2$ is nonnegative. \\
Now we use the Cauchy-Schwarz inequality and the fact that $P_{u}$ is a contraction on $L^2(\mu)$ to get
\begin{equation}
\int_0^{\delta}| < P_{u} f , f >_{\mu} | du \le \int_0^{\delta} \sqrt{\left \| P_{u} f \right \|^2_{\mu} \left \| f \right \|^2_{\mu}} du \le \int_0^{\delta} \sqrt{\left \| f \right \|^4_{\mu}} du \le \left \| f \right \|^2_{\infty} \mu (\mathcal{S}) \delta ,
\label{eq: conseq contraction}
\end{equation}
where in the last inequality we have used the estimate
$$\left \| f \right \|^2_{\mu} = \int_\mathcal{S} |f(x)|^2 \mu(x) dx \le \left \| f \right \|^2_{\infty} \mu (\mathcal{S}). $$
Concerning the second integral in \eqref{eq: spezzo integrale varianza}, we remark that \eqref{eq: inizio rate} still holds on $[\delta, D]$. We then estimate it through the definition of the transition semigroup:
\begin{equation}
\int_{\delta}^D | < P_{u} f , f >_{\mu} | du \le \int_{\delta}^D \int_{\mathbb{R}^d} |f(x)| \int_{\mathbb{R}^d} |f(y)| p_{u}(x,y) dy \mu (x) dx du.
\label{eq: stima density}
\end{equation}
We want to use the bound on the transition density given in Lemma \ref{lemma: bound transition}, which holds for $t \in [0, T]$ but is not uniform for large $t$. Nevertheless, for $t \ge 1$, we have
$$p_t(x,y) = \int_{\mathbb{R}^d} p_{t - \frac{1}{2}} (x, \zeta) p_\frac{1}{2} (\zeta,y) d\zeta \le c \int_{\mathbb{R}^d} p_{t - \frac{1}{2}} (x, \zeta)(e^{- 2 \lambda_0 |y - \zeta|^2} + \frac{1}{(\sqrt{\frac{1}{2}} +|y - \zeta|)^{d + \alpha}}) d\zeta \le $$
$$ \le c \int_{\mathbb{R}^d} p_{t - \frac{1}{2}} (x, \zeta) d\zeta \le c, $$
where the constant $c$ changes from line to line.
The right hand side of \eqref{eq: stima density} is therefore upper bounded by
$$ \int_{\delta}^D \int_{\mathbb{R}^d} |f(x)| c \int_{\mathbb{R}^d} |f(y)| (u^{- \frac{d}{2}} e^{- \lambda_0 \frac{|y - x|^2}{u}} + \frac{u}{(u^\frac{1}{2} + |y - x|)^{d + \alpha}} + 1) dy \, \mu (x) dx \, du \le $$
$$\le \int_{\delta}^D \int_{\mathcal{S}} |f(x)| c \int_{\mathcal{S}} |f(y)| (u^{- \frac{d}{2}} + u^{1 - \frac{(d + \alpha)}{2}} + 1) dy \, \mu (x) dx \, du \le c \left \| f \right \|^2_{\infty} \mu (\mathcal{S})|\mathcal{S}| \int_{\delta}^D (u^{- \frac{d}{2}} + u^{1 - \frac{(d + \alpha)}{2}} + 1) du,$$
where, in both integrals, we have bounded the absolute value of $f$ by its infinity norm. \\
Now we want to compute the integral with respect to the variable $u$. We observe that, since $d \ge 3$, $1 - \frac{d}{2}<0$. The exponent of the second term in the integral above, after integration, is $2 - \frac{d + \alpha}{2}$. It is positive if $d < 4 - \alpha$, which is possible only if $\alpha \in (0,1)$ and $d =3$, and negative otherwise. \\
Therefore, we have to consider the two different possibilities, according to whether the exponent is positive or negative. It follows
\begin{equation}
\int_{\delta}^D | < P_{u} f , f >_{\mu} | du \le c \left \| f \right \|^2_{\infty} \mu (\mathcal{S})|\mathcal{S}|(\delta^{1 - \frac{d}{2}} + \delta^{2 - \frac{d + \alpha}{2}} 1_{\left \{ d \ge 4 - \alpha \right \}} + D^{2 - \frac{d + \alpha}{2}} 1_{\left \{ d < 4 - \alpha \right \}} + D).
\label{eq: conseq bound density}
\end{equation}
We are now left to estimate the third integral of \eqref{eq: spezzo integrale varianza}. From Lemma \ref{lemma: beta mixing} it follows that it is upper bounded by
\begin{equation}
c T \int_{D}^T |< P_{u} f_c , f_c >_{\mu} | du \le c T \int_{D}^T \sqrt{\left \| P_{u} f_c \right \|^2_{L^1(\mu)} \left \| f_c \right \|^2_{\infty}} du \le c \, T \int_{D}^T \sqrt{(e^{- \rho u} \left \| f_c \right \|_\infty )^2 \left \| f_c \right \|^2_{\infty}} du.
\label{eq: int D T}
\end{equation}
We recall that $f_c (x) = f(x) - \mu (f)$, where $\mu (f) = \int_{\mathcal{S}} f(x) \mu (x) dx $. \\
Therefore we have
\begin{equation}
|f_c (x)| \le |f (x)| + |\mu(f)| \le |f (x)| + \left \| f \right \|_\infty \mu(\mathcal{S})
\label{eq: stima fc}
\end{equation}
and so
$$\left \| f_c \right \|_\infty \le \left \| f \right \|_\infty + \left \| f \right \|_\infty \mu(\mathcal{S}) \le c \left \| f \right \|_\infty, $$
where in the last inequality we have used the following estimate
\begin{equation}
|\mu (\mathcal{S})| \le \left \| \mu \right \|_\infty |\mathcal{S}| \le c |\mathcal{S}|
\label{eq: estim mu(S)}
\end{equation}
and the fact that $|\mathcal{S}| < 1$. \\
Therefore we get
\begin{equation}
c T \int_{D}^T |< P_{u} f_c , f_c >_{\mu} | du \le c \, T \left \| f \right \|_\infty^2 \int_{D}^T e^{- \rho u} du \le c \, T \left \| f \right \|_\infty^2 e^{- \rho D}.
\label{eq: conseq beta mixing}
\end{equation}
Plugging \eqref{eq: conseq contraction}, \eqref{eq: conseq bound density} and \eqref{eq: conseq beta mixing} into \eqref{eq: spezzo integrale varianza} we obtain
\begin{equation}
\begin{split}
|\int_0^T (T - u) \mathbb{E} [f_c(X_0)f_c(X_u)] du| \le c \, T \left \| f \right \|^2_\infty |\mathcal{S}|(\delta + |\mathcal{S}| \delta^{1 - \frac{d}{2}} + |\mathcal{S}| \delta^{2 - \frac{d + \alpha}{2}} 1_{\left \{ d \ge 4 - \alpha \right \}} \, + \\
+ \, |\mathcal{S}| D^{2 - \frac{d + \alpha}{2}} 1_{\left \{ d < 4 - \alpha \right \}} + |\mathcal{S}| D) + c \, T \left \| f \right \|^2_\infty e^{- \rho D}.
\end{split}
\label{eq: da bilanciare}
\end{equation}
We now choose $\delta$ and $D$ so that the bound above is as sharp as possible. Recalling that the exponents of $\delta$ in the second and third terms on the right hand side of \eqref{eq: da bilanciare} are negative, a small $\delta$ makes the first term small but the second and third ones large, and the opposite holds for a large $\delta$. In the same way, the behaviour of the last two terms on the right hand side of \eqref{eq: da bilanciare} relies on the choice of $D$. Aiming at balancing them, we define $\delta := |\mathcal{S}|^{\frac{2}{d}}$ and $D:= [\max (- \frac{2}{\rho} \log (|\mathcal{S}|), 1)] \land T$. Plugging these choices into \eqref{eq: da bilanciare}, if $T > - \frac{2}{\rho} \log (|\mathcal{S}|)$ we get
$$|\int_0^T (T - u) \mathbb{E} [f_c(X_0)f_c(X_u)] du| \le c \, T \left \| f \right \|^2_\infty (|\mathcal{S}|^{1 + \frac{2}{d}} + |\mathcal{S}|^{1 + \frac{4 - \alpha}{d}} +|\mathcal{S}|^2(\log \tfrac{1}{|\mathcal{S}|} )^{2 - \frac{d + \alpha }{2}} + |\mathcal{S}|^2 \log \tfrac{1}{|\mathcal{S}|} + |\mathcal{S}|^2 ), $$
which gives the desired result, remarking that both $2$ and $1 + \frac{4 - \alpha}{d}$ are always larger than $1 + \frac{2}{d}$ for $d \ge 3$ and $\alpha \in (0,2)$. \\
Otherwise, if $T \le - \frac{2}{\rho} \log (|\mathcal{S}|)$, by the definition of $D$ we obtain $D = T$.
We still have $|\mathcal{S}|^2 D^{2 - \frac{d + \alpha}{2}} 1_{\left \{ d < 4 - \alpha \right \}} \le c |\mathcal{S}|^2 (\log \tfrac{1}{|\mathcal{S}|} )^{2 - \frac{d + \alpha }{2}}$ and, moreover, the last integral, which we dealt with in \eqref{eq: conseq beta mixing}, is in this case over the degenerate interval $[T, T]$ and so its contribution is null. Hence, the result still holds true. \\
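For the reader's convenience, we spell out how the choice $\delta = |\mathcal{S}|^{\frac{2}{d}}$ produces the exponents appearing in the bound above:
$$|\mathcal{S}|\, \delta = |\mathcal{S}|^{1 + \frac{2}{d}}, \qquad |\mathcal{S}|^2 \delta^{1 - \frac{d}{2}} = |\mathcal{S}|^{2 + \frac{2}{d} - 1} = |\mathcal{S}|^{1 + \frac{2}{d}}, \qquad |\mathcal{S}|^2 \delta^{2 - \frac{d + \alpha}{2}} = |\mathcal{S}|^{2 + \frac{4}{d} - 1 - \frac{\alpha}{d}} = |\mathcal{S}|^{1 + \frac{4 - \alpha}{d}},$$
while for the choice of $D$ we have $e^{- \rho D} = e^{2 \log (|\mathcal{S}|)} = |\mathcal{S}|^2$ whenever $D = - \frac{2}{\rho} \log (|\mathcal{S}|)$. \\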
\\
We now consider the case $d=1$. We can proceed exactly as in the case $d \ge 3$, splitting the integral into three parts. Estimates \eqref{eq: conseq contraction} and \eqref{eq: conseq beta mixing} still hold while, using again Lemma \ref{lemma: bound transition} on the interval $[\delta, D]$, we have
$$|\int_{\delta}^D (T - u) \mathbb{E} [f_c(X_0)f_c(X_u)] du| \le c T \left \| f \right \|^2_{\infty} \mu (\mathcal{S})|\mathcal{S}| \int_{\delta}^D (u^{- \frac{1}{2}} + u^{1 - \frac{(1 + \alpha)}{2}} + 1) du \le $$
$$ \le c T \left \| f \right \|^2_{\infty} \mu (\mathcal{S})|\mathcal{S}|(D^\frac{1}{2} + D^{2 - \frac{(1 + \alpha)}{2}} + D),$$
where we have used that both exponents obtained after integration are now positive.
Altogether, in the case $d = 1$, using also \eqref{eq: estim mu(S)}, we therefore have
$$Var(\int_0^T f(X_s) ds) \le c T\left \| f \right \|^2_{\infty} (|\mathcal{S}| \delta + |\mathcal{S}|^2(D + D^{2 - \frac{(1 + \alpha)}{2}}) + e^{- \rho D}).$$
As before, we want to make the bound above as sharp as possible. This time there is no constraint on the smallness of $\delta$, so we can directly choose $\delta := 0$. Regarding $D$ we observe that, if $\alpha > 1$, then $2 - \frac{(1 + \alpha)}{2} < 1$; otherwise $2 - \frac{(1 + \alpha)}{2} \ge 1$. In both cases we have the same trade-off as in the case $d \ge 3$ and so we keep taking $D:= [\max (- \frac{2}{\rho} \log (|\mathcal{S}|), 1)] \land T$; it follows
$Var(\int_0^T f(X_s) ds) \le c T\left \| f \right \|^2_{\infty}|\mathcal{S}|^2(1 + (\log \tfrac{1}{|\mathcal{S}|})^{2 - \frac{(1 + \alpha)}{2}} + \log (\tfrac{1}{|\mathcal{S}|})). $ \\
\\
In the case $d =2$, estimates \eqref{eq: conseq contraction} and \eqref{eq: conseq beta mixing} remain valid. The bound on the transition density stated in Lemma \ref{lemma: bound transition} for $d =2$ yields
$$|\int_{\delta}^D (T - u) \mathbb{E} [f_c(X_0)f_c(X_u)] du| \le c T \left \| f \right \|^2_{\infty} \mu (\mathcal{S})|\mathcal{S}| \int_{\delta}^D (u^{-1} + u^{1 - \frac{(2 + \alpha)}{2}} + 1) du \le$$
$$ \le c T \left \| f \right \|^2_{\infty} \mu (\mathcal{S})|\mathcal{S}| (\log(\frac{D}{\delta}) + D^{2 - \frac{(2 + \alpha)}{2}} + D), $$
having remarked that $2 - \frac{(2 + \alpha)}{2} = 1 - \frac{\alpha}{2} > 0$ because $\alpha \in (0,2)$. This entails, using also \eqref{eq: estim mu(S)},
$$Var(\int_0^T f(X_s) ds) \le c T\left \| f \right \|^2_{\infty} (|\mathcal{S}| \delta + |\mathcal{S}|^2(\log(\frac{D}{\delta})+ D^{2 - \frac{(2 + \alpha)}{2}} + D) + e^{- \rho D}).$$
Aiming at balancing the terms, we choose again $\delta := |\mathcal{S}|$ and $D:= [\max (- \frac{2}{\rho} \log (|\mathcal{S}|), 1)] \land T$. \\
It follows that $\log(\frac{D}{\delta}) \le c | \log|\log |\mathcal{S}||| +c |\log(\frac{1}{|\mathcal{S}|})|$ and so, observing that $\log|\log (|\mathcal{S}|)|$ is negligible compared to $\log(\frac{1}{|\mathcal{S}|})$, the bound on the variance becomes
$$Var(\int_0^T f(X_s) ds) \le c T\left \| f \right \|^2_{\infty}|\mathcal{S}|^2 (1 + \log(\frac{1}{|\mathcal{S}|})),$$
where we have also used that $2 - \frac{(2 + \alpha)}{2}$ is always less than $1$ and so $(\log(\frac{1}{|\mathcal{S}|}))^{2 - \frac{(2 + \alpha)}{2}} < \log(\frac{1}{|\mathcal{S}|})$. \\
The proposition is therefore proved.
\end{proof}
\subsection{Proof of Proposition \ref{prop: main result conv d>3}}
\begin{proof}
Estimate \eqref{eq: rischio d ge 3} is a straightforward consequence of the bias-variance decomposition and of Proposition \ref{prop: variance bound} applied to $f(y) := \frac{1}{\prod_{l=1}^d h_l} \prod_{m = 1}^d K (\frac{x_m - y_m}{h_m})$, whose support $\mathcal{S}$ satisfies $|\mathcal{S}| \le c \prod_{l=1}^d h_l$ and which, by construction, satisfies $\left \| f \right \|_{\infty} \le c (\prod_{l=1}^d h_l)^{-1}$. \\
To find the optimal choice of $h$ we define $h_l (T) := (\frac{1}{T})^{a_l}$ for $l \in \left \{ 1, ... , d \right \}$ and we look for $a_1, ... , a_d$ such that the upper bound of the mean-squared error on the right hand side of \eqref{eq: rischio d ge 3} is as small as possible. \\
Plugging the definition of $h_l (T)$ into the bias-variance decomposition, this amounts to finding $a_1, ... , a_d$ achieving the balance, and so we have to solve the following system:
$$
\begin{cases}
\beta_i a_i = \beta_{i + 1} a_{i + 1} \qquad \forall i \in \left \{ 1, ... , d-1 \right \} \\
2 \beta_d a_d = 1 + (\frac{2}{d} - 1) \sum_{l = 1}^d a_l.
\end{cases}
$$
We observe that, as a consequence of the first $d-1$ equations, we can write $a_l = \frac{\beta_d}{\beta_l} a_d$ for each $l \in \left \{ 1, ... , d-1 \right \} $. Therefore, the last equation becomes
$$2 \beta_d a_d = 1 + (\frac{2}{d} - 1) \beta_d a_d \sum_{l = 1}^d \frac{1}{\beta_l}.$$
Defining $\displaystyle{\frac{1}{\bar{\beta}} := \frac{1}{d} \sum_{l = 1}^d \frac{1}{\beta_l}}$, it follows $\displaystyle{2 \beta_d a_d = 1 + (\frac{2}{d} - 1) \beta_d a_d \frac{d}{\bar{\beta}}}$, which yields
$$a_d = \frac{\bar{\beta}}{\beta_d (2 \bar{\beta} + (d - 2))}$$
and, replacing it in the system, we have
$$a_l = \frac{\bar{\beta}}{\beta_l (2 \bar{\beta} + (d - 2))} \quad \forall l \in \left \{ 1, ... , d \right \}.$$
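For completeness, we sketch the algebra behind the last step: since $(\frac{2}{d} - 1) \frac{d}{\bar{\beta}} = \frac{2 - d}{\bar{\beta}}$, the equation $2 \beta_d a_d = 1 + \frac{2 - d}{\bar{\beta}} \beta_d a_d$ rearranges to
$$\beta_d a_d \left( 2 - \frac{2 - d}{\bar{\beta}} \right) = 1, \quad \mbox{that is} \quad \beta_d a_d \, \frac{2 \bar{\beta} + d - 2}{\bar{\beta}} = 1,$$
from which the displayed values of $a_d$ and of the remaining $a_l = \frac{\beta_d}{\beta_l} a_d$ follow.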
Taking on the right hand side of \eqref{eq: rischio d ge 3} the rate optimal choice $h_l (T)= (\frac{1}{T})^{\frac{\bar{\beta}}{\beta_l (2 \bar{\beta} + (d - 2))}}$, we obtain the desired convergence rate.
\end{proof}
\subsection{Proof of Proposition \ref{prop: main result conv d=1}}
\begin{proof}
The upper bound of the mean-squared error follows from the bias-variance decomposition and from Proposition \ref{prop: variance bound}, recalling that for $f (X_t) := \frac{1}{h} K(\frac{x - X_t}{h})$ we have $\left \| f \right \|_{\infty} \le c h^{-1}$ and its support $\mathcal{S}$ satisfies $|\mathcal{S}| \le c h$. \\
Now, aiming at balancing the terms, we take $h := (\frac{1}{T})^a$, getting that the mean-squared error is upper bounded by
$$(\frac{1}{T})^{2 a \beta} + \frac{1}{T} + \frac{(a \log T)^{2 - \frac{(1 + \alpha)}{2}}}{T} + \frac{a \log T}{T}.$$
The larger $a$ is, the smaller $h$ is; it is enough to take $a$ such that $2 a \beta > 1$ to make the first two terms above negligible compared to the last ones, which gives the convergence rate $\frac{(\log T)^{2 - \frac{(1 + \alpha)}{2}}}{T}$ for $\alpha \le 1$ and $\frac{\log T}{T}$ for $\alpha > 1$.
\end{proof}
\subsection{Proof of Proposition \ref{prop: main result conv d=2}}
\begin{proof}
Again, \eqref{eq: rischio d = 2} follows naturally from the bias-variance decomposition and Proposition \ref{prop: variance bound}. \\
Regarding the convergence rate, we take again $h_l := (\frac{1}{T})^{a_l}$ for $l=1,2$. \\
It follows that $\log(\frac{1}{h_1 h_2}) = a_1 \log T + a_2 \log T$, and so the mean-squared error is upper bounded by
$$(\frac{1}{T})^{2 a_1 \beta_1} + (\frac{1}{T})^{2 a_2 \beta_2} + \frac{c \log T}{T}.$$
Taking $a_1$ and $a_2$ large enough to make the first two terms above negligible compared to the third, we get the convergence rate $\frac{\log T}{T}$.
\end{proof}
\subsection{Proof of Corollary \ref{cor: rate L2}}
\begin{proof}
It is a straightforward consequence of Propositions \ref{prop: main result conv d>3}, \ref{prop: main result conv d=1} and \ref{prop: main result conv d=2}, after having remarked that the constants appearing in all the previous propositions do not depend on the point $x$ considered. Indeed, such propositions yield
$$\mathbb{E}[\left \| \hat{\mu}_{h,T} - \mu \right \|^2_A] = \mathbb{E}[\int_A | \hat{\mu}_{h,T} (x) - \mu (x)|^2 dx] \le c \int_A c V_d (T) dx \le c |A| V_d (T).$$
\end{proof}
\section{Proof of the adaptive procedure}\label{S: proof adaptive}
The heart of the proof of Theorem \ref{th: adaptive} consists in finding an upper bound for the expected value of $A(h)$, which is stated in the following proposition.
\begin{proposition}
Suppose that assumptions A1 - A3 hold. Then, $\forall h \in \mathcal{H}_T$,
$$\mathbb{E}[A (h)] \le c_1 B(h) + c_1 e^{ - c_2 (\log T)^2}.$$
\label{prop: E[A(h)]}
\end{proposition}
Proposition \ref{prop: E[A(h)]} will be proven after the proofs of Theorems \ref{th: adaptive} and \ref{th: adaptive optimal}.
\subsection{Proof of Theorem \ref{th: adaptive}.}
\begin{proof}
From the triangle inequality it follows that, for all $h \in \mathcal{H}_T$,
\begin{equation}
\left \| \hat{\mu}_{\tilde{h}} - \mu \right \|^2 \le c( \left \| \hat{\mu}_{\tilde{h}} - \hat{\mu}_{h,\tilde{h}} \right \|^2 + \left \| \hat{\mu}_{h,\tilde{h}} - \hat{\mu}_h \right \|^2 + \left \| \hat{\mu}_h - \mu \right \|^2).
\label{eq: start adapt}
\end{equation}
By the definition \eqref{eq: def A(h)} of $A(h)$, the first and the second term of \eqref{eq: start adapt} are respectively upper bounded by $A(h) + V(\tilde{h})$ and $A(\tilde{h}) + V(h)$, where for the second term we have also used that $\hat{\mu}_{h,\tilde{h}} = \hat{\mu}_{\tilde{h}, h}$. Then, since $\tilde{h}$ has been defined in \eqref{eq: def h tilde} as the $h \in \mathcal{H}_T$ minimizing $A(h) + V(h)$, we clearly have $A(\tilde{h}) + V(\tilde{h}) \le A(h) + V(h)$. \\
Hence, for any $h \in \mathcal{H}_T$, we get
\begin{equation}
\left \| \hat{\mu}_{\tilde{h}} - \mu \right \|^2 \le c (A(h) + V(h) + \left \| \hat{\mu}_h - \mu \right \|^2).
\label{eq: adapt penultima}
\end{equation}
We want an upper bound for the expected value of the left hand side of \eqref{eq: adapt penultima}, so we need to evaluate $\mathbb{E} [\left \| \hat{\mu}_h - \mu \right \|^2] = \mathbb{E}[\int_A | \hat{\mu}_h (x) - \mu(x)|^2 dx]$. \\
From the standard bias-variance decomposition, recalling that $\mu_h = \mathbb{K}_h * \mu$ satisfies $\mathbb{E} [\hat{\mu}_h (x)] = \mu_h (x)$, we get
$$\mathbb{E}[\int_A | \hat{\mu}_h (x) - \mu(x)|^2 dx] = \int_A | \mu_h (x) - \mu(x)|^2 dx + \int_A \mathbb{E}[| \hat{\mu}_h (x) - \mu_h(x)|^2] dx.$$
Now, we can upper bound the first term on the right hand side above by enlarging the integration domain, getting
\begin{equation}
\int_A | \mu_h (x) - \mu(x)|^2 dx \le \int_{\tilde{A}} | \mu_h (x) - \mu(x)|^2 dx = B(h).
\label{eq: estim B(h)}
\end{equation}
Moreover, as a consequence of Proposition \ref{prop: variance bound} in the case $d \ge 3$, we obtain, as in Proposition \ref{prop: main result conv d>3},
\begin{equation}
\int_A \mathbb{E}[| \hat{\mu}_h (x) - \mu_h(x)|^2] dx = \int_A Var( \frac{1}{T \, \prod_{l = 1}^d h_l} \int_0^T \prod_{m = 1}^d K (\frac{x_m - X_u^m }{h_m}) du ) dx \le |A| \frac{c}{T} (\prod_{l = 1}^d h_l)^{\frac{2}{d} - 1}.
\label{eq: estim adap var}
\end{equation}
Comparing the upper bound given in \eqref{eq: estim adap var} with the definition of $V(h)$ and using also \eqref{eq: adapt penultima} and \eqref{eq: estim B(h)} we get, for each $h \in \mathcal{H}_T$,
$$\mathbb{E} [\left \| \hat{\mu}_h - \mu \right \|^2] \le c( B(h) + V(h)) + \mathbb{E}[A(h)].$$
Now, from Proposition \ref{prop: E[A(h)]} and the arbitrariness of the bandwidth $h$, it follows
$$\mathbb{E}[\left \| \hat{\mu}_{\tilde{h}} - \mu \right \|^2] \le c_1 \inf_{h' \in \mathcal{H}_T}(B(h') + V(h')) +c_1 e^{ - c_2 (\log T)^2},$$
as we wanted.
\end{proof}
As a consequence of Theorem \ref{th: adaptive}, considering the rate optimal choice $h_l(T) = (\frac{1}{T})^{\frac{\bar{\beta}}{ \beta_l (2 \bar{\beta} + d - 2)}}$ provided by Proposition \ref{prop: main result conv d>3}, we get the estimate stated in Theorem \ref{th: adaptive optimal}. Its proof relies on the fact that, by the construction in Proposition \ref{prop: main result conv d>3}, if the rate optimal bandwidth belongs to $\mathcal{H}_T$ then it clearly realizes $\inf_{h \in \mathcal{H}_T}(B (h) + V(h))$.
\subsection{Proof of Theorem \ref{th: adaptive optimal}}
\begin{proof}
We observe that, for the rate optimal choice $h(T)$ of the bandwidth, the conditions stated in the right hand side of \eqref{eq: def mathcal H}, which are $\frac{(\log T)^{2d}}{T^\frac{d}{3}} \le \prod_{l = 1}^d h_l \le ( \frac{1}{\log T})^{\frac{3 d}{d - 2}}$,
hold true.
Indeed,
$$ \prod_{l = 1}^d h_l (T) = (\frac{1}{T})^{\frac{\bar{\beta}}{ 2 \bar{\beta} + d - 2} \sum_{l = 1}^d \frac{1}{\beta_l}} = (\frac{1}{T})^{\frac{d}{ 2 \bar{\beta} + d - 2}}. $$
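The middle equality uses the definition of $\bar{\beta}$; explicitly,
$$\frac{\bar{\beta}}{2 \bar{\beta} + d - 2} \sum_{l = 1}^d \frac{1}{\beta_l} = \frac{\bar{\beta}}{2 \bar{\beta} + d - 2} \cdot \frac{d}{\bar{\beta}} = \frac{d}{2 \bar{\beta} + d - 2}.$$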
The upper bound condition is therefore
$$(\frac{1}{T})^{\frac{d}{ 2 \bar{\beta} + d - 2}} \le (\frac{1}{\log T})^{\frac{3d}{d - 2}},$$
which is true if and only if $\log T \le T^{\frac{d - 2}{3(2 \bar{\beta} + d - 2)}}$. Now we observe that $\frac{d - 2}{3(2 \bar{\beta} + d - 2)} > 0$ for $\bar{\beta} > 1 - \frac{d}{2}$, which is always true since $d \ge 3$. In particular we can write $\frac{d - 2}{3(2 \bar{\beta} + d - 2)} =: \gamma \in (0,1)$ and, since $\log T \le T^\gamma$ for $T$ large enough, we have $\prod_{l = 1}^d h_l (T) \le (\frac{1}{\log T})^{\frac{3d}{d - 2}}$. \\
In the same way it is
$$(\frac{1}{T})^{\frac{d}{ 2 \bar{\beta} + d - 2}} \ge \frac{(\log T)^{2 d}}{ T^{\frac{d}{3}}}.$$
By the same reasoning as above, it is true if $(\frac{1}{3} - \frac{1}{2 \bar{\beta} + d - 2}) \frac{1}{2} =: \gamma$ is positive, i.e. if $2 \bar{\beta} + d - 2 > 3$. \\
Since $\bar{\beta } > 1$ and $d \ge 3$, it turns out that $\gamma > 0$, as we wanted. \\
Up to considering $\tilde{h}_l(T) := \frac{1}{\lfloor T^{\frac{\bar{\beta}}{ \beta_l (2 \bar{\beta} + d - 2)}}\rfloor }$ instead of $h_l(T)$, which is asymptotically equivalent and leads to the same convergence rate, the rate optimal choice belongs to the set of candidate bandwidths $\mathcal{H}_T$ proposed in \eqref{eq: example HT}. \\
Now that $h(T) \in \mathcal{H}_T$, by the construction of the rate optimal choice in Proposition \ref{prop: main result conv d>3}, $\inf_{h \in \mathcal{H}_T} (B (h) + V (h))$ is clearly realized by it, and so the bound stated in Theorem \ref{th: adaptive} actually reads (see also Corollary \ref{cor: rate L2})
$$\mathbb{E}[\left \| \hat{\mu}_{\tilde{h}} - \mu \right \|^2_A] \le c_1 (\frac{1}{T})^{\frac{2 \bar{\beta}}{ 2 \bar{\beta} + d - 2}} + c_1 e^{ - c_2 (\log T)^2},$$
as we wanted.
\end{proof}
We have shown Theorem \ref{th: adaptive} using, as main tool, the bound on $\mathbb{E} [A(h)]$ stated in Proposition \ref{prop: E[A(h)]}. Its proof, as we will see in the following subsection, relies on Berbee's coupling method as in Viennet \cite{Viennet} and on a version of Talagrand's inequality given in Klein and Rio \cite{KR}.
\subsection{Proof of Proposition \ref{prop: E[A(h)]}}
\begin{proof}
To find a bound for $\mathbb{E}[A(h)]$, for each $h \in \mathcal{H}_T$ we want to use the Talagrand inequality, which applies to independent random variables. Therefore, we start by introducing blocks which are mutually independent. We do it through Berbee's coupling method, as done in Viennet \cite{Viennet}, Proposition 5.1 and its proof, p. 484. \\
We assume that $T = 2 p_T q_T$, with $p_T$ an integer and $q_T$ a real number to be chosen. We split the initial process $X = (X_t)_{t \in [0,T]}$ into $2 p_T$ processes of length $q_T$: for each $j \in \left \{ 1, ... , p_T \right \} $ we consider \\
$X^{j,1} := (X_t)_{t \in [2(j - 1)q_T, (2j - 1)q_T]}$ and $X^{j,2} := (X_t)_{t \in [(2j-1) q_T, 2 j q_T]}$. \\
Then, there exists a process $(X^*_t)_{t \in [0,T]}$ satisfying the following properties:
\begin{enumerate}
\item For $j \in \left \{ 1, ... , p_T \right \}$ the processes $X^{j,1}$ and $X^{* \,j,1} := (X^*_t)_{t \in [2(j - 1)q_T, (2j - 1)q_T]}$ have the same distribution and so have the processes $X^{j,2}$ and $X^{* \, j,2} := (X^*_t)_{t \in [(2j-1) q_T, 2 j q_T]}$.
\item For $j \in \left \{ 1, ... , p_T \right \}$, $\mathbb{P}(X^{j,1} \neq X^{* \,j,1} ) \le \beta_X(q_T)$ and $\mathbb{P}(X^{j,2} \neq X^{* \,j,2} ) \le \beta_X(q_T)$, where $\beta_X$ is the $\beta$-mixing coefficient of the process $X$ as in \eqref{eq: def beta}.
\item For each $k \in \left \{ 1,2 \right \}$, $X^{* \,1,k}, ... , X^{* \,p_T,k}$ are independent.
\end{enumerate}
We denote by $\hat{\mu}^*_h$ the estimator computed using $X^*_t$ instead of $X_t$ and we write $\hat{\mu}^*_h = \frac{1}{2}(\hat{\mu}^{* (1)}_h + \hat{\mu}^{*(2)}_h)$ to separate the part coming from $X^{* \,.,1}$ (superscript $(1)$) from that coming from $X^{* \,.,2}$ (superscript $(2)$), with $\hat{\mu}^{* (1)}_h := \frac{1}{ p_T q_T} \sum_{j = 1}^{p_T} \int_{2(j - 1) q_T}^{(2 j - 1) q_T}\mathbb{K}_h (X^*_u - x) du $ and, analogously, $\hat{\mu}^{* (2)}_h := \frac{1}{ p_T q_T} \sum_{j = 1}^{p_T} \int_{(2 j - 1) q_T}^{2 j q_T}\mathbb{K}_h (X^*_u - x) du $. \\
Moreover, we naturally define $\hat{\mu}^{*}_{h, \eta} := \mathbb{K}_\eta * \hat{\mu}^*_h$, which can again be written as $\frac{1}{2}(\hat{\mu}^{* (1)}_{h, \eta} + \hat{\mu}^{*(2)}_{h, \eta})$, separating the contributions of $X^{* \,.,1}$ and $X^{* \,.,2}$. \\
With this background we can evaluate $\mathbb{E}[A(h)]$.
We recall that, as defined in \eqref{eq: def A(h)}, $$A(h) := \sup_{\eta \in \mathcal{H}_T} (\left \| \hat{\mu}_{h, \eta} - \hat{\mu}_{\eta} \right \|^2 - V(\eta))_+.$$
Now we can see $\hat{\mu}_{h, \eta} - \hat{\mu}_{\eta}$ as a sum of terms which we treat separately:
$$\hat{\mu}_{h, \eta} - \hat{\mu}_{\eta} = (\hat{\mu}_{h, \eta} - \hat{\mu}_{h, \eta}^*) + (\hat{\mu}_{h, \eta}^* - \mu_{h, \eta}) + (\mu_{h, \eta} - \mu_\eta) + (\mu_\eta - \hat{\mu}_{\eta}^*) + (\hat{\mu}_{\eta}^* - \hat{\mu}_{\eta}). $$
As a consequence of the triangle inequality and of the definition of $A(h)$, the following estimate holds true:
$$A(h) \le \sup_{\eta \in \mathcal{H}_T} [\left \| \hat{\mu}_{h, \eta} - \hat{\mu}_{h, \eta}^* \right \|^2 + (\left \| \hat{\mu}_{h, \eta}^* - \mu_{h, \eta} \right \|^2 - \frac{V(\eta)}{2})_+ + \left \| \mu_{h, \eta} - \mu_\eta \right \|^2 + (\left \| \mu_\eta - \hat{\mu}_{\eta}^* \right \|^2 - \frac{V(\eta)}{2})_+ + \left \| \hat{\mu}_{\eta}^* - \hat{\mu}_{\eta} \right \|^2]=$$
$$= \sup_{\eta \in \mathcal{H}_T}[ \sum_{j = 1}^5 I_j^{h,\eta}].$$
We start by considering $I_5^{h, \eta}$. We define the set
$$\Omega^* := \left \{ X_t = X^*_t \quad \forall t \in [0,T] \right \}.$$
As a consequence of the second property of the process $X^*$ and of the $\beta$-mixing with exponential decay shown in Lemma \ref{lemma: beta mixing} we get, recalling that $2 p_T q_T = T$ (with $q_T$ and $p_T$ to be chosen),
\begin{equation}
\mathbb{P}(\Omega^{*c}) \le 2 p_T \beta_X(q_T) \le c \frac{T}{q_T}e^{- \gamma q_T}.
\label{eq: P(Omega compl)}
\end{equation}
From the definition of $\hat{\mu}_{h}^*$ and Jensen's inequality it follows
$$\left \| \hat{\mu}_{\eta}^* - \hat{\mu}_{\eta} \right \|^2 = \int_A (\frac{1}{T} \int_0^T [\mathbb{K}_\eta(X_t - x) - \mathbb{K}_\eta(X^*_t - x)] dt)^2 dx \, 1_{\Omega^{* c}} \le $$
\begin{equation}
\le c \int_A \frac{1}{T^2} T (\int_0^T [\mathbb{K}^2_\eta(X_t - x) + \mathbb{K}^2_\eta(X^*_t - x)] dt ) dx \, 1_{\Omega^{* c}} \le c \left \| \mathbb{K}_\eta \right \|_\infty^2 1_{\Omega^{* c}}.
\label{eq: bound muh - muh*}
\end{equation}
By the definition \eqref{eq: def mathbb K h} we get that, $\forall h \in \mathcal{H}_T$, $\left \| \mathbb{K}_h \right \|_\infty \le (\prod_{l = 1}^d h_l)^{- 1}$. \\
We recall that, by the definition \eqref{eq: def mathcal H} of the set $\mathcal{H}_T$, every $h \in \mathcal{H}_T$ satisfies
$\prod_{l = 1}^d h_l > \frac{(\log T)^{2 d}}{T^{\frac{d}{3}}}$ and so
$$\left \| \mathbb{K}_\eta \right \|_\infty^2 < \frac{1}{(\prod_{l = 1}^d \eta_l)^2} \le \frac{T^{\frac{2 d}{3}}}{(\log T)^{4 d}}.$$
Replacing this bound in \eqref{eq: bound muh - muh*} it follows
$$ \sup_{\eta \in \mathcal{H}_T} \left \| \hat{\mu}_{\eta}^* - \hat{\mu}_{\eta} \right \|^2 \le c \frac{T^{\frac{2 d}{3}}}{(\log T)^{4 d}}1_{\Omega^{* c}}. $$
We take its expectation and use \eqref{eq: P(Omega compl)}, getting a term which depends on $q_T$, a real number still to be chosen. From the arbitrariness of $q_T$ we obtain a convergence to zero as fast as we want, as $T$ goes to $\infty$. Indeed, taking $q_T := (\log T)^2 $ yields, for all $h \in \mathcal{H}_T$,
\begin{equation}
\mathbb{E}[ |\sup_{\eta \in \mathcal{H}_T} I_5^{h, \eta} |] = \mathbb{E}[ |\sup_{\eta \in \mathcal{H}_T} \left \| \hat{\mu}_{\eta}^* - \hat{\mu}_{\eta} \right \|^2|] \le c \frac{T^{\frac{2 d}{3} + 1}}{(\log T)^{4 d + 2}} e^{- \gamma (\log T)^2}.
\label{eq: I5}
\end{equation}
Regarding $\sup_{\eta \in \mathcal{H}_T} I_1^{h , \eta}$, we estimate it through \eqref{eq: I5} and the following lemma, which will be proven in the appendix. \\
\begin{lemma}
Let $f : \mathbb{R}^d \rightarrow \mathbb{R}$ be a bounded, measurable function with support $\mathcal{S}$ satisfying \\ $diam(\mathcal{S}) \le 2 \sqrt{d}$; let $\tilde{A}$ be a compact set in $\mathbb{R}^d$ such that $A \subset \tilde{A}$ and $\tilde{A} = \left \{ \zeta : d(\zeta, A) \le 2 \sqrt{d} \right \}$, and let $g$ be a function in $L^2(\tilde{A})$. Then,
$$\left \| f*g \right \|_A \le \left \|f \right \|_{1, \mathbb{R}^d} \left \| g \right \|_{2, \tilde{A}},$$
where we have denoted by $\left \| . \right \|_A$ the usual $L^2$ norm on $A$, by $\left \| . \right \|_{1 , \mathbb{R}^d}$ the $L^1$ norm on $\mathbb{R}^d$ and by $\left \| . \right \|_{2,\tilde{A}}$ the $L^2$ norm on $\tilde{A}$.
\label{lemma: convolution}
\end{lemma}
We recall that $\hat{\mu}_{h, \eta} = \mathbb{K}_\eta * \hat{\mu}_h$ and $\hat{\mu}^*_{h, \eta} = \mathbb{K}_\eta * \hat{\mu}^*_h$. Therefore, remarking that $diam(K) \le 2$, so that by the definition of $\mathbb{K}_\eta$ its support satisfies $diam(supp(\mathbb{K}_\eta)) \le 2 \sqrt{d}$, we can use Lemma \ref{lemma: convolution}, which yields
$$\sup_{\eta \in \mathcal{H}_T} I_1^{h, \eta} = \sup_{\eta \in \mathcal{H}_T} \left \| \mathbb{K}_\eta *(\hat{\mu}_h - \hat{\mu}^*_h) \right \|^2 \le \sup_{\eta \in \mathcal{H}_T} \left \| \mathbb{K}_\eta \right \|_{1, \mathbb{R}^d}^2 \left \| \hat{\mu}_h - \hat{\mu}^*_h \right \|^2_{\tilde{A}}.$$
Taking the expected value, using that $\left \| \mathbb{K}_\eta \right \|_{1, \mathbb{R}^d} \le c $ for all $\eta \in \mathcal{H}_T$ and the equation \eqref{eq: I5}, and remarking that the dependence on the integration set is hidden in the constant $c$, in which this time $|\tilde{A}|$ appears instead of $|A|$, we get
\begin{equation}
\mathbb{E}[ |\sup_{\eta \in \mathcal{H}_T} I_1^{h, \eta} |] \le c \frac{T^{\frac{2 d}{3} + 1}}{(\log T)^{4 d + 2}} e^{- \gamma (\log T)^2}.
\label{eq: I1}
\end{equation}
We use Lemma \ref{lemma: convolution} again to study $\sup_{\eta \in \mathcal{H}_T} I_3^{h, \eta}$, recalling that $\mu_{h, \eta}= \mathbb{K}_\eta * \mu_h$ and $\mu_\eta = \mathbb{K}_\eta * \mu$. It yields
\begin{equation}
\sup_{\eta \in \mathcal{H}_T} I_3^{h, \eta} = \sup_{\eta \in \mathcal{H}_T} \left \| \mathbb{K}_\eta *(\mu_h - \mu) \right \|^2 \le \sup_{\eta \in \mathcal{H}_T} \left \| \mathbb{K}_\eta \right \|^2_{1, \mathbb{R}^d} \left \| \mu_h - \mu \right \|^2_{\tilde{A}} \le c \left \| \mu_h - \mu \right \|^2_{\tilde{A}} = c B(h).
\label{eq: I3}
\end{equation}
We are left to study $I_2^{h,\eta}$ and $I_4^{h,\eta}$, for which we need the following lemma, which will be shown right after the proof of this proposition. \\
\begin{lemma}
For $i = 1,2$, there exist some positive constants $c_1^*$, $c_2^*$, $c_3^*$ and a constant $k_0^*$ such that, for any $\bar{k} \ge k_0^*$,
\begin{equation}
\mathbb{E} [\sup_{\eta \in \mathcal{H}_T} (\left \| \mu_\eta - \hat{\mu}^{* (i)}_\eta \right \|^2 - \frac{\bar{k}}{T} (\prod_{l = 1}^d \eta_l)^{\frac{2}{d} - 1})_+] \le c_1^* T^{c_2^*} e^{- c_3^*(\log T)^2 }.
\label{eq: Talagrand per I4}
\end{equation}
Moreover, there exists $k_0$ such that, for any $\bar{k} \ge k_0$,
\begin{equation}
\mathbb{E}[\sup_{\eta \in \mathcal{H}_T} (\left \| \mu_{h,\eta} - \hat{\mu}^{* (i)}_{h,\eta} \right \|^2 - \frac{\bar{k}}{T} (\prod_{l = 1}^d \eta_l)^{\frac{2}{d} - 1})_+] \le c_1^* T^{c_2^*} e^{- c_3^*(\log T)^2 }.
\label{eq: Talagrand per I2}
\end{equation}
\label{lemma: Talagrand}
\end{lemma}
Concerning $I_4^{h, \eta}$, observe that $\mu_\eta - \hat{\mu}^{*}_\eta = \frac{1}{2}(2 \mu_\eta - \hat{\mu}^{* (1)}_\eta - \hat{\mu}^{* (2)}_\eta )$. Hence, from the triangle inequality and the definition of the positive part function, we get
$$I_4^{h, \eta} \le c(\left \| \mu_\eta - \hat{\mu}_{\eta}^{* (1)} \right \|^2 - \frac{V(\eta)}{2})_+ + c (\left \| \mu_\eta - \hat{\mu}_{\eta}^{* (2)} \right \|^2 - \frac{V(\eta)}{2})_+.$$
From \eqref{eq: Talagrand per I4}, choosing $k$ in the definition of $V(\eta)$ big enough that $\frac{k}{2} > (k_0^* \lor k_0)$, we get
\begin{equation}
\mathbb{E} [\sup_{\eta \in \mathcal{H}_T} I_4^{h, \eta}] \le c_1 T^{c_2} e^{- c_3(\log T)^2 }.
\label{eq: I4}
\end{equation}
In the same way, remarking that
$$I_2^{h, \eta} \le c(\left \| \hat{\mu}_{h, \eta}^{* (1)} - \mu_{h,\eta} \right \|^2 - \frac{V(\eta)}{2})_+ + c (\left \| \hat{\mu}_{h, \eta}^{* (2)} - \mu_{h,\eta} \right \|^2 - \frac{V(\eta)}{2})_+$$
and using \eqref{eq: Talagrand per I2} it follows
\begin{equation}
\mathbb{E} [\sup_{\eta \in \mathcal{H}_T} I_2^{h, \eta}] \le c_1 T^{c_2} e^{- c_3(\log T)^2 }.
\label{eq: I2}
\end{equation}
From \eqref{eq: I5}, \eqref{eq: I1}, \eqref{eq: I3}, \eqref{eq: I4} and \eqref{eq: I2} we obtain, for any $h \in \mathcal{H}_T$,
$$\mathbb{E}[A (h)] \le c \frac{T^{\frac{2 d}{3} + 1}}{(\log T)^{4 d + 2}} e^{- \gamma (\log T)^2} + c B(h) + c_1 T^{c_2} e^{- c_3(\log T)^2 } \le c B(h) + c_1 e^{- c_2(\log T)^2 },$$
as we wanted.
\end{proof}
To conclude the proof of the adaptive procedure we need to show Lemma \ref{lemma: Talagrand}, whose core is the use of the Talagrand inequality. First of all, we recall the following version of the Talagrand inequality, which has been stated as Lemma 2 in \cite{Main adapt} and which is a straightforward consequence of the Talagrand inequality given in Klein and Rio \cite{KR}.
\begin{lemma}
Let $T_1, \dots , T_p$ be independent random variables with values in some Polish space $\mathcal{X}$, $\mathcal{R}$ a countable class of measurable functions from $\mathcal{X}$ into $[-1, 1]$ and
$v_p (r) := \frac{1}{p} \sum_{j = 1}^p [r (T_j) - \mathbb{E}[r(T_j)]]. $
Then,
\begin{equation}
\mathbb{E}[(\sup_{r \in \mathcal{R}} |v_p(r)|^2 - 2H^2)_+] \le c (\frac{v}{p} e^{- c \frac{p H^2}{v}} + \frac{M^2}{p^2} e^{- c \frac{p H}{M}}),
\label{eq: Talagrand in Klein Rio}
\end{equation}
with $c$ a universal constant and where
$$\sup_{r \in \mathcal{R}} \left \| r \right \|_\infty \le M, \quad \mathbb{E}_b[\sup_{r \in \mathcal{R}}|v_p (r)|] \le H, \quad \sup_{r \in \mathcal{R}} \frac{1}{p} \sum_{j = 1}^p Var(r (T_j)) \le v.$$
\label{lemma: Talagrand in Klein Rio}
\end{lemma}
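Although the lemma is purely theoretical, the roles of $M$, $H$ and $v$ can be illustrated numerically; in the following sketch the finite class $\mathcal{R}$ and the uniform variables are toy choices, unrelated to the estimators used in the proof, and the left hand side of \eqref{eq: Talagrand in Klein Rio} is estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 2000                                   # number of independent variables
funcs = [np.sin, np.cos, np.tanh]          # a finite class R of [-1,1]-valued maps

# approximate E[r(T)] and Var(r(T)) for T ~ Uniform(-1, 1) on a fine grid
grid = np.linspace(-1.0, 1.0, 200_001)
means = {f: f(grid).mean() for f in funcs}

M = 1.0                                    # sup_r ||r||_inf <= 1 for this class
v = max(f(grid).var() for f in funcs)      # sup_r Var(r(T)) <= v

# Monte Carlo estimate of sup_r |v_p(r)| over many samples of (T_1, ..., T_p)
reps = 400
sups = np.empty(reps)
for k in range(reps):
    T = rng.uniform(-1.0, 1.0, size=p)
    sups[k] = max(abs(f(T).mean() - means[f]) for f in funcs)
H = sups.mean()                            # an admissible H >= E[sup_r |v_p(r)|]

# left hand side of the inequality: E[(sup_r |v_p(r)|^2 - 2 H^2)_+]
lhs = np.mean(np.maximum(sups**2 - 2.0 * H**2, 0.0))
print(lhs, v / p, M**2 / p**2)             # lhs is nonnegative and small
```

The observed value of the left hand side is of the same small order as the deviation terms $v/p$ and $M^2/p^2$ appearing on the right hand side.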
\subsection{Proof of Lemma \ref{lemma: Talagrand}}
\begin{proof}
Since the two cases $i = 1$ and $i = 2$ are similar, we study only one of them. We start by proving \eqref{eq: Talagrand per I4}; the proof of inequality \eqref{eq: Talagrand per I2} follows the same lines.
We first observe that
\begin{equation}
\mathbb{E} [\sup_{\eta \in \mathcal{H}_T} (\left \| \mu_\eta - \hat{\mu}^{* (1)}_\eta \right \|_A^2 - \frac{\bar{k}}{T} (\prod_{l = 1}^d \eta_l)^{\frac{2}{d} - 1})_+] \le \sum_{\eta \in \mathcal{H}_T} \mathbb{E} [(\left \| \mu_\eta - \hat{\mu}^{* (1)}_\eta \right \|_A^2 - \frac{\bar{k}}{T} (\prod_{l = 1}^d \eta_l)^{\frac{2}{d} - 1})_+].
\label{eq: start Talagrand}
\end{equation}
Our goal is now to find a bound for the right hand side of the inequality above using the version of the Talagrand inequality gathered in Lemma \ref{lemma: Talagrand in Klein Rio}. To do so, we need to introduce some notation. \\
We observe that $\left \| \mu_\eta - \hat{\mu}^{* (1)}_\eta \right \|_A^2 = \sup_{r, \left \| r \right \| = 1} < \mu_\eta - \hat{\mu}^{* (1)}_\eta,r>^2$, and that the supremum can be taken over a countable dense set of functions $r$ such that $\left \| r \right \| = 1$; let us denote this set by $\mathcal{B}(1)$. \\
We define
$$T_j (z) := \frac{1}{q_T} \int_{(2 j - 1) q_T}^{2j q_T} \mathbb{K}_\eta (X_t^{* j, 1} - z) dt; \quad r(T_j) := \int_{A} T_j (z) r (z) dz.$$
Thus
$$v_{p_T} (r) = <\hat{\mu}^{* (1)}_\eta - \mu_\eta,r> = \frac{1}{p_T} \sum_{j = 1}^{p_T} [r (T_j)- \mathbb{E} [r (T_j)]]$$
is a centered empirical process with independent variables
$$\psi_r (X^{* j, 1}) := r (T_j)- \mathbb{E}[r (T_j)] = \int_{A} (\frac{1}{q_T} \int_{(2 j - 1) q_T}^{2j q_T} [\mathbb{K}_\eta (X_t^{* j, 1} - z) - \mathbb{E} [\mathbb{K}_\eta (X_t^{* j, 1} - z)]] dt) r(z) dz, $$
to which we want to apply Talagrand inequality \eqref{eq: Talagrand in Klein Rio}. Therefore, we have to compute $M$, $H$ and $v$ as defined in Lemma \ref{lemma: Talagrand in Klein Rio}.
We start with the computation of $M$. For any $r \in \mathcal{B}(1)$, using the definition of $r$ and the Cauchy-Schwarz inequality, we have
$$|\int_A T_j(z) r (z) dz| \le (\int_A T_j^2 (z) dz )^\frac{1}{2}.$$
Now, from the definition of $T_j$ and Jensen's inequality, it follows
$$\int_A T_j^2 (z) dz \le \int_A \frac{1}{q_T^2} q_T \int_{(2 j - 1) q_T}^{2j q_T} \mathbb{K}^2_\eta (X_t^{* j, 1} - z) dt dz \le c \left \| \mathbb{K}_\eta \right \|^2_\infty |\mathcal{S}| \le c (\prod_{l = 1}^d \eta_l)^{-2} (\prod_{l = 1}^d \eta_l) = c(\prod_{l = 1}^d \eta_l)^{-1},$$
where we have also used that the support of $\mathbb{K}_\eta$ is $\mathcal{S}$, whose size is $\prod_{l = 1}^d \eta_l$. Hence,
\begin{equation}
|\int_A T_j(z) r (z) dz| \le c(\prod_{l = 1}^d \eta_l)^{-\frac{1}{2}} =: M.
\label{eq: computation M}
\end{equation}
Regarding the computation of $H$, from the definition of $v_{p_T} (r)$ and the fact that the random variables $\psi_r (X^{* j, 1})$ are centered and independent, it follows
$$\mathbb{E} [\sup_{r \in \mathcal{B}(1)}|v_{p_T}(r)|^2] = \mathbb{E}_b [\left \| \mu_\eta - \hat{\mu}^{* (1)}_\eta \right \|_A^2] = \int_A Var(\frac{1}{p_T} \sum_{j = 1}^{p_T}\frac{1}{q_T} \int_{(2 j - 1) q_T}^{2j q_T} (\mathbb{K}_\eta (X_t^{* j, 1} - z) - \mathbb{E} [\mathbb{K}_\eta (X_t^{* j, 1} - z)]) dt) dz = $$
$$= \int_A \frac{1}{p_T} Var(\frac{1}{q_T} \int_{(2 j - 1) q_T}^{2j q_T} \mathbb{K}_\eta (X_t^{* j, 1} - z) dt) dz \le c |A| \frac{1}{p_T} \frac{1}{q_T}(\prod_{l = 1}^d \eta_l)^{\frac{2}{d} - 1}, $$
where in the last inequality we have used the estimation for the variance in the case $d \ge 3$ gathered in Proposition \ref{prop: variance bound}, considering that taking the Kernel function as $f$ we have that its support $\mathcal{S}$ is such that $|\mathcal{S}| \le c (\prod_{l = 1}^d \eta_l)$. It yields
\begin{equation}
\mathbb{E} [\sup_{r \in \mathcal{B}(1)}|v_{p_T}(r)|^2] \le \frac{c}{T} (\prod_{l = 1}^d \eta_l)^{\frac{2}{d} - 1} =: H^2.
\label{eq: computation H}
\end{equation}
In order to use Lemma \ref{lemma: Talagrand in Klein Rio}, we are left to compute $v$.\\
We observe that
$$\frac{1}{p_T} \sum_{j = 1}^{p_T} Var(\int_A \frac{1}{q_T} \int_{(2 j - 1) q_T}^{2j q_T} \mathbb{K}_\eta (X_t^{* j, 1} - z) dt \, r(z) dz) = \frac{1}{p_T} \sum_{j = 1}^{p_T} Var( \frac{1}{q_T} \int_{(2 j - 1) q_T}^{2j q_T} (\mathbb{K}_\eta * r) (X_t^{* j, 1}) dt).$$
We want to prove a tight upper bound for the variance of the integral functional $\frac{1}{q_T} \int_{(2 j - 1) q_T}^{2j q_T} f_{\eta} (X_t^{* j, 1}) dt$ of the diffusion $X^*$, where we have denoted $f_\eta := \mathbb{K}_\eta * r$. \\
Following the proof of Proposition \ref{prop: variance bound}, we have
$$Var(\frac{1}{q_T} \int_{(2 j - 1) q_T}^{2j q_T} f_{\eta} (X_t^{* j, 1}) dt) \le \frac{c}{q_T^2} \int_0^{q_T} (q_T - u) \mathbb{E}[f_{\eta,c}(X_0^{* j, 1}), f_{\eta,c}(X_u^{* j, 1})] du =$$
$$ = \frac{c}{q_T^2} (\int_0^{D} (q_T - u) \mathbb{E}[f_{\eta,c}(X_0^{* j, 1}), f_{\eta,c}(X_u^{* j, 1})] du + \int_D^{q_T} (q_T - u) \mathbb{E}[f_{\eta,c}(X_0^{* j, 1}), f_{\eta,c}(X_u^{* j, 1})] du),$$
where we have introduced $f_{\eta,c}(x) := f_{\eta} (x) - \mu (f_\eta)$ and $D$ a quantity to be chosen in order to balance the contributions of the two integrals above. Moreover, we denote by $P^*_{t}$ the transition semigroup of the process $X^*$. Concerning the integral between $0$ and $D$, we act like we did in \eqref{eq: inizio rate} and we use the Cauchy-Schwarz inequality and the fact that $P^*_{ t} $ is a contraction. It follows
$$\frac{c}{q_T^2} \int_0^{D} (q_T - u) \mathbb{E}[f_{\eta,c}(X_0^{* j, 1}), f_{\eta,c}(X_u^{* j, 1})] du \le \frac{c}{q_T} |\int_0^{D} <P^*_{t} f_\eta, f_\eta>_{\mu} dt| \le$$
\begin{equation}
\le \frac{c}{q_T} \int_0^D (\left \| P^*_{t} f_\eta \right \|^2_{\mu} \left \| f_\eta \right \|^2_{\mu})^\frac{1}{2} dt \le \frac{c}{q_T} \int_{0}^D \left \| f_\eta \right \|^2_{\mu} dt \le \frac{cD}{q_T},
\label{eq : int 0 D}
\end{equation}
where in the last inequality we have used the fact that $\left \| \mu \right \|_\infty \le c$, Young's inequality and the definitions of the kernel function and of $r$ in order to obtain
\begin{equation}
\left \| f_\eta \right \|^2_{\mu} = \left \| \mathbb{K}_\eta * r \right \|^2_{\mu} \le c \left \| \mathbb{K}_\eta * r \right \|^2_{2, \mathbb{R}^d} \le c \left \| \mathbb{K}_\eta \right \|_{1, \mathbb{R}^d}^2 \left \| r \right \|^2_{2, \mathbb{R}^d} \le c.
\label{eq: stima norma 1 f eta}
\end{equation}
Regarding the integral between $D$ and $q_T$, we act like we did in \eqref{eq: int D T}, using the exponential ergodicity gathered in Lemma \ref{lemma: beta mixing}, to get
\begin{equation}
\frac{c}{q_T^2} \int_D^{q_T} (q_T - u) \mathbb{E}[f_{\eta,c}(X_0^{* j, 1}), f_{\eta,c}(X_u^{* j, 1})] du \le \frac{c}{q_T} |\int_D^{q_T} <P^*_{t} f_{\eta,c}, f_{\eta,c}>_{\mu} dt| \le \frac{c}{q_T} \int_D^{q_T} e^{- \rho t} \left \| f_{\eta,c} \right \|^2_{\infty} dt.
\label{ eq: int D qT}
\end{equation}
We now recall that $f_{\eta,c} (x) = f_\eta(x) - \mu (f_\eta)$ with $\mu(f_\eta) = \int_{\mathbb{R}^d} f_\eta (x) \mu(x) dx$. Hence, from the Cauchy-Schwarz inequality and \eqref{eq: stima norma 1 f eta}, we get $|\mu (f_\eta)| \le c$ and therefore
\begin{equation}
|f_{\eta,c} (x)| \le |f_{\eta}(x) |+ c.
\label{eq: stima f eta c}
\end{equation}
To estimate the infinity norm of $f_{\eta, c}$, we remark that, for all $y \in \mathbb{R}^d$,
$$|f_{\eta}(y) | = |\mathbb{K}_\eta * r (y) | = | \int_A \mathbb{K}_\eta (y - z) r(z) dz | \le (\int_A \mathbb{K}^2_\eta (y - z) dz)^\frac{1}{2} (\int_A r^2 (z) dz)^\frac{1}{2} = (\int_{\mathcal{S}} \mathbb{K}^2_\eta (y - z) dz)^\frac{1}{2},$$
where we have used the Cauchy-Schwarz inequality and the fact that $r \in \mathcal{B} (1)$, so that its $L^2$ norm is equal to $1$ by definition. Moreover, the kernel function is different from $0$ only on its support $\mathcal{S}$, whose size is $\prod_{l = 1}^d \eta_l$. Hence, recalling also that the infinity norm of $\mathbb{K}_\eta$ is upper bounded by $c(\prod_{l = 1}^d \eta_l)^{- 1} $, we get
$$|\mathbb{K}_\eta * r (y) | \le c (\left \| \mathbb{K}_\eta \right \|^2_\infty | \mathcal{S} |)^\frac{1}{2} \le \frac{c}{\sqrt{\prod_{l = 1}^d \eta_l}}.$$
It follows
\begin{equation}
\left \| f_{\eta,c} \right \|_{\infty} \le \frac{c}{\sqrt{\prod_{l = 1}^d \eta_l}} + c \le \frac{c}{\sqrt{\prod_{l = 1}^d \eta_l}},
\label{eq: norma inf f eta c}
\end{equation}
given that the second term is negligible compared to the first.
Replacing \eqref{eq: norma inf f eta c} in \eqref{ eq: int D qT}, using also \eqref{eq : int 0 D}, we obtain
$$ Var( \frac{1}{q_T} \int_{(2 j - 1) q_T}^{2j q_T} f_{\eta} (X_t^{* j, 1}) dt) \le \frac{c}{q_T} (D + \frac{e^{- \rho D}}{\prod_{l = 1}^d \eta_l}).$$
We look for a $D$ for which the two terms on the right hand side of the inequality above have the same order of magnitude. Therefore, we choose $D := [\max(- \frac{1}{\rho} \log (\prod_{l = 1}^d \eta_l), 1)] \land q_T$. Replacing such a value we get, if $q_T > - \frac{1}{ \rho} \log (\prod_{l = 1}^d \eta_l)$,
$$\frac{1}{p_T} \sum_{j = 1}^{p_T} Var(\int_A \frac{1}{q_T} \int_{(2 j - 1) q_T}^{2j q_T} \mathbb{K}_\eta (X_t^{* j, 1} - z) dt \, r(z) dz) \le \frac{c}{q_T} (1 + \log (\frac{1}{|\prod_{l = 1}^d \eta_l |})).$$
Otherwise, if $q_T \le - \frac{1}{ \rho} \log (\prod_{l = 1}^d \eta_l)$, the definition of $D$ gives $D= q_T$. The contribution $\frac{c}{q_T} D$ is in this case less than $\frac{c}{q_T}\log (\frac{1}{|\prod_{l = 1}^d \eta_l |})$ and, moreover, the contribution of the integral between $D$ and $q_T$ is now null since $D = q_T$.
Hence we have
\begin{equation}
v := \frac{c}{q_T} (1 + \log (\frac{1}{|\prod_{l = 1}^d \eta_l |})).
\label{eq: computation v}
\end{equation}
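As a side numerical check (with illustrative constants $\rho = 1$, $\prod_l \eta_l = 10^{-3}$ and $q_T = 100$, which play no role in the proof), one can verify that the choice of $D$ made above indeed balances the two contributions $D$ and $e^{-\rho D}/\prod_{l=1}^d \eta_l$ in the variance bound:

```python
import numpy as np

rho, eta_prod, q_T = 1.0, 1e-3, 100.0

def var_bound(D):
    # shape of the bound c/q_T * (D + e^{-rho D} / prod_l eta_l), with c = 1
    return (D + np.exp(-rho * D) / eta_prod) / q_T

# the choice made in the text: D = [max(-log(prod eta)/rho, 1)] ∧ q_T
D_star = min(max(-np.log(eta_prod) / rho, 1.0), q_T)

Ds = np.linspace(0.1, 30.0, 3000)
D_best = Ds[np.argmin(var_bound(Ds))]
print(D_star, D_best)   # both close to log(1000) ≈ 6.9: the two terms balance
```

For these constants the exact minimizer of the bound is $\log(10^3)$, which coincides with the prescribed $D$.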
We use Lemma \ref{lemma: Talagrand in Klein Rio} on the right hand side of \eqref{eq: start Talagrand}, recalling that $\left \| \mu_\eta - \hat{\mu}^{* (1)}_\eta \right \|_A^2 = \sup_{r \in \mathcal{B}(1)} < \mu_\eta - \hat{\mu}^{* (1)}_\eta,r>^2 = \sup_{r \in \mathcal{B}(1)}|v_{p_T}(r)|^2$; with $M$, $H$ and $v$ as found in \eqref{eq: computation M}, \eqref{eq: computation H} and \eqref{eq: computation v}. It follows
$$\mathbb{E}[\sup_{\eta \in \mathcal{H}_T} (\left \| \mu_\eta - \hat{\mu}^{* (1)}_\eta \right \|^2 - \frac{\bar{k}}{T} (\prod_{l = 1}^d \eta_l)^{\frac{2}{d} - 1})_+] \le $$
$$ \le c \sum_{\eta \in \mathcal{H}_T} \frac{(1 + \log (\frac{1}{|\prod_{l = 1}^d \eta_l|}))}{p_T q_T} e^{- c \frac{\frac{p_T}{T} (\prod_{l = 1}^d \eta_l)^{\frac{2}{d} - 1}}{\frac{1}{q_T} (1 + \log (\frac{1}{|\prod_{l = 1}^d \eta_l |})) }} + \frac{(\prod_{l = 1}^d \eta_l)^{-1}}{p_T^2 } e^{- c \frac{p_T \frac{1}{\sqrt{T}}(\prod_{l = 1}^d \eta_l)^{\frac{1}{d} - \frac{1}{2}}}{(\prod_{l = 1}^d \eta_l)^{- \frac{1}{2}}}} .$$
We recall that $2 p_T q_T = T$, where $q_T$ is chosen above equation \eqref{eq: I5} as $ (\log T)^2 $; we can therefore upper bound the right hand side of the equation above by
$$c \sum_{\eta \in \mathcal{H}_T} \frac{(1 + \log (\frac{1}{|\prod_{l = 1}^d \eta_l|}))}{T} e^{- \frac{c}{(\prod_{l = 1}^d \eta_l)^{ 1- \frac{2}{d}}(1 + \log (\frac{1}{|\prod_{l = 1}^d \eta_l|}))}} + \frac{(\log T)^4}{(\prod_{l = 1}^d \eta_l) T^2} e^{- c \frac{\sqrt{T}}{(\log T)^2} (\prod_{l = 1}^d \eta_l)^{\frac{1}{d}}} \le$$
$$ \le (\frac{\log T}{T} e^{- c (\log T)^2} + \frac{T^{\frac{d}{3} - 2}}{(\log T)^{2 d - 2}} e^{- c T^{\frac{1}{6}}})|\mathcal{H}_T|,$$
where in the last inequality we have used that, by the definition \eqref{eq: def mathcal H} we have given of $\mathcal{H}_T$, for all $\eta \in \mathcal{H}_T$ we have
$\frac{(\log T)^{2d}}{T^\frac{d}{3}} \le \prod_{l = 1}^d \eta_l \le ( \frac{1}{\log T})^{\frac{3 d}{d - 2}}$ and so $(\prod_{l = 1}^d \eta_l)^{ 1- \frac{2}{d}}(1 + \log (\frac{1}{|\prod_{l = 1}^d \eta_l|})) \le c \frac{1}{(\log T)^2}$ and $\frac{\sqrt{T}}{(\log T)^2} (\prod_{l = 1}^d \eta_l)^{\frac{1}{d}} \ge c T^\frac{1}{6}$; we have therefore upper bounded each element of the sum by a quantity which does not depend on $\eta$. \\
We have moreover assumed that $|\mathcal{H}_T|$ has polynomial growth in $T$, and so there is a constant $c > 0$ such that
$$\mathbb{E} [\sup_{\eta \in \mathcal{H}_T} (\left \| \mu_\eta - \hat{\mu}^{* (1)}_\eta \right \|^2 - \frac{\bar{k}}{T} (\prod_{l = 1}^d \eta_l)^{\frac{2}{d} - 1})_+] \le (\frac{\log T}{T} e^{- c (\log T)^2} + \frac{T^{\frac{d}{3} - 2}}{(\log T)^{2 d - 2}} e^{- c T^{\frac{1}{6}}})T^c ;$$
inequality \eqref{eq: Talagrand per I4} follows. \\
As a consequence of Lemma \ref{lemma: convolution} and the definition of the kernel function, we moreover have
$$\left \| \mu_{h,\eta} - \hat{\mu}^{* (1)}_{h,\eta} \right \|^2 = \left \| \mathbb{K}_h* ( \mu_\eta - \hat{\mu}^{* (1)}_\eta) \right \|^2 \le \left \| \mathbb{K}_h \right \|^2_{1, \mathbb{R}^d} \left \| \mu_\eta - \hat{\mu}^{* (1)}_\eta \right \|^2_{2, \tilde{A}} \le c \left \| \mu_\eta - \hat{\mu}^{* (1)}_\eta \right \|^2_{2, \tilde{A}}.$$
Using this bound and \eqref{eq: Talagrand per I4}, which we have just shown, we obtain \eqref{eq: Talagrand per I2}.
\end{proof}
\section{Introduction}
\label{sec:intro}
Explaining how a feature impacts a model prediction is a crucial question in machine learning (ML), as it provides a deeper understanding of how the model behaves and what insights have been extracted from data. In many real-world applications, it has become increasingly common to deploy complicated models such as deep neural networks or random forests to achieve high predictability. However, this often comes at the cost of unintuitive interpretations, which naturally calls for a principled and practical attribution method. The goal of this work is to quantify the contribution of individual features to a particular prediction, also known as the attribution problem.
\citet{lundberg2017unified} proposed a model-agnostic attribution method, SHapley Additive exPlanations (SHAP), based on the Shapley value from economics. Supported by theoretical properties that the Shapley value satisfies, SHAP has been a popular method in the attribution literature \citep{janzing2020feature, sundararajan2020many}. For instance, \citet{frye2020shapley} and \citet{aas2021explaining} developed SHAP algorithms for dependent features, and \citet{heskes2020causal} and \citet{wang2021shapley} proposed a rigorous framework in causal inference settings. One practical problem in using SHAP is its heavy computational cost, and there have been works on improving computational efficiency \citep{covert2021improving, lundberg2020local, jethani2021fastshap}. In addition, SHAP has been deployed in various applied scientific studies \citep{lundberg2018explainable, janizek2021uncovering, qiu2022interpretable}.
The Shapley value, a mathematical basis of SHAP, is a simple average of the marginal contributions that quantify the average change in a coalition function when a feature of interest is added to a subset of features with a given coalition size. There are different versions of the marginal contribution depending on the coalition size, and the Shapley value takes a uniform weight to summarize the influence of a feature. This uniform weight arises due to the efficiency axiom of the Shapley value, which requires the sum of attributions to equal the original model prediction, but it is often problematic because some marginal contributions may be more informative than others. As we will show in Section~\ref{sec:analysis_of_marginal_contrib}, the Shapley value is not optimal for sorting features in order of influence on a model prediction.
\paragraph{Our contributions.} While SHAP is widely used for feature attribution, its limitations are still not rigorously understood. We first show the suboptimality of the Shapley value through an analysis of the marginal contributions. We identify a key limitation of the Shapley value in that it assigns uniform weights to marginal contributions in computing the attribution score. We show that this can lead to attribution mistakes when different marginal contributions have different signal and noise. Motivated by this analysis, we propose WeightedSHAP, a more flexible generalization of the Shapley value. WeightedSHAP uses a weighted average of marginal contributions where the weights can be learned from the data. On several real-world datasets, our experiments demonstrate that WeightedSHAP better identifies influential features that recapitulate a model's prediction compared to standard SHAP. WeightedSHAP is a simple modification of SHAP and is easy to adapt from existing SHAP implementations.
\subsection{Related works}
\paragraph{Model interpretation}
There are mainly two types of model interpretation depending on the quantity to be accounted for: global and local interpretations. The global interpretation is to explain the impact of a feature on a prediction model across the entire dataset \citep{lipovetsky2001analysis, breiman2001random, owen2014sobol, broto2020variance, zhao2021causal, benard2022shaff}. For instance, for a decision tree model, \citet{breiman2017classification} measures the total decrease in node impurity due to splits on a feature of interest as its impact. In contrast, the local interpretation is to explain the impact of a feature on a particular prediction value \citep{lundberg2017unified, chen2018learning, janzing2020feature, lundberg2020local, heskes2020causal, jethani2021fastshap}. For a deep neural network model, a gradient-based method uses the gradient evaluated at a particular input sample as an impact \citep{simonyan2013deep, sundararajan2017axiomatic, ancona2017towards, selvaraju2017grad, adebayo2018sanity}.
Our work studies the local interpretation problem with a focus on a marginal contribution-based method, which we review in Section~\ref{sec:preliminaries}. The marginal contribution-based method is potentially advantageous over a gradient-based method as it does not require the differentiability of a prediction model.
\paragraph{Shapley value and its extension}
The Shapley value, introduced as a fair division method from economics \citep{shapley1953}, has been deployed in various ML problems. One leading application is data valuation, where the main goal is to quantify the impact of individual data points in model training.
\citet{ghorbani2019data} and \citet{jia2019} propose to use the Shapley value for measuring the individual data value, and this concept has been extended to handle the randomness of data \citep{ghorbani2020distributional, kwon2021efficient}. As for the other applications of the Shapley value, model explainability \citep{ghorbani2020neuron}, model valuation \citep{rozemberczki2021shapley}, federated learning \citep{liu2021gtg}, and multi-agent reinforcement learning \citep{li2021shapley} have been studied. We refer to \citet{rozemberczki2022shapley} for a complementary literature review of ML applications of the Shapley value.
The relaxation of the Shapley axioms has been one of the central topics in cooperative game theory \citep{shapley1953additive, banzhaf1964weighted, kalai1987weighted, weber1988probabilistic}. Recently, \citet{kwon2021beta} propose to relax the efficiency axiom in the data valuation problem, showing promising results in the low-quality data detection task. Given that Shapley axioms are often not readily applicable to ML problems, relaxing them has the potential to capture a better notion of significance. In this work, we explore the benefits of relaxation of the efficiency axiom on the attribution problem.
\section{Preliminaries}
\label{sec:preliminaries}
We review the marginal contribution and the Shapley value in the context of an attribution problem. We first introduce some notations. For $d \in \mathbb{N}$, let $\mathcal{X} \subseteq \mathbb{R}^d$ and $\mathcal{Y} \subseteq \mathbb{R}$ be an input space and an output space, respectively. We use a capital letter $X=(X_1, \dots, X_d)$ for an input random variable defined on $\mathcal{X}$, and a lower case letter $x=(x_1, \dots, x_d)$ for its realized value. We denote a prediction model by $\hat{f}:\mathcal{X} \to \mathcal{Y}$. For $j \in \mathbb{N}$, we set $[j]:=\{1, \dots, j\}$ and denote a power set of $[j]$ by $2^{[j]}$. For a vector $u \in \mathbb{R}^d$ and a subset $S=(j_1, \dots, j_{|S|}) \subseteq [d]$, we denote a subvector by $u_S := (u_{j_1}, \dots, u_{j_{|S|}})$. We assume that $X$ has a joint distribution $p(X)$ such that a conditional distribution $p(X_{[d]\backslash S} \mid X_S)$ is well-defined for any subset $S\subsetneq[d]$. With the notations, a conditional coalition function $v_{x, \hat{f}} ^{\mathrm{(cond)}}: 2 ^{[d]} \to \mathbb{R}$ is defined as follows \citep{lundberg2017unified}.
\begin{align}
v_{x, \hat{f}} ^{\mathrm{(cond)}} (S) := \mathbb{E} [ \hat{f}(x_S, X_{[d] \backslash S} ) \mid X_S =x_S ] - \mathbb{E} [\hat{f}(X)],
\label{eqn:conditional_coalition}
\end{align}
where the first expectation is taken with a conditional distribution $p(X_{[d] \backslash S} \mid X_S=x_S)$ and the second expectation is taken with a joint distribution $p(X)$. Here, we use a slight abuse of notation for $\hat{f}(x_S, X_{[d] \backslash S} )$ to describe $f(u)$ where $u_i = x_i$ if $i \in S$, and $u_i = X_i$ otherwise. By convention, we set $v_{x, \hat{f}} ^{\mathrm{(cond)}} ([d]) := \hat{f}(x) -\mathbb{E} [\hat{f}(X)]$ and $v_{x, \hat{f}} ^{\mathrm{(cond)}} (\emptyset) := 0$.
A conditional coalition function defined in Equation~\eqref{eqn:conditional_coalition} is a prediction recovered after observing partial information $x_S$ compared to the null information. For instance, if $S=\emptyset$, the first term becomes the marginal expectation $\mathbb{E} [\hat{f}(X)]$ and nothing is recovered by $S=\emptyset$. For ease of notation, we write $v_{x, \hat{f}} ^{\mathrm{(cond)}} (S) = v ^{\mathrm{(cond)}} (S)$ for the remaining part of the paper.
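As a minimal illustration of Equation~\eqref{eqn:conditional_coalition}, the following sketch estimates $v^{\mathrm{(cond)}}(S)$ by Monte Carlo under the simplifying assumption that the features are independent, so that conditioning on $X_S = x_S$ amounts to fixing those coordinates and resampling the rest; the linear model and the data below are illustrative, not taken from the paper.

```python
import numpy as np

def v_cond(f, x, S, X_background):
    """Monte Carlo estimate of v^(cond)(S), assuming independent features.

    f : prediction model mapping an (n, d) array to (n,) predictions
    x : input to explain, shape (d,)
    S : list of observed feature indices
    X_background : (n, d) sample from p(X); resampled for features outside S
    """
    Z = X_background.copy()
    Z[:, S] = x[S]                       # fix the observed coordinates at x_S
    return f(Z).mean() - f(X_background).mean()

rng = np.random.default_rng(0)
X_bg = rng.standard_normal((100_000, 3))      # independent N(0, 1) features
beta = np.array([1.0, -2.0, 0.5])
f = lambda Z: Z @ beta                        # a linear prediction model

x = np.array([1.0, 1.0, 1.0])
# for this model, v^(cond)({1, 3}) = beta_1 x_1 + beta_3 x_3 = 1.5
print(v_cond(f, x, [0, 2], X_bg))
```

For the linear model with independent, centered features, $v^{\mathrm{(cond)}}(S) = \sum_{i \in S} \hat{\beta}_i x_i$ exactly, which the Monte Carlo estimate recovers up to sampling noise.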
\paragraph{Marginal contribution-based attribution methods}
Given that the goal of the attribution problem is to assign the significance of an individual feature $x_i$ on the prediction $\hat{f}(x)$, its primary challenge is how to measure the influence of the feature $x_i$. A leading approach is to quantify the difference in the conditional coalition function values $v^{\mathrm{(cond)}}$ after adding one feature of interest. We formalize this concept below.
\begin{definition}[Marginal contribution]
For $i, j \in [d]$, we define the marginal contribution of the $i$-th feature $x_i$ with respect to $j-1$ features as follows.
\begin{align}
\Delta_{j}( x_i ) &:= \frac{1}{\binom{d-1}{j-1}}\sum_{S \subseteq [d] \backslash\{i\}, |S|=j-1 } v ^{\mathrm{(cond)}}(S\cup \{i\}) - v ^{\mathrm{(cond)}}(S).
\label{eqn:marginal_contribution}
\end{align}
\label{def:marginal_contribution}
\end{definition}
The marginal contribution $\Delta_{j}(x_i)$ considers every possible subset $S \subseteq [d]\backslash \{i\}$ with the coalition size $|S|=j-1$ and takes a simple average of the difference $v ^{\mathrm{(cond)}}(S\cup \{i\}) - v ^{\mathrm{(cond)}}(S)$. That is, it measures the average contribution of the $i$-th feature $x_i$ when it is added to a subset $S$.
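Definition~\ref{def:marginal_contribution} can be computed exactly by enumerating subsets when $d$ is small. The sketch below (an illustrative example, not from the paper) uses a coalition function of the additive form $v(S) = \sum_{i \in S} \beta_i x_i$, which arises, e.g., for a linear model with independent centered features; in that case $\Delta_{j}(x_i) = \beta_i x_i$ for every coalition size $j$, which the code confirms.

```python
import numpy as np
from itertools import combinations
from math import comb

def marginal_contribution(v, d, i, j):
    """Delta_j(x_i): average of v(S ∪ {i}) - v(S) over all subsets S of
    size j - 1 not containing i (exact enumeration; feasible for small d)."""
    others = [k for k in range(d) if k != i]
    total = sum(v(set(S) | {i}) - v(set(S)) for S in combinations(others, j - 1))
    return total / comb(d - 1, j - 1)

beta = np.array([1.0, -2.0, 0.5, 0.0])
x = np.array([1.0, 1.0, -1.0, 2.0])
v = lambda S: sum(beta[k] * x[k] for k in S)   # additive coalition function

d = 4
print([marginal_contribution(v, d, i=1, j=j) for j in range(1, d + 1)])
# every coalition size gives the same value beta * x = -2.0 for this feature
```

Differences between the $\Delta_{j}(x_i)$ across coalition sizes $j$ only appear once the coalition function is non-additive, e.g., under feature dependence.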
Different marginal contributions $\Delta_{j}( x_i )$ have been studied depending on the coalition size $j$ in the literature. \citet{zintgraf2017visualizing} considered $j=d$ and measured the leave-one-out marginal contribution $\Delta_{d}(x_i) = \hat{f}(x) - \mathbb{E} [ \hat{f}(x_{[d] \backslash \{i\}}, X_{i} ) \mid X_{[d] \backslash \{i\}} =x_{[d] \backslash \{i\}} ]$ as an influence of a feature. \citet{guyon2003introduction} considered $j=1$ and measured the coefficient of determination as an influence of a feature. Although they did not use an individual prediction, their idea is essentially similar to using $\Delta_{1}(x_i) = \mathbb{E} [ \hat{f}(x_i, X_{[d] \backslash \{i\}} ) \mid X_i = x_i ] - \mathbb{E} [\hat{f}(X)]$.
Another widely used marginal contribution-based method is the Shapley value \citep{lundberg2017unified, covert2021improving}. It summarizes the impact of one feature by taking a simple average across all marginal contributions. To be more specific, the Shapley value is defined as follows.
\begin{align}
\phi_{\mathrm{shap}}(x_i) := \frac{1}{d} \sum_{j=1} ^{d} \Delta_{j}(x_i).
\label{eqn:shapley_value}
\end{align}
The Shapley value in \eqref{eqn:shapley_value} is known as the unique function that satisfies the four axioms of a fair division in cooperative game theory \citep{shapley1953}. The four axioms and the uniqueness of the Shapley value are discussed in more detail in the Appendix.
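As a minimal sanity check (with a toy coalition function chosen for illustration, not taken from the paper), the following sketch computes $\phi_{\mathrm{shap}}$ in \eqref{eqn:shapley_value} by exact subset enumeration and verifies the efficiency axiom, i.e., that the attributions sum to the value of the full coalition.

```python
from itertools import combinations
from math import comb

def shapley(v, d, i):
    """phi_shap(x_i) = (1/d) sum_{j=1}^d Delta_j(x_i), by exact enumeration."""
    others = [k for k in range(d) if k != i]
    phi = 0.0
    for j in range(1, d + 1):
        mc = sum(v(set(S) | {i}) - v(set(S)) for S in combinations(others, j - 1))
        phi += mc / comb(d - 1, j - 1)
    return phi / d

# a toy coalition function with an interaction between features 0 and 2
v = lambda S: 2.0 * (1 in S) + 1.5 * (0 in S and 2 in S) - 0.5 * (0 in S)

d = 3
phis = [shapley(v, d, i) for i in range(d)]
print(phis, sum(phis), v({0, 1, 2}))   # efficiency: sum of attributions = v([d])
```

For this toy function the interaction term is split evenly between the two participating features, and the attributions sum to $v(\{0,1,2\}) = 3$, as the efficiency axiom requires.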
Although the Shapley value provides a principled framework in game theory, one critical issue is that the economic notion of the Shapley axioms is not intuitively applicable to the attribution problem \citep{kumar2020problems, rozemberczki2022shapley}. In particular, the efficiency axiom, which requires the sum of the attributions to be equal to $v^{\mathrm{(cond)}}([d])$, is not necessarily essential because an order of attributions is invariant to multiplication by a constant. For instance, for any positive constant $C>0$, an attribution $\phi_C(x_i) := C \times \phi_{\mathrm{shap}} (x_i)$ will have the same order as the Shapley value $\phi_{\mathrm{shap}}$, but the efficiency axiom is not satisfied by $\phi_C$. In Section~\ref{sec:proposed}, we will revisit this point and introduce a new attribution method that relaxes the efficiency axiom.
\paragraph{Evaluation metrics for the attribution problem.}
In the literature, different notions of goodness have been proposed, for instance, the complete axiom \citep{sundararajan2017axiomatic, shrikumar2017learning}, the local Lipschitzness \citep{alvarez2018robustness}, and the explanation infidelity \citep{yeh2019fidelity} with a focus on the total sum of attributions or the sensitivity of attributions.
Recently, \citet{jethani2021fastshap} suggested using the \textit{Inclusion AUC} to assess the goodness of an order of attributions. Specifically, the Inclusion AUC is measured as follows: given an attribution method, features are first ranked based on their attribution values. Then the area under the receiver operating characteristic curve (AUC) is iteratively evaluated by adding features one by one, from the most influential to the least influential. This procedure generates an AUC curve as a function of the number of features added, and the Inclusion AUC is defined as the area under this curve. Similar evaluation metrics have been used in \citet{petsiuk2018rise} and \citet{lundberg2020local}. Following the literature, we consider the area under the prediction recovery error curve (AUP) defined as follows.
\begin{definition}[Area under the prediction recovery error curve]
For a given attribution method $\phi$, an input $x \in \mathcal{X}$, and $k\in[d]$, let $\mathcal{I}(k; \phi, x) \subseteq [d]$ be the set of $k$ integers that indicates the $k$ most influential features based on their absolute values $|\phi(x_j)|$. For a prediction model $\hat{f}$, we define the area under the prediction recovery error curve at $x$ as follows.
\begin{align}
\mathrm{AUP}(\phi; x, \hat{f}) := \sum_{k=1} ^d \left| \hat{f}(x)-\mathbb{E}[\hat{f}(X) \mid X_{\mathcal{I}(k; \phi, x)}=x_{\mathcal{I}(k; \phi, x)} ] \right|.
\label{eqn:prediction_recovery}
\end{align}
\end{definition}
AUP is defined as the sum of the absolute differences between the original prediction $\hat{f}(x)$ and its conditional expectation $\mathbb{E}[\hat{f}(X) \mid X_{\mathcal{I}(k; \phi, x)}=x_{\mathcal{I}(k; \phi, x)} ]$ when the $k$ most influential features are given. Each term in Equation~\eqref{eqn:prediction_recovery} measures the amount of a prediction that is not recovered by the $k$ most influential features, and thus this prediction recovery error is expected to decrease as $k$ increases. The prediction recovery error can be described as a function of $k$, and the AUP measures the area under this function as in the Inclusion AUC.
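A direct implementation of the AUP is sketched below, again under the simplifying assumption of independent features, so that the conditional expectation in \eqref{eqn:prediction_recovery} can be estimated by fixing the selected coordinates and resampling the rest; the model and the attribution vectors are illustrative. A sensible attribution (here the exact one for the linear model, $\phi(x_i) = \hat{\beta}_i x_i$) yields a smaller AUP than a scrambled ordering.

```python
import numpy as np

def aup(phi, x, f, X_bg):
    """Area under the prediction recovery error curve (independent features:
    E[f(X) | X_I = x_I] is estimated by fixing the coordinates in I)."""
    order = np.argsort(-np.abs(phi))          # most influential features first
    fx = f(x[None, :])[0]
    err = 0.0
    for k in range(1, len(x) + 1):
        Z = X_bg.copy()
        Z[:, order[:k]] = x[order[:k]]
        err += abs(fx - f(Z).mean())
    return err

rng = np.random.default_rng(0)
X_bg = rng.standard_normal((50_000, 3))
beta = np.array([1.0, -2.0, 0.5])
f = lambda Z: Z @ beta
x = np.array([1.0, 1.0, 1.0])

exact = beta * x                  # exact attributions for this linear model
scrambled = exact[::-1]           # a deliberately wrong ordering
print(aup(exact, x, f, X_bg), aup(scrambled, x, f, X_bg))
```

A smaller AUP indicates that the top-ranked features recover the prediction faster, which is the sense in which an attribution ordering is evaluated here.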
\section{The Shapley value is suboptimal}
\label{sec:analysis_of_marginal_contrib}
In this section, we show the suboptimality of the Shapley value through a rigorous analysis of the marginal contribution. We first derive a useful closed-form expression of the marginal contribution when $\hat{f}$ is linear and $p(X)$ is Gaussian (Section~\ref{sec:closed_form}).
With this theoretical result, we present two simulation experiments where the Shapley value incorrectly reflects the influence of features, resulting in a suboptimal order of attributions (Section~\ref{sec:motivational_examples}).
\subsection{A closed-form expression of the marginal contribution}
\label{sec:closed_form}
To this end, we assume that a prediction model $\hat{f}$ is linear and an input distribution $p(X)$ follows a Gaussian distribution with zero mean and a block diagonal covariance matrix. To be more specific, we first introduce some notation. For $B \in \mathbb{N}$, we set a vector $\mathbf{d}=(d_1, \dots, d_B) \in \mathbb{N}^B$ such that $\sum_{b=1} ^B d_b = d$ and a vector $\pmb{\rho} := (\rho_1, \dots, \rho_B) \in [0,1)^{B}$. We denote a $d \times d$ block diagonal covariance matrix by $\Sigma_{\pmb{\rho}, \mathbf{d}} ^{\mathrm{(block)}} = \mathrm{diag}\left( \Sigma_{(\rho_1, d_1)}, \dots, \Sigma_{(\rho_B, d_B)} \right)$ where $\Sigma_{(\rho_b,d_b)}=(1-\rho_b) I_{d_b} + \rho_b \mathds{1}_{d_b} \mathds{1}_{d_b} ^T$. Here, for $j \in \mathbb{N}$, we denote the $j \times j$ identity matrix by $I_j$, the $j$-dimensional vector of ones by $\mathds{1}_j := (1, \dots, 1)^T \in \mathbb{R}^{j}$ and $\mathbf{0}_j := 0 \times \mathds{1}_j$. Lastly, we denote a Gaussian distribution with a mean vector $\mu$ and a covariance matrix $\Sigma$ by $\mathcal{N}(\mu, \Sigma)$. With these notations, we assume $X \sim \mathcal{N}(\mathbf{0}_d, \Sigma_{\pmb{\rho}, \mathbf{d}} ^{\mathrm{(block)}})$. That is, every feature is normalized to have a unit variance and is included in one of $B$ independent clusters. For $b\in[B]$, the size of the $b$-th cluster is $d_b$, and features are equally correlated to each other within a cluster. The correlation levels can vary from cluster to cluster.
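The block diagonal covariance matrix $\Sigma_{\pmb{\rho}, \mathbf{d}} ^{\mathrm{(block)}}$ can be constructed directly from this definition. The following is a small illustrative helper, not code from the paper:

```python
import numpy as np

def block_diag_cov(rhos, dims):
    """Build Sigma^(block) = diag(Sigma_(rho_1, d_1), ..., Sigma_(rho_B, d_B)),
    where Sigma_(rho, k) = (1 - rho) * I_k + rho * 1_k 1_k^T."""
    d = sum(dims)
    cov = np.zeros((d, d))
    start = 0
    for rho, k in zip(rhos, dims):
        cov[start:start + k, start:start + k] = (
            (1 - rho) * np.eye(k) + rho * np.ones((k, k))
        )
        start += k
    return cov
```

Each block has unit diagonal (so every feature has unit variance) and constant off-diagonal entries $\rho_b$ within its cluster, while entries across clusters are zero.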
In general, the marginal contribution in Equation~\eqref{eqn:marginal_contribution} does not have a closed-form expression, and it makes a rigorous analysis of the Shapley value difficult. In the following theorem, we derive a closed-form expression of the marginal contribution when $\hat{f}$ is linear and $X \sim \mathcal{N}(\mathbf{0}_d, \Sigma_{\pmb{\rho}, \mathbf{d}} ^{\mathrm{(block)}})$.
\begin{theorem}[A closed-form expression for the marginal contribution]
Suppose $\hat{f}(x) = \hat{\beta}_0 + x^T \hat{\beta}$ for some $(\hat{\beta}_0, \hat{\beta}) \in \mathbb{R} \times \mathbb{R}^d$ and $X \sim \mathcal{N}(\mathbf{0}_d, \Sigma_{\pmb{\rho}, \mathbf{d}} ^{\mathrm{(block)}})$. Then, for $i, j \in [d]$, the marginal contribution of the $i$-th feature $x_i$ with respect to $j-1$ features is expressed as
\begin{align*}
\Delta_{j}(x_i) = x^T H(i, j) \hat{\beta},
\end{align*}
for some explicit matrix $H(i,j) \in \mathbb{R}^{d \times d}$.
\label{thm:marginal_contribution}
\end{theorem}
A proof and the explicit term for $H(i,j)$ are provided in Appendix. Theorem~\ref{thm:marginal_contribution} shows that the marginal contribution is a bilinear function of an input $x$ and the estimated regression coefficient $\hat{\beta}$. One direct consequence is that the Shapley value also has a bilinear form $\phi_{\mathrm{shap}}(x_i)= x^T H(i) \hat{\beta}$ for $H(i) := \sum_{j=1} ^d H(i,j)/d$. We emphasize that this bilinear form greatly improves computational efficiency. Specifically, for all $i,j\in[d]$, since the term $H(i,j)\hat{\beta}$ is not a function of an input $x$, we only need to compute it once and can reuse it across multiple attribution computations. Moreover, it also leads to a memory-efficient algorithm as there is no need to store the $d\times d$ matrix $H(i,j)$.
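The computational saving can be sketched as follows, assuming the matrices $H(i)$ from Theorem~\ref{thm:marginal_contribution} are available as a stacked array (a hypothetical interface; the explicit $H(i,j)$ is given in Appendix):

```python
import numpy as np

def precompute(H, beta):
    """H[i] is the d x d matrix H(i); beta is the fitted coefficient.
    Returns h with h[i] = H(i) @ beta, computed once and reused."""
    return np.einsum('ijk,k->ij', H, beta)

def shapley_linear(x, h):
    """phi_shap(x_i) = x^T H(i) beta = h[i] . x; no d x d matrix is
    needed at attribution time."""
    return h @ x
```

After the one-time precomputation of $h$, each attribution is a single $d$-dimensional inner product per feature.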
\begin{figure}[t]
\centering
\subfigure[Illustrations of the suboptimality of Shapley-based feature attributions on the four different situations when $d=2$.]{
\centering
\label{fig:motivation_gaussian_two_features}
\includegraphics[width=\textwidth]{figures/motivation_gaussian_example1.pdf}
}
\subfigure[Illustrations of a prediction recovery error curve and AUP comparison when $d=100$.]{
\centering
\label{fig:motivation_gaussian_more_than_two_features}
\includegraphics[width=0.49\textwidth]{figures/motivation_gaussian_example2_1.pdf}
\includegraphics[width=0.49\textwidth]{figures/motivation_gaussian_example2_2.pdf}
}
\caption{The Shapley value is suboptimal. (Top) the region shown in black denotes the area where the Shapley value fails to select the more influential feature. We encode $\mathcal{E}(1; x, \hat{f})-\mathcal{E}(2; x, \hat{f})$ as the background color to visualize which feature is more influential. Blue describes a region where the first feature $x_1$ is more influential, \textit{i.e.}, $\mathcal{E}(1; x, \hat{f}) < \mathcal{E}(2; x, \hat{f})$, and red describes a region where the second feature $x_2$ is more influential, \textit{i.e.}, $\mathcal{E}(1; x, \hat{f}) > \mathcal{E}(2; x, \hat{f})$. The intensity of $\mathcal{E}(1; x, \hat{f})-\mathcal{E}(2; x, \hat{f})$ is described in a color bar. (Bottom) we compare the three attribution methods $\Delta_1, \phi_{\mathrm{shap}},$ and $\Delta_d$ in two different situations by varying the correlation $\rho \in \{0.2, 0.6\}$. As for the prediction recovery error curve, we denote a 95\% confidence band based on 100 samples. The lower the AUP, the better. In both settings, the Shapley value is suboptimal according to AUP.}
\end{figure}
\subsection{Motivational examples}
\label{sec:motivational_examples}
With the theoretical result introduced in the previous subsection, we show that the Shapley-based feature attribution is suboptimal and fails to assign larger values to more influential features.
\paragraph{When there are two features.}
When $d=2$, there are only two possible values for AUP.
For any attribution method $\phi$,
\begin{align*}
\mathrm{AUP}(\phi; x, \hat{f})=\begin{cases}
\mathcal{E}(1; x, \hat{f}) & \text{if } \mathcal{I}(1; \phi, x)=\{1\}\\
\mathcal{E}(2; x, \hat{f}) & \text{otherwise}
\end{cases},
\end{align*}
where $\mathcal{E}(k; x, \hat{f}) := \left| \hat{f}(x_1, x_2) - \mathbb{E}[\hat{f}(X_1, X_2) \mid X_k =x_k] \right|$ for $k \in \{1,2\}$. Therefore, the optimal order based on AUP is fully determined by $\mathcal{E}(1; x, \hat{f})$ and $\mathcal{E}(2; x, \hat{f})$; for instance, the first feature $x_1$ is more influential than the second one $x_2$ if $\mathcal{E}(1; x, \hat{f}) < \mathcal{E}(2; x, \hat{f})$. This is intuitively sensible because $\mathcal{E}(1; x, \hat{f}) < \mathcal{E}(2; x, \hat{f})$ means that the original prediction $\hat{f}(x)$ is more accurately recovered by the first feature $x_1$ than by the second one $x_2$.
Using the optimal order, we demonstrate that the Shapley value does not necessarily assign a large attribution to a more influential feature. We consider the four different scenarios with two different prediction models $\hat{f}(x) \in \{1.5 x_1+x_2, 0.5 x_1+x_2\}$ and two different Gaussian distributions, $X \sim \mathcal{N} \left( \mathbf{0}_2, \Sigma_{(\rho, 2)} \right)$ for $\rho \in \{0.2, 0.6\}$. In these four scenarios, the terms $\mathcal{E}(1; x, \hat{f})$ and $\mathcal{E}(2; x, \hat{f})$ have a closed-form expression, and thus the optimal order is explicitly obtained. Moreover, due to Theorem~\ref{thm:marginal_contribution}, a more influential feature according to the Shapley value is explicitly obtained.
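The closed forms of $\mathcal{E}(1; x, \hat{f})$ and $\mathcal{E}(2; x, \hat{f})$ follow from standard Gaussian conditioning: with unit variances and correlation $\rho$, $\mathbb{E}[X_2 \mid X_1 = x_1] = \rho x_1$, so $\mathcal{E}(1; x, \hat{f}) = |\hat{\beta}_2|\,|x_2 - \rho x_1|$ and symmetrically $\mathcal{E}(2; x, \hat{f}) = |\hat{\beta}_1|\,|x_1 - \rho x_2|$. The following sketch is derived from this standard identity, not taken from the paper's appendix:

```python
def recovery_errors(x, beta, rho):
    """E(k; x, f) = |f(x) - E[f(X) | X_k = x_k]| for the linear model
    f(x) = beta_0 + beta_1 x_1 + beta_2 x_2 under a bivariate Gaussian
    with unit variances and correlation rho, so E[X_2 | X_1 = x_1] = rho x_1.
    The intercept beta_0 cancels in the difference."""
    x1, x2 = x
    b1, b2 = beta
    e1 = abs(b2) * abs(x2 - rho * x1)  # error when conditioning on x_1
    e2 = abs(b1) * abs(x1 - rho * x2)  # error when conditioning on x_2
    return e1, e2
```

For example, with $\hat{f}(x) = 1.5 x_1 + x_2$, $\rho = 0.2$, and $x = (1, 0)$, the first feature is the more influential one since $\mathcal{E}(1) = 0.2 < 1.5 = \mathcal{E}(2)$.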
Figure~\ref{fig:motivation_gaussian_two_features} illustrates the suboptimality of the Shapley value in the four different situations. In every situation, there is a non-negligible region (described in black) where the Shapley value fails to select the more influential feature. In addition, this suboptimal area increases as the correlation gets larger, showing that Shapley value-based explainability becomes poor when features are highly correlated.
\paragraph{When there are more than two features.}
When $d>2$, it is infeasible to find the exact optimal order because there are $2^{d-1}$ possible AUPs. For this reason, we compare the Shapley value with the two commonly used marginal contribution-based methods $\Delta_1$ and $\Delta_d$, showing the Shapley value is not optimal in terms of AUP. We assume the following setting: a trained model is linear $\hat{f}(x) = \hat{\beta}_0 + x^T \hat{\beta}$ for some $(\hat{\beta}_0, \hat{\beta}) \in \mathbb{R} \times \mathbb{R}^d$ and an input vector $X=(X_1, \dots, X_d)$ follows a Gaussian distribution $\mathcal{N} \left( \mathbf{0}_d, \Sigma_{(\rho, d)} \right)$. That is, there are $d$ features and they are equally correlated to each other with the correlation $\rho$. We set $d=100$ and consider two different situations by varying $\rho \in \{0.2, 0.6\}$. Similar to the previous analysis, due to Theorem~\ref{thm:marginal_contribution}, the three attribution methods are explicitly obtained. We evaluate the prediction recovery error and the AUP on the 100 held-out test samples randomly drawn from the distribution $\mathcal{N} \left( \mathbf{0}_d, \Sigma_{(\rho, d)} \right)$. Detailed information is provided in Appendix.
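For this equicorrelated setting, $\Delta_1$ and $\Delta_d$ admit simple closed forms via Gaussian conditioning: $\mathbb{E}[X_j \mid X_i = x_i] = \rho x_i$ and $\mathbb{E}[X_i \mid X_{-i} = x_{-i}] = \frac{\rho}{1 + (d-2)\rho} \sum_{k \neq i} x_k$. The sketch below uses these standard identities and is our own derivation, not the paper's implementation of Theorem~\ref{thm:marginal_contribution}:

```python
import numpy as np

def delta_first_last(x, beta, rho):
    """Closed-form Delta_1 and Delta_d for f(x) = beta_0 + x^T beta under
    the equicorrelated Gaussian X ~ N(0, (1 - rho) I_d + rho 1_d 1_d^T)."""
    d = len(x)
    # Delta_1(x_i) = E[f | X_i = x_i] - E[f], using E[X_j | X_i] = rho * x_i
    delta1 = beta * x + rho * x * (beta.sum() - beta)
    # Delta_d(x_i) = f(x) - E[f | X_{-i} = x_{-i}], using
    # E[X_i | X_{-i}] = rho / (1 + (d - 2) * rho) * sum_{k != i} x_k
    deltad = beta * (x - rho / (1 + (d - 2) * rho) * (x.sum() - x))
    return delta1, deltad
```

The conditional-mean coefficient $\rho / (1 + (d-2)\rho)$ can be verified numerically against $\Sigma_{i,-i} \Sigma_{-i,-i}^{-1}$ for the equicorrelated covariance.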
Figure~\ref{fig:motivation_gaussian_more_than_two_features} illustrates the suboptimality of the Shapley value when $d=100$ and $\rho\in\{0.2, 0.6\}$. In both situations, the prediction recovery error curves for the Shapley value (described in green) have a steeper slope than $\Delta_1$ (described in red), but the Shapley value is not optimal as $\Delta_d$ (described in blue) approaches zero faster. When $\rho=0.6$, the suboptimality becomes more severe in that the gap between $\Delta_d$ and the Shapley value gets larger.
\section{Proposed method: WeightedSHAP}
\label{sec:proposed}
\begin{wrapfigure}[21]{R}{0.41\textwidth}
\vspace{-11pt}
\centering
\includegraphics[width=0.4\textwidth]{figures/est_error_06.pdf}
\caption{Illustrations of the relative difference between the true marginal contribution $\Delta_j$ and its estimate $\hat{\Delta}_j$ as a function of the coalition size $j \in [d]$. We consider the same setting used in Figure~\ref{fig:motivation_gaussian_more_than_two_features}. $\Delta_d$ is shown to have the largest relative difference.}
\label{fig:estimation_error_of_the_marginal_contributions}
\end{wrapfigure}
Our motivational examples in the previous section suggest that the Shapley value does not necessarily assign larger attributions for more influential features, leading to a suboptimal order of features. In fact, the last marginal contribution $\Delta_d$ outperforms other attribution methods in Figure~\ref{fig:motivation_gaussian_more_than_two_features}.
Although the use of $\Delta_d$ is promising, we show that focusing on a single marginal contribution might lead to an unstable attribution method, which motivates a weighted mean of the marginal contributions. To be more concrete, we first examine the estimation error of the marginal contribution in the following.
\paragraph{Analysis of estimation error}
In practice, the Shapley value needs to be estimated, resulting in an estimation error. Given the mathematical form of the Shapley value in \eqref{eqn:shapley_value}, this estimation error arises from the estimation error of the marginal contribution. For this reason, we investigate the estimation error of the marginal contribution. We consider the same setting used in Figure~\ref{fig:motivation_gaussian_more_than_two_features} with $\rho=0.6$. As for the estimation of the marginal contribution, we follow a standard algorithm to estimate a conditional coalition function $v^{\mathrm{(cond)}}$ used in \citet{jethani2021fastshap} and a sampling-based algorithm to approximate $\Delta_j$. A detailed explanation of the estimation procedure and the additional result for $\rho=0.2$ are provided in Appendix.
Figure~\ref{fig:estimation_error_of_the_marginal_contributions} shows the relative difference between the true marginal contribution $\Delta_j$ obtained by Theorem~\ref{thm:marginal_contribution} and its estimate $\hat{\Delta}_j$ as a function of the coalition size $j \in [d]$. Here, we use the relative difference between $A$ and $B$ defined as $|A-B|/\max(|A|,|B|)$ to avoid the numerical instability that can occur when marginal contribution values are too small. It shows that $\Delta_d$, the most informative marginal contribution in Figure~\ref{fig:motivation_gaussian_more_than_two_features}, has the largest relative difference from the true value.
In other words, $\Delta_d$ has the largest signal to explain a model prediction, but at the same time, it is the most unstable in terms of the estimation error. This finding motivates us to consider a weighted mean of the marginal contributions that can reduce the estimation error while maintaining signals.
\paragraph{Proposed method}
For a weight vector $\mathbf{w}=(w_1, \dots, w_d)$ such that $\sum_{j=1} ^d w_j =1$ and $w_j \geq 0$ for all $j\in[d]$, we consider a weighted mean of the marginal contributions
\begin{align}
\phi_\mathbf{w} (x_i) := \sum_{j=1} ^{d} w_j \Delta_{j}(x_i).
\label{eqn:semivalue}
\end{align}
A weighted mean $\phi_\mathbf{w} (x_i)$ is expected to capture the influence of features better than the Shapley value \eqref{eqn:shapley_value} by assigning a large weight to important marginal contributions. As for the game-theoretic interpretation, the mathematical form of Equation~\eqref{eqn:semivalue} is known as a semivalue in cooperative game theory. It satisfies all the Shapley axioms but the efficiency axiom, which is not crucial in attribution as we discussed in Section~\ref{sec:preliminaries} \citep{dubey1977probabilistic, ridaoui2018axiomatisation}. Due to the relaxation of the efficiency axiom, a semivalue is not uniquely determined, but it is known that the semivalue is \textit{almost} unique up to a weighted mean operation. A detailed explanation of the semivalue is provided in Appendix.
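Equation~\eqref{eqn:semivalue} is straightforward to evaluate once the marginal contributions are available; the Shapley value is recovered as the special case of the uniform weight $(1/d, \dots, 1/d)$. A sketch (illustrative data layout, not the paper's code):

```python
import numpy as np

def semivalue(mc, w):
    """phi_w(x_i) = sum_j w_j * Delta_j(x_i).

    mc : shape (d, d); mc[i, j-1] holds Delta_j(x_i)
    w  : weight vector with w_j >= 0 summing to one
    """
    w = np.asarray(w, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
    return mc @ w
```

With the uniform weight this reduces to the row-wise mean of the marginal contributions (the Shapley value), while a one-hot weight on the last coordinate recovers $\Delta_d$.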
One natural and practical question that arises when using a weighted mean $\phi_\mathbf{w} (x_i)$ is how to select the weight vector $\mathbf{w}$. Since the selected weight vector is desired to have certain good properties, this question can be rephrased as which notion of goodness should be optimized. However, it is difficult to have one universal desideratum due to the intricate nature of model interpretations. There are different types of goodness, and they often represent independent characteristics, as we discussed in Section~\ref{sec:preliminaries}. In other words, a good attribution essentially depends on a practitioner's downstream task. To reflect this, we propose to learn a weight vector that optimizes a user-defined utility.
To be more specific, we let $\mathcal{W} \subseteq \{w \in \mathbb{R}^d : \sum_{j=1} ^d w_j =1, w_j \geq 0 \}$ be a parametrized family of weights and $\mathcal{T}$ be a user-defined utility function that takes as input an attribution method and outputs its utility. Without loss of generality, we assume that the larger $\mathcal{T}$ is, the better it is (\textit{e.g.}, the negative value of AUP). Given $\mathcal{T}$ and $\mathcal{W}$, we propose WeightedSHAP as follows.
\begin{align}
\phi_{\mathrm{WeightedSHAP}}(\mathcal{T}, \mathcal{W}) := \phi_{\mathbf{w}^* (\mathcal{T}, \mathcal{W})},
\label{eqn:weightedSHAP}
\end{align}
where $\mathbf{w}^* (\mathcal{T}, \mathcal{W}) := \mathrm{argmax}_{\mathbf{w} \in \mathcal{W}} \mathcal{T} (\phi_{\mathbf{w}})$. That is, we learn the optimal weight by optimizing a user-defined utility.
When $\mathcal{W}$ includes the uniform weight $(1/d, \dots, 1/d) \in \mathbb{R}^d$, then by construction, we can guarantee that WeightedSHAP is always better than or equal to the Shapley value according to the utility $\mathcal{T}$. For instance, when the negative value of AUP is used for the utility $\mathcal{T}$, the AUP of WeightedSHAP is less than or equal to that of the Shapley value, \textit{i.e.}, $\mathrm{AUP}(\phi_{\mathrm{WeightedSHAP}}) \leq \mathrm{AUP}(\phi_{\mathrm{shap}})$.
Moreover, the more weight vectors $\mathcal{W}$ contains, the better the guaranteed quality of $\phi_{\mathrm{WeightedSHAP}}$; that is, WeightedSHAP $\phi_{\mathrm{WeightedSHAP}}$ depends on the choice of the set $\mathcal{W}$. In our experiments, we parameterize an element $\mathbf{w} \in \mathcal{W}$ by the Beta distribution, inspired by mathematical properties of the semivalue in \citet[Theorem 11]{monderer2002variations}. Detailed information is provided in Appendix.
\begin{example}[WeightedSHAP and the Shapley value $\phi_{\mathrm{shap}}$ on AUP]
We revisit the motivational example introduced in Section~\ref{sec:motivational_examples}. With the negative AUP for $\mathcal{T}$ and some $\mathcal{W} \supseteq \{\Delta_d, \phi_{\mathrm{shap}}\}$, WeightedSHAP achieves significantly lower AUP than both $\Delta_d$ and $\phi_{\mathrm{shap}}$. Specifically, when $(d,\rho)=(100,0.6)$, the AUPs of ($\Delta_d$, $\phi_{\mathrm{shap}}$, $\phi_{\mathrm{WeightedSHAP}}$) are ($1.49\pm0.06, 1.65\pm0.08, 0.77\pm0.03$), respectively, where the numbers denote ``mean $\pm$ standard error'' based on the 100 held-out test samples. Meanwhile, the estimation errors of ($\Delta_d$, $\phi_{\mathrm{shap}}$, $\phi_{\mathrm{WeightedSHAP}}$) are ($9.23\pm0.02, 6.16\pm0.04, 7.90\pm0.11$), respectively. In short, WeightedSHAP achieves a significantly lower estimation error than $\Delta_d$ while achieving the lowest AUP. Although $\phi_{\mathrm{shap}}$ achieves the lowest estimation error, its AUP is significantly greater than both $\Delta_d$ and $\phi_{\mathrm{WeightedSHAP}}$. The uniform weight in $\phi_{\mathrm{shap}}$ helps reduce the estimation error, but it loses signals too much. In contrast, WeightedSHAP well balances the signal and the estimation error, \textit{i.e.}, reducing the estimation error while taking more signals.
\end{example}
\textbf{Implementation algorithm for WeightedSHAP} Given a finite set $\mathcal{W}$ and an easy-to-compute utility function $\mathcal{T}$, the optimal weight $\mathbf{w}^*$ can be obtained by iteratively evaluating the utility $\mathcal{T}$ for each attribution method $\phi_{\mathbf{w}}$ with $\mathbf{w} \in \mathcal{W}$. In addition, $\phi_{\mathbf{w}}$ is readily obtained as long as the marginal contribution estimates are available. Therefore, the key part of the implementation algorithm is to estimate a set of marginal contributions. The estimation of the marginal contributions consists of two parts: estimation of a conditional coalition function $v^{\mathrm{(cond)}}$ and approximation of the marginal contribution $\Delta_j$. As for the first part, we train a surrogate model that takes as input a subset of input features and outputs a conditional expectation of a prediction value given the same subset \citep{frye2020shapley, jethani2021fastshap, jethani2021have}. It is known that this surrogate model unbiasedly estimates a conditional expectation of a prediction value given a subset of features under mild conditions \citep{frye2020shapley, covert2020understanding}. Regarding the second part, a weighted mean is approximated by a sampling-based algorithm \citep{ghorbani2019data, kwon2021beta}. We provide a pseudo-algorithm in Appendix.
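A generic sampling-based approximation of $\Delta_j$ can be sketched with permutation sampling: each random permutation contributes, for the feature at position $j$, one draw of $v(S \cup \{i\}) - v(S)$ with $|S| = j-1$. The coalition function \texttt{v} stands in for the surrogate estimate of $v^{\mathrm{(cond)}}$; this is a generic sketch, not the authors' exact algorithm:

```python
import numpy as np

def estimate_marginal_contribs(v, x, d, n_samples=500, seed=0):
    """Monte-Carlo estimate of Delta_j(x_i) for all i, j via permutations.

    v : callable(subset, x) -> coalition value, e.g. the surrogate's
        estimate of E[f(X) | X_S = x_S]
    Returns mc with mc[i, j-1] approximating Delta_j(x_i).
    """
    rng = np.random.default_rng(seed)
    sums = np.zeros((d, d))
    counts = np.zeros((d, d))
    for _ in range(n_samples):
        perm = rng.permutation(d)
        prev = v([], x)
        for j, i in enumerate(perm, start=1):
            cur = v(list(perm[:j]), x)
            sums[i, j - 1] += cur - prev
            counts[i, j - 1] += 1
            prev = cur
    counts[counts == 0] = 1  # leave never-sampled cells at zero
    return sums / counts
```

The resulting matrix of marginal contribution estimates can then be combined with any weight vector $\mathbf{w} \in \mathcal{W}$, so the sampling cost is shared across all candidate weights.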
In terms of the computational cost, our algorithm is comparable to a standard Shapley value estimation algorithm because both algorithms need to estimate the marginal contributions as a primary part \citep{lundberg2017unified, frye2020shapley}. For instance, with the classification dataset \texttt{fraud}, the marginal contribution estimation part takes $20.7$ seconds per sample on average, but the weight optimization part only takes $0.18$ seconds, \textit{i.e.}, the weight optimization part is only $0.86 \%$ of the total compute.
\begin{figure}[t]
\centering
\subfigure[Illustrations of the prediction recovery error curve on the four regression datasets. The lower, the better.]{
\includegraphics[width=0.22\textwidth]{figures/reg_pr_error_boston_keep_absolute_B.pdf}
\includegraphics[width=0.22\textwidth]{figures/reg_pr_error_airfoil_keep_absolute_B.pdf}
\includegraphics[width=0.22\textwidth]{figures/reg_pr_error_whitewine_keep_absolute_B.pdf}
\includegraphics[width=0.22\textwidth]{figures/reg_pr_error_abalone_keep_absolute_B.pdf}
\label{fig:prediction_error_regression_boosting}
}
\subfigure[Illustrations of the Inclusion MSE curve on the four regression datasets. The lower, the better.]{
\includegraphics[width=0.22\textwidth]{figures/reg_mse_boston_keep_absolute_B.pdf}
\includegraphics[width=0.22\textwidth]{figures/reg_mse_airfoil_keep_absolute_B.pdf}
\includegraphics[width=0.22\textwidth]{figures/reg_mse_whitewine_keep_absolute_B.pdf}
\includegraphics[width=0.22\textwidth]{figures/reg_mse_abalone_keep_absolute_B.pdf}
\label{fig:inclusion_auc_regression_boosting}
}
\caption{\textbf{Regression tasks.} Illustrations of the prediction recovery error curve and the Inclusion MSE curve as a function of the number of features added. We add features from most influential to the least influential. We denote a 95\% confidence interval based on 50 independent runs. WeightedSHAP achieves a significantly smaller MSE with fewer features than the MCI and the Shapley value.}
\label{fig:regression_boosting_all}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[Illustrations of the prediction recovery error curve on the four binary classification datasets. The lower, the better.]{
\includegraphics[width=0.22\textwidth]{figures/clf_pr_error_fraud_keep_absolute_B.pdf}
\includegraphics[width=0.22\textwidth]{figures/clf_pr_error_phoneme_keep_absolute_B.pdf}
\includegraphics[width=0.22\textwidth]{figures/clf_pr_error_wind_keep_absolute_B.pdf}
\includegraphics[width=0.22\textwidth]{figures/clf_pr_error_cpu_keep_absolute_B.pdf}
\label{fig:prediction_error_classification_boosting}
}
\subfigure[Illustrations of the Inclusion AUC curve on the four binary classification datasets. The higher, the better.]{
\includegraphics[width=0.22\textwidth]{figures/clf_auc_fraud_keep_absolute_B.pdf}
\includegraphics[width=0.22\textwidth]{figures/clf_auc_phoneme_keep_absolute_B.pdf}
\includegraphics[width=0.22\textwidth]{figures/clf_auc_wind_keep_absolute_B.pdf}
\includegraphics[width=0.22\textwidth]{figures/clf_auc_cpu_keep_absolute_B.pdf}
\label{fig:inclusion_auc_classification_boosting}
}
\caption{\textbf{Classification tasks.} Illustrations of the prediction recovery error curve and the Inclusion AUC curve as a function of the number of features added. Details are provided in Figure~\ref{fig:regression_boosting_all}. WeightedSHAP achieves a significantly higher AUC with fewer features than the MCI and the Shapley value.}
\end{figure}
\section{Experimental results}
\label{sec:experiment}
We demonstrate the practical efficacy of WeightedSHAP on various regression and classification datasets. We compare WeightedSHAP with the marginal contribution feature importance (MCI) proposed by \citet{catav21marginal} and the Shapley value on the prediction recovery error task and the inclusion performance task. Each task assesses the goodness of an attribution order by iteratively measuring how much the original model prediction or its performance is recovered with a given number of features. As for the model performance, we evaluate mean squared error (MSE) and AUC for regression and classification problems, respectively.
We consider a gradient boosting model for a prediction model $\hat{f}$. As for the surrogate model in coalition function estimation $v^{\mathrm{(cond)}}$, we use a multilayer perceptron model with two hidden layers, each of which has 128 neurons and the ELU activation function \citep{clevert2015fast}. As for WeightedSHAP in \eqref{eqn:weightedSHAP}, we use the negative value of the AUP for $\mathcal{T}$ and a set $\mathcal{W}$ that has 13 different weights including $\Delta_1$, $\phi_{\mathrm{shap}}$, and $\Delta_d$. All the missing details about numerical experiments are provided in Appendix, and our Python-based implementations are available at \url{https://github.com/ykwon0407/WeightedSHAP}.
Figure~\ref{fig:prediction_error_regression_boosting} compares the prediction recovery error curves for WeightedSHAP (described in red) with the MCI (described in blue) and the Shapley value (described in green). WeightedSHAP consistently shows lower prediction recovery errors than the MCI and the Shapley value. Note that since WeightedSHAP minimizes the AUP, which is the sum of the prediction recovery errors $|\hat{f}(x) - \mathbb{E}[\hat{f}(X) \mid X_S=x_S]|$, it does not necessarily have a smaller prediction recovery error for every number of features added. As for the MSE, Figure~\ref{fig:inclusion_auc_regression_boosting} shows that WeightedSHAP has a significantly smaller MSE than the baseline methods with fewer features. In particular, on the \texttt{airfoil} dataset, WeightedSHAP achieves $0.53$ MSE with 10 features, but the Shapley value never achieves this value because of the suboptimality of its attribution order.
We also evaluate the prediction recovery error and AUC for the four classification datasets. Similar to the regression cases, Figures~\ref{fig:prediction_error_classification_boosting} and~\ref{fig:inclusion_auc_classification_boosting} show that WeightedSHAP effectively assigns larger values to more influential features and recovers the original prediction $\hat{f}(x)$ significantly faster than the MCI and the Shapley value. Specifically, on the \texttt{fraud} dataset, WeightedSHAP achieves $0.995$ AUC with 14 features, but the Shapley value always has a lower AUC value. Our findings are consistently observed with a different model for $\hat{f}$ and on other datasets. Additional experimental results with different evaluation metrics and a qualitative assessment of WeightedSHAP are provided in Appendix.
\subsection{Illustrative examples from MNIST}
\label{app:illustrative_examples_mnist}
We present a qualitative assessment of WeightedSHAP and examine how its top influential features differ from those of SHAP using the MNIST dataset. We train a convolutional neural network model using the same setting suggested in \citet{jethani2021fastshap}. It achieves 98.6\% accuracy on the test dataset. We select illustrative images with a significant difference in AUPs between WeightedSHAP and SHAP.
Figure~\ref{fig:MNIST_samples} compares the top 10\% influential attributions for WeightedSHAP and the Shapley value. In the top images, while SHAP fails to capture the last stroke of the digit nine, which is crucially important for differentiating it from the digit zero, WeightedSHAP clearly captures the stroke. In the bottom images, SHAP produces unintuitive negative attributions, providing noisy explanations. In contrast, WeightedSHAP provides noiseless and intuitive explanations.
\begin{figure}[t]
\centering
\includegraphics[width=0.425\textwidth]{figures/MNIST_9.pdf}
\includegraphics[width=0.45\textwidth]{figures/MNIST_20.pdf}\\
\includegraphics[width=0.425\textwidth]{figures/MNIST_25.pdf}
\includegraphics[width=0.45\textwidth]{figures/MNIST_93.pdf}
\caption{Illustrative examples of WeightedSHAP and SHAP attributions on MNIST images. We present the top 10\% of influential features. Red ({\it resp.} blue) color indicates the corresponding feature positively ({\it resp.} negatively) affects the model prediction. (Top) WeightedSHAP clearly captures the last stroke of nine while SHAP fails to capture it. (Bottom) While SHAP has noisy negative feature attributions described by blue pixels, WeightedSHAP provides noiseless and intuitive explanations.}
\label{fig:MNIST_samples}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we provide an analysis of the widely used SHAP attribution method. We discover that even in simple natural settings, SHAP can incorrectly identify important features. Mathematically, a key limitation of SHAP is that it assigns uniform weights to all marginal contributions. We propose WeightedSHAP, which generalizes the Shapley value by relaxing the efficiency axiom. WeightedSHAP learns to pay more attention to the marginal contributions that have more signal on a prediction, assigning larger attributions to more influential features. There are several limitations of WeightedSHAP that motivate interesting future work. Here we use the AUP metric to optimize the weights because AUP is commonly used in practice. However, there is no agreed-upon metric for evaluating feature attribution methods. Different users may care about different notions of attribution. Developing variants of marginal contribution weighting optimized for different applications is an important direction of future research. We believe that the core contribution of this paper---that the uniform weighting used by SHAP can be suboptimal---still provides useful insights for these investigations. Here we focus our experiments on directly comparing WeightedSHAP with SHAP because our goal is to characterize the limitations of SHAP. There is a large body of work comparing SHAP with other attribution methods that is complementary to our work \citep{yeh2019fidelity, jethani2021fastshap}.
\section*{Acknowledgment}
The authors would like to thank all anonymous reviewers for their constructive comments. We also would like to thank Ruishan Liu for the helpful discussion.
\label{sec:1.Introduction}
\vspace{-0.15cm}
Image retrieval is a long-standing problem in computer vision. This task aims to sort a database of images based on their similarities to the given query image.
For this task, global retrieval through global descriptor matching and geometric verification after local feature matching are mainly employed. These approaches constitute the two primary components of the image retrieval framework and mutually complement one another.
The global retrieval quickly performs a coarse retrieval across the database, and geometric verification re-ranks the coarse results by performing precise evaluation only on the potential candidates.
Along with deep learning, image retrieval has also advanced significantly.
In particular, several studies \cite{cao2020unifying, yang2021dolg, tan2021instance, noh2017large, teichmann2019detect,simeoni2019local} have been focused on extracting representative and distinctive features for global and local representations with deep learning.
However, geometric verification after local feature matching still plays an essential role in the re-ranking task in image retrieval, despite its drawbacks.
Owing to its \textit{verify-after-matching} structure, geometric verification is performed based only on sparse and thresholded feature correspondences. Moreover, it is neither learnable nor differentiable and requires iterative optimization even during testing.
In addition, geometric verification does not include a component that can handle multi-scale operation. Thus, several studies \cite{noh2017large,cao2020unifying,tan2021instance,philbin2007object} have attempted to solve the scale problem by repeating inference with the image pyramid to extract multi-scale local features. However, this is an extremely expensive process.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/introduce.pdf}
\vspace{-0.6cm}
\caption{
Novel image retrieval re-ranking method named correlation verification that \textit{directly} predicts image similarity by leveraging dense feature correlation in a convolutional manner.
}
\vspace{-0.5cm}
\label{fig:fig1.introduce}
\end{figure}
In this study, we propose an end-to-end learnable re-ranking network called \textit{Correlation Verification Networks} (CVNet) to replace the role of geometric verification in a better way.
The proposed network \textit{directly} evaluates semantic and geometric relations by leveraging dense feature correlations in a convolutional manner.
Following the successful architectural design of representative 2D convolutional neural networks (CNN), we design a 4D CNN with a pyramid structure of deeply stacked 4D convolution layers.
It compresses the correlation between semantic cues into image similarity while learning diverse geometric matching patterns from a large number of image pairs.
To ensure robustness even for large scale difference problems, it expands the single-scale feature to a feature pyramid for each image, forming cross-scale correlations between feature pyramids.
This structure enables cross-scale matching with a single inference while replacing the multi-scale inference conventionally used in image retrieval.
Our model does not require additional inference to extract local information; therefore, the feature-extraction latency, which significantly affects online retrieval time, is considerably reduced compared with other re-ranking methods.
Like many computer vision problems, image retrieval suffers from hard samples. We address this challenge through curriculum learning with hard negative mining and the Hide-and-Seek \cite{singh2017hide} strategy in the training phase, which improves overall performance by focusing on hard samples without losing generality on normal ones.
Our proposed re-ranking network achieves state-of-the-art performance on several image retrieval benchmarks by a significant margin. Our main contributions are as follows:
\vspace{-0.15cm}
\begin{itemize}[leftmargin=5mm]
\setlength{\itemsep}{1pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item We present Correlation Verification Networks (CVNet), a powerful re-ranking model that directly predicts the similarity of an image pair based on dense feature correlation.
\item To replace expensive multi-scale inference, we construct cross-scale correlations within the model and perform cross-scale matching using a single inference.
\item We propose curriculum learning with hard negative mining and the Hide-and-Seek strategy to handle hard samples without losing generality.
\item The proposed model achieves new state-of-the-art performance on several image retrieval benchmarks: $\mathcal{R}$Oxford (+1M), $\mathcal{R}$Paris (+1M), and GLDv2-retrieval.
\end{itemize}
\vspace{-0.15cm}
\section{Related Work}
\label{sec:2.Related_Work}
\vspace{-0.15cm}
\paragraph{Image retrieval.}
Over the past few decades, image retrieval \cite{sivic2003video,jegou2010aggregating,jegou2011aggregating,radenovic2016cnn,arandjelovic2016netvlad, teichmann2019detect,radenovic2018fine,cao2020unifying} has been a primary focus of computer-vision studies.
In pioneering research, handcrafted local features \cite{lowe2004distinctive, bay2008speeded} have been employed for global retrieval and re-ranking. A global retrieval with a global descriptor that aggregates handcrafted local features \cite{sivic2003video, philbin2007object, philbin2008lost, jegou2008hamming, jegou2010aggregating, jegou2011aggregating} is performed first, and spatial verification \cite{philbin2007object, philbin2008lost, avrithis2014hough} via local feature matching with RANSAC \cite{fischler1981random} is performed to re-rank putative retrieval results. Afterward, with the advancements in deep learning, global \cite{babenko2014neural, babenko2015aggregating, arandjelovic2016netvlad, gordo2017end, radenovic2018fine, tolias2015particular, cao2020unifying, yang2021dolg} and local features \cite{barroso2019key, dusmanu2019d2, luo2019contextdesc, mishchuk2017working, mishkin2018repeatability,noh2017large,yi2016lift, cao2020unifying} extracted from deep-learning networks have replaced handcrafted features.
Although the techniques of global and local representation have progressed significantly, geometric verification remains a de facto solution for image retrieval re-ranking in both conventional \cite{philbin2007object,philbin2008lost, xu2012learning} and recent studies \cite{noh2017large,cao2020unifying,simeoni2019local,teichmann2019detect}. In a recent study, Reranking Transformers (RRT) \cite{tan2021instance} were proposed as a replacement for geometric verification by leveraging the transformer architecture \cite{vaswani2017attention}; however, no significant improvement in performance was reported. In this study, we propose a novel re-ranking solution that exhibits powerful retrieval performance.
\vspace{-0.4cm}
\paragraph{Diffusion / Query expansion.}
Among re-ranking methods, diffusion \cite{iscen2017efficient, chang2019explore} and query expansion \cite{chum2007total, radenovic2018fine} require additional cost to traverse the entire database. Because this study focuses on improving image matching for single pairs, we do not consider these re-ranking methods.
\vspace{-0.4cm}
\paragraph{4D convolutional neural network.}
4D convolution is a promising solution that has received considerable attention for tasks that require interpretation of the relationship between two images (\eg visual dense correspondence prediction \cite{rocco2018neighbourhood, min2021convolutional, yang2019volumetric, li2020correspondence} and few-shot segmentation \cite{min2021hypercorrelation}).
The primary difference between the aforementioned tasks and image retrieval is that the former aims for a 2D (single image side) \cite{min2021hypercorrelation} or 4D (both image sides) \cite{min2021convolutional,yang2019volumetric} dense output, whereas the latter requires a single similarity value.
Therefore, in this study, we propose a novel structure that gradually compresses the 4D feature correlation through deeply stacked 4D convolution layers.
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/CVNET_global.pdf}
\vspace{-0.6cm}
\caption{
Illustration of the proposed global backbone network (CVNet-Global) and its training objectives: a classification loss and a contrastive loss. To utilize many samples without a computational burden in contrastive learning, a momentum network and queue structure are adopted from MoCo \cite{he2020momentum}. The combination of these objectives enables the network to learn the intra-class variability and inter-class distinctiveness required for the image retrieval task.
}
\vspace{-0.5cm}
\label{fig:CVNet_Global}
\end{figure*}
\vspace{-0.4cm}
\paragraph{Hide-and-Seek.}
Hide-and-Seek \cite{singh2017hide} is an augmentation technique that has been proposed to improve object localization performance in weakly supervised fields. To address the drawback that the network focuses only on the most salient areas, a few random patches of the image are masked to induce the network to make robust predictions despite having visual access only to less salient areas. We found that the Hide-and-Seek approach could improve the image retrieval performance by enabling accurate matching even on hard samples, such as those involving occlusion or truncation.
In this study, we apply Hide-and-Seek to our model in a curriculum manner to ensure robustness when handling hard samples without losing generality.
\vspace{-0.15cm}
\section{Global Backbone Network (CVNet-Global)}
\label{sec:3.Global_Backbone_Network}
\vspace{-0.15cm}
In this section, we introduce our proposed global backbone network named CVNet-Global. An overview of CVNet-Global is shown in \cref{fig:CVNet_Global}.
Our proposed global backbone network, which takes a single image $\mathbf{I} \in \mathbb{R}^{3 \times H \times W}$ as input, extracts the global descriptor $\mathbf{d}_g \in \mathbb{R}^{C_g}$ for global image retrieval and the local feature map $\mathbf{F} \in \mathbb{R}^{C_l \times H_l \times W_l}$ for the re-ranking phase.
We adopt a multi-objective loss \cite{berman2019multigrain} that jointly optimizes a classification loss and a contrastive loss, inducing the network to learn more distinctive and robust global and local representations.
\vspace{-0.15cm}
\subsection{Structure}
\label{sec:3.1.Structure}
\vspace{-0.15cm}
Inspired by the momentum-contrastive structure of MoCo \cite{he2020momentum}, we build two networks: the global backbone network $f$ and its momentum network $\bar{f}$, both based on ResNet \cite{he2016deep}; $f_i$ denotes the $i$th \textit{ResBlock}. Global average pooling is replaced with learnable GeM pooling \cite{radenovic2016cnn} whose power is initialized to 3.0, and a whitening FC layer \cite{gordoa2012leveraging} and L2-normalization are added after the pooling layer. We build a queue $\mathbf{Q} = \{\bar{\mathbf{d}}_g^i\}_{i=1}^K$ that saves the momentum global descriptors of past iterations and provides them as contrastive samples.
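As an illustrative, framework-agnostic sketch (in NumPy; the function names, shapes, and weight arguments are ours, not the paper's), the GeM-pooling head described above can be written as:

```python
import numpy as np

def gem(feature_map, p=3.0, eps=1e-6):
    """Generalized-mean (GeM) pooling over the spatial dimensions.

    feature_map: (C, H, W) activations; p is the learnable power (initialized
    to 3.0 in the text). p = 1 recovers average pooling, and p -> inf
    approaches max pooling.
    """
    x = np.clip(feature_map, eps, None)      # GeM requires positive inputs
    return np.mean(x ** p, axis=(1, 2)) ** (1.0 / p)

def gem_head(feature_map, whiten_w, whiten_b, p=3.0):
    """GeM pooling -> whitening FC -> L2 normalization, as in CVNet-Global."""
    pooled = gem(feature_map, p)             # (C,)
    d = whiten_w @ pooled + whiten_b         # whitening FC layer -> (C_g,)
    return d / np.linalg.norm(d)             # unit-norm global descriptor
```

In the actual model, $p$ and the whitening FC weights are learned parameters; here they are plain arguments for illustration.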
\vspace{-0.10cm}
\subsection{Training Objective}
\label{sec:3.2.Training_Objective}
\vspace{-0.10cm}
\paragraph{Classification loss.}
At each iteration, the query image $\mathbf{I}_q$ is fed into the global network $f$ to compute the query global descriptor $\mathbf{d}^q_g$. With $\mathbf{d}^q_g$, CurricularFace \cite{huang2020curricularface}-margined classification loss $\mathcal{L}_{cls}$ is computed as
\vspace{-0.15cm}
\begin{equation}
{
\mathcal{L}_{cls} = - \log\frac{\exp ( \mathcal{C} ({\mathbf{W}_{y_g}}^T \mathbf{d}^q_g, 1)/\tau )}{{\sum}_{i=1}^{N}{\exp (\mathcal{C} ({{\mathbf{W}_{y_i}}^T \mathbf{d}^q_g, \mathbbm{1}^i_q})/ \tau )}},
}
\vspace{-0.15cm}
\label{eq:Global_Classification_Loss}
\end{equation}
where $\mathbf{W}$ is the class weight, $\tau$ is the scale parameter, $y_g$ is the ground-truth class, and $\mathbbm{1}^i_q$ is an indicator that shows whether the $i$th class $y_i$ is identical to $y_g$. $\mathcal{C}$ is a function that adds a CurricularFace margin to cosine similarity with its margin term $m$.
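The margin function $\mathcal{C}$ follows CurricularFace \cite{huang2020curricularface}; below is a hedged single-sample NumPy sketch of \cref{eq:Global_Classification_Loss} (the function names and the simplified handling of the moving-average parameter $t$ are ours):

```python
import numpy as np

def curricular_logits(cos, target, m=0.15, t=0.0):
    """Sketch of the CurricularFace margin function C (Huang et al., 2020).

    cos: (N,) cosine similarities between the descriptor and all N class
    weights; target is the ground-truth class index, m the angular margin,
    and t a moving-average parameter (updated elsewhere during training).
    """
    out = cos.copy()
    theta = np.arccos(np.clip(cos[target], -1.0, 1.0))
    cos_pos_m = np.cos(theta + m)                       # positive: cos(theta + m)
    hard = (cos > cos_pos_m) & (np.arange(len(cos)) != target)
    out[hard] = cos[hard] * (t + cos[hard])             # re-weight hard negatives
    out[target] = cos_pos_m
    return out

def classification_loss(cos, target, m=0.15, tau=1.0 / 30.0, t=0.0):
    """Margined softmax cross-entropy, cf. the classification loss above."""
    z = curricular_logits(cos, target, m, t) / tau
    z = z - z.max()                                     # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[target])
```

The actual loss is averaged over a batch and uses the indicator $\mathbbm{1}^i_q$ for multi-positive classes; this single-label sketch conveys only the margin mechanism.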
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/CVNET_rerank.pdf}
\vspace{-0.6cm}
\caption{
Illustration of the proposed re-ranking network (CVNet-Rerank). The network takes a pair of feature maps extracted from the trained CVNet-Global model as input, constructs a cross-scale feature correlation, and gradually compresses it into the image similarity of the pair with deeply stacked 4D convolution layers.
}
\label{fig:CVNet_Rerank}
\vspace{-0.5cm}
\end{figure*}
\vspace{-0.40cm}
\paragraph{Momentum contrastive loss.}
At each iteration, a positive image $\mathbf{I}_p$ with the same label as the query image $\mathbf{I}_q$ is sampled and fed into the momentum network $\bar{f}$ to compute the positive momentum global descriptor $\bar{\mathbf{d}}^p_g$. The descriptor $\bar{\mathbf{d}}^p_g$ is enqueued to $\mathbf{Q}$ while the oldest element of the queue is dequeued. The queue $\mathbf{Q}$ then holds at least one momentum sample with the same label as the query, namely $\bar{\mathbf{d}}^p_g$. Thus, we use the CurricularFace-margined momentum contrastive loss $\mathcal{L}_{con}$:
\vspace{-0.15cm}
\begin{equation}
{
\mathcal{L}_{con}\! = \!\frac{-1}{\left | P(q) \right |} \sum\limits_{p \in P(q)}\!\!\log\!\frac{ \exp \left( \bar{\mathcal{C}} \left( \mathbf{d}_g^q \cdot \bar{\mathbf{d}}_g^p, 1 \right)\! / \tau \right)}{\sum\limits_{i \in \{\!p\!\}\!\bigcup\!N\!(\!q\!)}\exp \left( \bar{\mathcal{C}} \left( \mathbf{d}_g^q \cdot \bar{\mathbf{d}}_g^i, \mathbbm{1}^i_q \right)\! / \tau \right)},
}
\label{eq:Global_Contrastive_Loss}
\end{equation}
where $\bar{\mathcal{C}}$ is identical to $\mathcal{C}$ but updates its moving-average parameter separately from $\mathcal{C}$, and $P(q)$ and $N(q)$ are the in-queue positive and negative sets, respectively.
\vspace{-0.45cm}
\paragraph{Total loss.}
Finally, the total loss of our global backbone network $\mathcal{L}_g$ is the weighted sum of the classification loss $\mathcal{L}_{cls}$ and contrastive loss $\mathcal{L}_{con}$:
\vspace{-0.15cm}
\begin{equation}
{
\mathcal{L}_g = \lambda_{cls}\mathcal{L}_{cls} + \lambda_{con}\mathcal{L}_{con}.
}
\label{eq:Global_Total_Loss}
\vspace{-0.15cm}
\end{equation}
Note that the optimizer updates only the global backbone network $f$; the momentum network $\bar{f}$ is updated as an exponential moving average with momentum $\eta$.
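A minimal sketch of the momentum update and queue maintenance described above (pure Python; the function names are ours):

```python
def momentum_update(params_f, params_fbar, eta=0.999):
    """EMA update of the momentum network from the backbone network.

    Only the backbone f receives gradient updates; the momentum network
    tracks it with momentum eta per iteration.
    """
    return [eta * pb + (1.0 - eta) * p for p, pb in zip(params_f, params_fbar)]

def update_queue(queue, new_descriptor):
    """Enqueue the newest momentum descriptor, dequeue the oldest (FIFO)."""
    return [new_descriptor] + queue[:-1]
```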
\vspace{-0.10cm}
\section{Re-Ranking Network (CVNet-Rerank)}
\label{sec:4.Re_Ranking_Network}
\vspace{-0.10cm}
In this section, we introduce our proposed re-ranking network, named CVNet-Rerank.
An overview of CVNet-Rerank is shown in \cref{fig:CVNet_Rerank}.
Our proposed re-ranking network, which takes a pair of local feature maps $(\mathbf{F}_q, \mathbf{F}_k)$ of images $(\mathbf{I}_q, \mathbf{I}_k)$ as input, predicts the similarity $s_l^{q,k} \in \mathbb{R}^1$ between the two images and subsequently re-ranks the global image retrieval results according to the predicted similarity. The local feature maps $(\mathbf{F}_q, \mathbf{F}_k)$ are extracted from an intermediate layer of the global backbone network $f$, which is fully trained and then frozen.
Representative 2D CNN architectures (\eg VGG \cite{simonyan2015very} and ResNet \cite{he2016deep}) stack several 2D convolutional layers, followed by spatial-dimensional down-sampling to capture diverse level features in an image and compress it to fine-grained information. Inspired by the aforementioned structure, the proposed re-ranking network gradually compresses the feature correlation with deeply stacked 4D convolution layers and predicts the image similarity using the classifier.
\vspace{-0.10cm}
\subsection{Cross-scale Correlation Construction}
\label{sec:4.1.Cross_scale_Correlation_Construction}
\vspace{-0.10cm}
Because image retrieval must be robust to scale differences, several image retrieval methods that use local features build a multi-scale local feature set through multiple inferences over an image pyramid.
Here, following \cite{min2021convolutional}, we expand the extracted feature map to a multi-scale feature pyramid to capture semantic cues from different scales inside the model, thus avoiding the expensive task of multi-scale inference.
Given a pair of query and key images $\mathbf{I}_q, \mathbf{I}_k \in \mathbb{R}^{3 \times H \times W}$, we extract the local feature maps $\mathbf{F}_q, \mathbf{F}_k \in \mathbb{R}^{C_l \times H_l \times W_l}$ using the global backbone network $f$. After feature extraction, we construct a feature pyramid $\{\mathbf{F}^s\}^S_{s=1}$, where $S$ is the number of scales, by repeatedly resizing the extracted feature map $\mathbf{F}$ with a scaling factor of $1/\sqrt{2}$. Each level of the feature pyramid passes through a scale-wise $3\times3$ convolution layer that reduces its channel dimension to $C'_l$, capturing semantic information with diverse receptive field sizes while reducing the memory footprint of our image retrieval framework.
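A simplified NumPy sketch of the pyramid construction (nearest-neighbor resizing stands in for the model's interpolation, and the scale-wise convolutions are omitted; the names are ours):

```python
import numpy as np

def resize_nn(fmap, scale):
    """Nearest-neighbor spatial resize of a (C, H, W) feature map."""
    c, h, w = fmap.shape
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    return fmap[:, ys][:, :, xs]

def build_pyramid(fmap, num_scales=3):
    """Repeatedly shrink the feature map by 1/sqrt(2), as in Sec. 4.1.

    In the real model, each level then passes a scale-wise 3x3 convolution
    reducing its channels to C'_l; here we only sketch the resizing.
    """
    pyramid = [fmap]
    for _ in range(num_scales - 1):
        pyramid.append(resize_nn(pyramid[-1], 1 / np.sqrt(2)))
    return pyramid
```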
With the constructed query feature pyramid $\{\mathbf{F}_q^s\}^S_{s=1}$ and key feature pyramid $\{\mathbf{F}_k^s\}^S_{s=1}$, we compute a 4-dimensional cross-scale correlation set $\{\mathbf{C}_{qk}^{s_q,s_k}\}^{(S,S)}_{(s_q,s_k)=(1,1)}$ of size $S^2$ using cosine similarity and ReLU function:
\begin{equation}
\mathbf{C}_{qk}^{s_q,s_k}(\mathbf{p}_q, \mathbf{p}_k) = \text{ReLU} \left (\frac{\mathbf{F}_q^{s_q}(\mathbf{p}_q) \cdot \mathbf{F}_k^{s_k}(\mathbf{p}_k)}{\left \| \mathbf{F}_q^{s_q}(\mathbf{p}_q) \right \| \left \| \mathbf{F}_k^{s_k}(\mathbf{p}_k) \right \|} \right ),
\label{eq:Correlation_Computation}
\end{equation}
where $\mathbf{p}_q$ and $\mathbf{p}_k$ are the pixel positions in each feature map. Finally, we interpolate all the correlations to obtain the original feature resolution $H_l \times W_l$ for each image side, stack all the correlations, and construct a cross-scale correlation set $\mathbf{C}^0_{qk} \in \mathbb{R}^{S^2 \times H_l \times W_l \times H_l \times W_l}$.
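A NumPy sketch of the correlation of \cref{eq:Correlation_Computation} for a single scale pair (the loop over the $S^2$ scale pairs, the interpolation, and the stacking are omitted; the names are ours):

```python
import numpy as np

def correlation(fq, fk):
    """Cosine-similarity correlation with ReLU, cf. Eq. (3).

    fq: (C, Hq, Wq) query features, fk: (C, Hk, Wk) key features
    -> (Hq, Wq, Hk, Wk) 4D correlation tensor.
    """
    nq = fq / (np.linalg.norm(fq, axis=0, keepdims=True) + 1e-8)
    nk = fk / (np.linalg.norm(fk, axis=0, keepdims=True) + 1e-8)
    corr = np.einsum('cij,ckl->ijkl', nq, nk)   # all-pairs cosine similarity
    return np.maximum(corr, 0.0)                # ReLU
```

Stacking such tensors for all $S^2$ scale pairs (after resizing each to $H_l \times W_l$ per image side) yields the cross-scale correlation $\mathbf{C}^0_{qk}$.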
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/encoder_structure.pdf}
\vspace{-0.6cm}
\caption{
The detailed structure of the proposed 4D correlation Encoder. The proposed encoder structure gradually compresses the cross-scale correlation into a fine-grained correlation cue.
}
\label{fig:fig4.4D_encoder_structure}
\vspace{-0.5cm}
\end{figure}
\subsection{4D Correlation Encoder}
\label{sec:4.2.4D_Correlation_Encoder}
\vspace{-0.15cm}
Our correlation encoder takes the cross-scale correlation set $\mathbf{C}^0_{qk} \in \mathbb{R}^{S^2 \times H_l \times W_l \times H_l \times W_l}$ and gradually compresses it into a binary class logit $\mathbf{Z}_{qk}=\{z_0, z_1\} \in \mathbb{R}^2$.
We construct our encoder as a sequence of 4D convolution blocks, followed by a global average pooling layer and a 2-layer MLP classifier. Except for the last 4D convolution block, each block performs spatial down-sampling by implementing its last convolutional layer as a strided convolution. Na\"ive 4D convolution is computationally intensive and therefore unsuitable for online re-ranking. Following findings of previous studies, we adopt a center-pivot 4D convolution \cite{min2021hypercorrelation} to reduce the burden of high-dimensional kernels and enable real-time image re-ranking.
With this pyramid structure of 4D convolution, the cross-scale feature correlation set is encoded as a fine-grained correlation cue $\mathbf{C}^{1:4}_{qk}$. It is subsequently converted into a class logit $\mathbf{Z}_{qk}$ through spatial dimension average pooling and a binary classifier.
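A hedged NumPy sketch of the encoder tail described above: after the 4D convolution blocks (center-pivot convolutions in the real model), the encoded correlation is average-pooled and classified. All names are ours, and treating $s_r$ as the softmax probability of the match class is our assumption:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def correlation_to_logit(corr, w1, b1, w2, b2):
    """Tail of the 4D correlation encoder: average pooling + 2-layer MLP.

    corr: (C, Hq, Wq, Hk, Wk) correlation encoded by the 4D conv blocks;
    w1, b1, w2, b2 are placeholder classifier parameters (names ours).
    """
    pooled = corr.mean(axis=(1, 2, 3, 4))    # pool all four spatial dims -> (C,)
    return w2 @ relu(w1 @ pooled + b1) + b2  # binary class logit Z_qk in R^2

def similarity(logit):
    """Convert the class logit to a matching score (softmax probability)."""
    z = logit - logit.max()
    p = np.exp(z) / np.exp(z).sum()
    return p[1]                              # probability of the 'match' class
```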
\subsection{Training Objective}
\label{sec:4.3.Training_Objective}
\vspace{-0.15cm}
Our re-ranking network is trained to minimize the cross-entropy loss for query and key pair $\left(q,k\right)$:
\vspace{-0.15cm}
\begin{equation}
\mathcal{L}_{r}^{qk} = \mathbf{CE}(\mathbf{Softmax}(\mathbf{Z}_{qk}), \mathbbm{1}^k_q).
\label{eq:Rerank_Loss}
\end{equation}
\vspace{-0.5cm}
We symmetrically obtain the loss $\mathcal{L}_r^{kq}$ from $\mathcal{L}_r^{qk}$ by reversing the query-key position. Afterward, we apply both to a positive key sample $p$ and a negative key sample $n$. The final loss for our re-ranking network is constructed as follows:
\vspace{-0.15cm}
\begin{equation}
\mathcal{L}_{r} = \left(\mathcal{L}_{r}^{qp} + \mathcal{L}_{r}^{pq} + \mathcal{L}_{r}^{qn} + \mathcal{L}_{r}^{nq}\right)/4 .
\label{eq:Rerank_Total_Loss}
\end{equation}
\vspace{-0.6cm}
\vspace{-0.15cm}
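A NumPy sketch of the symmetric loss of \cref{eq:Rerank_Total_Loss} (the convention that label 1 denotes a matching pair is our assumption; names ours):

```python
import numpy as np

def softmax_ce(logit, label):
    """Cross-entropy between softmax(logit) and a binary match label."""
    z = logit - logit.max()
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])

def rerank_loss(z_qp, z_pq, z_qn, z_nq):
    """Symmetric re-ranking loss: positive pairs labelled 1, negative
    pairs labelled 0, averaged over the four directed pairs."""
    return (softmax_ce(z_qp, 1) + softmax_ce(z_pq, 1)
            + softmax_ce(z_qn, 0) + softmax_ce(z_nq, 0)) / 4.0
```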
\subsection{Training with Hard Samples}
\label{sec:4.4:Training_with_Hard_Samples}
\vspace{-0.15cm}
Because image re-ranking is performed on images that look similar at first glance, it must be robust against hard samples. Thus, we propose to train the network with a focus on hard samples through hard negative mining and Hide-and-Seek augmentation. Although hard samples are beneficial for model training, they carry the risk of losing generality on normal samples. To address this concern, we apply hard negative mining and Hide-and-Seek augmentation in a curriculum learning manner, training the re-ranking network to make more accurate predictions on hard samples without losing generality on normal ones.
\vspace{-0.45cm}
\paragraph{Hard negative mining.}
We select hard negative samples with the help of the trained global descriptors. For every sample in the training dataset, the top 10 negatives are selected in descending order of global-descriptor matching score. Example results of hard negative mining are shown in \cref{fig:fig5.hard_negative_mining}.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/hardnegmining.pdf}
\vspace{-0.6cm}
\caption{
Examples of the query and hard negative samples of the GLDv2-clean dataset. These pairs look similar at first glance, but a closer look reveals several differences.
}
\label{fig:fig5.hard_negative_mining}
\vspace{-0.5cm}
\end{figure}
\vspace{-0.45cm}
\paragraph{Hide-and-Seek.}
Similar to several computer vision studies, occlusion is a primary obstacle in image retrieval tasks. To solve this problem, we apply Hide-and-Seek \cite{singh2017hide} augmentation to synthetically generate matching situations that involve occlusions. In the original Hide-and-Seek method, the input image is divided into grids, and probabilistic deactivation is applied to each grid section.
Similarly, we randomly deactivate pixels of each input feature map. This has an effect similar to applying an occlusion to the receptive field of the original image that corresponds to one pixel in the feature map.
This concept is illustrated in \cref{fig:fig6.hide_and_seek}.
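A NumPy sketch of this feature-level Hide-and-Seek (the function name and the choice of a mask shared across channels are ours):

```python
import numpy as np

def hide_and_seek(fmap, p_has, rng):
    """Randomly zero out spatial positions of a (C, H, W) feature map.

    Each spatial position is deactivated with probability p_has, which
    mimics occluding the corresponding receptive field in the image.
    """
    _, h, w = fmap.shape
    keep = rng.random_sample((h, w)) >= p_has   # (H, W) binary keep-mask
    return fmap * keep[None, :, :]              # same mask for all channels
```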
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/hideandseek.pdf}
\vspace{-0.6cm}
\caption{
With Hide-and-Seek, the re-ranking network can effectively learn hard-matching cases by randomly hiding parts of matching pairs to give images an occlusion-like effect.
}
\label{fig:fig6.hide_and_seek}
\vspace{-0.5cm}
\end{figure}
\begin{table*}[t]
\small
\centering
\setlength{\extrarowheight}{-4.5pt}
\addtolength{\extrarowheight}{\aboverulesep}
\addtolength{\extrarowheight}{\belowrulesep}
\setlength{\aboverulesep}{0pt}
\setlength{\belowrulesep}{0pt}
\resizebox{0.9\linewidth}{!}{\begin{tabular}{lcccclcccclcc}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{Method}} & \multicolumn{4}{c}{Medium} & & \multicolumn{4}{c}{Hard} & & \multicolumn{2}{c}{Multi-scale} \\ \cmidrule(l){2-5} \cmidrule(l){7-10} \cmidrule(l){12-13}
\multicolumn{1}{c}{} & $\mathcal{R}$Oxf & +1M & $\mathcal{R}$Par & +1M & & $\mathcal{R}$Oxf & +1M & $\mathcal{R}$Par & +1M & & global & local \\ \midrule
\multicolumn{13}{l}{\textit{\textbf{(A) Local feature aggregation (+ Local feature re-ranking)}}} \\
DELF-ASMK*+SP \cite{noh2017large, radenovic2018revisiting} & 67.8 & 53.8 & 76.9 & 57.3 & & 43.1 & 31.2 & 55.4 & 26.4 & & - & 7 \\
DELF-D2R-R-ASMK* (GLDv1) \cite{teichmann2019detect} & 73.3 & 61.0 & 80.7 & 60.2 & & 47.6 & 33.6 & 61.3 & 29.9 & & - & 7 \\
\,\;+ SP (Rerank Top-100) \cite{teichmann2019detect} & 76.0 & 64.0 & 80.2 & 59.7 & & 52.4 & 38.1 & 58.6 & 29.4 & & - & 7 \\
R50-How-ASMK,n=2000 \cite{tolias2020learning} & 79.4 & 65.8 & 81.6 & 61.8 & & 56.9 & 38.9 & 62.4 & 33.7 & & - & 7 \\ \midrule
\multicolumn{13}{l}{\textit{\textbf{(B) Global features (+ Local feature re-ranking)}}} \\
R101-GeM$^\uparrow$ \cite{radenovic2018fine,simeoni2019local} & 65.3 & 46.1 & 77.3 & 52.6 & & 39.6 & 22.2 & 56.6 & 24.8 & & 3 & - \\
\,\;+DSM (Rerank Top-100) \cite{simeoni2019local} & 65.3 & 47.6 & 77.4 & 52.8 & & 39.2 & 23.2 & 56.2 & 25.0 & & 3 & 3 \\
R101-GeM-AP (GLDv1) \cite{revaud2019learning} & 66.3 & - & 80.2 & - & & 42.5 & - & 60.8 & - & & 1 & - \\
R101-GeM+SOLAR (GLDv1) \cite{ng2020solar} & 69.9 & 53.5 & 81.6 & 59.2 & & 47.9 & 29.9 & 65.5 & 33.4 & & 3 & - \\
R50-DELG (Global-only, GLDv2-clean) \cite{cao2020unifying} & 73.6 & 60.6 & 85.7 & 68.6 & & 51.0 & 32.7 & 71.5 & 44.4 & & 3 & - \\
\,\;+ GV (Rerank Top-100) \cite{cao2020unifying} & 78.3 & 67.2 & 85.7 & 69.6 & & 57.9 & 43.6 & 71.0 & 45.7 & & 3 & 7 \\
\,\;+ GV (Rerank Top-200) \cite{cao2020unifying,tan2021instance} & 79.2 & 68.2 & 85.5 & 69.6 & & 57.5 & 42.9 & 67.2 & 44.5 & & 3 & 7 \\
\,\;+ RRT (Rerank Top-100) \cite{tan2021instance} & 78.1 & 67.0 & 86.7 & 69.8 & & 60.2 & 44.1 & 75.1 & 49.4 & & 3 & 7 \\
\,\;+ RRT (Rerank Top-200) \cite{tan2021instance} & 79.5 & 68.6 & 87.8 & 71.5 & & \uline{62.5} & 46.3 & 77.1 & 52.3 & & 3 & 7 \\
R101-DELG (Global-only, GLDv2-clean) \cite{cao2020unifying} & 76.3 & 63.7 & 86.6 & 70.6 & & 55.6 & 37.5 & 72.4 & 46.9 & & 3 & - \\
\,\;+ GV (Rerank Top-100) \cite{cao2020unifying} & 81.2 & 69.1 & 87.2 & 71.5 & & 64.0 & 47.5 & 72.8 & 48.7 & & 3 & 7 \\
\,\;+ RRT (Rerank Top-100) \cite{cao2020unifying} & 79.9 & - & 87.6 & - & & \uline{64.1} & - & 76.1 & - & & 3 & 7 \\
\,\;+ SuperGlue (Rerank Top-100) \cite{cao2020unifying,sarlin2020superglue} & 79.7 & - & 87.1 & - & & 62.1 & - & 71.5 & - & & 3 & 7 \\
R50-DOLG (GLDv2-clean) \cite{yang2021dolg} & \uline{80.5} & \uline{76.6} & \uline{89.8} & \uline{80.8} & & 58.8 & \uline{52.2} & \uline{77.7} & \uline{62.8} & & \multicolumn{2}{c}{5} \\
R101-DOLG (GLDv2-clean) \cite{yang2021dolg} & \uline{81.5} & \uline{77.4} & \uline{91.0} & \uline{83.3} & & 61.1 & \uline{54.8} & \uline{80.3} & \uline{66.7} & & \multicolumn{2}{c}{5} \\ \midrule
\multicolumn{13}{l}{\textit{\textbf{(C) Ours}}} \\
\textbf{R50-CVNet-Global (GLDv2-clean)} & 81.0 & 72.6 & 88.8 & 79.0 & & 62.1 & 50.2 & 76.5 & 60.2 & & 3 & - \\
\,\;\textbf{+ CVNet-Rerank (Rerank Top-100)} & 86.1 & 77.6 & 89.4 & 79.9 & & 72.8 & 61.1 & 78.6 & 63.9 & & 3 & 1 \\
\,\;\textbf{+ CVNet-Rerank (Rerank Top-200)} & 87.2 & 78.9 & 90.0 & 81.2 & & 74.5 & 62.9 & 79.5 & 66.0 & & 3 & 1 \\
\,\;\textbf{+ CVNet-Rerank (Rerank Top-400)} & \textbf{87.9} & \textbf{80.7} & \textbf{90.5} & \textbf{82.4} & & \textbf{75.6} & \textbf{65.1} & \textbf{80.2} & \textbf{67.3} & & 3 & 1 \\
\textbf{R101-CVNet-Global (GLDv2-clean)} & 80.2 & 74.0 & 90.3 & 80.6 & & 63.1 & 53.7 & 79.1 & 62.2 & & 3 & - \\
\,\;\textbf{+ CVNet-Rerank (Rerank Top-100)} & 85.6 & 79.6 & 90.6 & 81.5 & & 72.9 & 64.5 & 80.4 & 66.2 & & 3 & 1 \\
\,\;\textbf{+ CVNet-Rerank (Rerank Top-200)} & 86.4 & 81.0 & 91.1 & 82.7 & & 74.6 & 66.6 & 81.0 & 68.0 & & 3 & 1 \\
\,\;\textbf{+ CVNet-Rerank (Rerank Top-400)} & \textbf{87.2} & \textbf{81.9} & \textbf{91.2} & \textbf{83.8} & & \textbf{75.9} & \textbf{67.4} & \textbf{81.1} & \textbf{69.3} & & 3 & 1 \\ \bottomrule
\end{tabular}
}
\vspace{-0.2cm}
\caption{\textbf{Comparison with state-of-the-art methods.} Performance comparison on $\mathcal{R}$Oxf/$\mathcal{R}$Par and 1M-added experiments (referred to as +1M) with Medium and Hard evaluation protocols. The proposed image retrieval framework outperforms state-of-the-art image retrieval methods by a large margin for every measure. The best and second-best scores are presented as \textbf{boldfaced} and \uline{underlined} text, respectively.
}
\label{tbl:main}
\vspace{-0.5cm}
\end{table*}
\vspace{-0.45cm}
\paragraph{Curriculum learning.}
To prevent hard samples from interfering with early learning, we apply hard negative mining and Hide-and-Seek in a curriculum learning manner. Instead of focusing on hard negatives from the outset, the rate of selecting hard negatives $r_{H}$ and the probability of Hide-and-Seek augmentation $p_{has}$ gradually increase as learning progresses.
This curriculum learning helps the network to retain its generality to ensure that it consistently performs well even when the re-ranking range is extended.
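The two schedules can be sketched as simple linear ramps (the endpoint values follow \cref{sec:5.1.Implementation_Details}; the function names are ours):

```python
def linear_ramp(step, total_steps, start, end):
    """Linearly interpolate a curriculum parameter over training."""
    frac = min(max(step / float(total_steps), 0.0), 1.0)
    return start + (end - start) * frac

def curriculum(step, total_steps):
    """r_H ramps from 0.2 to 1.0 and p_has from 0 to 0.2 (Sec. 5.1)."""
    r_h = linear_ramp(step, total_steps, 0.2, 1.0)
    p_has = linear_ramp(step, total_steps, 0.0, 0.2)
    return r_h, p_has
```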
\vspace{-0.15cm}
\section{Experiments}
\label{sec:5.Experiments}
\vspace{-0.15cm}
\subsection{Implementation Details}
\label{sec:5.1.Implementation_Details}
\vspace{-0.15cm}
\paragraph{Common setting.}
Our proposed CVNet is implemented using PyTorch \cite{paszke2019pytorch}. We use the `clean' subset \cite{yokoo2020two} of Google Landmarks dataset v2 (1.58M images from 81k landmarks) \cite{weyand2020google} as a training set. The input image is augmented with random cropping/aspect ratio distortion and resized to $512 \times 512$. We use an SGD optimizer with a momentum of 0.9 and use cosine learning rate scheduling.
\vspace{-0.45cm}
\paragraph{Global backbone network.}
We use ResNet-50 (R50) and ResNet-101 (R101) with ImageNet \cite{russakovsky2015imagenet} pre-trained weights as the encoders of the global backbone network; ResNet-50 is used for the ablation studies.
We use Shuffling Batch Normalization \cite{he2020momentum}, a global descriptor size of 2048, and a queue size of 73,728. We set $\tau$ to $1/30$, $m$ to $0.15$, $\eta$ to 0.999, and both $\lambda_{cls}$ and $\lambda_{con}$ to $0.5$. The global model is trained on the training dataset for 25 epochs (39.5M examples) with a learning rate of 0.005625 and a batch size of 144.
\vspace{-0.45cm}
\paragraph{Re-ranking network.}
For cross-scale correlation construction, we use $S=3$ scales (\ie $\{1/2, 1/\sqrt{2}, 1\}$). We extract the feature map $\mathbf{F}$ from the $f_3$ output and compress its channel dimension to $C'_l=256$.
Our training set contains various views of landmarks, including cases with no overlap. To avoid non-overlapping query-positive pairs, we select verified match pairs for each class with the help of deep local features \cite{noh2017large} and exclude classes with too few verified match pairs.
Please see the supplementary material for a more detailed explanation of the data selection and sampling process used for CVNet-Rerank. Finally, we select 1M images from 31k landmarks, and the proposed re-ranking model is trained over all classes for 200 epochs (6.3M steps) with a learning rate of 0.00375 and a batch size of 96.
$r_H$ and $p_{has}$ linearly increase from 0.2 to 1.0 and from 0 to 0.2 while training, respectively.
\vspace{-0.45cm}
\paragraph{Feature extraction and matching.}
For global descriptor extraction, we follow the convention of previous studies \cite{gordo2017end,noh2017large,radenovic2018fine, cao2020unifying, tan2021instance}. We extract global descriptors of three scales: $\left\{1/\sqrt 2 ,1,\sqrt 2 \right\}$. The final global descriptor is calculated by L2-normalizing the average of the three descriptors.
During the re-ranking process, the final ranking is decided by the final score $s_g + \alpha s_r$, where $s_g$ is the cosine similarity of the global descriptors, $s_r$ is the output score of the re-ranking network, and $\alpha$ is the weight for $s_r$. As in previous studies \cite{cao2020unifying, revaud2019learning, teichmann2019detect, ng2020solar}, the weight $\alpha$ is tuned on $\mathcal{R}$Oxf/$\mathcal{R}$Par and kept fixed for the large-scale (+1M) and GLDv2-retrieval experiments. Finally, we set $\alpha$ to 0.5.
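A pure-Python/NumPy sketch of the score fusion and shortlist re-ranking described above (the names are ours, and keeping the global order for items outside the top-$k$ is our assumption):

```python
import numpy as np

def rerank(global_sims, rerank_scores, top_k, alpha=0.5):
    """Re-rank the top-k global retrieval results by s_g + alpha * s_r.

    global_sims: (N,) cosine similarities s_g of the query to all N database
    descriptors; rerank_scores maps a database index to its re-ranking score
    s_r (computed only for the top-k shortlist).
    """
    order = list(np.argsort(-global_sims))         # initial global ranking
    shortlist = order[:top_k]
    fused = {i: global_sims[i] + alpha * rerank_scores[i] for i in shortlist}
    reranked = sorted(shortlist, key=lambda i: -fused[i])
    return reranked + order[top_k:]                # tail keeps the global order
```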
\vspace{-0.15cm}
\subsection{Evaluation Benchmarks}
\label{sec:5.2.Evaluation_Benchmarks}
\vspace{-0.15cm}
We primarily evaluate our model on $\mathcal{R}$Oxford5k \cite{philbin2007object, radenovic2018revisiting} (referred to as $\mathcal{R}$Oxf) and $\mathcal{R}$Paris6k \cite{philbin2008lost, radenovic2018revisiting} (referred to as $\mathcal{R}$Par) datasets. Both datasets comprise 70 queries and 4933 and 6322 database images, respectively. In addition, an $\mathcal{R}$1M distractor set \cite{radenovic2018revisiting} is used for measuring the large-scale retrieval performance. Performance is measured using a mean Average Precision (mAP) metric.
Additionally, we evaluate our model on the instance-level large-scale image retrieval task of the Google Landmarks dataset v2 \cite{weyand2020google} (referred to as GLDv2-retrieval). The GLDv2-retrieval comprises 750 test query images and 762k database images. In this task, performance is evaluated using a mean Average Precision@100 (mAP$@100$) metric.
\vspace{-0.15cm}
\subsection{Results}
\label{sec:5.3.Results}
\vspace{-0.2cm}
In this section, we compare our model with state-of-the-art image retrieval methods.
\vspace{-0.6cm}
\paragraph{Comparison with state-of-the-art methods. (\cref{tbl:main}, \cref{tab:gldv2_retrieval})}
\cref{tbl:main} shows a comparison between results of the proposed model and state-of-the-art image retrieval methods on $\mathcal{R}$Oxf and $\mathcal{R}$Par, and their +1M experiments.
For all settings, the proposed CVNet outperforms the state-of-the-art methods. Our global model alone is comparable to the state-of-the-art methods without additional modules, and our proposed re-ranking network exhibits superior performance without expensive multi-scale inference. Owing to the nature of re-ranking, the gains of the proposed model are largest on the difficult dataset ($\mathcal{R}$Oxf), under the difficult protocol (Hard), and when a large number of distractor images interfere (+1M). Our re-ranking method yields an improvement of up to 14.9$\%$ (R50-$\mathcal{R}$Oxf-Hard+1M), which is significantly higher than that of any state-of-the-art method. In addition, the proposed method performs well without loss of generality even when the number of re-ranking samples increases. \cref{tab:gldv2_retrieval} compares CVNet with previously reported results on the GLDv2-retrieval test. In this comparison as well, our proposed CVNet outperforms all state-of-the-art methods.
\begin{table}[t]
\centering
\small
\resizebox{0.9\linewidth}{!}
{
\begin{tabular}{lc}
\toprule
\multicolumn{1}{c}{Method} & mAP@100 \\ \midrule
DELF-R-ASMK*+SP \cite{teichmann2019detect} & 18.8 \\
R101-GeM+ArcFace \cite{weyand2020google} & 20.7 \\
R101-GeM+CosFace \cite{yokoo2020two} & 21.4 \\
R50-DELG (GLDv2-clean) \cite{cao2020unifying} & 24.1 \\
\,\;+ GV (Rerank Top-100) \cite{cao2020unifying} & 24.3 \\
R101-DELG (GLDv2-clean) \cite{cao2020unifying} & 26.0 \\
\,\;+ GV (Rerank Top-100) \cite{cao2020unifying} & 26.8 \\
\textbf{R50-CVNet-Global (Ours)} & 30.2 \\
\,\;\textbf{+ CVNet-Rerank (Rerank Top-100) (Ours)} & \textbf{32.4} \\
\textbf{R101-CVNet-Global (Ours)} & 32.5 \\
\,\;\textbf{+ CVNet-Rerank (Rerank Top-100) (Ours)} & \textbf{34.9} \\ \bottomrule
\end{tabular}
}
\vspace{-0.2cm}
\caption{\textbf{GLDv2-retrieval evaluation.} Results on the test split of GLDv2-retrieval. The best scores are presented as \textbf{boldfaced} text for each ResNet backbone.
}
\label{tab:gldv2_retrieval}
\vspace{-0.5cm}
\end{table}
\vspace{-0.25cm}
\paragraph{Comparison with other re-ranking methods. (\cref{tbl:Comparison_ohter_reranking_methods})}
For a fair comparison, we attach the local branch of DELG \cite{cao2020unifying} to our global backbone to learn the local DELG features. With these learned local features, we reproduce two re-ranking methods: geometric verification (GV) and Reranking Transformers (RRT) \cite{tan2021instance}. Details of the reproduction are provided in the supplementary material.
While GV yields a moderate performance improvement, RRT degrades performance on some sets, despite our using the official code and settings. Our proposed method surpasses both methods by a large margin on all measures.
\subsection{Ablation Experiments}
\label{sec:5.4.Ablation Experiments}
\vspace{-0.15cm}
In this section, we present the core ablation results in \cref{tbl:ablation_study}. Please refer to the supplementary material for a detailed explanation of this and additional ablation studies.
\vspace{-0.45cm}
\paragraph{Cross-scale correlation (\cref{tbl:Cross_Scale_Correlation}).}
We conduct an ablation study on the cross-scale correlation construction to demonstrate its efficacy. Cross-scale correlation boosts the re-ranking performance, especially under the hard protocols, which involve large scale differences.
\vspace{-0.45cm}
\paragraph{Hard negative mining and Hide-and-Seek (\cref{tbl:Hard_Negative_Mining_and_Hide_and_Seek}).}
Our results demonstrate the effects of hard negative mining and Hide-and-Seek augmentation. When trained only with random negatives, the network loses its discriminative power and fails to re-rank. Because re-ranking primarily encounters hard samples at test time, training that focuses on hard negatives considerably improves performance. Hide-and-Seek augmentation further improves the overall performance by making the network robust to challenging conditions.
\vspace{-0.45cm}
\paragraph{Loss comparison for the CVNet-Global (\cref{tbl:Global_Loss_Comparison}).}
For the global backbone network, we find that using the classification and contrastive losses simultaneously, rather than either one alone, yields overall improved performance.
\vspace{-0.45cm}
\paragraph{Quantization (\cref{tbl:8_bit_quantization}).}
To reduce the memory footprint, we conduct an experiment in which the multi-scale features stored in 32 bits are quantized to 8 bits. While this quantization reduces the memory footprint to a quarter of its original size, it hardly diminishes the overall performance.
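The quantization step can be sketched as a uniform affine (min--max) scheme; the exact scheme used by CVNet is not specified here, so the following is a plausible sketch rather than the actual implementation. Replacing each 32-bit float with an 8-bit code (plus a per-tensor scale and offset) yields the fourfold memory reduction.

```python
def quantize_8bit(x):
    """Uniform affine 8-bit quantization of a list of floats.
    Returns (codes, scale, offset) with integer codes in [0, 255].
    A hypothetical sketch, not the scheme actually used by CVNet."""
    lo, hi = min(x), max(x)
    scale = (hi - lo) / 255.0 or 1.0  # guard against constant input
    codes = [round((v - lo) / scale) for v in x]
    return codes, scale, lo

def dequantize_8bit(codes, scale, offset):
    """Reconstruct approximate float values from the 8-bit codes."""
    return [c * scale + offset for c in codes]
```

Storing each code in one byte instead of four reduces the footprint to a quarter, while the reconstruction error is bounded by half a quantization step (scale/2), consistent with the negligible mAP change reported in \cref{tbl:8_bit_quantization}.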
\vspace{-0.45cm}
\paragraph{Extraction latency and memory footprint (\cref{tbl:Extraction_Latency_and_Memory_Footprint}).}
Our feature extraction for the re-ranking process requires only a single inference, which is included in the process of extracting the global descriptor. Therefore, it has the lowest extraction latency among the reproduced re-ranking methods. The memory footprint of the original model is large because of its dense nature; thus, we reduce it with quantization (CVNet$^Q$). Through channel reduction and quantization, we achieve a memory footprint similar to that of re-ranking methods using sparse features, while significantly improving the performance. Latency and matching time are measured on an NVIDIA TITAN RTX GPU and an i5-9600K CPU, for square images of side 512. Times measured on the CPU are marked with an $*$.
\begin{table}[t]
\centering
\Huge
\resizebox{1.0\linewidth}{!}
{%
\begin{tabular}{@{}clcccccccc@{}}
\toprule
\multirow{2}{*}{\#} & \multirow{2}{*}{Method} & \multicolumn{4}{c}{Medium} & \multicolumn{4}{c}{Hard} \\ \cmidrule(l){3-10}
& & $\mathcal{R}$Oxf & +1M & $\mathcal{R}$Par & +1M & $\mathcal{R}$Oxf & +1M & $\mathcal{R}$Par & +1M \\ \midrule
0 & CVNet-Global & 81.0 & 72.6 & 88.8 & 79.0 & 62.1 & 50.2 & 76.5 & 60.2 \\ \midrule
\multirow{3}{*}{100} & GV$^\dagger$ \cite{cao2020unifying} & {\ul 82.2} & {\ul 74.0} & {\ul 89.0} & {\ul 79.3} & 64.2 & 51.9 & {\ul 77.1} & {\ul 60.8} \\
& RRT$^\dagger$ \cite{tan2021instance} & {\ul 82.2} & 72.4 & 88.8 & 78.8 & {\ul 66.1} & {\ul 52.3} & 75.6 & 57.4 \\
& CVNet-Rerank & \textbf{86.1} & \textbf{77.6} & \textbf{89.4} & \textbf{79.9} & \textbf{72.8} & \textbf{61.1} & \textbf{78.6} & \textbf{63.9} \\ \midrule
\multirow{3}{*}{200} & GV$^\dagger$ \cite{cao2020unifying} & {\ul 82.7} & {\ul 74.8} & {\ul 89.1} & {\ul 79.4} & 65.0 & {\ul 52.3} & {\ul 77.5} & {\ul 60.8} \\
& RRT$^\dagger$ \cite{tan2021instance} & 82.1 & 71.6 & 88.7 & 77.9 & {\ul 66.0} & 51.3 & 75.2 & 53.5 \\
& CVNet-Rerank & \textbf{87.2} & \textbf{78.9} & \textbf{90.0} & \textbf{81.2} & \textbf{74.5} & \textbf{62.9} & \textbf{79.5} & \textbf{66.0} \\ \midrule
\multirow{3}{*}{400} & GV$^\dagger$ \cite{cao2020unifying} & {\ul 82.5} & {\ul 74.8} & {\ul 89.1} & {\ul 79.5} & 63.8 & {\ul 52.1} & {\ul 77.5} & {\ul 61.1} \\
& RRT$^\dagger$ \cite{tan2021instance} & 81.7 & 71.2 & 88.2 & 75.2 & {\ul 65.2} & 50.4 & 74.8 & 49.9 \\
& CVNet-Rerank & \textbf{87.9} & \textbf{80.7} & \textbf{90.5} & \textbf{82.4} & \textbf{75.6} & \textbf{65.1} & \textbf{80.2} & \textbf{67.3} \\ \bottomrule
\end{tabular}
}
\vspace{-0.2cm}
\caption{\textbf{Comparison with other re-ranking methods.} Geometric Verification (GV) and Reranking Transformers (RRT) are reproduced based on our R50-CVNet-Global. $\dagger$ indicates our reproduction. \# is the number of samples that are re-ranked, and the best and second-best scores are presented as \textbf{boldfaced} and \uline{underlined} text, respectively.}
\label{tbl:Comparison_ohter_reranking_methods}
\vspace{-0.5cm}
\end{table}
\begin{table*}[ht]
\centering
\begin{subtable}{0.5\linewidth}
{
\centering
\resizebox{0.95\columnwidth}{!}
{
\Huge
\begin{tabular}{cccccccccc}
\toprule
\multirow{2}{*}{\#} & \multirow{2}{*}{CSC} & \multicolumn{4}{c}{Medium} & \multicolumn{4}{c}{Hard} \\ \cmidrule(l){3-10}
& & $\mathcal{R}$Oxf & +1M & $\mathcal{R}$Par & +1M & $\mathcal{R}$Oxf & +1M & $\mathcal{R}$Par & +1M \\ \midrule
0 & & 81.0 & 72.6 & 88.8 & 79.0 & 62.1 & 50.2 & 76.5 & 60.2 \\ \midrule
\multirow{2}{*}{100} & & 84.9 & 76.1 & 88.8 & 79.3 & 69.9 & 57.4 & 76.3 & 61.1 \\
& \checkmark & \textbf{86.1} & \textbf{77.6} & \textbf{89.4} & \textbf{79.9} & \textbf{72.8} & \textbf{61.1} & \textbf{78.6} & \textbf{63.9} \\ \midrule
\multirow{2}{*}{200} & & 85.3 & 76.7 & 88.9 & 79.5 & 70.5 & 58.3 & 76.3 & 61.5 \\
& \checkmark & \textbf{87.2} & \textbf{78.9} & \textbf{90.0} & \textbf{81.2} & \textbf{74.5} & \textbf{62.9} & \textbf{79.5} & \textbf{66.0} \\ \midrule
\multirow{2}{*}{400} & & 85.5 & 77.6 & 89.0 & 79.7 & 70.7 & 59.3 & 76.4 & 61.6 \\
& \checkmark & \textbf{87.9} & \textbf{80.7} & \textbf{90.5} & \textbf{82.4} & \textbf{75.6} & \textbf{65.1} & \textbf{80.2} & \textbf{67.3} \\ \bottomrule
\end{tabular}
}
\caption{\textbf{Cross-Scale Correlation.}}
\label{tbl:Cross_Scale_Correlation}
\resizebox{0.95\columnwidth}{!}
{
\Huge
\begin{tabular}{ccccccccccc}
\toprule
\multirow{2}{*}{\#} & \multirow{2}{*}{HNM} & \multirow{2}{*}{HaS} & \multicolumn{4}{c}{Medium} & \multicolumn{4}{c}{Hard} \\ \cmidrule(l){4-11}
& & & $\mathcal{R}$Oxf & +1M & $\mathcal{R}$Par & +1M & $\mathcal{R}$Oxf & +1M & $\mathcal{R}$Par & +1M \\ \midrule
0 & & & 81.0 & 72.6 & 88.8 & 79.0 & 62.1 & 50.2 & 76.5 & 60.2 \\ \midrule
\multirow{3}{*}{100} & & & 81.4 & 72.7 & 88.8 & 79.0 & 62.4 & 50.3 & 76.4 & 60.2 \\
& \checkmark & & 85.8 & 77.5 & 89.3 & 79.9 & 71.6 & 60.5 & 78.1 & 63.7 \\
& \checkmark & \checkmark & \textbf{86.1} & \textbf{77.6} & \textbf{89.4} & \textbf{79.9} & \textbf{72.8} & \textbf{61.1} & \textbf{78.6} & \textbf{63.9} \\ \midrule
\multirow{3}{*}{200} & & & 81.3 & 72.6 & 88.7 & 78.9 & 62.5 & 50.2 & 76.5 & 60.2 \\
& \checkmark & & 86.9 & 78.7 & 89.7 & 81.0 & 73.4 & 62.1 & 78.6 & 65.6 \\
& \checkmark & \checkmark & \textbf{87.2} & \textbf{78.9} & \textbf{90.0} & \textbf{81.2} & \textbf{74.5} & \textbf{62.9} & \textbf{79.5} & \textbf{66.0}\\ \midrule
\multirow{3}{*}{400} & & & 81.2 & 72.5 & 88.8 & 78.9 & 62.5 & 50.2 & 76.9 & 60.4 \\
& \checkmark & & 87.5 &80.3&89.9&82.0&74.2&64.3&78.9&66.4 \\
& \checkmark & \checkmark & \textbf{87.9} & \textbf{80.7} & \textbf{90.5} & \textbf{82.4} & \textbf{75.6} & \textbf{65.1} & \textbf{80.2} & \textbf{67.3} \\ \bottomrule
\end{tabular}
}
\caption{\textbf{Hard Negative Mining (HNM) and Hide-and-Seek (HaS).}}
\label{tbl:Hard_Negative_Mining_and_Hide_and_Seek}
}
\end{subtable}%
\hfill
\begin{subtable}{0.5\linewidth}
{
\centering
\resizebox{0.88\columnwidth}{!}
{
\Huge
\begin{tabular}{cccccccccc}
\toprule
\multirow{2}{*}{$\mathcal{L}_{cls}$} & \multirow{2}{*}{$\mathcal{L}_{con}$} & \multicolumn{4}{c}{Medium} & \multicolumn{4}{c}{Hard} \\ \cmidrule(l){3-10}
& & $\mathcal{R}$Oxf & +1M & $\mathcal{R}$Par & +1M & $\mathcal{R}$Oxf & +1M & $\mathcal{R}$Par & +1M \\ \midrule
\checkmark & & {\ul 78.0} & 69.4 & \textbf{89.8} & {\ul 77.3} & 57.1 & 42.9 & \textbf{78.4} & {\ul 56.9} \\
& \checkmark & 80.1 & \textbf{73.5} & 87.7 & 76.2 & \textbf{62.2} & \textbf{51.9} & 74.0 & 56.4 \\
\checkmark & \checkmark & \textbf{81.0} & {\ul 72.6} & {\ul 88.8} & \textbf{79.0} & {\ul 62.1} & {\ul 50.2} & {\ul 76.5} & \textbf{60.2} \\ \bottomrule
\end{tabular}
}
\caption{\textbf{Loss Comparison of CVNet-Global.}}
\label{tbl:Global_Loss_Comparison}
\resizebox{0.88\columnwidth}{!}
{
\Huge
\begin{tabular}{cccccccccc}
\toprule
\multirow{2}{*}{\#} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}8-bit\\quant\end{tabular}} & \multicolumn{4}{c}{Medium} & \multicolumn{4}{c}{Hard} \\ \cmidrule(l){3-10}
& & $\mathcal{R}$Oxf & +1M & $\mathcal{R}$Par & +1M & $\mathcal{R}$Oxf & +1M & $\mathcal{R}$Par & +1M \\ \midrule
0 & & 81.0 & 72.6 & 88.8 & 79.0 & 62.1 & 50.2 & 76.5 & 60.2 \\ \midrule
\multirow{2}{*}{100} & & \textbf{86.1} & \textbf{77.6} & \textbf{89.4} & \textbf{79.9} & \textbf{72.8} & \textbf{61.1} & \textbf{78.6} & \textbf{63.9} \\
& \checkmark & \textbf{86.1} & \textbf{77.6} & \textbf{89.4} & \textbf{79.9} & \textbf{72.8} & \textbf{61.1} & \textbf{78.6} & \textbf{63.9} \\ \midrule
\multirow{2}{*}{200} & & \textbf{87.2} & \textbf{78.9} & \textbf{90.0} & \textbf{81.2} & \textbf{74.5} & \textbf{62.9} & \textbf{79.5} & \textbf{66.0} \\
& \checkmark & \textbf{87.2} & \textbf{78.9} & \textbf{90.0} & \textbf{81.2} & \textbf{74.5} & 62.8 & \textbf{79.5} & \textbf{66.0} \\ \midrule
\multirow{2}{*}{400} & & \textbf{87.9} & \textbf{80.7} & \textbf{90.5} & \textbf{82.4} & \textbf{75.6} & \textbf{65.1} & \textbf{80.2} & \textbf{67.3} \\
& \checkmark & \textbf{87.9} & 80.6 & \textbf{90.5} & \textbf{82.4} & 75.5 & \textbf{65.1} & \textbf{80.2} & \textbf{67.3} \\ \bottomrule
\end{tabular}
}
\caption{\textbf{8-bit Quantization.}}
\label{tbl:8_bit_quantization}
\resizebox{0.88\columnwidth}{!}
{
\Huge
\begin{tabular}{@{}llccccccclcc@{}}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{Method}} & & \multicolumn{2}{c}{Multi-scale} & & \multicolumn{3}{c}{\begin{tabular}[c]{@{}c@{}}Extraction\\ latency (ms)\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Matching\\ time (ms)\end{tabular} & & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Memory\\ (GB)\end{tabular}} \\ \cmidrule(l){2-12}
\multicolumn{1}{c}{} & & global & local & & global & +local & total & & & $\mathcal{R}$Oxf & $\mathcal{R}$Par \\ \midrule
DELG$^\dagger$ & & 3 & 7 & & 24.0 & 33.1 & 57.1 & 69.0$^*$ & & 4.25 & 5.35 \\
RRT$^\dagger$ & & 3 & 7 & & 24.0 & 33.1 & 57.1 & \textbf{3.2} & & \textbf{2.16} & \textbf{2.72} \\
CVNet & & 3 & \textbf{1} & & 24.0 & \textbf{1.7} & \textbf{25.7} & 15.6 & & 27.02 & 33.55 \\
CVNet$^Q$ & & 3 & \textbf{1} & & 24.0 & \textbf{1.7} & \textbf{25.7} & 15.6 & & 6.88 & 8.52 \\ \bottomrule
\end{tabular}
}
\caption{\textbf{Extraction Latency and Memory Footprint.}}
\label{tbl:Extraction_Latency_and_Memory_Footprint}
}
\end{subtable}
\vspace{-0.2cm}
\caption{\textbf{Ablation study for CVNet.} mAP measures for each setting. \# is the number of samples that are re-ranked.}
\label{tbl:ablation_study}
\end{table*}
\begin{figure*}[t]
\vspace{-0.25cm}
\centering
\includegraphics[width=0.9\linewidth]{figures/CVNet_qualitative.pdf}
\vspace{-0.2cm}
\caption{
Example qualitative results on $\mathcal{R}$Oxf-Hard+1M with R50-CVNet. The upper row shows the global descriptor matching result and the lower row shows the re-ranking result. Correct/incorrect results are marked with \textcolor{green}{green}/\textcolor{red}{red} borders, respectively. The query used as an input is generated by cropping only the part bounded by a green square. A dashed \textcolor{yellow}{yellow} line indicates the areas that overlap with the query.
}
\vspace{-0.5cm}
\label{fig:fig7.qualitative_results}
\end{figure*}
\vspace{-0.3cm}
\section{Discussion}
\label{sec:6.Discussion}
\vspace{-0.15cm}
\paragraph{Qualitative results.}
Examples of our re-ranking results are provided in \cref{fig:fig7.qualitative_results}. Despite technological advances, global descriptor matching is easily fooled by similar-looking negative images and has difficulty finding occluded or truncated positives, even more so at different scales.
Our re-ranking network can respond to scale changes owing to cross-scale correlation and has been trained to be robust in situations involving challenges such as occlusion. Consequently, our re-ranking network shows robust final retrieval results by boosting the ranks of positives even in cases where global descriptors are misjudged.
Additional qualitative results are provided in the supplementary material.
\vspace{-0.45cm}
\paragraph{Limitations and future work.}
Although our proposed re-ranking method has significant potential, it has shortcomings in terms of speed and memory, owing to its dense nature. To address this problem, we apply kernel sparsification, channel reduction, and quantization, which bring speed and memory to an acceptable level, but considerable room for improvement remains. Our future work will aim at further improving speed and memory while preserving the strong performance.
\vspace{-0.15cm}
\section{Conclusion}
\label{sec:7.Conclusion}
\vspace{-0.15cm}
In this study, we propose a novel image retrieval re-ranking network that directly predicts similarity by leveraging dense feature correlation in a convolutional manner.
We design the network to construct cross-scale correlations within a single inference, thereby enabling cross-scale matching instead of expensive multi-scale inferences.
Considering that re-ranking primarily encounters hard samples during testing, we train this network by focusing on hard samples.
With the aforementioned contributions, we achieve state-of-the-art performance on several benchmarks, demonstrating that dense feature correlation is a powerful cue for image retrieval re-ranking.
\vspace{-0.4cm}
\paragraph{Acknowledgements.}
This work was supported by the Industry Core Technology Development Project, 20005062, Development of Artificial Intelligence Robot Autonomous Navigation Technology for Agile Movement in Crowded Space, funded by the Ministry of Trade, Industry \& Energy (MOTIE, Republic of Korea).
{\small
\bibliographystyle{ieee_fullname}
The aim of the present contribution consists in providing a topological proof of the existence of chaotic dynamics for a planar Hamiltonian system considered in different formulations and with different interpretations in the literature, by using the ``Linked Twist Maps'' (from now on, LTMs) technique. More precisely, the model that we are going to analyze has been proposed in two evolutionary game theoretic frameworks
by Antoci et al. in \cite{Anea-16,Anea-18} to describe the dynamic outcomes arising from the interactions between patients and physicians, whose behavior is subject to clinical and legal risks. Namely, both works deal with defensive medicine, i.e., the deviation from good medical practice motivated by the threat of liability claims, investigated e.g. in \cite{KeMC-96,Stea-05,TaBa-78}. In particular, in \cite{Anea-16} the focus is on positive defensive medicine (see for instance \cite{Su-95}), according to which additional services are offered to discourage patients from
filing malpractice claims, or to convince the legal system that the standard of care was
met. On the other hand, the authors in \cite{Anea-18} consider
negative defensive medicine, that is the case in which physicians tend to avoid a source
of legal risk e.g. by adopting safer but less effective treatments (cf. \cite{Fe-12}).
A biological interpretation of the setting considered in \cite{Anea-16,Anea-18}, connected
with intraspecific competition and environmental carrying capacity
in predator-prey models, has been briefly suggested in \cite{Haea-07}, where however the focus is on the celebrated growth-cycle model by Goodwin \cite{Go-67,Go-72}, describing the dynamics of the wage share of output and the employment proportion. Actually, Harvie et al. in \cite{Haea-07} propose a system of differential equations in which each variable has both a positive and a
negative effect on its own growth rate. We will focus just on the latter effect, in order to recover the same formulation considered in \cite{Anea-16,Anea-18}.\\
The LTMs technique, that we are going to use in order to prove the existence of complex dynamics in the above described frameworks, is based on the Stretching Along the Paths (henceforth, SAP) method, i.e., the topological method for the search of fixed points
and periodic points for continuous maps defined on sets homeomorphic to the unit cube in finite
dimensional Euclidean spaces that expand the arcs along one
direction, developed in the planar case in \cite{PaZa-04a,PaZa-04b} and extended to the $N$-dimensional framework
in \cite{PiZa-07}. The context of LTMs represents a geometrical framework in which it is possible to employ the SAP method in order to detect complex dynamics, as first shown in \cite{PaZa-08,PiZa-08}. Further applications of the SAP method to planar LTMs contexts have been provided e.g. in \cite{BuZa-09,BuZa-10,PaZa-09}, while a biological application to a three-dimensional LTMs context has been proposed in \cite{RZZa-14}.\footnote{We also recall the related works \cite{RZZa-15,ZaZa-09}, in which
the existence of complex dynamics has been proved in a 2D and in a 3D continuous-time framework, respectively, by means of the SAP technique, without relying on the LTMs geometry.} Usual assumptions on the twist mappings
(see e.g. \cite{BuEa-80,Pr-83,Pr-86}) concern, among others, their smoothness, preservation of the Lebesgue measure and
monotonicity of the angular speed with respect to the radial coordinate, also in view of checking a hyperbolicity condition, needed in order to ensure the existence of Smale horseshoes (cf. \cite{De-78}). On the other hand, since our approach is purely topological, we just need a twist condition on the boundary of the two linked annuli, similar to the one in the Poincar\'e-Birkhoff fixed point theorem. Namely, by a linked twist map we mean the composition of two twist maps, each acting on one of the two linked annuli, which, in the two-dimensional case, cross along two (or more) planar sets homeomorphic to the unit square, that we call topological rectangles.\\
In our applications of LTMs, we will consider Hamiltonian systems with a nonisochronous center, in which the position of the center or the shape of the orbits vary when modifying one of the model parameters. In particular, we will modify parameters for which it is sensible to assume that they alternate in a periodic fashion between two different values, e.g. due to a seasonal effect. In this manner, we obtain two conservative systems, the original and the perturbed ones, and for each of them we can consider an annulus composed of energy levels. Under suitable conditions on the orbits, depending on the geometric configuration we are dealing with, the two annuli can cross in two or more generalized rectangles, and in that case we call them linked annuli.
In such context, the LTMs technique consists in finding two linked annuli, whose intersection sets contain chaotic sets for the Poincar\'e map obtained as composition of the Poincar\'e maps associated with the original system and the perturbed one.
We stress that the nonisochronicity of the center is crucial in the just described procedure, because it implies that the orbits composing the linked annuli are run with a different speed, so that the Poincar\'e maps produce a twist effect on the annuli.
The consequent deformation produced on the generalized rectangles grows as time passes. Hence, if the switching times between the regimes governed by the original system and by the perturbed one are large enough, those generalized rectangles are transformed by the Poincar\'e maps into spiral-like sets, intersecting several times the same generalized rectangles, and the stretching along the paths property required by the SAP method is fulfilled, guaranteeing the presence of chaotic sets inside the generalized rectangles.\\
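The twist effect just described can be illustrated with a toy map on an annulus in polar coordinates. The angular speed $\omega(r)=1/r$ below is purely hypothetical, standing in for the energy-dependent periods of a nonisochronous center; the point is only that a radial speed gradient winds a radial segment into a spiral whose number of laps grows linearly in time.

```python
import math

def omega(r):
    """Hypothetical angular speed, monotone in the radial coordinate."""
    return 1.0 / r

def twist(point, t):
    """Toy twist map: each circle of radius r is rotated by omega(r)*t."""
    r, theta = point
    return r, theta + omega(r) * t

def winding_gap(r_inner, r_outer, t):
    """Extra full laps completed by the inner boundary circle
    relative to the outer one after time t."""
    return (omega(r_inner) - omega(r_outer)) * t / (2 * math.pi)
```

For the annulus $1\le r\le 2$ and $t=100$, the gap is about $7.96$ laps, so the image of a radial segment is a spiral crossing any fixed angular sector several times; this is the mechanism behind the stretching property when the switching times are large.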
We will use the LTMs technique both when dealing with the evolutionary game theoretic frameworks describing the dynamic effects of positive defensive medicine, proposed in \cite{Anea-16}, and of negative defensive medicine, investigated in \cite{Anea-18}, as well as when considering the context connected with intraspecific competition and environmental carrying capacity in predator-prey models, mentioned in \cite{Haea-07}.
Although all such settings are variants of the same model, due to the dissimilar meaning attached to the parameters in the three analyzed contexts, each time it will be sensible to periodically perturb a different parameter, and this in turn will generate a peculiar geometrical configuration, in which the LTMs method can be applied in a specific manner. Namely, in the frameworks from \cite{Anea-16, Anea-18} we will exploit the periodic dependence on time of the model parameters describing the risk associated with certain medical interventions, whose seasonal variation is empirically grounded, considering seasonality in hypertension (see e.g. \cite{Atea-08,Deea-12})
and its connection with perioperative adverse events (cf. for instance \cite{Hoea-04,Liuea-16}).
Despite the common rationale for assuming a periodic variation on those two parameters in the settings from \cite{Anea-16,Anea-18}, we stress that modifying the former or the latter produces a different effect on the position of the center. Moreover, orbits are run clockwise in the framework in \cite{Anea-16} and counterclockwise in the framework in \cite{Anea-18}. Also this difference will affect the proofs of the corresponding results about LTMs, in which we need to count the laps completed by suitable paths around the centers.
As concerns the biological setting, encompassing logistic terms which take into account intra-species interactions and the role of environmental resources, according e.g. to \cite{BoSt-15,NiGu-76}
it is sensible to assume a seasonal variation both for the carrying capacities and for the
intrinsic growth rates of the two populations. Even if such parameters influence the shape of the orbits, and raising the value of carrying capacities also enlarges the region in which orbits may lie, neither carrying capacities nor intrinsic growth rates affect the center position. Nonetheless, we show that it is still possible to prove the existence of chaotic dynamics for the associated Poincar\'e map via the LTMs technique dealing with a different geometrical configuration for orbits in the phase plane, in agreement with the results obtained in other contexts e.g. in \cite{BuZa-09,PaZa-13}. Regarding the nonisochronicity of the centers, it was proven in \cite{Sc-85,Sc-90} that the period of the orbits increases with the energy level in the settings that we are going to analyze.\footnote{In this respect, we also mention \cite{Maea-16}, where results about the period of small and large cycles have been obtained for a wide class of Hamiltonian systems, encompassing those here considered.} We finally stress that, like it happened e.g. in \cite{PiZa-08}, where the effect produced by a periodic harvesting on the original predator-prey model was investigated, also our results about the existence of chaotic dynamics are robust with respect to small perturbations, in $L^1$ norm, in the coefficients of the considered settings.\\
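The nonisochronicity just invoked can be checked numerically on the classical Lotka-Volterra system with unit coefficients, used here only as a toy stand-in (the systems studied in this paper include additional logistic terms). The sketch below integrates the orbit through $(c/d, y_0)$ with a standard RK4 scheme and times its first return to the section $x=c/d$ in the half-plane $y>a/b$.

```python
import math

def lv_period(y0, a=1.0, b=1.0, c=1.0, d=1.0, dt=1e-3, t_max=50.0):
    """Estimate the period of the Lotka-Volterra orbit through (c/d, y0),
    y0 > a/b, by timing its first return to the section x = c/d
    in the half-plane y > a/b. Classical RK4; purely illustrative."""
    xe, ye = c / d, a / b

    def f(x, y):
        return x * (a - b * y), y * (d * x - c)

    x, y, t = xe, y0, 0.0
    prev = 0.0  # signed distance x - xe at the previous step
    while t < t_max:
        k1x, k1y = f(x, y)
        k2x, k2y = f(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
        k3x, k3y = f(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
        k4x, k4y = f(x + dt * k3x, y + dt * k3y)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        t += dt
        cur = x - xe
        if prev > 0.0 >= cur and y > ye:
            # linear interpolation of the crossing time
            return t - dt * (-cur) / (prev - cur)
        prev = cur
    raise RuntimeError("no return to the section within t_max")
```

With unit coefficients, the small orbit through $(1, 1.2)$ has period close to $2\pi$ (the isochronous limit at the center), while the larger orbit through $(1, 3)$ is markedly slower: the period increases with the energy level, which is precisely the nonisochronicity exploited above.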
The remainder of the paper is organized as follows. In Section \ref{sec-2} we recall the main definitions and results connected with the LTMs method. In Section \ref{sec-3} we introduce the first two versions of the model that we are going to analyze, as presented in \cite{Anea-16,Anea-18}, respectively, and we prove that the Poincar\'e maps associated with them may generate chaotic dynamics via the method of LTMs, looking at the geometrical configurations of their orbits in the phase plane. In Section \ref{sec-4} we illustrate the ecological interpretation given in \cite{Haea-07} of the model considered in Section \ref{sec-3} and we describe an alternative way of applying the LTMs technique.
In Section \ref{sec-5} we briefly discuss our results and conclude.
\section{Recalling the Linked Twist Maps framework}\label{sec-2}
As explained in the Introduction, our theoretical starting point is given by the Stretching Along
the Paths (henceforth, SAP) method, developed in the planar case in \cite{PaZa-04a,PaZa-04b} and extended to the $N$-dimensional framework
in \cite{PiZa-07}. We shall see below that the SAP method is based on the SAP relation in \eqref{sapr}.
The context of ``Linked Twist Maps'' (LTMs, from now on) represents a geometrical framework in which it is possible to employ the SAP method in order to detect complex dynamics, as first shown in \cite{PaZa-08,PiZa-08}. In more detail,
by a linked twist map we mean the composition of two twist maps, each acting on one of two annuli, which, in the two-dimensional case, cross along two (or more) planar sets homeomorphic to the unit square, that we call topological rectangles. We assume that the two maps are homeomorphisms and that, like in the Poincar\'e-Birkhoff fixed point theorem, they produce a twist effect on the boundary of the two annuli, leaving the boundary invariant. However, our approach is purely topological, and for the maps we require neither area-preserving properties nor monotonicity of the twist with respect to the radial coordinate.\\
For brevity's sake, in what follows we will recall just the definitions and the results about the SAP relation that are necessary in view of our planar applications in Sections \ref{sec-3} and \ref{sec-4}. Further details and more general formulations can be found e.g. in \cite{Paea-08,PiZa-08}. Moreover, a three-dimensional LTMs context has been proposed in \cite{RZZa-14}.\\
A \textit{path} in $\mathbb R^2$ is a continuous map $\gamma:
[0,1]\to\mathbb R^2$ and we set $\overline{\gamma}:=\gamma([0,1]).$
By a \textit{generalized rectangle}
we mean a set ${\mathcal R}\subseteq\mathbb R^2$ which is homeomorphic to the
unit square $I^2:=[0,1]^2,$ through a homeomorphism $h: {\mathbb R}^2\supseteq I^2 \to \mathcal R\subseteq \mathbb R^2.$
We also set
$${\mathcal R}^{-}_{l}:= h([x_1 = 0])\,,\quad
{\mathcal R}^{-}_{r}:= h([x_1 = 1])$$ and call them the \textit{left} and
the \textit{right} sides of $\mathcal R,$ respectively.
Setting $\mathcal R^{-}:= \mathcal R^{-}_{l}\cup \mathcal R^{-}_{r},$
we call the pair
$${\widetilde{\mathcal R}}:= (\mathcal R, \mathcal R^-)$$
an {\textit{oriented rectangle}} \textit{of $\mathbb R^2$}.\\
We are now in position to recall the definition of the {\it stretching along the paths} relation for maps between oriented rectangles.\\
Given ${\widetilde{\mathcal A}}:=
({\mathcal A},{\mathcal A}^-)$ and ${\widetilde{\mathcal B}}:=
({\mathcal B},{\mathcal B}^-)$ oriented rectangles of $\mathbb R^2,$ let $F: \mathcal A\to \mathbb R^2$ be a function
and ${\mathcal K}\subseteq {\mathcal A}$
be a compact set. We say that \textit{$({\mathcal K},F)$ stretches
${\widetilde{\mathcal A}}$ to ${\widetilde{\mathcal B}}$ along the
paths}, and write
\begin{equation}\label{sapr}
({\mathcal K},F): {\widetilde{\mathcal A}} \stretchx {\widetilde{\mathcal B}},
\end{equation}
if
\begin{itemize}
\item{} \; $F$ is continuous on ${\mathcal K}\,;$
\vspace{-2mm}
\item{} \; for every path $\gamma: [0,1]\to {\mathcal A}$ with
$\gamma(0)$ and $\gamma(1)$ belonging to different components of ${\mathcal A}^-,$ there exists $[t',t'']\subseteq [0,1]$
such that $\gamma([t',t''])\subseteq {\mathcal K},$ $F(\gamma([t',t'']))\subseteq {\mathcal B},$ with $F(\gamma(t'))$ and
$F(\gamma(t''))$ belonging to different components of ${\mathcal B}^-.$
\end{itemize}
In the special case in which ${\mathcal K}={\mathcal A},$ we simply write $F: {\widetilde{\mathcal A}} \stretchx {\widetilde{\mathcal B}}.$
\smallskip
In our applications of LTMs, we will consider Hamiltonian systems with a nonisochronous center, in which the position of the center or the shape of the orbits vary when modifying one of the model parameters. In particular, we will modify parameters for which it is sensible to assume that they alternate in a periodic fashion between two different values, e.g. due to a seasonal effect. In this manner, we obtain two conservative systems and for each of them we can consider an annulus composed of energy levels. Under suitable conditions on the orbits, depending on the geometric configuration we are dealing with, the two annuli can cross in two or more generalized rectangles, that we orientate by suitably choosing how to name (as left and right) two among the arcs of orbits composing their boundary.
As we shall see in the next sections, the choice depends on the relative position of the generalized rectangles and on whether orbits are run clockwise or counterclockwise. The involved functions are the Poincar\'e maps associated with the two systems, and thus they are homeomorphisms.
The main result that we shall use in Sections \ref{sec-3} and \ref{sec-4} reads as follows:
\begin{theorem}\label{th}
Let $F: \mathbb R^2\supseteq D_{F}\to \mathbb R^2$ and
$G: \mathbb R^2\supseteq D_{G}\to \mathbb R^2$ be continuous maps defined on the sets $D_{F}$ and $D_{G},$ respectively. Let also
${\widetilde{\mathcal A}} := ({\mathcal A},{\mathcal A}^-)$ and
${\widetilde{\mathcal B}} := ({\mathcal B},{\mathcal B}^-)$ be oriented rectangles of $\mathbb R^2.$
Suppose that the following conditions are satisfied:
\begin{itemize}
\item[$\quad (C_F)\;\;$] There are (at least) two disjoint compact sets ${\mathcal H_0},\,{\mathcal H_1}\,\subseteq {\mathcal A}\cap D_{F}$
such that
$\displaystyle{({\mathcal H}_i,F): {\widetilde{\mathcal A}}
\stretchx\, {\widetilde{\mathcal B}}},$ for $i=0,1\,;$
\\
\item[$(C_G)\;\;$] ${\mathcal B}\subseteq D_{G}$ and
$G\,: {\widetilde{\mathcal B}} \stretchx {\widetilde{\mathcal A}}\,.$
\end{itemize}
Then if the map $\Phi:=G\circ F$ is continuous and injective on the set
${\mathcal H}:=({\mathcal H}_0\cup{\mathcal H}_1)\cap{F}^{-1}(\mathcal B),$ setting
\begin{equation}\label{xin}
{X}_{\infty}:=\bigcap_{n=-\infty}^{\infty}\Phi^{-n}(\mathcal H),
\end{equation}
there exists a nonempty compact set
$${X}\subseteq {X}_{\infty} \subseteq {\mathcal H},$$
on which the following properties are fulfilled:
\begin{itemize}
\item[$(i)$] ${X}$
is invariant for $\Phi$ (i.e., $\Phi(X) = X$);
\item[$(ii)$] $\Phi\!\!\restriction_{X}$ is semi-conjugate to the two-sided Bernoulli shift on two symbols, i.e.,
there exists a continuous map $\pi$ from ${X}$ onto $\Sigma_2:=\{0,1\}^{\mathbb Z},$ endowed with the distance
\begin{equation*}
\hat d(\textbf{s}', \textbf{s}'') := \sum_{i\in {\mathbb Z}} \frac{|s'_i - s''_i|}{2^{|i| + 1}}\,,
\end{equation*}
for $\textbf{s}'=(s'_i)_{i\in {\mathbb Z}}$ and
$\textbf{s}''=(s''_i)_{i\in {\mathbb Z}}\in \Sigma_2\,,$
such that the diagram
\begin{equation*}
\begin{diagram}
\node{{X}} \arrow{e,t}{\Phi} \arrow{s,l}{\pi}
\node{{X}} \arrow{s,r}{\pi} \\
\node{\Sigma_2} \arrow{e,b}{\sigma}
\node{\Sigma_2}
\end{diagram}
\end{equation*}
commutes, i.e. $\pi\circ\Phi=\sigma\circ\pi,$ where $\sigma:\Sigma_2\to\Sigma_2$ is the Bernoulli
shift defined as $\sigma((s_i)_i):=(s_{i+1})_i,\,\forall
i\in\mathbb Z\,;$
\item[$(iii)$] the set of the periodic points of $\Phi\restriction_{{X}_{\infty}}$ is dense in ${X}$
and the preimage $\pi^{-1}(\textbf{s})\subseteq {X}$ of
every
$k$-periodic sequence $\textbf{s} = (s_i)_{i\in {\mathbb Z}}\in \Sigma_2$
contains at least one $k$-periodic point.
\end{itemize}
Furthermore, from conclusion $(ii)$ it follows that:
\begin{itemize}
\item[$(iv)$] $$h_{\rm top}(\Phi)\ge h_{\rm top}(\Phi\restriction_{X})\geq h_{\rm top}(\sigma) = \log(2),$$
where $h_{\rm top}$ is the topological entropy;
\item[$(v)$] there exists a compact invariant set $\Lambda\subseteq {X}$ such that $\Phi\vert_{\Lambda}$ is
semi-conjugate to the two-sided Bernoulli shift on two symbols, is topologically transitive and displays sensitive dependence on initial conditions.
\end{itemize}
\end{theorem}
\begin{proof}
The key step consists in showing that
\begin{equation}\label{2}
({\mathcal H}_i\cap F^{-1}(\mathcal B),\Phi): {\widetilde{\mathcal A}}\stretchx {\widetilde{\mathcal A}}\,,\;\;\, i=0,1.
\end{equation}
See Theorem 3.1 in \cite{PaZa-08} for a verification of a more general version of this property.
The desired conclusions then follow by Lemma 3.2, Lemma 3.3 and the discussion after Definition 1.1 in \cite{PiZa-08}.
\end{proof}
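The metric $\hat d$ appearing in conclusion $(ii)$ can be experimented with numerically. The following sketch (our own illustration: sequences are encoded as functions ${\mathbb Z}\to\{0,1\}$ and the series is truncated at $|i|\le N$) evaluates truncated distances and checks that the $2$-periodic sequence $\dots 0101\dots$ is a period-$2$ point of the shift $\sigma$:

```python
# Truncated evaluation of the metric d-hat on Sigma_2 = {0,1}^Z; the tail
# of the series beyond |i| = N is bounded by 2**(-N), so N = 60 suffices.
def dist(s1, s2, N=60):
    return sum(abs(s1(i) - s2(i)) / 2**(abs(i) + 1) for i in range(-N, N + 1))

shift = lambda s: (lambda i: s(i + 1))   # two-sided Bernoulli shift sigma

s = lambda i: i % 2                      # the 2-periodic sequence ...010101...
t = shift(s)                             # its image under sigma: ...101010...

assert dist(s, s) == 0                   # d-hat vanishes on the diagonal
assert dist(shift(t), s) == 0            # sigma^2(s) = s: a 2-periodic point
# s and t differ at every index, so d-hat(s, t) = sum_i 2**(-|i|-1) = 3/2:
assert abs(dist(s, t) - 1.5) < 1e-9
```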
\newpage
\noindent
Recalling:
\begin{itemize}
\item the definition of a map inducing chaotic dynamics on two symbols in a subset $\mathcal D$ of its domain (cf. Definition 2.2 in \cite{Meea-09}), according to which every two-sided sequence on two symbols - say $0$ and $1$ - is realized through the iterates of the map, jumping between two disjoint compact subsets - say $\mathcal K_0$ and $\mathcal K_1$ - of $\mathcal D,$ and periodic sequences of symbols are reproduced by periodic orbits of the map;
\item Theorem 2.3 in \cite{Meea-09}, which states that whenever the stretching relation \eqref{sapr} is fulfilled for a continuous function with respect to two disjoint compact sets, that function induces chaotic dynamics on two symbols\footnote{Namely, as explained in \cite{Meea-09}, the SAP method allows one to prove the presence of chaotic dynamics for a continuous map by verifying the validity of the SAP relation in \eqref{sapr} with respect to two suitable disjoint compact subsets of its domain.},
\end{itemize}
we can conclude that, when conditions $(C_F)$ and $(C_G)$ in Theorem \ref{th} are satisfied, the composite function $G\circ F$ induces chaotic dynamics on two symbols in $\mathcal A,$ since condition \eqref{2} is fulfilled. On the other hand, according to Theorem 2.2 in \cite{Meea-09}, if an injective map induces chaotic dynamics on two symbols in a set, then it possesses all the features listed in Theorem \ref{th}.\footnote{If the map $\Phi$ were not injective, rather than ${X}_{\infty}$ in \eqref{xin} we should introduce ${X}_{\infty}^{+}:=\bigcap_{n=0}^{\infty}\Phi^{-n}(\mathcal H)$ and then the properties listed in Theorem \ref{th} would hold true replacing $\Sigma_2:=\{0,1\}^{\mathbb Z}$ with $\Sigma_2^{+}:=\{0,1\}^{\mathbb N}.$ This means, in particular, that we could derive a weaker conclusion than $(ii)$ in Theorem \ref{th}, establishing a semi-conjugacy between the map $\Phi$ and the one-sided Bernoulli shift $\sigma^{+}((s_i)_i):=(s_{i+1})_i,\,\forall i\in\mathbb N\,,$ rather than with the two-sided Bernoulli shift. See Theorem 2.2 in \cite{Meea-09} for the precise statement and for a proof. More generally, we refer the interested reader to \cite{Meea-09,PiZa-08} for further details, as well as for a discussion on the various notions of chaos used in the literature and on the relationships among them.}\\
Hence, we can summarize the statement of Theorem \ref{th} by saying that, when conditions $(C_F)$ and $(C_G)$ therein are satisfied, the composite function $G\circ F$ induces chaotic dynamics on two symbols in $\mathcal A,$ knowing that from this fact it follows that
all the properties listed in Theorem \ref{th} are fulfilled for $G\circ F,$ also in regard to the existence of periodic points. We will use this reformulation of Theorem \ref{th} in Sections \ref{sec-3} and \ref{sec-4} when dealing with the composition of Poincar\'e maps associated with different systems (cf. e.g. the statement of Theorem \ref{app18}). We recall that a classical approach - see \cite{Kr-68} - to show the existence of periodic
solutions (harmonics or subharmonics) of systems of first order ODEs with periodic coefficients, under the
assumption of uniqueness of the solutions for the Cauchy problems, is based on
the search for fixed points or periodic points of the associated Poincar\'e map.\\
We close the present preliminary section by completing the explanation, started just before Theorem \ref{th}, of what the LTMs method consists in, referring the reader to Subsections \ref{31} and \ref{32} and to Section \ref{sec-4} for some concrete applications of it, as well as for a few graphical illustrations of possible geometrical configurations related to such framework (see in particular Figures \ref{nef}--\ref{biofk}, connected with the numerical examples). Given a Hamiltonian system with a nonisochronous center, we perturb it, guided by the economic or biological interpretation of the considered context, by modifying a parameter for which it is sensible to assume a periodic alternation between two different values. The LTMs method consists in proving the presence of chaotic dynamics for the Poincar\'e map obtained as composition of the Poincar\'e maps associated with the original system and the perturbed one,
by finding two linked annuli, each composed of orbits of one of the two systems, to which it is possible to apply Theorem \ref{th}. Notice that it would not be possible to find two linked annuli composed of orbits of the same Hamiltonian system. More precisely, the existence of chaos is shown by applying Theorem \ref{th}, i.e., by checking that conditions $(C_F)$ and $(C_G)$ therein are satisfied when taking as $F$ and $G$ the Poincar\'e maps associated with the two systems, and by choosing as $\mathcal A$ and $\mathcal B$ two among the generalized rectangles in which the considered linked annuli cross.\footnote{Namely, in the frameworks considered in Subsections \ref{31} and \ref{32} the linked annuli cross in two generalized rectangles (see Figures \ref{nef} and \ref{16-la}), while in Section \ref{sec-4} we find four generalized rectangles as intersection sets between two linked annuli (see Figures \ref{bioer} and \ref{bioek}). Indeed, the precise definition of linked annuli may vary according to the geometrical configuration of the considered framework (cf. Definitions \ref{li} and \ref{lim}).} We stress that the nonisochronicity of the center is crucial in the above described procedure, because it implies that the orbits composing the linked annuli are run at different speeds, so that the Poincar\'e maps produce a twist effect on the linked annuli, although closed orbits are invariant under the action of the Poincar\'e maps.
The consequent deformation produced on the generalized rectangles grows with the passing of time. Hence, if the switching times between the regimes governed by one of the two systems are large enough, those generalized rectangles are transformed by the Poincar\'e maps into spiral-like sets, intersecting the same generalized rectangles several times, allowing us to check the stretching relations in $(C_F)$ and $(C_G),$
so that the presence of complex dynamics is guaranteed by Theorem \ref{th}.\\
This is the methodology that we are going to use in Section \ref{sec-3}, dealing with the evolutionary game theoretic frameworks describing the dynamic effects of positive defensive medicine, proposed in \cite{Anea-16}, and of negative defensive medicine, investigated in \cite{Anea-18}, as well as in Section \ref{sec-4}, where we consider a context introduced in \cite{Haea-07} and connected with intraspecific competition and environmental carrying capacity in predator-prey models.
We stress that all the frameworks that we will consider along the manuscript are variants of the same model. However, due to the different meaning attached to the parameters in the analyzed contexts, each time it will be sensible to periodically perturb a different parameter, and this in turn will generate a peculiar geometrical configuration, in which the LTMs method can be applied in a specific manner.
\section{Two medical malpractice litigation frameworks}\label{sec-3}
In the present section we introduce the settings considered in \cite{Anea-16,Anea-18} and we explain how to prove the existence for each of them of chaotic dynamics via the method of LTMs. Due to the large number of parameters involved in the models, we will just describe the strictly needed aspects, referring the interested reader to \cite{Anea-16,Anea-18} for further details.\\
Both works deal with defensive medicine, i.e., the deviation from good medical practice motivated by the
threat of liability claims, investigated e.g. in \cite{KeMC-96,Stea-05,TaBa-78}. In particular, Antoci et al. in \cite{Anea-16} propose an evolutionary game theoretic model to investigate the dynamic effects of positive defensive medicine, in which, following e.g. \cite{Su-95}, additional services are offered to discourage patients from
filing malpractice claims, or to convince the legal system that the standard of care was
met. On the other hand, still in an evolutionary game theoretic model, the authors in \cite{Anea-18} consider
negative defensive medicine, that is the case in which physicians tend to avoid a source
of legal risk e.g. by adopting safer but less effective treatments (cf. \cite{Fe-12}). An important example of this phenomenon, analyzed in
\cite{Duea-99, Duea-01,Loea-93}, is given by the excessive number of Cesarean sections, which can often be unnecessary. We stress that an extension of the frameworks considered in \cite{Anea-16,Anea-18} has been proposed in \cite{Anea-19}, where it is assumed that physicians have the possibility of insuring against liability claims, so that the system becomes four-dimensional, with the additional variables being
the share of the population of physicians who buy a malpractice insurance and the cost of the insurance policy.\footnote{Under the assumption
that the insurance company has perfect foresight about agents' behavior and it is able to instantaneously adjust the policy premium to its equilibrium value, the model in \cite{Anea-19} becomes three-dimensional, as discussed in Section 5 therein.}\\
In more detail, in \cite{Anea-16} physicians are randomly paired with patients
and provide them with a risky medical treatment; if an adverse event occurs, patients can choose whether or not to sue their physician
for medical malpractice, while physicians can decide whether or not to practice
defensive medicine. In the former case physicians perform unnecessary
diagnostic and therapeutic interventions, possibly harmful to patients, in order to prevent malpractice charges.
In symbols, Antoci et al. in \cite{Anea-16} obtain the following system:
\begin{equation}\label{16}
\left\{
\begin{array}{ll}
d\,'=d(1-d)\left(p\,\ell(E_{ND}-E_D)-C_D+C_{ND}\right)\\
\vspace{-2mm}\\
\ell\,'=\ell(1-\ell)p\left(d(E_D-E_{ND})+E_{ND}-C_L\right)
\end{array}
\right.
\end{equation}
where $d(t)\in[0,1]$ represents the share of physicians
playing strategy $D$ (practice defensive medicine) and $\ell(t)\in[0,1]$ is the share of patients
playing strategy $L$ (litigate in case that an adverse event occurs) at time $t,$ and $d'(t),\,\ell'(t)$ are the corresponding time derivatives. In this manner $1-d(t)\in[0,1]$ represents the share of physicians
playing strategy $ND$ (not practice defensive medicine) and $1-\ell(t)\in[0,1]$ is the share of patients
playing strategy $NL$ (not litigate in case of an adverse event) at time $t.$ The probability that an adverse event occurs is described by $p\in (0,1)$ and it is not affected by the choice of the physician to practice or not defensive medicine.
In the case of litigation, if the physician practiced defensive medicine, the patient wins with probability $q_D\in(0,1)$, while the physician wins with probability $1-q_D;$ if instead the physician did not practice defensive medicine, the patient wins with probability
$q_{ND}\in(0,1),$ while the physician wins with probability $1-q_{ND}.$ It is assumed that $q_D<q_{ND},$ i.e., defensive medicine decreases the probability for physicians of losing an eventual litigation. Calling $R>0$ the damage suffered by a patient in case that an adverse event occurs, coinciding with the compensation he/she receives from the physician if winning the litigation, and $K>0$ the sum that the patient, if losing the litigation, pays to the physician as reparation for the legal and reputation losses, it holds that
$E_D=q_D R -(1-q_D)K$ and $E_{ND}=q_{ND} R -(1-q_{ND})K$ are the expected settlement of the litigation when the physician practiced
defensive medicine or not, respectively, with $E_{ND}>E_D;$ practicing defensive medicine costs the physician $C_D,$ while not practicing defensive medicine costs the physician $C_{ND},$ with $C_D>C_{ND}\ge 0;$ finally, $C_L>0$ is the cost faced by the patient to sue the physician for medical malpractice.\\
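To fix ideas, the following sketch (all parameter values are our own illustrative assumptions, not taken from \cite{Anea-16}) evaluates the expected settlements $E_D$ and $E_{ND}$ and verifies that the interior equilibrium of \eqref{16}, obtained by annihilating the two parentheses, makes the right-hand side vanish:

```python
# Illustrative check of system (16): with q_D < q_ND one gets E_ND > E_D,
# and the interior equilibrium annihilates the right-hand side.  All the
# parameter values below are assumptions chosen for this sketch.
p, q_D, q_ND = 0.3, 0.4, 0.7
R, K = 10.0, 2.0
C_D, C_ND, C_L = 1.5, 0.5, 4.0

E_D  = q_D*R  - (1 - q_D)*K    # expected settlement, defensive medicine
E_ND = q_ND*R - (1 - q_ND)*K   # expected settlement, no defensive medicine
assert E_ND > E_D              # guaranteed by q_D < q_ND

def rhs16(d, l):
    dd = d*(1 - d)*(p*l*(E_ND - E_D) - C_D + C_ND)
    dl = l*(1 - l)*p*(d*(E_D - E_ND) + E_ND - C_L)
    return dd, dl

d_star = (E_ND - C_L)/(E_ND - E_D)        # zero of the second parenthesis
l_star = (C_D - C_ND)/(p*(E_ND - E_D))    # zero of the first parenthesis
assert 0 < d_star < 1 and 0 < l_star < 1  # the equilibrium is interior
assert all(abs(c) < 1e-12 for c in rhs16(d_star, l_star))
```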
As concerns \cite{Anea-18}, the system obtained therein is given by:
\begin{equation}\label{18}
\left\{
\begin{array}{ll}
d\,'=d(1-d)\left(B^{PH}-P\,E\,\ell\right)\\
\vspace{-2mm}\\
\ell\,'=\ell(1-\ell)\left(-q_{ND}(C_L-p_{ND}E)+\left((q_{ND}-q_{D})C_L+P\,E\right)d\right)
\end{array}
\right.
\end{equation}
where $d(t),\,\ell(t)\in[0,1]$ and $d'(t),\,\ell'(t)$ have the same meaning as in \eqref{16}. However, this time playing strategy $D$ (practice defensive medicine) means that the physician opts for an inferior but safer treatment, while the physician plays strategy $ND$ when he/she chooses to provide the superior, but riskier, treatment to the patient. Treatments $D$ and $ND$ produce, in addition to sure benefits, an uncertain harm to the patient, which can occur with exogenous probabilities $q_D$ and $q_{ND},$ where $0<q_D<q_{ND}<1;$ the patient, when suffering the harm, can decide to sue the physician for medical malpractice, at a cost $C_L>0;$ parameters
$p_{D},\,p_{ND}\in[0,1]$ describe the exogenous probabilities with which the court will order the physician to pay compensation for having provided the inferior, but safer, treatment or the
superior, but riskier, treatment to the patient, respectively, with $0\le p_{ND}\le p_D\le 1.$
Parameter $P:=p_Dq_D-p_{ND}q_{ND}$ represents the increase (or decrease, if negative) in the ex ante probability of being condemned for a physician providing the inferior treatment $D$ to a litigious patient; $E>0$ is the compensation that the patient obtains, if winning the lawsuit, from the losing physician; $B^{PH}$ represents the physician's additional immediate benefit (or cost, if negative) of practicing defensive medicine.
Both Systems \eqref{16} and \eqref{18} are encompassed in the following general formulation
\begin{equation}\label{zan}
\left\{
\begin{array}{ll}
u\,'=u(1-u)(\sigma v-\chi)\\
\vspace{-2mm}\\
v\,'=-v(1-v)(\xi u-\phi)
\end{array}
\right.
\end{equation}
with $u(t),\,v(t)\in[0,1]$ and $\sigma,\,\chi,\,\xi,\,\phi\in\mathbb R.$
In particular, according to \cite{HoSi-88}, if $0<\chi<\sigma$ and $0<\phi<\xi,$ all orbits are closed and periodic, surrounding the unique internal equilibrium $S=\left(\frac{\phi}{\xi},\frac{\chi}{\sigma}\right),$ which is a center. In view of finding a connection with the LTMs framework introduced in Section \ref{sec-2}, in what follows we will focus just on the latter
scenario, which also describes a battle of the sexes game with two players and two strategies, in which neither of the two strategies dominates the other. We stress that, due to the conditions imposed on the sign of the parameters in \cite{Anea-16,Anea-18}, it holds that \eqref{16} corresponds to \eqref{zan} when identifying $d$ with $u$ and $\ell$ with $v,$ while \eqref{18} corresponds to \eqref{zan} when identifying $d$ with $v$ and $\ell$ with $u.$ As we shall see in more detail in Subsections \ref{31} and \ref{32}, this difference between \eqref{16} and \eqref{18} implies that orbits are run clockwise in the former framework and counterclockwise in the latter.\\
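A quick numerical sanity check on \eqref{zan} can be carried out as follows. In this sketch we take the sample values $\sigma=\xi=2,$ $\chi=\phi=1$ (our own assumption, satisfying $0<\chi<\sigma$ and $0<\phi<\xi$) and verify that $S=(\phi/\xi,\chi/\sigma)$ is an equilibrium whose linearization has zero trace and positive determinant, i.e., purely imaginary eigenvalues, as expected at a center:

```python
# Finite-difference check that S = (phi/xi, chi/sigma) is an equilibrium of
# (zan) with a center-type linearization, for sample parameter values
# sigma = xi = 2, chi = phi = 1 (an assumption for this sketch).
sigma, chi, xi, phi = 2.0, 1.0, 2.0, 1.0

def f(u, v):
    return (u*(1 - u)*(sigma*v - chi), -v*(1 - v)*(xi*u - phi))

S = (phi/xi, chi/sigma)
assert max(abs(c) for c in f(*S)) < 1e-12   # S is an equilibrium

def jac(u, v, h=1e-6):                      # central finite differences
    fu_p, fu_m = f(u + h, v), f(u - h, v)
    fv_p, fv_m = f(u, v + h), f(u, v - h)
    return [[(fu_p[0] - fu_m[0])/(2*h), (fv_p[0] - fv_m[0])/(2*h)],
            [(fu_p[1] - fu_m[1])/(2*h), (fv_p[1] - fv_m[1])/(2*h)]]

J = jac(*S)
trace = J[0][0] + J[1][1]
det = J[0][0]*J[1][1] - J[0][1]*J[1][0]
assert abs(trace) < 1e-6 and det > 0.1      # purely imaginary eigenvalues
```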
In order to use some results on the orbits obtained in \cite{Maea-16}, we need to check that System \eqref{zan} fulfills the conditions on page 778 therein. Namely, setting $f(u)=1-u,\,\psi(v)=\sigma v-\chi,\,g(v)=1-v,\,\varphi(u)=\xi u-\phi,$ it holds that
$f,\,g:(0,1)\to (0,+\infty)$ are continuous functions and that $\varphi,\,\psi:(0,1)\to\mathbb R$ are $\mathcal C^1$ maps with positive derivative on $(0,1),$ satisfying
$$
\begin{array}{ll}
\lim_{u\to 0^{+}}\varphi(u)=-\phi<0,\,\,& \lim_{v\to 0^{+}}\psi(v)=-\chi<0,\\
\vspace{-2mm}\\
\lim_{u\to 1^{-}}\varphi(u)=\xi-\phi>0,\,\,& \lim_{v\to 1^{-}}\psi(v)=\sigma-\chi>0
\end{array}
$$
under the assumption that the system admits a center in $S.$ Moreover, setting $A(u)=\int\frac{\varphi(u)}{uf(u)}\,du=\int\frac{\xi u-\phi}{u(1-u)}\,du$ and
$B(v)=\int\frac{\psi(v)}{vg(v)}\,dv=\int\frac{\sigma v-\chi}{v(1-v)}\,dv,$ it holds that
$$\lim_{u\to 0^{+}}A(u)=\lim_{u\to 1^{-}}A(u)=\lim_{v\to 0^{+}}B(v)=\lim_{v\to 1^{-}}B(v)=+\infty.$$
Indeed, $A(u)=-\phi\log(u)+(\phi-\xi)\log(1-u)+k_1$ and $B(v)=-\chi\log(v)+(\chi-\sigma)\log(1-v)+k_2,$ with $k_1,\,k_2\in\mathbb R.$
Hence, System \eqref{zan} admits $H(u,v)=A(u)+B(v)$ as first integral having $S$ as minimum point and, according to the results obtained in \cite{Maea-16},
all its solutions are periodic and describe closed orbits contained in the (open) unit square. Due to the wide class of Hamiltonian systems considered in \cite{Maea-16}, the authors do not provide a proof of the monotonicity of the period of the orbits, but show some results about the period of small and large cycles.\footnote{Namely, Madotto et al. in \cite{Maea-16} prove, on the one hand, that the approximation of the period length of the small cycles by means of the period of the linearized system is valid near the equilibrium point and, on the other hand, that the period length of large cycles, approaching the boundary of the feasible set, is arbitrarily high, in the case that $f$ and $g$ are for instance $\mathcal C^1$ functions on the open interval $(0,1),$ which are moreover continuous at $0,$ as happens in our framework.} Nonetheless, the proof of the increasing monotonicity of the period of the orbits with the energy level for System \eqref{zan} can be found in the works \cite{Sc-85,Sc-90} by Renate Schaaf (see $(4)$ and the corresponding comments on page 97 in \cite{Sc-85} and Example 2.4.3 on page 64 in \cite{Sc-90}). We will rely on this result in our application of the method of the LTMs, summarized in Section \ref{sec-2}, to System \eqref{zan}, and more precisely to \eqref{16} and \eqref{18}.\\
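The explicit antiderivatives can be double-checked numerically. In the sketch below (with the sample values $\phi=1,$ $\xi=2$ and $k_1=0$, our own assumption), a central finite difference of $A(u)$ reproduces its integrand $\frac{\xi u-\phi}{u(1-u)},$ and $A$ blows up at both endpoints of $(0,1)$; $B(v)$ behaves in exactly the same way with $(\chi,\sigma)$ in place of $(\phi,\xi)$:

```python
import math

# Check of the explicit antiderivative A(u) = -phi*log(u)+(phi-xi)*log(1-u):
# its finite-difference derivative matches (xi*u - phi)/(u*(1-u)), and it
# diverges at 0+ and 1-.  Sample values phi = 1, xi = 2 (an assumption).
phi, xi = 1.0, 2.0
A = lambda u: -phi*math.log(u) + (phi - xi)*math.log(1 - u)
Ap = lambda u: (xi*u - phi)/(u*(1 - u))     # the integrand of A

h = 1e-6
for u in (0.1, 0.25, 0.5, 0.9):
    fd = (A(u + h) - A(u - h))/(2*h)        # central finite difference
    assert abs(fd - Ap(u)) < 1e-4

assert A(1e-9) > 20 and A(1 - 1e-9) > 20    # divergence at the endpoints
```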
However, in order to apply the LTMs technique, even before checking the twist condition on the boundary of the considered linked annuli as a consequence of the nonisochronicity of the centers, we need to guarantee that linked together annuli do exist. To such aim, since annuli are for us composed of energy levels, we may assume that at least one of the model parameters varies in a periodic fashion, e.g. due to a seasonal effect, so that we obtain two conservative systems, the original and the perturbed ones, and for each of them we can consider an annulus: under suitable conditions on the orbits, the two annuli cross in two or more generalized rectangles, thus resulting linked together.
In particular, we will make the periodic variation assumption on parameter $p$ in System \eqref{16} and on $q_{ND}$ in System \eqref{18}.
Namely, we recall that $p\in (0,1)$ describes the probability that an adverse event for the patient occurs in the positive defensive medicine framework, and it is not affected by the choice of the physician to practice or not defensive medicine; assuming instead negative defensive medicine, $q_{ND}$ is the probability that a harm occurs when the physician chooses to provide the superior, but riskier, treatment to the patient. Two facts are well-known and largely documented in the medical literature: that hypertension phenomena rise in winter and fall in summer in countries both north and south of the equator (see \cite{Atea-08,Deea-12,Fa-13,Siea-10}) and that hypertension increases e.g. perioperative cardiac risks, as well as the incidence of perioperative major adverse cardiovascular and cerebrovascular events (cf. for instance \cite{Hoea-04,Liuea-16}).
In the case of positive defensive medicine, we can suppose that the physician will opt for a surgical intervention any time it is needed, while in the case of negative defensive medicine we can assume that surgery coincides with the superior, but riskier, treatment ($ND$) in comparison for instance with a less effective, but safer, pharmacological therapy ($D$). Hence, due to the seasonality in hypertension and its connection with perioperative adverse events, we can assume that both $p$ in \eqref{16} and
$q_{ND}$ in \eqref{18} alternate in a periodic fashion between a low and a high value.\\
Despite the similar meaning of the parameters $p$ and $q_{ND}$ and, consequently, despite the common rationale for assuming a periodic variation on them, we stress that modifying the former or the latter produces a different effect on the position of the center. Namely, as we shall see in Subsection \ref{31}, an increase in parameter $q_{ND}$ in \eqref{18} will generally affect the position of both components of the center, while in Subsection \ref{32} we will find that raising $p$ in \eqref{16} affects just the ordinate of the center. Moreover, as mentioned above, orbits are run clockwise for System \eqref{16} and counterclockwise for System \eqref{18}. Since in the proofs of our results about LTMs (cf. Theorems \ref{app18} and \ref{app16}) we need to introduce the rotation number and count the laps completed by suitable paths around the center, those computations turn out to be more intuitive when orbits are run counterclockwise. For such reason, we start focusing on the framework from \cite{Anea-18} in Subsection \ref{31}, turning to the analysis of the setting in \cite{Anea-16} in Subsection \ref{32}.
\subsection{Analysis of the framework in Antoci et al. (2018)}\label{31}
Recalling that $P:=p_Dq_D-p_{ND}q_{ND},$ when setting
$\zeta=B^{PH},\,\eta=P\,E=(p_Dq_D-p_{ND}q_{ND})E,\,\vartheta=q_{ND}(C_L-p_{ND}E),\,\kappa=(q_{ND}-q_{D})\,C_L+P\,E=q_{ND}(C_L-p_{ND}E)+q_D(p_{D}E-C_L),$
System \eqref{18} becomes
\begin{equation}\label{18b}
\left\{
\begin{array}{ll}
d\,'=d(1-d)\left(\zeta-\eta\,\ell\right)\\
\vspace{-2mm}\\
\ell\,'=\ell(1-\ell)\left(-\vartheta+\kappa d\right)
\end{array}
\right.
\end{equation}
with $d(t),\,\ell(t)\in[0,1].$ Focusing on the case $0<\zeta<\eta$ and $0<\vartheta<\kappa,$ the center is given by
$S=\left(\frac{\vartheta}{\kappa},\frac{\zeta}{\eta}\right)$ and it is immediate to see that if $q_{ND}$ increases, reaching a higher value $\widehat{q}_{ND},$ then $\zeta$ does not vary, while $\eta$ falls and $\vartheta,\,\kappa$ rise. Calling $\widehat\eta,\,\widehat\vartheta,\,\widehat\kappa$ the new parameter values and $V=\left(\frac{\widehat\vartheta}{\widehat\kappa},\frac{\zeta}{\widehat\eta}\right)$ the new equilibrium, it is a global center of the perturbed system
\begin{equation}\label{18p}
\left\{
\begin{array}{ll}
d\,'=d(1-d)\left(\zeta-\widehat\eta\,\ell\right)\\
\vspace{-2mm}\\
\ell\,'=\ell(1-\ell)\left(-\widehat\vartheta+\widehat\kappa d\right)
\end{array}
\right.
\end{equation}
as long as $0<\zeta<\widehat\eta$ and $0<\widehat\vartheta<\widehat\kappa.$
In particular, due to the increase in $q_{ND},$ the ordinate of $V$ will surely be larger than that of $S,$ and a simple computation shows that the abscissa of $V$ is always larger than that of $S,$ too. Namely, this is the case illustrated in
Figures \ref{la}--\ref{sapf}, which should help the reader to better understand Definition \ref{li} of linked annuli and to follow the proof of Theorem \ref{app18}.\\
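The displacement of the center can be checked numerically on sample data. In the following sketch (all parameter values are our own illustrative assumptions, chosen so that the center conditions hold before and after the perturbation), raising $q_{ND}$ indeed increases both coordinates of the equilibrium:

```python
# Check (with assumed sample parameters) that an increase of q_ND moves
# both coordinates of the center upward: V > S componentwise.
p_D, p_ND, E, C_L, B_PH, q_D = 0.8, 0.2, 10.0, 4.0, 1.0, 0.3

def center(q_ND):
    zeta  = B_PH
    eta   = (p_D*q_D - p_ND*q_ND)*E
    theta = q_ND*(C_L - p_ND*E)
    kappa = theta + q_D*(p_D*E - C_L)       # = (q_ND - q_D)*C_L + P*E
    assert 0 < zeta < eta and 0 < theta < kappa   # center conditions
    return (theta/kappa, zeta/eta)

S = center(0.5)
V = center(0.6)                             # q_ND raised
assert V[0] > S[0] and V[1] > S[1]          # both coordinates grow
```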
Let us start by precisely defining what two linked annuli are in the present framework.
To such aim we notice that, by the results recalled after System \eqref{zan}, the orbits for System \eqref{18b} surrounding $S$ have equation
\begin{equation}\label{hs-18}
H_S(d,\ell)=-\vartheta\log(d)+(\vartheta-\kappa)\log(1-d)-\zeta\log(\ell)+(\zeta-\eta)\log(1-\ell)=e,
\end{equation}
for some $e\ge e_0,$ where $e_0:=-\vartheta\log(\frac{\vartheta}{\kappa})+(\vartheta-\kappa)\log(1-\frac{\vartheta}{\kappa})-\zeta\log(\frac{\zeta}{\eta})+(\zeta-\eta)\log(1-\frac{\zeta}{\eta})$ is the minimum ``energy'' level attained by $H_S(d,\ell)$
on the unit square. Indeed, by the previously recalled results from \cite{Maea-16}, $H_S$ admits $S$ as minimum point.
Setting $\Gamma_S(e)=\{(d,\ell)\in (0,1)^2:H_S(d,\ell)=e\},$ for $e>e_0,$ this is a simple closed curve surrounding $S.$ We call {\it annulus around} $S$ any set $\mathcal C_S(e_1,e_2)=\{(d,\ell)\in (0,1)^2:e_1\le H_S(d,\ell)\le e_2\}$ with $e_0<e_1<e_2,$ so that the inner boundary of
$\mathcal C_S(e_1,e_2)$ coincides with $\Gamma_S(e_1)$ and the outer boundary coincides with $\Gamma_S(e_2).$\\
Similarly, the orbits for System \eqref{18p} surrounding $V$ have equation
\begin{equation}\label{hv-18}
H_V(d,\ell)=-\widehat\vartheta\log(d)+(\widehat\vartheta-\widehat\kappa)\log(1-d)-\zeta\log(\ell)+(\zeta-\widehat\eta)\log(1-\ell)=h,
\end{equation}
for some $h\ge h_0,$ where $h_0:=H_V(V)$ is the minimum ``energy'' level attained by $H_V(d,\ell)$
on the unit square. The sets $\Gamma_V(h)=\{(d,\ell)\in (0,1)^2:H_V(d,\ell)=h\},$ are simple closed curves surrounding $V$ for any $h>h_0,$
and we call {\it annulus around} $V$ any set $\mathcal C_V(h_1,h_2)=\{(d,\ell)\in (0,1)^2:h_1\le H_V(d,\ell)\le h_2\}$ with
$h_0<h_1<h_2,$ whose inner boundary coincides with $\Gamma_V(h_1)$ and whose outer boundary coincides with $\Gamma_V(h_2).$\\
Let us consider the straight line $r$ joining $S$ and $V$ and define on it the ordering inherited from the horizontal axis, so that given $P=(d_P,\ell_P)$ and $Q=(d_Q,\ell_Q)$ belonging to $r$ it holds that $P\vartriangleleft\, Q$ (resp. $P\trianglelefteq\, Q$) if and only if $d_P<d_Q$ (resp. $d_P\le d_Q$). We are now in a position to introduce the following:
\begin{definition}\label{li}
Given the annulus $\mathcal C_S(e_1,e_2)$ around $S$ and the annulus $\mathcal C_V(h_1,h_2)$ around $V,$ we say that they are linked together if
$$S_{2,-}\vartriangleleft\, S_{1,-}\trianglelefteq\, V_{2,-}\vartriangleleft\, V_{1,-}\trianglelefteq\, S_{1,+}\vartriangleleft\, S_{2,+}\trianglelefteq\, V_{1,+}\vartriangleleft\, V_{2,+}$$
where, for $i\in\{1,2\},$ $S_{i,-}$ and $S_{i,+}$ denote the intersection points\footnote{We stress that, for $e_i>e_0$ and $h_i>h_0,\,i\in\{1,2\},$ the boundary sets $\Gamma_S(e_i)$ and $\Gamma_V(h_i)$ intersect the straight line $r$ in exactly two points because
$\{(d,\ell)\in (0,1)^2:H_S(d,\ell)\le e\}$ and $\{(d,\ell)\in (0,1)^2:H_V(d,\ell)\le h\},$ coinciding with the lower contour sets of the convex functions $H_S$ in \eqref{hs-18} and $H_V$ in \eqref{hv-18}, are star-shaped for all $e>e_0$ and for every $h>h_0,$ respectively. We will need the star-shapedness of those lower contour sets along the proof of Theorem \ref{app18}, too.}
between $\Gamma_S(e_i)$ and the straight line $r,$ with
$S_{i,-}\vartriangleleft\, S\vartriangleleft\, S_{i,+},$ and, similarly, $V_{i,-}$ and $V_{i,+}$ denote the intersection points between $\Gamma_V(h_i)$ and $r,$ with $V_{i,-}\vartriangleleft\, V\vartriangleleft\, V_{i,+}.$
\end{definition}
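Definition \ref{li} can be tested numerically. In the sketch below, the parameter values and the four energy levels are our own illustrative assumptions (chosen so that the linked configuration actually occurs); the eight intersection points with $r$ are located by bisection, exploiting the convexity of $H_S$ and $H_V$ along $r$:

```python
import math

# Sample parameters (assumptions): zeta=theta=1, eta=kappa=2 for (18b), so
# S = (0.5, 0.5); perturbed values eta_h=1.6, theta_h=1.2, kappa_h=2.2 for
# (18p), so V = (6/11, 5/8).
ze, et, th, ka = 1.0, 2.0, 1.0, 2.0
et_h, th_h, ka_h = 1.6, 1.2, 2.2
S = (th/ka, ze/et)
V = (th_h/ka_h, ze/et_h)

def H_S(d, l):
    return (-th*math.log(d) + (th - ka)*math.log(1 - d)
            - ze*math.log(l) + (ze - et)*math.log(1 - l))

def H_V(d, l):
    return (-th_h*math.log(d) + (th_h - ka_h)*math.log(1 - d)
            - ze*math.log(l) + (ze - et_h)*math.log(1 - l))

w = (V[0] - S[0], V[1] - S[1])                 # direction of the line r
P = lambda t: (S[0] + t*w[0], S[1] + t*w[1])   # t = 0 at S, t = 1 at V

def crossing(H, level, a, b, it=80):
    """Bisection for H(P(t)) = level on [a, b]; H is convex along r."""
    fa = H(*P(a)) - level
    for _ in range(it):
        m = 0.5*(a + b)
        if (H(*P(m)) - level)*fa > 0:
            a = m
        else:
            b = m
    return 0.5*(a + b)

e1, e2, h1, h2 = 2.798, 2.943, 2.60, 2.70      # sample energy levels
assert H_S(*S) < e1 < e2 and H_V(*V) < h1 < h2

S1m, S1p = crossing(H_S, e1, -3.9, 0), crossing(H_S, e1, 0, 3.9)
S2m, S2p = crossing(H_S, e2, -3.9, 0), crossing(H_S, e2, 0, 3.9)
V1m, V1p = crossing(H_V, h1, -3.9, 1), crossing(H_V, h1, 1, 3.9)
V2m, V2p = crossing(H_V, h2, -3.9, 1), crossing(H_V, h2, 1, 3.9)

# The ordering of Definition (li): the two annuli are linked together.
assert S2m < S1m <= V2m < V1m <= S1p < S2p <= V1p < V2p
```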
We refer the reader to Figure \ref{la} for a pictorial illustration of Definition \ref{li}. In order not to overburden the pictures, we drew the unit square, containing all orbits for both Systems \eqref{18b} and \eqref{18p}, just in Figure \ref{la} (A), and we will omit it in the subsequent pictures.
\begin{figure}[ht]
\centering
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=\textwidth,height=6cm]{fig1a}
\center{(A)}
\end{minipage}
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=\textwidth,height=6cm]{fig1b}
\center{(B)}
\end{minipage}
\caption{In (A) we represent some energy level lines associated with System \eqref{18b}, in red, surrounding $S,$ as well as some energy level lines associated with System \eqref{18p}, in blue, surrounding $V.$ In (B), suitably choosing two level lines for each system,
we obtain two linked together annuli, according to Definition \ref{li}.}\label{la}
\end{figure}
\noindent
In view of stating and proving Theorem \ref{app18}, we have to explain how the shift between the regimes described by Systems \eqref{18b} and \eqref{18p} occurs. Due to the above recalled seasonality in hypertension and its connection with perioperative adverse events, we can suppose that $q_{ND}$ in \eqref{18} alternates in a periodic fashion between a low and a high value.
We will then assume that our dynamical system is described by \eqref{18b} for $t\in[0,T_S)$, while it is described by \eqref{18p} for $t\in[T_S,T_S+T_V),$
and that the same alternation between the two regimes occurs with $T$-periodicity, where $T=T_S+T_V.$ In other terms, we can suppose that we are dealing with a system with periodic coefficients of the form
\begin{equation}\label{18t}
\left\{
\begin{array}{ll}
d\,'=d(1-d)\left(\zeta(t)-\eta(t)\ell\right)\\
\vspace{-2mm}\\
\ell\,'=\ell(1-\ell)\left(-\vartheta(t)+\kappa(t) d\right)
\end{array}
\right.
\end{equation}
where $\zeta(t)\equiv\zeta$ and, for $x\in\{\eta,\vartheta,\kappa\},$ it holds that
\begin{equation}\label{xt}
x(t)=\left\{
\begin{array}{ll}
x \quad & \mbox{for } \, t\in[0,T_S) \\
\vspace{-2mm}\\
\widehat x \quad & \mbox {for } \, t\in[T_S,T)
\end{array}
\right.
\end{equation}
with $0<\zeta<\widehat\eta,\,0<\vartheta<\kappa$ and $0<\widehat\vartheta<\widehat\kappa.$
Hence, the functions $\eta(t),\,\vartheta(t)$ and $\kappa(t)$ are
piecewise constant and they are supposed to be extended to
the whole real line by $T$-periodicity.\\
Let us finally introduce our last needed ingredient, i.e., the Poincar\'e map $\Psi$ of System \eqref{18t}, which associates with any initial condition $(d_0,\ell_0)$ belonging to the open unit square the position at time $T$ of the solution
$\varsigma(\,\cdot\,,(d_0,\ell_0))=(d(\,\cdot\,,(d_0,\ell_0)),\ell(\,\cdot\,,(d_0,\ell_0)))$ to \eqref{18t} starting at time $t=0$ from $(d_0,\ell_0).$
In symbols, $\Psi:(0,1)^2\to(0,1)^2,\,(d_0,\ell_0)\mapsto\varsigma(T,(d_0,\ell_0)).$
We remark that, along the paper, solutions are meant in the Carath\'eodory sense, i.e., they are absolutely continuous and satisfy the corresponding system for almost every $t\in\mathbb R.$ Notice that
$\Psi$ may be decomposed as $\Psi=\Psi_V\,\circ\Psi_S,$ where $\Psi_S$ is the Poincar\'e map associated with System \eqref{18b} for $t\in[0,T_S]$ and $\Psi_V$ is the Poincar\'e map associated with System \eqref{18p} for $t\in[0,T_V].$ Moreover, since every annulus $\mathcal C_S(e_1,e_2)$ around $S$ is invariant under the action of the map $\Psi_S,$ being composed of the invariant orbits $\Gamma_S(e),$ for $e\in[e_1,e_2],$ and, similarly, since every annulus $\mathcal C_V(h_1,h_2)$ around $V$ is invariant under the action of the map $\Psi_V,$ it holds that every pair of linked annuli is invariant under the action of the composite map $\Psi.$
In Theorem \ref{app18} we will denote by $\tau_S(e),$ for all $e>e_0,$ the period of $\Gamma_S(e),$ i.e., the time needed by the solution
$\varsigma_S(\,\cdot\,,(d_0,\ell_0))$ to System \eqref{18b}, starting from any $(d_0,\ell_0)\in\Gamma_S(e),$ to complete one turn around $S$ moving along $\Gamma_S(e),$ and by $\tau_V(h),$ for all $h>h_0,$ the period of $\Gamma_V(h),$ i.e., the time needed by the solution
$\varsigma_V(\,\cdot\,,(d_0,\ell_0))$ to System \eqref{18p}, starting from any $(d_0,\ell_0)\in\Gamma_V(h),$ to complete one turn around $V$ moving along $\Gamma_V(h).$ A straightforward analysis of the phase portrait shows that the orbits surrounding $S$ and those surrounding $V$ are all run counterclockwise. As recalled above, the increasing monotonicity of $\tau_S(\,\cdot\,)$ and $\tau_V(\,\cdot\,)$ with the energy levels has been proven in \cite{Sc-85,Sc-90}. Hence, for any annulus $\mathcal C_S(e_1,e_2)$ around $S$ it holds that $\tau_S(e_1)<\tau_S(e_2),$ as well as for each annulus $\mathcal C_V(h_1,h_2)$ around $V$ it holds that $\tau_V(h_1)<\tau_V(h_2).$\\
Our result about System \eqref{18t} reads as follows:
\begin{theorem}\label{app18}
For any choice of the parameters $0<\zeta<\eta$ and $0<\vartheta<\kappa,$ defined as in System \eqref{18b}, and for any increase in the parameter $q_{ND}$ such that $0<\zeta<\widehat\eta$ and $0<\widehat\vartheta<\widehat\kappa,$ given the annulus $\mathcal C_S(e_1,e_2)$ around $S,$ for some $e_0<e_1<e_2,$ and the annulus $\mathcal C_V(h_1,h_2)$ around $V,$ for some $h_0<h_1<h_2,$ assume that they are linked together, and call $\mathcal A$ and $\mathcal B$ the connected components of $\mathcal C_S(e_1,e_2)\cap\mathcal C_V(h_1,h_2).$
Then, if $T_S>\frac{11\,\tau_S(e_1)\tau_S(e_2)}{2\,(\tau_S(e_2)-\tau_S(e_1))}$ and
$T_V>\frac{9\,\tau_V(h_1)\tau_V(h_2)}{2\,(\tau_V(h_2)-\tau_V(h_1))},$ the
Poincar\'e map $\Psi=\Psi_V\circ\Psi_S$ of System \eqref{18t} induces chaotic dynamics on two symbols in $\mathcal A$ and in $\mathcal B,$ and thus all the properties listed in Theorem \ref{th} are fulfilled for $\Psi$.
\end{theorem}
We can summarize the statement of Theorem \ref{app18} by saying that, whenever we have two linked together annuli corresponding to Systems \eqref{18b} and \eqref{18p}, if the switching times between the regimes described by Systems \eqref{18b} and \eqref{18p} are sufficiently large, then the Poincar\'e map $\Psi=\Psi_V\circ\Psi_S$ induces chaotic dynamics on two symbols in the sets in which the two annuli cross, and thus all properties listed in Theorem \ref{th} hold for $\Psi.$
\smallskip
\noindent {\textit{Proof of Theorem \ref{app18}.}}
Given the linked together annuli $\mathcal C_S(e_1,e_2)$ and $\mathcal C_V(h_1,h_2),$ let us set $\mathcal C_S(e_1,e_2)=\mathcal C_S^u(e_1,e_2)\cup\mathcal C_S^d(e_1,e_2)$ and $\mathcal C_V(h_1,h_2)=\mathcal C_V^u(h_1,h_2)\cup\mathcal C_V^d(h_1,h_2),$ where $u$ (resp. $d$) stands for ``up'' (resp. ``down''), and indeed $\mathcal C_S^u(e_1,e_2)$ (resp. $\mathcal C_S^d(e_1,e_2)$) is the subset of $\mathcal C_S(e_1,e_2)$ which lies above (resp. below) the straight line $r,$ joining $S$ and $V,$ and, analogously, $\mathcal C_V^u(h_1,h_2)$ (resp. $\mathcal C_V^d(h_1,h_2)$) is the subset of $\mathcal C_V(h_1,h_2)$ which lies above (resp. below) the straight line $r.$ See Figure \ref{ud} for a graphical illustration.
\begin{figure}[ht]
\centering
\includegraphics[width=7cm,height=6cm]{fig2}
\caption{Given the linked together annuli $\mathcal C_S(e_1,e_2)$ and $\mathcal C_V(h_1,h_2),$ we represent in yellow the set $\mathcal C_S^u(e_1,e_2)$ and in light blue the set $\mathcal C_V^u(h_1,h_2),$ both lying above the straight line $r,$ joining the centers $S$ and $V,$ while we represent in orange the set $\mathcal C_S^d(e_1,e_2)$ and in dark blue the set $\mathcal C_V^d(h_1,h_2),$ both lying below the straight line $r.$ Notice that $\mathcal C_S^d(e_1,e_2)$ and $\mathcal C_V^d(h_1,h_2)$ cross in $\mathcal A$ (colored in dark green), and that
$\mathcal C_S^u(e_1,e_2)$ and $\mathcal C_V^u(h_1,h_2)$ cross in $\mathcal B$ (colored in light green).}\label{ud}
\end{figure}
\noindent
Let us also set $\mathcal A:=\mathcal C_S^d(e_1,e_2)\cap \mathcal C_V^d(h_1,h_2)$ and $\mathcal B:=\mathcal C_S^u(e_1,e_2)\cap\mathcal C_V^u(h_1,h_2).$ We are going to show that, when orienting them by setting ${\mathcal A}^-=\mathcal A^{-}_{l}\cup\mathcal A^{-}_{r}$ and ${\mathcal B}^-=\mathcal B^{-}_{l}\cup\mathcal B^{-}_{r},$ with $\mathcal A^{-}_{l}:=\mathcal A\cap\Gamma_S(e_1),\,\mathcal A^{-}_{r}:=\mathcal A\cap\Gamma_S(e_2),\,\mathcal B^{-}_{l}:=\mathcal B\cap\Gamma_V(h_1),\,\mathcal B^{-}_{r}:=\mathcal B\cap\Gamma_V(h_2),$ then conditions $(C_F)$ and $(C_G)$ in Theorem \ref{th} are fulfilled for the oriented rectangles ${\widetilde{\mathcal A}} := ({\mathcal A},{\mathcal A}^-)$ and
${\widetilde{\mathcal B}} := ({\mathcal B},{\mathcal B}^-)$ with $F=\Psi_S$ and $G=\Psi_V.$ Then the
Poincar\'e map $\Psi=\Psi_V\circ\Psi_S$ of System \eqref{18t} induces chaotic dynamics on two symbols in $\mathcal A$
and thus, since $\Psi$ is a homeomorphism on the open unit square,
all the properties listed in Theorem \ref{th} are fulfilled for $\Psi$. This will conclude the verification of the first half of our result. Namely, the proof will be complete when we show that conditions $(C_F)$ and $(C_G)$ in Theorem \ref{th} are fulfilled with $F=\Psi_S$ and $G=\Psi_V$ also when interchanging the roles of $\mathcal A$ and $\mathcal B,$ after having suitably modified their orientations, since from this it will follow that $\Psi$ induces chaotic dynamics on two symbols in $\mathcal B,$ too.\\
Checking that condition $(C_F)$ in Theorem \ref{th} is fulfilled with $F=\Psi_S$ with respect to ${\widetilde{\mathcal A}}$ and ${\widetilde{\mathcal B}}$ amounts to finding two disjoint compact subsets ${\mathcal H_0},\,{\mathcal H_1}$ of ${\mathcal A}$
such that $\displaystyle{({\mathcal H}_i,\Psi_S): {\widetilde{\mathcal A}}\stretchx\, {\widetilde{\mathcal B}}},$ for $i=0,1.$
To such aim, we need to consider the image $\overline\gamma$ through $\Psi_S$ of any path $\gamma:[0,1]\to{\mathcal A}$ joining $\mathcal A^{-}_{l}$ with
$\mathcal A^{-}_{r}$ and, recalling that orbits for System \eqref{18b} are run counterclockwise, count how many times it completely crosses
${\mathcal B},$ from $\mathcal B^{-}_{l}$ to $\mathcal B^{-}_{r},$ when $T_S$ is large enough. In particular, we will show that, in order for ${\mathcal H_0},\,{\mathcal H_1}$ as above to exist, two complete crossings have to occur between $\overline\gamma$ and ${\mathcal B}.$ This will allow us to obtain the constant $k_S:=\frac{11\,\tau_S(e_1)\tau_S(e_2)}{2\,(\tau_S(e_2)-\tau_S(e_1))}$ as a lower bound on $T_S,$ recalling that $\tau_S(e_1)<\tau_S(e_2).$\\
In order to count the turns completed by the image of a path around $S,$ we introduce a system of generalized polar coordinates centered at $S.$ Namely, assuming that we have performed the rototranslation of $\mathbb R^2$ that brings the origin
to the point $S$ and makes the horizontal axis coincide with the straight line $r,$ we can express the solution
$\varsigma_S(t\,,(d_0,\ell_0))=(d(t\,,(d_0,\ell_0)),\ell(t\,,(d_0,\ell_0)))$ to System \eqref{18b} with initial point in
$(d_0,\ell_0)\in (0,1)^2$ through the radial coordinate
$\rho_S(t,(d_0,\ell_0))=\|\varsigma_S(t\,,(d_0,\ell_0))-S\|$ and the angular coordinate $\theta_S(t,(d_0,\ell_0))$ as
$\varsigma_S(t\,,(d_0,\ell_0))=\rho_S(t,(d_0,\ell_0))\,(\cos(\theta_S(t,(d_0,\ell_0))),\,\sin(\theta_S(t,(d_0,\ell_0)))).$ Since orbits for System \eqref{18b} are run counterclockwise, we define
the rotation number, describing the normalized angular displacement of $\varsigma_S(t\,,(d_0,\ell_0))$ during the time interval $[0,t]\subseteq [0,T_S],$ as
\begin{equation}\label{rns}
\rot_S(t,(d_0,\ell_0)):=\frac{\theta_S(t,(d_0,\ell_0))-\theta_S(0,(d_0,\ell_0))}{2\pi}\,.
\end{equation}
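The rotation number \eqref{rns} can also be approximated numerically, by integrating System \eqref{18b} with a fixed-step RK4 scheme and unwrapping the angle of the solution around $S=\left(\frac{\vartheta}{\kappa},\frac{\zeta}{\eta}\right).$ The sketch below assumes the parameter values of Example \ref{18ex}; the step size and the initial points are arbitrary illustrative choices:

```python
import math

# Parameters of System (18b), taken from Example (18ex) for illustration.
ZETA, ETA, THETA, KAPPA = 6.0, 9.8, 15.2, 18.8
S = (THETA / KAPPA, ZETA / ETA)  # the center of System (18b)

def field(p):
    """Right-hand side of System (18b)."""
    d, l = p
    return (d * (1 - d) * (ZETA - ETA * l), l * (1 - l) * (-THETA + KAPPA * d))

def rk4_step(p, h):
    """One classical Runge-Kutta step of size h."""
    k1 = field(p)
    k2 = field((p[0] + h / 2 * k1[0], p[1] + h / 2 * k1[1]))
    k3 = field((p[0] + h / 2 * k2[0], p[1] + h / 2 * k2[1]))
    k4 = field((p[0] + h * k3[0], p[1] + h * k3[1]))
    return (p[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            p[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def rotation_number(p0, t_final, h=1e-3):
    """Approximate rot_S(t_final, p0): the continuously unwrapped angular
    displacement around S, normalized by 2*pi, cf. (rns)."""
    p = p0
    theta = math.atan2(p[1] - S[1], p[0] - S[0])
    total = 0.0
    for _ in range(int(round(t_final / h))):
        q = rk4_step(p, h)
        new = math.atan2(q[1] - S[1], q[0] - S[0])
        dtheta = new - theta
        # unwrap: jumps of about 2*pi come from the branch cut of atan2
        if dtheta > math.pi:
            dtheta -= 2 * math.pi
        elif dtheta < -math.pi:
            dtheta += 2 * math.pi
        total += dtheta
        p, theta = q, new
    return total / (2 * math.pi)
```

Since the orbits of System \eqref{18b} are run counterclockwise, the computed rotation number is positive; moreover, for orbits close to $S$ the linearization at the center suggests that one turn takes roughly $2\pi/\sqrt{\eta\,d_S(1-d_S)\,\kappa\,\ell_S(1-\ell_S)}\approx 2.41$ time units for these parameter values.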
For $e>e_0,$ as a consequence of the star-shapedness with respect to $S$ of the lower contour sets $\{(d,\ell)\in (0,1)^2:H_S(d,\ell)\le e\},$
with $H_S$ as in \eqref{hs-18},
and recalling that $\tau_S(e)$ is the time needed by the solution $\varsigma_S(\,\cdot\,,(d_0,\ell_0))$ to System \eqref{18b}, starting from any $(d_0,\ell_0)\in\Gamma_S(e),$ to complete one turn around $S$ moving along $\Gamma_S(e),$
the following properties hold true for every $(d_0,\ell_0)\in\Gamma_S(e),$ for all $t\in [0,T_S]$ and for each $n\in {\mathbb N}\setminus\{0\}$:
\begin{equation}\label{rot}
\begin{array}{ll}
\rot_S(t,(d_0,\ell_0)) < n \, &\Longleftrightarrow \,\,\, t < n \, \tau_S(e)\\
\vspace{-3mm}\\
\rot_S(t,(d_0,\ell_0)) = n \, &\Longleftrightarrow \,\,\, t = n \, \tau_S(e)\\
\vspace{-3mm}\\
\rot_S(t,(d_0,\ell_0)) > n \, &\Longleftrightarrow \,\,\, t > n \, \tau_S(e)
\end{array}
\end{equation}
so that $\rot_S(t,(d_0,\ell_0))\in(n,n+1) \Longleftrightarrow t\in(n \,\tau_S(e),(n + 1)\, \tau_S(e)).$\\
As explained above, in order to check that condition $(C_F)$ in Theorem \ref{th} is fulfilled with $F=\Psi_S$ with respect to ${\widetilde{\mathcal A}}$ and ${\widetilde{\mathcal B}},$ we have to find two disjoint compact subsets ${\mathcal H_0},\,{\mathcal H_1}$ of ${\mathcal A}$
such that $\displaystyle{({\mathcal H}_i,\Psi_S): {\widetilde{\mathcal A}}\stretchx\, {\widetilde{\mathcal B}}},$ for $i=0,1.$ Let then
$\gamma:[0,1]\to{\mathcal A},\,\lambda\mapsto\gamma(\lambda),$ be any path such that $\gamma(0)\in\mathcal A^{-}_{l}$ and $\gamma(1)\in\mathcal A^{-}_{r}.$ For every $\lambda\in [0,1],$ we consider the position at time $T_S$ of the solution $\varsigma_S(t\,,\gamma(\lambda))$ to System \eqref{18b} starting at $t=0$ from $\gamma(\lambda)\in{\mathcal A},$ i.e., $\Psi_S(\gamma(\lambda)),$ as well as the angular coordinate
$\theta_S(T_S,\gamma(\lambda))$ associated with it. Notice that the function $\lambda\mapsto\theta_S(T_S,\gamma(\lambda))$ is continuous due to the continuity of $\gamma$ and by the continuous dependence of the solutions from the initial data.
Since $\mathcal A^{-}_{l}\subset\Gamma_S(e_1)$ and $\mathcal A^{-}_{r}\subset\Gamma_S(e_2),$ recalling that $\tau_S(e_1)<\tau_S(e_2),$ we claim that if $T_S\ge\frac{11\,\tau_S(e_1)\tau_S(e_2)}{2\,(\tau_S(e_2)-\tau_S(e_1))}$ then\footnote{We stress that the proof of the properties regarding the system centered at $S$ is valid also when $T_S=\frac{11\,\tau_S(e_1)\tau_S(e_2)}{2\,(\tau_S(e_2)-\tau_S(e_1))}$ and, similarly, the verification of the features concerning the system centered at $V$ works even when $T_V=\frac{9\,\tau_V(h_1)\tau_V(h_2)}{2\,(\tau_V(h_2)-\tau_V(h_1))}.$ However, the strict inequalities make our result robust with respect to small perturbations in the coefficients of System \eqref{18t}, as better explained below.}
$\theta_S(T_S,\gamma(0))-\theta_S(T_S,\gamma(1))>5\pi.$ If this is the case, there exists $n^{\ast}\in\mathbb N$ such that
$[2n^{\ast}\pi,2n^{\ast}\pi+\pi]$ and $[2(n^{\ast}+1)\pi,2(n^{\ast}+1)\pi+\pi]$ are contained in the interval
$\{\theta_S(T_S,\gamma(\lambda)):\lambda\in[0,1]\}.$ Hence, by Bolzano's theorem, there exist two disjoint maximal intervals
$[\lambda_0',\lambda_0''],\,[\lambda_1',\lambda_1'']$ of $[0,1]$ such that for $i\in\{0,1\}$ it holds that
$\{\theta_S(T_S,\gamma(\lambda)):\lambda\in[\lambda_i',\lambda_i'']\}\subseteq[2(n^{\ast}+i)\pi,2(n^{\ast}+i)\pi+\pi],$
with $\theta_S(T_S,\gamma(\lambda_i'))=2(n^{\ast}+i)\pi$ and $\theta_S(T_S,\gamma(\lambda_i''))=2(n^{\ast}+i)\pi+\pi.$
We can then set
${\mathcal H}_i:=\{(d_0,\ell_0)\in\mathcal A:\theta_S(T_S,(d_0,\ell_0))\in[2(n^{\ast}+i)\pi,2(n^{\ast}+i)\pi+\pi]\}$ for $i\in\{0,1\}$
in order to have the stretching relation
\begin{equation}\label{sr}
\displaystyle{({\mathcal H}_i,\Psi_S): {\widetilde{\mathcal A}}\stretchx\, {\widetilde{\mathcal B}}}
\end{equation}
satisfied. Namely, for $i\in\{0,1\},$ ${\mathcal H}_i$ is a compact set containing $\{\gamma(\lambda):\lambda\in[\lambda_i',\lambda_i'']\}.$ Moreover, we have proved that, for $i\in\{0,1\}$ and $\lambda\in[\lambda_i',\lambda_i''],$ it holds that
$\gamma(\lambda)\in{\mathcal H}_i,\,\Psi_S(\gamma(\lambda))\in\mathcal C_S^u(e_1,e_2)$ and $H_V(\Psi_S(\gamma(\lambda_i')))\le h_1,$
$H_V(\Psi_S(\gamma(\lambda_i'')))\ge h_2.$ Then, there exists $[\lambda_i^{\ast},\lambda_i^{\ast\ast}]\subseteq[\lambda_i',\lambda_i'']$ such that $\Psi_S(\gamma(\lambda))\in\mathcal B$ for every $\lambda\in[\lambda_i^{\ast},\lambda_i^{\ast\ast}],$ and
$H_V(\Psi_S(\gamma(\lambda_i^{\ast})))=h_1,\,H_V(\Psi_S(\gamma(\lambda_i^{\ast\ast})))=h_2.$
Since $\mathcal B^{-}_{l}:=\mathcal B\cap\Gamma_V(h_1),\,\mathcal B^{-}_{r}:=\mathcal B\cap\Gamma_V(h_2),$ this means that $\Psi_S(\gamma(\lambda_i^{\ast}))\in\mathcal B^{-}_{l}$ and $\Psi_S(\gamma(\lambda_i^{\ast\ast}))\in\mathcal B^{-}_{r},$ concluding the verification of \eqref{sr}. See Figure \ref{sapf} (A).\\
We then have to check our claim that $T_S\ge\frac{11\,\tau_S(e_1)\tau_S(e_2)}{2\,(\tau_S(e_2)-\tau_S(e_1))}$ implies that\\
$\theta_S(T_S,\gamma(0))-\theta_S(T_S,\gamma(1))>5\pi,$ where $\gamma:[0,1]\to{\mathcal A}$ is any path such that $\gamma(0)\in\mathcal A^{-}_{l}:=\mathcal A\cap\Gamma_S(e_1)$ and $\gamma(1)\in\mathcal A^{-}_{r}:=\mathcal A\cap\Gamma_S(e_2).$ Since $\rot_S(t,\gamma(0)) \geq \lfloor t/\tau_S(e_1)\rfloor$ and
$\rot_S(t,\gamma(1)) \leq \lceil t/\tau_S(e_2)\rceil$ for every $t> 0,$
it follows that
\begin{equation}\label{rots}
\rot_S(t,\gamma(0)) - \rot_S(t,\gamma(1))\ge \lfloor t/\tau_S(e_1)\rfloor-\lceil t/\tau_S(e_2)\rceil > t\, \frac{\tau_S(e_2) - \tau_S(e_1)}{\tau_S(e_1)\,\tau_S(e_2)}\, - 2
\end{equation}
for every $t>0.$
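The estimate \eqref{rots} relies only on $\lfloor x\rfloor>x-1$ and $\lceil x\rceil<x+1$; the following snippet double-checks it numerically for a pair of hypothetical periods $\tau_S(e_1)<\tau_S(e_2)$ (the values $3.2$ and $3.8$ are assumptions for this sketch):

```python
import math

tau1, tau2 = 3.2, 3.8  # hypothetical periods with tau1 < tau2

def crossing_gap(t):
    """Return (lhs, rhs) of the estimate (rots):
    floor(t/tau1) - ceil(t/tau2) > t*(tau2 - tau1)/(tau1*tau2) - 2."""
    lhs = math.floor(t / tau1) - math.ceil(t / tau2)
    rhs = t * (tau2 - tau1) / (tau1 * tau2) - 2
    return lhs, rhs

# At the threshold T_S = 11*tau1*tau2 / (2*(tau2 - tau1)) the right-hand
# side equals 11/2 - 2 = 7/2 > 3, as used in the proof.
T_S = 11 * tau1 * tau2 / (2 * (tau2 - tau1))
```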
As a consequence, for $T_S\ge\frac{11\,\tau_S(e_1)\tau_S(e_2)}{2\,(\tau_S(e_2)-\tau_S(e_1))}$ it holds that
$$\rot_S(T_S,\gamma(0)) - \rot_S(T_S,\gamma(1))> T_S\, \frac{\tau_S(e_2) - \tau_S(e_1)}{\tau_S(e_1)\,\tau_S(e_2)}\, - 2\ge\frac{11}{2}-2=\frac{7}{2}>3.$$
Hence, $\theta_S(T_S,\gamma(0)) - \theta_S(T_S,\gamma(1))>6\pi+\theta_S(0,\gamma(0))-\theta_S(0,\gamma(1)).$ Since
$\gamma([0,1])\subset\mathcal A:=\mathcal C_S^d(e_1,e_2)\cap\mathcal C_V^d(h_1,h_2),$ we have that
$\theta_S(0,\gamma(0)),\,\theta_S(0,\gamma(1))\in [-\pi,0],$ and thus $\theta_S(0,\gamma(0))-\theta_S(0,\gamma(1))>-\pi,$ from which it follows that
$\theta_S(T_S,\gamma(0)) - \theta_S(T_S,\gamma(1))>5\pi,$ as desired.\\
Let us now turn to the proof that condition $(C_G)$ in Theorem \ref{th} is fulfilled with $G=\Psi_V$ with respect to ${\widetilde{\mathcal B}}$ and ${\widetilde{\mathcal A}},$ i.e., that $\displaystyle{\Psi_V: {\widetilde{\mathcal B}}\stretchx\, {\widetilde{\mathcal A}}}.$
To such aim, we need to consider the image through $\Psi_V$ of any path $\sigma:[0,1]\to{\mathcal B}$ joining $\mathcal B^{-}_{l}$ with
$\mathcal B^{-}_{r}$ and check that it completely crosses ${\mathcal A},$ from $\mathcal A^{-}_{l}$ to $\mathcal A^{-}_{r},$ at least once
when $T_V\ge\frac{9\,\tau_V(h_1)\tau_V(h_2)}{2\,(\tau_V(h_2)-\tau_V(h_1))},$ recalling that orbits for System \eqref{18p} are run counterclockwise and that $\tau_V(h_1)<\tau_V(h_2).$
Also in this case we introduce a system of generalized polar coordinates, but now centered at $V.$ Namely, assuming that we have performed the rototranslation of $\mathbb R^2$ that brings the origin
to the point $V$ and makes the horizontal axis coincide with the straight line $r,$ we can express the solution
$\varsigma_V(t\,,(d_0,\ell_0))=(d(t\,,(d_0,\ell_0)),\ell(t\,,(d_0,\ell_0)))$ to System \eqref{18p} with initial point in
$(d_0,\ell_0)\in (0,1)^2$ through the radial coordinate
$\rho_V(t,(d_0,\ell_0))=\|\varsigma_V(t\,,(d_0,\ell_0))-V\|$ and the angular coordinate $\theta_V(t,(d_0,\ell_0))$ as
$\varsigma_V(t\,,(d_0,\ell_0))=\rho_V(t,(d_0,\ell_0))\,(\cos(\theta_V(t,(d_0,\ell_0))),\,\sin(\theta_V(t,(d_0,\ell_0)))).$ Since orbits for System \eqref{18p} are still run counterclockwise, we can define
the rotation number as
\begin{equation}\label{rnv}
\rot_V(t,(d_0,\ell_0)):=\frac{\theta_V(t,(d_0,\ell_0))-\theta_V(0,(d_0,\ell_0))}{2\pi}\,,
\end{equation}
describing the normalized angular displacement during the time interval $[0,t]\subseteq [0,T_V]$ of $\varsigma_V(t\,,(d_0,\ell_0)).$\\
Let then $\sigma:[0,1]\to{\mathcal B}$ be any path such that $\sigma(0)\in\mathcal B^{-}_{l}$ and
$\sigma(1)\in\mathcal B^{-}_{r}.$ We are going to show that if $T_V\ge\frac{9\,\tau_V(h_1)\tau_V(h_2)}{2\,(\tau_V(h_2)-\tau_V(h_1))}$
then $\theta_V(T_V,\sigma(0))-\theta_V(T_V,\sigma(1))>3\pi.$
Since $\mathcal B^{-}_{l}:=\mathcal B\cap\Gamma_V(h_1),\,\mathcal B^{-}_{r}:=\mathcal B\cap\Gamma_V(h_2),$ it holds that
$\rot_V(t,\sigma(0)) \geq \lfloor t/\tau_V(h_1)\rfloor$ and
$\rot_V(t,\sigma(1)) \leq \lceil t/\tau_V(h_2)\rceil$ for every $t> 0,$
and thus
$\rot_V(t,\sigma(0)) - \rot_V(t,\sigma(1))>t\, \frac{\tau_V(h_2) - \tau_V(h_1)}{\tau_V(h_1)\,\tau_V(h_2)}\, - 2,$
in analogy with \eqref{rots}. Hence, for\\
$T_V\ge\frac{9\,\tau_V(h_1)\tau_V(h_2)}{2\,(\tau_V(h_2)-\tau_V(h_1))}$ it holds that
\begin{equation*}
\rot_V(T_V,\sigma(0)) - \rot_V(T_V,\sigma(1))> T_V\, \frac{\tau_V(h_2) - \tau_V(h_1)}{\tau_V(h_1)\,\tau_V(h_2)}\, - 2\ge\frac{9}{2}-2=\frac{5}{2}>2.
\end{equation*}
Recalling that $\mathcal B:=\mathcal C_S^u(e_1,e_2)\cap\mathcal C_V^u(h_1,h_2),$ since
$\sigma([0,1])\subset\mathcal B,$ we have that \\
$\theta_V(T_V,\sigma(0)) - \theta_V(T_V,\sigma(1))>4\pi+\theta_V(0,\sigma(0))-\theta_V(0,\sigma(1))>4\pi-\pi=3\pi,$ as needed,
because $\theta_V(0,\sigma(0)),\,\theta_V(0,\sigma(1))\in [0,\pi],$ and thus $\theta_V(0,\sigma(0))-\theta_V(0,\sigma(1))>-\pi.$\\
Then, there exists $n^{\ast\ast}\in\mathbb N$ such that
$[(2n^{\ast\ast}+1)\pi,(2n^{\ast\ast}+2)\pi]\subset\{\theta_V(T_V,\sigma(\lambda)):\lambda\in[0,1]\}$ and, consequently, there exists
$[\lambda',\lambda'']\subseteq [0,1]$ such that \\
$\{\theta_V(T_V,\sigma(\lambda)):\lambda\in[\lambda',\lambda'']\}\subseteq[(2n^{\ast\ast}+1)\pi,(2n^{\ast\ast}+2)\pi],$
with $\theta_V(T_V,\sigma(\lambda'))=(2n^{\ast\ast}+1)\pi$ and $\theta_V(T_V,\sigma(\lambda''))=(2n^{\ast\ast}+2)\pi.$
Thus, $\Psi_V(\sigma(\lambda))\in\mathcal C_V^d(h_1,h_2)$ for every $\lambda\in[\lambda',\lambda'']$
and $H_S(\Psi_V(\sigma(\lambda')))\le e_1,\,
H_S(\Psi_V(\sigma(\lambda'')))\ge e_2.$ Then, there exists $[\lambda^{\ast},\lambda^{\ast\ast}]\subseteq[\lambda',\lambda'']$ such that $\Psi_V(\sigma(\lambda))\in\mathcal A$ for every $\lambda\in[\lambda^{\ast},\lambda^{\ast\ast}],$ and it holds that
$H_S(\Psi_V(\sigma(\lambda^{\ast})))=e_1,\,H_S(\Psi_V(\sigma(\lambda^{\ast\ast})))=e_2,$ so that
the stretching relation
$\displaystyle{\Psi_V: {\widetilde{\mathcal B}}\stretchx\, {\widetilde{\mathcal A}}}$
is verified. See Figure \ref{sapf} (B).\\
Since we have checked that conditions $(C_F)$ and $(C_G)$ in Theorem \ref{th} are fulfilled for the oriented rectangles ${\widetilde{\mathcal A}} := ({\mathcal A},{\mathcal A}^-)$ and
${\widetilde{\mathcal B}} := ({\mathcal B},{\mathcal B}^-)$ with $F=\Psi_S$ and $G=\Psi_V,$ it follows that $\Psi=\Psi_V\circ\Psi_S$
induces chaotic dynamics on two symbols in $\mathcal A.$ Given that $\Psi$ is a homeomorphism on the open unit square, it is injective on the set ${\mathcal H}:=({\mathcal H}_0\cup{\mathcal H}_1)\cap{\Psi_S}^{-1}(\mathcal B),$ and thus
all the properties listed in Theorem \ref{th} are fulfilled for $\Psi$.\\
In order to verify that $\Psi$ induces chaotic dynamics on two symbols in $\mathcal B,$ too, we orient $\mathcal A$ and $\mathcal B$ by setting $\mathcal A^{-}_{l}:=\mathcal A\cap\Gamma_V(h_1),\,\mathcal A^{-}_{r}:=\mathcal A\cap\Gamma_V(h_2),\,\mathcal B^{-}_{l}:=\mathcal B\cap\Gamma_S(e_1),\,\mathcal B^{-}_{r}:=\mathcal B\cap\Gamma_S(e_2),$ and we should
show that the image through $\Psi_S$ of any path joining in $\mathcal B$ the sides $\mathcal B^{-}_{l}$ and
$\mathcal B^{-}_{r}$ crosses ${\mathcal A},$ from $\mathcal A^{-}_{l}$ to $\mathcal A^{-}_{r},$ at least twice when
$T_S\ge\frac{11\,\tau_S(e_1)\tau_S(e_2)}{2\,(\tau_S(e_2)-\tau_S(e_1))},$ and then check that the image through $\Psi_V$ of any path in $\mathcal A$ joining $\mathcal A^{-}_{l}$ with
$\mathcal A^{-}_{r}$ crosses ${\mathcal B},$ from $\mathcal B^{-}_{l}$ to $\mathcal B^{-}_{r},$ at least once
when $T_V\ge\frac{9\,\tau_V(h_1)\tau_V(h_2)}{2\,(\tau_V(h_2)-\tau_V(h_1))}.$ Namely, this amounts to showing that $\displaystyle{({\mathcal K}_i,\Psi_S): {\widetilde{\mathcal B}}\stretchx\, {\widetilde{\mathcal A}}}$ for suitable disjoint compact subsets ${\mathcal K}_i$ of $\mathcal B,$ for $i\in\{0,1\},$ as well as that $\displaystyle{\Psi_V: {\widetilde{\mathcal A}}\stretchx\, {\widetilde{\mathcal B}}}.$
Due to the similarity with the properties proved above, we leave the details of the verification of the new conditions $(C_F)$ and $(C_G)$ in Theorem \ref{th} to the reader.
Then, by Theorem \ref{th} it holds that $\Psi$ induces chaotic dynamics on two symbols in $\mathcal B,$ as desired.\\
The proof is complete.
$\hfill\square$
\begin{figure}[ht]
\centering
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=\textwidth,height=6cm]{fig3a}
\center{(A)}
\end{minipage}
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=\textwidth,height=6cm]{fig3b}
\center{(B)}
\end{minipage}
\caption{In (A) we provide a qualitative representation of what happens when the stretching relation in \eqref{sr} is fulfilled. Namely, for any path $\gamma$ (in purple) joining $\mathcal A^{-}_{l}$ to $\mathcal A^{-}_{r}$ in $\mathcal A,$ there exist two compact subsets $\mathcal H_0$ and $\mathcal H_1$ (in lilac) of $\mathcal A$ such that the restriction (represented in brown) of $\gamma$ to each of them is transformed by $\Psi_S$ into a path (represented in brown) joining $\mathcal B^{-}_{l}$ to $\mathcal B^{-}_{r}$ in $\mathcal B.$ In (B)
we provide a qualitative representation of the effect produced on each of those two paths by the
stretching relation $\displaystyle{\Psi_V: {\widetilde{\mathcal B}}\stretchx\, {\widetilde{\mathcal A}}}.$
In this case the Poincar\'e map $\Psi_V$ transforms those paths into paths (hazel colored) joining $\mathcal A^{-}_{l}$ to $\mathcal A^{-}_{r}$ in $\mathcal A.$}\label{sapf}
\end{figure}
As it is clear from the proof of Theorem \ref{app18}, the chaotic dynamics on two symbols are the result of the twist property
for the rotation number, generated by the different speeds with which the inner and the outer boundaries of two linked together annuli are run. Namely, for large enough time-intervals this suffices to make the image of paths through the Poincar\'e maps related to Systems
\eqref{18b} and \eqref{18p} turn in a spiral-like fashion inside the annuli and cross a suitably high number of times the intersection sets between the linked annuli, where the invariant chaotic sets are located. See Figure \ref{sapf} for a qualitative graphical representation
of our framework, as well as Figures 4 and 5 in \cite{PaZa-08} for a similar scenario, in which the spiral effect is clearly illustrated. Indeed, if the switching times $T_S$ and $T_V$ were still larger, it could be proven that the Poincar\'e map $\Psi=\Psi_V\circ\Psi_S$ associated with System \eqref{18t} induces chaotic dynamics on $m$ symbols, where $m$ is any integer greater than or equal to $2,$ implying the existence of a semi-conjugacy between $\Psi$ and the Bernoulli shift on $\Sigma_m:=\{0,1,\dots,m-1\}^{\mathbb Z}$ and, consequently, implying the estimate $h_{\rm top}(\Psi)\ge\log(m)$ for the topological entropy. See Lemma 3.1 and Theorem 1.2 in \cite{PiZa-08} for further details.\\
We also stress that Theorem \ref{app18} is robust with respect to small perturbations in the coefficients of System \eqref{18t}, in $L^1$ norm. Namely, if $T_S$ and $T_V$ satisfy the conditions described in its statement, then, recalling the definition of System \eqref{18t} and of its coefficients in \eqref{xt}, there exists a positive constant $\varepsilon$ such that the same conclusions of Theorem \ref{app18} hold true for the system
\begin{equation*}
\left\{
\begin{array}{ll}
d\,'=d(1-d)\left(\tilde\zeta(t)-\tilde\eta(t)\ell\right)\\
\vspace{-2mm}\\
\ell\,'=\ell(1-\ell)\left(-\tilde\vartheta(t)+\tilde\kappa(t) d\right)
\end{array}
\right.
\end{equation*}
with $\tilde\zeta,\,\tilde\eta,\,\tilde\vartheta,\,\tilde\kappa: \mathbb R\to\mathbb R$ that are $T$-periodic functions,
as long as
$$
\int_0^T \bigl|\,\tilde\zeta(t)-\zeta\bigr|\,dt < \varepsilon,\quad
\int_0^T \bigl|\,\tilde\eta(t)-\eta(t)\bigr|\,dt < \varepsilon,
$$
$$
\int_0^T \bigl|\,\tilde\vartheta(t)-\vartheta(t)\bigr|\,dt < \varepsilon,\quad
\int_0^T \bigl|\,\tilde\kappa(t)-\kappa(t)\bigr|\,dt < \varepsilon.
$$
All the above remarks apply, with the suitable modifications, to the frameworks discussed in Subsection \ref{32} and in Section \ref{sec-4}, too.\\
We conclude the present investigation concerning the model proposed in \cite{Anea-18} by illustrating a
numerical example, in which Theorem \ref{app18} can be used to show the existence of chaotic dynamics.
\begin{example}\label{18ex}
Taking $p_D=0.9,\,p_{ND}=0.1,\,q_D=0.1,\,q_{ND}=0.2<\widehat{q}_{ND}=0.3,\,B^{PH}=6,\,E=140,\,C_L=90,$ we obtain
System \eqref{18b} with $\zeta=B^{PH}=6,\,\eta=(p_Dq_D-p_{ND}q_{ND})E=9.8,\,\vartheta=q_{ND}(C_L-p_{ND}E)=15.2,\,\kappa=q_{ND}(C_L-p_{ND}E)+q_D(p_{D}E-C_L)=18.8,$ as well as System \eqref{18p} with $\zeta=B^{PH}=6,\,\widehat\eta=(p_Dq_D-p_{ND}\widehat{q}_{ND})E=8.4,\,\widehat\vartheta=\widehat{q}_{ND}(C_L-p_{ND}E)=22.8,\,\widehat\kappa=\widehat{q}_{ND}(C_L-p_{ND}E)+q_D(p_{D}E-C_L)=26.4.$
Hence, the former system has a center in $S=\left(\frac{\vartheta}{\kappa},\frac{\zeta}{\eta}\right)=(0.809,0.612),$ while the latter system has a center in $V=\left(\frac{\widehat\vartheta}{\widehat\kappa},\frac{\zeta}{\widehat\eta}\right)=(0.864,0.714).$\\
As shown in Figure \ref{nef}, two linked together annuli $\mathcal C_S(e_1,e_2)$ and $\mathcal C_V(h_1,h_2)$ can be obtained\,\footnote{We remark that, with respect to Figures \ref{la}--\ref{sapf}, Figure \ref{nef} is drawn for a different (more sensible, from an interpretative viewpoint) parameter configuration. However, in Figure \ref{nef} orbits are more squeezed, especially for System \eqref{18p}, so that the sets specified in the previous pictures would have been more difficult to highlight in the present context. Namely, Figures \ref{la}--\ref{sapf} just illustrate Definition \ref{li} and Theorem \ref{app18}, but we do not assign to them any interpretative value.} for $e_1=16.9,\,e_2=18.5,\,h_1=16.9,\,h_2=18.4,$ intersecting in the two disjoint generalized rectangles denoted by $\mathcal A$ and $\mathcal B.$ In this case, software-assisted computations show that
$\tau_S(e_1)\approx 3.2 <\tau_S(e_2)\approx 3.8$ and $\tau_V(h_1)\approx 3<\tau_V(h_2)\approx 3.3.$ Hence, Theorem \ref{app18} guarantees the existence of chaotic dynamics for the Poincar\'e map $\Psi=\Psi_V\circ\Psi_S$ associated with System \eqref{18t} provided that
$T_S>\frac{11\,\tau_S(e_1)\tau_S(e_2)}{2\,(\tau_S(e_2)-\tau_S(e_1))}\approx 111.467$ and
$T_V>\frac{9\,\tau_V(h_1)\tau_V(h_2)}{2\,(\tau_V(h_2)-\tau_V(h_1))}\approx 148.500.$ Thus, considering $T=T_S+T_V=365,$ i.e., a period of one year, and assuming for instance that $T_S=T_V,$ i.e., that across the year the hot and the cold days are equally distributed, in which, according, e.g., to \cite{Atea-08,Deea-12}, hypertension phenomena respectively fall and rise, it is possible to apply Theorem \ref{app18} to conclude that $\Psi$ induces chaotic dynamics on two symbols in $\mathcal A$ and $\mathcal B.$
\end{example}
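The arithmetic of Example \ref{18ex} can be double-checked with a few lines of code; all the numerical values below, including the software-assisted period estimates $\tau_S,\,\tau_V,$ are those quoted in the example:

```python
# Data of Example (18ex); all values are taken from the text.
p_D, p_ND, q_D, q_ND, q_ND_hat = 0.9, 0.1, 0.1, 0.2, 0.3
B_PH, E, C_L = 6.0, 140.0, 90.0

# Coefficients of System (18b) and of the perturbed System (18p).
zeta = B_PH
eta = (p_D * q_D - p_ND * q_ND) * E
theta = q_ND * (C_L - p_ND * E)
kappa = theta + q_D * (p_D * E - C_L)
eta_hat = (p_D * q_D - p_ND * q_ND_hat) * E
theta_hat = q_ND_hat * (C_L - p_ND * E)
kappa_hat = theta_hat + q_D * (p_D * E - C_L)

S = (theta / kappa, zeta / eta)              # center of System (18b)
V = (theta_hat / kappa_hat, zeta / eta_hat)  # center of System (18p)

# Thresholds on the switching times, with the quoted period estimates.
tau_S1, tau_S2, tau_V1, tau_V2 = 3.2, 3.8, 3.0, 3.3
k_S = 11 * tau_S1 * tau_S2 / (2 * (tau_S2 - tau_S1))
k_V = 9 * tau_V1 * tau_V2 / (2 * (tau_V2 - tau_V1))
```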
\begin{figure}[ht]
\centering
\includegraphics[width=5cm,height=6cm]{fig4}
\caption{The two linked together annuli $\mathcal C_S(e_1,e_2)$ (whose boundary is colored in red) and $\mathcal C_V(h_1,h_2)$ (whose boundary is colored in blue) considered in Example \ref{18ex}, together with the corresponding phase portrait.}\label{nef}
\end{figure}
\subsection{Analysis of the framework in Antoci et al. (2016)}\label{32}
Recalling that
$E_D=q_D R-(1-q_D)K$ and $E_{ND}=q_{ND}R-(1-q_{ND})K,$
when setting
$\lambda=E_{ND}-E_D,\,\mu=C_D-C_{ND},\,\nu=E_{ND}-C_L,$
System \eqref{16} becomes
\begin{equation}\label{16b}
\left\{
\begin{array}{ll}
d\,'=d(1-d)\left(p\lambda\ell-\mu\right)\\
\vspace{-2mm}\\
\ell\,'=\ell(1-\ell)p\left(\nu-\lambda d\right)
\end{array}
\right.
\end{equation}
with $d(t),\,\ell(t)\in[0,1].$ Focusing on the case $0<\mu<p\lambda$ and $0<\nu<\lambda,$ the center is given by
$S=\left(\frac{\nu}{\lambda},\frac{\mu}{p\lambda}\right)$ and thus, if $p$ increases to a higher value $\widehat p,$ the new equilibrium $V=\left(\frac{\nu}{\lambda},\frac{\mu}{\widehat p\lambda}\right)$ is a global center of the perturbed system
\begin{equation}\label{16p}
\left\{
\begin{array}{ll}
d\,'=d(1-d)\left(\widehat p\lambda\ell-\mu\right)\\
\vspace{-2mm}\\
\ell\,'=\ell(1-\ell)\widehat p\left(\nu-\lambda d\right)
\end{array}
\right.
\end{equation}
as long as $0<\mu<\widehat p\lambda$ and $0<\nu<\lambda.$
In particular, due to the rise in $p,$ the abscissa of $V$ coincides with that of $S,$ while the ordinate of $V$ is smaller than that of $S.$\\
In what follows, we describe just the main steps that allow us to state and prove a result analogous to Theorem \ref{app18}, focusing on the differences with Subsection \ref{31}. In order to simplify the comparison between Subsections \ref{31} and \ref{32}, we will maintain the notation introduced therein to denote similar objects.\\
The orbits for System \eqref{16b} surrounding $S$ have equation
\begin{equation}\label{hs-16}
H_S(d,\ell)=-p\nu\log(d)+p(\nu-\lambda)\log(1-d)-\mu\log(\ell)+(\mu-p\lambda)\log(1-\ell)=e,
\end{equation}
for some $e\ge e_0,$ where $e_0:=-p\nu\log(\frac{\nu}{\lambda})+p(\nu-\lambda)\log(1-\frac{\nu}{\lambda})-\mu\log(\frac{\mu}{p\lambda})+(\mu-p\lambda)\log(1-\frac{\mu}{p\lambda})$ is the minimum energy level attained by $H_S(d,\ell)$ on the unit square.
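As a sanity check on \eqref{hs-16}, one can verify numerically that $H_S$ is minimized exactly at the center $S=\left(\frac{\nu}{\lambda},\frac{\mu}{p\lambda}\right);$ the parameter values below are hypothetical, chosen only so that $0<\mu<p\lambda$ and $0<\nu<\lambda$:

```python
import math

# Hypothetical parameters with 0 < mu < p*lam and 0 < nu < lam.
p, lam, mu, nu = 0.5, 2.0, 0.7, 1.2

def H_S(d, l):
    """First integral (hs-16) of System (16b) on the open unit square."""
    return (-p * nu * math.log(d) + p * (nu - lam) * math.log(1 - d)
            - mu * math.log(l) + (mu - p * lam) * math.log(1 - l))

S = (nu / lam, mu / (p * lam))  # the center, here (0.6, 0.7)
e0 = H_S(*S)                    # minimum energy level on the unit square
```

Each of the four logarithmic terms is strictly convex in its variable under the stated sign conditions, so $H_S$ has a unique interior minimizer, attained at $S.$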
The orbits for System \eqref{16p} surrounding $V$ have equation
\begin{equation}\label{hv-16}
H_V(d,\ell)=-\widehat p\nu\log(d)+\widehat p(\nu-\lambda)\log(1-d)-\mu\log(\ell)+(\mu-\widehat p\lambda)\log(1-\ell)=h,
\end{equation}
for some $h\ge h_0,$ where $h_0:=H_V(V)$ is the minimum energy level attained by $H_V(d,\ell)$
on the unit square.\\
In the present framework, the straight line $r$ joining $S$ and $V$ is vertical, its equation being $d=\frac{\nu}{\lambda}.$ In order for Definition \ref{li} of linked together annuli to remain valid without any change when considering annuli composed of level lines of $H_S$ in \eqref{hs-16} and of $H_V$ in \eqref{hv-16}, we define the ordering ``$\vartriangleleft$'' as a reversed comparison between the ordinates,
so that given $P=(d_P,\ell_P)$ and $Q=(d_Q,\ell_Q)$ belonging to $r,$ we have that $d_P=d_Q$ and it holds that $P\vartriangleleft\, Q$ (resp. $P\trianglelefteq\, Q$) if and only if $\ell_P>\ell_Q$ (resp. $\ell_P\ge\ell_Q$). See Figure \ref{16-la} for a graphical illustration of the new geometrical configuration.
\begin{figure}[ht]
\centering
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=\textwidth,height=6cm]{fig5a}
\center{(A)}
\end{minipage}
\begin{minipage}[t]{0.48\textwidth}
\includegraphics[width=\textwidth,height=6cm]{fig5b}
\center{(B)}
\end{minipage}
\caption{In (A) we represent some energy level lines associated with System \eqref{16b}, in red, surrounding $S,$ as well as some energy level lines associated with System \eqref{16p}, in blue, surrounding $V.$ All of them are run clockwise. In (B), suitably choosing two level lines for each system,
we obtain two linked together annuli, according to Definition \ref{li}. Calling them $\mathcal C_S(e_1,e_2)$ and $\mathcal C_V(h_1,h_2),$ we represent in yellow the set $\mathcal C_S^u(e_1,e_2)$ and in light blue the set $\mathcal C_V^u(h_1,h_2),$ both lying on the left of the vertical line $r,$ joining the centers $S$ and $V,$ while we represent in orange the set $\mathcal C_S^d(e_1,e_2)$ and in dark blue the set $\mathcal C_V^d(h_1,h_2),$ both lying on the right of $r.$ Notice that $\mathcal C_S^d(e_1,e_2)$ and $\mathcal C_V^d(h_1,h_2)$ cross in $\mathcal A$ (colored in dark green), and that $\mathcal C_S^u(e_1,e_2)$ and $\mathcal C_V^u(h_1,h_2)$ cross in $\mathcal B$ (colored in light green).}\label{16-la}
\end{figure}
As in Subsection \ref{31}, due to the seasonal variation in hypertension and the connection of the latter with perioperative adverse events, we can assume that $p$ in \eqref{16} alternates in a periodic fashion between a low and a high value, so that the dynamics are governed
by \eqref{16b} for $t\in[0,T_S)$ and by \eqref{16p} for $t\in[T_S,T_S+T_V).$ Supposing that the same alternation between the two regimes occurs with $T$-periodicity, where $T=T_S+T_V,$ we are dealing with a system with periodic coefficients of the form
\begin{equation}\label{16t}
\left\{
\begin{array}{ll}
d\,'=d(1-d)\left(p(t)\lambda(t)\ell-\mu(t)\right)\\
\vspace{-2mm}\\
\ell\,'=\ell(1-\ell)p(t)\left(\nu(t)-\lambda(t) d\right)
\end{array}
\right.
\end{equation}
where $x(t)\equiv x,$ for $x\in\{\lambda,\mu,\nu\},$ and for $p(t)$ it holds that
\begin{equation}\label{pt}
p(t)=\left\{
\begin{array}{ll}
p \quad & \mbox{for } \, t\in[0,T_S) \\
\vspace{-2mm}\\
\widehat p \quad & \mbox {for } \, t\in[T_S,T)
\end{array}
\right.
\end{equation}
with $0<\mu<p\lambda$ and $0<\nu<\lambda.$ The function $p(t)$ is piecewise constant and it is supposed to be extended to the whole real
line by $T$-periodicity.\\
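For concreteness, the piecewise-constant $T$-periodic coefficient in \eqref{pt} can be sketched as follows (an illustrative Python fragment, not part of the paper's argument; the sample values $0.1,$ $0.2$ and $T_S=T_V=\frac{365}{2}$ used below are those of Example \ref{16ex}):

```python
# Illustrative sketch: evaluating the T-periodic, piecewise-constant
# coefficient p(t) of System (16t), extended to the whole real line.
def make_p(p_low, p_high, T_S, T_V):
    T = T_S + T_V
    def p(t):
        s = t % T  # reduce to the fundamental period [0, T)
        return p_low if s < T_S else p_high
    return p
```

For instance, `make_p(0.1, 0.2, 182.5, 182.5)` returns the coefficient appearing in Example \ref{16ex}; since Python's `%` returns a nonnegative remainder, the extension by $T$-periodicity holds for negative times as well.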
In regard to the Poincar\'e map $\Psi$ of System \eqref{16t}, it may be decomposed as $\Psi=\Psi_V\,\circ\Psi_S,$ where $\Psi_S$ is the Poincar\'e map associated with System \eqref{16b} for $t\in[0,T_S]$ and $\Psi_V$ is the Poincar\'e map associated with System \eqref{16p} for $t\in[0,T_V].$ As for the framework analyzed in Subsection \ref{31}, the increasing monotonicity of the period of the orbits surrounding $S$ and $V,$ denoted respectively by $\tau_S(e)$ and $\tau_V(h),$ with respect to the energy levels $e>e_0$ and $h>h_0,$ has been proven in \cite{Sc-85,Sc-90}. On the other hand, the analysis of the phase portrait shows that orbits surrounding $S$ and $V$ are this time both run clockwise. Since this difference with respect to the model proposed in \cite{Anea-18} will require us to modify, with respect to Theorem \ref{app18}, some details in the proof of our result on the existence of chaotic dynamics for System \eqref{16t} (compare in particular the definitions of rotation number in \eqref{rns} and \eqref{rnv} with those in \eqref{rnsb} and \eqref{rnvb}, as well as the estimates following from them), we present its statement in a precise manner, together with the main steps in its proof:
\begin{theorem}\label{app16}
For any choice of the parameters $0<\mu<p\lambda$ and $0<\nu<\lambda,$ defined as in System \eqref{16b}, and for any increase in the parameter $p,$ given the annulus $\mathcal C_S(e_1,e_2)$ around $S,$ for some $e_0<e_1<e_2,$ and the annulus $\mathcal C_V(h_1,h_2)$ around $V,$ for some $h_0<h_1<h_2,$ assume that they are linked together, calling $\mathcal A$ and $\mathcal B$ the connected components of $\mathcal C_S(e_1,e_2)\cap\mathcal C_V(h_1,h_2).$
Then, if $T_S>\frac{11\,\tau_S(e_1)\tau_S(e_2)}{2\,(\tau_S(e_2)-\tau_S(e_1))}$ and
$T_V>\frac{9\,\tau_V(h_1)\tau_V(h_2)}{2\,(\tau_V(h_2)-\tau_V(h_1))},$ the
Poincar\'e map $\Psi=\Psi_V\circ\Psi_S$ of System \eqref{16t} induces chaotic dynamics on two symbols in $\mathcal A$ and $\mathcal B,$ and thus
all the properties listed in Theorem \ref{th} are fulfilled for $\Psi$.
\end{theorem}
\begin{proof}
Given the linked together annuli $\mathcal C_S(e_1,e_2)$ and $\mathcal C_V(h_1,h_2),$ let us set $\mathcal C_S(e_1,e_2)=\mathcal C_S^u(e_1,e_2)\cup\mathcal C_S^d(e_1,e_2)$ and $\mathcal C_V(h_1,h_2)=\mathcal C_V^u(h_1,h_2)\cup\mathcal C_V^d(h_1,h_2),$ where $\mathcal C_S^u(e_1,e_2)$ (resp. $\mathcal C_S^d(e_1,e_2)$) is the subset of $\mathcal C_S(e_1,e_2)$ which lies on the left (resp. on the right) of the vertical line $r,$ joining $S$ and $V,$ and, analogously, $\mathcal C_V^u(h_1,h_2)$ (resp. $\mathcal C_V^d(h_1,h_2)$) is the subset of $\mathcal C_V(h_1,h_2)$ which lies on the left (resp. on the right) of the vertical line $r.$ See Figure \ref{16-la} (B) for a graphical illustration.\footnote{We stress that in this case the specifications $u$ and $d,$ standing respectively for ``up'' and ``down'', are meant with respect to the horizontal axis, before performing the rototranslation of $\mathbb R^2$ that brings the origin to the point $S$ and makes the horizontal axis coincide with the vertical line $r.$ As we shall see below, we need that rototranslation in order to introduce a system of generalized polar coordinates centered at $S,$ in view of defining the rotation number in \eqref{rnsb}. We will then perform the same operation with respect to $V,$ as well.}
Let us also set $\mathcal A:=\mathcal C_S^d(e_1,e_2)\cap \mathcal C_V^d(h_1,h_2)$ and $\mathcal B:=\mathcal C_S^u(e_1,e_2)\cap\mathcal C_V^u(h_1,h_2).$ We are going to show that, when orienting them by setting ${\mathcal A}^-=\mathcal A^{-}_{l}\cup\mathcal A^{-}_{r}$ and ${\mathcal B}^-=\mathcal B^{-}_{l}\cup\mathcal B^{-}_{r},$ with $\mathcal A^{-}_{l}:=\mathcal A\cap\Gamma_S(e_1),\,\mathcal A^{-}_{r}:=\mathcal A\cap\Gamma_S(e_2),\,\mathcal B^{-}_{l}:=\mathcal B\cap\Gamma_V(h_1),\,\mathcal B^{-}_{r}:=\mathcal B\cap\Gamma_V(h_2),$ then there exist two disjoint compact subsets ${\mathcal H_0},\,{\mathcal H_1}$ of ${\mathcal A}$
such that $\displaystyle{({\mathcal H}_i,\Psi_S): {\widetilde{\mathcal A}}\stretchx\, {\widetilde{\mathcal B}}},$ for $i=0,1,$ and that
$\displaystyle{\Psi_V: {\widetilde{\mathcal B}}\stretchx\, {\widetilde{\mathcal A}}}.$
If this is true, the Poincar\'e map $\Psi=\Psi_V\circ\Psi_S$ of System \eqref{16t} induces chaotic dynamics on two symbols in $\mathcal A$
and thus, since $\Psi$ is a homeomorphism on the open unit square, all the properties listed in Theorem \ref{th} are fulfilled for $\Psi$.\\
In order to check the former stretching relation, we introduce a system of generalized polar coordinates centered at $S,$ as explained in the proof of Theorem \ref{app18}, assuming to have performed the rototranslation of $\mathbb R^2$ that brings the origin to the point $S$ and makes the horizontal axis coincide with the vertical line $r.$ Since orbits for System \eqref{16b} are run clockwise, in order to count
positive the turns around $S$ in the clockwise sense, we define the rotation number, describing the normalized angular displacement during the time interval $[0,t]\subseteq [0,T_S]$ of the solution
$\varsigma_S(t\,,(d_0,\ell_0))$ to System \eqref{16b} with initial point in $(d_0,\ell_0)\in (0,1)^2,$ as
\begin{equation}\label{rnsb}
\Rot_S(t,(d_0,\ell_0)):=\frac{\theta_S(0,(d_0,\ell_0))-\theta_S(t,(d_0,\ell_0))}{2\pi}\,.
\end{equation}
For $e>e_0,$ as a consequence of the star-shapedness with respect to $S$ of the lower contour sets $\{(d,\ell)\in (0,1)^2:H_S(d,\ell)\le e\},$
with $H_S$ as in \eqref{hs-16}, conditions in \eqref{rot} are satisfied by $\Rot_S,$ too.
Let us now consider a generic path $\gamma:[0,1]\to{\mathcal A}$ with $\gamma(0)\in\mathcal A^{-}_{l},\,\gamma(1)\in\mathcal A^{-}_{r}$ and let us check that $T_S\ge\frac{11\,\tau_S(e_1)\tau_S(e_2)}{2\,(\tau_S(e_2)-\tau_S(e_1))}$ implies that $\theta_S(T_S,\gamma(1))-\theta_S(T_S,\gamma(0))>5\pi.$ Namely, as in \eqref{rots}, for every $t> 0$ it holds that
$\Rot_S(t,\gamma(0)) - \Rot_S(t,\gamma(1))> t\, \frac{\tau_S(e_2) - \tau_S(e_1)}{\tau_S(e_1)\,\tau_S(e_2)}-2,$ and thus,
for $T_S\ge\frac{11\,\tau_S(e_1)\tau_S(e_2)}{2\,(\tau_S(e_2)-\tau_S(e_1))},$ we have
$\Rot_S(T_S,\gamma(0)) - \Rot_S(T_S,\gamma(1))>3.$
Hence, $\theta_S(T_S,\gamma(1)) - \theta_S(T_S,\gamma(0))>6\pi+\theta_S(0,\gamma(1))-\theta_S(0,\gamma(0)).$ Since
$\theta_S(0,\gamma(0)),\,\theta_S(0,\gamma(1))\in [-\pi,0]$ because $\overline\gamma\subset\mathcal A:=\mathcal C_S^d(e_1,e_2)\cap\mathcal C_V^d(h_1,h_2),$ it holds that $\theta_S(0,\gamma(1))-\theta_S(0,\gamma(0))\ge-\pi,$ from which it follows that
$\theta_S(T_S,\gamma(1)) - \theta_S(T_S,\gamma(0))>5\pi.$
As a consequence, there exists $n^{\ast}\in\mathbb N\setminus\{0\}$ such that
$[-2(n^{\ast}+1)\pi,-2(n^{\ast}+1)\pi+\pi]$ and $[-2n^{\ast}\pi,-2n^{\ast}\pi+\pi]$ are contained in the interval
$\{\theta_S(T_S,\gamma(\lambda)):\lambda\in[0,1]\}.$ Hence, by Bolzano's theorem, there exist two disjoint maximal intervals
$[\lambda_0',\lambda_0''],\,[\lambda_1',\lambda_1'']$ of $[0,1]$ such that for $i\in\{0,1\}$ it holds that
$\{\theta_S(T_S,\gamma(\lambda)):\lambda\in[\lambda_i',\lambda_i'']\}\subseteq[-2(n^{\ast}+i)\pi,-2(n^{\ast}+i)\pi+\pi],$
with $\theta_S(T_S,\gamma(\lambda_i'))=-2(n^{\ast}+i)\pi$ and $\theta_S(T_S,\gamma(\lambda_i''))=-2(n^{\ast}+i)\pi+\pi.$
It is easy to check that we can then set
${\mathcal H}_i:=\{(d_0,\ell_0)\in\mathcal A:\theta_S(T_S,(d_0,\ell_0))\in[-2(n^{\ast}+i)\pi,-2(n^{\ast}+i)\pi+\pi]\}$ for $i\in\{0,1\}$
in order to have the stretching relation $\displaystyle{({\mathcal H}_i,\Psi_S): {\widetilde{\mathcal A}}\stretchx\, {\widetilde{\mathcal B}}}$ satisfied.\\
In order to prove that $\displaystyle{\Psi_V: {\widetilde{\mathcal B}}\stretchx\, {\widetilde{\mathcal A}}},$ we introduce a system of generalized polar coordinates, now centered at $V,$ defining in particular the rotation number as
\begin{equation}\label{rnvb}
\Rot_V(t,(d_0,\ell_0)):=\frac{\theta_V(0,(d_0,\ell_0))-\theta_V(t,(d_0,\ell_0))}{2\pi}\,,
\end{equation}
since orbits for System \eqref{16p} are still run clockwise.
Let $\sigma:[0,1]\to{\mathcal B}$ be any path such that $\sigma(0)\in\mathcal B^{-}_{l}$ and
$\sigma(1)\in\mathcal B^{-}_{r}.$ Then, similarly to what was proven for $\Rot_S,$ we have
$\Rot_V(t,\sigma(0)) - \Rot_V(t,\sigma(1))>t\, \frac{\tau_V(h_2) - \tau_V(h_1)}{\tau_V(h_1)\,\tau_V(h_2)}\, - 2,$ and therefore if $T_V\ge\frac{9\,\tau_V(h_1)\tau_V(h_2)}{2\,(\tau_V(h_2)-\tau_V(h_1))}$ it follows that $\Rot_V(T_V,\sigma(0)) - \Rot_V(T_V,\sigma(1))>2,$
from which, since $\overline\sigma\subset\mathcal B:=\mathcal C_S^u(e_1,e_2)\cap\mathcal C_V^u(h_1,h_2)$ implies that $\theta_V(0,\sigma(0)),\,\theta_V(0,\sigma(1))\in [0,\pi]$ and thus $\theta_V(0,\sigma(1))-\theta_V(0,\sigma(0))\ge-\pi,$ it follows that $\theta_V(T_V,\sigma(1))-\theta_V(T_V,\sigma(0))>3\pi.$\\
Hence, there exists $n^{\ast\ast}\in\mathbb N\setminus\{0\}$ such that
$[-2n^{\ast\ast}\pi+\pi,-2n^{\ast\ast}\pi+2\pi]\subset\{\theta_V(T_V,\sigma(\lambda)):\lambda\in[0,1]\}$ and, consequently, there exists
$[\lambda',\lambda'']\subseteq [0,1]$ such that
$\{\theta_V(T_V,\sigma(\lambda)):\lambda\in[\lambda',\lambda'']\}\subseteq[-2n^{\ast\ast}\pi+\pi,-2n^{\ast\ast}\pi+2\pi],$
with $\theta_V(T_V,\sigma(\lambda'))=-2n^{\ast\ast}\pi+\pi$ and $\theta_V(T_V,\sigma(\lambda''))=-2n^{\ast\ast}\pi+2\pi.$
This easily allows us to verify the stretching relation
$\displaystyle{\Psi_V: {\widetilde{\mathcal B}}\stretchx\, {\widetilde{\mathcal A}}}.$\\
Having proved that conditions $(C_F)$ and $(C_G)$ in Theorem \ref{th} are fulfilled for the oriented rectangles ${\widetilde{\mathcal A}} := ({\mathcal A},{\mathcal A}^-)$ and
${\widetilde{\mathcal B}} := ({\mathcal B},{\mathcal B}^-)$ with $F=\Psi_S$ and $G=\Psi_V,$ it follows that $\Psi=\Psi_V\circ\Psi_S$
induces chaotic dynamics on two symbols in $\mathcal A.$ Since $\Psi$ is a homeomorphism on the open unit square, it is injective on the set ${\mathcal H}:=({\mathcal H}_0\cup{\mathcal H}_1)\cap{\Psi_S}^{-1}(\mathcal B),$ and thus
all the properties listed in Theorem \ref{th} are fulfilled for $\Psi$.\\
In order to check that $\Psi$ induces chaotic dynamics on two symbols in $\mathcal B,$ too, we orient $\mathcal A$ and $\mathcal B$ by setting $\mathcal A^{-}_{l}:=\mathcal A\cap\Gamma_V(h_1),\,\mathcal A^{-}_{r}:=\mathcal A\cap\Gamma_V(h_2),\,\mathcal B^{-}_{l}:=\mathcal B\cap\Gamma_S(e_1),\,\mathcal B^{-}_{r}:=\mathcal B\cap\Gamma_S(e_2),$ and we should
show that $\displaystyle{({\mathcal K}_i,\Psi_S): {\widetilde{\mathcal B}}\stretchx\, {\widetilde{\mathcal A}}}$ for suitable compact disjoint subsets ${\mathcal K}_i$ of $\mathcal B,$ for $i\in\{0,1\},$ as well as that $\displaystyle{\Psi_V: {\widetilde{\mathcal A}}\stretchx\, {\widetilde{\mathcal B}}}.$ The verification of the details is omitted.
Then, by Theorem \ref{th} it holds that $\Psi$ induces chaotic dynamics on two symbols in $\mathcal B,$ as desired, and the proof is complete.
\end{proof}
We now provide a numerical example concerning the model proposed in \cite{Anea-16}, in which the existence of chaotic dynamics
can be shown by using Theorem \ref{app16}.
\begin{example}\label{16ex}
Taking $p=0.1<\widehat p=0.2,\,q_D=0.4,\,q_{ND}=0.6,\,R=130,\,K=70,\,C_D=18,\,C_{ND}=15,\,C_L=30,\,$
we obtain $E_D=q_D R-(1-q_D)K=10,\,E_{ND}=q_{ND} R-(1-q_{ND})K=50,$ and consequently
Systems \eqref{16b}, \eqref{16p} with $\lambda=E_{ND}-E_D=40,\,\mu=C_D-C_{ND}=3,\,\nu=E_{ND}-C_L=20.$
Since $0<\mu<p\lambda$ and $0<\nu<\lambda,$ System \eqref{16b} has a center in $S=\left(\frac{\nu}{\lambda},\frac{\mu}{p\lambda}\right)=(0.5,0.75),$ while System \eqref{16p} has a center in $V=\left(\frac{\nu}{\lambda},\frac{\mu}{\widehat p\lambda}\right)=(0.5,0.375).$ This is the framework represented in Figure \ref{16-la}.\\
In particular, as shown in Figure \ref{16-la} (B), two linked together annuli $\mathcal C_S(e_1,e_2)$ and $\mathcal C_V(h_1,h_2)$ can be obtained for $e_1=5.4,\,e_2=8.3,\,h_1=12.1,\,h_2=16.2,$ intersecting in the two disjoint generalized rectangles denoted by $\mathcal A$ and $\mathcal B.$ Software-assisted computations show that
$\tau_S(e_1)\approx 7.8 <\tau_S(e_2)\approx 11.3$ and $\tau_V(h_1)\approx 3.6<\tau_V(h_2)\approx 4.5.$ Hence, Theorem \ref{app16} guarantees the existence of chaotic dynamics for the Poincar\'e map $\Psi=\Psi_V\circ\Psi_S$ associated with System \eqref{16t} provided that
$T_S>\frac{11\,\tau_S(e_1)\tau_S(e_2)}{2\,(\tau_S(e_2)-\tau_S(e_1))}\approx 138.5$ and
$T_V>\frac{9\,\tau_V(h_1)\tau_V(h_2)}{2\,(\tau_V(h_2)-\tau_V(h_1))}\approx 81.$ Considering a period of one year and, in view of the opposite effects of hot and cold days on hypertension phenomena (see e.g. \cite{Atea-08,Deea-12}), assuming that they are equally distributed across the year, so that $T_S=T_V=\frac{365}{2},$ it is then possible to apply Theorem \ref{app16} to conclude that $\Psi$ induces chaotic dynamics on two symbols in $\mathcal A$ and $\mathcal B.$
\end{example}
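The software-assisted computations mentioned in Example \ref{16ex} can be reproduced with a short numerical sketch. We stress that the first integral used below is our own reconstruction: the expression $H_S(d,\ell)=-p\nu\log d+p(\nu-\lambda)\log(1-d)-\mu\log\ell+(\mu-p\lambda)\log(1-\ell)$ is conserved along solutions of \eqref{16b} and is consistent with the energy values $e_1,\,e_2$ of the example, but the normalization adopted for $H_S$ in \eqref{hs-16} is an assumption here, not taken from the text.

```python
import math

# Parameters of Example 16ex for System (16b):
#   d' = d(1-d)(p*lam*l - mu),   l' = l(1-l)*p*(nu - lam*d),
# with center S = (nu/lam, mu/(p*lam)) = (0.5, 0.75).
p, lam, mu, nu = 0.1, 40.0, 3.0, 20.0
dS, lS = nu / lam, mu / (p * lam)

def f(d, l):
    return (d * (1 - d) * (p * lam * l - mu),
            l * (1 - l) * p * (nu - lam * d))

def H_S(d, l):
    # Reconstructed first integral (assumption on the normalization of (hs-16));
    # one checks directly that dH_S/dt = 0 along solutions of (16b).
    return (-p * nu * math.log(d) + p * (nu - lam) * math.log(1 - d)
            - mu * math.log(l) + (mu - p * lam) * math.log(1 - l))

def start_on_level(e):
    # On the line l = lS, H_S(., lS) increases from H_S(S) to +infinity as
    # d -> 1, so the abscissa on the level e is found by bisection.
    lo, hi = dS, 1 - 1e-12
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if H_S(mid, lS) < e:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi), lS

def period(d, l, dt=1e-3):
    # Integrate with classical RK4, accumulating the (clockwise, hence
    # negative) angle swept around S; the period is the time to sweep 2*pi.
    theta, swept, t = math.atan2(l - lS, d - dS), 0.0, 0.0
    while abs(swept) < 2 * math.pi and t < 50.0:
        k1 = f(d, l)
        k2 = f(d + 0.5 * dt * k1[0], l + 0.5 * dt * k1[1])
        k3 = f(d + 0.5 * dt * k2[0], l + 0.5 * dt * k2[1])
        k4 = f(d + dt * k3[0], l + dt * k3[1])
        d += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        l += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        new = math.atan2(l - lS, d - dS)
        swept += (new - theta + math.pi) % (2 * math.pi) - math.pi  # unwrap
        theta = new
        t += dt
    return t
```

Under these conventions the computed periods at the levels $e_1=5.4$ and $e_2=8.3$ should match, up to discretization error, the values $\tau_S(e_1)\approx 7.8$ and $\tau_S(e_2)\approx 11.3$ reported in Example \ref{16ex}, and in particular confirm their increasing monotonicity.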
\section{A biological framework with intraspecific competition}\label{sec-4}
We now provide an ecological interpretation of the model in \eqref{zan}, connected with intraspecific competition and environmental carrying capacity in predator-prey models. Its precise formulation is given by
\begin{equation}\label{bio}
\left\{
\begin{array}{ll}
x\,'=r_x\, x\,\left(1-\frac{x}{K_x}\right)(\alpha-\beta y)\\
\vspace{-2mm}\\
y\,'=r_y\, y\,\left(1-\frac{y}{K_y}\right)(-\gamma+\delta x)
\end{array}
\right.
\end{equation}
with $x(t),\,y(t)>0$ representing the size of the prey and of the predator populations,
respectively, and $\alpha,\,\beta,\,\gamma,\,\delta,\,r_x,\,r_y,\,K_x,\,K_y$ positive constants.
In particular, if $0<\alpha<\beta$ and $0<\gamma<\delta$ all orbits are closed and periodic, surrounding the unique internal equilibrium $S=\left(\frac{\gamma}{\delta},\frac{\alpha}{\beta}\right),$ which is a center. With respect to the classical Lotka-Volterra model (cf. for instance \cite{PiZa-08}), the model in \eqref{bio} encompasses the logistic terms $r_x\, \left(1-\frac{x}{K_x}\right)$ and $r_y\, \left(1-\frac{y}{K_y}\right),$ which take into account intra-species interaction and the role of environmental resources. More precisely, the parameters $r_x$ and $r_y$ are called the intrinsic (or inherent) growth rate for the two populations, while the parameters $K_x$ and $K_y$ are called environmental carrying capacity for the two species, as they describe the maximum population size that the resources of the environment can carry over a period of time (see \cite{Beea-06} for further ecological details).
Such a biological interpretation of the model was briefly suggested in \cite{Haea-07}, where, however, the focus is on the celebrated growth-cycle model by Goodwin \cite{Go-67,Go-72}, describing the dynamics of the wage share of output and the employment proportion.
Actually, Harvie et al. in \cite{Haea-07} propose a system of differential equations in which each variable has both a positive and a
negative effect on its own growth rate, while in \eqref{bio} we focus just on the latter, in agreement with the models analyzed in Section \ref{sec-3}.\footnote{Still for the sake of conformity with the previous section, where the state variables $d(t)$ and $\ell(t)$ could vary just in $[0,1],$ being population shares, in Examples \ref{bioer} and \ref{bioek} we will confine ourselves to carrying capacities not exceeding unity, although $x(t)$ and $y(t)$ in \eqref{bio} can assume any positive value. In this manner those examples could be interpreted also in economic terms, concerning the wage share of output and the employment proportion, in line with \cite{Haea-07}. Of course, it would be easy to modify Examples \ref{bioer} and \ref{bioek} in order to allow $x(t)$ and $y(t)$ to exceed unity.}\\
In the existing literature, in order to take into account the changes over time of the habitat conditions and of the available resources (see e.g. \cite{Ayea-12} for an empirical study concerning salmonids), some authors have introduced periodic variations in carrying capacities and intrinsic growth rates in related settings. Namely, in \cite{Le-16} the focus is on the effect of a periodic variation in the value of the carrying capacity for a species described by the logistic equation, while in
\cite{NiGu-76} a numerical investigation of the effect of a periodic variation in the values of the intrinsic growth rate and of the carrying capacity for a species described by the logistic equation with a time delay is performed.
Moreover, a periodic variation in the carrying capacity has been considered in \cite{BoWa-07,CuHe-02} for the Beverton-Holt equation, related to the logistic equation, as well as in \cite{BoSt-15}, where also the intrinsic growth rate is assumed to vary in a periodic fashion in a quantum calculus version of the classical Beverton-Holt equation. In the same perspective, we can assume that $r_x$ and $r_y,$ as well as $K_x$ and $K_y,$ periodically alternate between two different values, e.g. due to a seasonal effect. This allows us to enter the setting of Linked Twist Maps and to apply Theorem \ref{th} to prove the existence of chaotic dynamics induced by the related Poincar\'e map, although neither the intrinsic growth rates nor the carrying capacities of the two populations affect the center position. Namely, as shown in different contexts e.g. in \cite{BuZa-09,PaZa-13}, a geometrical configuration connected with LTMs may be obtained even when the center position is not affected by a variation of a certain parameter, as long as the latter suitably influences the shape of the orbits. We show two such frameworks in Figures \ref{biofr} and
\ref{biofk}. The former is related to Example \ref{bioer}, where we focus on a variation of the intrinsic growth rates, while the latter illustrates Example \ref{bioek}, where we will deal with periodic oscillations in the carrying capacities of the two populations.
For clarity's sake, differently from \cite{BoSt-15,NiGu-76}, we will consider the variation in the intrinsic growth rates separately from the variation in the carrying capacities, in order to better understand the effect produced by each parameter on the shape of the orbits. Comparing Example \ref{bioer} with Example \ref{bioek}, we notice that raising the value of the intrinsic growth rate or of the carrying capacity for population $i\in\{x,y\}$ stretches orbits along the direction of the coordinate $i$-axis.
On the other hand, increasing carrying capacities enlarges the region in which orbits may lie, while increasing intrinsic growth rates does not. This difference is not very apparent in the framework considered in Example \ref{bioek}, where we will confine ourselves to carrying capacities close to unity, which is the value considered in Example \ref{bioer}, as explained in Footnote 12.\\
Turning to more technical aspects, since orbits of System \eqref{bio} are run counterclockwise, the proof of Theorem \ref{appbio}, concerning the framework with a periodic variation in intrinsic growth rates, will look similar to that of Theorem \ref{app18}, and thus it will be omitted for brevity's sake. For the framework with a periodic variation in carrying capacities, we will not even report the statement of our result about the existence of chaotic dynamics, limiting ourselves to some comments concerning it. On the other hand, since the center position is not influenced by a change in intrinsic growth rates or in carrying capacities, but the shape of the orbits is,
notation and, most importantly, the definition of linked together annuli need to be modified accordingly and also the statement of Theorem \ref{appbio} will be affected by those changes, together with the numerical Examples \ref{bioer} and \ref{bioek}. We refer the reader to Subsection \ref{31} for the remaining unchanged points.\\
Focusing at first on a variation in intrinsic growth rates in \eqref{bio}, let us assume that they alternate between certain values for the two populations, denoted by $r_x^{(1)}$ and $r_y^{(1)},$ for $t\in[0,T^{(1)})$, and different values, denoted by $r_x^{(2)}$ and $r_y^{(2)},$ for $t\in[T^{(1)},T^{(1)}+T^{(2)}).$\footnote{We stress that, in such manner, we are supposing that the switches between the different values of the intrinsic growth rates occur at the same time for both populations $x$ and $y.$ This is also the case that we will
consider in Example \ref{bioer}, in which we assume that periodic simultaneous exchanges between the intrinsic growth rate values of the two populations occur. However, more general frameworks in which the switches between the different values of the intrinsic growth rates are not simultaneous for the two populations could be easily analyzed as well, considering four dynamical regimes, rather than two. The same remark applies to the framework describing a periodic variation in the value of carrying capacities and to Example \ref{bioek}.}
Supposing that the same alternation between the two regimes occurs with $T$-periodicity, where $T=T^{(1)}+T^{(2)},$ we can assume that we are dealing with a system with periodic coefficients of the form
\begin{equation}\label{biot}
\left\{
\begin{array}{ll}
x\,'=r_x(t)\, x\,\left(1-\frac{x}{K_x(t)}\right)(\alpha(t)-\beta(t) y)\\
\vspace{-2mm}\\
y\,'=r_y(t)\, y\,\left(1-\frac{y}{K_y(t)}\right)(-\gamma(t)+\delta(t) x)
\end{array}
\right.
\end{equation}
where $c(t)\equiv c,$ for $c\in\{K_x,K_y,\alpha,\beta,\gamma,\delta\},$ and for $r_i(t),$ with $i\in\{x,y\},$ it holds that
\begin{equation}\label{rt}
r_i(t)=\left\{
\begin{array}{ll}
r_i^{(1)} \quad & \mbox{for } \, t\in[0,T^{(1)}) \\
\vspace{-2mm}\\
r_i^{(2)} \quad & \mbox {for } \, t\in[T^{(1)},T)
\end{array}
\right.
\end{equation}
with $0<\alpha<\beta$ and $0<\gamma<\delta.$ The functions $r_i(t),$ with $i\in\{x,y\},$ are piecewise constant and they are supposed to be extended to the whole real line by $T$-periodicity.\\
As observed above, the center coincides with $S=\left(\frac{\gamma}{\delta},\frac{\alpha}{\beta}\right)$ both when the intrinsic growth rates assume the values $r_x^{(1)}$ and $r_y^{(1)},$ in which case we are in the regime whose dynamics are governed by the system that we will call $(1),$ and when they assume the values $r_x^{(2)}$ and $r_y^{(2)},$ in which case we are in the regime governed by the system that we will call $(2).$ In the former case, the orbits have equation
$$
\begin{array}{lll}
H^{(1)}(x,y)&\!\!\!\! = &\!\!\!\!\frac{1}{r_y^{(1)}}\left(-\alpha\log(y)+(\alpha-\beta K_y)\log(K_y-y)\right)+\\
&&\!\!\!\!\frac{1}{r_x^{(1)}}\left(-\gamma\log(x)+(\gamma-\delta K_x)\log(K_x-x)\right)=e,
\end{array}
$$
for some $e\ge e_0^{(1)},$ while, in the latter case, the orbits have equation
$$
\begin{array}{lll}
H^{(2)}(x,y)&\!\!\!\! = &\!\!\!\!\frac{1}{r_y^{(2)}}\left(-\alpha\log(y)+(\alpha-\beta K_y)\log(K_y-y)\right)+\\
&&\!\!\!\!\frac{1}{r_x^{(2)}}\left(-\gamma\log(x)+(\gamma-\delta K_x)\log(K_x-x)\right)=h,
\end{array}
$$
for some $h\ge h_0^{(2)},$ where $e_0^{(1)}$ and $h_0^{(2)}$ are the minimum energy levels attained by $H^{(1)}(x,y)$ and $H^{(2)}(x,y)$ on the open rectangle $(0,K_x)\times(0,K_y),$ respectively, i.e., $e_0^{(1)}=H^{(1)}(S)$ and $h_0^{(2)}=H^{(2)}(S).$\\
The sets $\Gamma^{(1)}(e)=\{(x,y)\in (0,K_x)\times(0,K_y):H^{(1)}(x,y)=e\},$ for $e>e_0^{(1)},$ and
$\Gamma^{(2)}(h)=\{(x,y)\in (0,K_x)\times(0,K_y):H^{(2)}(x,y)=h\},$ for $h>h_0^{(2)},$ are simple closed curves surrounding $S.$
We call {\it annulus around} $S$ {\it for System} (1) any set $\mathcal C^{(1)}(e_1,e_2)=\{(x,y)\in (0,K_x)\times(0,K_y):e_1\le H^{(1)}(x,y)\le e_2\}$ with $e_0^{(1)}<e_1<e_2,$ whose inner boundary coincides with $\Gamma^{(1)}(e_1)$ and whose outer boundary coincides with $\Gamma^{(1)}(e_2).$ Similarly, we call {\it annulus around} $S$ {\it for System} (2) any set $\mathcal C^{(2)}(h_1,h_2)=\{(x,y)\in (0,K_x)\times(0,K_y):h_1\le H^{(2)}(x,y)\le h_2\}$ with
$h_0^{(2)}<h_1<h_2,$ whose inner boundary coincides with $\Gamma^{(2)}(h_1)$ and whose outer boundary coincides with $\Gamma^{(2)}(h_2).$\\
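As a sanity check (illustrative, and not part of the argument), one can verify numerically that $H^{(1)}$ and $H^{(2)}$ are indeed conserved along the corresponding flows, and that the minimum levels $e_0^{(1)}=H^{(1)}(S)$ and $h_0^{(2)}=H^{(2)}(S)$ lie just below the levels $e_1$ and $h_1$ used later in Example \ref{bioer}. A minimal Python sketch, assuming the parameter values of that example:

```python
import math

# Parameter values that will appear in Example bioer: alpha=16, beta=32,
# gamma=24, delta=30, K_x=K_y=1, with (r_x, r_y) = (2, 0.5) in regime (1)
# and (0.5, 2) in regime (2).
al, be, ga, de, Kx, Ky = 16.0, 32.0, 24.0, 30.0, 1.0, 1.0

def H(x, y, rx, ry):
    # H^(i) as displayed above, with (rx, ry) selecting the regime.
    return ((-al * math.log(y) + (al - be * Ky) * math.log(Ky - y)) / ry
            + (-ga * math.log(x) + (ga - de * Kx) * math.log(Kx - x)) / rx)

def f(x, y, rx, ry):
    # right-hand side of System (bio)
    return (rx * x * (1 - x / Kx) * (al - be * y),
            ry * y * (1 - y / Ky) * (-ga + de * x))

def flow(x, y, rx, ry, t_end, dt=1e-3):
    """Integrate System (bio) with classical RK4 up to time t_end."""
    for _ in range(int(round(t_end / dt))):
        k1 = f(x, y, rx, ry)
        k2 = f(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], rx, ry)
        k3 = f(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], rx, ry)
        k4 = f(x + dt * k3[0], y + dt * k3[1], rx, ry)
        x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        y += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return x, y
```

With these values, $S=(0.8,0.5)$ and the value of $H^{(1)}$ (resp. $H^{(2)}$) along a numerically integrated trajectory of System (1) (resp. (2)) stays constant up to the integration error.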
In the present framework, when considering annuli composed of level lines of $H^{(1)}$ and $H^{(2)},$ the definition of linked together annuli can be easily given by looking at the intersection points between the annuli and the horizontal line $\ell$ passing through $S,$ having equation $y=\frac{\alpha}{\beta}.$ The ordering
``$\vartriangleleft$'' in this case simply consists in a comparison between the $x$-coordinates,\footnote{We stress that a comparison between the vertical coordinates, as described in Subsection \ref{32}, would work as well. However, the consideration of the horizontal line $\ell$ is useful also in view of introducing the system of generalized polar coordinates centered at $S,$ that we need in the proof of Theorem \ref{appbio}, here omitted due to its similarity with the proof of Theorem \ref{app18}.} so that, given $P=(x_P,y_P)$ and $Q=(x_Q,y_Q)$ belonging to $\ell,$ we have that $y_P=y_Q$ and it holds that $P\vartriangleleft\, Q$ (resp. $P\trianglelefteq\, Q$) if and only if $x_P<x_Q$ (resp. $x_P\le x_Q$).
In particular, assume that, as in Example \ref{bioer}, it holds that $r_x^{(1)}>r_x^{(2)}$ and $r_y^{(1)}<r_y^{(2)},$ so that orbits are compressed along the $x$-direction and stretched along the $y$-direction passing from System (1) to System (2).
Then, taking inspiration from Figure \ref{biofr}, we introduce the definition of linked together annuli for the considered setting as follows:\footnote{Definition \ref{lim} can be easily modified when $r_x^{(1)}<r_x^{(2)}$ and $r_y^{(1)}>r_y^{(2)},$ so that orbits are stretched along the $x$-direction and compressed along the $y$-direction passing from System (1) to System (2). In the other cases (i.e., $r_x^{(1)}<r_x^{(2)}$ and $r_y^{(1)}<r_y^{(2)},$ or $r_x^{(1)}>r_x^{(2)}$ and $r_y^{(1)}>r_y^{(2)}$) it is sometimes possible to find linked together annuli, according to the considered parameter configurations.
For brevity's sake, we will not deepen such point, focusing in what follows just on the case $r_x^{(1)}>r_x^{(2)}$ and $r_y^{(1)}<r_y^{(2)},$ also in view of Example \ref{bioer}.}
\begin{definition}\label{lim}
Given the annuli $\mathcal C^{(1)}(e_1,e_2)$ and $\mathcal C^{(2)}(h_1,h_2)$ around $S,$ we say that they are linked together if
$$S^{\,(1)}_{2,-}\vartriangleleft\, S^{\,(1)}_{1,-}\trianglelefteq\, S^{\,(2)}_{2,-}\vartriangleleft\, S^{\,(2)}_{1,-}\vartriangleleft\, S^{\,(2)}_{1,+}\vartriangleleft\, S^{\,(2)}_{2,+}\trianglelefteq\, S^{\,(1)}_{1,+}\vartriangleleft\, S^{\,(1)}_{2,+}$$
where, for $i\in\{1,2\},$ $S^{\,(1)}_{i,-}$ and $S^{\,(1)}_{i,+}$ denote the intersection points\footnote{Notice that, for $e_i>e_0^{(1)}$ and $h_i>h_0^{(2)},\,i\in\{1,2\},$ the boundary sets $\Gamma^{(1)}(e_i)$ and $\Gamma^{(2)}(h_i)$ intersect the straight line $\ell$ in exactly two points because
$\{(x,y)\in (0,K_x)\times(0,K_y):H^{(1)}(x,y)\le e\}$ and $\{(x,y)\in (0,K_x)\times(0,K_y):H^{(2)}(x,y)\le h\}$ are star-shaped with respect to $S$ for all $e>e_0^{(1)}$ and for every $h>h_0^{(2)},$ respectively.}
between $\Gamma^{(1)}(e_i)$ and the straight line $\ell,$ with
$S^{\,(1)}_{i,-}\vartriangleleft\, S\vartriangleleft\, S^{\,(1)}_{i,+},$ and, similarly, $S^{\,(2)}_{i,-}$ and $S^{\,(2)}_{i,+}$ denote the intersection points between $\Gamma^{(2)}(h_i)$ and $\ell,$ with $S^{\,(2)}_{i,-}\vartriangleleft\, S\vartriangleleft\, S^{\,(2)}_{i,+}.$
\end{definition}
\noindent
Under the maintained assumptions, looking also at Figure \ref{biofr}, we deduce that, when an annulus composed of level lines of $H^{(1)}$ is linked with an annulus composed of level lines of $H^{(2)},$ the two annuli cross in four pairwise disjoint generalized rectangles, differently from the frameworks considered in Section \ref{sec-3}, where only two intersections occurred.
In particular, each generalized rectangle contains a chaotic set, when the switching times between Systems (1) and (2) are large enough. Namely, in such eventuality, the presence of complex dynamics may be proved by applying Theorem \ref{th} to any pair composed by two of those generalized rectangles, when dealing with the Poincar\'e maps associated with Systems (1) and (2). Indeed, the Poincar\'e map $\Psi$ of System \eqref{biot} may be decomposed as $\Psi=\Psi^{(2)}\,\circ\Psi^{(1)},$ where $\Psi^{(1)}$ is the Poincar\'e map associated with System (1) for $t\in[0,T^{(1)}]$ and $\Psi^{(2)}$ is the Poincar\'e map associated with System (2) for $t\in[0,T^{(2)}].$ Similar to the settings analyzed in Section \ref{sec-3}, the increasing monotonicity for both systems of the orbit period, denoted respectively by $\tau^{(1)}(e)$ and $\tau^{(2)}(h),$ with the energy levels $e>e_0^{(1)}$ and $h>h_0^{(2)},$ has been proven in \cite{Sc-85,Sc-90}.\\
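The ordering required by Definition \ref{lim} can be verified numerically for concrete energy values. The following sketch is illustrative and assumes the parameter and energy values of Example \ref{bioer} (namely $e_1=53,\,e_2=56.9,\,h_1=43,\,h_2=44.4$; any admissible values would do): it computes, by bisection, the abscissas of the eight intersection points of the four boundary level lines with the line $y=\frac{\alpha}{\beta}.$

```python
import math

# Parameter values of Example bioer: alpha=16, beta=32, gamma=24, delta=30,
# K_x = K_y = 1, (r_x^(1), r_y^(1)) = (2, 0.5), (r_x^(2), r_y^(2)) = (0.5, 2).
al, be, ga, de = 16.0, 32.0, 24.0, 30.0
xS, yS = ga / de, al / be  # the center S = (0.8, 0.5)

def Hline(x, rx, ry):
    """H^(i) restricted to the horizontal line y = alpha/beta (with K_x=K_y=1)."""
    return ((-al * math.log(yS) + (al - be) * math.log(1 - yS)) / ry
            + (-ga * math.log(x) + (ga - de) * math.log(1 - x)) / rx)

def xsect(rx, ry, level, a, b):
    """Bisection for Hline(x) = level on [a, b]; Hline is monotone on each
    side of xS (minimum at xS, blowing up at the boundary of (0, 1))."""
    fa = Hline(a, rx, ry) - level
    for _ in range(100):
        m = 0.5 * (a + b)
        fm = Hline(m, rx, ry) - level
        if fa * fm > 0:
            a, fa = m, fm
        else:
            b = m
    return 0.5 * (a + b)

eps = 1e-9
# abscissas of S^(1)_{i,-}, S^(1)_{i,+} (levels e_i) and of S^(2)_{i,-},
# S^(2)_{i,+} (levels h_i), for i = 1, 2
x1m = [xsect(2.0, 0.5, e, eps, xS) for e in (53.0, 56.9)]
x1p = [xsect(2.0, 0.5, e, xS, 1 - eps) for e in (53.0, 56.9)]
x2m = [xsect(0.5, 2.0, h, eps, xS) for h in (43.0, 44.4)]
x2p = [xsect(0.5, 2.0, h, xS, 1 - eps) for h in (43.0, 44.4)]
```

A direct comparison of the resulting abscissas then reproduces the chain $S^{\,(1)}_{2,-}\vartriangleleft S^{\,(1)}_{1,-}\trianglelefteq S^{\,(2)}_{2,-}\vartriangleleft\ldots\trianglelefteq S^{\,(1)}_{1,+}\vartriangleleft S^{\,(1)}_{2,+}$ of Definition \ref{lim} for these values.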
We are now in a position to state our result about System \eqref{biot} with $r_i(t)$ defined as in \eqref{rt} for $i\in\{x,y\}.$
\begin{theorem}\label{appbio}
For any choice of the positive parameters $\alpha,\,\beta,\,\gamma,\,\delta,\,K_x,\,K_y,$ with
$0<\alpha<\beta$ and $0<\gamma<\delta,$ let $\mathcal C^{(1)}(e_1,e_2)$ be an annulus around $S,$ for some $e_0^{(1)}<e_1<e_2,$ obtained when the intrinsic growth rates for the two populations assume values $r_x^{(1)}$ and $r_y^{(1)},$ respectively, and let $\mathcal C^{(2)}(h_1,h_2)$ be an annulus around $S,$ for some $h_0^{(2)}<h_1<h_2,$ obtained when the intrinsic growth rates assume values $r_x^{(2)}$ and $r_y^{(2)}.$
Assume that $r_x^{(1)}>r_x^{(2)},\,r_y^{(1)}<r_y^{(2)}$ and that $\mathcal C^{(1)}(e_1,e_2)$ and $\mathcal C^{(2)}(h_1,h_2)$ are linked together, calling $\mathcal R_j,\,j\in\{1,2,3,4\},$ the connected components of $\mathcal C^{(1)}(e_1,e_2)\cap\mathcal C^{(2)}(h_1,h_2).$
Then, if $T^{(1)}>\frac{9\,\tau^{(1)}(e_1)\tau^{(1)}(e_2)}{2\,(\tau^{(1)}(e_2)-\tau^{(1)}(e_1))}$ and
$T^{(2)}>\frac{7\,\tau^{(2)}(h_1)\tau^{(2)}(h_2)}{2\,(\tau^{(2)}(h_2)-\tau^{(2)}(h_1))},$ the
Poincar\'e map $\Psi=\Psi^{(2)}\circ\Psi^{(1)}$ of System \eqref{biot}, with $r_i(t)$ defined as in \eqref{rt} for $i\in\{x,y\},$ induces chaotic dynamics on two symbols in every $\mathcal R_j,\,j\in\{1,2,3,4\},$ and thus all the properties listed in Theorem \ref{th} are fulfilled for $\Psi$.
\end{theorem}
The proof is omitted because of its similarity with that of Theorem \ref{app18}. We just stress that the smaller constants in the lower bounds of
$T^{(1)}$ and $T^{(2)}$ in the statement of Theorem \ref{appbio} with respect to those for $T_S$ and $T_V$ in Theorem \ref{app18} are due to the fact that it is here possible to be more precise with the estimates concerning the angular coordinate, since, after having introduced a system of generalized polar coordinates centered at $S,$ it holds that each generalized rectangle
$\mathcal R_j,\,j\in\{1,2,3,4\},$ is contained in a single quadrant, rather than in a half-plane. For further details, see Section 4 in
\cite{BuZa-09}, where the authors deal with a similar geometrical configuration. Moreover, as observed above, the number of chaotic sets increases with respect to the frameworks considered in Subsections \ref{31} and \ref{32}, due to the larger number of intersection sets between two linked together annuli. Let us illustrate the framework just described in the next example.
\begin{example}\label{bioer}
Taking $\alpha=16<\beta=32,\,\gamma=24<\delta=30,\,K_x=K_y=1,$ System \eqref{bio} has a center in $S=\left(\frac{\gamma}{\delta},\frac{\alpha}{\beta}\right)=(0.8,0.5),$ both with $r_x=r_x^{(1)}=2,\,r_y=r_y^{(1)}=0.5,$ and with $r_x=r_x^{(2)}=0.5,\,r_y=r_y^{(2)}=2.$
Since $r_x^{(1)}>r_x^{(2)}$ and $r_y^{(1)}<r_y^{(2)},$ the intrinsic growth rate of population $x$ is larger in the time interval $[0,T^{(1)}),$ while the intrinsic growth rate of population $y$ is larger for $t\in[T^{(1)},T).$ Moreover, in the considered case, a comparison between prey and predators shows that the intrinsic growth rate of population $x$ is larger than that of population $y$ in the former time period, since $r_x^{(1)}>r_y^{(1)},$ and the situation is reversed in the latter time period, since $r_x^{(2)}<r_y^{(2)}.$
As shown in Figure \ref{biofr}, in the analyzed framework two linked together annuli $\mathcal C^{(1)}(e_1,e_2)$ and $\mathcal C^{(2)}(h_1,h_2)$ can be obtained for $e_1=53,\,e_2=56.9,\,h_1=43,\,h_2=44.4,$ intersecting in the four pairwise disjoint generalized rectangles denoted by $\mathcal R_j,\,j\in\{1,2,3,4\}.$ Software-assisted computations show that
$\tau^{(1)}(e_1)\approx 1.055 <\tau^{(1)}(e_2)\approx 1.195$ and $\tau^{(2)}(h_1)\approx 1.06<\tau^{(2)}(h_2)\approx 1.095.$ Hence, Theorem \ref{appbio} guarantees the existence of chaotic dynamics for the Poincar\'e map $\Psi=\Psi^{(2)}\circ\Psi^{(1)}$ associated with System \eqref{biot} with $r_i(t)$ defined as in \eqref{rt} for $i\in\{x,y\}$ provided that
$T^{(1)}>\frac{9\,\tau^{(1)}(e_1)\tau^{(1)}(e_2)}{2\,(\tau^{(1)}(e_2)-\tau^{(1)}(e_1))}\approx 40.52$ and
$T^{(2)}>\frac{7\,\tau^{(2)}(h_1)\tau^{(2)}(h_2)}{2\,(\tau^{(2)}(h_2)-\tau^{(2)}(h_1))}\approx 116.07.$ Thus, considering a period of one year, and assuming, to a first approximation, that $T^{(1)}=T^{(2)}=\frac{365}{2},$ so that the length of the time interval in which prey have a higher intrinsic growth rate coincides with the length of the time interval in which predators have a higher intrinsic growth rate, it is possible to apply Theorem \ref{appbio} to conclude that $\Psi$ induces chaotic dynamics on two symbols in every $\mathcal R_j,\,j\in\{1,2,3,4\}.$
\end{example}
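As a sanity check, the thresholds appearing in the example can be reproduced in a few lines of Python. The following sketch is ours; it simply evaluates the two lower bounds of Theorem \ref{appbio} on the software-assisted period estimates quoted above.

```python
# Lower bounds on T^(1) and T^(2) from Theorem "appbio", evaluated with the
# software-assisted period estimates quoted in Example "bioer".
tau1_e1, tau1_e2 = 1.055, 1.195    # periods on the boundaries of C^(1)(e1, e2)
tau2_h1, tau2_h2 = 1.06, 1.095     # periods on the boundaries of C^(2)(h1, h2)

def lower_bound(c, tau_in, tau_out):
    """c * tau_in * tau_out / (2 * (tau_out - tau_in)), with c = 9 or 7."""
    return c * tau_in * tau_out / (2.0 * (tau_out - tau_in))

T1_min = lower_bound(9, tau1_e1, tau1_e2)
T2_min = lower_bound(7, tau2_h1, tau2_h2)
print(round(T1_min, 2), round(T2_min, 2))    # 40.52 116.07

# The choice T^(1) = T^(2) = 365/2 made in the example exceeds both bounds.
assert 365.0 / 2.0 > max(T1_min, T2_min)
```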
\begin{figure}[ht]
\centering
\includegraphics[width=5.5cm,height=6cm]{fig6}
\caption{The two linked together annuli $\mathcal C^{(1)}(e_1,e_2)$ (whose boundary is colored in red) and $\mathcal C^{(2)}(h_1,h_2)$ (whose boundary is colored in blue) considered in Example \ref{bioer}, together with the corresponding generalized rectangles $\mathcal R_j,\,j\in\{1,2,3,4\}.$}\label{biofr}
\end{figure}
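We also remark that the switching law \eqref{rt} (and likewise \eqref{kt}) is simply a $T$-periodic piecewise-constant function. A minimal Python sketch, using for illustration the values of Example \ref{bioer} for population $x$ (the function name is ours):

```python
# T-periodic piecewise-constant coefficient as in \eqref{rt}: the value is
# val_1 on [0, T1) and val_2 on [T1, T), extended to the real line by
# T-periodicity.
def periodic_switch(t, val_1, val_2, T1, T):
    phase = t % T                    # reduce t to the fundamental period [0, T)
    return val_1 if phase < T1 else val_2

# Intrinsic growth rate of population x in Example "bioer", with
# T^(1) = T^(2) = 365/2:
rx = lambda t: periodic_switch(t, 2.0, 0.5, 182.5, 365.0)
assert rx(0.0) == 2.0 and rx(182.5) == 0.5
assert rx(365.0 + 10.0) == 2.0       # periodicity
```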
Analogous considerations to those made above hold true when we deal with a variation in carrying capacities in \eqref{bio}, assuming that they alternate in a periodic fashion between certain values, denoted by $K_x^{(I)}$ and $K_y^{(I)},$ for $t\in[0,T^{(I)})$, and other values, denoted by $K_x^{(II)}$ and $K_y^{(II)},$ for $t\in[T^{(I)},T^{(I)}+T^{(II)}).$
Supposing that the same alternation between the two regimes occurs with $\mathcal T$-periodicity, where $\mathcal T=T^{(I)}+T^{(II)},$ we can assume that we are dealing with System \eqref{biot} with periodic coefficients,
where $c(t)\equiv c$ for $c\in\{r_x,r_y,\alpha,\beta,\gamma,\delta\}$ and for $K_i(t),$ with $i\in\{x,y\},$ it holds that
\begin{equation}\label{kt}
K_i(t)=\left\{
\begin{array}{ll}
K_i^{(I)} \quad & \mbox{for } \, t\in[0,T^{(I)}) \\
\vspace{-2mm}\\
K_i^{(II)} \quad & \mbox {for } \, t\in[T^{(I)},\mathcal T)
\end{array}
\right.
\end{equation}
with $0<\alpha<\beta$ and $0<\gamma<\delta.$ The piecewise constant functions $K_i(t),$ with $i\in\{x,y\},$ are supposed to be extended to the whole real line by $\mathcal T$-periodicity. The center coincides with $S=\left(\frac{\gamma}{\delta},\frac{\alpha}{\beta}\right)$
both when the carrying capacities assume values $K_x^{(I)}$ and $K_y^{(I)},$ and thus we are in the regime whose dynamics are governed by the system that we will call $(I),$ as well as when they assume values $K_x^{(II)}$ and $K_y^{(II)},$ and thus we are in the regime whose dynamics are governed by the system that we will call $(II).$ In the former case, the orbits have equation
$$
\begin{array}{lll}
H^{(I)}(x,y)&\!\!\!\! = &\!\!\!\!\frac{1}{r_y}\left(-\alpha\log(y)+(\alpha-\beta K_y^{(I)})\log(K_y^{(I)}-y)\right)+\\
&&\!\!\!\!\frac{1}{r_x}\left(-\gamma\log(x)+(\gamma-\delta K_x^{(I)})\log(K_x^{(I)}-x)\right)=e,
\end{array}
$$
for some $e\ge e_0^{(I)}=H^{(I)}(S),$ while, in the latter case, the orbits have equation
$$
\begin{array}{lll}
H^{(II)}(x,y)&\!\!\!\! = &\!\!\!\!\frac{1}{r_y}\left(-\alpha\log(y)+(\alpha-\beta K_y^{(II)})\log(K_y^{(II)}-y)\right)+\\
&&\!\!\!\!\frac{1}{r_x}\left(-\gamma\log(x)+(\gamma-\delta K_x^{(II)})\log(K_x^{(II)}-x)\right)=h,
\end{array}
$$
for some $h\ge h_0^{(II)}=H^{(II)}(S).$\\
Definition \ref{lim} of linked together annuli and the subsequent comments are valid for the present context, too, when adapting notation, by replacing (1) with $(I)$ and (2) with $(II).$ A result analogous to Theorem \ref{appbio} also holds true when considering variations in carrying capacities. We omit its precise statement, focusing instead on a numerical example in which it can be applied to show the existence of chaotic dynamics for the Poincar\'e map $\Psi=\Psi^{(II)}\circ\Psi^{(I)}$ associated with System \eqref{biot} with $K_i(t)$ defined as in \eqref{kt} for $i\in\{x,y\}.$
\begin{example}\label{bioek}
Taking $\alpha=64<\beta=128,\,\gamma=96<\delta=112,\,r_x=r_y=1,$ System \eqref{bio} has a center in $S=\left(\frac{\gamma}{\delta},\frac{\alpha}{\beta}\right)=(0.86,0.5),$ both with $K_x=K_x^{(I)}=0.99,\,K_y=K_y^{(I)}=0.9,$ and with $K_x=K_x^{(II)}=0.9,\,K_y=K_y^{(II)}=0.99.$
Similar to Example \ref{bioer}, since $K_x^{(I)}>K_x^{(II)}$ and $K_y^{(I)}<K_y^{(II)},$ the carrying capacity for population $x$ is larger in the time interval $[0,T^{(I)}),$ while the carrying capacity for population $y$ is larger for $t\in[T^{(I)},\mathcal T).$ Moreover, a comparison between prey and predators shows that the environment can carry a larger population of prey than of predators in the former time period, since $K_x^{(I)}>K_y^{(I)},$ and the situation is reversed in the latter time period, since $K_x^{(II)}<K_y^{(II)}.$
As shown in Figure \ref{biofk}, in the analyzed framework two linked together annuli $\mathcal C^{(I)}(e_1,e_2)$ and $\mathcal C^{(II)}(h_1,h_2)$ can be obtained for $e_1=145.3,\,e_2=146.7,\,h_1=129.4,\,h_2=131.1,$ intersecting in the four pairwise disjoint generalized rectangles denoted by $\mathcal R_j,\,j\in\{1,2,3,4\}.$ Software-assisted computations for the periods show that
$\tau^{(I)}(e_1)\approx 0.355 <\tau^{(I)}(e_2)\approx 0.360$ and $\tau^{(II)}(h_1)\approx 0.635<\tau^{(II)}(h_2)\approx 0.650.$\footnote{Also in this setting, the increasing monotonicity of the period of the orbits follows from the results in \cite{Sc-85,Sc-90}.} Hence, a result analogous to Theorem \ref{appbio} guarantees the existence of chaotic dynamics for the Poincar\'e map $\Psi=\Psi^{(II)}\circ\Psi^{(I)}$ associated with System \eqref{biot} with $K_i(t)$ defined as in \eqref{kt} for $i\in\{x,y\}$ provided that
$T^{(I)}>\frac{9\,\tau^{(I)}(e_1)\tau^{(I)}(e_2)}{2\,(\tau^{(I)}(e_2)-\tau^{(I)}(e_1))}\approx 115.02$ and
$T^{(II)}>\frac{7\,\tau^{(II)}(h_1)\tau^{(II)}(h_2)}{2\,(\tau^{(II)}(h_2)-\tau^{(II)}(h_1))}\approx 96.31.$ Thus, considering a period of one year, and assuming e.g. that $T^{(I)}=T^{(II)}=\frac{365}{2},$ it is then possible to apply such an analogous result to conclude that $\Psi$ induces chaotic dynamics on two symbols in every $\mathcal R_j,\,j\in\{1,2,3,4\}.$
\end{example}
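As a complementary numerical check for Example \ref{bioek}, the minimum energy level $e_0^{(I)}=H^{(I)}(S)$ can be evaluated directly from the first integral, confirming that the chosen levels $e_1,\,e_2$ lie above it. The sketch below is ours (probe points and tolerances included):

```python
from math import log

# First integral H^(I) for regime (I), with the parameter values of Example
# "bioek"; used to check that the levels e_1, e_2 chosen there lie above the
# minimum value e_0^(I) = H^(I)(S) attained at the center S.
alpha, beta, gamma, delta = 64.0, 128.0, 96.0, 112.0
r_x = r_y = 1.0
Kx, Ky = 0.99, 0.9                     # K_x^(I), K_y^(I)

def H_I(x, y):
    return (1.0 / r_y) * (-alpha * log(y) + (alpha - beta * Ky) * log(Ky - y)) \
         + (1.0 / r_x) * (-gamma * log(x) + (gamma - delta * Kx) * log(Kx - x))

Sx, Sy = gamma / delta, alpha / beta   # center S = (6/7, 0.5)
e0 = H_I(Sx, Sy)                       # approximately 136.1
assert e0 < 145.3 < 146.7              # e_1 and e_2 are admissible levels
assert H_I(Sx + 0.01, Sy) > e0 and H_I(Sx, Sy + 0.01) > e0   # S is a minimum
```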
\begin{figure}[ht]
\centering
\includegraphics[width=5.5cm,height=6cm]{fig7-m}
\caption{The two linked together annuli $\mathcal C^{(I)}(e_1,e_2)$ (whose boundary is colored in red) and $\mathcal C^{(II)}(h_1,h_2)$ (whose boundary is colored in blue) considered in Example \ref{bioek}, together with the corresponding generalized rectangles $\mathcal R_j,\,j\in\{1,2,3,4\}.$}\label{biofk}
\end{figure}
\section{Conclusion}\label{sec-5}
In the present work, led by various possible interpretations of the same Hamiltonian system, we have shown how to apply the Linked Twist Maps (LTMs) method to prove the existence of chaotic dynamics for the associated Poincar\'e map, perturbing each time a different parameter in a periodic fashion and obtaining various geometrical configurations in the phase plane, when considering both the orbits of the original system and those of the perturbed one.
Namely, we can try to use the LTMs technique whenever we have a conservative system with a nonisochronous center and it is sensible to assume a periodic variation for one or more model parameters\footnote{See e.g. \cite{Maea-10}, where the authors apply the LTMs method to a class of periodically perturbed
planar Hamiltonian systems, dealing also with perturbations by means of dissipative terms.}, whether or not they influence the center, e.g. due to a seasonal effect. We just stress that, if the position of the center is not affected by the change in the considered parameter value, it is necessary to check that the shape of the orbits changes in a way that allows one to detect at least two linked together annuli.\\
We finally remark that in the present contribution we have proposed planar applications of the LTMs technique. Nonetheless, such a method can be employed in (at least) three-dimensional frameworks, too, when dealing with linked together cylindrical sets. See \cite{RZZa-14} for a 3D non-Hamiltonian example concerning a predator-prey model in a periodically varying environment. We will investigate a three-dimensional continuous-time model connected with some game-theoretic context in a future work, taking inspiration e.g. from the 3D setting considered in \cite{Anea-19} (cf. Footnote 6).
In regard to economic settings, we recall that the SAP method, on which the LTMs technique is based, has been recently used to prove the existence of chaotic dynamics for some discrete-time triopoly game models in \cite{Pi-15,Pi-16}, while in \cite{Meea-09} the SAP technique had been applied to one- and two-dimensional discrete-time economic models, concerning overlapping generations and duopoly frameworks.
\section*{Acknowledgments}
\noindent
Many thanks to Professor J. Hofbauer for useful discussions about the monotonicity of the period of the orbits.
\newpage
\section{Introduction}
Music source separation is a problem that has been studied for a few decades now: given an audio track with several instruments mixed together (a regular MP3 file, for example), how can it be separated into its component instruments? The obvious application of this problem is in music production - creating karaoke tracks, highlighting select instruments in an audio playback, etc. There is another reason why this is a useful problem to study: it acts as a powerful enabler for several other applications in music informatics. This is because complex, multi-instrument music tracks are not easily processed by such algorithms in their raw audio form. However, once individual instruments have been isolated from such a track, they can relatively easily be transcribed by contemporary algorithms.
Up until the early 2010s, the most common approaches to this problem were not data-driven, but rooted in exploiting known statistical properties of music signals, or in signal processing theory. However, as with many fields, that has changed in the last few years with the advent of cheaper computing power and the proliferation of research in machine learning. The best performance on this problem is currently achieved by deep learning-based methods. These methods feed the mixture to the input of the network and the source(s) as targets (or rather, typically, the spectrograms of each, since many of the patterns to be discovered are in the frequency domain) to learn a function mapping between the two.
These deep learning approaches use a pixel-level loss as the cost function, averaging the L2 losses between corresponding pixels in the output and target spectrograms. (The term 'pixel' here, and in the rest of the paper is used to denote time-frequency bins in the spectrogram, because the spectrogram is treated as an image for the purpose of our work.) However, we believe that this is not the ideal loss function for this problem. This is because it does not explicitly give weight to higher-level patterns in spectrograms, which could exhibit similarity between similar pieces of audio. For example, non-pitched instruments like drums have signal present across frequencies, and therefore exhibit vertical lines in their spectrograms. On the other hand, vocal spectrograms display harmonicity, i.e. horizontal lines. Thus, we propose that pairing the pixel-level L2 loss with a loss between higher-level patterns extracted from the spectrogram could lead to improved performance. For the latter, we port the loss terms developed by the authors in \cite{b1} for the visual domain, treating spectrograms as images for this purpose. This is not an ideal treatment, and better alternatives will be discussed in Section \ref{future-work} on future work.
The rest of this paper is organized as follows: In the following sections we introduce the core deep learning model for music source separation that we have utilized in our work, and briefly summarize the learning from \cite{b1} in using VGG feature maps to compute the spectrogram feature losses. After laying down related work, we describe in detail our experiments and their results. We summarize the implications of these results and finally discuss ideas to build further on this work.
\section{Related Work}
To the best of our knowledge, there is no existing work on the application of such spectrogram feature losses to music source separation. The general idea of applying feature/style reconstruction losses as proposed in \cite{b1} for the visual domain, to an audio domain problem has been explored by some researchers, with mixed results. In \cite{b8}, the authors propose an audio style transfer using, as one of the approaches, style reconstruction losses extracted using the VGG network, similar to \cite{b1}. In their case, the VGG does not yield results of acceptable quality (as per subjective tests) but using a shallow CNN does. In \cite{b9}, the authors explore audio generation as an audio style transfer problem, using similar loss terms. More generally, the idea of perceptual losses for audio is still an open area of research, where the task is to find loss measures that correlate better with subjective measures of audio quality. However, while the feature losses we explore in our work are derived from perceptual losses in the image domain, they are more directly a descriptor of visual patterns in audio spectrograms than being a perceptual descriptor of the underlying audio.
\section{MMDenseNets for Music Source Separation}
Multi-scale Multi-band DenseNets, or MMDenseNets, are a CNN-based deep network model for music source separation. They were proposed in \cite{b2}, and variations of this model achieve the current State-of-the-Art performance on the music source separation task, based on the SiSEC - the Signal Separation Evaluation Campaign. This is a benchmark competition for this task that we discuss in greater detail in Section \ref{dbm}. In this section, we provide a brief overview of this model.
At the input of the MMDenseNet is the spectrogram of the mixed-up song, in its STFT (Short-Time Fourier Transform) representation. Each source to be separated has its own network and set of weights, and for each network, the training targets comprise the corresponding pure source spectrograms. Since this is a real-valued neural network, the phase of the mixture spectrogram is isolated and only the magnitude is fed into the network. Similarly, during training, the target consists only of the magnitude of the source spectrogram. In order to recover the estimated time-domain source signal during inference, the phase of the input mixture spectrogram is directly applied to each source spectrogram, and an inverse-STFT taken of the result. In case the data is stereophonic, i.e. contains more than one channel, this information is fed into the MMDenseNet as a multi-channel spectrogram image.
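The magnitude/phase bookkeeping described above can be sketched with numpy as follows; the array shapes and values are arbitrary toy data, and the actual inverse STFT is only indicated:

```python
import numpy as np

# Sketch of the phase-reuse step: the network predicts only the magnitude of a
# source; the mixture's phase is attached to it before the inverse STFT.
rng = np.random.default_rng(0)
mix_stft = rng.standard_normal((5, 4)) + 1j * rng.standard_normal((5, 4))  # toy STFT
est_mag = np.abs(rng.standard_normal((5, 4))) + 0.1   # predicted source magnitude

phase = np.angle(mix_stft)                 # phase taken from the mixture
est_stft = est_mag * np.exp(1j * phase)    # complex source estimate

assert np.allclose(np.abs(est_stft), est_mag)   # magnitude is the prediction
assert np.allclose(np.angle(est_stft), phase)   # phase is the mixture's
# An inverse STFT (e.g. np.fft.irfft per frame plus overlap-add) would then
# recover the time-domain source estimate.
```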
The network architecture itself is based on the DenseNet, which is a deep CNN where every layer's output is directly fed to every other layer succeeding it. For greater detail on the DenseNet architecture, the reader is referred to the original paper \cite{b3}. Furthermore, while the original DenseNet is a classifier and periodically downsamples the original image, in the current application an image needs to be created at the output. For this purpose, the MMDenseNet includes an upsampling path, also comprised of DenseNet blocks, resulting in an autoencoder style architecture. What makes the MMDenseNet especially unique is its use of sub-band networks - in simple language, rather than sharing the convolution kernel across the spectrogram image, it trains separate convolutional layers (and therefore kernels) for different frequency bands. It achieves this in practice by splitting the input spectrogram along the frequency axis into two or more sub-images - each of which can be thought of as representing a sub-(frequency)band image. Each of these sub-band images is propagated through its own DenseNet autoencoder as described above. Towards the output, feature maps from these sub-band DenseNets are joined back along the frequency axis. The MMDenseNet architecture is illustrated in Figure \ref{fig:Illustration-of-complete-mmdensenet}.
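The band-splitting idea can be illustrated schematically: the spectrogram is split along the frequency axis, each sub-band is processed by its own transform, and the results are joined back along frequency. The sketch below uses placeholder per-band operations in place of the sub-band DenseNets; the band edges and function names are ours:

```python
import numpy as np

# Schematic multi-band processing: split along frequency, apply a separate
# (here: placeholder) transform per band, concatenate back along frequency.
def multiband_process(spec, n_bands, band_ops):
    bands = np.array_split(spec, n_bands, axis=0)      # axis 0 = frequency bins
    processed = [op(band) for op, band in zip(band_ops, bands)]
    return np.concatenate(processed, axis=0)

spec = np.arange(12.0).reshape(6, 2)                   # 6 freq bins x 2 frames
out = multiband_process(spec, 2, [lambda b: 2 * b,     # "low-band network"
                                  lambda b: 3 * b])    # "high-band network"
assert out.shape == spec.shape
assert np.allclose(out[:3], 2 * spec[:3]) and np.allclose(out[3:], 3 * spec[3:])
```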
As a post-processing step during inference, the predictions of the network for each source are scaled, for each time-frequency bin, so that their sum is equal to the original mixture at the corresponding time-frequency bin. This is akin to single-channel Wiener filtering, and is also part of the procedure established in \cite{b2}.
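A minimal numpy sketch of this per-bin rescaling, with toy magnitudes and four hypothetical sources:

```python
import numpy as np

# Sketch of the single-channel Wiener-like post-processing: per time-frequency
# bin, rescale the source estimates so that they sum to the mixture magnitude.
rng = np.random.default_rng(1)
mix_mag = np.abs(rng.standard_normal((5, 4))) + 0.1
ests = [np.abs(rng.standard_normal((5, 4))) + 0.1 for _ in range(4)]  # 4 sources

total = np.sum(ests, axis=0)                  # strictly positive in this toy case
masks = [e / total for e in ests]             # soft masks, sum to 1 per bin
scaled = [m * mix_mag for m in masks]

assert np.allclose(np.sum(scaled, axis=0), mix_mag)   # estimates now sum to mix
```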
\begin{figure}
\begin{center}
\includegraphics[scale=0.15]{images/mm_dense_net}
\end{center}
\caption{Illustration of complete MMDenseNet architecture. Reproduced with permission from
\cite{b2}\label{fig:Illustration-of-complete-mmdensenet}}
\end{figure}
\section{Spectrogram Feature Loss}\label{vgg-loss}
In this section, we explain how a high-level spectrogram feature loss can be computed using the VGG network. This network refers to a deep convolutional neural network developed by Oxford's Visual Geometry Group (VGG - hence the name of the network) for the purpose of image classification \cite{b4}. The network uses a succession of convolution, \textit{relu} activation and max-pooling layers to extract image features, plugging in a fully connected layer followed by a softmax layer in the end for performing the classification task. This network was among the winners in the ImageNet challenge in 2014.
While the purpose of the VGG network as it was developed was image classification, it is of interest to us for our problem because it can also be viewed as a feature extractor. Successive hidden layers of the network compute ’higher-level’ image features, like shapes and forms. So, instead of comparing two images only on their pixel values, we could use the VGG as a feature extractor to obtain high-level features and compare the images on these features as well. It was this insight that was used by the authors in \cite{b1} to do a style-transfer between two images.
For the purpose of this work, we treat the high-level spectrogram feature loss calculation as a black-box, computed exactly as in \cite{b1}, i.e. using the VGG network and computing two related loss terms - the feature and style reconstruction losses. We use the same layers of the VGG network for computing these loss terms as in \cite{b1}. Throughout our experiments, which we shall describe in Section \ref{experiments}, we give a weight of 0.5 to the regular pixel-level L2 loss and 0.25 to each of these two high-level feature losses. In the rest of this paper, we use the term \textit{composite spectrogram loss} for the weighted combination. We arrived at these values for the weights empirically. In particular, we also tried using only the high-level feature losses but found the performance to be inferior for this setting. Ideally, we would use an analog to the VGG network for the audio or music domain, to optimize for extracting audio-specific features. However no such publicly available and rigorously tested network exists. In Section \ref{future-work} on future work, we discuss how this black-box calculation can be better customized for this application.
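Treating the feature extractor as a black box as above, the composite spectrogram loss can be sketched as follows. The feature maps would in practice come from the chosen VGG layers; here they are stand-in arrays, and the Gram-matrix normalization shown is one common convention:

```python
import numpy as np

# Sketch of the composite spectrogram loss: pixel L2, feature-reconstruction
# and Gram-matrix style losses combined with weights 0.5 / 0.25 / 0.25.
def gram(feat):
    """Gram matrix of a (channels, height*width) feature map, as in style losses."""
    c, n = feat.shape
    return feat @ feat.T / (c * n)

def composite_loss(out_spec, tgt_spec, out_feat, tgt_feat):
    pixel = np.mean((out_spec - tgt_spec) ** 2)
    feature = np.mean((out_feat - tgt_feat) ** 2)
    style = np.mean((gram(out_feat) - gram(tgt_feat)) ** 2)
    return 0.5 * pixel + 0.25 * feature + 0.25 * style

rng = np.random.default_rng(2)
spec = rng.standard_normal((8, 8))
feat = rng.standard_normal((4, 16))             # stand-in for a VGG feature map
assert composite_loss(spec, spec, feat, feat) == 0.0   # identical inputs -> 0
assert composite_loss(spec + 1.0, spec, feat, feat) > 0.0
```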
\section{Experiments}
\subsection{Dataset, Benchmarks and Metrics}\label{dbm}
The SiSEC is a biennial forum where researchers in signal separation - across a variety of signal domains (eg. bio-medical, music, etc.) compare the performance of their algorithms on a standardized task. The music source separation sub-task currently involves separation of 50 professionally recorded stereo tracks, across varying genres like pop, rock, rap etc., into \textit{vocals}, \textit{drums}, \textit{bass} and \textit{other}, i.e. the collection of remaining instruments as one track. Since the researchers report detailed standardized metrics, and also discuss their approach at varying lengths, this is a good resource to glean the State-of-the-Art for this problem.
For this sub-task SiSEC provides a dataset called \textit{musdb18} \cite{b10}. It consists of 150 professionally-recorded tracks across genres, of which the actual testing is to be done on 50 tracks, while the other 100 can be used for training in supervised approaches. For each track, the ’true’ isolated \textit{vocals}, \textit{drums}, \textit{bass} and \textit{other} tracks are provided, along with the main mixed track.
Performance is evaluated on a collection of specialized metrics developed and widely used by the research community in blind source separation, called BSS Eval \cite{b5}. These measures are somewhat akin to an SNR measure. In the following sections of this paper, we will compare performances on the Signal-to-Distortion Ratio (SDR) as it is the overarching metric that encompasses the other metrics.
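For intuition, a simplified SNR-style proxy for the SDR can be written directly; this is not the full BSS Eval computation, which additionally decomposes the error into interference and artifact terms:

```python
import numpy as np

# Simplified SNR-style proxy for SDR: signal energy over error energy, in dB.
def sdr_proxy(reference, estimate):
    err = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(err ** 2))

ref = np.ones(1000)
assert abs(sdr_proxy(ref, 0.9 * ref) - 20.0) < 1e-6   # 10*log10(1/0.01) = 20 dB
assert sdr_proxy(ref, 0.99 * ref) > sdr_proxy(ref, 0.9 * ref)  # better estimate
```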
\subsection{Baseline Model Implementation}\label{baseline-model}
Since the MMDenseNet model is not open-sourced, we created our own implementation following the general guidelines listed in \cite{b2} and applied it to the SiSEC 2018 task. The parameters for the MMDenseNet architecture in our implementation are the same as those given in Table 1 of \cite{b2}. Other important implementation details are as follows: We use 2048 samples for the FFT, with a hop size of 1024. Each spectrogram contains 128 time frames. We use RMSProp for optimization, starting with a learning rate of 0.001 and dropping it to 0.0001 when learning saturates. Finally, we use a bottleneck-compressed version of the DenseNet as explained in \cite{b3}, with a factor of 4 for the bottleneck and a factor of 0.2 for the compression.
As described in the SiSEC 2018 paper \cite{b6}, we calculate the median value of the SDR for each source over all time windows. Figure \ref{fig:MMDenseNet-vocals} shows the boxplot of the SDR thus obtained over all songs in the \textit{musdb18} test database, for each method submitted to SiSEC 2018, for the \textit{vocals} source as an illustration of our relative performance. Our relative performance is similar for the other sources. Our method is labelled OURS. While our focus was more to get a reasonably performant working implementation of a deep learning music source separation system to be able to compare the pixel-level loss with composite spectrogram losses, we do come close to the State-of-the-Art as well. It should be noted that among the submissions in Figure \ref{fig:MMDenseNet-vocals}, TAK1, TAK2 and TAK3 are based on the MMDenseNet. The gap in performance between our model and these submissions can be explained by a mix of reasons - chiefly, their use of data augmentation, the use of an LSTM layer in addition to the DenseNet CNNs, and the use of specialized architectures for different sources (e.g., increasing the complexity of the lower-frequency sub-band network for the \textit{bass} source).
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{images/mm_dense_net_vocals_SDR_only}
\end{center}
\caption{Boxplots (over all the test songs) of our baseline model's performance compared to other SiSEC 2018 submissions,
for the \textit{vocals} source. The SDR should be viewed as the overall summary metric, with a higher SDR implying better performance. \label{fig:MMDenseNet-vocals}}
\end{figure}
\subsection{Pixel-level vs. Composite Spectrogram Loss Comparison Methodology}\label{experiments}
With the baseline model implemented as above, we conducted a series of experiments to compare its performance with pixel loss, with the same model tuned with the composite spectrogram loss as defined in Section \ref{vgg-loss}. Below we describe the settings for each experiment. In all the experiments, training was done with the development set of the \textit{musdb18} database and the reported SDR is on its test set. Our experiments cover the sources \textit{vocals}, \textit{drums} and \textit{bass}.
\begin{itemize}
\item \textbf{Experiment A:} In this experiment, we compared the performance of the \textit{vocals} source isolation obtained by the pixel loss-tuned model with that of the composite spectrogram loss-tuned model. The parameter settings of the model in both the cases were identical and the same as those described in Section \ref{baseline-model}. We repeated this experiment four times, to reduce false inferences due to experimental randomness and thus to be able to comment on the statistical significance of the observed difference in performance between the two models, if any. Machine learning optimization is a random process - with some of the randomness introduced by the optimization algorithm, and some introduced by the parallel computing typically used for the optimization (eg. GPUs). We used Keras as our implementation framework, and while it can control for the former source of randomness through the use of random seeds, there is currently no way to control for the latter.
\item \textbf{Experiment B:} This was same as the above experiment, conducted for the \textit{drums} source (instead of \textit{vocals}).
\item \textbf{Experiment C:} This was also same as the above experiment, conducted for the \textit{bass} source.
\item \textbf{Experiment D:} In this experiment, we once again compared the \textit{vocals} source. However we did this with a single-channel model in place of the stereo (two-channel) model used in the above experiments (The \textit{musdb18} songs are available as two-channel recordings. A single-channel version can be created by averaging the two channels). The motivation to do this experiment was to test the composite spectrogram loss under more diverse use-cases and settings. Like the above experiments, this was conducted four times as well.
\end{itemize}
\section{Results and Discussion}
We discuss the results for each of the experiments described in the previous section.
\begin{itemize}
\item \textbf{Experiment A:} In Table \ref{tab:experiment-a-results}, we display:
\begin{enumerate}
\item The pixel-level L2 loss value obtained for the validation set upon convergence for both the models, for the \textit{vocals} source, in four independent runs. It should be noted, for the composite spectrogram loss-tuned model, that the pixel-level L2 loss is one of the components of the overall loss, as explained in Section \ref{vgg-loss}. For this model, we chose the epoch with the minimum composite validation loss, as one would usually do, but report in this table only the pixel-level L2 loss component, for a like-to-like comparison.
\item The SDR value obtained over the \textit{musdb18} test dataset by both the models, for the \textit{vocals} source, in the above four independent runs. The figure reported here is the median over the test dataset, as explained in Section \ref{baseline-model}.
\end{enumerate}
While there seems to be a visible difference in performance between the two models, with the composite spectrogram loss-tuned model outperforming the pixel loss-tuned model, we run the SDR results through a t-test for statistical rigor. The output from these tests conducted in R is also displayed in Table \ref{tab:experiment-a-results}. The differences are significant at a 5\% significance level. On this sample, the composite spectrogram loss-tuned model delivers a 0.27 dB improvement in performance.
\begin{table}
\caption{Comparison of source separation performance for the \textit{vocals} source between the pixel loss-tuned model (Model 1) and the composite spectrogram loss-tuned model (Model 2). Lower val. loss and higher SDR are better}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Run & \multicolumn{2}{c|}{Min. pixel val. loss (L2)} & \multicolumn{2}{c|}{SDR (dB)}\tabularnewline
\hline
\hline
& Model 1 & Model 2 & Model 1 & Model 2\tabularnewline
\hline
1 & $0.59$ & $\textbf{0.47}$ & $3.70$ & $\textbf{3.98}$\tabularnewline
\hline
2 & $0.59$ & $\textbf{0.50}$ & $3.72$ & $\textbf{3.93}$\tabularnewline
\hline
3 & $0.59$ & $\textbf{0.50}$ & $3.83$ & $\textbf{4.06}$\tabularnewline
\hline
4 & $0.60$ & $\textbf{0.48}$ & $3.73$ & $\textbf{3.84}$\tabularnewline
\hline
\end{tabular}
\end{center}
\begin{center}
\bigskip
Welch Two Sample t-test
\begin{tabular}{lc}
\hline
t-statistic & -4.52\tabularnewline
df & 4.47\tabularnewline
p-value & 0.008\tabularnewline
Mean SDR with pixel loss & 4.32\tabularnewline
Mean SDR with composite spectrogram loss & 4.59\tabularnewline
95\% confidence interval (Difference of means) & -0.43, -0.11\tabularnewline
\hline
\end{tabular}
\end{center}
\label{tab:experiment-a-results}
\end{table}
\item \textbf{Experiment B:} Similar to the above experiment, Table \ref{tab:experiment-b-results} shows the validation pixel-level L2 loss upon convergence, and the median SDR obtained over the \textit{musdb18} test set for both the models, for the source \textit{drums}. The table also gives the results of the t-test to check if the SDR results are significantly different. We can see, once again, that the composite spectrogram loss-tuned model outperforms the pixel loss-tuned model. However, the difference is more borderline at the 5\% significance level for \textit{drums}. On this sample, the composite spectrogram loss-tuned model delivers a 0.18 dB improvement in performance.
\begin{table}
\caption{Comparison of source separation performance for the \textit{drums} source between the pixel loss-tuned model (Model 1) and the composite spectrogram loss-tuned model (Model 2). Lower val. loss and higher SDR are better}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Run & \multicolumn{2}{c|}{Min. pixel val. loss (L2)} & \multicolumn{2}{c|}{SDR (dB)}\tabularnewline
\hline
\hline
& Model 1 & Model 2 & Model 1 & Model 2\tabularnewline
\hline
1 & $0.46$ & $\textbf{0.37}$ & $4.70$ & $\textbf{4.88}$\tabularnewline
\hline
2 & $0.46$ & $\textbf{0.37}$ & $4.53$ & $\textbf{4.65}$\tabularnewline
\hline
3 & $0.48$ & $\textbf{0.37}$ & $4.52$ & $\textbf{4.71}$\tabularnewline
\hline
4 & $0.46$ & $\textbf{0.38}$ & $4.64$ & $\textbf{4.88}$\tabularnewline
\hline
\end{tabular}
\end{center}
\begin{center}
\bigskip
Welch Two Sample t-test
\begin{tabular}{lc}
\hline
t-statistic & -2.49\tabularnewline
df & 5.53\tabularnewline
p-value & 0.051\tabularnewline
Mean SDR with pixel loss & 4.60\tabularnewline
Mean SDR with composite spectrogram loss & 4.78\tabularnewline
95\% confidence interval (Difference of means) & -0.37, 0.00\tabularnewline
\hline
\end{tabular}
\end{center}
\label{tab:experiment-b-results}
\end{table}
\item \textbf{Experiment C:} Similar to the above experiment, Table \ref{tab:experiment-c-results} shows the validation pixel-level L2 loss upon convergence, and the median SDR obtained over the \textit{musdb18} test set for both the models, for the source \textit{bass}. While the composite spectrogram loss-tuned model consistently converges to a lower validation L2 loss, in terms of SDR performance the two models seem to be nearly identical, at least based on these samples. We do not run these SDRs through a t-test.
\begin{table}
\caption{Comparison of source separation performance for the \textit{bass} source between the pixel loss-tuned model (Model 1) and the composite spectrogram loss-tuned model (Model 2). Lower val. loss and higher SDR are better}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Run & \multicolumn{2}{c|}{Min. pixel val. loss (L2)} & \multicolumn{2}{c|}{SDR (dB)}\tabularnewline
\hline
\hline
& Model 1 & Model 2 & Model 1 & Model 2\tabularnewline
\hline
1 & $0.64$ & $\textbf{0.49}$ & $\textbf{4.10}$ & $4.03$\tabularnewline
\hline
2 & $0.61$ & $\textbf{0.50}$ & $4.00$ & $\textbf{4.01}$\tabularnewline
\hline
3 & $0.61$ & $\textbf{0.51}$ & $4.09$ & $\textbf{4.14}$\tabularnewline
\hline
4 & $0.63$ & $\textbf{0.51}$ & $4.04$ & $\textbf{4.06}$\tabularnewline
\hline
\end{tabular}
\end{center}
\label{tab:experiment-c-results}
\end{table}
\item \textbf{Experiment D:} Table \ref{tab:experiment-d-results} shows the validation pixel-level L2 loss upon convergence, and the median SDR obtained over the \textit{musdb18} test set for both the models, for the source \textit{vocals}, for a single-channel model. The table also gives the results of the t-test to check if the SDR results are significantly different. We can see that the composite spectrogram loss-tuned model outperforms the pixel loss-tuned model at a 5\% significance level for the \textit{vocals} source under these settings as well. On this sample, the former delivers a 0.2 dB improvement in performance.
\begin{table}
\caption{Comparison of source separation performance for the \textit{vocals} source for a single-channel model between the pixel loss-tuned model (Model 1) and the composite spectrogram loss-tuned model (Model 2). Lower val. loss and higher SDR are better}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Run & \multicolumn{2}{c|}{Min. pixel val. loss (L2)} & \multicolumn{2}{c|}{SDR (dB)}\tabularnewline
\hline
\hline
& Model 1 & Model 2 & Model 1 & Model 2\tabularnewline
\hline
1 & $0.59$ & $\textbf{0.47}$ & $3.70$ & $\textbf{3.98}$\tabularnewline
\hline
2 & $0.59$ & $\textbf{0.50}$ & $3.72$ & $\textbf{3.93}$\tabularnewline
\hline
3 & $0.59$ & $\textbf{0.50}$ & $3.83$ & $\textbf{4.06}$\tabularnewline
\hline
4 & $0.60$ & $\textbf{0.48}$ & $3.73$ & $\textbf{3.84}$\tabularnewline
\hline
\end{tabular}
\end{center}
\begin{center}
\bigskip
Welch Two Sample t-test
\begin{tabular}{lc}
\hline
t-statistic & -3.81\tabularnewline
df & 5.06\tabularnewline
p-value & 0.012\tabularnewline
Mean SDR with pixel loss & 3.75\tabularnewline
Mean SDR with composite spectrogram loss & 3.95\tabularnewline
95\% confidence interval (Difference of means) & -0.35, -0.07\tabularnewline
\hline
\end{tabular}
\end{center}
\label{tab:experiment-d-results}
\end{table}
\end{itemize}
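As a sanity check, the Welch statistics reported in the tables above can be recomputed directly from the per-run SDR values. The Python sketch below is purely illustrative (it is not the analysis code used for this paper, which produced the ``Welch Two Sample t-test'' summaries above); it reproduces the t-statistic and Welch--Satterthwaite degrees of freedom for Experiments B and D.

```python
from math import sqrt
from statistics import mean, variance  # variance() is the sample (n-1) variance


def welch_t(a, b):
    """Welch's two-sample t-statistic and Welch-Satterthwaite degrees of freedom."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df


# Per-run SDRs (dB) from the tables above: Model 1 = pixel loss, Model 2 = composite loss
drums = ([4.70, 4.53, 4.52, 4.64], [4.88, 4.65, 4.71, 4.88])          # Experiment B
vocals_single = ([3.70, 3.72, 3.83, 3.73], [3.98, 3.93, 4.06, 3.84])  # Experiment D

for name, (m1, m2) in [("drums", drums), ("vocals (1ch)", vocals_single)]:
    t, df = welch_t(m1, m2)
    print(f"{name}: t = {t:.2f}, df = {df:.2f}")
# drums: t = -2.49, df = 5.53         -- matches Table for Experiment B
# vocals (1ch): t = -3.81, df = 5.06  -- matches Table for Experiment D
```

The same computation applies verbatim to the other experiments reporting a t-test.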
We also show, in Figure \ref{fig:L2-loss-trajectories}, the validation pixel-level L2 loss trajectory for both the models, averaged across the four runs, for Experiment A (two-channel vocals). The trajectories for the other sources are similar. The difference in performance between the two models is once again evident from this plot, with the composite spectrogram loss-tuned model converging to a lower pixel-level validation loss.
\begin{figure}
\begin{center}
\includegraphics[scale=0.35]{images/vgg_vs_l2_loss_vocals}
\end{center}
\caption{Trajectory of validation pixel-level L2 losses for the composite spectrogram loss-tuned model vs. pixel-level loss-tuned model when training for $24$ epochs for the \textit{vocals} source \label{fig:L2-loss-trajectories}}
\end{figure}
The results from our above experiments demonstrate that using a loss derived from high-level spectrogram patterns to tune the model does indeed improve performance over using only a pixel-level loss, for the \textit{vocals} and \textit{drums} sources, by 0.3 dB and 0.2 dB respectively (for the multi-channel model) over the samples in our study. While this is in itself a valuable result and an improvement over the baseline model, it also makes the case for further exploration of loss functions appropriate for music (or more generally, audio) data.
\section{Conclusion and Future Work}\label{future-work}
In this paper, we have demonstrated how using a high-level spectrogram feature loss, in addition to the standard pixel-level loss, can improve the performance of a machine learning-based music source separation system. We believe that this is an improvement that could be generalized to related systems dealing with audio data. One area of improvement to the current work could be to explore spectrogram feature losses more customized to the audio/music domain. For instance, an audio classifier could be built and used in place of the VGG net; for the current application, this could be a network for discriminating between different musical instrument sounds. Secondly, to study the generalizability of our observation within deep learning-based music source separation, we could explore implementing alternative models described in the literature for this task, with spectrogram feature losses (or their analog, for models that process the audio as a 1D signal).
\section{Introduction}
We consider the initial-value problem for the focusing energy-critical nonlinear Schr\"odinger equation in dimension $d\geq 3$,
\begin{equation}\label{nls}
\begin{cases}
\ iu_t+\Delta u=F(u)\\
\ u(t_0)=u_0\in \dot H^1_x({\mathbb{R}}^d),
\end{cases}
\end{equation}
where the nonlinearity is given by $F(u)=- |u|^{{\frac 4{d-2}}}u$. As indicated in the title, our main results are for dimensions $d\geq 5$;
nevertheless, many of our arguments remain valid in dimensions three and four.
The name `energy-critical' refers to the fact that the scaling symmetry
\begin{equation}\label{scaling}
u(t,x) \mapsto u_\lambda(t,x):= \lambda^{\frac{d-2}2} u( \lambda^2 t, \lambda x)
\end{equation}
leaves both the equation and the energy invariant. The energy of a solution is defined by
\begin{equation*}
E(u(t)) := \int_{{\mathbb{R}}^d} \bigl(\tfrac 12 |\nabla u(t,x)|^2 - \tfrac{d-2}{2d} |u(t,x)|^{\frac{2d}{d-2}}\bigr)\, dx
\end{equation*}
and is conserved under the flow; see Theorem~\ref{T:local} below. We refer to the gradient term in the formula above as the
\emph{kinetic energy} and to the second term as the \emph{potential energy}. Note that the potential energy is negative,
which expresses the focusing nature of the nonlinearity.
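For concreteness, the claimed invariance of the energy can be checked directly: if $u_\lambda$ is given by \eqref{scaling}, then $\nabla u_\lambda(t,x) = \lambda^{\frac d2}(\nabla u)(\lambda^2 t,\lambda x)$ and $|u_\lambda(t,x)|^{\frac{2d}{d-2}} = \lambda^d\, |u(\lambda^2 t,\lambda x)|^{\frac{2d}{d-2}}$, so the substitution $y=\lambda x$ (with $dx = \lambda^{-d}\,dy$) gives
\begin{align*}
E(u_\lambda(t)) = \int_{{\mathbb{R}}^d} \bigl(\tfrac 12 |\nabla u(\lambda^2 t,y)|^2 - \tfrac{d-2}{2d} |u(\lambda^2 t,y)|^{\frac{2d}{d-2}}\bigr)\, dy = E(u(\lambda^2 t));
\end{align*}
both the kinetic and the potential term pick up the factor $\lambda^d \cdot \lambda^{-d} = 1$.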
\begin{definition}[Solution] A function $u: I \times {\mathbb{R}}^d \to {\mathbb{C}}$ on a non-empty time interval $t_0\in I \subset {\mathbb{R}}$
is a \emph{solution} (more precisely, a strong $\dot H^1_x({\mathbb{R}}^d)$ solution) to \eqref{nls} if it lies in the class $C^0_t \dot
H^1_x(K \times {\mathbb{R}}^d) \cap L^{2(d+2)/(d-2)}_{t,x}(K \times {\mathbb{R}}^d)$ for all compact $K \subset I$, and obeys the Duhamel
formula
\begin{align}\label{old duhamel}
u(t_1) = e^{i(t_1-t_0)\Delta} u(t_0) - i \int_{t_0}^{t_1} e^{i(t_1-t)\Delta} F(u(t))\ dt
\end{align}
for all $t_1 \in I$. We refer to the interval $I$ as the \emph{lifespan} of $u$. We say that $u$ is a \emph{maximal-lifespan
solution} if the solution cannot be extended to any strictly larger interval. We say that $u$ is a \emph{global solution} if $I
= {\mathbb{R}}$.
\end{definition}
The condition that $u$ is in $L^{2(d+2)/(d-2)}_{t,x}$ locally in time is natural for several reasons (see
Theorem~\ref{T:local}): (i) By the Strichartz inequality, all solutions to the linear problem lie in this space. (ii) Local
solutions exist in this space. (iii) Finiteness of this norm on the maximal interval of existence implies that the solution is
global and scatters both forward and backward in time. (iv) A posteriori, one can show that the locally
$L^{2(d+2)/(d-2)}_{t,x}$ solution is in fact the only solution belonging to $C^0_t \dot H^1_x(I \times {\mathbb{R}}^d)$.
In view of point (iii) above, it is natural to define the \emph{scattering size} of a solution to \eqref{nls}
on a time interval $I$ by
$$
S_I(u):= \int_I \int_{{\mathbb{R}}^d} |u(t,x)|^{\frac{2(d+2)}{d-2}}\, dx \,dt.
$$
Associated to the notion of solution is a corresponding notion of blowup. As we will see in Theorem~\ref{T:local}, this
precisely corresponds to the impossibility of continuing the solution.
\begin{definition}[Blowup]\label{D:blowup}
We say that a solution $u$ to \eqref{nls} \emph{blows up forward in time} if there exists a time $t_1 \in I$ such that
$$ S_{[t_1, \sup I)}(u) = \infty$$
and that $u$ \emph{blows up backward in time} if there exists a time $t_1 \in I$ such that
$$ S_{(\inf I, t_1]}(u) = \infty.$$
\end{definition}
The local theory for \eqref{nls} was worked out by Cazenave and Weissler \cite{cwI}. They performed a fixed-point argument to
construct local-in-time solutions for arbitrary initial data in $\dot H^1_x({\mathbb{R}}^d)$; however, as is the case with critical
equations, the resulting time of existence depends on the profile of the initial data and not merely on its $\dot H^1_x$-norm.
They also constructed global solutions from small initial data. Unconditional uniqueness was subsequently proved
by Cazenave, \cite[Proposition 4.2.5]{cazenave:book}. We summarize their results in the theorem below.
\begin{theorem}[Local well-posedness, \cite{cwI, cazenave:book}]\label{T:local}
Given $u_0\in\dot H_x^1({\mathbb{R}}^d)$ and $t_0\in {\mathbb{R}}$, there exists a unique maximal-lifespan solution $u: I\times\R^d \rightarrow {\mathbb{C}}$ to
\eqref{nls} with initial data $u(t_0)=u_0$. This solution also has the following properties:
\begin{CI}
\item (Local existence) $I$ is an open neighbourhood of $t_0$.
\item (Energy conservation) The energy of $u$ is conserved, that is, $E(u(t))=E(u_0)$ for all $t\in I$.
\item (Blowup criterion) If $\sup(I)$ is finite, then $u$ blows up forward in time; if $\inf(I)$ is finite,
then $u$ blows up backward in time.
\item (Scattering) If $\sup(I)=+\infty$ and $u$ does not blow up forward in time, then $u$ scatters forward in time, that is,
there exists a unique $u_+ \in \dot H^1_x({\mathbb{R}}^d)$ such that
\begin{equation}\label{like u+}
\lim_{t \to +\infty} \| u(t)-e^{it\Delta} u_+ \|_{\dot H^1_x({\mathbb{R}}^d)} = 0.
\end{equation}
Conversely, given $u_+ \in \dot H^1_x({\mathbb{R}}^d)$ there is a unique solution to \eqref{nls} in a neighbourhood of infinity
so that \eqref{like u+} holds.
\item (Small data global existence) If $\|\nabla u_0\|_2$ is sufficiently small (depending on $d$), then $u$ is a global solution
which does not blow up either forward or backward in time. Indeed, in this case $S_{\mathbb{R}}(u)\lesssim \|\nabla u_0\|_2^{2(d+2)/(d-2)}$.
\item (Unconditional uniqueness) If $\tilde u \in C^0_t \dot H^1_x(J \times {\mathbb{R}}^d)$ with $t_0\in J$, obeys \eqref{old duhamel}
and $\tilde u(t_0)=u_0$, then $J\subseteq I$ and $\tilde u \equiv u$ throughout $J$.
\end{CI}
\end{theorem}
A variant of the local well-posedness theorem above is the following lemma.
\begin{lemma}[Stability, \cite{TV}]\label{L:stab} Let $d\geq 3$. For every $E,L > 0$ and ${\varepsilon} > 0$ there exists $\delta > 0$
with the following property: Suppose $\tilde u: I \times {\mathbb{R}}^d \to {\mathbb{C}}$ is an approximate solution to \eqref{nls} in the sense
that
\begin{equation*}
\bigl\| \nabla \bigl[i\tilde u_t + \Delta \tilde u - F(\tilde u)\bigr] \bigr\|_{L_{t,x}^{\frac{2(d+2)}{d+4}}(I\times{\mathbb{R}}^d)} \leq \delta
\end{equation*}
and also obeys
\begin{align*}
\|\tilde u\|_{L_t^\infty \dot H^1_x( I\times\R^d)}\leq E \quad \text{and} \quad S_I(\tilde u)\leq L.
\end{align*}
If $t_0 \in I$ and $u_0 \in \dot H^1_x({\mathbb{R}}^d)$ are such that
$$
\|\tilde u(t_0)-u_0\|_{\dot H^1_x({\mathbb{R}}^d)} \leq \delta,
$$
then there exists a solution $u: I \times {\mathbb{R}}^d \to {\mathbb{C}}$ to \eqref{nls} with $u(t_0) = u_0$ such that
\begin{align*}
\|\tilde u -u\|_{L_t^\infty \dot H^1_x( I\times\R^d)} + S_I(\tilde u - u) \leq {\varepsilon}.
\end{align*}
\end{lemma}
\begin{remark}
The result in \cite{TV} is slightly more general; we merely stated the version we will use. Lemma~\ref{L:stab} implies the
existence and uniqueness of maximal-lifespan solutions in Theorem~\ref{T:local}. It also proves that the solution depends
uniformly continuously on the initial data (on bounded sets), which was missing from \cite{cwI,cazenave:book}. See also
\cite{ckstt:gwp, RV} for earlier results in dimensions three and four.
\end{remark}
The defocusing case, that is, $F(u)=|u|^{\frac{4}{d-2}}u$, has received a lot of attention.
It is known that all $\dot H^1_x$ initial data lead to global solutions with finite scattering size.
Indeed, this was proved by Bourgain \cite{borg:scatter}, Grillakis \cite{grillakis}, and
Tao \cite{tao: gwp radial} for spherically symmetric initial data, and by Colliander--Keel--Staffilani--Takaoka--Tao
\cite{ckstt:gwp}, Ryckman--Visan \cite{RV}, and Visan \cite{thesis:art, Monica:thesis} for arbitrary initial data.
In the focusing case, things are more subtle. From Theorem~\ref{T:local}, we see that all maximal-lifespan solutions with
sufficiently small kinetic energy are global and scatter. However,
$$
W(t,x) = W(x):=\frac 1{(1+\frac{|x|^2}{d(d-2)})^{\frac {d-2}2}}\in \dot H^1_x({\mathbb{R}}^d)
$$
is a stationary solution to \eqref{nls}, that is, $W$ solves the nonlinear elliptic equation
\begin{align*}
\Delta W + |W|^{\frac4{d-2}}W=0.
\end{align*}
(See Appendix~\ref{A: W} for further properties of the ground state $W$, including its connection to the sharp version of
Sobolev embedding.) In particular, $W$ is a solution to \eqref{nls} that blows up both forward and backward in time in the sense
of Definition~\ref{D:blowup}. It is believed that $W$ has minimal kinetic energy among all blowup solutions. More precisely, we
have
\begin{conjecture}\label{conj}
Let $d\geq 3$ and let $u:I\times{\mathbb{R}}^d\to{\mathbb{C}}$ be a solution to \eqref{nls}. If
$$
E_* := \sup_{t\in I} \|\nabla u(t)\|_2 < \|\nabla W\|_2,
$$
then
$$
\int_I \int_{{\mathbb{R}}^d} |u(t,x)|^{\frac{2(d+2)}{d-2}}\, dx\, dt \leq C(E_*) < \infty.
$$
\end{conjecture}
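Let us also record a standard consequence of the elliptic equation satisfied by $W$: multiplying it by $W$ and integrating by parts gives
$$
\|\nabla W\|_2^2 = \int_{{\mathbb{R}}^d} |W(x)|^{\frac{2d}{d-2}}\, dx,
\qquad\text{whence}\qquad
E(W) = \bigl(\tfrac12 - \tfrac{d-2}{2d}\bigr)\|\nabla W\|_2^2 = \tfrac 1d \|\nabla W\|_2^2.
$$
In particular, the kinetic-energy threshold $\|\nabla W\|_2$ in Conjecture~\ref{conj} determines the energy threshold $E(W)$ appearing in Corollary~\ref{C:inductE} below.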
An analogous conjecture is believed to hold in the mass-critical case. There, the role of $W$ is played by the ground state
$Q$, which is also a maximizer in the sharp Gagliardo--Nirenberg inequality. Indeed, Weinstein \cite{weinstein} first realized
the importance of $Q$ as a minimal-mass blowup example, albeit in the finite energy case. The mass-critical conjecture for
$L_x^2$ initial data and dimensions $d\geq 2$ was recently settled in the spherically symmetric case; see \cite{KTV} and
\cite{KVZ}.
Conjecture~\ref{conj} was verified in dimensions $d=3,4,5$ for spherically symmetric initial data by Kenig and Merle \cite{Evian, kenig-merle}.
In this paper, we verify the conjecture in dimensions $d\geq 5$ without any further symmetry assumptions on the solution.
More precisely, we derive
\begin{theorem}[Spacetime bounds]\label{T:main}
Let $d\geq 5$ and let $u:I\times{\mathbb{R}}^d\to{\mathbb{C}}$ be a solution to \eqref{nls}. If
$$
E_* := \sup_{t\in I} \|\nabla u(t)\|_2 < \|\nabla W\|_2,
$$
then
$$
\int_I \int_{{\mathbb{R}}^d} |u(t,x)|^{\frac{2(d+2)}{d-2}}\, dx\, dt \leq C(E_*) < \infty.
$$
\end{theorem}
The key development that allows us to treat non-radial data is a proof that certain minimal kinetic energy blowup solutions
have finite mass; indeed, that they belong to $L_t^\infty \dot H^{-{\varepsilon}}_x$ for some ${\varepsilon}>0$. This is done in Section~\ref{S:neg}.
Examination of the stationary solution $u(t,x)=W(x)$ shows that this is not true in dimensions three and four, indicating
a difficulty intrinsic to these dimensions. At a more technical level, the main novelty presented here is a proof that
such minimal kinetic energy solutions exhibit additional decay in $L^p_x$-sense. This is then harnessed in a double Duhamel trick
(cf. \cite{tao:attractor}) to prove the $L_x^2$-based properties mentioned above.
Combining Theorem~\ref{T:main} with the local theory gives the following more appealing formulation.
However, the slightly stronger statement in Theorem~\ref{T:main} is helpful for applications;
an example of this is Theorem~\ref{T:conc} below.
\begin{corollary}[Scattering]\label{C:main}
Let $d\geq 5$ and let $u$ be a solution to \eqref{nls} with maximal lifespan $I$. Assume also that
$$
\sup_{t\in I} \|\nabla u(t)\|_2 < \|\nabla W\|_2.
$$
Then $I={\mathbb{R}}$ and
$$
\int_{\mathbb{R}} \int_{{\mathbb{R}}^d} |u(t,x)|^{\frac{2(d+2)}{d-2}}\, dx\, dt < \infty.
$$
\end{corollary}
A more effective criterion for global well-posedness (depending directly on $u_0$) can be obtained from Corollary~\ref{C:main}
using an energy-trapping argument of Kenig and Merle \cite{kenig-merle}; see Corollary~\ref{trap}.
\begin{corollary}\label{C:inductE}
Let $d\geq 5$ and let $u_0\in \dot H^1_x({\mathbb{R}}^d)$ be such that $\|\nabla u_0\|_2\leq \|\nabla W\|_2$ and $E(u_0)<E(W)$.
Then the corresponding solution $u$ to \eqref{nls} is global and moreover,
$$
\int_{\mathbb{R}} \int_{{\mathbb{R}}^d} |u(t,x)|^{\frac{2(d+2)}{d-2}}\, dx\, dt <\infty.
$$
\end{corollary}
We would like to note that the proof of Theorem~\ref{T:main} adapts without difficulty (indeed, with some simplifications)
to the defocusing case $F(u)=|u|^{\frac4{d-2}}u$; as such, it constitutes a new (more streamlined) derivation of the main results
in \cite{thesis:art, Monica:thesis}.
The result in Theorem~\ref{T:main} is sharp. Indeed, the ground state $W$ is a solution to \eqref{nls} that blows up at infinity
(in the sense of Definition~\ref{D:blowup}) in both time directions. Moreover, there exist solutions with kinetic energy
only slightly greater than that of $W$ which blow up in finite time. More precisely, in Section~\ref{S:blowup} we prove
\begin{proposition}[Blowup]\label{P:blowup}
Let $d\geq 3$ and $u_0\in \dot H_x^1({\mathbb{R}}^d)$ with $E(u_0)<E(W)$ and $\|\nabla u_0\|_2 \geq \|\nabla W \|_2$. Assume also that
either $xu_0\in L^2_x({\mathbb{R}}^d)$ or $u_0\in H_x^1({\mathbb{R}}^d)$ is radial. Then the corresponding solution $u$ to \eqref{nls} blows up in
finite time.
\end{proposition}
Results of this type were previously obtained in the energy-subcritical case. Ogawa and Tsutsumi \cite{OgawaTsutsumi} treated
the case of initial data with negative energy. Holmer and Roudenko \cite{HolmerRoudenko} extended their result to include
certain positive-energy initial data. Proposition~\ref{P:blowup} appears in \cite{kenig-merle}, where a proof is given in the case
$xu_0\in L^2_x({\mathbb{R}}^d)$. For a complete proof, see Section~\ref{S:blowup}.
In view of Corollary~\ref{C:inductE} and Proposition~\ref{P:blowup}, one may inquire about the case $E(u)=E(W)$. For radial data
in dimensions $3$, $4$, and $5$, this is discussed in \cite{DuckyMerle}.
Our last result proves that blowup solutions with bounded kinetic energy must concentrate a fixed amount of
kinetic energy around the blowup time. The argument given shows that this result follows from a positive answer
to Conjecture~\ref{conj}; the dimensional restriction below reflects only the fact that this conjecture
is currently open in three and four dimensions.
\begin{theorem}[Blowup solutions concentrate kinetic energy]\label{T:conc}
Fix $d\geq 5$ and let $u$ be a solution to \eqref{nls} that blows up at time $T^*\in[-\infty, \infty]$. Assume also that
\begin{align}\label{type II}
\limsup_{t\to T^*}\|\nabla u(t)\|_2<\infty.
\end{align}
If $T^* $ is finite, then there exists a sequence $t_n\to T^*$ such that for any sequence $R_n\in (0,\infty)$ obeying
$|T^*-t_n|^{-\frac 12}R_n\to\infty$,
$$
\limsup_{n\to \infty} \sup_{x_0\in {\mathbb{R}}^d}\int_{|x-x_0|\leq R_n} |\nabla u(t_n,x)|^2\, dx \geq \|\nabla W\|_2^2.
$$
If $|T^*|=\infty$, then there exists a sequence $t_n\to T^*$ such that for any sequence $R_n\in (0,\infty)$ obeying
$|t_n|^{-\frac 12}R_n\to\infty$,
$$
\limsup_{n\to \infty} \sup_{x_0\in {\mathbb{R}}^d}\int_{|x-x_0|\leq R_n} |\nabla u(t_n,x)|^2\, dx \geq \|\nabla W\|_2^2.
$$
\end{theorem}
We prove Theorem~\ref{T:conc} in Section~\ref{S:Conc}. The argument is inspired by that in \cite{KTV}, which employs some ideas from
\cite{bourg.2d}. The fact that the kinetic energy is not conserved introduces new subtleties.
A similar result in dimensions $d=3,4,5$ for spherically symmetric data was proposed by Kenig and Merle \cite{kenig-merle};
see Section~\ref{S:Conc} for a fuller discussion.
\subsection{Outline of the proof of Theorem~\ref{T:main}}
We argue by contradiction. We show that if the theorem failed, it would imply the existence of a very special
type of counterexample. Such counterexamples are then shown to have a wealth of properties not immediately
apparent from their construction, so many properties, in fact, that they cannot exist.
While we will make some further reductions later, the main property of the special counterexamples is
almost periodicity modulo symmetries:
\begin{definition}[Almost periodicity modulo symmetries]\label{D:ap}
Let $d \geq 3$. A solution $u$ to \eqref{nls} with lifespan $I$ is said to be \emph{almost periodic modulo symmetries} if there
exist functions $N: I \to {\mathbb{R}}^+$, $x:I\to {\mathbb{R}}^d$, and $C: {\mathbb{R}}^+ \to {\mathbb{R}}^+$ such that for all
$t \in I$ and $\eta > 0$,
$$ \int_{|x-x(t)| \geq C(\eta)/N(t)} |\nabla u(t,x)|^2\, dx \leq \eta$$
and
$$ \int_{|\xi| \geq C(\eta) N(t)} |\xi|^2\, | \hat u(t,\xi)|^2\, d\xi \leq \eta.$$
We refer to the function $N$ as the \emph{frequency scale function} for the solution $u$, $x$ the \emph{spatial center function},
and to $C$ as the \emph{compactness modulus function}.
\end{definition}
\begin{remark}
The parameter $N(t)$ measures the frequency scale of the solution at time $t$, while $1/N(t)$ measures the spatial scale; see
\cite{tvz:cc} for further discussion. It is possible to multiply $N(t)$ by any function of $t$ that is bounded both above
and below, provided that we also modify the compactness modulus function $C$ accordingly.
\end{remark}
\begin{remark}\label{R:pot energy}
By the Ascoli--Arzel\`a Theorem, a family of functions is precompact in $\dot H^1_x({\mathbb{R}}^d)$ if and only if it is norm-bounded and
there exists a compactness modulus function $C$ so that
$$
\int_{|x| \geq C(\eta)} |\nabla f(x)|^2\ dx + \int_{|\xi| \geq C(\eta)} |\xi|^2 \, |\hat f(\xi)|^2\ d\xi \leq \eta
$$
for all functions $f$ in the family. Thus, an equivalent formulation of Definition~\ref{D:ap} is as follows: $u$ is almost
periodic modulo symmetries if and only if
$$
\{ u(t): t \in I \} \subseteq \{ \lambda^{\frac{d-2}2} f(\lambda (x+x_0)) : \, \lambda\in(0,\infty), \ x_0\in {\mathbb{R}}^d, \text{ and }f \in K \}
$$
for some compact subset $K$ of $\dot H^1_x({\mathbb{R}}^d)$. In particular, as every compact set in $\dot H^1_x({\mathbb{R}}^d)$ is compact in
$L_x^{2d/(d-2)}({\mathbb{R}}^d)$ (by Sobolev embedding), any solution $u: I\times\R^d\to {\mathbb{C}}$ to \eqref{nls} that is almost periodic modulo
symmetries must also satisfy
$$
\int_{|x-x(t)| \geq C(\eta)/N(t)} |u(t,x)|^{\frac{2d}{d-2}}\, dx \leq \eta
$$
for all $t\in I$ and $\eta>0$.
\end{remark}
\begin{remark}\label{R:c small}
A further consequence of compactness modulo symmetries is the existence of a function $c: {\mathbb{R}}^+\to {\mathbb{R}}^+$ so that
$$
\int_{|x-x(t)| \leq c(\eta)/N(t)} |\nabla u(t,x)|^2\, dx
+ \int_{|\xi| \leq c(\eta) N(t)} |\xi|^2\, | \hat u(t,\xi)|^2\, d\xi \leq \eta
$$
for all $t \in I$ and $\eta > 0$.
\end{remark}
With these preliminaries out of the way, we can now describe the first major milestone in the proof of Theorem~\ref{T:main}.
\begin{theorem}[Reduction to almost periodic solutions]\label{T:reduct}
Suppose $d \geq 3$ is such that Conjecture~\ref{conj} failed. Then there exists a maximal-lifespan solution
$u: I\times\R^d\to {\mathbb{C}}$ to \eqref{nls} such that $\sup_{t\in I} \|\nabla u(t)\|_2 < \|\nabla W\|_2$, $u$ is almost periodic modulo
symmetries, and $u$ blows up both forward and backward in time. Moreover, $u$ has minimal kinetic energy among all blowup solutions,
that is,
$$
\sup_{t\in J} \|\nabla v(t)\|_2 \leq \sup_{t\in I} \|\nabla u(t)\|_2
$$
for all maximal-lifespan solutions $v:J\times{\mathbb{R}}^d \to {\mathbb{C}}$ that blow up in at least one time direction.
\end{theorem}
Most of the properties of $u$ described in this theorem stem directly from the fact that it is a minimal kinetic energy blowup
solution. The innovative discovery that such minimal blowup solutions exist was made by Keraani \cite[Theorem 1.3]{keraani-l2}
in the context of the mass-critical NLS. This was adapted to the energy-critical setting in \cite{kenig-merle}, which also
constitutes the first application of the existence of minimal blowup solutions to the well-posedness problem.
Following Bourgain~\cite{borg:scatter}, earlier works on the energy-critical NLS (e.g., \cite{ckstt:gwp,RV,thesis:art})
focussed their attention on \emph{almost}-minimal blowup solutions, which are then shown to have space and frequency localization
properties similar to (but slightly weaker than) those in Definition~\ref{D:ap}. While these earlier methods are inherently
quantitative, they add significantly to the complexity of the argument.
One of the main ingredients in the proof of Theorem~\ref{T:reduct} is a linear profile decomposition of Keraani, which
is reproduced below as Lemma~\ref{L:cc}. Using this result, Kenig and Merle proved a slight variant of Theorem~\ref{T:reduct}
in dimensions $3$, $4$, and~$5$; see \cite[Proposition~4.1]{kenig-merle}. They further indicate that the argument may be
modified to give a proof of the full Theorem~\ref{T:reduct} in these dimensions (see the proof of Corollary~5.16 in
\cite{kenig-merle}).
We will present a complete proof of Theorem~\ref{T:reduct} uniformly in $d\geq 3$. In doing so, we uncovered
several difficulties in extracting a blowup solution with minimal kinetic energy that were not elaborated upon in \cite{kenig-merle}.
These difficulties are related to the fact that unlike the energy, the kinetic energy is not conserved.
Firstly, choosing a `bad profile' requires a certain amount of gymnastics; see, for instance, Lemma~\ref{L:bad profile}
and the discussion that follows it. The difficulty arises from the possibility that the scattering size of several profiles
is large over short times, while their kinetic energy does not achieve the critical value until much later.
Secondly, having selected the `bad profile', one must then prove kinetic energy decoupling at the (potentially) \emph{later}
time when this profile achieves the critical kinetic energy; see Lemma~\ref{L:decouple ke}.
Related arguments (for the cubic NLS in three spatial dimensions) appear in \cite{kenig-merle:1/2}.
To prove Theorem~\ref{T:main}, we will need to demonstrate the existence of minimal kinetic energy blowup solutions
with more refined properties than those provided by Theorem~\ref{T:reduct}; in particular, we need to better constrain
the behaviour of the frequency scale function $N(t)$. Theorem~1.16 in \cite{KTV} is the strongest result of this type
of which we are aware. In Section~\ref{S:enemies}, we adapt the argument given there to obtain
\begin{theorem}[Three special scenarios for blowup]\label{T:enemies}
Fix $d\geq 3$ and suppose that Conjecture~\ref{conj} fails for this choice of $d$.
Then there exists a minimal kinetic energy, maximal-lifespan solution $u:I\times{\mathbb{R}}^d\to {\mathbb{C}}$,
which is almost periodic modulo symmetries, $S_I(u)=\infty$,
and obeys $\sup_{t\in I} \|\nabla u(t)\|_2 < \|\nabla W\|_2$.
We can also ensure that the lifespan $I$ and the frequency scale function $N:I\to{\mathbb{R}}^+$ match one of the following three scenarios:
\begin{itemize}
\item[I.] (Finite-time blowup) We have that either $|\inf I|<\infty$ or $\sup I<\infty$.
\item[II.] (Soliton-like solution) We have $I = {\mathbb{R}}$ and
\begin{equation*}
N(t) = 1 \quad \text{for all} \quad t \in {\mathbb{R}}.
\end{equation*}
\item[III.] (Low-to-high frequency cascade) We have $I = {\mathbb{R}}$,
\begin{equation*}
\inf_{t \in {\mathbb{R}}} N(t) \geq 1, \quad \text{and} \quad \limsup_{t \to +\infty} N(t) = \infty.
\end{equation*}
\end{itemize}
\end{theorem}
Therefore, in order to prove Theorem~\ref{T:main} it suffices to preclude the existence of solutions that satisfy the criteria in
Theorem~\ref{T:enemies}. The key step in all three scenarios above is to prove negative regularity, that is, the solution $u$
lies in $L_x^2$ or better. In scenarios~II and~III, the proof that $u\in L^2_x$ requires $d\geq5$; note that in lower dimensions,
the ground-state solution $W$ does not belong to $L^2_x$. Similar in spirit to \cite{KTV, KVZ}, negative regularity is deduced
from the minimality of the solution considered; recall that $u$ has minimal kinetic energy among all blowup solutions. In this regard,
our approach differs from those in \cite{borg:scatter, ckstt:gwp, grillakis, RV, tao: gwp radial, thesis:art, Monica:thesis}
where various versions of the Morawetz inequality are used to prove negative regularity.
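To see this dimensional restriction concretely, note that the explicit formula for $W$ gives $W(x) \sim (d(d-2))^{\frac{d-2}2}\,|x|^{-(d-2)}$ as $|x|\to\infty$, and so
$$
\int_{|x|\geq 1} |W(x)|^2\, dx \sim \int_{|x|\geq 1} |x|^{-2(d-2)}\, dx < \infty
\quad\Longleftrightarrow\quad 2(d-2) > d
\quad\Longleftrightarrow\quad d\geq 5.
$$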
The fact that the solutions described in Theorem~\ref{T:enemies} belong to $L^2_x$ is a very peculiar property. General $\dot H^1_x$
initial data do not decay this quickly at infinity. That it holds for the solutions in Theorem~\ref{T:enemies} is closely tied
to the almost periodicity of these solutions, which is itself a very non-generic behaviour for a dispersive equation. Indeed,
generic initial data lead to solutions containing waves that radiate to infinity as time progresses, while almost periodic solutions
remain concentrated in space.
The key property that leads to the solutions in Theorem~\ref{T:enemies} having finite mass (and being almost periodic) is their
selection as minimal kinetic energy blowup solutions. Any low-frequency component that does not contribute directly to the blowup
behaviour would constitute a waste of kinetic energy. A further manifestation of this minimality is the absence of a scattered wave
at the endpoints of the lifespan $I$; more formally, we have the following Duhamel formula, which plays an important role in
proving negative regularity. For a proof, see \cite[Section~6]{tvz:cc}.
\begin{lemma}\label{L:duhamel}
Let $u$ be an almost periodic solution to \eqref{nls} on its maximal-lifespan $I$. Then, for all $t\in I$,
\begin{equation}\label{Duhamel}
\begin{aligned}
u(t)&=\lim_{T\nearrow\,\sup I}i\int_t^T e^{i(t-t')\Delta} F(u(t'))\,dt'\\
&=-\lim_{T\searrow\,\inf I}i\int_T^t e^{i(t-t')\Delta} F(u(t'))\,dt',
\end{aligned}
\end{equation}
as weak limits in $\dot H^1_x$.
\end{lemma}
The finite-time blowup scenario is considered in Section~\ref{S:finite time}. Arguing as in \cite{kenig-merle},
we prove that the $L_x^2$ norm of $u(t)$ converges to zero as $t$ approaches the finite endpoint.
Since mass is conserved, this implies that $u$ is identically zero.
For the remaining two cases, we prove negative regularity in Section~\ref{S:neg}. This is the heart of the matter
and is achieved in two stages. First, we prove that the solution belongs to $L^\infty_t L^p_x$ for certain values
of $p$ less than $2d/(d-2)$. This demonstrates that the solution decays more quickly at infinity than a general
$u\in L^\infty_t \dot H^1_x$; recall that since $u$ has uniformly bounded kinetic energy, $u\in L^\infty_t L^{2d/(d-2)}_x$ by
virtue of Sobolev embedding. The proof of this first step involves a bootstrap argument built off the Duhamel formulae
\eqref{Duhamel}. In order to disentangle frequency interactions, we make use of an `acausal' Gronwall inequality,
Lemma~\ref{L:Gronwall}. The need for such a result in this paper stems from the two ways in which the nonlinearity
can produce low frequencies: from combinations of higher frequencies in $u$ and from fractional powers of lower frequencies in $u$.
The second step in proving negative regularity is to upgrade the decay proved in the first step to $L^2_x$-based spaces.
To do this, we take advantage of the global existence together with a double Duhamel trick in the spirit of \cite{tao:attractor}.
In order to make the associated time integrals converge, we need both $d\geq 5$ and the decay proved in step one.
In Section~\ref{S:cascade}, we use the negative regularity proved in Section~\ref{S:neg} together with the conservation of mass
to preclude the low-to-high frequency cascade.
In Section~\ref{S:Soliton}, we preclude the soliton. To achieve this, we first use the negative regularity proved
in Section~\ref{S:neg} to deduce compactness properties for $u$ in $L_x^2$. Secondly, we argue as in \cite{DHR,kenig-merle:wave}
to show that a minimal kinetic energy blowup solution must have zero momentum. Notice that in order to even define the
momentum we need $u(t) \in \dot H^{1/2}_x$, which is considerable negative regularity compared to $\dot H^1_x$.
Using the vanishing of the momentum, we will deduce that the spatial center function obeys $|x(t)|=o(t)$ as $t\to \infty$, rather than
merely $O(t)$. This mimics similar arguments in \cite{DHR,kenig-merle:wave} and relies crucially
on the $L^2_x$-compactness properties proved in the first step. To preclude the soliton, we now use a truncated virial
inequality much as in \cite{DHR, kenig-merle}; negative regularity and the fact that $|x(t)|=o(t)$ are needed in this
last step.
Proposition~\ref{P:blowup} is proved in Section~\ref{S:blowup}. In Section~\ref{S:Conc}, we prove Theorem~\ref{T:conc}.
\subsection*{Acknowledgements}
We would like to thank Xiaoyi Zhang for permission to incorporate portions of some earlier
joint work \cite{RadialHD} into this article. This unpublished manuscript gave a proof of
Theorem~\ref{T:main} in the case of radial data. We would also like to acknowledge comments on
that manuscript from C. Kenig, as well as other helpful correspondence. We are grateful to Terry Tao
for comments on an earlier version of this manuscript.
R.~K. was supported by NSF grants DMS-0701085 and DMS-0401277 and by a Sloan Foundation Fellowship.
Both authors were supported by the Institute for Advanced Study through NSF grant DMS-0635607.
Any opinions, findings and conclusions or recommendations expressed are those of the authors and do not reflect the views of the
National Science Foundation.
\section{Notations and useful lemmas}\label{S:Not}
\subsection{Some notation}
We use $X \lesssim Y$ or $Y \gtrsim X$ whenever $X \leq CY$ for some constant $C>0$. We use $O(Y)$ to denote any quantity $X$
such that $|X| \lesssim Y$. We use the notation $X \sim Y$ whenever $X \lesssim Y \lesssim X$. The fact that these constants
depend upon the dimension $d$ will be suppressed. If $C$ depends upon some additional parameters, we will indicate this with subscripts;
for example, $X \lesssim_u Y$ denotes the assertion that $X \leq C_u Y$ for some $C_u$ depending on $u$;
similarly for $X \sim_u Y$, $X = O_u(Y)$, etc. We denote by $X\pm$ any quantity of the form $X\pm{\varepsilon}$ for any ${\varepsilon}>0$.
For any spacetime slab $I\times {\mathbb{R}}^d$, we use $L_t^qL_x^r(I\times {\mathbb{R}}^d)$ to denote the Banach space of functions $u: I\times
{\mathbb{R}}^d\to \mathbb C$ whose norm is
$$
\|u\|_{L_t^qL_x^r(I\times{\mathbb{R}}^d)}:=\Bigl(\int_I\|u(t)\|_{L^r_x}^q \, dt\Bigr)^{\frac 1q}<\infty,
$$
with the usual modifications when $q$ or $r$ is equal to infinity. When $q=r$ we abbreviate $L^q_t L^q_x$ as $L^q_{t,x}$.
We define the Fourier transform on ${\mathbb{R}}^d$ by
$$
\hat f(\xi):= (2\pi)^{-d/2} \int_{{\mathbb{R}}^d} e^{-ix\xi}f(x)\,dx.
$$
For $s\in {\mathbb{R}}$, we define the fractional differentiation/integral operator
$$
\widehat{|\nabla|^s f}(\xi):=|\xi|^s\hat f(\xi),
$$
which in turn defines the homogeneous Sobolev norm
$$
\|f\|_{\dot H_x^s({\mathbb{R}}^d)}:=\||\nabla|^s f\|_{L_x^2({\mathbb{R}}^d)}.
$$
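As a purely numerical aside (not part of the argument that follows), the multiplier definition of $|\nabla|^s$ is easy to realize on a periodic grid via the discrete Fourier transform; the grid size and test function below are our own illustrative choices.

```python
import numpy as np

def frac_deriv(f, s, dx):
    """Apply the Fourier multiplier |xi|^s, i.e. |nabla|^s, on a 1-d periodic grid."""
    xi = 2 * np.pi * np.fft.fftfreq(f.size, d=dx)  # angular frequencies
    return np.fft.ifft(np.abs(xi) ** s * np.fft.fft(f)).real

def sobolev_norm(f, s, dx):
    """Discrete analogue of the homogeneous norm || |nabla|^s f ||_{L^2}."""
    return np.sqrt(dx * np.sum(frac_deriv(f, s, dx) ** 2))

n = 256
dx = 2 * np.pi / n
f = np.sin(3 * np.arange(n) * dx)
# Since |nabla|^s sin(3x) = 3^s sin(3x), the Hdot^1 norm of f is 3*sqrt(pi).
```

On $f(x)=\sin(3x)$ the multiplier acts diagonally: $|\nabla|^{1/2}f=3^{1/2}f$ and $\|f\|_{\dot H^1_x}=3\sqrt{\pi}$, in agreement with the definitions above.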
\subsection{Basic harmonic analysis}\label{ss:basic}
Let $\varphi(\xi)$ be a radial bump function supported in the ball $\{ \xi \in {\mathbb{R}}^d: |\xi| \leq \tfrac {11}{10} \}$ and equal to
$1$ on the ball $\{ \xi \in {\mathbb{R}}^d: |\xi| \leq 1 \}$. For each number $N > 0$, we define the Fourier multipliers
\begin{align*}
\widehat{P_{\leq N} f}(\xi) &:= \varphi(\xi/N) \hat f(\xi)\\
\widehat{P_{> N} f}(\xi) &:= (1 - \varphi(\xi/N)) \hat f(\xi)\\
\widehat{P_N f}(\xi) &:= \psi(\xi/N)\hat f(\xi) := (\varphi(\xi/N) - \varphi(2\xi/N)) \hat f(\xi)
\end{align*}
and similarly $P_{<N}$ and $P_{\geq N}$. We also define
$$ P_{M < \cdot \leq N} := P_{\leq N} - P_{\leq M} = \sum_{M < N' \leq N} P_{N'}$$
whenever $M < N$. We will usually use these multipliers when $M$ and $N$ are \emph{dyadic numbers} (that is, of the form $2^n$
for some integer $n$); in particular, all summations over $N$ or $M$ are understood to be over dyadic numbers. Nevertheless, it
will occasionally be convenient to allow $M$ and $N$ not to be powers of $2$.
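As a numerical aside, the algebraic identities underlying these projections can be checked directly. The sketch below is our own illustration: it uses a piecewise-linear stand-in for $\varphi$ (the smoothness of $\varphi$ plays no role in the identities being tested), verifies the telescoping identity $P_{\leq N}=\sum_{N'\leq N}P_{N'}$ on mean-zero data, and recovers $f=\sum_N P_N f$ once all grid frequencies are covered.

```python
import numpy as np

def varphi(xi):
    # piecewise-linear stand-in for the bump: 1 on |xi| <= 1, 0 on |xi| >= 11/10
    return np.clip((1.1 - np.abs(xi)) / 0.1, 0.0, 1.0)

def multiplier(f, m, dx):
    xi = 2 * np.pi * np.fft.fftfreq(f.size, d=dx)
    return np.fft.ifft(m(xi) * np.fft.fft(f)).real

def P_leq(f, N, dx):
    return multiplier(f, lambda xi: varphi(xi / N), dx)

def P_dyadic(f, N, dx):
    return multiplier(f, lambda xi: varphi(xi / N) - varphi(2 * xi / N), dx)

n = 512
dx = 2 * np.pi / n
rng = np.random.default_rng(1)
f = rng.standard_normal(n)
f -= f.mean()   # remove the zero mode, which no P_N sees
```

Here the grid frequencies are the integers with $|\xi|\leq n/2$, so summing $P_{N'}f$ over $N'=1,2,4,8$ reproduces $P_{\leq 8}f$ exactly, and summing up to $N'=2^8$ returns $f$ itself.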
Like all Fourier multipliers, the Littlewood-Paley operators commute with the propagator $e^{it\Delta}$, as well as with
differential operators such as $i\partial_t + \Delta$. We will use basic properties of these operators many many times,
including
\begin{lemma}[Bernstein estimates]\label{Bernstein}
For $1 \leq p \leq q \leq \infty$,
\begin{align*}
\bigl\| |\nabla|^{\pm s} P_N f\bigr\|_{L^p_x({\mathbb{R}}^d)} &\sim N^{\pm s} \| P_N f \|_{L^p_x({\mathbb{R}}^d)},\\
\|P_{\leq N} f\|_{L^q_x({\mathbb{R}}^d)} &\lesssim N^{\frac{d}{p}-\frac{d}{q}} \|P_{\leq N} f\|_{L^p_x({\mathbb{R}}^d)},\\
\|P_N f\|_{L^q_x({\mathbb{R}}^d)} &\lesssim N^{\frac{d}{p}-\frac{d}{q}} \| P_N f\|_{L^p_x({\mathbb{R}}^d)}.
\end{align*}
\end{lemma}
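To see concretely why the exponent $N^{\frac dp-\frac dq}$ in the last Bernstein estimate cannot be improved, here is a toy computation of our own ($d=1$, $p=2$, $q=\infty$): for $f$ built from the $N/2$ frequencies in the dyadic block $(N/2,N]$, the ratio $\|f\|_{L^\infty}/\bigl(N^{1/2}\|f\|_{L^2}\bigr)$ is independent of $N$.

```python
import numpy as np

def block_ratio(N, n=4096):
    """||f||_inf / (sqrt(N) ||f||_2) for f = sum of e^{ikx}, N/2 < k <= N, on [0, 2pi)."""
    x = np.arange(n) * (2 * np.pi / n)
    f = sum(np.exp(1j * k * x) for k in range(N // 2 + 1, N + 1))
    sup = np.abs(f).max()                                   # attained at x = 0
    l2 = np.sqrt((2 * np.pi / n) * np.sum(np.abs(f) ** 2))  # = sqrt(pi * N) by Parseval
    return sup / (np.sqrt(N) * l2)

ratios = [block_ratio(N) for N in (8, 16, 32, 64)]          # each equals 1/(2 sqrt(pi))
```

The constancy of the ratio across dyadic scales reflects the sharpness of the $N^{1/2}$ scaling in one dimension.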
We will also need the following fractional chain rule \cite{ChW:fractional chain rule}. For a textbook treatment, see
\cite[\S 2.4]{Taylor:book}.
\begin{lemma}[Fractional chain rule, \cite{ChW:fractional chain rule}]\label{F Lip}
Suppose $G\in C^1(\mathbb C)$, $s \in (0,1]$, and $1<p,p_1,p_2<\infty$ are such that $\frac 1p=\frac 1{p_1}+\frac 1{p_2}$. Then,
$$
\||\nabla|^sG(u)\|_p\lesssim \|G'(u)\|_{p_1}\||\nabla|^s u\|_{p_2}.
$$
\end{lemma}
\subsection{Strichartz estimates}
Let $e^{it\Delta}$ be the free Schr\"odinger evolution. From the explicit formula
$$ e^{it\Delta} f(x) = \frac{1}{(4\pi i t)^{d/2}} \int_{{\mathbb{R}}^d} e^{i|x-y|^2/4t} f(y)\, dy,$$
one easily obtains the standard dispersive inequality
\begin{equation}\label{dispersive}
\| e^{it\Delta} f \|_{L_x^\infty({\mathbb{R}}^d)} \lesssim|t|^{-\frac d2} \| f \|_{L_x^1({\mathbb{R}}^d)}
\end{equation}
for all $t\neq 0$.
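The $|t|^{-d/2}$ decay is easy to witness numerically. The following sketch (our own illustration, in $d=1$) applies the exact Fourier-space propagator $e^{-it\xi^2}$ to Gaussian data and compares the resulting sup-norm with the closed-form amplitude $(1+16t^2)^{-1/4}$ of $e^{it\Delta}e^{-x^2}$, as well as with the right-hand side of \eqref{dispersive} taken with the constant $(4\pi)^{-1/2}$ visible in the explicit kernel.

```python
import numpy as np

n, L, t = 4096, 200.0, 1.0
dx = L / n
x = (np.arange(n) - n // 2) * dx                  # grid centred at the origin
xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)

u0 = np.exp(-x ** 2)                              # Gaussian initial data
u = np.fft.ifft(np.exp(-1j * t * xi ** 2) * np.fft.fft(u0))

sup = np.abs(u).max()
exact = (1 + 16 * t ** 2) ** (-0.25)              # exact amplitude of e^{it Delta} e^{-x^2}
dispersive_rhs = (4 * np.pi * t) ** (-0.5) * dx * u0.sum()  # ~ |t|^{-1/2} ||u0||_{L^1}
```

At $t=1$ the computed sup-norm is about $0.49$, just below the dispersive bound of about $0.50$ for this data.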
\begin{definition}[Admissible pairs]
For $d\geq3$, we say that a pair of exponents $(q,r)$ is \emph{Schr\"odinger-admissible} if
\begin{equation}\label{admissible}
\frac 2q+\frac dr=\frac d2 \quad \text{and} \quad 2\le q, r \le\infty.
\end{equation}
For a fixed spacetime slab $I\times {\mathbb{R}}^d$, we define the \emph{Strichartz norms}
$$
\|u\|_{\dot S^0(I)}:=\sup_{(q,r)\ \text{admissible}} \|u\|_{L_t^qL_x^r( I\times\R^d)} \qquad \text{and} \qquad \|u\|_{\dot
S^1(I)}:=\|\nabla u\|_{\dot S^0(I)}.
$$
We write $\dot S^0(I)$ and $\dot S^1(I)$ for the closure of all test functions under these norms, respectively.
\end{definition}
A simple application of Sobolev embedding yields
\begin{align*}
\|\nabla u\|_{L_t^\infty L_x^2( I\times\R^d)} & + \|\nabla u\|_{L_{t,x}^{\frac{2(d+2)}d}( I\times\R^d)}
+ \|\nabla u\|_{L_t^2 L_x^{\frac {2d}{d-2}}( I\times\R^d)}\\
&+\|u\|_{L_t^\infty L_x^{\frac {2d}{d-2}}( I\times\R^d)} + \|u\|_{L_{t,x}^\frac {2(d+2)}{d-2}( I\times\R^d)} \lesssim \|u\|_{\dot S^1(I)}
\end{align*}
for all $d\geq 3$.
As a consequence of the dispersive estimate \eqref{dispersive}, we have the following standard Strichartz estimate.
\begin{lemma}[Strichartz]\label{strichartz}
Let $k= 0,1$, let $I$ be a compact time interval, and let $u: I\times{\mathbb{R}}^d \to \mathbb C$ be a solution to the forced
Schr\"odinger equation
$$
iu_t+\Delta u=F.
$$
Then,
$$
\|u\|_{\dot S^k(I)}\lesssim \|u(t_0)\|_{\dot H_x^k}+\|F\|_{\dot N^k(I)}
$$
for any $t_0\in I$.
\end{lemma}
\begin{proof}
See, for example, \cite{gv:strichartz, strichartz}. For the endpoint $(q,r)=\bigl(2,\frac{2d}{d-2}\bigr)$, see \cite{tao:keel}.
\end{proof}
The next result is Lemma~3.7 from \cite{keraani-h1} extended to all dimensions. This will play an important role in the proof
of Lemma~\ref{L:bad profile}. Below we offer a quantitative and more streamlined proof.
\begin{lemma}\label{L:Keraani3.7}
Given $\phi\in \dot H^1_x({\mathbb{R}}^d)$,
$$
\| \nabla e^{it\Delta} \phi \|_{L^2_{t,x}([-T,T]\times\{|x|\leq R\})}^3 \lesssim
T^{\frac2{d+2}} R^{\frac{3d+2}{d+2}} \| e^{it\Delta} \phi \|_{L^{2(d+2)/(d-2)}_{t,x}}\| \nabla \phi \|_{L^2_x}^2.
$$
\end{lemma}
\begin{proof}
Given $N>0$, H\"older's and Bernstein's inequalities imply
\begin{align*}
\| \nabla e^{it\Delta} \phi_{< N} \|_{L^2_{t,x}([-T,T]\times\{|x|\leq R\})}
&\lesssim T^{2/(d+2)} R^{2d/(d+2)} \| e^{it\Delta} \nabla \phi_{< N} \|_{L^{2(d+2)/(d-2)}_{t,x}} \\
&\lesssim T^{2/(d+2)} R^{2d/(d+2)} \, N\, \| e^{it\Delta} \phi \|_{L^{2(d+2)/(d-2)}_{t,x}}.
\end{align*}
On the other hand, the high frequencies can be estimated using local smoothing:
\begin{align*}
\| \nabla e^{it\Delta} \phi_{\geq N} \|_{L^2_{t,x}([-T,T]\times\{|x|\leq R\})}
&\lesssim R^{1/2} \| |\nabla|^{1/2} \phi_{\geq N} \|_{L^2_x} \\
&\lesssim N^{-1/2} R^{1/2} \| \nabla \phi \|_{L^2_x}.
\end{align*}
The lemma now follows by optimizing the choice of $N$.
\end{proof}
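\begin{remark}
To spell out the optimization: abbreviating
$$
A:= T^{\frac2{d+2}} R^{\frac{2d}{d+2}} \| e^{it\Delta} \phi \|_{L^{2(d+2)/(d-2)}_{t,x}}
\qquad\text{and}\qquad
B:= R^{\frac12}\, \| \nabla \phi \|_{L^2_x},
$$
the two estimates in the proof combine to give
$$
\| \nabla e^{it\Delta} \phi \|_{L^2_{t,x}([-T,T]\times\{|x|\leq R\})} \lesssim A N + B N^{-\frac12}
\qquad\text{for every } N>0.
$$
The right-hand side is minimized, up to a constant, by choosing $N=(B/A)^{2/3}$, at which point both terms are comparable to
$A^{\frac13}B^{\frac23}$; cubing this yields $AB^2$, that is, the bound
$T^{\frac2{d+2}} R^{\frac{3d+2}{d+2}} \| e^{it\Delta} \phi \|_{L^{2(d+2)/(d-2)}_{t,x}} \| \nabla \phi \|_{L^2_x}^2$.
\end{remark}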
We will also make use of the following bilinear estimate:
\begin{lemma}[Bilinear Strichartz]\label{L:bilinear strichartz}
For any spacetime slab $I \times {\mathbb{R}}^d$ and any $M, N > 0$, we have
\begin{align*}
\|e^{it\Delta} \phi_N e^{it\Delta} \phi_M \|_{L^2_{t,x}(I \times {\mathbb{R}}^d)}
& \lesssim M^{\frac{d-4}2} N^{-1} \|\nabla \phi_M\|_{L_x^2} \|\nabla \phi_N\|_{L_x^2},
\end{align*}
for any function $\phi$.
\end{lemma}
\begin{proof}
See \cite[Lemma 2.5]{Monica:thesis}, which builds on earlier versions in \cite{borg:book, ckstt:gwp}.
\end{proof}
\subsection{Concentration compactness}\label{SS:cc}
In this subsection we record the linear profile decomposition statement due to Keraani \cite{keraani-h1},
which will lead to the reduction in Theorem~\ref{T:reduct}.
We first recall the symmetries of the equation \eqref{nls} which fix the initial surface $t=0$ and preserve
the energy.
\begin{definition}[Symmetry group]\label{D:sym}
For any phase $\theta \in {\mathbb{R}}/2\pi {\mathbb{Z}}$, position $x_0\in {\mathbb{R}}^d$, and scaling parameter $\lambda > 0$, we define a unitary transformation
$g_{\theta,x_0,\lambda}: \dot H^1_x({\mathbb{R}}^d) \to \dot H^1_x({\mathbb{R}}^d)$ by
$$
[g_{\theta,x_0, \lambda} f](x) := \lambda^{-\frac{d-2}2} e^{i\theta} f\bigl( \lambda^{-1}(x-x_0) \bigr).
$$
Let $G$ denote the collection of such transformations. For a function $u: I \times {\mathbb{R}}^d \to {\mathbb{C}}$, we define $T_{g_{\theta,x_0,
\lambda}} u: \lambda^2 I \times {\mathbb{R}}^d \to {\mathbb{C}}$ where $\lambda^2 I := \{ \lambda^2 t: t \in I \}$ by the formula
$$
[T_{g_{\theta,x_0, \lambda}} u](t,x) := \lambda^{-\frac{d-2}2} e^{i\theta} u\bigl( \lambda^{-2}t, \lambda^{-1}(x-x_0)\bigr).
$$
Note that if $u$ is a solution to \eqref{nls}, then $T_{g}u$ is a solution to \eqref{nls} with initial data $g u_0$.
\end{definition}
\begin{remark} It is easy to verify that $G$ is a group and that the map $g \mapsto T_g$ is a homomorphism.
The map $u \mapsto T_g u$ maps solutions to \eqref{nls} to solutions with the same energy and scattering size as $u$,
that is, $E(T_g u) = E(u)$ and $S(T_g u)=S(u)$. Furthermore, $u$ is a maximal-lifespan solution if and only if $T_g u$
is a maximal-lifespan solution.
\end{remark}
We are now ready to state Keraani's linear profile decomposition.
\begin{lemma} [Linear profile decomposition, \cite{keraani-h1}]\label{L:cc}
Fix $d\ge 3$ and let $\{u_n\}_{n\geq 1}$ be a sequence of functions bounded in $\dot H_x^1({\mathbb{R}}^d)$. Then,
after passing to a subsequence if necessary, there exist a sequence of functions $\{\phi^j\}_{j\geq 1}\subset \dot H_x^1({\mathbb{R}}^d)$,
group elements $g_n^j \in G$, and times $t_n^j\in {\mathbb{R}}$ such that we have the decomposition
\begin{align}\label{decomp}
u_n = \sum_{j=1}^J g_n^j e^{it_n^j\Delta}\phi^j + w_n^J
\end{align}
for all $J\geq 1$; here, $w_n^J \in \dot H^1_x({\mathbb{R}}^d)$ obey
\begin{equation}\label{w scat}
\lim_{J\to \infty}\limsup_{n\to\infty} \bigl\| e^{it\Delta}w_n^J
\bigr\|_{L_{t,x}^{\frac{2(d+2)}{d-2}}({\mathbb{R}}\times{\mathbb{R}}^d)}=0.
\end{equation}
Moreover, writing $g_n^j=g_{\theta_n^j,x_n^j,\lambda_n^j}$ as in Definition~\ref{D:sym}, we have for any $j \neq j'$,
\begin{align}\label{crazy}
\frac{\lambda_n^j}{\lambda_n^{j'}} + \frac{\lambda_n^{j'}}{\lambda_n^{j}}
+ \frac{|x_n^j-x_n^{j'}|^2}{\lambda_n^j \lambda_n^{j'}}
+ \frac{\bigl|t_n^j(\lambda_n^j)^2- t_n^{j'}(\lambda_n^{j'})^2\bigr|}{\lambda_n^j\lambda_n^{j'}}\to\infty
\quad \text{as } n\to \infty.
\end{align}
Furthermore, for any $J \geq 1$ we have the kinetic energy decoupling property
\begin{equation}\label{decouple}
\lim_{n \to \infty} \Bigl[ \|\nabla u_n\bigr\|_2^2 - \sum_{j=1}^J \|\nabla \phi^j\|_2^2 - \|\nabla w_n^J\|^2_2 \Bigr] = 0.
\end{equation}
\end{lemma}
Finally, we will need the following result, which shows that for all $J\geq 1$, the error term $w_n^J$ converges weakly to zero
in $\dot H^1_x({\mathbb{R}}^d)$ modulo the symmetries associated to $\phi^j$ for $1\leq j\leq J$.
This property is actually built into the proof of Lemma~\ref{L:cc}; however, since it is not explicitly stated in \cite{keraani-h1}
and is easy to verify \emph{a posteriori}, we do that here.
\begin{lemma}[Strong decoupling]\label{L:strong decouple}
For all $J\geq 1$ and all $1\leq j\leq J$, the sequence $e^{-it_n^j\Delta}[(g_n^j)^{-1}w_n^J]$ converges weakly to zero in $\dot
H^1_x({\mathbb{R}}^d)$ as $n\to \infty$. In particular, this implies the kinetic energy decoupling \eqref{decouple}.
\end{lemma}
\begin{proof}
Fix $J\geq 1$ and $1\leq j\leq J$. By \eqref{decouple} and the fact that $\{u_n\}_{n\geq 1}$ is bounded in $\dot H^1_x({\mathbb{R}}^d)$,
we deduce that $\{e^{-it_n^j\Delta}[(g_n^j)^{-1}w_n^J]\}_{n\geq 1}$ is bounded in $\dot H^1_x({\mathbb{R}}^d)$. Using Alaoglu's Theorem (and
passing to a subsequence if necessary), we obtain that $e^{-it_n^j\Delta}[(g_n^j)^{-1}w_n^J]$ converges weakly in $\dot
H^1_x({\mathbb{R}}^d)$ to some $\psi\in \dot H_x^1({\mathbb{R}}^d)$. To prove the lemma, it thus suffices to show that $\psi\equiv 0$.
By weak convergence and \eqref{decomp},
\begin{equation}\label{decomp2}
\begin{aligned}
\|\psi\|_{\dot H^1_x({\mathbb{R}}^d)}^2
&=\lim_{n\to \infty} \bigl\langle \nabla e^{-it_n^j\Delta}[(g_n^j)^{-1}w_n^J], \nabla \psi \bigr\rangle\\
&= \sum_{l=J+1}^L \lim_{n\to \infty} \bigl\langle \nabla g_n^l e^{it_n^l\Delta}\phi^l, \nabla g_n^j e^{it_n^j\Delta}\psi \bigr\rangle\\
&\quad + \lim_{n\to \infty} \bigl\langle \nabla e^{-it_n^j\Delta}[(g_n^j)^{-1} w_n^L], \nabla \psi \bigr\rangle,
\end{aligned}
\end{equation}
for all $L>J$. The limits on the right-hand side are guaranteed to exist; indeed, using \eqref{crazy}, a change of variables
shows
$$
\lim_{n\to \infty} \bigl\langle \nabla g_n^l e^{it_n^l\Delta}\phi^l, \nabla g_n^j e^{it_n^j\Delta}\psi \bigr\rangle = 0
$$
for all $L\geq l\geq J+1>j$; see the proof of \cite[Lemma~2.7]{keraani-h1}.
On the other hand, combining the fact that the family $\{e^{-it_n^j\Delta}[(g_n^j)^{-1} w_n^L]\}_{n,L\geq1}$ is bounded in $\dot
H^1_x({\mathbb{R}}^d)$ with
$$
\lim_{L\to \infty}\limsup_{n\to \infty} S_{{\mathbb{R}}}\bigl( e^{it\Delta} e^{-it_n^j\Delta}[(g_n^j)^{-1} w_n^L]\bigr) =\lim_{L\to
\infty}\limsup_{n\to \infty} S_{{\mathbb{R}}}\bigl( e^{it\Delta} w_n^L\bigr)=0,
$$
we deduce that $e^{-it_n^j\Delta}[(g_n^j)^{-1} w_n^L]$ converges weakly to zero in $\dot H^1_x({\mathbb{R}}^d)$ as $n,L\to \infty$
(cf. \cite[Lemma~3.63]{merle-vega}).
Thus, for $L$ sufficiently large,
$$
\limsup_{n\to \infty} \bigl|\bigl\langle \nabla e^{-it_n^j\Delta}[(g_n^j)^{-1} w_n^L], \nabla \psi \bigr\rangle\bigr|
\leq \tfrac 12 \|\psi\|_{\dot H^1_x({\mathbb{R}}^d)}^2.
$$
Returning to \eqref{decomp2} and choosing $L$ large, we conclude $\psi\equiv 0$. This finishes the proof of Lemma~\ref{L:strong
decouple}.
\end{proof}
\subsection{Additional harmonic analysis}
In this subsection we record tools that will be used to prove the concentration result Theorem~\ref{T:conc}.
The next lemma is an inverse Strichartz inequality. It roughly states that the Strichartz norm cannot be large without there
being a bubble of concentration in spacetime. Results of this type constitute an important precursor to the concentration
compactness technique. The prototype is \cite[\S2--3]{bourg.2d}, which has been extended and elaborated in many subsequent
papers; see, for instance, \cite{BegoutVargas,borg:scatter,keraani-h1,keraani-l2,merle-vega,tao: gwp radial,tao:pseudo}.
While the lemma can be deduced a posteriori from these works, we give a self-contained argument using the ideas in them.
\begin{lemma}[Inverse Strichartz]\label{L:B conc}
Fix $d\geq 3$. Let $\phi\in \dot H^1_x({\mathbb{R}}^d)$ and $\eta>0$ such that
$$
\int_I \int_{{\mathbb{R}}^d} \bigl|e^{it\Delta} \phi \bigr|^{\frac{2(d+2)}{d-2}} \,dx\,dt \geq \eta
$$
for some interval $I\subseteq {\mathbb{R}}$. Then there exist $C=C(\|\nabla \phi\|_2, \eta)$, $x_0\in {\mathbb{R}}^d$, and an interval $J\subseteq I$ so that
\begin{equation*}
\int_{|x - x_0|\leq C |J|^{1/2}} \bigl|e^{it\Delta} \nabla \phi \bigr|^2 \,dx \geq C^{-1} \quad\text{for all} \quad t\in J.
\end{equation*}
Notice that $C$ does not depend on $I$ or $J$.
\end{lemma}
\begin{proof}
First we prove that
\begin{equation}\label{single M big}
\int_I \int_{{\mathbb{R}}^d} \bigl|e^{it\Delta} \phi_M \bigr|^{\frac{2(d+2)}{d-2}} \,dx\,dt \gtrsim \eta^c
\end{equation}
for some dyadic $M \gtrsim |I|^{-1/2}$ and some (dimension-dependent) $c>0$. We begin with the argument for dimensions $d\geq 6$.
Using elementary Littlewood--Paley theory together with the Strichartz inequality and the bilinear Strichartz
inequality (Lemma~\ref{L:bilinear strichartz}), we argue as follows:
\begin{align}
\eta &\lesssim \Bigl\| \Bigl( \sum_M \bigl| e^{it\Delta} \phi_M\bigr|^2 \Bigr)^{\frac{d+2}{2(d-2)}}
\Bigl( \sum_N \bigl| e^{it\Delta} \phi_N\bigr|^2 \Bigr)^{\frac{d+2}{2(d-2)}} \Bigr\|_{L^1_{t,x}} \notag \\
&\lesssim \Bigl\| \Bigl( \sum_M \bigl| e^{it\Delta} \phi_M\bigr|^{\frac{d+2}{d-2}} \Bigr)
\Bigl( \sum_N \bigl| e^{it\Delta} \phi_N\bigr|^{\frac{d+2}{d-2}} \Bigr) \Bigr\|_{L^1_{t,x}} \notag \\
&\lesssim \sum_{M\leq N} \|e^{it\Delta} \phi_M e^{it\Delta} \phi_N\|_{L_{t,x}^2}^{\frac2{d-2}}
\|e^{it\Delta} \phi_N\|_{L_{t,x}^{\frac{2(d+2)}d}}
\|e^{it\Delta} \phi_N\|_{L_{t,x}^{\frac{2(d+2)}{d-2}}}^{\frac2{d-2}} \times \notag \\
&\qquad \qquad \qquad \qquad \qquad \times
\|e^{it\Delta} \phi_M\|_{L_{t,x}^\infty}^{\frac{8}{(d-2)^2}}
\|e^{it\Delta} \phi_M\|_{L_{t,x}^{\frac{2(d+2)}{d-2}}}^{\frac{(d+2)(d-4)}{(d-2)^2}} \notag \\
&\lesssim \sum_{M\leq N}\bigl(\tfrac{M}{N}\bigr)^{\frac d{d-2}} \|\nabla \phi_N\|_{L_x^2}^{\frac{d+2}{d-2}}
\|\nabla \phi_M\|_{L_x^2}^{\frac{d-6}{d-2}}\|e^{it\Delta} \phi_M\|_{L_{t,x}^{\frac{2(d+2)}{d-2}}}^{\frac8{d-2}} \notag \\
&\lesssim \|\nabla \phi\|_{L_x^2}^2 \sup_{M} \|e^{it\Delta} \phi_M\|_{L_{t,x}^{\frac{2(d+2)}{d-2}}}^{\frac8{d-2}},
\label{big calc}
\end{align}
where all space-time norms are on $I\times{\mathbb{R}}^d$.
On the other hand, by Bernstein's inequality (Lemma~\ref{Bernstein}),
$$
\int_I \int_{{\mathbb{R}}^d} \bigl|e^{it\Delta} \phi_{M} \bigr|^{\frac{2(d+2)}{d-2}} \,dx\,dt
\lesssim |I| M^2 \|\nabla \phi\|_{L^2_x}^{\frac{2(d+2)}{d-2}}.
$$
Combining this with the argument above, we see that there is $M\gtrsim_{\|\nabla \phi\|_2, \eta} |I|^{-1/2}$ so that
\eqref{single M big} holds with $c=(d-2)/8$.
To obtain \eqref{single M big} in dimensions $3\leq d <6$, we merely need to find a replacement for \eqref{big calc}.
We argue as follows using the same tools as before:
\begin{align*}
\eta
&\lesssim \|e^{it\Delta} \phi\|_{L_{t,x}^{\frac{2(d+2)}{d-2}}}^{\frac{2(6-d)}{d-2}}
\Bigl\| \Bigl( \sum_M \bigl| e^{it\Delta} \phi_M\bigr|^2 \Bigr)
\Bigl( \sum_N \bigl| e^{it\Delta} \phi_N\bigr|^2 \Bigr) \Bigr\|_{L^{\frac{d+2}{2(d-2)}}_{t,x}}\\
&\lesssim \|\nabla \phi\|_{L_x^2}^{\frac{2(6-d)}{d-2}} \sum_{M\leq N} \|e^{it\Delta} \phi_M\|_{L^{\frac{2(d+2)}{d-2}}_{t,x}}
\|e^{it\Delta} \phi_N\|_{L^{\frac{2(d+2)}{d-2}}_{t,x}}
\|e^{it\Delta} \phi_M e^{it\Delta} \phi_N\|_{L_{t,x}^2}^{\frac {2(d-2)}{d+2}}\times\\
&\qquad \qquad \qquad \qquad \qquad \times
\|e^{it\Delta} \phi_M \|_{L_{t,x}^\infty}^{\frac {6-d}{d+2}} \|e^{it\Delta}\phi_N\|_{L_{t,x}^\infty}^{\frac {6-d}{d+2}}\\
&\lesssim \|\nabla \phi\|_{L_x^2}^{\frac{2(6-d)}{d-2}} \sum_{M\leq N} \bigl( \tfrac {M}{N}\bigr)^{\frac{(d-2)^2}{2(d+2)}}
\|\nabla \phi_N \|_{L_x^2}^2 \|\nabla \phi_M \|_{L_x^2} \|e^{it\Delta} \phi_M\|_{L^{\frac{2(d+2)}{d-2}}_{t,x}}\\
&\lesssim \|\nabla \phi\|_{L_x^2}^{\frac{d+6}{d-2}} \sup_M \|e^{it\Delta} \phi_M\|_{L^{\frac{2(d+2)}{d-2}}_{t,x}}.
\end{align*}
Having proved \eqref{single M big}, we continue as follows: Using Bernstein combined with the Strichartz inequality,
we obtain the upper bound
$$
\|e^{it\Delta} \phi_{M}\|_{L_{t,x}^{\frac{2(d+2)}d}(I\times{\mathbb{R}}^d)}\lesssim M^{-1}\|\nabla \phi\|_{L_x^2};
$$
this combined with \eqref{single M big} and H\"older's inequality yields
$$
\|e^{it\Delta} \phi_{M}\|_{L_{t,x}^\infty(I\times{\mathbb{R}}^d)}\gtrsim_{\|\nabla \phi\|_2, \eta} M^{\frac{d-2}2}.
$$
Thus, there exist $t_0\in I$ and $x_0\in{\mathbb{R}}^d$ so that
$$
\bigl| [e^{it_0\Delta}\phi_M](x_0) \bigr| \gtrsim_{\|\nabla \phi\|_2, \eta} M^{\frac{d-2}2}.
$$
Using basic properties of the kernel of $e^{it\Delta} P_M$, we may deduce
$$
\int_{|x-x_0|\lesssim M^{-1}} \bigl| e^{it\Delta} \nabla \phi (x) \bigr|^2\,dx \gtrsim_{\|\nabla \phi\|_2,\eta} 1
$$
for all $|t-t_0|\lesssim M^{-2}$. Let $J:=\{t\in I:\, |t-t_0|\lesssim M^{-2}\}$. To obtain the claim,
we simply note that because of our lower bound on $M$, the length of $J$ obeys $|J|\gtrsim_{\|\nabla \phi\|_2,\eta} M^{-2}$.
\end{proof}
Next, we recall \cite[Lemma~10.2]{KTV}. While \cite[Lemma~10.2]{KTV} is stated
and proved in dimension $d=2$, the proof extends without difficulty to higher dimensions.
\begin{lemma}[Tightness of profiles]\label{L:comp prof}
Let $d\geq 3$ and let $\psi\in \dot H^1_x({\mathbb{R}}^d)$. Assume that
$$
\int_{|x-x_k|\leq r_k} \bigl|e^{it_k\Delta} \nabla \psi \bigr|^2 \,dx \geq {\varepsilon}
$$
for some ${\varepsilon}>0$ and sequences $t_k\in{\mathbb{R}}$, $x_k \in {\mathbb{R}}^d$, and $r_k>0$. Then for any sequence $a_k\to\infty$,
\begin{equation*}
\int_{|x|\leq a_k r_k} \bigl|e^{it_k\Delta} \nabla \psi \bigr|^2 \,dx \to \|\nabla \psi\|_2^2.
\end{equation*}
\end{lemma}
As the kinetic energy is not conserved, we need to upgrade this lemma as follows:
\begin{proposition}[Tightness of trajectories]\label{P:comp prof}
Let $\psi:I\times{\mathbb{R}}^d\to{\mathbb{C}}$ be a solution to \eqref{nls} with $S_I(\psi)<\infty$. Suppose
$$
\int_{|x-x_k|\leq r_k} \bigl|e^{it_k\Delta} \nabla \psi(\tau_k) \bigr|^2 \,dx \geq {\varepsilon}
$$
for some ${\varepsilon}>0$ and sequences $t_k\in{\mathbb{R}}$, $x_k \in {\mathbb{R}}^d$, $\tau_k\in I$, and $r_k>0$. Then
\begin{equation*}
\Bigl|\|\nabla \psi(\tau_k)\|_2^2- \int_{|x|\leq a_k r_k} \bigl|e^{it_k\Delta} \nabla \psi(\tau_k) \bigr|^2 \,dx \Bigr|\to 0
\end{equation*}
for any sequence $a_k\to\infty$.
\end{proposition}
\begin{proof}
It suffices to treat the case where the sequence $\tau_k$ converges (possibly to $\pm\infty$). By Theorem~\ref{T:local}, we may
assume that $I$ is closed.
If $\tau_k$ converges to a finite point (in $I$), then the claim follows from Lemma~\ref{L:comp prof} and the $\dot
H^1_x$-continuity of the flow.
Next we treat the case $\tau_k\to\infty$; a similar argument settles the case $\tau_k\to-\infty$. In particular, $\sup I
=\infty$. As $\psi$ has finite scattering size on $I$, Theorem~\ref{T:local} implies the existence of $\psi_+\in \dot H^1_x$ so
that
$$
\bigl\| \psi(\tau_k) - e^{i\tau_k\Delta} \psi_+ \bigr\|_{\dot H^1_x} \to 0.
$$
We may now apply Lemma~\ref{L:comp prof} to complete the argument.
\end{proof}
\subsection{A Gronwall inequality}
Our last technical tool is the most elementary. It is a form of Gronwall's inequality that involves both the past
and the future, `acausal' in the terminology of \cite{tao:book}. It will be used in Section~\ref{S:neg}.
\begin{lemma}\label{L:Gronwall}
Given $\gamma>0$, $0<\eta<\tfrac12(1-2^{-\gamma})$, and $\{b_k\}\in\ell^\infty({\mathbb{Z}}^+)$,
let $\{x_k\}\in\ell^\infty({\mathbb{Z}}^+)$ be a non-negative sequence obeying
\begin{align}\label{Gron rec}
x_k \leq b_k + \eta \sum_{l=0}^\infty 2^{-\gamma|k-l|} x_l \qquad \text{for all $k\geq 0$.}
\end{align}
Then
\begin{align}\label{Gron bound}
x_k \lesssim \sum_{l=0}^{\infty} r^{|k-l|} b_l \qquad \text{for all $k\geq 0$}
\end{align}
for some $r=r(\eta)\in (2^{-\gamma},1)$. Moreover, $r\downarrow 2^{-\gamma}$ as $\eta\downarrow 0$.
\end{lemma}
\begin{proof}
Our proof follows a well-travelled path.
By decreasing entries in $b_k$ we can achieve equality in \eqref{Gron rec}; since this also reduces the right-hand side
of \eqref{Gron bound}, it suffices to prove the lemma in this case. Note that since $x_k\in\ell^\infty$, $b_k$ will remain
a bounded sequence.
Let $A$ denote the doubly infinite matrix with entries $A_{k,l}=2^{-\gamma|k-l|}$ and let $P$ denote the natural projection
from $\ell^2({\mathbb{Z}})$ onto $\ell^2({\mathbb{Z}}^+)$. Our goal is to show that \eqref{Gron bound} holds for any solution of
\begin{equation}\label{groneq}
(1-\eta PAP^*)x =b.
\end{equation}
First we observe that the operator norm of $A$ on $\ell^\infty$ obeys
$$
\|A\|_{\ell^\infty\to\ell^\infty}=\sum_{k\in{\mathbb{Z}}} 2^{-\gamma|k|} = \frac{1+2^{-\gamma}}{1-2^{-\gamma}},
$$
which, combined with $\eta<\tfrac12(1-2^{-\gamma})\leq \tfrac{1-2^{-\gamma}}{1+2^{-\gamma}}$, shows that $\eta A$ is a contraction on $\ell^\infty$. Thus we may write
$$
x = \sum_{p=0}^\infty (\eta PAP^*)^p b \leq \sum_{p=0}^\infty P (\eta A)^p P^* b = P (1-\eta A)^{-1} P^* b,
$$
where the inequality is meant entry-wise. The justification for this inequality is simply that the matrix $A$
has non-negative entries. We will complete the proof of \eqref{Gron bound} by computing the entries of $(1-\eta A)^{-1}$.
This is easily done via Fourier methods: Let
$$
a(z) := \sum_{k\in{\mathbb{Z}}} 2^{-\gamma|k|} z^k = 1 + \frac{2^{-\gamma}z}{1-2^{-\gamma}z} + \frac{2^{-\gamma}z^{-1}}{1-2^{-\gamma}z^{-1}}
$$
and
\begin{align*}
f(z):= \frac{1}{1-\eta a(z)} &= \frac{(z-2^\gamma)(z-2^{-\gamma})}{z^2-(2^{-\gamma}+2^{\gamma}-\eta2^{\gamma}+\eta2^{-\gamma})z+1} \\
&=1 + \frac{(1-r2^{-\gamma})(r2^{\gamma}-1)}{(1-r^2)} \Bigl[ 1 + \frac{rz}{1-rz} + \frac{rz^{-1}}{1-rz^{-1}}\Bigr],
\end{align*}
where $r\in(0,1)$ and $1/r$ are the roots of $z^2-(2^{-\gamma}+2^{\gamma}-\eta2^{\gamma}+\eta2^{-\gamma})z+1=0$. From this formula,
we can immediately read off the Fourier coefficients of $f$, which give us the matrix elements of $(1-\eta A)^{-1}$. In particular,
they are $O(r^{|k-l|})$.
\end{proof}
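Since the lemma is purely discrete, it can also be tested directly. The sketch below (with our own choice of parameters $\gamma=1$, $\eta=0.1$, and a truncation to $200$ terms) solves the recurrence with equality, computes $r$ from the quadratic appearing in the proof, and checks the bound $x_k\lesssim\sum_l r^{|k-l|}b_l$ entrywise.

```python
import numpy as np

gamma, eta, K = 1.0, 0.1, 200           # eta < (1 - 2^{-gamma})/2 = 0.25
k = np.arange(K)
dist = np.abs(k[:, None] - k[None, :])
A = 2.0 ** (-gamma * dist)

rng = np.random.default_rng(2)
b = rng.random(K)
# Largest sequence obeying the (truncated) recurrence, achieved with equality:
x = np.linalg.solve(np.eye(K) - eta * A, b)

# r in (2^{-gamma}, 1) is the smaller root of z^2 - c z + 1 = 0, as in the proof:
c = 2.0 ** -gamma + 2.0 ** gamma - eta * 2.0 ** gamma + eta * 2.0 ** -gamma
r = (c - np.sqrt(c * c - 4.0)) / 2.0

bound = (r ** dist) @ b                 # sum over l of r^{|k-l|} b_l
```

For these parameters $r\approx 0.56$, and the matrix elements of $(1-\eta A)^{-1}$ computed in the proof give an implicit constant close to $1$; the factor $2$ used in the check below is a safe margin.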
\section{Reduction to almost periodic solutions}\label{S:Reduct}
The goal of this section is to prove Theorem~\ref{T:reduct}. In order to achieve this, we will first prove a Palais-Smale
condition modulo symmetries.
For any $0\leq E_0\le \|\nabla W\|_2^2$, we define
$$
L(E_0):=\sup \{S(u):\, u: I\times\R^d\to {\mathbb{C}} \text{ a solution to \eqref{nls} with } \sup_{t\in I}\|\nabla u(t)\|_2^2 \leq E_0\},
$$
where the supremum is taken over all such solutions $u$ and all time intervals $I$.
Thus, $L:\bigl[0, \|\nabla W\|_2^2\bigr] \to [0, \infty]$ is a non-decreasing function with
$L\bigl(\|\nabla W\|_2^2\bigr)=\infty$. Moreover, from Theorem~\ref{T:local},
\begin{align*}
L(E_0)\lesssim_d E_0^{\frac{d+2}{d-2}} \quad \text{for} \quad E_0\leq \eta_0,
\end{align*}
where $\eta_0=\eta_0(d)$ is the threshold from the small data theory.
From Lemma~\ref{L:stab}, we see that $L$ is continuous. Therefore, there must exist a unique \emph{critical kinetic energy}
$E_c$ such that $L(E_0)<\infty$ for $E_0<E_c$ and $L(E_0)=\infty$ for $E_0\geq E_c$. In particular, if $u: I\times\R^d\to {\mathbb{C}}$ is a
maximal-lifespan solution to \eqref{nls} such that $\sup_{t\in I}\|\nabla u(t)\|_2^2 < E_c$, then $u$ is global and
$$
S_{\mathbb{R}}(u)\leq L\bigl(\sup_{t\in I}\|\nabla u(t)\|_2^2\bigr).
$$
Failure of Conjecture~\ref{conj} is equivalent to $0 < E_c < \|\nabla W\|_2^2$.
\subsection{The key convergence result}
In this subsection we prove the following
\begin{proposition}[Palais-Smale condition modulo symmetries]\label{P:palais-smale}
Fix $d\geq 3$. Let $u_n:I_n\times{\mathbb{R}}^d\to {\mathbb{C}}$ be a sequence of solutions to \eqref{nls} such that
\begin{align}\label{max ke}
\limsup_{n\to \infty} \sup_{t\in I_n}\|\nabla u_n(t)\|_2^2 =E_c
\end{align}
and let $t_n\in I_n$ be a sequence of times such that
\begin{equation*}
\lim_{n\to \infty} S_{\ge t_n}(u_n) = \lim_{n\to \infty} S_{\le t_n}(u_n) = \infty.
\end{equation*}
Then the sequence $u_n(t_n)$ has a subsequence which converges in $\dot H^1_x({\mathbb{R}}^d)$ modulo symmetries.
\end{proposition}
\begin{proof}
Using the time-translation symmetry of \eqref{nls}, we may set $t_n=0$ for all $n\geq 1$. Thus,
\begin{equation}\label{blow up in two}
\lim_{n\to \infty} S_{\ge 0} (u_n) = \lim_{n\to \infty} S_{\le 0}(u_n) = \infty.
\end{equation}
Applying Lemma~\ref{L:cc} to the sequence $u_n(0)$ (which is bounded in $\dot H^1_x({\mathbb{R}}^d)$ by \eqref{max ke}) and passing to a
subsequence if necessary, we obtain the decomposition
$$
u_n(0) = \sum_{j=1}^J g_n^j e^{it_n^j\Delta} \phi^j + w_n^J
$$
as in Lemma~\ref{L:cc}.
Refining the subsequence once for every $j$ and using a diagonal argument, we may assume that for each $j$, the sequence
$\{t_n^j\}_{n\geq 1}$ converges to some $t^j\in [-\infty, \infty]$. If $t^j\in (-\infty, \infty)$, then by replacing $\phi^j$ by
$e^{i t^j \Delta}\phi^j$, we may assume that $t^j=0$; moreover, absorbing the error $e^{i t_n^j \Delta}\phi^j - \phi^j$ into the
error term $w_n^J$, we may assume that $t_n^j\equiv 0$. Thus, either $t_n^j\equiv 0$ or $t_n^j\to\pm \infty$.
We now define the nonlinear profiles $v^j:I^j\times{\mathbb{R}}^d \to {\mathbb{C}}$ associated to $\phi^j$ and $t_n^j$ as follows:
\begin{CI}
\item If $t_n^j\equiv 0$, then $v^j$ is the maximal-lifespan solution to \eqref{nls} with initial data $v^j(0)=\phi^j$.
\item If $t_n^j\to \infty$, then $v^j$ is the maximal-lifespan solution to \eqref{nls} that scatters forward in time to $e^{it\Delta}\phi^j$.
\item If $t_n^j\to -\infty$, then $v^j$ is the maximal-lifespan solution to \eqref{nls} that scatters backward in time to $e^{it\Delta}\phi^j$.
\end{CI}
For each $j,n\geq 1$, we introduce $v_n^j:I_n^j\times{\mathbb{R}}^d\to {\mathbb{C}}$ defined by
$$
v_n^j(t):= T_{g_n^j}\bigl[ v^j(\cdot + t_n^j)\bigr](t),
$$
where $I_n^j:=\{t\in {\mathbb{R}}:\, (\lambda_n^j)^{-2} t + t_n^j \in I^j\}$. Each $v_n^j$ is a solution to \eqref{nls} with initial data at
time $t=0$ given by $v_n^j(0)=g_n^j v^j(t_n^j)$ and maximal lifespan $I_n^j= (-T^-_{n,j}, T^+_{n,j})$, where $-\infty\leq
-T^-_{n,j}<0<T^+_{n,j}\leq \infty$.
By \eqref{decouple}, there exists $J_0\geq 1$ such that
$$
\|\nabla \phi^j \|_2\leq \eta_0 \quad \text{for all} \quad j\geq J_0,
$$
where $\eta_0=\eta_0(d)$ is the threshold for the small data theory. Hence, by Theorem~\ref{T:local}, for all $n\geq 1$ and all
$j\geq J_0$ the solutions $v_n^j$ are global and moreover,
\begin{align}\label{tail}
\sup_{t\in {\mathbb{R}}} \|\nabla v_n^j(t)\|_2^2 + S_{\mathbb{R}}(v_n^j)\lesssim \|\nabla \phi^j \|_2^2.
\end{align}
\begin{lemma}[At least one bad profile]\label{L:bad profile}
There exists $1\leq j_0<J_0$ such that
$$
\limsup_{n\to \infty} S_{[0, T^+_{n,j_0})}(v_n^{j_0})=\infty.
$$
\end{lemma}
\begin{proof}
Assume for a contradiction that for all $1\leq j<J_0$,
\begin{align}\label{S ass}
\limsup_{n\to \infty} S_{[0, T^+_{n,j})}(v_n^j)< \infty.
\end{align}
In particular, this implies $T^+_{n,j}=\infty$ for all $1\leq j<J_0$ and all sufficiently large $n$. Moreover, subdividing
$[0,\infty)$ into intervals where the scattering size of $v_n^j$ is small, applying the Strichartz inequality on each such
interval, and then summing, we obtain
\begin{align}\label{S1 ass}
\limsup_{n\to \infty} \|v_n^j\|_{\dot S^1([0,\infty))}< \infty
\end{align}
for all $1\leq j<J_0$.
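To spell out the subdivision argument: fixing a small constant $\delta=\delta(d)>0$, we may partition $[0,\infty)$ into
finitely many intervals $I_k$, with left endpoints $t_k$, on which $S_{I_k}(v_n^j)\leq \delta$. On each such interval, the
Strichartz inequality and H\"older give
\begin{align*}
\|v_n^j\|_{\dot S^1(I_k)}
\lesssim \|v_n^j(t_k)\|_{\dot H^1_x({\mathbb{R}}^d)}
+ S_{I_k}(v_n^j)^{\frac2{d+2}} \|v_n^j\|_{\dot S^1(I_k)}
\lesssim \|v_n^j(t_k)\|_{\dot H^1_x({\mathbb{R}}^d)} + \delta^{\frac2{d+2}}\|v_n^j\|_{\dot S^1(I_k)}.
\end{align*}
Choosing $\delta$ small enough to absorb the last term into the left-hand side and summing over the finitely many
intervals $I_k$ yields \eqref{S1 ass}.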
Combining \eqref{S ass} with \eqref{tail}, and then using \eqref{decouple} and \eqref{max ke},
\begin{equation}\label{S decouple}
\sum_{j\geq 1} S_{[0,\infty)}(v_n^j)\lesssim 1 + \sum_{j\geq J_0}\|\nabla \phi^j \|_2^2 \lesssim 1 + E_c
\end{equation}
for all $n$ sufficiently large.
From these assumptions, we will deduce a bound on the scattering size of $u_n$ forward in time (for $n$ sufficiently large),
thus contradicting \eqref{blow up in two}. In order to achieve this, we will use Lemma~\ref{L:stab}. To this end, we define the
approximation
$$
u_n^J(t):=\sum_{j=1}^J v_n^j(t) + e^{it\Delta}w_n^J.
$$
Note that
\begin{align*}
\|u_n^J(0)-u_n(0)\|_{\dot H^1_x({\mathbb{R}}^d)}
&\lesssim \bigl\| \sum_{j=1}^J \bigl( g_n^j v^j(t_n^j) - g_n^j e^{it_n^j\Delta} \phi^j\bigr) \bigr\|_{\dot H^1_x({\mathbb{R}}^d)}\\
&\lesssim \sum_{j=1}^J \bigl\| v^j(t_n^j) - e^{it_n^j\Delta} \phi^j\bigr\|_{\dot H^1_x({\mathbb{R}}^d)},
\end{align*}
and hence, by our choice of $v^j$,
$$
\limsup_{n\to \infty}\|u_n(0)-u_n^J(0)\|_{\dot H^1_x({\mathbb{R}}^d)} =0.
$$
We now show that $u_n^J$ does not blow up forward in time. Indeed, by \eqref{crazy} and the fact that $v_n^j$ does not blow up
forward in time for any $j\geq1$ and all $n$ sufficiently large,
\begin{align*}
\limsup_{n\to \infty} S_{[0,\infty)}\bigl(|v_n^j|^{1-\theta} |v_n^{j'}|^\theta \bigr) = 0
\end{align*}
for any $0<\theta<1$ and $j\neq j'$; see \cite{keraani-h1}. Thus, by \eqref{w scat} and \eqref{S decouple},
\begin{equation}\label{S unj}
\begin{aligned}
\lim_{J\to \infty}\limsup_{n\to \infty} S_{[0,\infty)}(u_n^J) &\lesssim \lim_{J\to \infty} \limsup_{n\to \infty} \Bigl(
S_{[0,\infty)}\bigl( \sum_{j=1}^J v_n^j\bigr)
+ S_{[0,\infty)}\bigl(e^{it\Delta}w_n^J\bigr)\Bigr)\\
&\lesssim \lim_{J\to \infty} \limsup_{n\to \infty} \sum_{j=1}^J S_{[0,\infty)}(v_n^j) \lesssim 1 + E_c.
\end{aligned}
\end{equation}
By the same argument as that used to derive \eqref{S1 ass} from \eqref{S ass}, we obtain
\begin{align}\label{S1 unj}
\lim_{J\to \infty}\limsup_{n\to \infty} \|u_n^J\|_{\dot S^1([0, \infty))} \leq C(E_c)<\infty.
\end{align}
To apply Lemma~\ref{L:stab}, it suffices to show that $u_n^J$ asymptotically solves \eqref{nls} in the sense that
\begin{align*}
\lim_{J\to\infty} \limsup_{n \to \infty}
\bigl\| \nabla \bigl[(i \partial_t + \Delta) u_n^J - F(u_n^J) \bigr]\bigr\|_{L_{t,x}^{\frac{2(d+2)}{d+4}}([0,\infty)\times{\mathbb{R}}^d)} =0,
\end{align*}
which by the triangle inequality reduces to proving
\begin{align}\label{eq good approx 1}
\lim_{J\to\infty} \limsup_{n \to \infty}
\Bigl\| \nabla \Bigl[\sum_{j=1}^J F(v_n^j) - F\bigl(\sum_{j=1}^J v_n^j\bigr) \Bigr]\Bigr\|_{L_{t,x}^{\frac{2(d+2)}{d+4}}([0,\infty)\times{\mathbb{R}}^d)} =0
\end{align}
and
\begin{align}\label{eq good approx 2}
\lim_{J\to\infty} \limsup_{n \to \infty}
\bigl\| \nabla \bigl[F\bigl( u_n^J-e^{it\Delta} w_n^J\bigr) - F(u_n^J)\bigr] \bigr\|_{L_{t,x}^{\frac{2(d+2)}{d+4}}([0,\infty)\times{\mathbb{R}}^d)} =0.
\end{align}
The arguments we use to prove \eqref{eq good approx 1} and \eqref{eq good approx 2} owe much to the proof of
\cite[Proposition 3.4]{keraani-h1}, particularly to Keraani's treatment of the most delicate point, \eqref{to prove 2}.
We are grateful to C.~Kenig for drawing our attention to this aspect of Keraani's work.
We first address \eqref{eq good approx 1}. Note that we can write
$$
\Bigl|\nabla \Bigl(\sum_{j=1}^J F(f_j) - F\bigl(\sum_{j=1}^J f_j\bigr) \Bigr) \Bigr| \lesssim_J \sum_{j\neq j'} |\nabla f_j|
|f_{j'}|^{\frac{4}{d-2}}.
$$
Next, recall that by \eqref{tail} and \eqref{S1 ass}, $v_n^j\in \dot S^1([0,\infty))$ for all $j\geq 1$ and all $n$ sufficiently
large; invoking \eqref{crazy}, a simple computation shows
$$
\limsup_{n\to \infty} \bigl\| |v_n^{j'}|^{\frac{4}{d-2}} \nabla v_n^j \bigr\|_{L_{t,x}^{\frac{2(d+2)}{d+4}}([0,\infty)\times{\mathbb{R}}^d)} =0
$$
for any $j\neq j'$; see \cite{keraani-h1}. Thus,
\begin{align*}
\limsup_{n \to \infty} \Bigl\|\nabla \Bigl[ \sum_{j=1}^J F(v_n^j) & - F\bigl(\sum_{j=1}^J v_n^j\bigr)\Bigr] \Bigr\|_{L_{t,x}^{\frac{2(d+2)}{d+4}}([0,\infty)\times{\mathbb{R}}^d)}\\
&\lesssim_J \limsup_{n \to \infty}\sum_{j\neq j'} \bigl\|\nabla v_n^j |v_n^{j'}|^{\frac{4}{d-2}} \bigr\|_{L_{t,x}^{\frac{2(d+2)}{d+4}}([0,\infty)\times{\mathbb{R}}^d)} =0
\end{align*}
and \eqref{eq good approx 1} follows.
We now consider \eqref{eq good approx 2}. In what follows, all spacetime norms are taken on the slab $[0, \infty)\times{\mathbb{R}}^d$, unless noted
otherwise. In dimensions $d\geq 6$,
\begin{align*}
\bigl\| \nabla \bigl[F\bigl( u_n^J-e^{it\Delta} w_n^J\bigr) - F(u_n^J) \bigr]\bigr\|_{L_{t,x}^{\frac{2(d+2)}{d+4}}}
&\lesssim \|\nabla e^{it\Delta} w_n^J\|_{L_{t,x}^{\frac{2(d+2)}d}}
\|e^{it\Delta} w_n^J\|_{L_{t,x}^{\frac{2(d+2)}{d-2}}}^{\frac4{d-2}}\\
&\quad +\|\nabla u_n^J\|_{L_{t,x}^{\frac{2(d+2)}d}}
\|e^{it\Delta} w_n^J\|_{L_{t,x}^{\frac{2(d+2)}{d-2}}}^{\frac4{d-2}}\\
&\quad + \bigl\| |u_n^J|^{\frac4{d-2}} \nabla e^{it\Delta} w_n^J \bigr\|_{L_{t,x}^{\frac{2(d+2)}{d+4}}}
\end{align*}
by H\"older's inequality. In dimensions $d=3,4,5$, one must add the term
$$
\|\nabla u_n^J\|_{L_{t,x}^{\frac{2(d+2)}d}} \|e^{it\Delta} w_n^J\|_{L_{t,x}^{\frac{2(d+2)}{d-2}}}
\|u_n^J\|_{L_{t,x}^{\frac{2(d+2)}{d-2}}}^{\frac{6-d}{d-2}}
$$
to the right-hand side above.
Using \eqref{w scat}, \eqref{S unj}, \eqref{S1 unj}, and the Strichartz inequality combined with the fact that $w_n^J$
is bounded in $\dot H^1_x$, we see that the claim \eqref{eq good approx 2} follows once we establish
\begin{align}\label{to prove}
\lim_{J\to\infty} \limsup_{n \to \infty}
\bigl\| |u_n^J|^{\frac4{d-2}} \nabla e^{it\Delta} w_n^J \bigr\|_{L_{t,x}^{\frac{2(d+2)}{d+4}}([0, \infty)\times{\mathbb{R}}^d)}=0.
\end{align}
By H\"older, \eqref{S unj}, and the Strichartz inequality,
\begin{align*}
\bigl\| |u_n^J|^{\frac4{d-2}}& \nabla e^{it\Delta} w_n^J \bigr\|_{L_{t,x}^{\frac{2(d+2)}{d+4}}}\\
&\lesssim \| u_n^J \|_{L_{t,x}^{\frac{2(d+2)}{d-2}}}^{\frac3{d-2}}
\bigl\|\nabla e^{it\Delta} w_n^J \bigr\|_{L_{t,x}^{\frac{2(d+2)}d}}^{\frac{d-3}{d-2}}
\bigl\| u_n^J \nabla e^{it\Delta} w_n^J \bigr\|_{L_{t,x}^{\frac{d+2}{d-1}}}^{\frac1{d-2}}\\
&\lesssim \bigl\| \bigl(\sum_{j=1}^J v_n^j\bigr) \nabla e^{it\Delta} w_n^J \bigr\|_{L_{t,x}^{\frac{d+2}{d-1}}}^{\frac1{d-2}}
+ \|e^{it\Delta} w_n^J\|_{L_{t,x}^{\frac{2(d+2)}{d-2}}}^{\frac1{d-2}}
\|\nabla e^{it\Delta} w_n^J\|_{L_{t,x}^{\frac{2(d+2)}d} }^{\frac1{d-2}}\\
&\lesssim \bigl\| \bigl(\sum_{j=1}^J v_n^j \bigr) \nabla e^{it\Delta} w_n^J \bigr\|_{L_{t,x}^{\frac{d+2}{d-1}}}^{\frac1{d-2}}
+ \|e^{it\Delta} w_n^J\|_{L_{t,x}^{\frac{2(d+2)}{d-2}}}^{\frac1{d-2}}.
\end{align*}
Invoking \eqref{w scat}, proving \eqref{to prove} reduces to proving
\begin{align}\label{to prove 1}
\lim_{J\to\infty} \limsup_{n \to \infty}
\bigl\| \bigl(\sum_{j=1}^J v_n^j\bigr) \nabla e^{it\Delta} w_n^J \bigr\|_{L_{t,x}^{\frac{d+2}{d-1}}([0, \infty)\times{\mathbb{R}}^d)}=0.
\end{align}
Let $\eta>0$. By \eqref{S decouple}, we see that there exists $J'=J'(\eta)\geq 1$ such that
$$
\sum_{j\geq J'} S_{[0, \infty)}(v_n^j) \leq \eta
$$
for all $n$ sufficiently large.
Thus, using H\"older's inequality and arguing as for \eqref{S unj},
\begin{align*}
\limsup_{n\to \infty} \bigl\| \bigl(\sum_{j=J'}^J v_n^j\bigr) \nabla e^{it\Delta} w_n^J \bigr\|_{L_{t,x}^{\frac{d+2}{d-1}}}^{\frac{2(d+2)}{d-2}}
&\lesssim \limsup_{n\to \infty} \Bigl( \sum_{j\geq J'} S_{[0, \infty)}(v_n^j) \Bigr) \|\nabla e^{it\Delta} w_n^J\|_{L_{t,x}^{\frac{2(d+2)}d} }^{\frac{2(d+2)}{d-2}} \\
&\lesssim \eta.
\end{align*}
As $\eta>0$ is arbitrary, proving \eqref{to prove 1} reduces to showing
\begin{align}\label{to prove 2}
\lim_{J\to\infty} \limsup_{n \to \infty}\| v_n^j \nabla e^{it\Delta} w_n^J \|_{L_{t,x}^{\frac{d+2}{d-1}}([0, \infty)\times{\mathbb{R}}^d)}=0
\quad \text{for}\quad 1\leq j\leq J'.
\end{align}
Fix $1\leq j\leq J'$. By a change of variables,
\begin{align*}
\| v_n^j \nabla e^{it\Delta} w_n^J \|_{L_{t,x}^{\frac{d+2}{d-1}}}
=\bigl\| v^j \nabla {\tilde w_n^J} \bigr\|_{L_{t,x}^{\frac{d+2}{d-1}}},
\end{align*}
where $\tilde w_n^J:=\bigl[T_{(g_n^j)^{-1}} \bigl(e^{it\Delta} w_n^J\bigr)\bigr](\cdot-t_n^j) $. Note that
\begin{align}\label{tilde w}
S_{{\mathbb{R}}}(\tilde w_n^J) = S_{{\mathbb{R}}}(e^{it\Delta} w_n^J) \quad \text{and} \quad
\|\nabla\tilde w_n^J\|_{L_{t,x}^{\frac{2(d+2)}d}} = \|\nabla e^{it\Delta} w_n^J\|_{L_{t,x}^{\frac{2(d+2)}d}}.
\end{align}
By density, we may assume $v^j\in C^\infty_c({\mathbb{R}}\times{\mathbb{R}}^d)$. Invoking H\"older's inequality, it thus suffices to show
\begin{align}\label{to prove 2a}
\lim_{J\to\infty} \limsup_{n \to \infty}\| \nabla \tilde w_n^J \|_{L_{t,x}^2(K)}=0
\end{align}
for any compact $K\subset {\mathbb{R}}\times{\mathbb{R}}^d$. This is a consequence of Lemma~\ref{L:Keraani3.7}, \eqref{tilde w}, and \eqref{w scat}.
Tracing back through the argument we see that we have verified \eqref{to prove 1} and hence \eqref{eq good approx 2}.
We are now in a position to apply Lemma~\ref{L:stab}; invoking \eqref{S unj}, we conclude that for $n$ sufficiently large,
\begin{align}\label{S un}
S_{[0,\infty)}(u_n) \lesssim 1 + E_c,
\end{align}
thus contradicting \eqref{blow up in two}. This finishes the proof of Lemma~\ref{L:bad profile}.
\end{proof}
Returning to the proof of Proposition~\ref{P:palais-smale} and rearranging the indices, we may assume that there exists $1\leq
J_1 <J_0$ such that
$$
\limsup_{n\to \infty}S_{[0, T^+_{n,j})}(v_n^j)=\infty \text{ for } 1\leq j\leq J_1 \ \ \text{and} \ \ %
\limsup_{n\to \infty} S_{[0,\infty)}(v_n^j)<\infty \text{ for } j> J_1.
$$
Passing to a subsequence in $n$, we can guarantee that $S_{[0, T^+_{n,1})}(v_n^1)\to\infty$.
For each $m,n\geq 1$ let us define an integer $j(m,n)\in \{1, \ldots, J_1\}$ and an interval $K^m_n$ of the form $[0, \tau]$ by
\begin{equation}\label{Kdefn}
\sup_{1\leq j \leq J_1} S_{K^m_n}(v_n^j) = S_{K^m_n}(v_n^{j(m,n)}) = m.
\end{equation}
By the pigeonhole principle, there is a $1\leq j_1\leq J_1$ so that for infinitely many $m$ one has $j(m,n)=j_1$ for infinitely
many $n$. Note that the infinite set of $n$ for which this holds may be $m$-dependent. By reordering the indices, we may assume
that $j_1=1$. Then, by the definition of the critical kinetic energy, we obtain
\begin{align}\label{baddie}
\limsup_{m \to \infty} \; \limsup_{n\to \infty} \; \sup_{t\in K_n^m} \|\nabla v_n^1(t)\|_2^2 \geq E_c.
\end{align}
On the other hand, by virtue of \eqref{Kdefn}, all $v_n^j$ have finite scattering size on $K^m_n$ for each $m\geq 1$. Thus, by
the same argument used in Lemma~\ref{L:bad profile}, we see that for $n$ and $J$ sufficiently large, $u_n^J$ is a good
approximation to $u_n$ on each $K_n^m$. More precisely,
\begin{align}\label{good approx ke}
\lim_{J\to \infty} \limsup_{n\to \infty} \| u_n^J - u_n\|_{L_t^\infty \dot H^1_x (K_n^m\times{\mathbb{R}}^d)}=0
\end{align}
for each $m\geq 1$.
Our next result establishes asymptotic kinetic energy decoupling for $u_n^J$ on the intervals $K_n^m$.
\begin{lemma}[Kinetic energy decoupling for $u_n^J$]\label{L:decouple ke}
For all $J\geq 1$ and $m\geq1$,
\begin{align*}
\limsup_{n\to \infty} \sup_{t\in K^m_n} \Bigl|\|\nabla u_n^J(t)\|^2_2 -\sum_{j=1}^J \|\nabla v_n^j(t)\|_2^2 - \|\nabla w_n^J\|_2^2
\Bigr|=0.
\end{align*}
\end{lemma}
\begin{proof}
Fix $J\geq 1$ and $m\geq1$. Then, for all $t\in K^m_n$,
\begin{align*}
\|\nabla u_n^J(t)\|_2^2
&= \langle \nabla u_n^J(t), \nabla u_n^J(t)\rangle\\
&= \sum_{j=1}^J \|\nabla v_n^j(t)\|_2^2 + \|\nabla w_n^J\|_2^2 + \sum_{j\neq j'} \langle \nabla v_n^j(t), \nabla v_n^{j'} (t)\rangle\\
&\quad + \sum_{j=1}^J \bigl( \bigl\langle \nabla e^{it\Delta}w_n^J , \nabla v_n^j(t) \bigr\rangle
+ \bigl\langle \nabla v_n^j(t), \nabla e^{it\Delta}w_n^J \bigr\rangle \bigr).
\end{align*}
To prove Lemma~\ref{L:decouple ke}, it thus suffices to show that for all sequences $t_n\in K_n^m$,
\begin{align}\label{orthog'}
\langle \nabla v_n^j(t_n), \nabla v_n^{j'} (t_n)\rangle \to 0 \quad \text{as } n\to \infty
\end{align}
and
\begin{align}\label{orthog}
\bigl\langle \nabla e^{it_n\Delta}w_n^J , \nabla v_n^j(t_n) \bigr\rangle \to 0 \quad \text{as } n\to \infty
\end{align}
for all $1\leq j,j'\leq J$ with $j\neq j'$. We will only demonstrate the latter, which requires Lemma~\ref{L:strong decouple};
the former can be deduced in much the same manner using \eqref{crazy}.
By a change of variables,
\begin{align}\label{rescale}
\bigl\langle \nabla e^{it_n\Delta}w_n^J , \nabla v_n^j(t_n) \bigr\rangle = \bigl\langle \nabla e^{i t_n (\lambda_n^j)^{-2}\Delta}[
(g_n^j)^{-1}w_n^J] , \nabla v^j\bigl(\tfrac{t_n}{(\lambda_n^j)^2} +t_n^j \bigr) \bigr\rangle.
\end{align}
As $t_n\in K_n^m \subset [0, T_{n,j}^+)$ for all $1\leq j\leq J_1$, we have $t_n(\lambda_n^j)^{-2} +t_n^j \in I^j$ for all $j\geq
1$. Recall that $I^j$ is the maximal lifespan of $v^j$; for $j>J_1$ this is ${\mathbb{R}}$. By refining the sequence once for every $j$
and using the standard diagonalisation argument, we may assume $t_n(\lambda_n^j)^{-2} +t_n^j$ converges for every $j$.
Fix $1\leq j\leq J$. If $t_n(\lambda_n^j)^{-2} +t_n^j$ converges to some point $\tau^j$ in the interior of $I^j$, then by the
continuity of the flow, $v^j\bigl(t_n(\lambda_n^j)^{-2} +t_n^j \bigr)$ converges to $v^j(\tau^j)$ in $\dot H^1_x({\mathbb{R}}^d)$. On the
other hand, by \eqref{decouple},
\begin{align}\label{bounded}
\limsup_{n\to \infty} \bigl\| e^{it_n(\lambda_n^j)^{-2}\Delta}[ (g_n^j)^{-1}w_n^J]\bigr\|_{\dot H^1_x({\mathbb{R}}^d)} =\limsup_{n\to
\infty} \|w_n^J\|_{\dot H^1_x({\mathbb{R}}^d)} \lesssim E_c.
\end{align}
Combining this with \eqref{rescale}, we obtain
\begin{align*}
\lim_{n \to \infty}\bigl\langle \nabla e^{it_n\Delta}w_n^J , \nabla v_n^j(t_n) \bigr\rangle
&= \lim_{n \to \infty}\bigl\langle \nabla e^{it_n(\lambda_n^j)^{-2}\Delta}[ (g_n^j)^{-1}w_n^J] , \nabla v^j (\tau^j) \bigr\rangle\\
&= \lim_{n \to \infty}\bigl\langle \nabla e^{-it_n^j\Delta}[ (g_n^j)^{-1}w_n^J] , \nabla e^{-i \tau^j\Delta} v^j (\tau^j)
\bigr\rangle.
\end{align*}
Invoking Lemma~\ref{L:strong decouple}, we deduce \eqref{orthog}.
Consider now the case when $t_n(\lambda_n^j)^{-2} +t_n^j$ converges to $\sup I^j$. Then we must have $\sup I^j=\infty$ and $v^j$
scatters forward in time. This is clearly true if $t_n^j\to \infty$ as $n\to \infty$; in the other cases, failure would imply
$$
\limsup_{n\to \infty} S_{[0,t_n]}(v_n^j) = \limsup_{n\to \infty} S_{\bigl[t_n^j,t_n(\lambda_n^j)^{-2} + t_n^j\bigr]}(v^j) =\infty,
$$
which contradicts $t_n\in K_n^m$. Therefore, there exists $\psi^j\in \dot H^1_x({\mathbb{R}}^d)$ such that
$$
\lim_{n\to \infty} \Bigl\| v^j \bigl(t_n(\lambda_n^j)^{-2} +t_n^j\bigr)
- e^{i\bigl(t_n(\lambda_n^j)^{-2} +t_n^j\bigr)\Delta}\psi^j \Bigr\|_{\dot H^1_x({\mathbb{R}}^d)} = 0.
$$
Together with \eqref{rescale}, this yields
$$
\lim_{n \to \infty}\bigl\langle \nabla e^{it_n\Delta}w_n^J , \nabla v_n^j(t_n) \bigr\rangle = \lim_{n \to \infty}\bigl\langle
\nabla e^{-it_n^j\Delta}[ (g_n^j)^{-1}w_n^J] , \nabla \psi^j \bigr\rangle,
$$
which by Lemma~\ref{L:strong decouple} implies \eqref{orthog}.
Finally, we consider the case when $t_n(\lambda_n^j)^{-2} +t_n^j$ converges to $\inf I^j$. Since $t_n(\lambda_n^j)^{-2}\geq 0$
and $\inf I^j<\infty$ for all $j\geq 1$ we see that $t_n^j$ does not converge to $+\infty$. Moreover, if $t_n^j\equiv 0$, then
$\inf I^j<0$; as $t_n(\lambda_n^j)^{-2}\geq 0$, we see that $t_n^j$ cannot be identically zero. This leaves $t_n^j\to -\infty$ as
$n\to \infty$. Thus $\inf I^j=-\infty$ and $v^j$ scatters backward in time to $e^{it\Delta}\phi^j$. We obtain
$$
\lim_{n\to \infty} \Bigl\| v^j \bigl(t_n(\lambda_n^j)^{-2} +t_n^j\bigr)
- e^{i\bigl(t_n(\lambda_n^j)^{-2} +t_n^j\bigr)\Delta}\phi^j \Bigr\|_{\dot H^1_x({\mathbb{R}}^d)} = 0,
$$
which by \eqref{rescale} implies
$$
\lim_{n \to \infty}\bigl\langle \nabla e^{it_n\Delta}w_n^J , \nabla v_n^j(t_n) \bigr\rangle = \lim_{n \to \infty}\bigl\langle
\nabla e^{-it_n^j\Delta}[ (g_n^j)^{-1}w_n^J] , \nabla \phi^j \bigr\rangle.
$$
Invoking Lemma~\ref{L:strong decouple} once again, we derive \eqref{orthog}.
This finishes the proof of Lemma~\ref{L:decouple ke}.
\end{proof}
We now return to the proof of Proposition~\ref{P:palais-smale}. By \eqref{max ke}, \eqref{good approx ke}, and
Lemma~\ref{L:decouple ke},
\begin{align*}
E_c\geq \limsup_{n\to\infty} \sup_{t\in K^m_n} \|\nabla u_n(t)\|_2^2
= \lim_{J\to\infty} \limsup_{n\to\infty} \, \Bigl\{ \|\nabla w_n^J\|_2^2 + \sup_{t\in K^m_n} \sum_{j=1}^J \|\nabla v_n^j(t)\|_2^2 \Bigr\}.
\end{align*}
Invoking \eqref{baddie}, this implies $J_1=1$, $v_n^j\equiv0$ for all $j\geq 2$, and $w_n:=w_n^1$ converges to zero strongly in
$\dot H^1_x({\mathbb{R}}^d)$. In other words,
\begin{align}\label{just one}
u_n(0)=g_n e^{i\tau_n\Delta}\phi + w_n
\end{align}
for some $g_n\in G$, $\tau_n\in {\mathbb{R}}$, and some functions $\phi, w_n \in \dot H^1_x({\mathbb{R}}^d)$ with $w_n\to 0$ strongly in $\dot H^1_x({\mathbb{R}}^d)$.
Moreover, the sequence $\tau_n$ obeys $\tau_n\equiv 0$ or $\tau_n\to \pm \infty$.
If $\tau_n\equiv 0$, \eqref{just one} immediately implies that $u_n(0)$ converges modulo symmetries to $\phi$, which proves
Proposition~\ref{P:palais-smale} in this case.
Finally, we will show that this is the only possible case, that is, $\tau_n$ cannot converge to either $\infty$ or $-\infty$. We
argue by contradiction. Assume that $\tau_n$ converges to $\infty$; the proof in the negative time direction is essentially the
same. By the Strichartz inequality, $S_{\mathbb{R}}({e^{it\Delta}}\phi)<\infty$; thus we have
$$
\lim_{n\to \infty} S_{\ge 0}\bigl({e^{it\Delta}} e^{i\tau_n\Delta}\phi\bigr)=0.
$$
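Indeed, by a change of variables in time (recall that $S_I(u)$ denotes the spacetime integral of $|u|^{\frac{2(d+2)}{d-2}}$ over $I\times{\mathbb{R}}^d$),
$$
S_{\ge 0}\bigl({e^{it\Delta}} e^{i\tau_n\Delta}\phi\bigr) = S_{[\tau_n, \infty)}\bigl({e^{it\Delta}}\phi\bigr),
$$
which tends to zero as $\tau_n\to\infty$, being the tail of the convergent integral $S_{\mathbb{R}}({e^{it\Delta}}\phi)$.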
Since the action of $G$ preserves linear solutions and the scattering size, this implies
$$
\lim_{n\to \infty}S_{\ge 0}\bigl({e^{it\Delta}} g_n e^{i\tau_n\Delta}\phi\bigr)=0.
$$
Combining this with \eqref{just one} and $w_n\to0$ in $\dot H^1_x$, we conclude
$$
\lim_{n\to \infty}S_{\ge 0}\bigl({e^{it\Delta}} u_n(0)\bigr)=0.
$$
An application of Lemma~\ref{L:stab} yields
$$
\lim_{n\to \infty}S_{\ge 0}(u_n)=0,
$$
which contradicts \eqref{blow up in two}.
This completes the proof of Proposition~\ref{P:palais-smale}.
\end{proof}
\subsection{Proof of Theorem~\ref{T:reduct}}
Suppose $d\geq 3$ is such that Conjecture~\ref{conj} fails. Then the critical kinetic energy $E_c$ must obey $E_c<\|\nabla
W\|^2_2$. By the definition of the critical kinetic energy, we can find a sequence $u_n:I_n \times {\mathbb{R}}^d \to {\mathbb{C}}$ of solutions
to \eqref{nls} with $I_n$ compact,
\begin{align}\label{hyp}
\sup_{n\geq 1} \sup_{t\in I_n} \|\nabla u_n(t)\|_2^2 = E_c, \quad \text{and} \quad \lim_{n\to \infty}S_{I_n}(u_n)=\infty.
\end{align}
Let $t_n\in I_n$ be such that $S_{\geq t_n}(u_n)= S_{\leq t_n}(u_n)$. Then,
\begin{align}\label{hyp 2}
\lim_{n\to \infty} S_{\ge t_n}(u_n)=\lim_{n\to \infty}S_{\le t_n}(u_n)=\infty.
\end{align}
Using the time-translation symmetry of \eqref{nls}, we may take all $t_n=0$.
Applying Proposition~\ref{P:palais-smale} and passing to a subsequence if necessary, we can find $g_n\in G$ and a
function $u_0\in \dot H_x^1$ such that $g_n u_n(0)$ converges to $u_0$ strongly in $\dot H_x^1$. By
applying the group action $T_{g_n}$ to the solution $u_n$, we may take $g_n$ to all be the identity. Thus $u_n(0)$ converges
strongly to $u_0$ in $\dot H_x^1$.
Let $u: I\times{\mathbb{R}}^d \to {\mathbb{C}}$ be the maximal-lifespan solution to \eqref{nls} with initial data $u(0)=u_0$. As $u_n(0)\to u_0$ in $\dot H_x^1$,
Lemma~\ref{L:stab} shows that $I\subseteq \liminf I_n$ and
$$
\lim_{n\to \infty} \|u_n -u\|_{L_t^\infty \dot H^1_x(K\times{\mathbb{R}}^d)}=0,\quad\text{for all compact $K\subset I$}.
$$
Thus by \eqref{hyp},
\begin{align}\label{ke<crit}
\sup_{t\in I} \|\nabla u(t)\|_2^2\leq E_c.
\end{align}
Next we prove that $u$ blows up both forward and backward in time. Indeed, if $u$ does not blow up forward in time, then
$[0,\infty)\subset I$ and $S_{\ge 0}(u)< \infty$. By Lemma~\ref{L:stab}, this implies $S_{\ge 0}(u_n)<\infty$ for sufficiently
large $n$, which contradicts \eqref{hyp 2}. A similar argument proves that $u$ blows up backward in time.
Therefore, by our definition of $E_c$,
$$
\sup_{t\in I} \|\nabla u(t)\|_2^2\geq E_c.
$$
Combining this with \eqref{ke<crit}, we obtain
$$
\sup_{t\in I} \|\nabla u(t)\|_2^2 = E_c.
$$
It remains to show that $u$ is almost periodic modulo symmetries. Consider an arbitrary sequence $\tau_n \in I$. As $u$ blows up
in both time directions,
$$
S_{\ge \tau_n}(u)=S_{\le \tau_n}(u)=\infty.
$$
Applying Proposition~\ref{P:palais-smale}, we conclude that $u(\tau_n)$ admits a convergent subsequence in $\dot H^1_x({\mathbb{R}}^d)$
modulo symmetries. Thus the orbit $\{G u(t): \, t\in I\}$ is precompact in $G\backslash \dot H_x^1$. This concludes the
proof of Theorem~\ref{T:reduct}.\qed
\section{The enemies}\label{S:enemies}
In this section, we prove Theorem~\ref{T:enemies}. The argument owes much to \cite[\S4]{KTV};
indeed, readers seeking a fuller treatment of certain details may consult that paper.
Let $v:J\times{\mathbb{R}}^d\to{\mathbb{C}}$ denote a minimal kinetic energy blowup solution whose existence (under the hypotheses of
Theorem~\ref{T:enemies}) is guaranteed by Theorem~\ref{T:reduct}. We denote the symmetry parameters of $v$ by
$N_v(t)$ and $x_v(t)$. We will construct our solution $u$ by taking a subsequential limit of various normalizations of $v$:
\begin{definition}
Given $t_0\in J$, we define the \emph{normalisation} of $v$ at $t_0$ by
\begin{equation}\label{untn}
v^{[t_0]} := T_{g_{0, -x_v(t_0)N_v(t_0),N_v(t_0)}}\bigl( v( \cdot + t_0) \bigr).
\end{equation}
This solution is almost periodic and has symmetry parameters
$$
N_{v^{[t_0]}}(t) = \frac{N_v(t_0+tN_v(t_0)^{-2})}{N_v(t_0)}
\text{ and }
x_{v^{[t_0]}}(t) = N_v(t_0)[x_v(t_0+tN_v(t_0)^{-2})-x_v(t_0)].
$$
\end{definition}
Note that by the definition of almost periodicity, any sequence of $t_n\in J$ admits a subsequence so that
$v^{[t_n]}(0)$ converges in $\dot H^1_x$. Furthermore, if $u_0$ denotes this limit and $u:I\times{\mathbb{R}}^d\to{\mathbb{C}}$ denotes the
maximal-lifespan solution with $u(0)=u_0$, then $u$ is almost periodic modulo symmetries with the same compactness
modulus function as $v$. Lastly, $v^{[t_n]}\to u$ in $\dot S^1$ (along the subsequence) uniformly on any compact subset of $I$.
Our first goal is to find a soliton from among the normalizations of $v$ if this is at all possible. To this end,
for any $T \geq 0$, we define the quantity
\begin{equation}\label{cdef}
\osc(T) := \inf_{t_0 \in J} \,\frac{\sup\, \{ N_v(t) : t \in J \text{ and } |t-t_0| \leq T N_v(t_0)^{-2} \}}
{\inf\, \{ N_v(t) : t \in J \text{ and } |t-t_0| \leq T N_v(t_0)^{-2} \}},
\end{equation}
which measures the least possible oscillation that one can find in $N_v(t)$ on time intervals of normalised duration $T$.
\medskip
{\bf Case 1:} $\lim_{T \to\infty} \osc(T) < \infty$. Under this hypothesis, we will be able to extract a soliton-like solution.
Choose $t_n$ so that
$$
\limsup_{n\to\infty} \frac{\sup\, \{ N_v(t) : t \in J \text{ and } |t-t_n| \leq n N_v(t_n)^{-2} \}}
{\inf\, \{ N_v(t) : t \in J \text{ and } |t-t_n| \leq n N_v(t_n)^{-2} \}} <\infty.
$$
Then a few computations reveal that any subsequential limit $u$ of $v^{[t_n]}$ fulfils the requirements to be classed
as a soliton in the sense of Theorem~\ref{T:enemies}. In particular, $u$ is global because an almost periodic (modulo
symmetries) solution cannot blow up in finite time without its frequency scale function converging to infinity.
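To make the first claim concrete: if $C$ denotes the finite limsup above, then for $n$ sufficiently large, the definition of the normalisation gives
$$
N_{v^{[t_n]}}(t) = \frac{N_v\bigl(t_n + t N_v(t_n)^{-2}\bigr)}{N_v(t_n)} \in \bigl[(2C)^{-1}, 2C\bigr]
\quad \text{whenever } |t|\leq n,
$$
since $N_v(t_n)$ lies between the infimum and the supremum of $N_v$ over this window. Consequently, any subsequential limit $u$ obeys
$N_u(t)\in [(2C)^{-1}, 2C]$ for all $t\in{\mathbb{R}}$; modifying the compactness modulus function by a $C$-dependent factor, one may then take $N_u(t)\equiv 1$.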
\medskip
When $\osc(T)$ is unbounded, we must seek a solution belonging to one of the remaining two scenarios.
To aid in distinguishing between them, we introduce the quantity
$$
a(t_0) := \frac{ N_v(t_0) }{ \sup\,\{ N_v(t) : t \in J \text{ and } t \leq t_0\} }
+ \frac{ N_v(t_0) }{ \sup\,\{ N_v(t) : t \in J \text{ and } t \geq t_0\} }
$$
associated to each $t_0 \in J$. First we treat the case where $a(t_0)$ can be arbitrarily small. As we will see,
this may lead to either a finite-time blowup solution or to a cascade.
\medskip
{\bf Case 2:} $\lim_{T\to \infty} \osc(T) = \infty$ and $\inf_{t_0 \in J} a(t_0) = 0$. From the behaviour of $a(t_0)$
we may choose sequences $t_n^{-}<t_n<t_n^{+}$ from $J$ so that $a(t_n)\to 0$, $N_v(t_n^{-})/N_v(t_n)\to\infty$,
and $N_v(t_n^{+})/N_v(t_n)\to\infty$. Next we choose times $t_n'\in(t_n^{-},t_n^{+})$ so that
\begin{align}\label{N from below}
N_v(t_n') \leq 2 \inf\, \{ N_v(t) : t\in [t_n^{-},t_n^{+}] \}.
\end{align}
In particular, $N_v(t_n') \leq 2N_v(t_n)$, which allows us to deduce that
\begin{align}\label{N to infty}
\frac{ N_v(t_n^{-}) }{ N_v(t_n') }\to\infty \quad\text{and}\quad \frac{ N_v(t_n^{+}) }{ N_v(t_n') } \to\infty.
\end{align}
Let $u$ denote a subsequential limit of $v^{[t_n']}$ and let $I$ denote its maximal lifespan. If $I$ has a finite endpoint,
then $u$ is a finite-time blowup solution in the sense of Theorem~\ref{T:enemies} and we are done. Thus we are left to
consider the case $I={\mathbb{R}}$.
Let $s_n^\pm := (t_n^\pm - t_n') N_v(t_n')^2$. From \eqref{N to infty} we see that $N_u(s_n^\pm)\to\infty$ and so
deduce $s_n^\pm\to\pm\infty$ from the fact that $u$ is a global solution. Combining this with \eqref{N from below} we
find that $N_u(t)$ is bounded from below uniformly for $t\in{\mathbb{R}}$. Rescaling $u$ slightly, we may ensure that $N_u(t)\geq 1$
for all $t\in{\mathbb{R}}$.
From the fact that $\osc(T)\to\infty$, we see that $N_v(t)$ must show significant oscillation in neighbourhoods of $t_n'$.
Transferring this information to $u$ and using the lower bound on $N_u(t)$ we may conclude that
$\limsup_{|t|\to\infty} N_u(t) =\infty$. Using time-reversal symmetry, if necessary, we obtain a low-to-high cascade
in the sense of Theorem~\ref{T:enemies}.
\medskip
Finally, we treat the case where $a(t_0)$ is strictly positive; we will construct a finite-time blowup solution.
\medskip
{\bf Case 3:} $\lim_{T\to \infty} \osc(T) = \infty$ and $\inf_{t_0 \in J} a(t_0) = 2{\varepsilon} >0$. Let us call a $t_0\in J$
\emph{future-spreading} if $N_v(t)\leq{\varepsilon}^{-1} N_v(t_0)$ for all $t\geq t_0$; we call $t_0$ \emph{past-spreading} if
$N_v(t)\leq{\varepsilon}^{-1} N_v(t_0)$ for all $t\leq t_0$. Note that by hypothesis, every $t_0\in J$ is future-spreading, past-spreading,
or possibly both.
The fact that even a single time is future- or past-spreading guarantees that $J$ must be infinite in the forward or
reverse time direction, respectively; recall that finite-time blowup is accompanied by $N_v(t)\to\infty$ as $t$ approaches
the blowup time. Next we argue
that either all sufficiently late times are future-spreading or all sufficiently early times are past-spreading. If this
were not the case, one would be able to find arbitrarily long time intervals beginning with a future-spreading time and
ending with a past-spreading time. The existence of such intervals would contradict the divergence of $\osc(T)$.
By appealing to time-reversal symmetry, we restrict our attention to the case where all $t\geq t_0$ are future-spreading.
Choose $T$ so that $\osc(T) > 2{\varepsilon}^{-1}$.
We will now recursively construct an increasing sequence of times $\{t_n\}_{n=0}^\infty$ so that
\begin{align}\label{t_n props}
0 \leq t_{n+1} - t_n \leq 8T N_v(t_n)^{-2} \quad\text{ and }\quad
N_v(t_{n+1}) \leq \tfrac12 N_v(t_n).
\end{align}
Given $t_n$, set $t_n':=t_n + 4T N_v(t_n)^{-2}$. If $N_v(t_n')\leq \frac12 N_v(t_n)$, we choose $t_{n+1}=t_n'$ and the properties
set out above follow immediately. If $N_v(t_n') > \frac12 N_v(t_n)$, then
$$
J_n:=[t_n' - T N_v(t_n')^{-2},t_n' + T N_v(t_n')^{-2}] \subseteq [t_n,t_n+8T N_v(t_n)^{-2}].
$$
As $t_n$ is future-spreading, this allows us to conclude that $N_v(t)\leq {\varepsilon}^{-1} N_v(t_n)$ on $J_n$, but then by the
way $T$ is chosen, we may find $t_{n+1}\in J_n$ so that $N_v(t_{n+1})\leq \frac12 N_v(t_n)$.
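To see why such a $t_{n+1}$ exists: as $N_v(t_n') > \frac12 N_v(t_n)$, we have $T N_v(t_n')^{-2} < 4T N_v(t_n)^{-2}$, which
gives the stated inclusion for $J_n$. Moreover, applying the definition \eqref{cdef} of $\osc$ at the time $t_n'$ and
recalling $\osc(T)>2{\varepsilon}^{-1}$,
$$
\inf_{t\in J_n} N_v(t) < \frac{{\varepsilon}}2 \sup_{t\in J_n} N_v(t) \leq \tfrac12 N_v(t_n),
$$
where the last inequality uses that $t_n$ is future-spreading.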
Having obtained a sequence of times obeying \eqref{t_n props}, we may conclude that any subsequential limit $u$ of $v^{[t_n]}$
is a finite-time blowup solution. To elaborate, set $s_n := (t_0-t_n)N_v(t_n)^2$ and note that
$N_{v^{[t_n]}}(s_n)\geq 2^n$. However, $s_n$ is a bounded sequence; indeed,
\begin{align*}
|s_n| = N_v(t_n)^2 \sum^{n-1}_{k=0} \bigl[ t_{k+1} - t_k \bigr] \leq 8 T \sum^{n-1}_{k=0} \frac{N_v(t_n)^2}{N_v(t_k)^2}
\leq 8 T \sum^{n-1}_{k=0} 2^{-(n-k)} \leq 8T.
\end{align*}
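The lower bound $N_{v^{[t_n]}}(s_n)\geq 2^n$ used above follows by telescoping the second half of \eqref{t_n props}:
$$
N_{v^{[t_n]}}(s_n)
= \frac{N_v\bigl(t_n + s_n N_v(t_n)^{-2}\bigr)}{N_v(t_n)}
= \frac{N_v(t_0)}{N_v(t_n)}
= \prod_{k=0}^{n-1} \frac{N_v(t_k)}{N_v(t_{k+1})}
\geq 2^n.
$$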
In this way, we see that the solution $u$ must blow up at some time $-8T\leq t < 0$.
This completes the proof of Theorem~\ref{T:enemies}.
\qed
\section{Finite-time blowup}\label{S:finite time}
In this section we preclude scenario I from Theorem~\ref{T:enemies}. In this particular case, we do not need to restrict
to dimensions $d\geq 5$. The argument is essentially taken from \cite{kenig-merle}.
\begin{theorem}[No finite-time blowup]\label{T:no ftb}
Let $d\geq 3$. Then there are no maximal-lifespan solutions $u:I\times{\mathbb{R}}^d \to {\mathbb{C}}$ to \eqref{nls} that are almost periodic
modulo symmetries, obey
\begin{align}\label{infinite norm}
S_I(u)=\infty,
\end{align}
and
\begin{equation}\label{hype}
\sup_{t\in I} \|\nabla u(t)\|_2 < \|\nabla W\|_2,
\end{equation}
and are such that either $|\inf I|<\infty$ or $\sup I <\infty$.
\end{theorem}
\begin{proof}
Suppose for a contradiction that there existed such a solution $u$. Without loss of generality, we may assume $\sup I<\infty$.
We first argue that
\begin{align}\label{volcano}
\liminf_{t\nearrow \sup I} N(t) =\infty.
\end{align}
Assume for a contradiction that $\liminf_{t\nearrow \sup I} N(t) <\infty$. Let $t_n \in I$ be such that $t_n\nearrow \sup I$, and
define the rescaled functions $v_n: I_n\times{\mathbb{R}}^d\to {\mathbb{C}}$ by
$$
v_n(t,x):= u^{[t_n]}(t,x) = N(t_n)^{-\frac{d-2}2} u\bigl( t_n + t N(t_n)^{-2}, x(t_n) + x N(t_n)^{-1} \bigr),
$$
where $0\in I_n:=\{ t\in {\mathbb{R}}:\, t_n + N(t_n)^{-2} t\in I\}$. Then each $v_n$ is a solution to \eqref{nls} and $\{v_n(0)\}_n$ is
precompact in $\dot H^1_x({\mathbb{R}}^d)$. Thus, after passing to a subsequence if necessary, we may assume that $v_n(0)$ converges
strongly in $\dot H^1_x({\mathbb{R}}^d)$ to some function $v_0$. As $\|\nabla v_n(0)\|_2 = \|\nabla u(t_n)\|_2$
and, by assumption, $u$ is not identically zero, we conclude (using Sobolev embedding and the conservation of energy) that $v_0$
is not identically zero.
Let $v$ be the solution to \eqref{nls} with initial data $v_0$ at time $t=0$ and maximal lifespan $(-T_-, T_+)$ with $-\infty
\leq -T_- < 0 < T_+ \leq \infty$. From the local theory for \eqref{nls} (see, for example, Lemma~\ref{L:stab}), $v_n$ is
well-posed and has finite scattering size on any compact interval $J\subset (-T_-, T_+)$. In particular, $u$ is well-posed with
finite scattering size on $\{ t_n + N(t_n)^{-2}t: \, t\in J\}$. However, as $t_n\nearrow \sup I$ and
$\liminf_{n\to \infty} N(t_n)<\infty$, this means that $u$ has finite scattering size beyond $\sup I$, which
contradicts the fact that, by assumption, $u$ blows up forward in time on $I$. Thus \eqref{volcano} must hold.
We now show that \eqref{volcano} implies
\begin{align}\label{mass leaves balls}
\limsup_{t\nearrow \,\sup I} \int_{|x|\leq R} |u(t,x)|^2\, dx = 0 \quad \text{for all $R>0$.}
\end{align}
Indeed, let $0<\eta<1$ and $t\in I$. By H\"older's inequality, Sobolev embedding, and \eqref{hype},
\begin{align*}
\int_{|x|\leq R} |u(t,x)|^2\, dx
&\leq \int_{|x-x(t)|\leq \eta R} |u(t,x)|^2\, dx + \mathop{\int_{ |x|\leq R}}_{|x-x(t)|>\eta R} |u(t,x)|^2\, dx\\
&\lesssim \eta^2 R^2 \|u(t)\|_{\frac{2d}{d-2}}^2 + R^2 \Bigl( \int_{|x-x(t)|>\eta R} |u(t,x)|^{\frac{2d}{d-2}} \, dx \Bigr)^{\frac{d-2}d} \\
&\lesssim \eta^2 R^2 \|\nabla W\|_2^2 + R^2 \Bigl( \int_{|x-x(t)|>\eta R} |u(t,x)|^{\frac{2d}{d-2}} \, dx \Bigr)^{\frac{d-2}d}.
\end{align*}
Letting $\eta\to 0$, we can make the first term on the right-hand side of the inequality above as small as we wish.
On the other hand, by \eqref{volcano}, almost periodicity modulo symmetries, and Remark~\ref{R:pot energy}, we see that
$$
\limsup_{t\nearrow \sup I} \int_{|x-x(t)|>\eta R} |u(t,x)|^{\frac{2d}{d-2}} \, dx =0.
$$
This proves \eqref{mass leaves balls}.
The next step is to prove that \eqref{mass leaves balls} implies the solution $u$ is identically zero, thus contradicting \eqref{infinite norm}.
For $t\in I$ define
$$
M_R(t):=\int_{{\mathbb{R}}^d} \phi\bigl(\tfrac{|x|}R\bigr) |u(t,x)|^2\,dx,
$$
where $\phi$ is a smooth, radial function such that
\begin{align*}
\phi(r)=\begin{cases}
1 & \text{for } r\leq 1\\
0 & \text{for } r\geq 2.
\end{cases}
\end{align*}
By \eqref{mass leaves balls},
\begin{align}\label{M_R -> 0}
\limsup_{t\nearrow \sup I} M_R(t)=0 \quad \text{for all $R>0$.}
\end{align}
On the other hand, a simple computation involving Hardy's inequality and \eqref{hype} shows
$$
|\partial_t M_R(t)|\lesssim \|\nabla u(t)\|_2 \Bigl\|\frac{u(t)}{|x|}\Bigr\|_2\lesssim \|\nabla u(t)\|_2^2 \lesssim \|\nabla
W\|_2^2.
$$
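For the reader's convenience, we sketch this computation. From \eqref{nls} one obtains the local mass conservation identity $\partial_t |u|^2 = -2\nabla\cdot\Im\bigl(\bar u\,\nabla u\bigr)$, so integrating by parts,
$$
\partial_t M_R(t) = \frac2R \int_{{\mathbb{R}}^d} \phi'\bigl(\tfrac{|x|}R\bigr)\, \tfrac{x}{|x|}\cdot\Im\bigl(\bar u(t,x)\nabla u(t,x)\bigr)\,dx.
$$
As $\phi'$ is supported where $|x|\sim R$, we have $\frac1R\bigl|\phi'\bigl(\tfrac{|x|}R\bigr)\bigr|\lesssim \frac1{|x|}$, and the bound follows from Cauchy--Schwarz, Hardy's inequality, and \eqref{hype}.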
Thus, by the Fundamental Theorem of Calculus,
\begin{align*}
M_R(t_1)= M_R(t_2) + \int_{t_2}^{t_1} \partial_t M_R(t)\, dt \lesssim M_R(t_2) + |t_1-t_2| \|\nabla W\|_2^2
\end{align*}
for all $t_1, t_2 \in I$ and $R>0$. Letting $t_2\nearrow \sup I$ and invoking \eqref{M_R -> 0}, we deduce
$$
M_R(t_1) \lesssim |\sup I-t_1| \|\nabla W\|_2^2.
$$
Now letting $R\to \infty$ and using the conservation of mass, we obtain $u_0\in L_x^2({\mathbb{R}}^d)$. Finally, letting $t_1\nearrow
\sup I$ we conclude $u_0=0$. By the uniqueness statement in Theorem~\ref{T:local}, this implies that the solution $u$ is
identically zero, contradicting \eqref{infinite norm}.
This concludes the proof of Theorem~\ref{T:no ftb}.
\end{proof}
\section{Negative regularity}\label{S:neg}
In this section we prove
\begin{theorem}[Negative regularity in the global case]\label{T:-reg}
Let $d\geq 5$ and let $u$ be a global solution to \eqref{nls} that is almost periodic modulo symmetries. Suppose also that
\begin{align}\label{ke bounded}
\sup_{t\in {\mathbb{R}}} \|\nabla u(t)\|_{L_x^2} <\infty
\end{align}
and
\begin{align}\label{inf bounded}
\inf_{t\in {\mathbb{R}}} N(t)\geq 1.
\end{align}
Then $u\in L_t^\infty \dot H_x^{-{\varepsilon}}({\mathbb{R}}\times{\mathbb{R}}^d)$ for some ${\varepsilon}={\varepsilon}(d)>0$. In particular, $u\in L^\infty_t L^2_x$.
\end{theorem}
The proof of Theorem~\ref{T:-reg} is achieved in two steps: First, we `break' scaling in a Lebesgue space; more precisely,
we prove that our solution lives in $L^\infty_t L_x^p$ for some $2<p<\tfrac{2d}{d-2}$. Next, we use a double Duhamel trick to upgrade this
to $u\in L_t^\infty \dot H_x^{1-s}$ for some $s=s(p,d)>0$. Iterating the second step finitely many times, we derive Theorem~\ref{T:-reg}.
We learned the double Duhamel trick from \cite{tao:attractor} where it is used for a similar purpose; however, in that paper, the
breach of scaling comes directly from the subcritical nature of the nonlinearity.
Let $u$ be a solution to \eqref{nls} that obeys the hypotheses of Theorem~\ref{T:-reg}. Let $\eta>0$ be a small constant to be chosen
later. Then by Remark~\ref{R:c small} combined with \eqref{inf bounded}, there exists $N_0=N_0(\eta)$ such that
\begin{align}\label{ke small}
\|\nabla u_{\leq N_0}\|_{L_t^\infty L_x^2 ({\mathbb{R}}\times{\mathbb{R}}^d)}\leq \eta.
\end{align}
We turn now to our first step, that is, breaking scaling in a Lebesgue space. To this end, we define
\begin{equation}
A(N):=
\begin{cases}
N^{-\frac2{d-2}} \sup_{t\in {\mathbb{R}}} \|u_N(t)\|_{L_x^{\frac{2(d-2)}{d-4}}} & \quad \text{for}\quad d\geq 6\\
N^{-\frac 12} \sup_{t\in {\mathbb{R}}} \|u_N(t)\|_{L_x^5} & \quad \text{for}\quad d=5
\end{cases}
\end{equation}
for frequencies $N\leq 10N_0$. Note that by Bernstein's inequality combined with Sobolev embedding and \eqref{ke bounded},
$$
A(N)\lesssim \|u_N\|_{L_t^\infty L_x^{\frac{2d}{d-2}}} \lesssim \|\nabla u\|_{L_t^\infty L_x^2} <\infty.
$$
We next prove a recurrence formula for $A(N)$.
\begin{lemma}[Recurrence]\label{L:recurrence}
For all $N\leq 10 N_0$,
$$
A(N)\lesssim_u \bigl(\tfrac{N}{N_0}\bigr)^{\alpha}
+ \eta^{\frac4{d-2}} \sum_{\frac{N}{10}\leq N_1\leq N_0} \bigl(\tfrac{N}{N_1}\bigr)^{\alpha}A(N_1)
+ \eta^{\frac4{d-2}} \sum_{N_1<\frac{N}{10}} \bigl(\tfrac{N_1}{N}\bigr)^{\alpha}A(N_1),
$$
where $\alpha:=\min\{\tfrac2{d-2}, \tfrac 12\}$.
\end{lemma}
\begin{proof}
We first give the proof in dimensions $d\geq 6$. Once this is completed, we will explain the changes necessary to treat $d=5$.
Fix $N\leq 10 N_0$. By time-translation symmetry, it suffices to prove
\begin{align}\label{rec goal}
N^{-\frac2{d-2}} \| u_N(0)\|_{L_x^{\frac{2(d-2)}{d-4}}}
&\lesssim_u \bigl(\tfrac{N}{N_0}\bigr)^{\frac2{d-2}}
+ \eta^{\frac4{d-2}} \sum_{\frac{N}{10}\leq N_1\leq N_0} \bigl(\tfrac{N}{N_1}\bigr)^{\frac2{d-2}}A(N_1) \notag\\
&\qquad + \eta^{\frac4{d-2}} \sum_{N_1<\frac{N}{10}} \bigl(\tfrac{N_1}{N}\bigr)^{\frac2{d-2}}A(N_1).
\end{align}
Using the Duhamel formula \eqref{Duhamel} into the future followed by the triangle inequality, Bernstein, and
the dispersive inequality, we estimate
\begin{align}\label{to estimate}
N^{-\frac2{d-2}} \| u_N(0)\|_{L_x^{\frac{2(d-2)}{d-4}}}
&\leq N^{-\frac2{d-2}} \Bigl\| \int_0^{N^{-2}} e^{-it\Delta} P_N F(u(t))\, dt \Bigr\|_{L_x^{\frac{2(d-2)}{d-4}}} \notag\\
&\quad + N^{-\frac2{d-2}} \int_{N^{-2}}^\infty \bigl\|e^{-it\Delta} P_N F(u(t)) \bigr\|_{L_x^{\frac{2(d-2)}{d-4}}}\, dt \notag \\
&\lesssim N \Bigl\| \int_0^{N^{-2}} e^{-it\Delta} P_N F(u(t)) \, dt \Bigr\|_{L_x^2}\notag \\
&\quad + N^{-\frac2{d-2}} \| P_N F(u)\|_{L_t^\infty L_x^{\frac{2(d-2)}d}} \int_{N^{-2}}^\infty t^{-\frac d{d-2}}\, dt \notag \\
&\lesssim N^{-1} \| P_N F(u)\|_{L_t^\infty L_x^2} + N^{\frac2{d-2}} \| P_N F(u)\|_{L_t^\infty L_x^{\frac{2(d-2)}d}}\notag\\
&\lesssim N^{\frac2{d-2}} \| P_N F(u)\|_{L_t^\infty L_x^{\frac{2(d-2)}d}}.
\end{align}
Using the Fundamental Theorem of Calculus, we decompose
\begin{align}\label{decomposition}
F(u)&= O(|u_{>N_0}| |u_{\leq N_0}|^{\frac4{d-2}}) + O(|u_{>N_0}|^{\frac{d+2}{d-2}}) + F(u_{\frac N{10}\leq \cdot\leq N_0})\notag \\
&\quad + u_{<\frac N{10}} \int_0^1 F_z \bigl( u_{\frac N{10}\leq \cdot\leq N_0} + \theta u_{<\frac N{10}}\bigr)\, d\theta \\
&\quad + \overline{u_{<\frac N{10}}} \int_0^1 F_{\bar z} \bigl( u_{\frac N{10}\leq \cdot\leq N_0} + \theta u_{<\frac N{10}}\bigr)\, d\theta. \notag
\end{align}
The contribution to the right-hand side of \eqref{to estimate} coming from terms that contain at least one copy of $u_{> N_0}$
can be estimated in the following manner: Using H\"older, Bernstein, and \eqref{ke bounded},
\begin{align}\label{1}
N^{\frac2{d-2}} \bigl\| P_N O(|u_{>N_0}| |u|^{\frac4{d-2}}) \bigr\|_{L_t^\infty L_x^{\frac{2(d-2)}d}}
&\lesssim N^{\frac2{d-2}} \|u_{>N_0}\|_{L_t^\infty L_x^{\frac{2d(d-2)}{d^2-4d+8}}} \|u\|_{L_t^\infty L_x^{\frac{2d}{d-2}}}^{\frac4{d-2}}\notag\\
&\lesssim_u N^{\frac2{d-2}} N_0^{-\frac2{d-2}}.
\end{align}
Thus, this contribution is acceptable.
Next we turn to the contribution to the right-hand side of \eqref{to estimate} coming from the last two terms in \eqref{decomposition};
it suffices to consider the first of them since similar arguments can be used to deal with the second.
First we note that as $\nabla u \in L_t^\infty L_x^2$, we have $F_z(u) \in \dot \Lambda^{\frac{d-2}2,\infty}_{\frac4{d-2}}$.
Furthermore, as $P_{>\frac N{10}}F_z(u)$ is restricted to high frequencies, the Besov characterization of the
homogeneous H\"older continuous functions (see \cite[\S VI.7.8]{stein:large}) yields
$$
\bigl\|P_{>\frac N{10}}F_z(u)\bigr\|_{L_t^\infty L_x^{\frac{d-2}2}}
\lesssim N^{-\frac4{d-2}} \|\nabla u\|_{L_t^\infty L_x^2}^{\frac{4}{d-2}}.
$$
Thus, by H\"older's inequality and \eqref{ke small},
\begin{align}\label{2}
N^{\frac2{d-2}}&\Bigl\| P_N\Bigl( u_{<\frac N{10}} \int_0^1 F_z \bigl( u_{\frac N{10}\leq \cdot\leq N_0}
+ \theta u_{<\frac N{10}}\bigr)\, d\theta\Bigr) \Bigr\|_{L_t^\infty L_x^{\frac{2(d-2)}d}}\notag\\
&\lesssim N^{\frac2{d-2}}\|u_{<\frac N{10}}\|_{L_t^\infty L_x^{\frac{2(d-2)}{d-4}}}
\Bigl\| P_{>\frac N{10}}\Bigl(\int_0^1 F_z \bigl( u_{\frac N{10}\leq \cdot\leq N_0}+ \theta u_{<\frac N{10}}\bigr)\, d\theta\Bigr)\Bigr\|_{L_t^\infty L_x^{\frac{d-2}2}}\notag\\
&\lesssim N^{-\frac2{d-2}} \|u_{<\frac N{10}}\|_{L_t^\infty L_x^{\frac{2(d-2)}{d-4}}} \|\nabla u_{\leq N_0}\|_{L_t^\infty L_x^2}^{\frac{4}{d-2}}\notag\\
&\lesssim \eta^{\frac4{d-2}} \sum_{N_1<\frac{N}{10}} \bigl(\tfrac{N_1}{N}\bigr)^{\frac2{d-2}}A(N_1).
\end{align}
Hence, the contribution coming from the last two terms in \eqref{decomposition} is acceptable.
We are left to estimate the contribution of $F(u_{\frac N{10}\leq \cdot\leq N_0})$ to the right-hand side of \eqref{to estimate}.
We need only show
\begin{align}\label{last goal}
\|F(u_{\frac N{10}\leq \cdot\leq N_0})\|_{L_t^\infty L_x^{\frac{2(d-2)}d}}
\lesssim \eta^{\frac4{d-2}} \sum_{\frac{N}{10}\leq N_1\leq N_0} N_1^{-\frac2{d-2}}A(N_1).
\end{align}
As $d\geq 6$, we have $\tfrac4{d-2}\leq 1$. Using the triangle inequality, Bernstein,
\eqref{ke small}, and H\"older, we estimate
\begin{align*}
\|F(& u_{\frac N{10}\leq \cdot\leq N_0})\|_{L_t^\infty L_x^{\frac{2(d-2)}d}}\\
&\lesssim \sum_{\frac N{10}\leq N_1 \leq N_0} \bigl\|u_{N_1} |u_{\frac N{10}\leq \cdot\leq N_0}|^{\frac4{d-2}}\bigr\|_{L_t^\infty L_x^{\frac{2(d-2)}d}}\\
&\lesssim \sum_{\frac N{10}\leq N_1, N_2 \leq N_0} \bigl\|u_{N_1} |u_{N_2}|^{\frac4{d-2}}\bigr\|_{L_t^\infty L_x^{\frac{2(d-2)}d}}\\
&\lesssim \sum_{\frac N{10}\leq N_1\leq N_2 \leq N_0} \|u_{N_1}\|_{L_t^\infty L_x^{\frac{2(d-2)}{d-4}}} \|u_{N_2}\|_{L_t^\infty L_x^2}^{\frac4{d-2}}\\
&\qquad + \sum_{\frac N{10}\leq N_2\leq N_1 \leq N_0} \|u_{N_1}\|_{L_t^\infty L_x^2}^{\frac4{d-2}}
\|u_{N_1}\|_{L_t^\infty L_x^{\frac{2(d-2)}{d-4}}}^{\frac{d-6}{d-2}} \|u_{N_2}\|_{L_t^\infty L_x^{\frac{2(d-2)}{d-4}}}^{\frac4{d-2}}\\
&\lesssim \sum_{\frac N{10}\leq N_1\leq N_2 \leq N_0} \|u_{N_1}\|_{L_t^\infty L_x^{\frac{2(d-2)}{d-4}}} \eta^{\frac4{d-2}}N_2^{-\frac4{d-2}}\\
&\qquad + \sum_{\frac N{10}\leq N_2\leq N_1 \leq N_0}\eta^{\frac4{d-2}}N_1^{-\frac4{d-2}}
\|u_{N_1}\|_{L_t^\infty L_x^{\frac{2(d-2)}{d-4}}}^{\frac{d-6}{d-2}} \|u_{N_2}\|_{L_t^\infty L_x^{\frac{2(d-2)}{d-4}}}^{\frac4{d-2}}\\
&\lesssim \eta^{\frac4{d-2}} \sum_{\frac N{10}\leq N_1\leq N_0} N_1^{-\frac2{d-2}}A(N_1)\\
&\qquad + \eta^{\frac4{d-2}} \sum_{\frac N{10}\leq N_2\leq N_1 \leq N_0} \bigl(\tfrac{N_2}{N_1} \bigr)^{\frac{16}{(d-2)^2}}
\bigl(N_1^{-\frac2{d-2}}A(N_1)\bigr)^{\frac{d-6}{d-2}} \bigl(N_2^{-\frac2{d-2}}A(N_2)\bigr)^{\frac{4}{d-2}}\\
&\lesssim \eta^{\frac4{d-2}} \sum_{\frac N{10}\leq N_1\leq N_0} N_1^{-\frac2{d-2}}A(N_1).
\end{align*}
This proves \eqref{last goal} and so completes the proof of the lemma in dimensions $d\geq 6$.
Consider now $d=5$. Arguing as for \eqref{to estimate}, we have
$$
N^{-\frac 12} \|u_N(0)\|_{L_x^5}\lesssim N^{\frac 12} \|P_N F(u)\|_{L_t^\infty L_x^{\frac54}},
$$
which we estimate by decomposing the nonlinearity as in \eqref{decomposition}. The analogue of \eqref{1} in this case is
\begin{align*}
N^{\frac12} \bigl\| P_N O(|u_{>N_0}| |u|^{\frac43}) \bigr\|_{L_t^\infty L_x^{\frac54}}
&\lesssim N^{\frac12} \|u_{>N_0}\|_{L_t^\infty L_x^{\frac52}} \|u\|_{L_t^\infty L_x^{\frac{10}{3}}}^{\frac43}
\lesssim_u N^{\frac12} N_0^{-\frac12}.
\end{align*}
Using Bernstein and Lemma~\ref{F Lip} together with \eqref{ke small}, we replace \eqref{2} by
\begin{align*}
N^{\frac12}&\Bigl\| P_N\Bigl( u_{<\frac N{10}} \int_0^1 F_z \bigl( u_{\frac N{10}\leq \cdot\leq N_0}
+ \theta u_{<\frac N{10}}\bigr)\, d\theta\Bigr) \Bigr\|_{L_t^\infty L_x^{\frac54}}\\
&\lesssim N^{\frac12}\|u_{<\frac N{10}}\|_{L_t^\infty L_x^5}
\Bigl\| P_{>\frac N{10}}\Bigl(\int_0^1 F_z \bigl( u_{\frac N{10}\leq \cdot\leq N_0}+ \theta u_{<\frac N{10}}\bigr)\, d\theta\Bigr)\Bigr\|_{L_t^\infty L_x^{\frac53}}\\
&\lesssim N^{-\frac12} \|u_{<\frac N{10}}\|_{L_t^\infty L_x^5} \|\nabla u_{\leq N_0}\|_{L_t^\infty L_x^2} \|u_{\leq N_0}\|_{L_t^\infty L_x^{\frac{10}3}}^{\frac13}\\
&\lesssim \eta^{\frac43} \sum_{N_1<\frac{N}{10}} \bigl(\tfrac{N_1}{N}\bigr)^{\frac12}A(N_1).
\end{align*}
Finally, arguing as for \eqref{last goal}, we estimate
\begin{align*}
\|F&( u_{\frac N{10}\leq \cdot\leq N_0})\|_{L_t^\infty L_x^{\frac54}}\\
&\lesssim \sum_{\frac N{10}\leq N_1, N_2 \leq N_0} \bigl\|u_{N_1} u_{N_2}|u_{\frac N{10}\leq \cdot\leq N_0}|^{\frac13}\bigr\|_{L_t^\infty L_x^{\frac54}}\\
&\lesssim \sum_{\frac N{10}\leq N_1\leq N_2, N_3 \leq N_0} \|u_{N_1}\|_{L_t^\infty L_x^5}\|u_{N_2}\|_{L_t^\infty L_x^{\frac{20}9}}\|u_{N_3}\|_{L_t^\infty L_x^{\frac{20}9}}^{\frac13}\\
&\qquad + \sum_{\frac N{10}\leq N_3\leq N_1\leq N_2 \leq N_0} \|u_{N_1}\|_{L_t^\infty L_x^5}^{\frac23}\|u_{N_1}\|_{L_t^\infty L_x^{\frac{20}9}}^{\frac13}
\|u_{N_2}\|_{L_t^\infty L_x^{\frac{20}9}}\|u_{N_3}\|_{L_t^\infty L_x^5}^{\frac13}\\
&\lesssim \sum_{\frac N{10}\leq N_1\leq N_2, N_3 \leq N_0} \|u_{N_1}\|_{L_t^\infty L_x^5} \eta N_2^{-\frac 34} \eta^{\frac13} N_3^{-\frac14}\\
&\qquad + \sum_{\frac N{10}\leq N_3\leq N_1\leq N_2 \leq N_0} \|u_{N_1}\|_{L_t^\infty L_x^5}^{\frac23} \eta^{\frac13} N_1^{-\frac14}
\eta N_2^{-\frac34} \|u_{N_3}\|_{L_t^\infty L_x^5}^{\frac13}\\
&\lesssim \eta^{\frac43} \sum_{\frac N{10}\leq N_1 \leq N_0} N_1^{-\frac12} A(N_1)\\
&\qquad + \eta^{\frac43} \sum_{\frac N{10}\leq N_3\leq N_1 \leq N_0} \bigl( \tfrac{N_3}{N_1}\bigr)^{\frac13}
\bigl(N_1^{-\frac12}A(N_1)\bigr)^{\frac23} \bigl(N_3^{-\frac12}A(N_3)\bigr)^{\frac13}\\
&\lesssim \eta^{\frac43} \sum_{\frac N{10}\leq N_1 \leq N_0} N_1^{-\frac12} A(N_1).
\end{align*}
Putting everything together completes the proof of the lemma in the case $d=5$.
\end{proof}
This lemma leads very quickly to our first goal:
\begin{proposition}[$L^p$ breach of scaling]\label{P:L^p breach}
Let $u$ be as in Theorem~\ref{T:-reg}. Then
\begin{align}\label{step 1}
u\in L_t^\infty L_x^p \quad \text{for} \quad \tfrac{2(d+1)}{d-1}\leq p<\tfrac{2d}{d-2}.
\end{align}
In particular, by H\"older's inequality,
\begin{align}\label{breach 2}
\nabla F(u) \in L_t^\infty L_x^r \quad \text{for} \quad \tfrac{2(d-2)(d+1)}{d^2+3d-6} \leq r<\tfrac{2d}{d+4}.
\end{align}
\end{proposition}
\begin{proof}
We only present the details for $d\geq 6$. The treatment of $d=5$ is completely analogous.
Combining Lemma~\ref{L:recurrence} with Lemma~\ref{L:Gronwall}, we deduce
\begin{align}\label{breach 1}
\|u_N\|_{L_t^\infty L_x^{\frac{2(d-2)}{d-4}}}\lesssim_u N^{\frac4{d-2}-} \quad \text{for all} \quad N\leq 10 N_0.
\end{align}
In applying Lemma~\ref{L:Gronwall}, we set $N=10 \cdot 2^{-k} N_0$, $x_k=A(10 \cdot 2^{-k} N_0)$, and take $\eta$ sufficiently small.
By interpolation followed by \eqref{breach 1}, Bernstein, and \eqref{ke bounded},
\begin{align*}
\|u_N\|_{L_t^\infty L_x^p}
&\leq \|u_N\|_{L_t^\infty L_x^{\frac{2(d-2)}{d-4}}}^{(d-2)(\frac12-\frac1p)} \|u_N\|_{L_t^\infty L_x^2}^{\frac{d-2}p-\frac{d-4}2}\\
&\lesssim_u N^{\frac{2(p-2)}p-} N^{\frac{d-4}2-\frac{d-2}p}\\
&\lesssim_u N^{\frac1{d+1}-}
\end{align*}
for all $N\leq 10 N_0$. Thus, using Bernstein together with \eqref{ke bounded}, we obtain
\begin{align*}
\|u\|_{L_t^\infty L_x^p}
\leq \|u_{\leq N_0}\|_{L_t^\infty L_x^p} + \|u_{> N_0}\|_{L_t^\infty L_x^p}
\lesssim_u \sum_{N\leq N_0} N^{\frac1{d+1}-} + \sum_{N>N_0} N^{\frac{d-2}2-\frac dp}
\lesssim_{u} 1,
\end{align*}
which completes the proof of the proposition.
\end{proof}
\begin{remark}
With a few modifications, the argument used in dimension five can be adapted to prove Proposition~\ref{P:L^p breach} in dimensions three and four.
However, $u(t,x)=W(x)$ provides an explicit counterexample to Theorem~\ref{T:-reg} in these dimensions.
At a technical level, the obstruction is that the strongest dispersive estimate available is $|t|^{-d/2}$,
which is insufficient to perform both integrals in the double Duhamel trick for $d\leq 4$.
\end{remark}
Our second step is to use the double Duhamel trick to upgrade \eqref{step 1} to `honest' negative regularity (i.e., in the Sobolev
sense). We start with
\begin{proposition}[Some negative regularity]\label{P:some -reg}
Let $d\geq 5$ and let $u$ be as in Theorem~\ref{T:-reg}. Assume further that $|\nabla|^s F(u) \in L_t^\infty L_x^r$ for some
$\tfrac{2(d-2)(d+1)}{d^2+3d-6}\leq r<\tfrac{2d}{d+4}$ and some $0\leq s\leq 1$. Then there exists $s_0=s_0(r,d)>0$ such that
$u \in L_t^\infty \dot H^{s-s_0+}_x$.
\end{proposition}
\begin{proof}
The proposition will follow once we establish
\begin{align}\label{neg reg 0}
\bigl\| |\nabla|^s u_N \bigr\|_{L_t^\infty L_x^2} \lesssim_u N^{s_0} \quad \text{for all} \quad N>0 \quad \text{and} \quad
s_0:=\tfrac{d}r-\tfrac{d+4}2>0.
\end{align}
Indeed, by Bernstein combined with \eqref{ke bounded},
\begin{align*}
\bigl\| |\nabla|^{s-s_0+} u \bigr\|_{L_t^\infty L_x^2}
&\leq \bigl\| |\nabla|^{s-s_0+} u_{\leq 1} \bigr\|_{L_t^\infty L_x^2} + \bigl\| |\nabla|^{s-s_0+} u_{>1} \bigr\|_{L_t^\infty L_x^2}\\
&\lesssim_u \sum_{N\leq 1} N^{0+} + \sum_{N>1} N^{(s-s_0+)-1}\\
&\lesssim_u 1.
\end{align*}
Thus, we are left to prove \eqref{neg reg 0}. By time-translation symmetry, it suffices to prove
\begin{align}\label{neg reg 1}
\bigl\| |\nabla|^s u_N(0) \bigr\|_{L_x^2} \lesssim_u N^{s_0} \quad \text{for all} \quad N>0 \quad \text{and} \quad
s_0:=\tfrac{d}r-\tfrac{d+4}2>0.
\end{align}
Using the Duhamel formula \eqref{Duhamel} both in the future and in the past, we write
\begin{align*}
\bigl\| |\nabla|^s & u_N(0) \bigr\|_{L_x^2}^2 \\
&= \lim_{T\to\infty} \lim_{T'\to-\infty} \bigl \langle i\int_0^T e^{-it\Delta}P_N |\nabla|^s F(u(t))\, dt,
-i\int_{T'}^0 e^{-i\tau\Delta}P_N |\nabla|^s F(u(\tau))\, d\tau \bigr\rangle\\
&\leq \int_0^\infty \int_{-\infty}^0 \Bigl| \bigl\langle P_N |\nabla|^s F(u(t)) ,
e^{i(t-\tau)\Delta}P_N |\nabla|^s F(u(\tau)) \bigr\rangle \Bigr| \,dt\, d\tau.
\end{align*}
We estimate the term inside the integrals in two ways. On one hand, using H\"older and the dispersive estimate,
\begin{align*}
\Bigl|\bigl\langle P_N |\nabla|^s F(u(t)) , e^{i(t-\tau)\Delta} & P_N |\nabla|^s F(u(\tau)) \bigr\rangle\Bigr|\\
&\lesssim \bigl\| P_N |\nabla|^s F(u(t))\bigr\|_{L_x^r} \bigl\| e^{i(t-\tau)\Delta} P_N |\nabla|^s F(u(\tau))\bigr\|_{L_x^{r'}}\\
&\lesssim |t-\tau|^{\frac d2-\frac dr} \bigl\| |\nabla|^s F(u)\bigr\|_{L_t^\infty L_x^r}^2.
\end{align*}
On the other hand, using Bernstein,
\begin{align*}
\Bigl|\bigl\langle P_N |\nabla|^s F(u(t)) , e^{i(t-\tau)\Delta} & P_N |\nabla|^s F(u(\tau)) \bigr\rangle\Bigr|\\
&\lesssim \bigl\| P_N |\nabla|^s F(u(t))\bigr\|_{L_x^2} \bigl\| e^{i(t-\tau)\Delta} P_N |\nabla|^s F(u(\tau))\bigr\|_{L_x^2}\\
&\lesssim N^{2(\frac dr-\frac d2)} \bigl\| |\nabla|^s F(u)\bigr\|_{L_t^\infty L_x^r}^2.
\end{align*}
Thus,
\begin{align*}
\bigl\| |\nabla|^s u_N(0) \bigr\|_{L_x^2}^2
&\lesssim \bigl\| |\nabla|^s F(u)\bigr\|_{L_t^\infty L_x^r}^2 \int_0^\infty \int_{-\infty}^0 \min\{ |t-\tau|^{-1} , N^2 \}^{\frac dr -\frac d2} \, dt\, d\tau\\
&\lesssim N^{2s_0} \bigl\| |\nabla|^s F(u)\bigr\|_{L_t^\infty L_x^r}^2.
\end{align*}
To obtain the last inequality we used the fact that $\tfrac dr -\tfrac d2 >2$ since $r<\tfrac{2d}{d+4}$.
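For the reader's convenience, we sketch the evaluation of the double integral. Setting $\beta:=\tfrac dr-\tfrac d2>2$ and substituting $s:=t-\tau$,
\begin{align*}
\int_0^\infty \int_{-\infty}^0 \min\{ |t-\tau|^{-1} , N^2 \}^{\beta} \, dt\, d\tau
&= \int_0^\infty s \min\{ s^{-1} , N^2 \}^{\beta} \, ds\\
&= N^{2\beta} \int_0^{N^{-2}} s \, ds + \int_{N^{-2}}^\infty s^{1-\beta} \, ds
\lesssim N^{2\beta-4},
\end{align*}
and $2\beta-4 = 2\bigl(\tfrac dr - \tfrac{d+4}2\bigr) = 2s_0$.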
Thus \eqref{neg reg 1} holds; this finishes the proof of the proposition.
\end{proof}
The proof of Theorem~\ref{T:-reg} will follow from iterating Proposition~\ref{P:some -reg} finitely many times.
\begin{proof}[Proof of Theorem~\ref{T:-reg}]
Proposition~\ref{P:L^p breach} allows us to apply Proposition~\ref{P:some -reg} with $s=1$.
We conclude that $u \in L_t^\infty \dot H^{1-s_0+}_x$ for some $s_0=s_0(r,d)>0$.
Combining this with the fractional chain rule Lemma~\ref{F Lip} and \eqref{step 1}, we deduce that $|\nabla|^{1-s_0+} F(u) \in L_t^\infty L_x^r$
for some $\tfrac{2(d-2)(d+1)}{d^2+3d-6}\leq r<\tfrac{2d}{d+4}$. We are thus in the position to apply Proposition~\ref{P:some -reg} again
and obtain $u \in L_t^\infty \dot H^{1-2s_0+}_x$. Iterating this procedure finitely many times, we derive
$u\in L_t^\infty \dot H_x^{-{\varepsilon}}$ for any $0<{\varepsilon}<s_0$.
This completes the proof of Theorem~\ref{T:-reg}.
\end{proof}
\section{The low-to-high frequency cascade}\label{S:cascade}
In this section, we use the negative regularity provided by Theorem~\ref{T:-reg} to preclude low-to-high frequency cascade solutions.
\begin{theorem}[Absence of cascades]\label{T:no cascade}
Let $d\geq 5$. There are no global solutions to \eqref{nls} that are low-to-high frequency cascades
in the sense of Theorem~\ref{T:enemies}.
\end{theorem}
\begin{proof}
Suppose for contradiction that there exists such a solution $u$. Then, by Theorem~\ref{T:-reg}, $u\in L_t^\infty L_x^2$;
thus, by the conservation of mass,
$$
0\leq M(u)=M(u(t)) = \int_{{\mathbb{R}}^d} |u(t,x)|^2\, dx <\infty \quad \text{for all} \quad t\in {\mathbb{R}}.
$$
Fix $t\in {\mathbb{R}}$ and let $\eta>0$ be a small constant. By compactness (see Remark~\ref{R:c small}),
$$
\int_{|\xi|\leq c(\eta)N(t)} |\xi|^2 |\hat u(t,\xi)|^2\, d\xi\leq \eta.
$$
On the other hand, as $u\in L_t^\infty \dot H^{-{\varepsilon}}_x$ for some ${\varepsilon}>0$,
$$
\int_{|\xi|\leq c(\eta)N(t)} |\xi|^{-2{\varepsilon}} |\hat u(t,\xi)|^2\, d\xi\lesssim_u 1.
$$
Hence, by H\"older's inequality,
\begin{align}\label{close}
\int_{|\xi|\leq c(\eta)N(t)} |\hat u(t,\xi)|^2\, d\xi\lesssim_u \eta^{\frac {\varepsilon}{1+{\varepsilon}}}.
\end{align}
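Concretely, the H\"older step used here is the interpolation
\begin{align*}
\int_{|\xi|\leq c(\eta)N(t)} |\hat u(t,\xi)|^2\, d\xi
&\leq \Bigl(\int_{|\xi|\leq c(\eta)N(t)} |\xi|^2 |\hat u(t,\xi)|^2\, d\xi\Bigr)^{\frac{{\varepsilon}}{1+{\varepsilon}}}
\Bigl(\int_{|\xi|\leq c(\eta)N(t)} |\xi|^{-2{\varepsilon}} |\hat u(t,\xi)|^2\, d\xi\Bigr)^{\frac1{1+{\varepsilon}}},
\end{align*}
whose two factors are controlled by the preceding displays.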
Meanwhile, by elementary considerations and \eqref{ke bounded},
\begin{align}\label{far}
\int_{|\xi|\geq c(\eta)N(t)} |\hat u(t,\xi)|^2\, d\xi
&\leq [c(\eta)N(t)]^{-2} \int_{{\mathbb{R}}^d} |\xi|^2 |\hat u(t,\xi)|^2\, d\xi \notag \\
&\leq [c(\eta)N(t)]^{-2} \|\nabla u(t)\|^2_2 \notag\\
&\leq [c(\eta)N(t)]^{-2} \|\nabla W\|^2_2.
\end{align}
Collecting \eqref{close} and \eqref{far} and using Plancherel's theorem, we obtain
$$
0\leq M(u)\lesssim_u c(\eta)^{-2} N(t)^{-2} + \eta^{\frac {\varepsilon}{1+{\varepsilon}}}
$$
for all $t\in {\mathbb{R}}$. As $u$ is a low-to-high cascade, there is a sequence of times $t_n\to\infty$ so that
$N(t_n)\to \infty$. As $\eta>0$ is arbitrary, we may conclude $M(u)=0$ and hence $u$ is identically zero.
This contradicts the fact that $S_I(u)=\infty$, thus settling Theorem~\ref{T:no cascade}.
\end{proof}
\section{The soliton}\label{S:Soliton}
In this section, the contradiction will follow from a virial-type argument. In order to successfully use the virial
inequality, we need to control the motion of $x(t)$. As we now know that soliton solutions have finite mass
(see Theorem~\ref{T:-reg}), we will be able to use an argument from \cite{DHR}
to prove that $|x(t)|=o(t)$ as $t\to \infty$. The first step is to note that a minimal kinetic energy
blowup solution with finite mass must have zero momentum. Let us quickly remark that simple arguments show
$|x(t)|=O(t)$ without the knowledge that the mass is finite; however, the passage from $O(t)$ to $o(t)$ is essential
for the virial argument.
\begin{proposition}[Zero momentum]\label{P:p=0}
Let $u$ be a minimal kinetic energy blowup solution to \eqref{nls} that obeys $u\in L_t^\infty H^1_x$.
Then its total momentum, which is a conserved quantity, vanishes:
$$P(u):=2\, \Im \int_{{\mathbb{R}}^d} \overline{u(t,x)}\nabla u(t,x)\, dx \equiv 0.$$
\end{proposition}
\begin{proof}
Let $u:I\times{\mathbb{R}}^d\to {\mathbb{C}}$ be as in Proposition~\ref{P:p=0}. Then the momentum $P(u)$ and the mass $M(u)$ are finite and conserved.
Moreover, $M(u)\neq 0$ since otherwise $u$ would be identically zero and hence not a blowup solution.
Let $\tilde u$ be the Galilei boost of $u$ by $\xi_0:= - [2M(u)]^{-1} P(u)$:
$$
\tilde u(t,x):= e^{ix\xi_0}e^{-it|\xi_0|^2} u(t,x-2\xi_0t).
$$
Simple computations then show that
\begin{equation}\label{ECofM}
\|\nabla \tilde u(t)\|_2^2
= \|\nabla u(t)\|_2^2 + |\xi_0|^2 M(u) + \xi_0\cdot P(u)
= \|\nabla u(t)\|_2^2 - [4M(u)]^{-1}|P(u)|^2.
\end{equation}
Equivalently, we may write $E(u) = E(\tilde u) + [4M(u)]^{-1}|P(u)|^2$, which expresses the well-known physical fact that the
total energy can be decomposed as the energy viewed in the center-of-mass frame plus the energy arising from the motion of
the center of mass (cf.\ \cite[\S 8]{ll}); note that $|\tilde u(t,x)|=|u(t,x-2\xi_0 t)|$, so the potential energies of $u$ and $\tilde u$ agree.
As $\tilde u$ is also a blowup solution of \eqref{nls}, indeed $S_I(\tilde u)=S_I(u)=\infty$, we see that $P(u)=0$;
for otherwise, $\tilde u$ would have less kinetic energy than $u$.
\end{proof}
A second ingredient needed to control the motion of $x(t)$ is a compactness property of the orbit $\{u(t)\}$ in $L_x^2$.
This requires the full force of Theorem~\ref{T:-reg}.
\begin{lemma}[Compactness in $L_x^2$]\label{L:mass compact}
Let $d\geq 5$ and let $u$ be a soliton in the sense of Theorem~\ref{T:enemies}.
Then for every $\eta>0$ there exists $C(\eta)>0$ such that
$$
\sup_{t\in {\mathbb{R}}} \int_{|x-x(t)|\geq C(\eta)}|u(t,x)|^2\, dx \lesssim_u \eta.
$$
\end{lemma}
\begin{proof}
The entire argument takes place at a fixed $t$; in particular, we may assume $x(t)=0$.
First we control the contribution from the low frequencies: by Theorem~\ref{T:-reg},
$$
\bigl\|u_{< N}(t)\bigr\|_{L_x^2(|x|\geq R)}
\leq \bigl\|u_{< N}(t)\bigr\|_{L_x^2} \lesssim N^{{\varepsilon}} \bigl\| |\nabla|^{-{\varepsilon}} u\bigr\|_{L_t^\infty L_x^2}\lesssim_u N^{{\varepsilon}}.
$$
This can be made smaller than $\eta$ by choosing $N=N(\eta)$ small enough.
We now turn to the contribution from the high frequencies.
A simple application of Schur's test reveals the following: For any $m\geq 0$,
$$
\bigl\| \chi_{|x|\geq 2R} \Delta^{-1} \nabla P_{\geq N} \chi_{|x|\leq R} \bigr\|_{L^2\to L^2}
\lesssim N^{-1} \langle RN \rangle^{-m}
$$
uniformly in $R,N>0$. On the other hand, by Bernstein,
$$
\bigl\| \chi_{|x|\geq 2R} \Delta^{-1} \nabla P_{\geq N} \chi_{|x|\geq R} \bigr\|_{L^2\to L^2} \lesssim N^{-1}.
$$
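These operator bounds are applied through the identity $u_{\geq N} = \Delta^{-1}\nabla\cdot P_{\geq N}\nabla u$, split according to $1=\chi_{|x|\leq R}+\chi_{|x|>R}$:
\begin{align*}
\chi_{|x|\geq 2R}\, u_{\geq N}(t)
&= \chi_{|x|\geq 2R} \Delta^{-1}\nabla\cdot P_{\geq N} \bigl(\chi_{|x|\leq R}\nabla u(t)\bigr)\\
&\quad + \chi_{|x|\geq 2R} \Delta^{-1}\nabla\cdot P_{\geq N} \bigl(\chi_{|x|> R}\nabla u(t)\bigr).
\end{align*}
The first piece is estimated using Schur's test (with $m=1$) and the second using Bernstein.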
Together, these lead quickly to
\begin{align*}
\int_{|x|\geq 2R} |u_{\geq N}(t,x)|^2 \,dx \lesssim N^{-2} \langle RN \rangle^{-2} \|\nabla u(t)\|_{L^2_x}^2
+ N^{-2} \int_{|x|\geq R} |\nabla u(t,x)|^2 \,dx.
\end{align*}
By choosing $R$ large enough, we can render the first term smaller than $\eta$; taking $\eta_1:=N^2\eta$ and enlarging $R$
further if necessary, the same is true of the second summand by virtue of $\dot H^1$-compactness:
\begin{align*}
\sup_{t\in{\mathbb{R}}} \int_{|x-x(t)|\geq C(\eta_1)}|\nabla u(t,x)|^2\, dx \leq \eta_1.
\end{align*}
The lemma follows by combining our estimates for $u_{< N}$ and $u_{\geq N}$.
\end{proof}
Following the argument in \cite{DHR}, we can now prove
\begin{lemma}[Control over $x(t)$]\label{L:x(t) control}
Fix $d\geq 5$ and let $u$ be a minimal kinetic energy soliton in the sense of Theorem~\ref{T:enemies}. Then
\begin{align*}
|x(t)| =o(t) \quad \text{as} \quad t\to \infty.
\end{align*}
\end{lemma}
\begin{proof}
We argue by contradiction. Suppose there exist $\delta>0$ and
a sequence $t_n\to \infty$ such that
\begin{align}\label{assume not}
|x(t_n)|> \delta t_n \quad \text{for all} \quad n\geq 1.
\end{align}
By spatial-translation symmetry, we may assume $x(0)=0$.
Let $\eta>0$ be a small constant to be chosen later. By compactness and Lemma~\ref{L:mass compact},
\begin{align}\label{compact}
\sup_{t\in {\mathbb{R}}} \int_{|x-x(t)|>C(\eta)}\bigl( |\nabla u(t,x)|^2 + |u(t,x)|^2 \bigr)\, dx\leq \eta.
\end{align}
Define
\begin{align}\label{define}
T_n:=\inf\bigl\{t\in[0, t_n]:\, |x(t)|=|x(t_n)|\bigr\}\leq t_n \quad \text{and} \quad R_n:=C(\eta) + \sup_{t\in[0,T_n]}|x(t)|.
\end{align}
Now let $\phi$ be a smooth, radial function such that
\begin{align*}
\phi(r)=\begin{cases}
1 & \text{for } r\leq 1\\
0 & \text{for } r\geq 2,
\end{cases}
\end{align*}
and define the truncated `position'
$$
X_R(t): =\int_{{\mathbb{R}}^d} x\phi\bigl(\tfrac{|x|}R\bigr) |u(t,x)|^2\,dx.
$$
By Theorem~\ref{T:-reg}, $u\in L_t^\infty L_x^2$; together with \eqref{compact} this implies
\begin{align*}
|X_{R_n}(0)|
&\leq \Bigl|\int_{|x|\leq C(\eta)}x\phi\bigl(\tfrac{|x|}{R_n}\bigr) |u(0,x)|^2\,dx\Bigr|
+ \Bigl|\int_{|x|\geq C(\eta)} x\phi\bigl(\tfrac{|x|}{R_n}\bigr) |u(0,x)|^2\,dx\Bigr|\\
&\leq C(\eta) M(u) + 2 \eta R_n.
\end{align*}
On the other hand, by the triangle inequality combined with \eqref{compact} and \eqref{define},
\begin{align*}
|X_{R_n}(T_n)|
&\geq |x(T_n)|M(u) - |x(T_n)| \Bigl| \int_{{\mathbb{R}}^d}\Bigl[1-\phi\bigl(\tfrac{|x|}{R_n}\bigr)\Bigr] |u(T_n,x)|^2\,dx\Bigr|\\
& \quad- \Bigl| \int_{|x-x(T_n)|\leq C(\eta)}\bigl[x-x(T_n)\bigr]\phi\bigl(\tfrac{|x|}{R_n}\bigr) |u(T_n,x)|^2\,dx\Bigr|\\
& \quad - \Bigl| \int_{|x-x(T_n)|\geq C(\eta)}\bigl[x-x(T_n)\bigr]\phi\bigl(\tfrac{|x|}{R_n}\bigr) |u(T_n,x)|^2\,dx\Bigr|\\
& \geq |x(T_n)| [M(u)-\eta] - C(\eta)M(u) - \eta [R_n + |x(T_n)|]\\
& \geq |x(T_n)| [M(u)-3\eta] - 2C(\eta)M(u).
\end{align*}
Thus, taking $\eta>0$ sufficiently small (depending on $M(u)$),
\begin{align*}
\bigl| X_{R_n}(T_n) - X_{R_n}(0) \bigr|\gtrsim_{M(u)}|x(T_n)| - C(\eta).
\end{align*}
A simple computation establishes
\begin{align*}
\partial_t X_R(t)
&= 2\Im \int_{{\mathbb{R}}^d} \phi\bigl(\tfrac{|x|}R\bigr)\nabla u(t,x) \overline{u(t,x)}\, dx\\
&\quad + 2\Im \int_{{\mathbb{R}}^d} \frac{x}{|x|R}\phi'\bigl(\tfrac{|x|}R\bigr)\,x\cdot \nabla u(t,x) \overline{u(t,x)}\, dx.
\end{align*}
By Proposition~\ref{P:p=0}, $P(u)=0$; together with Cauchy-Schwarz and \eqref{compact} this yields
\begin{align*}
\bigl|\partial_t X_{R_n}(t)\bigr|
&\leq \Bigl| 2\Im \int_{{\mathbb{R}}^d}\Bigl[1- \phi\bigl(\tfrac{|x|}{R_n}\bigr)\Bigr]\nabla u(t,x) \overline{u(t,x)}\, dx \Bigr|\\
&\quad + \Bigl|2\Im \int_{{\mathbb{R}}^d} \frac{x}{|x|R_n}\phi'\bigl(\tfrac{|x|}{R_n}\bigr) \, x\cdot \nabla u(t,x) \overline{u(t,x)}\, dx\Bigr|\\
&\leq 6\eta
\end{align*}
for all $t\in [0, T_n]$.
Thus, by the Fundamental Theorem of Calculus,
$$
|x(T_n)| - C(\eta)\lesssim_{M(u)} \eta T_n.
$$
Recalling that $|x(T_n)|=|x(t_n)|>\delta t_n\geq \delta T_n$ and letting $n\to \infty$ we derive a contradiction.
\end{proof}
We are finally in a position to preclude the soliton-like enemy by using a truncated virial identity. When $x(t)\equiv0$,
as in the radial case, the necessary argument can be found in \cite{kenig-merle}.
As the reader will see, it is the finiteness of the $L^2_x$ norm that allows us to extend the argument to the case $|x(t)|=o(t)$.
\begin{theorem}[No soliton]\label{T:no soliton}
Let $d\geq 5$. A minimal kinetic energy blowup solution of \eqref{nls} cannot be a soliton in the sense of Theorem~\ref{T:enemies}.
\end{theorem}
\begin{proof}
Suppose for contradiction that there exists such a solution $u$.
Let $\eta>0$ be a small constant to be specified later. Then, by Definition~\ref{D:ap} and Remark~\ref{R:pot energy},
\begin{equation}\label{decay u}
\sup_{t\in {\mathbb{R}}} \int_{|x-x(t)|>C(\eta)}\bigl( |\nabla u(t,x)|^2+|u(t,x)|^{\frac {2d}{d-2}} \bigr)\, dx\leq \eta.
\end{equation}
Moreover, by Lemma~\ref{L:x(t) control}, $|x(t)|=o(t)$ as $t \to \infty$. Thus, there exists $T_0=T_0(\eta)\in {\mathbb{R}}$ such that
\begin{align}\label{control}
|x(t)|\leq \eta t \quad \text{for all} \quad t\geq T_0.
\end{align}
Now let $\phi$ be a smooth, radial function such that
\begin{align*}
\phi(r)=\begin{cases}
r & \text{for } r\leq 1\\
0 & \text{for } r\geq 2,
\end{cases}
\end{align*}
and define
$$
V_R(t): =\int_{{\mathbb{R}}^d} \psi(x) |u(t,x)|^2\,dx,
$$
where $\psi(x):=R^2\phi\bigl(\tfrac{|x|^2}{R^2}\bigr)$ for some $R>0$.
Differentiating $V_R$ with respect to the time variable, we find
$$
\partial_t V_R(t) = 4\Im \int_{{\mathbb{R}}^d} \phi'\bigl(\tfrac{|x|^2}{R^2}\bigr) \overline{u(t,x)} \; x\cdot \nabla u(t,x) \,dx.
$$
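For the reader's convenience: as $\nabla\psi(x)=2x\,\phi'\bigl(\tfrac{|x|^2}{R^2}\bigr)$, this follows from the local mass conservation identity $\partial_t|u|^2=-2\nabla\cdot\Im\bigl(\bar u\,\nabla u\bigr)$ and an integration by parts:
$$
\partial_t V_R(t) = 2\int_{{\mathbb{R}}^d} \nabla\psi(x)\cdot \Im\bigl(\bar u(t,x)\,\nabla u(t,x)\bigr)\, dx
= 4\Im \int_{{\mathbb{R}}^d} \phi'\bigl(\tfrac{|x|^2}{R^2}\bigr)\, \overline{u(t,x)}\; x\cdot\nabla u(t,x)\,dx.
$$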
By Theorem~\ref{T:-reg}, $u\in L_t^\infty L_x^2$ and so
\begin{align}\label{one deriv}
|\partial_t V_R(t)|\lesssim R \|\nabla u(t)\|_2 \|u(t)\|_2
\lesssim_u R
\end{align}
for all $t\in I$ and $R>0$.
Further computations establish
\begin{align*}
\partial_{tt}V_R(t) &= 4 \Re \int_{{\mathbb{R}}^d}\psi_{ij}(x)u_{i}(t,x)\bar u_j(t,x)\, dx
- \tfrac 4d \int_{{\mathbb{R}}^d} \bigl(\Delta \psi\bigr)(x) |u(t,x)|^{\frac{2d}{d-2}}\, dx\\
&\quad - \int_{{\mathbb{R}}^d} \bigl(\Delta \Delta \psi\bigr)(x) |u(t,x)|^2\, dx,
\end{align*}
where subscripts denote spatial derivatives and repeated indices are summed. Substituting our choice of $\psi$ and using
H\"older's inequality on the last term,
\begin{align*}
\partial_{tt}V_R(t)
&=8\int_{{\mathbb{R}}^d} \bigl(|\nabla u(t,x)|^2-|u(t,x)|^{\frac {2d}{d-2}}\bigr)\, dx\\
&\quad + O\Bigl(\int_{|x|\geq R} \bigl(|\nabla u(t,x)|^2 + |u(t,x)|^{\frac {2d}{d-2}}\bigr)\, dx\Bigr)\\
&\quad + O\Bigl(\Bigl(\int_{R\leq |x|\leq 2R} |u(t,x)|^{\frac {2d}{d-2}}\, dx\Bigr)^{\frac{d-2}{d}}\Bigr).
\end{align*}
From \eqref{hype} and Lemma~\ref{equal},
\begin{align*}
\int_{{\mathbb{R}}^d} \bigl(|\nabla u(t,x)|^2-|u(t,x)|^{\frac {2d}{d-2}}\bigr)\, dx \gtrsim \|\nabla u_0\|_2^2.
\end{align*}
Thus, choosing $\eta>0$ sufficiently small and $R:=C(\eta) +\sup_{T_0\leq t\leq T_1} |x(t)|$ and invoking \eqref{decay u},
\begin{align}\label{two deriv}
\partial_{tt}V_R(t) \gtrsim \|\nabla u_0\|_2^2.
\end{align}
Using the Fundamental Theorem of Calculus on the interval $[T_0, T_1]$ together with \eqref{one deriv} and \eqref{two deriv}, we obtain
$$
(T_1-T_0) \|\nabla u_0\|_2^2\lesssim_u R \lesssim_u C(\eta) +\sup_{T_0\leq t\leq T_1} |x(t)|
$$
for all $T_1\geq T_0$. Invoking \eqref{control} and taking $\eta$ sufficiently small and $T_1$ sufficiently large,
we derive a contradiction unless $u_0\equiv 0$. But $u_0\equiv 0$ is not consistent with the fact that for a soliton, $S_{\mathbb{R}}(u)=\infty$.
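To spell out the final step: assuming, as we may, that $T_0\geq 0$, \eqref{control} gives $\sup_{T_0\leq t\leq T_1}|x(t)|\leq \eta T_1$, so dividing the previous display by $T_1-T_0$ yields
$$
\|\nabla u_0\|_2^2 \lesssim_u \frac{C(\eta)}{T_1-T_0} + \frac{\eta T_1}{T_1-T_0}.
$$
Sending $T_1\to\infty$ gives $\|\nabla u_0\|_2^2\lesssim_u \eta$, and as $\eta>0$ may be taken arbitrarily small, this forces $u_0\equiv 0$.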
\end{proof}
\section{Blowup}\label{S:blowup}
In this section we prove Proposition~\ref{P:blowup}. To this end, let $u_0\in \dot H^1_x({\mathbb{R}}^d)$ and $\delta_0>0$ be such that
$$
\|\nabla u_0\|_2 \geq \|\nabla W\|_2 \quad \text{and} \quad E(u_0)\leq (1-\delta_0) E(W).
$$
Let $u: I\times{\mathbb{R}}^d\to {\mathbb{C}}$ be the maximal-lifespan solution to \eqref{nls} with initial data $u_0$ at time $t=t_0\in I$. By
Corollary~\ref{trap}, there exist $\delta_2, \delta_3>0$ such that for all $t\in I$
\begin{align}
\|\nabla u(t)\|_2^2 &\geq (1+\delta_2) \|\nabla W\|_2^2 \label{high ke}\\
\int_{{\mathbb{R}}^d}\bigl(|\nabla u(t,x)|^2-|u(t,x)|^{\frac {2d}{d-2}}\bigr)\,dx &\leq -\delta_3. \label{neg viriel}
\end{align}
To prove that the solution $u$ blows up in finite time (in either of the two cases described in Proposition~\ref{P:blowup}), we
will use the convexity method \cite{glassey, zack}.
Let us first treat the case when $xu_0\in L_x^2({\mathbb{R}}^d)$; see also \cite{kenig-merle}. In this case, the second moment
$$
V(t):=\int_{{\mathbb{R}}^d} |x|^2 |u(t,x)|^2 \, dx
$$
is well-defined and moreover, $V\in C^2(I)$; see, for example, \cite{cazenave:book}. As $u$ is not identically zero (by
\eqref{high ke}), $V(t)>0$ for all $t\in I$. On the other hand, a quick computation together with \eqref{neg viriel} shows
$$
\partial_{tt} V(t) = 8\int_{{\mathbb{R}}^d}\bigl(|\nabla u(t,x)|^2-|u(t,x)|^{\frac {2d}{d-2}}\bigr)\,dx \leq -8\delta_3.
$$
Thus, the graph of $V$ lies under an inverted parabola, so the solution $u$ blows up in both time directions.
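To make the convexity argument explicit: integrating $\partial_{tt}V\leq -8\delta_3$ twice about any $t_0\in I$ gives
$$
0 < V(t) \leq V(t_0) + \partial_t V(t_0)\,(t-t_0) - 4\delta_3\,(t-t_0)^2 \quad \text{for all } t\in I,
$$
and the right-hand side becomes negative once $|t-t_0|$ is sufficiently large; hence $I$ is bounded in both directions.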
We consider next the case when $u_0\in H^1_x({\mathbb{R}}^d)$ is radial. Blowup for the energy-subcritical problem for this type of
initial data and negative energy was addressed by Ogawa and Tsutsumi \cite{OgawaTsutsumi}; see also \cite{cazenave:book}.
While our exposition is a little different, the argument is very close to theirs.
As the second moment need not be finite for this class of initial data, we define the truncated virial quantity
$$
V_R(t):=\int_{{\mathbb{R}}^d} \psi(x)|u(t,x)|^2\,dx,
$$
where $\psi(x):=R^2 \phi\bigl(\frac{|x|^2}{R^2}\bigr)$ with $R>0$ and $\phi$ a smooth, concave function on $[0,\infty)$ such
that
\begin{equation*}
\phi (r) = \begin{cases}
r & \text{for } r\leq 1\\
2 & \text{for } r\geq 3
\end{cases}
\end{equation*}
and $\phi''(r)$ is non-increasing on $r\leq 2$ and non-decreasing on $r\geq 2$.
A computation establishes
\begin{align*}
\partial_{tt}V_R(t) &= 4 \Re \int_{{\mathbb{R}}^d}\psi_{ij}(x)u_{i}(t,x)\bar u_j(t,x)\, dx
- \tfrac 4d \int_{{\mathbb{R}}^d} \bigl(\Delta \psi\bigr)(x) |u(t,x)|^{\frac{2d}{d-2}}\, dx\\
&\quad - \int_{{\mathbb{R}}^d} \bigl(\Delta \Delta \psi\bigr)(x) |u(t,x)|^2\, dx.
\end{align*}
Substituting our choice of $\psi$ in the formula above and recalling that $u$ is radial,
\begin{align*}
\partial_{tt}V_R(t)
&= 8\int_{{\mathbb{R}}^d}\bigl( |\nabla u(t,x)|^2 - |u(t,x)|^{\frac{2d}{d-2}}\bigr)\, dx + \tfrac1{R^2} O\Bigl( \int_{|x|\sim R} \! |u(t,x)|^2 \, dx\Bigr)\\
&\quad + 8\int_{{\mathbb{R}}^d}\! \Bigl( \phi'\bigl(\tfrac{|x|^2}{R^2}\bigr) -1 + \tfrac{2|x|^2}{R^2}
\phi''\bigl(\tfrac{|x|^2}{R^2}\bigr)\Bigr)
\bigl(|\nabla u(t,x)|^2- |u(t,x)|^{\frac{2d}{d-2}}\bigr)\,dx\\
&\quad + \tfrac{16(d-1)}{d} \int_{{\mathbb{R}}^d} \tfrac{|x|^2}{R^2} \phi''\bigl(\tfrac{|x|^2}{R^2}\bigr)|u(t,x)|^{\frac{2d}{d-2}}\,dx.
\end{align*}
By our choice of $\phi$, we have $\phi''\leq 0$. Moreover, as $u\in L_x^2({\mathbb{R}}^d)$, one can choose $R$ sufficiently large
(depending on the mass of $u$) so that the contribution of the second term on the right-hand side of the equality above is less
than half that of the first term. Thus, invoking \eqref{neg viriel},
\begin{align}\label{deriv V}
\partial_{tt}V_R(t) \leq -4 \delta_3
- 8\int_{{\mathbb{R}}^d}\! \omega(x)\bigl(|\nabla u(t,x)|^2- |u(t,x)|^{\frac{2d}{d-2}}\bigr) \, dx,
\end{align}
where $\omega(x):=1-\phi'\bigl(\tfrac{|x|^2}{R^2}\bigr) - \tfrac{2|x|^2}{R^2} \phi''\bigl(\tfrac{|x|^2}{R^2}\bigr)$. Note that
$0\leq \omega\lesssim 1$ is radial, $\supp(\omega)\subseteq \{ |x|\geq R\}$, and $\omega(x)\lesssim \omega(y)$ uniformly
for all $|x|\leq |y|$.
As in the first case, finite time blowup for $u$ will follow once we show that $\partial_{tt}V_R$ is bounded above by a negative constant; note that $V_R\geq 0$ by construction. To achieve this we will
need the following
\begin{lemma}[Weighted radial Sobolev embedding]\label{wait}
Let $\omega$ be as above and let $f$ be a radial function on ${\mathbb{R}}^d$. Then
\begin{align*}
\bigl\| |x|^{\frac{d-1}2} \omega^{\frac 14} f \bigr\|_{L_x^\infty({\mathbb{R}}^d)}^2 \lesssim \| f \bigr\|_{L_x^2({\mathbb{R}}^d)} \|\omega^{\frac
12} \nabla f \bigr\|_{L_x^2({\mathbb{R}}^d)}.
\end{align*}
\end{lemma}
\begin{proof}
It suffices to establish the claim for radial Schwartz functions $f$. Let $r\geq 0$. By the Fundamental Theorem of Calculus and the Cauchy--Schwarz inequality,
\begin{align*}
r^{d-1}\omega(r)^{\frac12} |f(r)|^2
&= -2 r^{d-1}\omega(r)^{\frac12} \Re \int_r^\infty \bar f(\rho) f'(\rho) \, d\rho\\
&\lesssim \int_r^\infty \rho^{d-1}\omega(\rho)^{\frac12} |f(\rho)|\,|f'(\rho)| \, d\rho\\
&\lesssim \Bigl(\int_r^\infty \rho^{d-1} |f(\rho)|^2 \, d\rho \Bigr)^{\frac12}
\Bigl(\int_r^\infty \rho^{d-1} \omega(\rho) |f'(\rho)|^2 \, d\rho \Bigr)^{\frac12}\\
&\lesssim \| f \bigr\|_{L^2(\rho^{d-1}d\rho)}\|\omega^{\frac 12} f' \bigr\|_{L^2(\rho^{d-1}d\rho)}.
\end{align*}
Here we used that $r^{d-1}\omega(r)^{\frac12}\lesssim \rho^{d-1}\omega(\rho)^{\frac12}$ for $\rho\geq r$, which follows from the properties of $\omega$ recorded above. Taking the supremum over $r\geq 0$ yields the claim.
\end{proof}
Returning to the proof of Proposition~\ref{P:blowup}, we use Lemma~\ref{wait} together with the fact that $u_0\in L_x^2({\mathbb{R}}^d)$,
the conservation of mass, and the properties of $\omega$ described earlier to estimate
\begin{align*}
\int_{{\mathbb{R}}^d}\! \omega(x) |u(t,x)|^{\frac{2d}{d-2}}\, dx
&\lesssim \|\omega^{\frac14}u(t)\|_{L_x^\infty}^{\frac4{d-2}} \int_{{\mathbb{R}}^d} |u(t,x)|^2\, dx\\
&\lesssim R^{-\frac{2(d-1)}{d-2}}\bigl\| |x|^{\frac{d-1}2} \omega^{\frac 14} u(t) \bigr\|_{L_x^\infty}^{\frac4{d-2}}\|u_0\|_{L_x^2}^2\\
&\lesssim R^{-\frac{2(d-1)}{d-2}}\|\omega^{\frac12} \nabla u(t)\|_{L_x^2}^{\frac2{d-2}}\|u_0\|_{L_x^2}^{\frac{2(d-1)}{d-2}}\\
&\lesssim \bigl(R^{-1}\|u_0\|_{L_x^2}\bigr)^{\frac{2(d-1)}{d-2}} \bigl(\|\omega^{\frac12} \nabla u(t)\|_{L_x^2}^2 +1\bigr).
\end{align*}
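Quantitatively, combining this with \eqref{deriv V} gives
\begin{align*}
\partial_{tt}V_R(t) &\leq -4\delta_3 - \Bigl[8 - C\bigl(R^{-1}\|u_0\|_{L_x^2}\bigr)^{\frac{2(d-1)}{d-2}}\Bigr] \|\omega^{\frac12}\nabla u(t)\|_{L_x^2}^2 + C\bigl(R^{-1}\|u_0\|_{L_x^2}\bigr)^{\frac{2(d-1)}{d-2}},
\end{align*}
where $C$ denotes the implicit constant in the last estimate.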
Thus, taking $R$ sufficiently large depending on the mass of $u$ and recalling that $\omega$ is non-negative, \eqref{deriv V} yields
$\partial_{tt}V_R(t)\leq -2\delta_3$ for all $t\in I$. As $V_R\geq 0$, the solution $u$ must blow up in finite time in both time directions. This finishes the proof of Proposition~\ref{P:blowup}. \qed
\section{Concentration at blowup}\label{S:Conc}
The paper of Kenig and Merle contains a sketch of an argument to prove Theorem~\ref{T:conc} in the low dimensional
spherically symmetric case treated in that paper; see \cite[Corollary~5.18]{kenig-merle}.
As far as we understand it, that paper does not satisfactorily address the problem of quadratic oscillation,
namely, that there are radial functions $\phi_n$ obeying
$$
\|\phi_n\|_{L^2_x({\mathbb{R}}^d)} =1,\quad \|e^{it\Delta} \phi_n\|_{L^{\frac{2(d+2)}{d}}_{t,x}([0,1]\times{\mathbb{R}}^d)} \gtrsim 1,
\quad \text{but}\quad \int_{|x|\leq R} |\phi_n(x)|^2\,dx \to 0
$$
for all $R>0$. This difficulty is described, for instance, in the works of Merle--Vega \cite{merle-vega} and Keraani
\cite{keraani-l2} on the mass-critical equation.
The approach we take here is inspired by \cite{bourg.2d,KTV}. These papers considered the mass-critical equation; however,
unlike mass, kinetic energy is not conserved and this leads to several additional difficulties. One example is that we
need to appeal to the full strength of Theorem~\ref{T:main}; Corollary~\ref{C:main} is not sufficient.
\begin{proof}[Proof of Theorem~\ref{T:conc}]
Without loss of generality, we may assume that the solution $u$ blows up forward in time at $0<T^*\leq \infty$. We will further
assume that $T^*$ is finite; the proof in the case when $T^*=\infty$ requires only a few minor changes.
Let $t_n\nearrow T^*$ and define $u_n(0):=u(t_n)$; then each $u_n$ is a solution to \eqref{nls} on $[0, T^*-t_n)$. Invoking
\eqref{type II}, we apply Lemma~\ref{L:cc} to decompose
$$
u_n(0)= \sum_{j=1}^J g_n^j e^{it_n^j \Delta} \phi^j +w_n^J
$$
and define $v_n^j:I_n^j\times{\mathbb{R}}^d\to {\mathbb{C}}$ to be the maximal-lifespan solution to \eqref{nls} with initial data
\begin{align*}
v_n^j(0):=g_n^j e^{it_n^j \Delta} \phi^j.
\end{align*}
By \eqref{decouple}, there exists $J_0\geq 1$ such that
$$
\|\nabla \phi^j\|_2\leq \eta_0 \quad \text{for all} \quad j\geq J_0,
$$
where $\eta_0=\eta_0(d)$ is the threshold from the small data theory. By Theorem~\ref{T:local}, for all $n\geq 1$ and $j\geq
J_0$, the solutions $v_n^j$ are global and moreover,
$$
\sup_{t\in {\mathbb{R}}}\|\nabla v_n^j(t)\|_2^2 + S_{{\mathbb{R}}}(v_n^j)\lesssim \|\nabla \phi^j\|_2^2 \quad \text{for all } n\geq 1 \text{ and }
j\geq J_0.
$$
\begin{lemma}[A bad profile]\label{L:bad profile II}
There exists $1\leq j_0<J_0$ such that
\begin{align}\label{E:bad}
\limsup_{n\to \infty}S_{[0, T^*-t_n)}(v_n^{j_0})=\infty.
\end{align}
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma~\ref{L:bad profile}. Arguing by contradiction, one approximates $u_n$ by
$u_n^J:=\sum_{j=1}^J v_n^j + e^{it\Delta}w_n^J$ and concludes that if \eqref{E:bad} were to fail, then $u$ would not blow up at
$T^*$.
\end{proof}
Reordering the indices, we may assume that there exists $1\leq J_1< J_0$ such that
\begin{align}\label{J1 def}
\limsup_{n\to \infty}S_{[0, T^*-t_n)}(v_n^j)=\infty \quad \text{for all} \quad j\leq J_1
\end{align}
and
$$
\limsup_{n\to \infty}S_{[0, T^*-t_n)}(v_n^j)<\infty \quad \text{for all} \quad j> J_1.
$$
To continue, we pass to a subsequence in $n$ so that $S_{[0,T^*-t_n)}(v_n^1) \to \infty$.
For each $m,n\geq 1$, there exist $1\leq j(m,n) \leq J_1$ and $0<T_n^m<T^*-t_n$ such that
\begin{equation}\label{S is m}
\sup_{1\leq j \leq J_1} S_{[0, T_n^m]}(v_n^j) = S_{[0, T_n^m]}(v_n^{j(m,n)}) = m.
\end{equation}
By the pigeonhole principle, there exists $1\leq j_1 \leq J_1$ so that for infinitely many $m$ we have $j(m,n)=j_1$ for
infinitely many $n$. Reordering the indices if necessary, we may assume $j_1=1$. Now by Theorem~\ref{T:main}, there exists $0\leq
\tau_n^m \leq T_n^m$ such that
$$
\limsup_{m\to \infty} \limsup_{n\to\infty} \|\nabla v_n^1(\tau_n^m)\|_2 \geq \|\nabla W\|_2.
$$
Given ${\varepsilon}>0$, we may set $m_0=m_0({\varepsilon})$ so that
\begin{align*}
\|\nabla v_n^1(\tau_n^{m_0})\|_2\geq \|\nabla W\|_2 -{\varepsilon} \quad \text{for infinitely many } n.
\end{align*}
In what follows we will drop the superscript $m_0$ and denote $\tau_n^{m_0}$, $T_n^{m_0}$ by $\tau_n$, $T_n$, respectively. We
will also pass to an ${\varepsilon}$-dependent subsequence in $n$ so that
\begin{align}\label{big ke}
\|\nabla v_n^1(\tau_n)\|_2\geq \|\nabla W\|_2 -{\varepsilon} \ \text{ for all $n$ and }
\ \lim_{n\to\infty} \|\nabla v_n^1(\tau_n)\|_2 \ \text{ exists.}
\end{align}
Let $\eta>0$ be a small constant to be chosen later. Fix $n$. As $S_{[0, T_n]}(v_n^1)=m_0$, there exists an interval
$[\tau_n^-, \tau_n^+]\subset [0, T_n]$ containing $\tau_n$ such that
\begin{equation}\label{eta S}
S_{[\tau_n^-, \tau_n^+]}(v_n^1)=\eta.
\end{equation}
Using \cite[Lemma~5.1]{KVZ-Q} as in that paper, one may deduce
$$
S_{[\tau_n^- -\tau_n, \tau_n^+ -\tau_n]}\bigl( e^{it\Delta} v_n^1(\tau_n) \bigr) \gtrsim \eta^{C(d)},
$$
where $C(d)$ is a dimension-dependent constant. Thus, by Lemma~\ref{L:B conc} there exist $x_n\in {\mathbb{R}}^d$ and $\tau_n^- -\tau_n\leq s_n\leq
\tau_n^+ -\tau_n$ such that
\begin{equation}\label{something is there}
\int_{|x - x_n|\lesssim |T^*-t_n'|^{\frac12}} |e^{is_n\Delta} \nabla v_n^1(\tau_n)|^2 \, dx \gtrsim_\eta 1,
\end{equation}
where $t_n':= t_n + s_n +\tau_n$. Note that we choose $s_n:=\inf J$ where $J$ is the interval from Lemma~\ref{L:B conc}. In the
case $T^*=\infty$, we choose $s_n:=\sup J$ and thus the diameter of the bubble is no more than $|t_n'|^{\frac12}$.
Passing to a subsequence in $n$, we may assume $t_n^1$ converges (possibly to $\pm\infty$). Arguing as in the beginning of the
proof of Proposition~\ref{P:palais-smale}, we may assume that $t_n^1\equiv 0$ or $t_n^1\to\pm\infty$. Continuing as there, we
define $v^1$ to be the maximal-lifespan solution that matches $\phi^1$ at $t=0$ (in the case $t_n^1\equiv0$) or scatters to
$e^{it\Delta} \phi^1$ (in the case $t_n^1\to\pm\infty$). In truth, $t_n^1\to \infty$ is incompatible with \eqref{J1 def},
though we will not make use of this fact.
Tracing back through the definitions, we obtain
\begin{equation}\label{friends}
\bigl\| v_n^1 - T_{g_n^1} [ v^1(\cdot + t_n^1) ] \bigr\|_{\dot S^1([0,T_n])} \to 0.
\end{equation}
Note that in the case $t_n^1\equiv0$, the left-hand side is actually identically zero. Combining this with \eqref{S is m} we
may deduce
\begin{equation}\label{can choose I}
S_{[t_n^1, t_n^1 + T_n(\lambda_n^1)^{-2}]}(v^1) \to S_{[0,T_n]}(v_n^1) = m_0.
\end{equation}
Moreover, combining \eqref{friends} with \eqref{something is there} and accounting for scaling and space translation,
\begin{equation*}
\int_{|\lambda_n^1 y + x_n^1 - x_n|\lesssim |T^*-t_n'|^{\frac12}} |e^{is_n(\lambda^1_n)^{-2}\Delta} \nabla
v^1((\lambda^1_n)^{-2}\tau_n+t_n^1,y)|^2 \, dy \gtrsim_\eta 1.
\end{equation*}
We now apply Proposition~\ref{P:comp prof}, noting that \eqref{can choose I} implies that $v^1$ has finite scattering size on
the relevant interval. Rescaling and invoking \eqref{friends}, we find
\begin{align}\label{bubble}
\Bigl| \|\nabla v_n^1(\tau_n)\|_2^2 - \int_{|x- x_n^1|\leq R_n} |e^{is_n\Delta} \nabla v_n^1(\tau_n)|^2 \, dx \Bigr| \to 0
\end{align}
for any sequence $R_n\in (0, \infty)$ such that $(T^*-t_n')^{-\frac12}R_n\to \infty$ as $n\to \infty$.
It remains to show that a similar bubble can be found inside $u(t_n')$. In view of \eqref{S is m}, we see that $u_n^J$ is a good
approximation to $u_n$ on $[0, T_n]$ for $n$ and $J$ sufficiently large. In particular,
\begin{align}\label{good approx ke II}
\lim_{J\to \infty}\limsup_{n\to \infty} \|u_n^J(s_n + \tau_n) -u(t_n')\|_{\dot H^1_x({\mathbb{R}}^d)}=0.
\end{align}
On the other hand, using \eqref{crazy} and Lemma~\ref{L:strong decouple}, one may deduce (arguing as in the proof of
Lemma~\ref{L:decouple ke}) that
\begin{align}\label{orthog II}
\limsup_{n\to \infty} \bigl|\bigl\langle \nabla u_n^J(s_n +\tau_n), \nabla v_n^1(s_n+\tau_n) \bigr\rangle\bigr| = \limsup_{n\to
\infty}\|\nabla v_n^1(s_n +\tau_n)\|_2^2
\end{align}
for all $J\geq 1$. Combining \eqref{good approx ke II} and \eqref{orthog II}, we derive
\begin{align*}
\limsup_{n\to \infty} \bigl|\bigl\langle \nabla u_n(t_n'), \nabla v_n^1(s_n+\tau_n) \bigr\rangle\bigr| = \limsup_{n\to
\infty}\|\nabla v_n^1(s_n +\tau_n)\|_2^2.
\end{align*}
Invoking \eqref{eta S} and using the Strichartz inequality, we see that
$$
\bigl\| v_n^1(s_n+\tau_n)- e^{is_n\Delta} v_n^1(\tau_n) \bigr\|_{\dot H^1_x} \lesssim \eta^{\frac2{d+2}}.
$$
Applying this on both sides of the equality above leads to
\begin{align}\label{some bubble}
\limsup_{n\to \infty} \bigl|\bigl\langle \nabla u_n(t_n'), e^{is_n\Delta} \nabla v_n^1(\tau_n) \bigr\rangle\bigr|
\geq \lim_{n\to \infty}\|\nabla v_n^1(\tau_n)\|_2^2 -c(\eta),
\end{align}
provided $\eta$ is chosen sufficiently small. Here $c(\eta)$ denotes a small power of $\eta$ (depending upon the dimension
$d$). Using the Cauchy--Schwarz inequality together with \eqref{some bubble} and \eqref{bubble}, we obtain
\begin{align*}
\limsup_{n\to \infty} \int_{|x- x_n^1|\leq R_n} |\nabla u(t_n',x)|^2 \, dx
\geq \frac{ \bigl[\lim_{n\to \infty}\|\nabla v_n^1(\tau_n)\|_2^2-c(\eta)\bigr]^2}{\lim_{n\to \infty}\|\nabla v_n^1(\tau_n)\|_2^2}.
\end{align*}
Invoking \eqref{big ke} and recalling that ${\varepsilon}$ and $\eta$ may be taken arbitrarily small completes the proof of
Theorem~\ref{T:conc}.
\end{proof}
\begin{remark}
One may wonder whether it is possible to show that kinetic energy concentrates along every sequence
approaching the blowup time.
In general, we are not able to prove results of this nature; in particular, we are unable to verify such a
claim in \cite[Corollary~5.18]{kenig-merle}. The obstruction is as follows: we cannot preclude the possibility
that the solution rapidly alternates between being spread out and being concentrated as one approaches the blowup time.
One exception is when $\sup_{t\in I} \|\nabla u(t)\|_{L_x^2}^2 < 2 \|\nabla W\|_{L_x^2}^2$. In this case, one may apply
Keraani's argument from the proof of \cite[Theorem~1.6]{keraani-l2}.
\end{remark}
\section{\label{sec:intro}Introduction}
Amino acids form the basis of peptides and proteins, which are fundamental building blocks of life, and they are of great scientific interest for a multitude of reasons, first and foremost due to their role in biology and related use in pharmacology and medicine. Their systematic nature also makes them ideal test systems for understanding important aspects of the behaviour of molecular systems, including local and long-range structure and interactions, polymorphism, the three-dimensional arrangement of proteins, and ionic behaviour and its tunability by the environment. Whilst the motivation to study amino acids is clear, experimental strategies are generally limited to structural techniques such as X-ray diffraction (XRD). A complementary technique, which can provide an additional level of information on chemical states and electronic structure not accessible to XRD, is X-ray photoelectron spectroscopy (XPS). Recently, we have started to explore the application of XPS to amino acids in their crystalline, powder form in combination with theoretical calculations based on density functional theory (DFT)~\cite{Hohenberg1964,Kohn1965}. Our first study established a combined experiment-theory approach to predict and interpret primarily the C~1\textit{s} core level spectra of the simple amino acids glycine (Gly), alanine (Ala), and serine (Ser)~\cite{Pi2020}. Here, we extend and improve our previous approach, applying it to amino acids with aromatic side chains, including phenylalanine (Phe), tyrosine (Tyr), tryptophan (Trp), and histidine (His). Fig.~\ref{fig:aa_schem} shows a schematic of their atomic structures, including alanine (Ala), which is used as a reference throughout and which we have reported on previously\cite{Pi2020}.\\
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{AA_schematic_2.png}
\caption{Schematic of (a) Ala, (b) Phe, (c) Tyr, (d) Trp and (e) His showing the atomic structures and atom labels, which will be used in the following. \label{fig:aa_schem}}
\end{figure}
As for X-ray photoelectron spectroscopy studies on amino acids in general, very few studies exist on the aromatic subgroup. A small number of experiments have been performed on Phe, Tyr and His adsorbed on single crystal substrates, including Au, Ag, Cu and TiO\textsubscript{2}\cite{Thomas2007AdsorptionSurface,Feyer2008,Feyer2010AdsorptionAu111,Reichert2010L-tyrosineScheme}. Whilst gas phase experiments are often used to study amino acids, this is difficult to achieve for the aromatic subgroup as they generally have high melting points (and consequently low vapour pressures) as well as low thermal stability\cite{Zhang2009ElectronicSpectroscopy}.
A very limited number of studies on solid powders has been reported, which often suffer from low experimental resolution complicating peak assignments\cite{Zubavichus2004SoftStudy,Cardenas2006TheSpectroscopy}. The 2013 work by Stevens \textit{et al.}\ provides the most systematic and detailed study of solid phase amino acids to date, in which only His of the aromatic subgroup is included\cite{Stevens2013}. Beyond XPS, X-ray absorption spectroscopy and electron energy loss spectroscopy have been employed to understand the chemistry and structure of the aromatic amino acids\cite{Boese1997CarbonPeptides,Cooper2004InnerPhenylalanine}. Due to the complexity of aromatic amino acids, the use of theory to guide the interpretation of spectra is essential. This is particularly true in the solid state, where intermolecular interactions can have an important effect, posing a further challenge to peak assignment.
Nonetheless, from a theoretical point of view, only a handful of examples of core binding energy (BE) calculations of aromatic amino acids exist, which are all limited to the gas phase~\cite{Wang2008,Ganesan2009,Zhang2009ElectronicSpectroscopy,Ganesan2014,Wang2014}. Furthermore, to the best of our knowledge the core state BEs of His have not previously been calculated.\\
In this work, the aromatic amino acids phenylalanine (Phe), tyrosine (Tyr), tryptophan (Trp) and histidine (His) are explored using both experiment and theory. The subgroup classification of amino acids usually includes Phe, Tyr and Trp in the aromatic group, with Tyr also sometimes grouped with the polar amino acids. Due to its basic properties, His is often classified as a polar amino acid. For completeness, we include all amino acids containing aromatic side chains here, independent of their polar nature. XPS experiments in the solid phase are compared to theoretical calculations based on DFT using the $\Delta$SCF (self-consistent field) approach, implemented in codes employing systematic basis sets. Due to the complexity of the observed core level spectra and the apparent strong influence of not only nearest, but also next-nearest and even further removed neighbouring atoms, a molecular subspecies approach was followed to aid in the rationalisation and explanation of observed BE shifts, particularly for the C and N~1\textit{s} core states. This is shown to be an extremely useful approach to gain a full and detailed understanding of the chemical and electronic structure of these important biological building blocks.\\
\section{Methods}
\subsection{Theoretical Approach}
Density functional theory was used to calculate the core state BEs of Ala, Phe, Tyr, Trp, and His. The primary motivation is the calculation of solid state BEs to aid the interpretation of experimental spectra. However, the solid state BEs are influenced by a combination of factors including the presence of different functional groups, the molecular and crystal structure, and intermolecular interactions. Theory is essential to disentangle these competing effects. Initial crystal structures for Ala, Phe, Tyr, Trp and His were obtained from Refs.~\cite{Destro1988,Ihlefeldt2014,Mostad1972,Gorbitz2015,Fronczek2016}, respectively. In order to assess the influence of the molecular structure, different gas phase conformers were tested for each molecule and selected as follows. Four low energy conformers for Phe, Tyr and Trp, and three low energy conformers for His were taken from the literature~\cite{Purushotham2012,Ropo2016}. Following geometry optimisation and BE calculations, the two conformers of each amino acid with the most distinct BEs were retained for further investigation. For Ala, only the lowest energy conformer used in Ref.~\cite{Pi2020} was considered. The structures of each conformer are presented in the Supplementary Information. The gas phase conformers were compared to the molecule extracted directly from the bulk, which was allowed to relax away from the zwitterionic state into its neutral form. BEs were calculated using the $\Delta$SCF approach, however, in order to distinguish between initial and final state effects, gas phase BEs were additionally calculated at the level of Koopmans'. To determine the contribution from intermolecular interactions, the BEs of both the gas and solid phase are compared. Finally, in order to assess the impact of different functional groups, a systematic series of subspecies molecules was investigated, as depicted in Fig.~\ref{fig:all_molecules}, which are derived from the aromatic amino acids studied here.\\
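For reference, the two levels of theory correspond to the standard definitions. Within the vertical approximation, the $\Delta$SCF BE of a given 1\textit{s} state is the total energy difference
$$
\mathrm{BE}^{\Delta\mathrm{SCF}} = E_{\text{core hole}}(N-1) - E_{\text{ground}}(N),
$$
which includes the final state relaxation of the remaining electrons in the presence of the core hole, whereas at the level of Koopmans' theorem the BE is approximated by the negative of the corresponding core orbital eigenvalue, $\mathrm{BE}^{\mathrm{Koopmans}} = -\varepsilon_{1s}$, which neglects this relaxation. The difference between the two therefore provides a measure of the final state contribution to the calculated shifts.\\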
In order to aid interpretation of experimental spectra, the relative BE positions of contributing chemical environments are needed. Absolute BEs are not necessary for this approach, and DFT is more reliable for relative than absolute BEs. However, some recent work exists where DFT has been shown to accurately reproduce absolute BEs~\cite{Kahk2019,Ozaki2017}. When comparing calculations across the molecules, it is important to note that while BEs calculated for molecules in the gas phase can be directly compared between molecules, this is not the case for solid state calculations, since the core hole calculations are performed in charged supercells. Although schemes exist to account for the use of periodic boundary conditions (e.g.\ Ref.~\cite{Ozaki2017}), this can introduce an additional source of uncertainty. Therefore, since it is not essential for the current work, BEs of the amino acids in the solid state are not directly compared.\\
\begin{landscape}
\vspace*{\fill}
\begin{figure*}
\centering
\includegraphics[width=\paperwidth]{molecules_schematic_v2.1.png}
\caption{Molecular structure of the amino acids Ala, Phe, Tyr, Trp and His, as well as the molecular subspecies calculated, including atom labels used throughout this work. The molecules included are (\textbf{1}) benzene, (\textbf{2}) methylbenzene, (\textbf{3}) ethylbenzene, (\textbf{11}) phenol, (\textbf{12}) 4-methylphenol, (\textbf{13}) 4-ethylphenol, (\textbf{21}) 1H-pyrrole, (\textbf{22}) 1H-indole, (\textbf{23}) 3-methyl-1H-indole, (\textbf{24}) 3-ethyl-1H-indole, (\textbf{25}) 2-amino-3-(5-methyl-1H-pyrrol-3-yl)propanoic acid, (\textbf{26}) 2-amino-3-(1H-pyrrol-2-yl)propanoic acid, (\textbf{31}) 1H-imidazole, (\textbf{32}) 4-methyl-1H-imidazole, and (\textbf{33}) 4-ethyl-1H-imidazole.\label{fig:all_molecules}}
\end{figure*}
\vfill
\end{landscape}
\subsubsection{Computational Details}
Gas phase geometry optimisations were performed using BigDFT~\cite{Genovese2008,Ratcliff2020}, in open boundary conditions, with a wavelet grid spacing of 0.185~\AA, coarse and fine radius multipliers of 5 and 8, respectively, and HGH-GTH pseudopotentials (PSPs)~\cite{Goedecker1996,Hartwigsen1998}.
Gas phase BE calculations were performed at the level of both Koopmans' and $\Delta$SCF using the MADNESS molecular DFT code~\cite{Harrison2016} with open boundary conditions. A mixed all-electron (AE)/PSP approach was used~\cite{Ratcliff2019}, wherein the atom of interest was treated at the AE level, with remaining atoms treated at the PSP level, as described in Ref.~\cite{Pi2020}.
Ground state calculations used a wavelet threshold of $10^{-4}$ followed by $10^{-6}$ (wavelet order $k=6$ and $k=8$), while core hole
calculations directly used a wavelet threshold of $10^{-6}$ ($k=8$). A convergence criterion of $10^{-3}$ was
used for both the density and Kohn-Sham wavefunction residuals.
Following Ref.~\cite{Pi2020} the ground state wavefunctions were used as an input guess for the core hole calculations, localisation was imposed on the
wavefunctions for the ground state while core hole calculations used canonical orbitals, and the B-spline projection based derivative operator was used (except for the
calculation of the kinetic energy operator)~\cite{Anderson2019}. Calculations employed the same PSPs as BigDFT.\\
Solid state geometry optimisations and BE calculations using the $\Delta$SCF approach were performed with the CASTEP plane-wave DFT code~\cite{Clark2005}. Core hole PSPs were used to represent the core-excited atom, following the same procedure and with the same norm-conserving on-the-fly generated PSPs as Ref.~\cite{Pi2020}.
Calculations were performed with a cut-off energy of 900~eV, and Monkhorst-Pack~\cite{Monkhorst1976} $k$-point grids of $2 \times 1 \times 2$, $2 \times 2 \times 1$, $2 \times 1 \times 2$, and $2 \times 2 \times 2$ for Ala, Phe, Tyr and His, respectively, with Trp calculations performed at the $\Gamma$-point only.
Geometry optimisations used the semi-empirical dispersion correction scheme of Grimme~\cite{Grimme2006}. \\
Gas phase BE calculations were performed using PBE only~\cite{Perdew1996}, while solid state BE calculations were performed using both PBE and PBE0~\cite{Adamo1999}, except for Phe and Trp where PBE0 calculations were prohibitively expensive due to their large unit cells containing 184 and 432 atoms, respectively. All BEs were calculated in the vertical approximation.\\
All geometry optimisations used the PBE functional and a force tolerance of 0.02~eV/\AA. For solid state geometry optimisations the cell was also allowed to relax.
For molecules extracted from the optimised crystals, only the H atoms were relaxed, with all other atoms frozen. In order to prevent collapse back to the zwitterionic state, an initial perturbation was applied to one of the H atoms. For all other gas phase calculations, all atoms were allowed to relax. All calculations were spin restricted and relativistic effects were neglected, since although these can have a significant effect when calculating absolute BEs (see e.g.\ Ref.~\cite{Besley2009}), they are less significant when considering relative BEs, as in this work.
The same computational parameters were used for both gas phase amino acids and molecular subspecies calculations.
Molecule and crystal structures were visualized using VESTA~\cite{Momma2008}.
\subsection{Experimental Approach}
Powders of the L-stereoisomers of all investigated amino acids were purchased from Sigma-Aldrich (Ala~$\geqslant$99\%, Phe~$\geqslant$98\%, Tyr~$\geqslant$98\%, Trp~$\geqslant$98\%, His~$\geqslant$99\%). Core level spectra were recorded on a Thermo Scientific K-Alpha+ XPS system with a monochromated, microfocused Al K$\alpha$ X-ray source (h$\nu$ = 1486.7~eV), which was operated at a 6~mA emission current and 12~kV anode bias. The base pressure was 2$\times10^{-9}$ mbar. All core level spectra were collected at a pass energy of 20~eV using an X-ray spot size of 400~$\mu$m. Samples were mounted on conducting carbon tape and a flood gun was employed to prevent sample charging. As amino acids are prone to radiation damage, samples were rastered and data were collected at four points across each sample and then averaged to achieve the necessary signal statistics for peak fitting. All data were analysed using the Avantage software package. Differences in peak positions across the different measurement points were less than 50~meV for all core levels. For peak fit analysis, Shirley-type backgrounds and Voigt functions were used with both the full width at half maximum (FWHM) and Lorentzian/Gaussian (L/G) ratios refined.\\
\section{Results}
\subsection{Calculated Solid State Binding Energies}
In line with our previous work~\cite{Pi2020}, the use of PBE with semi-empirical dispersion corrections for the solid state geometry optimisations resulted in a good description of the crystal structure. Relaxed lattice parameters and angles, which are reported in the Supplementary Information alongside the relaxed crystal structures, are in good agreement with the experimental values, with maximum discrepancies of 3.0~\% and 1.2~\%, respectively.\\
Calculated BEs for the solid state amino acids are presented in Tab.~\ref{tab:bes}.
Due to the relatively large unit cell sizes of the aromatic amino acids, it is highly desirable to perform calculations using semi-local functionals such as PBE, rather than hybrid functionals such as PBE0. Indeed, for Phe and Trp PBE0 BE calculations were prohibitively expensive. For the amino acids where PBE0 calculations were possible (Ala, Tyr, and His), significant quantitative differences can be seen between the two functionals for C~1\textit{s}, up to 0.7~eV in the most severe cases. However, qualitatively the differences are less significant, and it is primarily the BE of C$'$ relative to the other states which is most strongly affected. Importantly, the order of BEs remains constant to within 0.1~eV, so that for the purposes of aiding in peak assignment in experimental spectra it is not necessary to go beyond PBE. Furthermore, the differences for O and N~1\textit{s} core states are negligible. Therefore, the calculations presented in the following sections were performed using PBE only.
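The order-preservation argument can be made concrete with a small check: using the Ala C~1\textit{s} values from Tab.~\ref{tab:bes} (state labels in the snippet are ad hoc), the code below verifies that PBE and PBE0 give the same BE ordering even though individual relative BEs differ by up to 0.4~eV.

```python
# Relative C 1s BEs for Ala (eV, relative to C'), from Tab. 1
pbe  = {"C'": 0.0, "C_alpha": -1.7, "C_beta": -3.1}
pbe0 = {"C'": 0.0, "C_alpha": -2.1, "C_beta": -3.5}

def be_order(bes):
    """Chemical states sorted from highest to lowest binding energy."""
    return [state for state, _ in sorted(bes.items(), key=lambda kv: -kv[1])]

# Ordering is identical for both functionals ...
same_order = be_order(pbe) == be_order(pbe0)
# ... even though individual relative BEs shift noticeably
max_shift = max(abs(pbe[s] - pbe0[s]) for s in pbe)
```

For peak assignment only the ordering matters, which is why the cheaper PBE calculations suffice.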
\begin{table*}[h]
\centering
\begin{threeparttable}
\caption{\label{tab:bes} Experimental (`Exp') and PBE/PBE0 calculated relative C, O and N 1\textit{s} core state BEs for solid state amino acids.}
\begin{tabular*} {1.0\textwidth}{c @{\extracolsep{\fill}} rrr rr rrr rr rrr}
\hline\hline
& \multicolumn{3}{c}{Ala} & \multicolumn{2}{c}{Phe} & \multicolumn{3}{c}{Tyr} & \multicolumn{2}{c}{Trp} & \multicolumn{3}{c}{His} \\
& PBE & PBE0 & Exp & PBE & Exp & PBE & PBE0 & Exp & PBE & Exp & PBE & PBE0 & Exp \\
\cline{1-1} \cline{2-4} \cline{5-6} \cline{7-9} \cline{10-11} \cline{12-14} \\[-2.5ex]
C$'$ & 0.0 & 0.0 & 0.0 &
0.0 & 0.0 &
0.0 & 0.0 & 0.0 &
0.0 & 0.0 &
0.0 & 0.0 & 0.0 \\
C$^\alpha$ & -1.7 & -2.1 & -2.0 &
-1.4 & -1.9 &
-1.5 & -2.1 & -2.5 &
-1.3 & -2.1 &
-1.8 & -2.1 & -1.9\\
C$^\beta$ & -3.1 & -3.5 & -3.3 &
-2.5 & -2.6 &
-2.7 & -3.4 & -3.6 &
-2.3 & -3.1 &
-2.7 & -3.2 & -3.2 \\
C$_1$ & - & - & - &
-2.9 & -3.1 &
-3.6 & -4.3 & -4.3 &
- & - &
- & - & - \\
C$_2$ & - & - & - &
-3.2 & -3.7 &
-3.7 & -4.4 & -4.3 &
-2.7 & -3.7 &
-2.1 & -2.5 & -2.4\\
C$_3$ & - & - & - &
-3.1 & -3.7 &
-3.8 & -4.5 & -4.3 &
-3.3 & -4.3 &
- & - & - \\
C$_{3\mathrm{a}}$ & - & - & - &
- & - &
- & - & - &
-3.2 & -4.3 &
- & - & - \\
C$_4$ & - & - & - &
-3.2 & -3.7 &
-2.1 & -2.8 & -3.1 &
-3.4 & -4.3 &
-2.8 & -3.2 & -3.2 \\
C$_5$ & - & - & - &
-3.2 & -3.7 &
-3.8 & -4.5 & -4.3 &
-3.4 & -4.3 &
-2.8 & -3.3 & -3.2\\
C$_6$ & - & - & - &
-3.1 & -3.7 &
-3.6 & -4.3 & -4.3 &
-3.3 & -4.3 &
- & - & - \\
C$_7$ & - & - & - &
- & - &
- & - & - &
-3.2 & -4.3 &
- & - & - \\
C$_{7\mathrm{a}}$ & - & - & - &
- & - &
- & - & - &
-2.3 & -3.1 &
- & - & - \\
\cline{1-1} \cline{2-4} \cline{5-6} \cline{7-9} \cline{10-11} \cline{12-14} \\[-2.5ex]
O$^1$ & 0.0 & 0.0 & 0.0 &
0.0 & 0.0 &
0.0 & 0.0 & 0.0 &
0.0 & 0.0 &
0.0 & 0.0 & 0.0 \\
O$^2$ & 0.2 & 0.2 & 0.0 &
0.0 & 0.0 &
0.1 & 0.1 & 0.0 &
-0.1 & 0.0 &
0.0 & 0.0 & 0.0 \\
O$^3$ & - & - & - &
- & - &
0.8 & 0.8 & 1.2 &
- & - &
- & - & - \\
\cline{1-1} \cline{2-4} \cline{5-6} \cline{7-9} \cline{10-11} \cline{12-14} \\[-2.5ex]
N$^1$ & 0.0 & 0.0 & 0.0 &
0.0 & 0.0 &
0.0 & 0.0 & 0.0 &
0.0 & 0.0 &
0.0 & 0.0 & 0.0 \\
N$^2$ & - & - & - &
- & - &
- & - & - &
-1.2 & -1.5 &
-0.6 & -0.7 & -1.0\\
N$^3$ & - & - & - &
- & - &
- & - & - &
- & - &
-2.2 & -2.3 & -2.7 \\
\hline\hline
\end{tabular*}
\end{threeparttable}
\end{table*}
Whilst the calculated BE positions describe the experimental core level spectra very well, which will be discussed in detail in Section~\ref{sec:spectra}, it is not easy to intuitively rationalise the order and relative positions of the different constituents, in particular for the case of C~1\textit{s} with its many chemical states. Therefore, a molecular subspecies approach was chosen to systematically explore core level energy changes with the removal or introduction of part of the amino acids and their functional groups.\\
\subsection{Molecular Subspecies Series}
Twenty-two additional small molecular systems were explored theoretically to aid our understanding of the core level spectra observed for the aromatic amino acids. Fig.~\ref{fig:all_molecules} gives an overview of the main set of molecular subspecies calculated and their relationship to the aromatic amino acids and Ala. Fig.~\ref{fig:mol_series} provides an overview of the C~1\textit{s} BEs of the molecular series, while the tables of the corresponding BEs are also given in the Supplementary Information. A set of additional subspecies was explored to understand specific questions arising around nitrogen groups and aromatic systems, which is shown in the Supplementary Information. In the following subsections the results and main conclusions for each of the aromatic amino acids are discussed.\\
\begin{figure*}[p]
\centering
\includegraphics[scale=0.25]{molecule_series.png}
\caption{PBE-calculated C~1\textit{s} BEs for the amino acids and the series of subspecies molecules. Gas phase BEs are relative to Ala C$'$, while solid state BEs are relative to C$'$ of that amino acid. \label{fig:mol_series}}
\end{figure*}
\subsubsection{Phe}
Just as Phe is the simplest of the amino acids explored here, it also reduces to the simplest submolecule, benzene (\textbf{1}). As expected, all C atoms of benzene have the same BE in both the Koopmans' and the $\Delta$SCF approaches. Moving to methylbenzene (\textbf{2}), a clear difference between C$_1$ and the remaining C atoms of the aromatic ring becomes noticeable, in line with previous calculations~\cite{Myrseth2007}.
This is clearly illustrated by the differences between the ground state electronic densities of (\textbf{2}) and (\textbf{1}), which are depicted in the Supplementary Information, where the addition of the CH\textsubscript{3} group changes the density around C$_1$. There are also non-negligible changes in the density around all other aromatic C atoms C$_{\mathrm{arom}}$.
Combined with changes in the atomic structure between (\textbf{2}) and (\textbf{1}), and which are not accounted for in the visualisation of the densities, this explains why the Koopmans' BEs of all C atoms change between the two molecules.
Comparing to experimental gas phase measurements by Ohta \textit{et al.}~\cite{Ohta1975Core-electronSpectroscopy}, we observe that while the relative BEs agree reasonably well with experiment, their peak assignments are more in line with the Koopmans' values. In particular, C$_1$ has the highest BE, while C$^\beta$ is at the lowest BE.\\
Whilst C$^\beta$ in (\textbf{2}) and ethylbenzene (\textbf{3}) and C$^\alpha$ in (\textbf{3}) occur at the lowest BEs in the Koopmans' approach, this changes completely in the $\Delta$SCF approach, where C$^\alpha$ and C$^\beta$ move to the higher BE side of all other C atoms.
In addition, a clear chemical shift between the CH\textsubscript{2} and CH\textsubscript{3} groups of the side chain for (\textbf{3}) is also apparent.
In order to understand to what extent the shift in C$^\beta$ for $\Delta$SCF is affected by the aromaticity of (\textbf{2}), we also compare with methylcyclohexane (\textbf{44}), for which results are given in the Supplementary Information. In particular, the $\Delta$SCF results for (\textbf{44}) only show a small spread, but otherwise both C$^\beta$ and C$_1$ are at very similar energies to the remaining C atoms, in contrast to (\textbf{2}). In other words, the conjugated system is much more sensitive to the addition of the CH\textsubscript{3} group when final state effects are taken into account.
When examining the density difference between (\textbf{3}) and (\textbf{2}), there is a small change in the density around C$_1$, which in turn gives rise to a small change in the Koopmans' BEs. All other C atoms in the ring, however, remain unaffected by the addition of the CH\textsubscript{3} group, so that the corresponding BEs of the C$_{\mathrm{arom}}$ atoms do not change between (\textbf{2}) and (\textbf{3}).\\
For Phe itself the C$_{\mathrm{arom}}$ atoms including C$_1$ behave similarly to (\textbf{1})-(\textbf{3}), with a clear spreading in BE of C$_1$-C$_6$. With the addition of the carboxylic COO\textsuperscript{-} group the separation between C$_{\mathrm{arom}}$ and C$^\alpha$ and C$^\beta$ increases significantly and C$^\alpha$ and C$^\beta$ switch places.
The $\Delta$SCF results for the gas phase molecules show similar variations between conformers and are in agreement with previous calculations from Zhang \textit{et al.}~\cite{Zhang2009ElectronicSpectroscopy}\ who included four different conformers. When comparing the $\Delta$SCF gas phase conformers with the solid Phe a clear bunching up of BEs is observed, whilst the relative BE order of the different C environments remains the same. The significant change in BE of C$^\alpha$ and C$'$ can be explained by the change from COOH/NH\textsubscript{2} to the zwitterionic COO\textsuperscript{-}/NH\textsubscript{3}\textsuperscript{+} environments and the resulting intermolecular interactions. The main observable difference between the Koopmans' and $\Delta$SCF results for Phe lies in the differentiation of C$^\beta$ from C$_{\mathrm{arom}}$. Whilst they are very close in energy or even overlap for some conformers, C$^\beta$ moves to significantly higher BEs in $\Delta$SCF due to final state effects, which is consistent with the behaviour of C$^\beta$ in (\textbf{2}) and (\textbf{3}).\\
\subsubsection{Tyr}
Across the series from phenol (\textbf{11}) to 4-methylphenol (\textbf{12}) and 4-ethylphenol (\textbf{13}) a common feature is the spreading out of C$_{\mathrm{arom}}$ BEs due to the presence of the hydroxyl group.
Similarly to the equivalent series for Phe, there is a significant change in the electronic density (shown in the Supplementary Information) on all C$_{\mathrm{arom}}$ going from (\textbf{11}) to (\textbf{12}), with corresponding changes in the BEs. However, the changes in density between (\textbf{12}) and (\textbf{13}) are again primarily localized on C$_1$, with the remaining C$_{\mathrm{arom}}$ unaffected by the addition of the CH\textsubscript{3} group.
In parallel to the spreading out of the C$_{\mathrm{arom}}$ BEs, a large gap also opens up between C$_4$ and the remaining C atoms. Comparing (\textbf{11}) with cyclohexanol (\textbf{47}), for which results are presented in the Supplementary Information, this gap is much larger in (\textbf{11}) than in (\textbf{47}) for both Koopmans' and $\Delta$SCF, demonstrating the strong influence of the aromaticity and the importance of final state effects. Both Koopmans' and $\Delta$SCF results for (\textbf{11}) agree well with experimental gas phase results from Ohta \textit{et al.}~\cite{Ohta1975Core-electronSpectroscopy}. One interesting point to note about (\textbf{11}) is the difference in BEs between C$_3$ and C$_5$, and between C$_2$ and C$_6$, which in contrast have the same BEs in both (\textbf{2}) and aniline (\textbf{46}); the same is true for the equivalent non-conjugated molecules. However, due to the presence of the hydroxyl group, neither (\textbf{11}) nor (\textbf{47}) has a symmetric structure, and the small asymmetry of the BEs can be attributed to this asymmetry of the atomic structures. \\
As was the case for the subspecies molecules for Phe, C$^\alpha$ and C$^\beta$ in (\textbf{12}) and (\textbf{13}) occur at the lowest BEs in the Koopmans' approach, but swap when $\Delta$SCF is used. Compared to Phe, C$^\beta$ in Tyr shifts to even higher BE relative to C$_{\mathrm{arom}}$ in the $\Delta$SCF approach. This is a direct result of the addition of the hydroxyl group onto the aromatic ring and showcases the strong long-range intramolecular interactions taking place. Of course C$_4$ is now also clearly separated from the rest of the aromatic ring and located at a BE intermediate between C$^\alpha$ and C$^\beta$.\\
As with Phe, there is also variation between Tyr conformers, where the results are again in line with calculations from Zhang \textit{et al.}~\cite{Zhang2009ElectronicSpectroscopy}. A significant change in the BE separation of C$^\alpha$ and C$_4$ occurs when moving from the gas phase calculations to the solid state case. Whilst in the gas phase their binding energies are almost identical across all Tyr molecules considered, they separate significantly in the solid. This is due to the hydroxyl group taking part in intermolecular hydrogen bonding as can be clearly seen from the crystal structures shown in the Supplementary Information.\\
\subsubsection{Trp}
1H-pyrrole (\textbf{21}) nicely exemplifies the symmetric nature of the ring with C$_2$/C$_{7a}$ and C$_3$/C$_{3a}$ grouping together for both Koopmans' and $\Delta$SCF. In 1H-indole (\textbf{22}) C$_2$ and C$_{7a}$ remain at significantly higher BEs than all other C atoms. Comparing the Koopmans' and $\Delta$SCF results for 3-methyl-1H-indole (\textbf{23}) and 3-ethyl-1H-indole (\textbf{24}) a considerable change in BE for C$^\alpha$ and C$^\beta$ is observed as in the previous cases discussed. A systematic difference in the $\Delta$SCF BEs of C$_2$ and C$_{7a}$ is noted across all molecules in the series except (\textbf{21}), even if the six-membered ring is removed as is the case in 2-amino-3-(5-methyl-1H-pyrrol-3-yl)propanoic acid (\textbf{25}) and 2-amino-3-(1H-pyrrol-2-yl)propanoic acid (\textbf{26}).\\
As with Phe and Tyr, and again in agreement with Zhang \textit{et al.}~\cite{Zhang2009ElectronicSpectroscopy}, there is noticeable variation between the Trp conformers. While the Koopmans' BEs for Trp are in line with chemical intuition, the $\Delta$SCF BEs are harder to explain. In particular, contrary to the expectation that aromatic and aliphatic C atoms should have similar BEs, C$^\beta$ is noticeably higher in BE than the C$_{\mathrm{arom}}$ which do not neighbour an N atom. Indeed, the BE of C$^\beta$ is especially sensitive to final state effects, as evidenced by the difference between Koopmans' and $\Delta$SCF values.
This is also the case for both Phe and Tyr, and by comparing (\textbf{2}) and (\textbf{44}) this was attributed to the conjugated nature of the ring. Similarly, the density comparisons discussed in relation to Phe and Tyr demonstrated that the functionalisation of an aromatic ring can impact the density, and thus the BEs, of \textit{all} atoms in the ring, not just the nearest neighbour. This explains, for example, why it is not just the BE of C$^\alpha$ that is affected by the addition of the amino group when going from (\textbf{24}) to Trp.\\
Furthermore, in Trp the BE of C$^\beta$ is surprisingly close to that of both C$_2$ and C$_{7\mathrm{a}}$ in the gas phase and the same as C$_{7\mathrm{a}}$ in the solid state, which cannot be explained by arguments based purely on electronegativity. Rather, since they each neighbour an N atom, one would expect the BE of C$^\alpha$ to be close to that of C$_2$ and C$_{7\mathrm{a}}$, which is not the case in either Trp, (\textbf{25}), or (\textbf{26}). In addition to next-nearest neighbour effects, this can also be explained by the protonation state of the N atoms. In order to provide further insights into the influence of different protonation states of N on C~1\textit{s} BEs, we also considered an additional set of subspecies molecules containing nitrogen, for which results are given in the Supplementary Information. Taking, for example, the series of ethylamine (\textbf{41}) to diethylamine (\textbf{42}) to triethylamine (\textbf{43}), one can see a clear trend in the $\Delta$SCF BEs, where the higher the protonation state of the N atom, the higher the BE of C$^\alpha$. This trend is in agreement with C$^\alpha$ having a higher BE than C$_2$ and C$_{7\mathrm{a}}$. Finally, we note that the BEs of C$^\beta$ in the alkylamine series are also affected by the change in N protonation state, providing further support for the importance of next-nearest neighbour effects, although the magnitude of variations is much smaller than for C$^\alpha$.
To further test the influence of aromaticity, the BEs for (\textbf{46}) and cyclohexanamine (\textbf{45}) were calculated, for which results are given in the Supplementary Information. Both Koopmans' and $\Delta$SCF results for (\textbf{46}) are in good agreement with experimental gas phase results from Ohta \textit{et al.}~\cite{Ohta1975Core-electronSpectroscopy}. Consistent with (\textbf{11}) and (\textbf{47}), a larger gap between C$_1$ and the remaining C atoms is observed for (\textbf{46}) than (\textbf{45}), while there is also a larger spread of the C atoms in the ring in (\textbf{46}) compared to (\textbf{45}). A clear overall trend is observed upon the addition of a functional group to a ring, whether conjugated or non-conjugated: the split between the C atom to which the group binds and the remaining C atoms increases in line with the increasing electronegativity from C to N to O in the functional groups CH\textsubscript{3}, NH\textsubscript{2}, and OH. Comparing the effect on the conjugated versus non-conjugated rings, this difference is always larger for the conjugated ring.\\
\subsubsection{His}
In 1H-imidazole (\textbf{31}) the three C atoms all have considerably different BEs, including a clear distinction in C BE depending on the protonation of the neighbouring N atom in line with the previous observations for molecules (\textbf{41})-(\textbf{43}). The addition of the methyl and ethyl side chains in 4-methyl-1H-imidazole (\textbf{32}) and 4-ethyl-1H-imidazole (\textbf{33}), respectively, reduces the difference in BE between C$_4$ and C$_5$. Going from Koopmans' to $\Delta$SCF a significant change in the BEs of C$^\beta$ and C$^\alpha$ relative to the three C atoms in the aromatic ring, C$_2$, C$_4$, and C$_5$, is observed. The relative differences between C$_2$, C$_4$, and C$_5$ remain very similar between the two approaches.\\
Across all $\Delta$SCF gas phase calculations of His, C$_4$, C$_5$ and C$^\beta$ are very close in BE. This is comparable to the observations made for C$_2$, C$_{7\mathrm{a}}$ and C$^\beta$ in Trp. Another similarity between Trp and His is that C$^\alpha$ is the most sensitive to changes in conformer and gas/solid phases, and its BE changes significantly between calculations. In the solid phase C$^\alpha$ is even higher in BE than C$_2$, which is not the case in either the Koopmans' or $\Delta$SCF gas phase calculations, and this is most certainly not immediately intuitive. However, based on the results presented so far, this is a consequence of a complex interplay between the protonation of the N atoms, the influence of the aromatic ring, and the intermolecular interactions of both the NH\textsubscript{3}\textsuperscript{+} and NH groups in His. As will be discussed in more detail in the following section, previous experimental work by Stevens \textit{et al.}\ assigned the chemical states present closely to the results we find for the Koopmans' approach \cite{Stevens2013}.\\
To summarise the observations made to this point, the molecular subspecies approach is invaluable to rationalise and discuss the complex relative BE changes observed in the amino acids. A fascinating, if somewhat subjective, result from the combination of experiment and theory and the exploration of the molecular subspecies is that chemical intuition and the experience of a spectroscopist usually reflects the results given by Koopmans' theorem. The additional rearrangement of BE positions observed in $\Delta$SCF is often surprising, resulting in our hypothesis that human brains are not best placed to compute final state effects ad hoc without the aid of DFT.\\
\subsection{Core Level Spectra of the Amino Acids}\label{sec:spectra}
Where experimental core level spectra exist in the literature, they are very similar to the data presented here, albeit often with lower energy resolution~\cite{Zhang2009ElectronicSpectroscopy,Clark1976,Zubavichus2004SoftStudy,Stevens2013}. The main difference is often found in the peak fits, including the number of peaks fitted and their relative BEs and intensities. The peak fits presented here are based on robust, physically justifiable line shapes, including FWHM and L/G ratio, with the number of peaks informed by theory where peak overlap makes this necessary.
In Fig.~\ref{fig:spectra} a Shirley-type background has been subtracted to aid comparison with theory, while the relative BEs are presented in Tab.~\ref{tab:bes} alongside the calculated values. Absolute BEs resulting from the peak fits are given in the Supplementary Information. It should be noted that adventitious carbon at around 285 eV is present in all samples as is expected for XPS of ex-situ prepared powders, leading to a slight deviation from expected relative intensities.\\
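For reference, a minimal sketch of the standard iterative Shirley construction is given below, in which the background at each point is proportional to the integrated peak intensity between that point and the end of the energy window; this is the textbook algorithm, not necessarily the exact implementation used by the Avantage package. The demo builds a Gaussian peak on an ideal Shirley-type step and recovers the background.

```python
import numpy as np

def shirley_background(y, tol=1e-8, max_iter=200):
    """Iterative Shirley background for a spectrum y on a uniform grid,
    anchored to the intensities at the two endpoints."""
    y = np.asarray(y, dtype=float)
    y0, y1 = y[0], y[-1]
    b = np.full_like(y, y1)
    for _ in range(max_iter):
        resid = y - b
        # integral of the background-subtracted signal from point i to the end
        cum = np.cumsum(resid[::-1])[::-1]
        b_new = y1 + (y0 - y1) * cum / cum[0]
        if np.max(np.abs(b_new - b)) < tol:
            return b_new
        b = b_new
    return b

# Demo: Gaussian peak on an ideal Shirley-type step (synthetic data)
x = np.linspace(0.0, 100.0, 401)
peak = np.exp(-0.5 * ((x - 50.0) / 5.0) ** 2)
tail = np.cumsum(peak[::-1])[::-1]
true_bg = 1.0 + tail / tail[0]      # steps from 2.0 down to ~1.0
spectrum = peak + true_bg
bg = shirley_background(spectrum)
signal = spectrum - bg              # background-subtracted spectrum
```

On this construction the iteration converges back to the known step, and the subtracted signal is the bare peak, as assumed when comparing experiment with the calculated BE positions.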
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\textwidth]{DSCF_pfit.png}
\caption{C and N~1\textit{s} core level spectra, with experiments depicted as black dots, experimental peak fits denoted as grey/black solid lines, and calculated BEs shown as coloured vertical lines. PBE0 calculations are omitted for N~1\textit{s} due to the similarity with PBE results. Calculated BEs have been aligned with the experimental spectra by aligning with respect to the lowest BE peak, taking the average calculated BE where appropriate. A Shirley-type background has been subtracted from all core level spectra to aid comparison with theory.\label{fig:spectra}}
\end{figure*}
\subsubsection{C~1\textit{s}}
Considering first the C~1\textit{s} BEs, it is clear that PBE0-calculated values agree more closely with experiment. The main discrepancy is that for PBE calculations the BE of C$'$ is much closer to C$^\alpha$ than for PBE0. In the worst case, Tyr, the difference between C$'$ and C$^\alpha$ is 1~eV smaller than for the experimental BEs, while the difference for PBE0-calculated BEs is much closer to experiment. This is clearly evident in Fig.~\ref{fig:spectra}. However, as previously discussed, the relative BEs of all C atoms other than C$'$ are in very similar positions relative to each other for both PBE and PBE0. As a result, when the calculated BEs are aligned with respect to the lowest BE peak as in Fig.~\ref{fig:spectra}, the only visible difference between PBE and PBE0 is in the position of C$'$. This is reflected in the mean absolute error (MAE) of the BEs between experiment and theory -- taking C$'$ as a reference, the MAE is 0.2~eV or less for PBE0, while in the worst case for PBE, Trp, this is much higher at 0.9~eV. If, however, the BEs are aligned with respect to the lowest BE peak, the PBE MAEs are similar to those for PBE0.\\
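The two alignment conventions can be sketched as follows, using the Ala PBE values from Tab.~\ref{tab:bes} purely for illustration: the MAE is computed once with C$'$ as the common reference and once after shifting the calculated BEs so that the lowest-BE state matches experiment.

```python
# Relative C 1s BEs for Ala (eV), ordered C', C_alpha, C_beta (Tab. 1)
pbe = [0.0, -1.7, -3.1]
exp = [0.0, -2.0, -3.3]

def mae(calc, ref):
    """Mean absolute error between calculated and reference BEs."""
    return sum(abs(c - r) for c, r in zip(calc, ref)) / len(calc)

def align_lowest(calc, ref):
    """Rigidly shift calculated BEs so the lowest-BE state matches."""
    shift = min(ref) - min(calc)
    return [c + shift for c in calc]

mae_ref_cprime = mae(pbe, exp)                  # C' as common reference
mae_aligned = mae(align_lowest(pbe, exp), exp)  # aligned to lowest BE peak
```

For these illustrative Ala numbers the aligned MAE (0.10~eV) is smaller than the C$'$-referenced one (0.17~eV), mirroring the improvement of PBE under the lowest-BE alignment discussed above.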
His is the only amino acid included here for which high resolution solid state spectra have previously been reported~\cite{Stevens2013}. As the work by Stevens \textit{et al.}\ includes detailed information on the peak fits and resulting peak positions, this can be directly compared with the present results. The peak assignments made in the Stevens work agree well with what the Koopmans' level of theory predicts for gas phase His, with C$'$, C$^\alpha$ and C$^\beta$ in good agreement with the present results. The main difference lies in the assignment of the subpeaks of the aromatic C atoms C$_2$, C$_4$ and C$_5$. C$_4$ and C$_5$ are assigned an intermediate BE between C$^\beta$ and C$^\alpha$ in the Stevens work, but based on the solid state $\Delta$SCF theory results presented here, it is clear that both overlap with C$^\beta$. Whilst C$_2$ is assigned the second highest BE in the previous work, it becomes clear that it actually lies below C$^\alpha$. The peak fits presented in Fig.~\ref{fig:spectra} take the theoretical results into account, and good agreement between the two is found. \\
In addition to the main photoionisation features, all C~1\textit{s} spectra include $\pi$--$\pi^*$ shake-up satellites at 6--7~eV above the main photoionisation peak at lowest BE, with relative intensities of $\leq$3\% compared to the aromatic contribution of the C~1\textit{s} core level. This is in good agreement with observations made for many conjugated systems, including early studies of Phe, Tyr and Trp by Clark \textit{et al.}~\cite{Clark1976}. The calculation of satellite features is challenging and they are not included in the theoretical calculations presented here, although we note that approaches based on both DFT and time-dependent DFT have been successfully employed for large molecules~\cite{Brena2004EquivalentPhthalocyanine,Gao2008}.\\
\subsubsection{N~1\textit{s}}
In contrast to C~1\textit{s}, where a considerable difference between PBE- and PBE0-calculated BE values is observed, the N~1\textit{s} BEs are not strongly affected by the functional. The calculated BEs are closer together than the experimental BEs; however, the MAEs are in line with those for C~1\textit{s}. Only Trp and His have more than one N atom, and therefore only these two will be discussed in detail in this section. The BEs for the molecular subspecies series as well as the gas phase amino acids are given in the Supplementary Information.\\
For Trp, a big change in the difference between the BEs for N$^1$ and N$^2$ is observed when going from Koopmans' to $\Delta$SCF for the gas phase calculations, but in both cases N$^2$ is at a higher BE, in agreement with calculations from Zhang \textit{et al.}~\cite{Zhang2009ElectronicSpectroscopy}. The order of the calculated BEs in (\textbf{25}) and (\textbf{26}) is also consistent with gas phase Trp. The calculations by Zhang \textit{et al.}\ also show a strong variation between conformers, particularly for N$^1$ which varies by up to 0.7~eV, which they attribute to differences in the nature of the internal hydrogen bonding present in a given conformer. In the solid phase the BE order of N$^1$ and N$^2$ flips compared to the gas phase, which is attributed to the presence of the zwitterion state in the solid phase and the resulting intermolecular interactions. To understand the differences in the BEs observed for N atoms with varying protonation further, subspecies molecules (\textbf{41})-(\textbf{43}) were calculated. In the Koopmans' approach the BEs are in the order N$^3$$>$N$^2$$>$N$^1$, whilst this is reversed in $\Delta$SCF. Both the ordering and values from the $\Delta$SCF approach agree very well with gas phase measurements from Cavell and Allison~\cite{Cavell1977SiteSpectroscopy}. Therefore, the observed flipping of N$^1$ and N$^2$ is most likely not solely caused by intermolecular interactions but also originates from intrinsic final state effects. The molecules (\textbf{45}) and (\textbf{46}) once again reinforce the observed influence of aromatic systems on the BEs. In particular, the aromatic aniline (\textbf{46}) molecule is more affected by $\Delta$SCF, with a relative change of 0.3 eV compared to Koopmans'. 
Furthermore, there is a large difference between the BEs of (\textbf{45}) and (\textbf{46}) -- 0.7~eV for Koopmans' and 0.9~eV for $\Delta$SCF, where again the $\Delta$SCF results are in good agreement with the difference of 0.6~eV measured by Cavell and Allison.\\
The calculated N 1\textit{s} BEs for Trp agree well with the experimentally observed values. In the experimental N 1\textit{s} spectrum of Trp a higher intensity of the peak assigned to N$^2$ relative to N$^1$ is observed. This deviation from the 1:1 ratio of the two N components has been reported previously~\cite{Clark1976},
and is most likely caused by a partial deprotonation of the NH\textsubscript{3}\textsuperscript{+} group at the surface of the powder sample.\\
The N~1\textit{s} BEs of His show a similar sensitivity to a range of factors as for Trp. Looking at the gas phase conformers, N$^2$ shows a consistently higher BE for (\textbf{31}) - (\textbf{33}) and all His conformers, for both Koopmans' and $\Delta$SCF results. However, the ordering of N$^1$ and N$^3$ changes between different conformers. Quantitatively, the BEs also vary significantly between Koopmans' and $\Delta$SCF, with N$^3$ typically being affected most strongly, although there do not appear to be any general trends. This again highlights the importance of taking final state effects into account. Furthermore, the trend in energies cannot be explained purely by considering protonation states, but is likely influenced by both aromaticity and interactions between the two N atoms in the ring. As with Trp, the solid state BEs are qualitatively different from the gas phase conformers, with N$^1$ now having the highest BE and N$^3$ having the lowest BE. The fact that N$^1$ has the highest BE agrees with the behaviour in Trp, and as for Trp the change between gas and solid state BEs is likely due to a combination of the zwitterionic nature of the amino acid in the solid state as well as the related intermolecular interactions.\\
Two previous experimental studies have reported N~1\textit{s} spectra for His. Feyer \textit{et al.}\ show N~1\textit{s} core level spectra comparable to those reported here, but are not able to resolve N$^1$ and N$^2$ in their analysis~\cite{Feyer2008TheCu110}. Stevens \textit{et al.}\ report BE values of 398.8~eV (N$^3$), 400.4~eV (N$^2$), and 401.4~eV (N$^1$) for His, which are in good agreement with our measurements and peak assignments, and both agree well with the calculated values.\\
\subsubsection{O~1\textit{s}}
To complete the set of core states present in the aromatic amino acids, the O~1\textit{s} spectra are presented in the Supplementary Information.
As with N~1\textit{s} BEs, the calculated values are not affected by the functional, and the MAE between theory and experiment is also in line with N~1\textit{s}.
However, overall, these spectra do not provide much additional information beyond what has been discussed based on the C and N~1\textit{s} results and only Tyr has more than one oxygen environment present in the solid state. In addition, O~1\textit{s} has an intrinsically high lifetime width and small magnitude of chemical shifts, which in combination with the presence of surface states, limits its usefulness for the study of amino acids.\\
\section{Conclusion}
This work presents the first detailed, systematic exploration of the core state energies of the four aromatic amino acids combining both high resolution XPS and state-of-the-art DFT. A $\Delta$SCF approach, which we have successfully developed and applied to simpler amino acids previously, is extended to amino acids with aromatic side chains and proves robust in predicting the core levels observed in XPS and all contributing local chemical environments. More than 20 additional molecular subspecies are calculated to aid in the discussion and interpretation of the amino acid core states and underpin the assignments made in experimental spectra. This approach provides further understanding and rationalisation of the often complicated and surprising changes in binding energies observed in the calculations for the solid state amino acids. This work substantially improves our understanding of the aromatic amino acids and gives crucial insights into their intra- and intermolecular structure. Furthermore, it reemphasises the need to combine theory with experiment in order to obtain an accurate and robust picture of the local chemistry and electronic structure and forms the basis for future work on conjugated molecular systems in general. \\
\section*{Acknowledgements}
AR acknowledges support from the Analytical Chemistry Trust Fund for her CAMS-UK Fellowship.
NKF acknowledges support from the Engineering and Physical Sciences Research Council (EP/L015277/1).
LER acknowledges support from an EPSRC Early Career Research Fellowship (EP/P033253/1) and the Thomas Young Centre under grant number TYC-101.
Calculations were performed on the Imperial College High Performance Computing Service and the ARCHER UK National Supercomputing Service.
\section*{References}
\bibliographystyle{iopart-num}
\section{Introduction}
Hash coding has become an essential method for similarity queries due to its advantages in computational efficiency and storage cost. A series of hash representation learning methods~\cite{wang2017survey,cao2017hashnet,jiang2017deep}, developed from locality-sensitive hashing, further improve hashing retrieval efficiency. However, hash learning needs to consider additional factors for multi-modal data.
In cross-modal hashing, instances from different modalities
need to be compared and ranked together, which means the Hamming distance between instances from different modalities must reflect their shared semantic relationship.
Thanks to the rapid development of representation learning, current cross-modal hashing methods often perform fine-tuning on the backbone encoding models with a joint-optimization alignment loss and a binary optimization strategy to obtain the final hashing model. High-quality encoding offered by the pre-trained models predominantly improves the cross-modal retrieval performance, especially the unsupervised performance.
In recent years, unsupervised models, such as UnifiedVSE~\cite{wu2019unified}, VL-BERT~\cite{su2019vl}, and VLP~\cite{zhou2020unified}, have made breakthroughs on cross-modal retrieval tasks. In this paper, our hash learning targets a more general unsupervised cross-modal scenario~\cite{su2019deep,liu2020joint,zhang2021high}, where hash models are learned to map multimodal semantic feature vectors to a binary space so that hash learning can improve the performance of multimedia data management and retrieval in a broad manner.
Meanwhile, our research focuses on the representative task of image-text cross-modal retrieval.
However, as our method is not limited to feature extraction backbones, it can be extended to other cross-modal retrieval between/among other modalities.
\begin{comment}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig1.pdf}
\caption{A simple example of an image-text cross-modal retrieval system. When an image is received as a query, the system returns the corresponding textual description from the database based on the hamming retrieval and vice versa.
}
\vspace{-0.25in}
\label{fig:fig1}
\end{figure}
\end{comment}
According to their optimization objectives, existing unsupervised cross-modal hashing methods can be categorized into alignment methods~\cite{zhang2018unsupervised} and reconstruction methods~\cite{su2019deep,hu2020creating,yu2021deep}.
Both classes of methods lack reliable cross-modal similarity as supervision during training~\cite{zhang2021high}. Although some in-depth studies~\cite{yu2021deep,zhang2021high} have been conducted on constructing more accurate cross-modal uniform metrics, they ignore the fact that the final goal of hashing is to obtain a plausible cross-modal metric, which is difficult to achieve from a simple combination of raw multi-modal similarity distributions.
To exploit the inter- and intra-modal semantic relationships, existing unsupervised hashing methods have widely adopted graph-based paradigms to regularize similarity relationships. However, existing methods often suffer from the ``static graph'' problem~\cite{shen2020auto}. More specifically, they often employ features from original data or pre-trained models to build the explicit precomputed graphs. However, those pre-defined graphs cannot be adaptively learned during the training to better express the semantic structures.
Different from the reliable label-based static association graphs in supervised methods~\cite{chen2021local,liu2021graph}, the ``static graph'' constructed from raw-data measurements in unsupervised methods~\cite{liu2020joint,yu2021deep} actually introduces the bias inherent in the original feature measurement.
As there is no reliable static optimization objective in unsupervised cross-modal hash learning, the difficulty in improving unsupervised performance lies in how to smoothly refine the optimization objective and reduce the prior noise during training. It has been confirmed in existing work~\cite{caron2018deep} that a deep representation learning model with prior knowledge can be trained on intermediate results and iteratively converge. In recent research, mainstream unsupervised methods~\cite{he2020momentum} provide excellent examples through the optimization of memory banks and their variants. However, memory-bank-based methods require massive training data, which is difficult to obtain for general application scenarios.
We aim to refine the optimization target (the ``static graph'') during learning to mine more information under unsupervised conditions. To ensure smooth convergence of training, we do not directly adopt the continuous similarity between intermediate results, but propose a two-stage correlational relationship mining strategy to obtain more reliable prior information. This relationship also serves as additional pseudo-supervision to resolve the prior noise in the original static graph. In brief, in this paper, we propose a novel adaptive hash learning framework based on adaptive structural similarity preservation to overcome the above problem from two aspects.
Firstly, we design an adaptive correlation mining scheme that provides reliable positive samples as supervision while smoothly updating the correlational relationship set during training to enrich the semantic correlations.
Secondly, we introduce a structural semantic representation and an asymmetric binary code learning scheme together to preserve the semantic similarity in the final hash space.
In summary, this paper makes a three-fold contribution.
First, we propose a novel adaptive cross-modal correlation mining and structural semantic maintenance strategy. The optimization target is updated indirectly by adaptively expanding the semantic-similar neighborhood to solve the ``static graph'' problem and maintain a smooth convergence of the optimization process.
Though traditional unsupervised learning methods are strictly limited by static constraints, our method does not rely on expensive calculation and data resources in contrastive learning and can be widely adopted in hash retrieval optimization.
Second, we introduce a novel cross-modal semantic preservation framework as the backbone of our hash learning to bridge the modality gap. Specifically, we learn common hash representations through multi-level structural semantic consistency constraints, which serve as the basis of adaptive learning. In addition, we design an asymmetric similarity-preserving binary optimization algorithm to reduce the information loss after binarization.
Last but not least, we perform extensive experimental studies on two publicly available datasets, NUS-WIDE and MIRFlickr-25K, and our approach significantly outperforms state-of-the-art methods proposed in the last three years.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{Fig2.pdf}
\vspace{-0.1in}
\caption{An overview of our Adaptive Structural Similarity Preserving Hash (ASSPH) model.}
\label{fig:assph}
\vspace{-0.2in}
\end{figure*}
\section{Related Works}
Hashing learning~\cite{wang2015learning,wang2017survey} in uni-modal and cross-modal datasets aims to learn a hashing function to map the original data to a binary hashing space while the hashing metric is expected to preserve the semantic similarities between original multimedia data.
For cross-modal hashing, the additional modality gives new opportunities to describe and dig up the correlation between/among modalities, and thus, the contrastive-based methods can perform well. However, due to the distribution difference in various data modalities, the structural-semantic difference, called the \emph{semantic gap}, impedes the construction of semantic mapping. As labels play an important role in existing cross-modal learning to determine whether two instances are similar, related works in the past decade can be categorized into the following two classes, i.e., \emph{supervised hashing} and \emph{unsupervised hashing}.
\vspace{0.03in}
\noindent
\textbf{Supervised Cross-Modal Hashing.}
Early supervised cross-modal hashing methods, e.g., Semantic Correlation Maximization (SCM)~\cite{zhang2014large} and Semantics-Preserving Hashing (SePH)~\cite{lin2015semantics}, perform the reconstruction via different approaches, including semantic similarity matrix preserving and KL-divergence based correlation measurement.
Recent works~\cite{wang2017adversarial,jiang2017deep,hu2018deep,li2018self} start using the deep neural network and semantic embeddings provided by pre-trained models.
A subsequent series of works~\cite{liu2018fast, chen2019two, wang2020label,liu2019mtfh,meng2020asymmetric, yang2021rethinking} explore the relationships of semantic tags and latent semantics with different conditions and improve the hashing performance.
Supervised methods achieve good performance by using labels as supervision information to align different modalities; however, due to the high cost of labeling, they cannot be widely used. On the contrary, unsupervised methods, to be reviewed next, are not limited by labels and thus can be applied to a wider range of applications.
\vspace{0.02in}
\noindent
\textbf{Unsupervised Cross-Modal Hashing.}
Unsupervised cross-modal hashing methods~\cite{hardoon2004canonical,long2016composite,weiss2008spectral,ding2016large} aim to maximize the correlation between different modals or to preserve the inter-modal and intra-modal similarity, like Canonical Correlation Analysis (CCA)~\cite{hardoon2004canonical} and Composite Correlation Quantization (CCQ)~\cite{long2016composite}. With the help of deep representation learning, some recent methods~\cite{lee2013pseudo,caron2018deep} can describe inter-modal correlations more precisely and perform well on semantic alignment.
According to their optimization objectives, recent unsupervised hashing methods can be broadly divided into alignment methods and reconstruction methods.
To overcome the lack of semantic supervision under unsupervised conditions, the first category usually utilizes some data-mining methods to describe the correlational relationship (pairwise or triplet relationship), a measurement of the similarity or semantic difference between two cross-modal instances~\cite{hu2017pseudo,zhu2016unsupervised,he2017unsupervised,wu2018unsupervised,zhang2018unsupervised,wang2020set,wang2021cluster}. Some contrastive-learning-based frameworks are introduced to learn the hash representation. Some works~\cite{gu2018look,li2019coupled} even try to use the generative models~\cite{mirza2014conditional} to describe the distribution of each modality and then make an alignment while introducing additional overhead during the hash coding.
The second category adopts the semantic reconstruction~\cite{su2019deep, yang2020deep, liu2020joint, yu2021deep,zhang2021high}, which considers the similarity as the exact distance value in the hash space instead of relevance similarity or probability. While keeping the semantic metric in the hash space directly, they are all constrained by the noise arising in the feature extraction. Some works~\cite{wang2020set,zhang2021high} use a two-stage optimization and independent codebook to update the similarity guideline simultaneously, e.g., UKD~\cite{hu2020creating} proposes a two-stage data distillation framework to the cross-modal hashing learning and demonstrates the effectiveness of the cross-modal hidden embeddings to optimize the following cross-modal hashing network further.
\section{Problem Formulation}
This paper focuses on cross-modal retrieval between image and text modalities. Let $O_i = \{v_{i},t_{i}\}$, $i \in \{1, 2, \cdots, M\}$, denote the $i$-th cross-modal instance, where $M$ is the total number of instances, $v_{i}\in \mathbb{R}^{d_{v}}$ and $t_{i}\in \mathbb{R}^{d_{t}}$ are the visual and text features of the pairwise instance, and $d_{v}$ and $d_{t}$ are the dimensions of the image and text features.
Given the cross-modal set $O$ with the corresponding image feature set $I=\{v_{i}\}_{i=1}^{M}$ and text feature set $T=\{t_{i}\}_{i=1}^{M}$, we want to learn hash functions $f_{I}(v,\theta ^{I})$ and $f_{T}(t,\theta ^{T})$ to generate hash codes $B^{I}, B^{T}\in \{-1,+1\}_{}^{M\times K}$
for the image and text modalities respectively. Here, $K$ is the length of the hash code, and $\theta ^{I}$ and $\theta ^{T}$ are the parameters of ImgNet and TxtNet to be optimized. In addition, the similarity between hash codes $b_{i}^{I}$ and $b_{j}^{T}$, i.e., their Hamming distance $D(b_{i}^{I},b_{j}^{T})=\frac{1}{2}(K-\left \langle b_{i}^{I},b_{j}^{T} \right \rangle)$, should reflect the semantic similarity between the original instances $v_{i}$ and $t_{j}$.
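The identity between the Hamming distance and the inner product of $\{-1,+1\}$ codes can be checked with a minimal NumPy sketch (the function name and toy codes are illustrative, not part of the model):

```python
import numpy as np

def hamming_distance(b_i, b_j):
    # D(b_i, b_j) = (K - <b_i, b_j>) / 2 for codes in {-1, +1}^K
    K = len(b_i)
    return 0.5 * (K - np.dot(b_i, b_j))

b_i = np.array([1, -1, 1, 1])
b_j = np.array([1, 1, -1, 1])
# the two codes differ in exactly two positions
assert hamming_distance(b_i, b_j) == 2
assert hamming_distance(b_i, b_j) == np.sum(b_i != b_j)
```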
\section{Proposed Approach}
Despite recent significant breakthroughs in unsupervised work based on contrastive learning~\cite{he2020momentum,chen2020simple,grill2020bootstrap,chen2021exploring}, it is still challenging to apply this learning paradigm in many scenarios with limited data and computational resources. The main problem is that it is difficult to obtain enough stable intermediate representations as supervision to optimize the model smoothly during the learning process. Existing unsupervised cross-modal hashing methods either directly use the metric relationships within the original features as the optimization objective or use data distillation~\cite{hu2020creating} to divide the training process into multiple stages. However, these approaches amplify the interference of noise in the original features, which limits the performance of unsupervised hashing.
In this paper, we propose the \emph{Adaptive Structural Similarity Preserving Hash (ASSPH)} model as a solution to the problem formulated above. ASSPH designs a comprehensive structural semantic retention strategy with a new positive sample expansion technique to stabilize the training process while smoothing the updates of the optimization target to enrich the cross-modal semantics~\cite{chang2017deep,caron2018deep}. In addition, ASSPH introduces an asymmetric deep hash learning method to further reduce the quantization loss and to preserve the structural semantics in the hashing space, improving retrieval performance. ASSPH consists of four major components, as shown in Figure~\ref{fig:assph}, and Algorithm~\ref{alg:TEmax} provides the pseudocode of the proposed hash learning method. In the rest of this section, we detail the four components.
\subsection{Feature Extraction}
Feature extraction is an independent component in ASSPH. We follow the setup of existing work and use AlexNet and Bag-of-Words for extracting image feature $F^{I}$ and text feature $F^{T}$ respectively. Switching to a more efficient model like VGG or ResNet does improve the final result, but it is not the main focus of this work. ASSPH supports different feature extraction models in order to support a wider range of application scenarios.
\subsection{Structural Semantic Similarity Construction}
Once the original semantic features $F^{I}$ and $F^{T}$ of the cross-modal training set $O=\{(v_{i}, t_{i})\}_{1}^{M}$ are obtained, ASSPH constructs their structural similarities. It first transforms the original semantic metric into a similarity probability based on the metric distribution and practical application implications; it then defines the joint-modal similarity and structural semantic similarity between pairs of data respectively; it finally combines these two similarities together to form the structural semantic similarity metric.
To be more specific, we firstly follow the previous works~\cite{hu2018deep} to calculate the pair-wise modality-specific cosine metrics within different modals, $S^{*}=\{s_{ij}^{*}=\cos(f_{i}^{*},f_{j}^{*})\}_{i,j=1}^{M}$ to capture pair-wise semantic similarities between instances within a single modality. Note, in this paper, $* \in \{I, T\}$ represents either image or text modality.
As we find that the distribution shape of $S^{*}$ is approximately a skewed normal distribution, we map $S^{*}$ from $[-1,1]$ to the interval $[0,1]$, denoting the result as $S^{*}_p=(S^{*}+1)/2$. $S^{*}_p$ is regarded as the probability that two instances in a single modality (i.e., $v_i$ and $v_j$ in the image modality or $t_i$ and $t_j$ in the text modality) are semantically similar. Assuming the two modalities are independent, we then obtain the cross-modal similarity $S_{fusion}$ by adding the two probabilities and subtracting their common part.
\begin{equation}
S_{fusion} = S_{p}^{I} + S_{p}^{T} - S_{p}^{I} \cdot S_{p}^{T}
\label{equ:s_fusion}
\end{equation}
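As a small illustration (the variable names and toy matrices are ours, not the paper's), the probabilistic union in Eq.~(\ref{equ:s_fusion}) is computed elementwise:

```python
import numpy as np

def fusion_similarity(S_p_img, S_p_txt):
    # P(similar) = P_img + P_txt - P_img * P_txt, assuming the two
    # modality-wise similarity "probabilities" are independent
    return S_p_img + S_p_txt - S_p_img * S_p_txt

S_p_img = np.array([[1.0, 0.8], [0.8, 1.0]])
S_p_txt = np.array([[1.0, 0.5], [0.5, 1.0]])
S_fusion = fusion_similarity(S_p_img, S_p_txt)
# 0.8 + 0.5 - 0.8 * 0.5 = 0.9
assert np.isclose(S_fusion[0, 1], 0.9)
```

Note that the fusion is monotone in both inputs: a pair judged similar by either modality keeps a high fused probability.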
To define the structural semantics, existing works~\cite{su2019deep,liu2020joint} directly compare the neighbor distance distributions between instances under the fusion similarity as the structural similarity. However, we observe that when the fusion similarity value is small, its accuracy as a metric drops correspondingly. Take the fusion similarity constructed on MIRFlickr as an example: MAP@50 is around 0.9 while MAP@all decreases to about 0.75. To tackle this issue, we incorporate two optimizations. Firstly, we keep only the top $K_{S}$ nearest distances under the fusion similarity $S_{fusion}$ and remove the remaining distances, which are much noisier and less useful. In our implementation, $NN_{K_S}(i)$ denotes the top-$K_{S}$ nearest neighbors of an instance $i$.
Secondly, we reweight the distances between the remaining instances based on their original cosine distance to reduce the impact of uneven density.
That is to say, if $j\in NN_{K_S}(i)$,
$\hat{S}(i,j)=\frac{S_{fusion}(i,j)}{ {\textstyle \sum_{k \in NN_{K_S}(i)}^{} S_{fusion}(i,k)} }$; otherwise, $\hat{S}(i,j)=0$.
The structural similarity is defined in Eq.~(\ref{equ:s_str}).
Here, $K_{S}$ is used again to rebalance the data range; $\hat{S}^{'}$ denotes the matrix transpose of $\hat{S}$, and we follow this notation in the rest of the paper.
\begin{equation}
S_{structure} = K_{S} \hat{S} \times \hat{S}^{'} \label{equ:s_str}
\end{equation}
Based on the above definitions, we define the structural semantic similarity $S$ by combining the cross-modal similarity and the structural similarity into a multi-level semantic metric that fully represents the structural semantic information. We finally map $S^{'}$ back to the range of the cosine metric, i.e., $S=2S^{'}-1$.
\begin{equation}
S^{'}=(1-\gamma)S_{fusion} + \gamma S_{structure}, \gamma \in [0,1]
\label{S_CAL}
\end{equation}
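Under our reading of Eqs.~(\ref{equ:s_fusion})--(\ref{S_CAL}), the whole construction can be sketched as follows (the helper name and the toy input are illustrative assumptions):

```python
import numpy as np

def structural_semantic_similarity(S_fusion, K_S, gamma):
    M = S_fusion.shape[0]
    # keep only the K_S largest fusion similarities per row ...
    S_hat = np.zeros_like(S_fusion)
    for i in range(M):
        nn = np.argsort(-S_fusion[i])[:K_S]
        S_hat[i, nn] = S_fusion[i, nn]
    # ... and re-weight them so that each row sums to one
    S_hat /= S_hat.sum(axis=1, keepdims=True)
    S_structure = K_S * S_hat @ S_hat.T                 # Eq. (\ref{equ:s_str})
    S_prime = (1 - gamma) * S_fusion + gamma * S_structure
    return 2 * S_prime - 1                              # back to the cosine range

S_fusion = np.array([[1.0, 0.8, 0.2],
                     [0.8, 1.0, 0.3],
                     [0.2, 0.3, 1.0]])
S = structural_semantic_similarity(S_fusion, K_S=2, gamma=0.5)
assert S.shape == (3, 3) and np.allclose(S, S.T)
```

Because $\hat{S}\hat{S}^{'}$ only accumulates weight over shared top-$K_S$ neighbors, the structural term is symmetric by construction.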
\vspace{-0.03in}
\subsection{Correlational Relationship Mining}
Our ASSPH model proposes to construct a set of correlational relationships as the positive samples to further guide the pair-wise cross-modal hashing alignment. In addition to correcting bias in static structural semantic similarity, we introduce the correlational relationships because of the following considerations.
On the one hand, limited by the similarity distribution between the original data, the existing static similarity measures~\cite{su2019deep,liu2020joint,yu2021deep} tend to have poor local consistency. That is, most of the measurement results are in the insensitive range, which makes it challenging to effectively map correlated instances into adjacent hash codes.
On the other hand, adjusting the whole continuous similarity metric space directly can have uncontrollable effects on the training process and lead to training collapse, especially when the training data is not large enough.
Inspired by DeepCluster~\cite{caron2018deep}, we try to solve this problem by revising the discrete constraints. We introduce a new discrete training target here, which can use the knowledge learned by the model during each training step to further strengthen the local consistency of the hash representation and ensure that the training process can eventually converge smoothly.
To be more specific, we define a strict correlation discrimination scheme based on common adjacency relations within and across modalities. We initialize the correlated instance set from the original semantic features, and then adaptively expand the set based on the currently learned cross-modal hidden-layer representations during training.
\vspace{0.03in}
\noindent
\textbf{Initialization of the Correlational Relationship.}
We introduce a two-stage positive sample expansion strategy to initialize the binary-valued correlational relationship set defined as $R=\{r_{ij} \in \{0, 1\}\}_{i,j=1}^{M}$. In the first stage, we perform proximity instance mining based on $S^{I}$ and $S^{T}$ (the pair-wise modality-specific cosine metrics introduced previously) respectively. To be more specific, for $j\in NN_{K_R}(i)$,
$R_{1}^{*}[i,j]=1$; otherwise, $R_{1}^{*}[i,j]=0$.
Note $K_{R}$ controls the range of neighborhood. To preserve the semantic consistency of proximity instances, $K_{R}$ will be much smaller than $K_{S}$ and it will be dynamically expanded in adaptive similarity learning to be detailed next.
However, the KNN-based results generated in the above stage still contain many errors, e.g., the neighboring instances of marginal data points or outliers often have different semantics. Therefore, we further define the second-order proximity relation $R_{2}^{*}$ to measure the similarity between instances by their shared neighbors. If $R_{1}^{*} \times R_{1}^{*'} \ge \tau $, $R_{2}^{*}=1$; otherwise, $R_{2}^{*}=0$.
Here, $\tau$ is a similarity threshold, and it is set to 1 in practice to ensure better results.
Similarly, we can define the cross-modal correlation $R_{2}^{cross}$. If max($R_{1}^{I} \times R_{1}^{T'},R_{1}^{T} \times R_{1}^{I'}) \ge \tau$,
$R_{2}^{cross}=1$; otherwise, $R_{2}^{cross}=0$.
Finally, Eq.~(\ref{R_CAL}) defines the binary-valued correlational relationship $R$.
\begin{equation}
R=R_{2}^{I} \cup R_{2}^{T} \cup R_{2}^{cross} \label{R_CAL}
\end{equation}
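Our reading of the two-stage construction can be sketched as follows (a toy illustration with assumed helper names; $\tau=1$ as in the paper):

```python
import numpy as np

def knn_adjacency(S, K_R):
    # first-order relation R1: mark the K_R nearest neighbours under S
    M = S.shape[0]
    R1 = np.zeros((M, M), dtype=int)
    for i in range(M):
        R1[i, np.argsort(-S[i])[:K_R]] = 1
    return R1

def correlational_relationship(S_img, S_txt, K_R, tau=1):
    R1_I, R1_T = knn_adjacency(S_img, K_R), knn_adjacency(S_txt, K_R)
    # second-order proximity: related if >= tau neighbours are shared
    R2_I = (R1_I @ R1_I.T >= tau).astype(int)
    R2_T = (R1_T @ R1_T.T >= tau).astype(int)
    R2_cross = (np.maximum(R1_I @ R1_T.T, R1_T @ R1_I.T) >= tau).astype(int)
    return np.maximum.reduce([R2_I, R2_T, R2_cross])   # union, Eq. (\ref{R_CAL})

S_img = np.array([[1.0, 0.9, 0.1], [0.9, 1.0, 0.2], [0.1, 0.2, 1.0]])
S_txt = np.array([[1.0, 0.7, 0.2], [0.7, 1.0, 0.1], [0.2, 0.1, 1.0]])
R = correlational_relationship(S_img, S_txt, K_R=2)
assert np.all(np.diag(R) == 1) and np.array_equal(R, R.T)
```

Since each matrix product $R_{1}R_{1}^{'}$ is symmetric and every instance is its own nearest neighbor, $R$ is symmetric with an all-ones diagonal.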
\vspace{0.03in}
\noindent
\textbf{Adaptive Similarity Learning.} Adaptive learning aims to keep the optimization target consistent~\cite{he2020momentum} to avoid training collapse due to convergence to a trivial solution~\cite{grill2020bootstrap}. To maintain stable iterations during the training process under the conditions of limited training batches and small datasets, we adopt a clear and effective expansion strategy based on the correlational relationship defined above.
We update the correlational relationship once per epoch. After the $e^{th}$ training round, we obtain hidden embeddings containing cross-modal semantics from the latest hash functions, i.e., $H^{I}=ImgNet(F^{I},\theta_{e}^{I})$ and $H^{T}=TxtNet(F^{T},\theta_{e}^{T})$.
Then, we replace the original features with the hidden features of the current hash layer and use the two-stage correlation mining strategy presented above to obtain new cross-modal relationships $R_{e}$. Finally, we union it with the current $R_{e-1}$, as stated in Eq.~(\ref{R_UPDATE}).
\begin{equation}
R_{e}=R_{e-1} \cup R(H^{I}, H^{T}) \label{R_UPDATE}
\end{equation}
To ensure convergence, we adopt a lower learning rate while strictly limiting the size of $K_{R}$ to control the reliability of the positive correlations. Subsequent experiments demonstrate that our correlation mining process further enriches the semantic correlation distribution while maintaining convergence. Due to structural semantic construction, our method requires $O(M^2)$ complexity for training, where $M$ represents the training set size. We argue that it is acceptable in most cases because of limited training set size.
\subsection{Cross-Modal Hashing Learning}
The key point of hash learning is to ensure that the distance relationship between hash codes can reflect the semantic similarity of original data instances. However, it is challenging to optimize binary representations in gradient backpropagation. Therefore, in our cross-modal hash learning module, we first design a set of cross-modal alignment loss functions to optimize the continuous hash hidden layer activated by the $tanh()$ function, and then introduce an asymmetric binary optimization approach to improve the semantic expression ability of the binary hashing representation.
\vspace{0.03in}
\noindent
\textbf{Structural Semantic Similarity Preservation.} We adopt two independent networks $ImgNet(F^{I}$, $\theta ^{I})$ and $TxtNet(F^{T}, \theta ^{T})$ to get the continuous cross-modal representations. Take the hidden features $H^{I}$ from the $ImgNet()$ and $H^{T}$ from the $TxtNet()$ as input, we design a multi-task learning paradigm to jointly learn the hash code to maintain the semantics within the original cross-modal data. The optimization task consists of three parts: structural semantic reconstruction, correlational relationship preservation, and semantic alignment.
First, we follow the similarity preservation method in existing unsupervised learning approaches. Existing cross-modal unsupervised methods can be classified into contrastive learning, autoencoders, and similarity preservation, according to the category of the loss function used. Among them, similarity preservation has the lowest dependence on training batch size, which benefits scenarios with limited computational resources. Therefore, we first define a similarity preserving loss in Eq.~(\ref{equ:L_sr}). That is, both intra-modal and inter-modal metric results should be consistent with a pre-defined structural similarity metric.
\begin{equation}
\begin{split}
\mathcal{L}_{sr}(H^{I}, H^{T})&=\left \| S-cos(H^{I},H^{T}) \right \|_{F}^{2} + \left \| S-cos(H^{I},H^{I}) \right \|_{F}^{2} \\ &+
\left \| S-cos(H^{T},H^{T}) \right \|_{F}^{2} \label{equ:L_sr}
\end{split}
\end{equation}
Second, we want to achieve symmetric consistency,
i.e., the intra-modal and inter-modal similarities should agree with each other. Therefore, we construct the semantic alignment loss in Eq.~(\ref{LOSS2}).
\begin{equation}
\begin{aligned}
\label{LOSS2}
\mathcal{L}_{sa}(H^{I}, H^{T})&=\left \| cos(H^{I},H^{I})-cos(H^{T},H^{T}) \right \|_{F}^{2} \\& + \left \| cos(H^{I},H^{T}) - cos(H^{I},H^{I}) \right \|_{F}^{2} \\&+
\left \| cos(H^{I},H^{T})-cos(H^{T},H^{T}) \right \|_{F}^{2}
\end{aligned}
\end{equation}
As the above two optimization goals are both
based on the static graph and hence affected by its prior bias, we introduce the set of correlational relationships $R$
to rectify the training process: the distance between related instances shall be as small as possible. Meanwhile, considering the massive difference between the numbers of positive and negative instance pairs, we introduce a re-balance coefficient $\beta$ to optimize the performance. The semantic consistency loss is defined as follows:
\begin{equation}
\begin{split}
\label{LOSS3}
\mathcal{L}_{cp}(H^{I}, H^{T})=\left \| cos(H^{I},H^{T})\cdot R - \beta R \right \|_{F}^{2}
\end{split}
\end{equation}
Eq.~(\ref{equ:L}) defines the final training objective of our ASSPH, where $\mu_{1}$ and $\mu_{2}$ are the trade-off parameters to balance different optimization goals.
\begin{equation}
\min\nolimits_{\theta^{I},\theta^{T}}\mathcal{L}=\mathcal{L}_{sr} + \mu _{1}\mathcal{L}_{cp} + \mu _{2}\mathcal{L}_{sa} \label{equ:L}
\end{equation}
\vspace{0.03in}
\noindent
\textbf{Asymmetric Similarity-Preserving Binary Optimization.} As our goal is to optimize the semantic loss $\mathcal{L}(B^{I},B^{T})$ between binary hash codes, we design an asymmetric binary optimization to constrain the hidden semantic space and the binary hashing space together. Traditional work mainly optimizes hash coding by reducing the quantization loss from real values to binary values. Still, gradient-descent-based algorithms in symmetric hashing learning will force pair-wise instances to be updated towards the same direction and, in the worst case, switch the sign of the hash code~\cite{huang2019accelerate}.
Inspired by the asymmetric hashing methods~\cite{zhang2019scalable,meng2020asymmetric}, we propose a two-stage asymmetric binary similarity optimization strategy and stabilize the hash learning by optimizing one side of the sub-network at each stage. Specifically, we optimize the continuous coding of one modality with the help of the binary coding of the other modality using $\mathcal{L}(H^{I},B^{T})$ and $\mathcal{L}(B^{I},H^{T})$, which also ensures the consistency of both the structure and the semantics of the hash space.
Besides, we also follow the previous approximation strategy~\cite{liu2020joint} to reduce the quantization loss by taking advantage of the convergence of $\tanh()$ to $sgn()$. We replace the activation function of the final layer with $\tanh()$ and adopt the strategy $h_{i} = \tanh(\eta x_{i})$, $\eta \to +\infty$, where $\eta$ gradually increases with the training epoch.
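The annealed activation can be illustrated in a couple of lines (the scheduling of $\eta$ across epochs is assumed, not prescribed here):

```python
import numpy as np

def scaled_tanh(x, eta):
    # tanh(eta * x) approaches sgn(x) as eta grows,
    # while remaining differentiable early in training
    return np.tanh(eta * x)

x = np.array([-0.4, 0.05, 0.7])
# for large eta the activation is numerically indistinguishable from sgn(x)
assert np.allclose(scaled_tanh(x, 1000.0), np.sign(x))
```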
\begin{algorithm}[tb]
\LinesNumbered
\KwIn{Training set $\{I_{k}, T_{k}\}_{k=1}^{M}$ and their corresponding features $F^{I}$ and $F^{T}$; batch size $m$; training epochs $E$; $NN_{K_R}$ scale $K_R$ and $NN_{K_S}$ scale $K_S$.}
\KwOut{Hashing function $ImgNet(F^{I},\theta^{I})$ and $TxtNet(F^{T},\theta^{T})$}
Initialize $\theta^{I}$, $\theta^{T}$, epoch $e \gets 0$, $S\gets$ Eq.~\eqref{S_CAL}, $R\gets$ Eq.~\eqref{R_CAL};\\
\For{each $e \in [1, E]$}
{
\For {$\left \lfloor \frac{M}{m} \right \rfloor iterations$}
{
Sample $m$ instances $\{f_{k}^{I}, f_{k}^{T}\}_{k=1}^{m}$ from training set;\\
Calculate the representations $H_{k}^{I}\gets ImgNet(f_{k}^{I},\theta^{I})$, $H_{k}^{T}\gets TxtNet(f_{k}^{T},\theta^{T})$ and get the corresponding $S(H^{I}, H^{T})$;\\
Update $\theta^{I}$ and $\theta^{T}$ using $\mathcal{L}(H^{I},H^{T})$;\\
Calculate $B^{I}\gets sgn(H^{I})$ and $B^{T}\gets sgn(H^{T})$;\\
Update $\theta^{I}$/$\theta^{T}$ using $\mathcal{L}(H^{I},B^{T})$/$\mathcal{L}(B^{I},H^{T})$;\\
}
Update $R_{e}$ with Eq.~\eqref{R_UPDATE};\\
}
Return $ImgNet(F^{I},\theta^{I})$ and $TxtNet(F^{T},\theta^{T})$;\\
\caption{Adaptive Structural Sim. Preserving Hashing}
\label{alg:TEmax}
\end{algorithm}
\begin{figure*}[ht]
\centering
\subfigure[I2T on NUS-WIDE]{
\label{Fig.pr.1}
\includegraphics[width=0.24\textwidth]{PRI2TonNUSWIDE.pdf}}
\subfigure[T2I on NUS-WIDE]{
\label{Fig.pr.2}
\includegraphics[width=0.24\textwidth]{PRT2IonNUSWIDE.pdf}}
\subfigure[I2T on MIRFlickr-25K]{
\label{Fig.pr.3}
\includegraphics[width=0.24\textwidth]{PRI2TonMIRFlickr.pdf}}
\subfigure[T2I on MIRFlickr-25K]{
\label{Fig.pr.4}
\includegraphics[width=0.24\textwidth]{PRT2IonMIRFlickr.pdf}} \\
\vspace{-0.15in}
\subfigure[I2T on NUS-WIDE]{
\label{Fig.topk.1}
\includegraphics[width=0.24\textwidth]{TopKI2TonNUSWIDE.pdf}}
\subfigure[T2I on NUS-WIDE]{
\label{Fig.topk.2}
\includegraphics[width=0.24\textwidth]{TopKT2IonNUSWIDE.pdf}}
\subfigure[I2T on MIRFlickr-25K]{
\label{Fig.topk.3}
\includegraphics[width=0.24\textwidth]{TopKI2TonMIRFlickr.pdf}}
\subfigure[T2I on MIRFlickr-25K]{
\label{Fig.topk.4}
\includegraphics[width=0.24\textwidth]{TopKT2IonMIRFlickr.pdf}}
\vspace{-0.1in}
\caption{The precision-recall and Top-$k$ precision curves on NUS-WIDE and MIRFlickr-25K datasets with 64-bit hash codes.
}
\vspace{-0.1in}
\label{Fig.pr}
\end{figure*}
\section{Experimental Evaluation}
\subsection{Experiment Setup}
\vspace{0.03in}
\noindent
\textbf{Datasets.} Two commonly used multi-label image-text cross-modal datasets, NUS-WIDE and MIRFlickr-25K, are chosen for our experiments. As an unsupervised method, we do not use the label information during training; labels serve only as ground truth when verifying retrieval performance.
\emph{NUS-WIDE}~\cite{chua2009nus} consists of 269,648 labeled multimodal instances, each containing an image and a text description in pairs. Following previous work~\cite{wu2018unsupervised, su2019deep,liu2020joint}, we keep the instances corresponding to the top-10 most frequent labeled classes, resulting in 186,577 instances.
\emph{MIRFlickr-25K}~\cite{huiskes2008mir} consists of 25,000 tagged multimodal instances collected from the Flickr website. Each instance contains an image and a corresponding text description, and they are divided into 24 categories in total. After removing the unlabeled instances, there are 20,015 instances left.
In both datasets, we follow previous work~\cite{wu2018unsupervised,su2019deep,liu2020joint} to select 2,000 random instances as the query set and use the remaining instances to form the database, which also includes a training set consisting of 5,000 instances.
Single-label datasets are a challenging scenario for our method, because the positive correlations between instances are much sparser than in the multi-label setting, which leads to more errors in the unsupervised construction of correlations. Nevertheless, our model still achieves some improvements over existing methods. We have evaluated ASSPH on the single-label Wiki dataset; please refer to the appendix for the details.
\vspace{0.03in}
\noindent
\textbf{Evaluation Protocol.}
We verify the accuracy of hash retrieval in the task of mutual image and text retrieval, i.e., using an image as a query and retrieving the text associated with it (I2T) and vice versa (T2I).
To evaluate the retrieval quality, we adopt three standard evaluation metrics, including Mean Average Precision (MAP), Precision-Recall Curve (PR-Curve), and Top-$k$-Precision.
At the same time, considering the actual application scenario, we also evaluate our ASSPH under MAP@50 and again please refer to the appendix for the detailed MAP@50 results.
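For concreteness, MAP can be computed as in the following sketch, where instances count as relevant if they share at least one label (the usual convention for these multi-label benchmarks); the optional cutoff $k$ yields MAP@$k$ such as MAP@50. Function and variable names are illustrative.

```python
import numpy as np

def mean_average_precision(scores, q_labels, db_labels, k=None):
    """MAP for cross-modal retrieval.
    scores: (Q, D) similarity between queries and database items.
    q_labels, db_labels: multi-hot label matrices; items sharing
    at least one label are counted as relevant."""
    relevant = (q_labels @ db_labels.T) > 0   # (Q, D) relevance mask
    order = np.argsort(-scores, axis=1)       # rank database by similarity
    if k is not None:
        order = order[:, :k]                  # MAP@k
    aps = []
    for rel, idx in zip(relevant, order):
        hits = rel[idx]                       # relevance in rank order
        if not hits.any():
            aps.append(0.0)
            continue
        cum = np.cumsum(hits)
        ranks = np.arange(1, len(hits) + 1)
        aps.append((cum[hits] / ranks[hits]).mean())
    return float(np.mean(aps))

# toy check: one query, items 0 and 2 relevant, ranked 1st and 3rd,
# so AP = (1/1 + 2/3) / 2 = 5/6
scores = np.array([[0.9, 0.5, 0.3]])
map_all = mean_average_precision(scores, np.array([[1]]), np.array([[1], [0], [1]]))
```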
\vspace{0.03in}
\noindent
\textbf{Baselines.} We compare against eight state-of-the-art deep hashing methods proposed in the last three years, including UGACH~\cite{zhang2018unsupervised}, UKD~\cite{hu2020creating}, SRCH~\cite{wang2020set}, DJSRH~\cite{su2019deep}, DSAH~\cite{yang2020deep}, JDSH~\cite{liu2020joint}, DGCPN~\cite{yu2021deep} and UCH~\cite{li2019coupled}.
\vspace{0.03in}
\noindent
\textbf{Implementation Details.}
In our implementation, images are represented by 4,096-dimensional features extracted from AlexNet and texts by 1,000-dimensional BoW features. For the hashing representation network, we directly use two nonlinear layers with a 4,096-dimensional hidden layer.
For the competitors, we adopt the original implementations released by the authors and follow their best settings. We also use the same image and text embeddings (AlexNet and BoW features) for fair comparison. The number of training epochs is uniformly set to 50 on both datasets, except for UGACH and UKD, which use their own settings. Our method and the competitors are implemented in Python 3.6 and trained on a single RTX 2080 Ti GPU.
Last but not least, we fix the batch size to 32 and use the SGD optimizer with momentum 0.9 and weight decay 0.0005; the learning rate is set to 0.001. For the other parameters of ASSPH, we perform cross-validation and finally select the following values (selection ranges in brackets): $K_{R}=50$ $[5,500]$, $K_{S}=2000$ $[1000,4000]$, $\mu_{1}=2$ $[0.5,5]$, $\mu_{2}=1$ $[0.5,5]$, $\beta=1.5$ $[1,3]$ and $\gamma=0.3$ $[0,1]$. We use the same parameters on both datasets.
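A minimal sketch of the two-layer hashing head described above. The text does not fix the activation functions, so ReLU for the hidden layer and $\tanh$ for the output (keeping relaxed codes in $(-1,1)$ before sign-quantization) are assumptions here, as are all names.

```python
import numpy as np

def hash_net(x, W1, b1, W2, b2):
    """Two-layer hashing head: feature -> 4096-d hidden -> relaxed code."""
    h = np.maximum(x @ W1 + b1, 0.0)   # hidden layer (ReLU, assumed)
    return np.tanh(h @ W2 + b2)        # relaxed hash code in (-1, 1)

# e.g. images: 4096-d AlexNet features -> 64-bit codes
rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.01, size=(4096, 4096)), np.zeros(4096)
W2, b2 = rng.normal(scale=0.01, size=(4096, 64)), np.zeros(64)
codes = hash_net(rng.normal(size=(2, 4096)), W1, b1, W2, b2)
```

At retrieval time the relaxed codes are quantized with $sgn(\cdot)$ and compared by Hamming distance.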
\begin{table*}[!ht]
\caption{The MAP performances of MIRFlickr-25K and NUS-WIDE at various hashing code lengths}
\label{MAP}
\vspace{-0.1in}
\centering
\begin{tabular}{|c||c|c|cccc|cccc|}
\hline
& & & \multicolumn{4}{c|}{MIRFlickr-25K} & \multicolumn{4}{c|}{NUS-WIDE} \\ \cline{4-11}
& \multirow{-2}{*}{Task} & \multirow{-2}{*}{Method} & 16-bit & 32-bit & 64-bit & 128-bit & 16-bit & 32-bit & 64-bit & 128-bit \\ \hline
\multirow{18}{*}{\rotatebox[origin=c]{90}{CNN backbone:AlexNet}}& \multirow{9}{*} {I2T} & UGACH & 0.6595 & 0.6674 & 0.6777 & 0.6805 & 0.6107 & 0.6180 & 0.6167 & 0.6227 \\
& & UKD-SS & 0.6922 & 0.6903 & 0.6982 & 0.6963 & 0.6026 & 0.6202 & 0.6264 & 0.6112 \\
& & $UCH^{*}$ & 0.6540 & 0.6690 & 0.6790 & - & - & - & - & - \\
& & $SRCH^{*}$ & 0.6808 & 0.6916 & 0.6997 & - & 0.5441 & 0.5565 & 0.5671 & - \\
& & DJSRH & 0.6576 & 0.6606 & 0.6758 & 0.6856 & 0.5073 & 0.5239 & 0.5422 & 0.5530 \\
& & JDSH & 0.6417 & 0.6567 & 0.6731 & 0.6850 & 0.5128 & 0.5170 & 0.5475 & 0.5563 \\
& & DSAH & 0.6914 & 0.6981 & 0.7035 & 0.7086 & 0.5677 & 0.5839 & 0.5959 & 0.6053 \\
& & DGCPN & 0.6991 & 0.7116 & 0.7150 & 0.7243 & 0.5993 & 0.6150 & 0.6262 & 0.6338 \\
& & \textbf{ASSPH} & {
\color[HTML]{FE0000} \underline{0.7138}} & {\color[HTML]{FE0000} \underline{0.7244}} & {\color[HTML]{FE0000} \underline{0.7301}} & {\color[HTML]{FE0000} \underline{0.7341}} & {\color[HTML]{FE0000} \underline{0.6238}} & {\color[HTML]{FE0000} \underline{0.6340}} & {\color[HTML]{FE0000} \underline{0.6477}} & {\color[HTML]{FE0000} \underline{0.6512}} \\ \cline{2-11}
& \multirow{9}{*}{T2I} & UGACH & 0.6512 & 0.6587 & 0.6677 & 0.6693 & 0.5993 & 0.6022 & 0.6019 & 0.6006 \\
& & UKD-SS & 0.6760 & 0.6799 & 0.6689 & 0.6722 & 0.5830 & 0.6042 & 0.5963 & 0.6075 \\
& & $UCH^{*}$ & 0.6610 & 0.6670 & 0.6680 & - & - & - & - & - \\
& & $SRCH^{*}$ & 0.6971 & 0.7081 & 0.7146 & - & 0.5533 & 0.5670 & 0.5754 & - \\
& & DJSRH & 0.6594 & 0.6528 & 0.6688 & 0.6801 & 0.4936 & 0.5325 & 0.5508 & 0.5470 \\
& & JDSH & 0.6352 & 0.6608 & 0.6686 & 0.6736 & 0.5153 & 0.4982 & 0.5683 & 0.5704 \\
& & DSAH & 0.6903 & 0.6916 & 0.7013 & 0.7038 & 0.5695 & 0.5874 & 0.6066 & 0.6114 \\
& & DGCPN & 0.6991 & 0.7074 & 0.7133 & 0.7233 & 0.6141 & 0.6353 & 0.6456 & 0.6522 \\
& & \textbf{ASSPH} & {\color[HTML]{FE0000} \underline{0.7170}} & {\color[HTML]{FE0000} \underline{0.7265}} & {\color[HTML]{FE0000} \underline{0.7315}} & {\color[HTML]{FE0000} \underline{0.7380}} & {\color[HTML]{FE0000} \underline{0.6332}} & {\color[HTML]{FE0000} \underline{0.6502}} & {\color[HTML]{FE0000} \underline{0.6646}} & {\color[HTML]{FE0000} \underline{0.6628}} \\ \hline\hline
\multirow{12}{*}{\rotatebox[origin=c]{90}{CNN Backbone: VGG-16}}
&\multirow{6}{*}{I2T} & UGACH & 0.685 & 0.693 & 0.704 & 0.702 & 0.613 & 0.623 & 0.628 & 0.631 \\
& & UKD-SS & 0.714 & 0.718 & 0.725 & 0.720 & 0.614 & 0.637 & 0.638 & 0.645 \\
& & UCH & 0.654 & 0.669 & 0.679 & - & - & - & - & - \\
& & SRCH & 0.6808 & 0.6916 & 0.6997 & - & 0.5441 & 0.5565 & 0.5671 & - \\
& & DGCPN & 0.732 & 0.742 & 0.751 & - & 0.625 & 0.635 & 0.654 & - \\
& & \textbf{ASSPH} & {\color[HTML]{FE0000} \textbf{0.739}} & {\color[HTML]{FE0000} \textbf{0.753}} & {\color[HTML]{FE0000} \textbf{0.757}} & {\color[HTML]{FE0000} \textbf{0.762}} & {\color[HTML]{FE0000} \textbf{0.639}} & {\color[HTML]{FE0000} \textbf{0.660}} & {\color[HTML]{FE0000} \textbf{0.667}} & {\color[HTML]{FE0000} \textbf{0.669}} \\ \cline{2-11}
&\multirow{6}{*}{T2I} & UGACH & 0.673 & 0.676 & 0.686 & 0.690 & 0.603 & 0.614 & 0.640 & 0.641 \\
& & UKD-SS & 0.715 & 0.716 & 0.721 & 0.719 & 0.630 & 0.656 & 0.657 & 0.663 \\
& & UCH & 0.661 & 0.667 & 0.668 & - & - & - & - & - \\
& & SRCH & 0.6971 & 0.7081 & 0.7146 & - & 0.5533 & 0.567 & 0.5754 & - \\
& & DGCPN & 0.729 & 0.741 & 0.749 & - & 0.631 & 0.648 & 0.660 & - \\
& & \textbf{ASSPH} & {\color[HTML]{FE0000} \textbf{0.746}} & {\color[HTML]{FE0000} \textbf{0.756}} & {\color[HTML]{FE0000} \textbf{0.764}} & {\color[HTML]{FE0000} \textbf{0.767}} & {\color[HTML]{FE0000} \textbf{0.652}} & {\color[HTML]{FE0000} \textbf{0.673}} & {\color[HTML]{FE0000} \textbf{0.676}} & {\color[HTML]{FE0000} \textbf{0.674}} \\ \hline
\end{tabular}%
\vspace{-0.15in}
\end{table*}
\vspace{-0.03in}
\subsection{Retrieval Performance}
We compare the performance of ASSPH against its competitors. Since the implementations of UCH and SRCH have not been released, we copy their MAP results from the original papers and therefore cannot report their PR or Top-$k$ precision curves.
First, the top part of Table \ref{MAP} reports the MAP performance (averaged over three runs) at different code lengths for the I2T and T2I tasks. Our ASSPH consistently performs the best.
Then, Figure~\ref{Fig.pr} reports the PR curves (omitting the two endpoints at recall 0 and recall 1) and Top-$k$ precision with 64-bit codes. As expected, our method also outperforms all competitors on the PR curves for both tasks. For Top-$k$ precision, our improvement is small when $k$ is small but grows as $k$ increases, because the static similarity metrics used by the competitors are reliable only for top-ranked results.
The semantic metrics reconstructed in DJSRH, JDSH and DGCPN depend heavily on the accuracy of the original semantic metric, and current deep-learning features are highly reliable only for top-ranked results (e.g., top-50). In other words, these methods can exploit only the relationships among a few instances (such as the top-50 related ones), while most other relationships are filtered out as noise.
ASSPH, by contrast, can make full use of the relationships among all instances: while its improvement in top-50 retrieval performance may be relatively limited, it optimizes the overall retrieval results.
Next, we report results with a different CNN backbone. Some unsupervised hashing works~\cite{zhang2018unsupervised,yu2021deep} use VGG to extract higher-quality image features and thereby improve their effectiveness. Accordingly, the bottom part of Table~\ref{MAP} reports the retrieval performance of different methods based on VGG-16.
For ease of comparison, we directly list the experimental results reported by each competitor in its original paper. Note that DJSRH, JDSH, and DSAH are missing here because their original papers only report MAP@50, whereas the metrics we compare are based on MAP@all. Using AlexNet lowers the overall performance by 2\% to 4\% compared with the VGG-16-based results; this is the main reason why our results in the top part of Table~\ref{MAP} deviate from those reported in the original literature.
We observe that our improvement in the VGG-based experiments is smaller than in the AlexNet-based ones. This is because the prior knowledge used as training targets in existing works, such as the neighboring network in UGACH~\cite{zhang2018unsupervised} or the cross-modal similarity measure in DGCPN~\cite{yu2021deep}, becomes more reliable as feature quality improves. Nevertheless, our method outperforms its competitors regardless of the original feature quality. The performance gap between the two backbones further reflects the versatility of ASSPH: it remains effective even when the original similarity metric is of low quality.
\vspace{-0.03in}
\subsection{Ablation Studies}
We design several variants of ASSPH to show the effectiveness of several key modules.
To be more specific, we implement four variant models, including
i) \textbf{ASSPH\_NoAdapt} that removes the adaptive learning strategy from ASSPH, so $R$ will remain unchanged during the whole training process; ii) \textbf{ASSPH\_PairCorr} that replaces the structural correlation with the pairwise correlation; iii) \textbf{ASSPH\_NoCorr} that removes the correlational relationship and the corresponding correlation preserving loss $\mathcal{L}_{cp}$;
and iv) \textbf{ASSPH\_NoBinOpt} that removes the similarity-preserving binary optimization strategy from ASSPH.
The performance comparison among the variants is reported in Table~\ref{tab:freq1}. In most cases, the four key modules improve the performance to different extents, which illustrates their effectiveness in ASSPH. The BinOpt module is introduced to speed up convergence to the optimal binary representation; since training on MIRFlickr already converges quickly, BinOpt brings little additional benefit in this fast-converging scenario, and ASSPH performs only slightly better than ASSPH\_NoBinOpt.
\vspace{-0.05in}
\subsection{Parameter Analysis}
In this set of experiments, we further analyze the effects of different parameters in our method, including $K_{S}$ and $\gamma$ in the structural similarity construction, $K_{R}$ in the correlational relationship mining, and the training batch size. Parameter $\beta$, which balances the data bias in Eq.~(\ref{LOSS3}), has a rather narrow viable range (e.g., $[1,2]$). We follow previous works~\cite{su2019deep,liu2020joint,yang2020deep,yu2021deep} to set $\beta$ to 1.5 and confirm this choice by cross-validation.
First, we study $K_{R}$ (denoted by $N$), which controls the neighborhood range used as positive samples in correlation mining. Figure~\ref{Fig.Nparam} shows the MAP performance achieved by our method under different $N$ values. The performance is a single-peaked function of $N$: with a proper setting, the adaptive learning strategy is effective, and ASSPH can further increase the number of positive relations while maintaining their reliability.
Next, we study the value of $K_{S}$ in Table~\ref{tab:MAP_parameters}, which shows the cross-modal retrieval performance under different values of $K_S$. We observe that, on the one hand, the final result is not sensitive to the value of $K_S$; on the other hand, when $K_S$ increases significantly, the final result decreases. This is because as $K_S$ approaches the training-set size, we effectively consider the correlations between all instances, which introduces additional noise since the similarity relations between distant instances are less credible.
In the same table, we also report the results of our experiments with different batch sizes and $\gamma$ values. ASSPH has a low dependence on the batch size, which allows the unsupervised learning strategy to learn effectively even with small samples and limited datasets: as the batch size increases from its default value of 32 to 256, the performance changes little. Parameter $\gamma$ controls the weight of structural similarity when calculating the structural semantic similarity $S$. Our method is still effective even when $\gamma=0$ (i.e., $S$ is purely based on structural similarity); however, combining structural similarity and static cross-modal similarity into a multi-level cross-modal similarity further improves the performance.
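As an illustration of how $\gamma$ blends the two similarity levels, one possible construction is sketched below; it is consistent with $\gamma=0$ giving a purely structural $S$, but the helper names and the $k$-NN-overlap form of the structural term are assumptions, not necessarily our exact formulation.

```python
import numpy as np

def combined_similarity(feat_a, feat_b, gamma=0.3, k=2):
    """Blend a static similarity (from raw features of both modalities)
    with a structural one (overlap of k-nearest-neighbour sets)."""
    def cos_sim(x):
        x = x / np.linalg.norm(x, axis=1, keepdims=True)
        return x @ x.T
    s_static = (cos_sim(feat_a) + cos_sim(feat_b)) / 2   # static part
    n = len(feat_a)
    nn = np.argsort(-s_static, axis=1)[:, 1:k + 1]       # k-NN, excluding self
    memb = np.zeros((n, n))
    memb[np.arange(n)[:, None], nn] = 1.0                # neighbourhood membership
    s_struct = memb @ memb.T / k                         # neighbourhood overlap
    return gamma * s_static + (1 - gamma) * s_struct

S = combined_similarity(np.random.default_rng(1).normal(size=(5, 3)),
                        np.random.default_rng(2).normal(size=(5, 3)))
```

The structural term compares how two instances relate to the rest of the batch rather than only to each other, which is what makes the combined metric "multi-level".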
\vspace{-0.03in}
\subsection{Further Analysis of the Adaptive Process}
Figure~\ref{Fig.Adap} describes the convergence of the adaptive mining process and the cross-modal semantic complement it brings. Figure~\ref{Fig.Adap.1} shows the variation of the number of correlated instances under different scales $K_{R}=N$, where $Num$ on the y-axis indicates that there are $2^{Num}\times 10^{4}$ instances. We observe that our adaptive mining strategy expands the correlation set to four times its initial size, while the growth rate tends to zero, which ensures the convergence of the optimization process. Figure~\ref{Fig.Adap.2} reports the variation of the similarity distribution among the correlated instances mined during training: the number of correlations increases as training proceeds, while their similarity distribution gradually shifts toward lower values. Meanwhile, we test the accuracy of the mined correlation relationships: it rises from 0.861 initially to 0.885 after 25 training epochs, which indicates the reliability of our adaptive learning.
Therefore, we can conclude that ASSPH obtains additional joint-modal correlations independent of the original static metrics, while most initial correlations are of higher similarity.
These low-similarity correlations can be seen as hard samples that contribute to the enrichment of the semantic representation of our hashing model.
\begin{table}[h]
\vspace{-0.1in}
\caption{Ablation experiments with 64-bit hash codes}
\label{tab:freq1}
\vspace{-0.12in}
\begin{tabular}{ccccc}
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{c}{NUS-WIDE} & \multicolumn{2}{c}{MIRFlickr-25K} \\ \cline{2-5}
& I2T & T2I & I2T & T2I \\ \hline
ASSPH & 0.650 & 0.667 & 0.730 & 0.732 \\
ASSPH\_NoAdapt & 0.638 & 0.654 & 0.723 & 0.729 \\
ASSPH\_PairCorr & 0.633 & 0.654 & 0.718 & 0.728 \\
ASSPH\_NoCorr & 0.604 & 0.617 & 0.696 & 0.691 \\
ASSPH\_NoBinOpt & 0.639 & 0.648 & 0.725 & 0.732 \\ \hline
\end{tabular}
\vspace{-0.15in}
\end{table}
\begin{table}[h]
\caption{The MAP performance with different parameters}
\label{tab:MAP_parameters}
\vspace{-0.15in}
\begin{tabular}{c|c||cccc}
\hline
\multicolumn{2}{c}{} & \multicolumn{2}{c}{NUS-WIDE} & \multicolumn{2}{c}{MIRFlickr-25K} \\ \cline{3-6}
\multicolumn{2}{c}{} & I2T & T2I & I2T & T2I \\ \hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{$K_S$}} &
1000 & 0.642 & 0.656 & 0.727 & 0.726 \\
&2000 & 0.648 & 0.665 & 0.730 & 0.721 \\
&3000 & 0.639 & 0.654 & 0.726 & 0.728 \\
&4000 & 0.627 & 0.655 & 0.722 & 0.719 \\
\hline
\hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{batch size}} &
32 & 0.634 & 0.652 & 0.726 & 0.732 \\
&64 & 0.636 & 0.651 & 0.727 & 0.723 \\
&128 & 0.639 & 0.654 & 0.726 & 0.725 \\
&256 & 0.638 & 0.655 & 0.727 & 0.729 \\
\hline\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{$\gamma$}} & 0 & 0.633 & 0.649 & 0.722 & 0.721 \\
&0.2 & 0.632 & 0.651 & 0.73 & 0.729 \\
&0.4 & 0.630 & 0.644 & 0.727 & 0.727 \\
&0.6 & 0.628 & 0.646 & 0.719 & 0.722 \\
&0.8 & 0.622 & 0.641 & 0.705 & 0.71 \\
&1 & 0.623 & 0.639 & 0.668 & 0.662 \\
\hline
\end{tabular}
\vspace{-0.15in}
\end{table}
\begin{figure}[h]
\vspace{-0.1in}
\centering
\subfigure[MAP on NUS-WIDE]{
\label{Fig.Nparam.1}
\includegraphics[width=0.22\textwidth]{NonNUSWIDE.pdf}}
\subfigure[MAP on MIRFlickr-25K]{
\label{Fig.Nparam.2}
\includegraphics[width=0.22\textwidth]{NonMIRFlickr.pdf}}
\vspace{-0.2in}
\caption{The MAP performance with different $K_{R}=N$ on NUS-WIDE and MIRFlickr-25K datasets (64-bit hash code).}
\label{Fig.Nparam}
\vspace{-0.2in}
\end{figure}
\begin{figure}[h]
\vspace{-0.15in}
\centering
\subfigure[Number of correlations during the training]{
\label{Fig.Adap.1}
\includegraphics[width=0.22\textwidth]{NUMofCorr.pdf}}
\subfigure[Distribution of Similarity between correlated instances]{
\label{Fig.Adap.2}
\includegraphics[width=0.22\textwidth]{SimDist.pdf}}
\vspace{-0.2in}
\caption{Descriptions of the adaptive correlation mining on MIRFlickr-25K with 64-bit hash codes.}
\label{Fig.Adap}
\vspace{-0.2in}
\end{figure}
\section{Conclusion}
This paper proposes a novel unsupervised cross-modal hashing framework, named \emph{Adaptive Structural Similarity Preserving Hash (ASSPH)}, and conducts extensive experiments to verify its effectiveness. The framework builds on asymmetric structural semantic preservation, augmented with adaptive correlation expansion and constraints to learn joint-semantic relationships adaptively. Combining the two parts also helps avoid the training collapse caused by unbalanced samples and small datasets, while mitigating the limitations of static metric reconstruction in traditional unsupervised cross-modal hashing.
\section{Acknowledgements}
This research is supported in part by the National Natural Science Foundation of China under grant 62172107 and the National Key Research and Development Program of China under grant 2018YFB0505000.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Variational quantum algorithms such as the \ac{VQE} or the \ac{QAOA} \cite{Farhi_Goldstone_14} have received a lot of attention of late. They are promising candidates for gaining a quantum advantage already with \ac{NISQ} computers in areas such as quantum chemistry \cite{Cao_Aspuru-Guzik_19}, condensed matter simulations \cite{Smith_Knolle_19}, and discrete optimization tasks \cite{Zhou_Lukin_18}.
A major open problem is that of finding good classical optimizers which are able to guide such hybrid quantum-classical algorithms to desirable minima and to do that with the smallest possible number of calls to a quantum computer backend.
In classical machine learning, the \ac{ADAM} \cite{Kingma_Ba_14} optimizer is among the most widely used and recommended algorithms \cite{Karpathy_17, Ruder_16}, and has been one of the most important enablers of progress in deep learning in recent years. Such an accurate and versatile optimizer for quantum variational algorithms is yet to be found.
We are here mostly interested in variational algorithms for quantum many-body problems.
To make progress towards finding an efficient and reliable optimizer for this domain, we concentrate on cost functions derived from typical quantum many-body Hamiltonians such as the \ac{TFIM} and the \ac{XXZM} for two reasons:
First, their system size can be varied allowing us to systematically study scaling effects.
Second, for \textit{integrable} systems such as the \ac{TFIM} the exact ground states are known and it is possible to construct ansatz classes for \ac{VQE} circuits that provably contain the global minimum and can be simulated more efficiently.
Such systems thus allow us to distinguish between the performance of the optimizers and the expressiveness of the ansatz.
As a first result we show that the commonly used optimization strategies \ac{ADAM} \cite{Ostaszewski_Benedetti_19} and \ac{BFGS} \cite{Broyden_70, Fletcher_70, Goldfarb_70,Shanno_70,Guerreschi_Smelyanskiy_17, Mbeng_Santoro_19, Wang_Rieffel_19, Grimsley_Mayhall_19, Romero_Aspuru-Guzik_17, Gard_Barnes_20} both run into convergence problems when the system size of a \ac{VQE} is increased.
This happens already for system sizes within the reach of current and near-future \ac{NISQ} devices, which underlines the importance of a systematic search for suitable optimization strategies.
The \ac{BFGS} algorithm fails systematically for bigger systems above about 20 spins in the \ac{TFIM} corresponding to $20$ variational parameters.
The performance of \ac{ADAM} is shown to depend strongly on the learning rate via multiple effects, and the number of epochs required for convergence increases quickly with the problem size.
Convergence can be improved but only with an expensive fine-tuning of the hyperparameters.
We then study the performance of an optimization strategy known as the Quantum Natural Gradient or \acs{NatGrad} \cite{Stokes_Carleo_19, Amari_98, Harrow_Napp_19} and introduce Tikhonov regularization to the classical processing step in the \ac{VQE} \cite{Martens_Sutskever_12}.
We find that \ac{NatGrad} regularized in this way does consistently find a global optimum for the largest system sizes we test (40 qubits) and requires significantly fewer epochs to do so than \ac{ADAM} (in the cases where \ac{ADAM} converges at all).
This is in sharp contrast to the usually very good performance of the \ac{ADAM} optimizer and related (stochastic) gradient-descent-based techniques in the optimization of classical neural networks.
A possible explanation for this good performance in usually overparametrized settings is the following:
For common activation functions and random initialization, increasing overparametrization tends to transform local minima into saddle points \cite{Livni_Shamir_14, Li_Sun_18}.
The optimizer then mainly needs to follow a deep and narrow valley with comparably flat bottom to find a global minimum.
The \ac{ADAM} optimizer is perfectly suitable to pursue this path as it has per-parameter learning rates that also take into account the average of recent updates.
In this way it avoids side-to-side oscillations in the valley and can build up momentum to slide down the relatively flat bottom of the valley.
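Both ingredients --- momentum over recent gradients and a per-parameter step scale --- appear directly in the standard \ac{ADAM} update rule \cite{Kingma_Ba_14}, sketched here in NumPy with illustrative names ($\eta=0.06$ is the learning rate used in our \ac{TFIM} runs):

```python
import numpy as np

def adam_step(theta, grad, state, eta=0.06, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update: bias-corrected running mean of gradients (momentum)
    divided by a bias-corrected per-parameter RMS gradient scale."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad           # running mean of gradients
    v = b2 * v + (1 - b2) * grad ** 2      # running mean of squared gradients
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)

# first step from zero state: the update reduces to -eta * sign(grad)
theta, state = np.zeros(3), (np.zeros(3), np.zeros(3), 0)
theta, state = adam_step(theta, np.array([1.0, -2.0, 0.5]), state)
```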
The energy landscapes of typical variational quantum algorithms however look very different.
First, having deep and wide circuits with many parametrized gates is prohibitive on \ac{NISQ} computers, which excludes overparametrization as a tool to make the variational space more accessible.
Second, the variational parameters usually feed into exponentially generated gates and thus the cost function is a combination of trigonometric functions of the parameters.
It appears that \ac{NatGrad} is able to effectively use the information about the ansatz class to navigate the resulting energy landscape with many local minima.
Third, it is known that large parts of the parameter space form so-called barren plateaus with very small gradients \cite{McClean_Neven_18}.
A random initialization of the parameters in reasonably deep \ac{VQE}s is thus almost certainly going to leave one stuck in such a plateau.
Of course this also implies that one must prevent the optimizer from jumping to a random location in parameter space during optimization.
This can be achieved in \ac{NatGrad} by inhibiting unsuitably large steps by means of Tikhonov regularization.
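A standard way to realize this is to Tikhonov-regularize the inversion of the metric, i.e., to take update steps $\theta \leftarrow \theta - \eta\,(F + \lambda\mathbb{1})^{-1}\nabla E$ with the Fubini-Study metric $F$; the sketch below (function and parameter names are illustrative) shows one such step:

```python
import numpy as np

def natgrad_step(theta, grad, fubini, eta=0.1, lam=1e-3):
    """One Tikhonov-regularized natural-gradient step:
    theta <- theta - eta * (F + lam * I)^{-1} grad.
    The regularizer lam bounds the step size in directions where
    F is near-singular (e.g. on barren plateaus)."""
    f_reg = fubini + lam * np.eye(len(theta))
    return theta - eta * np.linalg.solve(f_reg, grad)

# with F = I the update reduces to damped gradient descent
new_theta = natgrad_step(np.zeros(2), np.array([1.0, 0.0]), np.eye(2))
```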
Finally, due to the small number of variational parameters in \acp{VQE}, the added (classical) computational cost of inverting the Fubini-Study metric is negligible compared to the cost of sampling from the quantum backend.
This fact, combined with the highly correlated nature of the learning landscape in quantum many-body problems \cite{Park_Kastoryano_19}, might render second-order methods such as \ac{NatGrad} more amenable to quantum than to classical settings, where samples are cheap, but there are many variational parameters.
Our second set of results concerns the effect of overparametrization in \acp{VQE}.
We study the impact of adding redundant layers to the ideal circuit ansatz.
Not only does this overparametrization not improve the performance, it actually appears to make finding the optimum significantly harder.
The \ac{BFGS} algorithm but also the \ac{ADAM} optimizer, designed to thrive on additional degrees of freedom, fail frequently in this setting.
This cannot easily be mitigated by increasing the iteration budget and reducing the learning rate of the \ac{ADAM} optimizer.
While also affected, \ac{NatGrad} shows much higher resilience against this effect, compensating its higher per-epoch cost with a higher chance to succeed.
In order to generalize our results, we consider the \ac{XXZM} together with the Trotterized time evolution operator as circuit ansatz.
Indeed we find \ac{BFGS} to experience the same difficulties in high-dimensional parameter spaces and \ac{ADAM} to exhibit a similar behaviour of the required number of epochs as for the \ac{TFIM}.
The performance of \ac{NatGrad} however is not as reliable for this model as it shows very flat intermediate optimization curves which obscure the distinction between challenging phases in the computation and convergence to local minima.
A detailed investigation of the origin of the deviating behaviour and potential improvements of \ac{NatGrad} will be subject of future work.
\section{Main results}
In this section, we state and assess the main numerical results of the paper.
For a detailed description of the optimizers and circuit models, see the Methods section (sec.~\ref{sec:methods}).
\begin{figure}[h!]
\flushleft
\includegraphics[width=0.48\textwidth]{./gfx/tfi_nondist_prec_min.pdf}
\vspace{0.25cm}
\includegraphics[width=0.48\textwidth]{./gfx/tfi_nondist_epochs.pdf}
\caption{Relative error $\delta_\text{min}$ and epoch count $N_\text{epoch}$ for the three optimizers with random initialization for the \ac{QAOA} circuit with $n=N$ variational parameters.
The \ac{ADAM} optimizer is chosen with a learning rate of $\eta=0.06$.
(a) NatGrad reaches the ground state for all instances and all system sizes, while BFGS and ADAM start systematically getting stuck in local minima beyond a system size of $N=20$. (b) The monomial fits to the mean number of epochs to global minimization yield the scalings $N^{2.1}$ (\ac{BFGS}), $N^{2.3}$ (\ac{ADAM}) and $N^{2.1}$ (\ac{NatGrad}). ADAM experiences a transition around $N=22$ qubits, where the number of epochs to convergence jumps by an order of magnitude.
\label{fig:tfi_nondist_prec}}
\end{figure}
\subsection{QAOA circuits for the TFIM}\label{sec:res_tfi_qaoa}
We start our numerical investigation with the \ac{QAOA} circuit for the \acs{TFIM} on $N$ qubits with a depth of $p=N/2$ blocks and analyze the \textit{accuracy, speed and stability} of all three optimizers \ac{BFGS}, \ac{ADAM} and \ac{NatGrad} (see sec.~\ref{sec:qaoa} for the ansatz and \ref{sec:tfi} for the model).
These circuits with $n=N$ parameters are sufficiently expressive to contain the ground state and respect the symmetries of the Hamiltonian.
For each system size we sample $20$ points in parameter space and initialize each optimizer at these positions.
This leads to statistically distributed performances of the algorithms and, as we perform exact simulations without sampling or noise, it is the only source of stochasticity.
The minimal relative error $\delta_\text{min}$ and the number of required epochs for each initial point and optimizer are shown in fig.~\ref{fig:tfi_nondist_prec}.
Before we analyze the results, recall that the optimization problem can be solved exactly, i.\,e.~ the ansatz contains the true ground state.
This enables us to identify optimization results with precisions $\delta_\text{min}\geq 10^{-3}$ as local minima and we consider them to be unsuccessful.
In practical applications the precision reached in both local and global minima would be much lower and in particular results with $\delta\approx 10^{-10}$ are unreasonable to measure in quantum machines.
This choice of benchmark is made in order to clearly reveal intrinsic features of the optimizers.
For realistic applications, a systematic study of noise needs to be taken into account as well.
Our first observation is that the \ac{BFGS} optimizer systematically fails to converge for system sizes larger than $N=20$.
For small system sizes, however, it reaches a global minimum in the smallest number of iterations and at low cost per epoch (see tab.~\ref{tab:optimizercost}).
The runs of \ac{BFGS} interrupted at a $\delta<10^{-6}$ level could be improved to reach the goal of $\delta=10^{-10}$ by tuning the interrupt criterion. Therefore, these runs are considered successful.
For \ac{ADAM} we here show the optimization results with $\eta=0.06$ which similarly display a deterioration in accuracy for system sizes beyond $N=26$.
It is important to note that the failed \ac{ADAM} runs are interrupted after $5\cdot 10^4$ iterations and convergence with additional runtime is not excluded in general.
The question is then: How many iterations are needed for convergence?
We observe a polynomial scaling of the required iterations in the system size up to a transition point $N^*(\eta)$ which depends on the chosen learning rate.
Above this system size \textit{both} successful and failing runs take much longer and exceed the set budget of $5\cdot 10^4$ iterations.
In fig.~\ref{fig:tfi_nondist_prec} we present the \ac{ADAM} runs for a medium learning rate in order to demonstrate the described behavior but not the best possible performance of the \ac{ADAM} optimizer. We present a more detailed analysis of the influence of the learning rate on the performance of \ac{ADAM} in appendix~\ref{sec:app_eta_Adam}.
\ac{NatGrad} shows reliable convergence to a global minimum for all sampled initial parameters.
The number of epochs to convergence scales polynomially with the system size and there is little variance in the required number of epochs.
\begin{figure}
\flushleft
\includegraphics[width=0.48\textwidth]{./gfx/tfi_nondist_tq.pdf}
\caption{Estimated runtimes on a quantum computer for the optimization tasks shown in fig.~\ref{fig:tfi_nondist_prec} based on the scalings in tab.~\ref{tab:optimizercost}.
We only show successful runs with $\delta_\text{min}\leq10^{-5}$ and note that none of the \ac{ADAM} optimizations for $N\geq 30$ attained the full precision of $10^{-10}$ such that the scaling is truncated.
\label{fig:tfi_nondist_tq}}
\end{figure}
Using the scalings discussed in more detail in sec.~\ref{sec:optcost}, taking the translation symmetry of the \ac{TFIM} into account and employing the estimates $N_M/N_a\approx 10$ \cite{McArdle_Yuan_19} and $t_3\approx t_2\approx t_1$ (see sec.~\ref{sec:optcost} for definitions of these quantities) we show the expected optimization durations on a quantum computer in fig.~\ref{fig:tfi_nondist_tq}.
Due to the increased cost per epoch and a similar scaling of the number of iterations for all optimizers, the costs for \ac{NatGrad} are considerably higher than those for \ac{BFGS} and \ac{ADAM} in the regimes in which they converge and in which \ac{ADAM} does not suffer from the sudden increase in required epochs.
We expect the scaling for \ac{ADAM}, which is truncated in fig.~\ref{fig:tfi_nondist_prec} due to our epoch budget, to yield quantum runtimes comparable to those of \ac{NatGrad}.
As we show in appendix \ref{sec:app_eta_Adam}, reducing the learning rate makes bigger system sizes accessible to \ac{ADAM}, but also rather drastically increases run times because of slower convergence.
The structure of the investigated Hamiltonian has a major influence on the scalings, as the translation symmetries in the presented spin chain models reduce $K_H$ to a small constant, leading to a high relative cost of obtaining the Fubini matrix.
For chemical systems, for example, with at least quadratic scaling of $K_H$ in $N$ and depending on the ansatz class, the relative additional cost per epoch for \ac{NatGrad} can be significantly smaller, which in combination with the unreliable convergence of \ac{ADAM} from system size $N^*$ onwards would make \ac{NatGrad} an attractive optimization technique.
In summary, we find the \ac{BFGS} optimizer to run into convergence problems already for medium sized systems, \ac{ADAM} to take a large number of epochs with a transition into unpredictable cost at a certain system size and \ac{NatGrad} to exhibit reliable convergence with fewer epochs than \ac{ADAM}, but an overall high cost when running on a real quantum computer.
Furthermore, the success of both commonly used optimizers, \ac{BFGS} and \ac{ADAM}, strongly depends on the initial parameters whereas \ac{NatGrad} shows stable convergence and a small variance of the optimization duration.
\subsection{Overparametrization by adding Y layers}
We now extend the optimal \ac{QAOA} circuit for the \ac{TFIM} by adding redundant layers of Pauli $Y$ rotations.
These additional rotations can be deactivated by setting their variational parameter $\kappa$ to zero. This means in particular that the new ansatz classes still contain the ground state and simply introduce a form of overparametrization.
As Pauli $Y$ rotations cannot be represented in the free fermion basis of the Hamiltonian (see eqn.~(\ref{eq.res:ff_mapping})), the overparametrized class can be seen as breaking a symmetry.
This means that for any given $\kappa\neq 0$, the ansatz state will not be a global minimum and it will be crucial for an optimization algorithm to find the submanifold with $\kappa=0$.
This is clear for a single additional layer of gates, but we expect it to hold for multiple layers as well.
Although the present situation is artificially constructed and the broken symmetry is manifest, similar behavior is expected in systems where we do not have an analytical solution.
More generally, even for a suitable ansatz class a very specific configuration of the variational parameters is necessary to find the ground state, and the chosen optimization algorithm should consequently be resilient to local minima.
Our choice of overparametrization leads to such local minima, constructing an optimization problem that can be used as a test for the resilience of the optimizer.
\begin{figure}
\flushleft
\includegraphics[width=0.48\textwidth]{./gfx/tfi_ylay_prec_min.pdf}
\vspace{0.25cm}
\includegraphics[width=0.48\textwidth]{./gfx/tfi_ylay_ratio.pdf}
\caption{(a) Achieved precisions $\delta_\text{min}$ and (b) fraction of successful optimizations with the three optimizers on \ac{QAOA} circuits extended by one or two Pauli $Y$-rotation layers.
Successful optimization runs and those only converging locally are separated by a gap in the attained minimal precision and in contrast to fig.~\ref{fig:tfi_nondist_prec} the iteration budget is almost never consumed entirely.
Instead the optimization is completed -- yielding either a global or a local minimum.
\label{fig:tfi_ylay_ratio}}
\end{figure}
We look at two configurations of the extended circuits with $Y$-rotation layers included at positions $\left\{\left\lfloor \frac{N}{4}\right\rfloor\right\}$ and $\left\{\left\lfloor \frac{N}{4}\right\rfloor,\left\lfloor \frac{N}{2}\right\rfloor-1\right\}$ respectively.
With this choice we avoid special points in the circuit and expect these setups to properly emulate the problem of (additional) local minima.
Again we sample 20 positions in parameter space close to the origin and initialize the three optimizers at these points, resulting in the precisions and success ratios shown in fig.~\ref{fig:tfi_ylay_ratio} together with the estimated quantum computer runtimes in fig.~\ref{fig:tfi_ylay_tq}.
We observe a clear distinction between the optimizations that succeed to find a global minimum and those which converge to a local minimum only, such that we obtain a well-defined success ratio for this numerical experiment.
In contrast to the results for the minimal \ac{QAOA} circuit, no intermediate precisions caused by a finite iteration budget occur.
All optimizers suffer from the introduced gates as they show convergence to local minima for system sizes they tackled successfully without overparametrization.
\begin{figure}
\flushleft
\includegraphics[width=0.48\textwidth]{./gfx/tfi_ylay_tq.pdf}
\caption{Estimated runtime scaling on a quantum computer for the optimizations in fig.~\ref{fig:tfi_ylay_ratio} based on tab.~\ref{tab:optimizercost} and the same assumptions as in fig.~\ref{fig:tfi_nondist_tq}.
Here we also include unsuccessful instances and for the \ac{ADAM} optimizer the lower branch corresponds to successful minimizations.
\label{fig:tfi_ylay_tq}}
\end{figure}
For \ac{BFGS}, this effect appears for some system sizes for one layer of Pauli $Y$ rotations but is much stronger for two additional layers, reducing the fraction of globally minimized runs to less than 50\% for multiple system sizes.
We do not claim a scaling behaviour with the system size but note an alternating pattern for the configuration with two $Y$ layers, demonstrating large fluctuations of the success ratio (c.\,f.~ in particular system sizes $10$ and $12$ for two $Y$ layers).
For the \ac{ADAM} optimizer we use a comparatively small learning rate of $\eta=0.02$, which pushes the jump of the optimization duration observed before well out of the treated system size range.
Nonetheless, we observe runs stuck in local minima already for small systems without exceeding the iteration budget such that in contrast to sec.~\ref{sec:res_tfi_qaoa} allowing for a longer runtime would not improve the performance.
Also for \ac{ADAM} the fraction of successful instances fluctuates with the system size, but in particular for two Pauli $Y$ rotation layers the effect becomes stronger for bigger systems and no successful runs were observed for $N\geq14$.
The performance of \ac{NatGrad} on the other hand, for which we reduced the learning rate to $\eta=0.05$, is more reliable and the success rate is the best for most of the circuits, with few exceptions.
In particular there are only few system sizes with local convergence for one and two additional degrees of freedom each and overall the success rate of \ac{NatGrad} does not drop below $60\%$.
For all optimizers we confirm that successful runs deactivate the additional Pauli $Y$ rotation layers by setting the corresponding parameters to 0 and that all optimizations with worse precision failed to do so, leading to a local minimization only.
The quantum runtimes demonstrate the expected scaling with \ac{NatGrad} as the most expensive optimizer, where the small iteration count compensates the increased cost per epoch for small systems. However, the increased effort is rewarded with significantly higher success rates, making \ac{NatGrad} a strong choice for (potentially) overparametrized \ac{VQE} optimization.
We again note that the relative costs of the Fubini matrix are high for spin chain systems and that the reduced number of epochs required by \ac{NatGrad} will have a bigger impact in other systems.
Overall our numerical experiments with the extended \ac{QAOA} circuits for the \ac{TFIM} demonstrate the fragility of the three tested optimizers to perturbations of the ansatz class.
A significant decrease in performance is caused by overparametrization outside of the symmetry sector of the model and the \ac{QAOA} ansatz class.
All algorithms were successful for the original \ac{QAOA} circuits on the considered system sizes such that the reduced success ratio can directly be attributed to the extension of the ansatz class.
This is in contrast to machine learning settings where heavy overparametrization is essential to make the cost function landscape tractable to local optimizers like \ac{ADAM}.
The strong fluctuations over the tested system sizes indicate that more repetitions of the optimization would be required to resolve systematic behaviour.
We note that the \ac{BFGS} algorithm in some instances converges to a local minimum although it has access to non-local information via its line search subroutine.
In particular in the presence of two misleading parameters in the search space, the local information determining the one-dimensional subspace does not seem to suffice any longer to find the global minimum, even though the approximated Hessian is used.
For the \ac{ADAM} optimizer the initial gradient activates the symmetry-breaking layers, and due to the restriction to local information the algorithm is not able to leave the resulting sector of the search space with local minima that it enters initially.
\ac{NatGrad} is also affected by the limitation to local information, but because of its access to geometric properties of the ansatz state class it is on average less likely to leave the Pauli $Y$-rotation layers activated.
We attribute this to the fact that \ac{NatGrad} performs the optimization in the locally undeformed Hilbert space by extracting the influence of the parametrization.
As a consequence the optimizer does not follow the incentive to activate the Pauli $Y$ rotations at the beginning when given the same gradient as \ac{ADAM}, but stays within the minimal parameter subspace.
A better foundation for this intuition and the observed exceptions will be subject to further investigations of \ac{NatGrad}.
In general, one could expect the cost function of \acp{VQE} to behave differently than those in common machine learning models as the parameters enter in a very non-linear manner via rotation gates.
We were able to demonstrate such a difference with \ac{ADAM}, which benefits from overparametrization in machine learning applications but suffers significantly from the additional parameters of the extended circuits.
The restriction of \ac{NISQ} devices to rather shallow circuits implies much smaller numbers of variational parameters than in machine learning such that \ac{NatGrad} can be considered a viable option for \ac{VQE} optimization.
\subsection{Results on the Heisenberg model}\label{sec:res_xxz}
\begin{figure}
\flushleft
\vspace{0.25cm}
\includegraphics[width=0.48\textwidth]{./gfx/xxz_prec_min.pdf}
\vspace{0.25cm}
\includegraphics[width=0.48\textwidth]{./gfx/xxz_epochs.pdf}
\caption{(a) Minimal achieved precisions and (b) iteration count of the three optimizers on the ansatz in eqn.~(\ref{eq.def:xxz_ansatz}) for the \ac{XXZM} at depth $p=N$.
The circuit contains $n=3N$ parameters and the learning rates are $0.03$ and $0.1$ for \ac{ADAM} and \ac{NatGrad} respectively. The epoch count is truncated at $10000$ iterations to improve the readability.
\label{fig:xxz}}
\end{figure}
To complement the study on scaling and overparametrization in the integrable \ac{TFIM} we present here numerical results on the \ac{XXZM} with the ansatz discussed in detail in sec.~\ref{sec:xxz}.
The performance of the three optimizers, initialized at $20$ distinct points close to $0$, is shown in fig.~\ref{fig:xxz} together with the number of epochs.
The behaviour of \ac{ADAM} and \ac{BFGS} is similar to the one observed on the \ac{TFIM}, i.\,e.~ \ac{ADAM} successfully achieves the target accuracy of $10^{-5}$ but shows an abrupt increase in the iteration number and \ac{BFGS} starts to fail for medium sized systems.
The number of variational parameters at which the respective transition occurs is similar to that in the \ac{TFIM}:
The costs of \ac{ADAM} jump abruptly at $n=24$ and $n=36$ and similarly the runs with comparable learning rate for the \ac{TFIM} show (less clear) transitions at $n=26$ and $n=30$.
Likewise the \ac{BFGS} optimizer starts failing significantly at $n=24$ and $n=22$ for the \ac{XXZM} and the \ac{TFIM}, respectively.
The Hilbert space dimension however clearly differs at the transition points.
It is intuitively clear that the main influence should be due to the properties of the parameter space, but in general the physical system size might affect the performance as well by shaping the energy landscape.
The \ac{NatGrad} optimizer performs less well on the \ac{XXZM}, as it is sometimes interrupted during phases of small updates.
This might indicate either convergence to a local minimum or a too small learning rate.
A preliminary further analysis showed that reducing $\eta$ in \ac{NatGrad} can prevent convergence for some instances that were optimized successfully before.
This hints at the second scenario because a reduced learning rate should generally improve the quality of \ac{NatGrad}.
This will be investigated in a follow-up study.
We note that the attained precision in failed runs does not show a consistent gap across the system sizes, which makes the analysis of the performance less clear.
Nonetheless the deviation from the target precision is significantly smaller for \ac{NatGrad} than for \ac{BFGS}, and if one extends the gap visible for $N=8$ and $N=10$, many instances of \ac{BFGS} are categorized as unsuccessful.
We interpret the results on the \ac{XXZM} as follows:
Some difficulties of the commonly used \ac{BFGS} optimizer and \ac{ADAM} appear also in this model already for moderate system sizes.
The size of the parameter space seems to primarily determine whether performance (\ac{BFGS}) or runtime (\ac{ADAM}) issues arise, not so much the Hilbert space dimension of the underlying many-body model.
The very reliable performance of \ac{NatGrad} seen in the \ac{TFIM} cannot necessarily be generalized to other spin chain models, let alone to other classes of Hamiltonians.
However, the characteristics of the failed runs let us hope that systematic improvements to \ac{NatGrad} might be possible.
\section{Methods}\label{sec:methods}
\subsection{Variational Quantum Eigensolver}
The framework of our work is the \ac{VQE}, a proposal to use parametrized circuits on a quantum computer in combination with classical optimization routines to prepare the ground state of a target Hamiltonian $H$.
In the first part of a \ac{VQE} one constructs a quantum circuit that contains parametrized gates.
Given input parameters $\theta$ for the circuit, a quantum computer can then prepare the corresponding ansatz state and measure an objective function, chosen to be the energy of the Hamiltonian
\begin{equation}\label{eq.def:energy_cost}
E(\theta)\coloneqq \bra{\psi(\theta)}H\ket{\psi(\theta)}
\end{equation}
and for benchmark problems with known ground state energy $E_0$, the relative error $\delta$ can be calculated as
\begin{equation}
\delta(\theta)\coloneqq\frac{E(\theta)-E_0}{|E_0|}.
\end{equation}
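Both quantities can be evaluated directly in an exact state-vector simulation, as done in our benchmarks. The following minimal numpy sketch uses a hypothetical single-qubit Hamiltonian and a single $Y$-rotation ansatz purely for illustration:

```python
import numpy as np

# Pauli matrices
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Hypothetical single-qubit toy Hamiltonian H = Z + 0.5 X
H = Z + 0.5 * X
E0 = np.linalg.eigvalsh(H)[0]           # exact ground-state energy

def energy(psi):
    """E(theta) = <psi|H|psi> for a normalized state vector."""
    return np.real(np.vdot(psi, H @ psi))

def rel_error(psi):
    """delta = (E - E0) / |E0| as defined above."""
    return (energy(psi) - E0) / abs(E0)

def ansatz(theta):
    """|psi(theta)> = exp(-i theta Y / 2)|0>, a single Y rotation."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

delta = rel_error(ansatz(0.3))          # positive for a non-optimal angle
```

For this toy model the energy is $E(\theta)=\cos\theta + \tfrac{1}{2}\sin\theta$, so the relative error vanishes at the optimal angle $\theta^*=\pi+\arctan(1/2)$.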
Additionally one can prepare modified versions of the circuit to determine auxiliary quantities like the energy gradient in the parameter space \cite{Schuld_Killoran_19}.
The second part of the \ac{VQE} scheme is an optimization strategy on a classical computer which is granted access to the quantum black box just constructed.
In the most straightforward scenario this is a black box minimization scheme, but using auxiliary quantities, more sophisticated optimization methods can be realized as well.
There are two main theoretical challenges for successfully applying \ac{VQE}:
First, the construction of a sufficiently complex, but not overly expensive, circuit that gives rise to an ansatz class containing the ground state -- \textit{expressivity}.
Second, the choice of a suitable optimizer that is able to search for the ground state within the created parameter space -- \textit{efficiency}.
The two challenges are often seen as independent, but explicit algorithms using information gathered about the variational space during optimization phases for adjusting the ansatz have been proposed as well, some of which are inspired by concrete applications in quantum chemistry or by evolutionary strategies \cite{Grimsley_Mayhall_19, Tang_Economou_19, Ostaszewski_Benedetti_19, Rattew_Wood_19}.
We now establish some notation for the general \ac{VQE} setting where we assume the most common objective: Finding the ground state energy of a Hamiltonian $H$.
Starting from an initial product state $\ket{\bar{\psi}}$, we apply parametrized unitaries $\{U_j(\theta_j)\}_{1\leq j\leq n}$ to construct the ansatz state
\begin{equation}
\ket{\psi(\theta)}\coloneqq \p{j}{n}{1} U_j(\theta_j)\ket{\bar{\psi}}.
\end{equation}
The parameters are typically initialized randomly close to zero to avoid the barren plateau problem \cite{McClean_Neven_18}.
For this work, the unitaries are going to be translationally invariant layers of one- or two-qubit rotations; consider for instance
\begin{align}
L_{zz}(\theta_j) &\coloneqq \p{k}{1}{N}\exp\left[-\frac{i\theta_j}{2} Z^{(k)}Z^{(k+1)}\right]\\
&=\exp\left[-\frac{i\theta_j}{2} \s{k}{1}{N}Z^{(k)}Z^{(k+1)}\right] ,
\end{align}
where we identified the qubits with indices $1$ and $N+1$, i.\,e.~ we adopt periodic boundary conditions.
The ordering of the gates within a layer is not relevant because they commute, but for convenience we write them such that terms acting on the first qubits are applied first.
$Z^{(k)}$ is the Pauli $Z$ operator acting on the $k$-th qubit, and we tacitly assume tensor products between operators acting on distinct qubits as well as identity factors on all remaining qubits.
Compared to proposed ansatz circuits that employ full Hamiltonian time evolution $\exp[-i\theta H]$ (see sec.~\ref{sec:qaoa}), such a layer is rather easily implemented on present quantum machines because it only requires linear connectivity and one type of two-qubit rotation.
There have been many proposed circuits to generate ansatz classes for a variety of problems, all of which can be boiled down to combining rotational gates and possibly other fixed gates such as the \acs{CNOT} or \acs{SWAP} gate (see sec.~\ref{sec:ansatze}).
For the presented optimization methods the derivatives w.\,r.\,t.~ the variational parameters $\{\theta_j\}_j$ are important, and for the above example we observe the special structure of translationally symmetric layers of Pauli rotation gates:
\begin{equation}\label{eq.vqe:layer_derivative}
\frac{\partial}{\partial\theta_j}L_{zz}(\theta_j) = \left(-\frac{i}{2}\s{k}{1}{N}Z^{(k)}Z^{(k+1)}\right) L_{zz}(\theta_j) .
\end{equation}
The derivative only produces an operator prefactor, and the prefactors of the individual gates can be collected into a single sum because the gates commute.
While the basic gates composing a unitary $U_j(\theta_j)$ typically take the form of (local) Pauli rotations, the full unitary often is more complex than the above layer and in particular the terms in $U_j$ do not need to commute.
However, the structure of rotations enables us in general to evaluate required expressions involving derivatives on a quantum computer, either via measurements of rotation generators or via ancilla qubit schemes.
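For small systems, the derivative identity in eqn.~(\ref{eq.vqe:layer_derivative}) can be checked numerically. The sketch below exploits that all $Z^{(k)}Z^{(k+1)}$ terms are diagonal in the computational basis, so the layer reduces to a diagonal matrix exponential; the system size and step width are arbitrary choices for illustration.

```python
import numpy as np

N = 3  # qubits, periodic boundary conditions

def zz_generator(N):
    """Diagonal of G = sum_k Z^(k) Z^(k+1) (indices mod N)."""
    diag = np.zeros(2 ** N)
    for idx in range(2 ** N):
        bits = [(idx >> (N - 1 - k)) & 1 for k in range(N)]
        z = [1 - 2 * b for b in bits]            # Z eigenvalues +-1
        diag[idx] = sum(z[k] * z[(k + 1) % N] for k in range(N))
    return diag

G = zz_generator(N)

def L_zz(theta):
    """Layer exp(-i theta/2 G); diagonal because all ZZ terms commute."""
    return np.exp(-0.5j * theta * G)

# Finite-difference check of the analytic derivative (-i/2) G L_zz(theta)
theta, eps = 0.7, 1e-6
fd = (L_zz(theta + eps) - L_zz(theta - eps)) / (2 * eps)
analytic = -0.5j * G * L_zz(theta)
```

The central finite-difference quotient agrees with the analytic prefactor form to high precision.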
\subsubsection{A selection of ansatz classes}\label{sec:ansatze}
Among the ansatz families proposed in the literature we present the following, which are used frequently and two of which are directly connected to this work:
\paragraph{QAOA}\label{sec:qaoa}
The Quantum Approximate Optimization Algorithm was first proposed by Farhi, Goldstone and Gutmann \cite{Farhi_Goldstone_14} in 2014 for approximate solutions to (classical) optimization problems by mapping them to a spin chain Hamiltonian.
The algorithm looks similar to adiabatic time evolution methods with an inhomogeneous time resolution which is rather coarse for typical circuit depths.
A lot of work has been put into proving properties of the \ac{QAOA}, both in general and for certain problem types, including extensions to quantum cost Hamiltonians \cite{Morales_Biamonte_19, Lloyd_18, Hastings_19, Farhi_Harrow_16}.
At the same time the algorithm has been refined, extended, and characterized on the basis of heuristics and numerical experiments, gaining insight into its properties beyond rigorous statements \cite{Wang_Rieffel_18, Mbeng_Santoro_19, Ho_Hsieh_19, Niu_Chuang_19, Akshay_Biamonte_19}.
The \ac{QAOA} circuit is constructed as follows:
For a \textit{cost Hamiltonian} $H_S$ and a so-called \textit{mixing Hamiltonian} $H_B$ one alternatingly applies the unitaries $\exp\left[-i\vartheta_j H_S\right]$ and $\exp\left[-i\varphi_j H_B\right]$ $p$ times, giving rise to a \ac{VQE} ansatz class with ``time'' parameters $\{\vartheta_j, \varphi_j\}_{1\leq j\leq p}$.
Originally, the cost Hamiltonian would encode a classical optimization problem and thus be diagonal, while the mixing Hamiltonian was chosen to be off-diagonal and has been kept fixed to the original $H_B=\s{k}{1}{N} X^{(k)}$ in many investigations.
However, new choices of mixers have been proposed and investigated as well, giving rise to the more general \ac{QAOa} \cite{Hadfield_Rieffel_19, Wang_Rieffel_19, Akshay_Biamonte_19}.
Note that for quantum systems, the terms comprising the Hamiltonian $H_S$ do not commute in general such that very large gate sequences would be necessary to realize the exact \ac{QAOA} approach including $\exp\left[-i\vartheta H_S\right]$.
In practice these blocks are commonly broken up in a Trotter-like fashion instead, yielding circuits that are implemented more readily but deviate from the original ansatz.
For the \ac{TFIM}, such a modified \ac{QAOA} ansatz has been studied intensively \cite{Wang_Rieffel_18, Ho_Hsieh_19, Mbeng_Santoro_19} and we are going to use it as a starting point for our investigations.
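The alternating structure can be sketched in an exact state-vector simulation; the two-qubit cost Hamiltonian $H_S = Z\otimes Z$, the standard mixer and the chosen angles below are illustrative assumptions and not the \ac{TFIM} circuit used in our experiments.

```python
import numpy as np

# Hypothetical two-qubit instance: cost H_S = Z x Z, mixer H_B = X_1 + X_2
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)
HS = np.kron(Z, Z)
HB = np.kron(X, I2) + np.kron(I2, X)

def expm_herm(H, t):
    """exp(-i t H) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

def qaoa_state(thetas, phis):
    """Alternate exp(-i theta_j H_S) and exp(-i phi_j H_B) on |++>."""
    psi = np.full(4, 0.5, dtype=complex)       # |++> = H^{x2}|00>
    for th, ph in zip(thetas, phis):
        psi = expm_herm(HS, th) @ psi
        psi = expm_herm(HB, ph) @ psi
    return psi

psi = qaoa_state([0.4], [0.3])                 # depth p = 1
energy = np.real(np.vdot(psi, HS @ psi))
```

For a non-diagonal quantum $H_S$, the full `expm_herm` blocks would be replaced by the Trotterized product described above.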
\paragraph{Adaptive ans\"atze}
The most prominent example of this type of ansatz, \acs{ADAPT-VQE}, tackles both the construction of a suitable ansatz class and the optimization within the constructed parameter space.
Instead of a fixed ansatz circuit layout, \acs{ADAPT-VQE} takes a pool of gates as input and iterates the two steps of the \ac{VQE} scheme:
After rating all gates the most promising one is appended to the circuit (construction) and afterwards all the circuit parameters are optimized (minimization).
The optimized parameters from the previous step are then used both for the rating of the gates in the next construction step and for the initialization of the following optimization, where newly added gates are initialized close to the identity.
For both the pool of allowed gates and the gate rating criteria there are multiple options, and we refer the reader to \cite{Grimsley_Mayhall_19, Tang_Economou_19} for more detailed descriptions.
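The grow-and-optimize loop just described can be condensed into the following toy sketch; the single-qubit Hamiltonian, the two-gate pool, the finite-difference gate rating and the crude coordinate-descent inner loop are simplifying assumptions and not the actual \acs{ADAPT-VQE} implementation.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
Z = np.diag([1.0, -1.0])
H = X                      # toy Hamiltonian with ground-state energy -1
eps = 1e-5                 # finite-difference step for all gradients

def rot(G, th):
    """exp(-i th G / 2) for a Pauli generator G (uses G^2 = I)."""
    return np.cos(th / 2) * np.eye(2) - 1j * np.sin(th / 2) * G

def energy(ops, params):
    psi = np.array([1.0, 0.0], dtype=complex)        # start from |0>
    for G, th in zip(ops, params):
        psi = rot(G, th) @ psi
    return np.real(np.vdot(psi, H @ psi))

pool = [Y, Z]              # gate pool: Pauli-Y and Pauli-Z rotations
ops, params = [], []
for _ in range(2):         # two ADAPT iterations
    # (construction) rate each pool gate by |dE/dtheta| at theta = 0
    grads = [abs(energy(ops + [G], params + [eps])
                 - energy(ops + [G], params + [-eps])) / (2 * eps)
             for G in pool]
    ops.append(pool[int(np.argmax(grads))])
    params.append(0.0)     # new gate starts close to the identity
    # (minimization) crude coordinate descent over all parameters
    for _ in range(200):
        for j in range(len(params)):
            up = [p + (eps if k == j else 0.0) for k, p in enumerate(params)]
            dn = [p - (eps if k == j else 0.0) for k, p in enumerate(params)]
            params[j] -= 0.4 * (energy(ops, up) - energy(ops, dn)) / (2 * eps)

E = energy(ops, params)
```

In this toy setting the rating step correctly picks the $Y$ rotation first, and the loop reaches the ground-state energy $-1$.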
Besides \acs{ADAPT-VQE}, multiple other methods which grow the ansatz circuit in interplay with the optimization have been proposed and demonstrated, including \textit{Rotoselect} \cite{Ostaszewski_Benedetti_19} and \acs{EVQE} \cite{Rattew_Wood_19}.
These demonstrations include the solution of 5-qubit spin chains and small molecules (lithium hydride, beryllium dihydride and a hydrogen chain) to chemical precision using simulations with and without sampling noise or quantum hardware.
We will not be using any adaptive scheme in our work, but our results on stability and overparametrization raise serious doubts as to the reliability of any adaptive ansatz method.
\subsection{Optimizers}
A variety of optimizers have been used in the context of variational quantum algorithms.
These optimizers are inspired by classical machine learning and can be sorted according to the order of information required about the cost function.
Zeroth-order or direct optimization methods only evaluate the function itself, first-order methods need access to the gradient, and second-order methods need access to the Hessian of the cost function, or some other metric reflecting the local curvature of the learning landscape.
\subsubsection{Direct optimization}
The most naive approach to optimizing a function over an input space is to simply ``look at all possible inputs'', i.\,e.~ to set up a grid and to evaluate the function on all vertices of the grid.
Even though one is unlikely to find the minimum directly in this manner, subsequent refinements of the grid around potential minima make global optimization possible.
On the one hand this method becomes exponentially expensive in the number of parameters: a 15-dimensional grid generated by only two values per parameter already requires $2^{15}>3\cdot 10^4$ function evaluations.
On the other hand, the naive grid search can be improved significantly, which allows for global optimization.
This approach has been demonstrated successfully for 15 to 20 parameters with the \acs{DIRECT} method and a budget of $2\cdot 10^5$ evaluations \cite{Kokail_Zoller_19}.
For high-dimensional applications, i.\,e.~ circuits for realistic systems with parameter count at least linear in the size of the system, any global optimization strategy seems likely to suffer from the sparse information access and to become incapable of exploring a sufficiently big fraction of the search space.
As is the case for most of the work on \acp{VQE}, we will not use any direct minimization methods, supported by the estimate that those strategies become infeasible for relevant problem sizes and by demonstrated deficiencies in comparison to gradient-based techniques \cite{Romero_Aspuru-Guzik_17}.
\subsubsection{First-Order Gradient Descent}
Optimization techniques using the gradient of the cost function are at this point the most widely used in machine learning.
Starting from the simple Gradient Descent method that updates the parameters according to the gradient and a fixed learning rate, a whole family of minimization strategies has been developed.
The improved routines are inspired by physical processes like momentum, based on heuristics like adaptive learning rate schedules, or a smart processing of the gradient information as in the Nesterov Accelerated Gradient.
A review of this development can be found e.\,g.~ in \cite{Ruder_16}, here we just present the first-order method we are going to use, the \ac{ADAM} optimizer.
\ac{ADAM}, which was proposed in 2014 \cite{Kingma_Ba_14}, is probably the most prevalent optimization strategy for deep feed-forward neural networks \cite{Karpathy_17} and has been used in \ac{VQE} settings as well \cite{Ostaszewski_Benedetti_19}.
For completeness, we briefly outline the \ac{ADAM} optimizer:
Given the cost function $E(\theta)$, where $\theta$ recollects all variational parameters, a starting point $\theta^{(0)}$ and a learning rate $\eta$, Gradient Descent computes the gradient $\nabla E(\theta^{(t)})$ at the current position and accordingly updates the parameters rescaled by $\eta$:
\begin{equation}
\theta^{(t+1)} = \theta^{(t)}-\eta \nabla E(\theta^{(t)}) .
\end{equation}
As the gradient points in the direction of steepest ascent, the parameter update is directed towards the steepest descent of the cost function and, for $\eta$ small enough, the convergence towards a minimum can be understood intuitively.
Small learning rates yield slow convergence which increases the cost of the optimization whereas choosing $\eta$ too large leads to overshooting and oscillations which might prevent convergence.
Furthermore, although the optimizer will diagnose convergence to a minimum due to a vanishing gradient, it cannot distinguish between local and global minima.
In order to fix both issues, i.\,e.~ the need for an optimally scheduled learning rate and the liability of getting stuck in local minima, various improvements have been proposed and \ac{ADAM} uses several of these upgrades.
The first feature is an \textit{adaptive, componentwise learning rate}, which was introduced in \acs{AdaGrad} \cite{Duchi_Hazan_11} and improved in \acs{RMSprop} \cite{Hinton_12} to avoid suppressed learning.
The second feature \ac{ADAM} uses is \textit{momentum}, which is inspired by the physical momentum of a ball in a landscape with friction.
This is realized by reusing past parameter upgrades weighted with an exponential decay towards the past and enables \ac{ADAM} to overcome some local minima.
The final form of the \ac{ADAM} algorithm is as follows:
Initialize with hyperparameters $\{\eta,\beta_1,\beta_2,\varepsilon\}$, momentum $m^{(0)}=0$, average squared gradient $v^{(0)}=0$ and initial position $\theta^{(0)}$.
At the t-th step, compute the gradient and update the momentum and the cumulated squared gradient as
\begin{align}
m^{(t)} &= \frac{\beta_1-\beta_1^t}{1-\beta_1^t}m^{(t-1)}+\frac{1-\beta_1}{1-\beta_1^t}\nabla E(\theta^{(t)}) ,\\
v^{(t)} &= \frac{\beta_2-\beta_2^t}{1-\beta_2^t}v^{(t-1)}+\frac{1-\beta_2}{1-\beta_2^t}\left(\nabla E(\theta^{(t)})\right)^{\odot 2}
\end{align}
where $x^{\odot 2}$ denotes the elementwise square of a vector $x$.
The parameter update then is computed from these updated quantities via
\begin{equation}
\theta^{(t+1)}=\theta^{(t)}-\frac{\eta}{\sqrt[{\odot}]{v^{(t)}}+\varepsilon}m^{(t)}
\end{equation}
with the square root of $v^{(t)}$ taken elementwise.
Besides the learning rate $\eta$, we identify the hyperparameters $\beta_1$ and $\beta_2$ as exponential memory decay factors of $m$ and $v$ respectively and the small constant $\varepsilon$ as regularizer, which avoids unreasonably large updates in flat regions and division by zero at initialization or for irrelevant parameters.
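For concreteness, the update rule above can be sketched in a few lines; the toy quadratic landscape and all hyperparameter values below are our own choices for illustration, not taken from our experiments. We use the standard two-step form (raw moment recursions plus explicit bias correction), which is algebraically equivalent to the bias-corrected recursions stated above.

```python
import numpy as np

def adam_minimize(grad, theta0, eta=0.05, beta1=0.9, beta2=0.999,
                  eps=1e-7, steps=1000):
    theta = np.asarray(theta0, dtype=float).copy()
    m = np.zeros_like(theta)    # first moment (momentum)
    v = np.zeros_like(theta)    # second moment (average squared gradient)
    for t in range(1, steps + 1):
        g = grad(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2             # elementwise square
        m_hat = m / (1 - beta1 ** t)                     # bias correction
        v_hat = v / (1 - beta2 ** t)
        theta -= eta * m_hat / (np.sqrt(v_hat) + eps)    # elementwise sqrt
    return theta

# toy landscape E(theta) = |theta|^2 with gradient 2 theta
theta_star = adam_minimize(lambda th: 2 * th, np.array([1.5, -0.7]))
```

Dividing the raw moments by $1-\beta^t$ reproduces the $(\beta-\beta^t)/(1-\beta^t)$ recursions given above term by term.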
Because of the advanced features that \ac{ADAM} uses, it has been very successful at many tasks and even though there are applications for which more basic gradient-based optimizers can be advantageous, we choose \ac{ADAM} to represent the family of local first-order optimizers.
\subsubsection{BFGS optimizer}
The second optimizer we look at is the \ac{BFGS} algorithm, which was proposed by its four authors independently in 1970 \cite{Broyden_70, Fletcher_70, Goldfarb_70, Shanno_70}.
Using first-order resources only it approximates the Hessian of the cost function and performs global line searches in the direction of the gradient transformed by the Hessian inverse.
Therefore it is a global quasi second-order method using local first-order information and its categorization is not obvious.
The algorithm is initialized with the starting point $\theta^{(0)}$ and a first guess for the approximate Hessian $H^{(0)}$ of the cost function $E$, which usually is set to the identity.
At each step of the optimization one determines the gradient, computes the direction
\begin{equation}
n^{(t)} = {H^{(t)}}^{-1}\nabla E(\theta^{(t)})
\end{equation}
and performs a line search on $\{\theta^{(t)}+\eta\; n^{(t)}|\eta\in\mathbb{R}\}$ which yields the optimal update in that direction and can optionally be restricted to a bounded parameter subspace.
Given the new point in parameter space, $\theta^{(t+1)}$, the change in the gradient $D^{(t)}=\nabla E(\theta^{(t+1)})-\nabla E(\theta^{(t)})$ is calculated and used to update the approximate Hessian via
\begin{equation*}
H^{(t+1)}=H^{(t)}+\frac{D^{(t)}{D^{(t)}}^T}{\eta^{(t)} {D^{(t)}}^T n^{(t)}}-\frac{H^{(t)}n^{(t)}{n^{(t)}}^TH^{(t)}}{{n^{(t)}}^TH^{(t)}n^{(t)}} .
\end{equation*}
As the parameter updates are found via line searches, the \ac{BFGS} algorithm is not strictly local but due to its use of local higher-order information, the global search is much more efficient than direct optimization.
The method has been successful in many applications and currently is in widespread use for \acp{VQE} \cite{Guerreschi_Smelyanskiy_17, Mbeng_Santoro_19, Wang_Rieffel_19, Grimsley_Mayhall_19, Romero_Aspuru-Guzik_17, Gard_Barnes_20}.
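The update loop can be sketched as follows; the quadratic test function $E(\theta)=\frac{1}{2}\theta^{T}A\theta-b^{T}\theta$ (with values of our own choosing) stands in for the \ac{VQE} energy and admits an exact line search, so no line-search library is needed here.

```python
import numpy as np

# Quadratic stand-in for the energy: E(theta) = 0.5 theta^T A theta - b^T theta.
# Its exact line search along a direction d is eta* = -(grad . d) / (d^T A d).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda th: A @ th - b

theta = np.array([5.0, -3.0])
H = np.eye(2)                        # first guess for the approximate Hessian
for _ in range(6):
    g = grad(theta)
    if np.linalg.norm(g) < 1e-10:    # converged: gradient vanished
        break
    n_dir = np.linalg.solve(H, g)    # n = H^{-1} grad E
    eta = -(g @ n_dir) / (n_dir @ A @ n_dir)   # exact line search (quadratic)
    theta_new = theta + eta * n_dir
    D = grad(theta_new) - g          # change in the gradient
    # BFGS update of the approximate Hessian, as in the formula above
    H = (H + np.outer(D, D) / (eta * (D @ n_dir))
         - (H @ np.outer(n_dir, n_dir) @ H) / (n_dir @ H @ n_dir))
    theta = theta_new
# theta now approximates the minimizer A^{-1} b
```

With exact line searches on a quadratic, this iteration terminates at the exact minimizer after at most as many steps as there are parameters.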
\subsubsection{Natural Gradient Descent}
The third optimization strategy we use is the \ac{NatGrad} \cite{Stokes_Carleo_19, Amari_98, Harrow_Napp_19}, which due to its increased cost per epoch is not adopted very often in machine learning settings itself but is connected to some successful methods.
As an example, Stochastic Reconfiguration, which is closely related to \ac{NatGrad} \cite{Becca_Sorella_17}, has recently been shown to work well for training \acp{RBM} to describe ground states of spin models \cite{Carleo_Troyer_17}.
Despite this success, the insights into why and under which conditions the method works remain limited and recent work has been put into understanding the learning process for the mentioned application of \acp{RBM} and the Natural Gradient Descent \cite{Park_Kastoryano_19}.
Before discussing \ac{NatGrad} and its role in the \ac{VQE} setting, we outline its update rule:
Given a starting point $\theta^{(0)}$ and a learning rate $\eta$, a step is performed by first constructing the Fubini-Study metric of the ansatz class
\begin{equation}\label{eq.natgrad:fubini}
\begin{split}
\left(F_t\right)_{ij} &\coloneqq \real{\ip{\partial_{i}\psi^{(t)}}{\partial_{j}\psi^{(t)}}}\\
&-\ip{\partial_{i}\psi^{(t)}}{\psi^{(t)}}\ip{\psi^{(t)}}{\partial_{j}\psi^{(t)}}
\end{split}
\end{equation}
at the current position and then updating the parameters via
\begin{equation}
\theta^{(t+1)} = \theta^{(t)}-\eta\;{F^{(t)}}^{-1}\nabla E(\theta^{(t)})
\end{equation}
where we abbreviated $\ket{\psi^{(t)}}\coloneqq\ket{\psi(\theta^{(t)})}$ and $\ket{\partial_i\psi^{(t)}}\coloneqq\frac{\partial}{\partial\theta_i}\ket{\psi(\theta^{(t)})}$.
The Fubini-Study metric is the quantum analogue of the Fisher information matrix in the classical Natural Gradient \cite{Amari_98}.
It describes the curvature of the ansatz class rather than the learning landscape, but often performs just as well as Hessian based methods.
In order to avoid unreasonably large updates caused by very small eigenvalues of $F$, $\eta$ has to be chosen very small in standard Natural Gradient Descent for an unpredictable number of initial learning steps.
Alternatively one can use \textit{Tikhonov} regularization which amounts to adding a small constant to the diagonal of $F$ before inverting it.
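As an illustration, the metric of eqn.~(\ref{eq.natgrad:fubini}) and the regularized update can be computed numerically for a toy single-qubit ansatz $\ket{\psi(\theta)}=R_z(\theta_2)R_x(\theta_1)\ket{0}$ of our own choosing (not the circuit used in this work); the state derivatives are taken by central finite differences and the energy gradient below is an arbitrary placeholder.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Z = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

def rot(gen, angle):
    # exp(-i angle/2 gen) for an involutory generator (gen @ gen = identity)
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * gen

def psi(theta):
    return rot(Z, theta[1]) @ rot(X, theta[0]) @ np.array([1.0, 0.0], dtype=complex)

def fubini(theta, h=1e-5):
    n = len(theta)
    p = psi(theta)
    d = []
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        d.append((psi(theta + e) - psi(theta - e)) / (2 * h))  # central difference
    F = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            F[i, j] = (np.vdot(d[i], d[j]).real
                       - (np.vdot(d[i], p) * np.vdot(p, d[j])).real)
    return F

theta = np.array([0.7, 0.3])
F = fubini(theta)               # here F = diag(1/4, sin(theta_1)^2 / 4)
grad_E = np.array([0.2, -0.1])  # placeholder energy gradient
eps_T = 1e-4                    # Tikhonov regularization constant
theta_new = theta - 0.5 * np.linalg.solve(F + eps_T * np.eye(2), grad_E)
```

For this circuit the metric can be worked out analytically, $F=\operatorname{diag}(1/4,\,\sin^2(\theta_1)/4)$, which makes the vanishing eigenvalue at $\theta_1=0$, and hence the need for regularization, explicit.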
Even though \ac{NatGrad} is simple from an operational viewpoint, it is epochwise the most expensive optimizer of the three presented here (also see sec.~\ref{sec:optcost}).
This is due to the fact that it not only uses the gradient but, in order to construct the
(Hermitian) matrix $F$ for $n$ parameters, one also needs to evaluate $\frac{1}{2}(n^2+3n)$ pairwise overlaps of the set $\{\ket{\psi}, \ket{\partial_1\psi},\dots,\ket{\partial_n\psi}\}$ (all but $\ip{\psi}{\psi}=1$).
Depending on the gates in the ansatz circuit, each of these overlaps requires at least one and possibly many individual circuit evaluations.
For circuits containing $\tilde{n}$ simple one- or two-qubit Pauli rotation gates, the number of circuits required is $\frac{1}{2}(\tilde{n}^2+3\tilde{n})$, independent of the number of shared parameters.
Symmetries of the circuit may reduce the number of distinct terms in which case fewer quantum machine runs suffice.
Taking the $j$-th parametrized unitary to have $K_j$ Hermitian generators $P_{j,k_j}$, e.\,g.~ Pauli words up to prefactors $\{c_{j,k_j}\}$, the factors in the second expression of $F$ take the shape of an expectation value (see also eqn.~(\ref{eq.vqe:layer_derivative}))
\begin{equation}
\ip{\psi}{\partial_j\psi}=\bra{\bar{\psi}}\p{l}{j-1}{1}U_l\dag \left[\s{k_j}{1}{K_j} c_{j,k_j}P_{j,k_j} \right]\p{l}{1}{j-1}U_l\ket{\bar{\psi}} .
\end{equation}
The first term in eqn.~(\ref{eq.natgrad:fubini}) requires slightly more complex circuits using one ancilla qubit and a depth which depends on the indices of the matrix entry \cite{Guerreschi_Smelyanskiy_17, Li_Benjamin_17, Romero_Aspuru-Guzik_17, Dallaire-Demers_Aspuru-Guzik_19}.
Both for simulation work and for applications on real quantum machines, the construction of the Fubini matrix is expected to take much more time than inverting it -- in sharp contrast to typical classical machine learning problems.
Given the scaling of the number of required circuits above and the fact that for a fixed number of qubits the depth has to grow at least linearly with the number of parameters, an asymptotic scaling of $\order{\tilde{n}^3}$ is a lower bound for the construction of the full matrix.
Standard matrix inversion algorithms not only show smaller or equal scaling but also carry the time cost of a \acs{FLOP} as prefactor, whereas the evaluation scaling has prefactors based on sampling for expectation values.
As the number of parameters in a typical \ac{VQE} circuit is considerably smaller than in neural networks and the circuit chosen in this work exhibits beneficial symmetries, the high cost of the method is expected to be less problematic for our setting and bearable for \ac{VQE} applications.
Indeed there have been some demonstrations of the Natural Gradient Descent and the Imaginary Time Evolution for small \ac{VQE} instances \cite{McArdle_Yuan_19, Stokes_Carleo_19, Koczor_Benjamin_19} as well as comparisons to standard gradient descent methods and imaginary time evolution for one- and two-qubit systems \cite{Yamamoto_19}.
Inspired by the classical machine learning context and aiming for reduced cost, modifications of Natural Gradient Descent have been proposed such as a (block) diagonal approximation to the Fubini-Study matrix \cite{Stokes_Carleo_19}.
We will later show that such simplifications have to be performed with caution and can disturb the optimization.
\begin{table*}[t!]
\begin{tabular}{lcl}
Operation & Quantum cost & \\\hline
Energy evaluation & $N_MK_Ht_1$ & Depending on measurement bases\\
Analytic gradient & $(Kn)N_MK_Ht_1$ & Ancilla qubit required\\
Numeric gradient (sym.) & $2nN_MK_Ht_1$ & \\
Numeric gradient (asym.) & $(n+1)N_MK_Ht_1$ & \\
\acs{SPSA} gradient &$2N_MK_Ht_1$ & \\
Fubini matrix &$\quad (Kn)^2N_at_3+(Kn)N_at_2\quad$&Ancilla qubit required\\\hline
\ac{BFGS} & $C_{\text{grad}}+\gamma N_MK_Ht_1$ & $\gamma=\order{n^{0\leq y<1}}$ expected\\
\ac{ADAM} & $C_{\text{grad}}$ & Monitoring adds $N_MK_Ht_1$ for some gradients \\
\ac{NatGrad} & $C_{\text{grad}}+C_{\text{Fubini}}$ & Cost for inverting $F$ can be neglected \\
\end{tabular}
\caption{\label{tab:optimizercost}Cost on a quantum computer for selected \ac{VQE} optimization methods and their subroutines. The optimizer costs are given per epoch, enabling us to compare the techniques beyond their simulation times, which scale differently. We neglected terms which are comparatively small for $d,n\gg1$.}
\end{table*}
\subsubsection{Optimization cost}\label{sec:optcost}
To make a fair comparison between the optimization schemes, we briefly lay out the scaling of the required operations and the resulting cost per epoch.
We will use the following notation during the comparison:
There are $n$ variational parameters in the circuit, $K_H$ terms in the Hamiltonian and on average $K=\s{j}{1}{n}K_j/n$ Pauli generators per variational parameter, with an average of $N_M$ samples required for each expectation value.
In practice, one of course would measure whole sets of operators both from the Hamiltonian and from the Pauli generator set simultaneously, such that $K$ and $K_H$ essentially are numbers of bases in which measurements are required.
For entries of the Fubini matrix we assume $N_a$ samples for sufficiently precise measurements, which has been shown numerically to be smaller than $N_M$; for further discussion see \cite{McArdle_Yuan_19}.
Finally, we introduce the time scale $t_x \coloneqq \frac{d}{x}t_{\text{gate}}+t_{\text{wrap}}$, where e.\,g.~ $t_1$ is the time needed to initialize and measure the quantum register ($t_\text{wrap}$) and to perform the circuit with depth $d$ in between ($dt_\text{gate}$).
Evaluating the gradient of the energy function can be done with different methods yielding a tradeoff between precision and cost.
On the one hand, the analytic gradient can be evaluated up to measurement precision at the expense of an ancilla qubit and a scaling prefactor $Kn$.
On the other hand, there is the finite difference method, which can be performed symmetrically, asymmetrically or via \ac{SPSA}, with cost prefactors $2n$, $n+1$ and $2$, respectively.
This means that robustness to imprecise gradients in general is a relevant property of any optimization scheme used for \acp{VQE} because these gradients are much cheaper to evaluate.
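The two cheapest estimators mentioned above can be sketched as follows; the function names and the step size $c$ are our own choices for illustration.

```python
import numpy as np

def fd_gradient_sym(E, theta, c=1e-5):
    """Symmetric finite differences: 2n energy evaluations for n parameters."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = c
        g[i] = (E(theta + e) - E(theta - e)) / (2 * c)
    return g

def spsa_gradient(E, theta, c=1e-2, rng=None):
    """SPSA: two energy evaluations regardless of the parameter count,
    using a single random simultaneous perturbation with entries +-1."""
    if rng is None:
        rng = np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    return (E(theta + c * delta) - E(theta - c * delta)) / (2 * c) * delta
```

A single \ac{SPSA} sample is only an unbiased estimate of the gradient; averaging many samples recovers it, which is why optimizers built on it must tolerate noisy gradient information.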
Computing the Fubini-Study metric requires two terms and although the measurement cost scales with $\order{(Kn)^2}$ for the first and with $\order{Kn}$ for the second, we keep both terms in the overall cost scaling because the \ac{VQE} regime implies moderate values of $Kn$.
For the scalings presented in table \ref{tab:optimizercost} we assume a homogeneous distribution of the variational gates in the circuit and that similar numbers of samples $N_M$ are required to measure expectation values of the Hamiltonian terms within one basis and each derivative for all gradient methods.
For the full optimization algorithms the costs are given per epoch, as we do not have access to a generic scaling of epochs to convergence.
Using the cost per epoch, one can convert the optimization cost from epochs to an estimated runtime on a quantum computer, going beyond estimates that are based on classical simulation runtimes.
For the \ac{BFGS} algorithm we cannot predict the number $\gamma$ of evaluations that are required for the line searches, but our numeric experiments and the linear scaling of the cost for non-\ac{SPSA} gradients suggest that it can be neglected as compared to the gradient computation.
For the quantum runtime scalings shown in figs.~\ref{fig:tfi_nondist_tq} and \ref{fig:tfi_ylay_tq} we give the time in units of $t_{\text{eval}}=N_MK_Ht_1$, assume $N_M/N_a\approx 10$ and approximate $t_1\approx t_2\approx t_3$.
\subsection{Models}\label{sec:models}
\subsubsection{\acl{TFIM}}\label{sec:tfi}
Our main model is the \ac{TFIM} on a spin chain with \ac{PBC}.
Its Hamiltonian reads
\begin{equation}\label{eq.def:H_TFI}
H_\text{TFI}= H_S + H_B \coloneqq - \s{k}{1}{N} Z^{(k)} Z^{(k+1)} - t \s{k}{1}{N} X^{(k)}
\end{equation}
where we identify the sites $1$ and $N+1$ because of the \ac{PBC} and $t$ is the transverse field.
For $t=0$, the system is the classical Ising chain, which is also called the ring of disagrees and is a special case of the \textit{MaxCut} problem \cite{Farhi_Goldstone_14, Wang_Rieffel_18}.
For $t\neq 0$ the problem is no longer motivated by a classical optimization task, and at the critical point $t=1$ the ground state exhibits long-ranged correlations.
The ground state of the \ac{TFIM} is found analytically by mapping it to a system of non-interacting fermions, such that the transformed Hamiltonian can be diagonalized \cite{Lieb_Mattis_61}.
The translational invariance of the Hamiltonian is crucial for this step and it will be important that only a small number of different (Pauli) terms can be mapped to \textit{non-interacting} fermions simultaneously.
We show the explicit computation via the Jordan-Wigner transformation in appendix \ref{sec:app_tfi}; it can also be found e.\,g.~ in \cite{Wang_Rieffel_18}.
Here we summarize the action of the mapping on the terms in the Hamiltonian which also generate the \ac{QAOA} circuit (see eqn.~(\ref{eq.res:tfi_alphas}) for the definition of $\alpha_q$):
\begin{align}\label{eq.res:ff_mapping}
\s{k}{1}{N} Z^{(k)}Z^{(k+1)} & \longrightarrow\\
(N-2r)+2&\bigoplus_{q=1}^{r} [\cos\alpha_q \;Z +\sin\alpha_q\;Y], \NN\\
\s{k}{1}{N} X^{(k)} & \longrightarrow (N-2r)+2\bigoplus_{q=1}^{r} Z
\end{align}
where the expressions on the right are understood in a \textit{fermionic operator basis}.
The ground state of $H_\text{TFI}$ is just the product of the single-fermion ground states in momentum basis and we can write out the state and its energy as
\begin{align}
E_{0} &=-E'-2\s{q}{1}{r} \sqrt{1+t^2+2t\cos \alpha_q}\;, \quad \w/\\
\alpha_q &\coloneqq \cas{(2q-1)\pi/N}{N=2r}{2q\pi/N}{N=2r+1}\label{eq.res:tfi_alphas}\\
E' &\coloneqq \cas{0}{N=2r}{1+t}{N=2r+1}.
\end{align}
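The closed-form energy can be sanity-checked against brute-force diagonalization of eqn.~(\ref{eq.def:H_TFI}) for small chains; the helper names below are our own, and the assertion-level check covers small even $N$.

```python
import numpy as np

I2 = np.eye(2)
Xp = np.array([[0.0, 1.0], [1.0, 0.0]])
Zp = np.array([[1.0, 0.0], [0.0, -1.0]])

def site_op(single, site, N):
    """Embed a single-qubit operator at position `site` of an N-qubit chain."""
    out = np.array([[1.0]])
    for k in range(N):
        out = np.kron(out, single if k == site else I2)
    return out

def tfi_energy_formula(N, t):
    """Ground-state energy from the free-fermion expressions above."""
    r = N // 2
    if N % 2 == 0:
        alphas = [(2 * q - 1) * np.pi / N for q in range(1, r + 1)]
        E_prime = 0.0
    else:
        alphas = [2 * q * np.pi / N for q in range(1, r + 1)]
        E_prime = 1.0 + t
    return -E_prime - 2.0 * sum(np.sqrt(1 + t ** 2 + 2 * t * np.cos(a))
                                for a in alphas)

def tfi_energy_ed(N, t):
    """Brute-force diagonalization of H_TFI with PBC, feasible for small N."""
    H = np.zeros((2 ** N, 2 ** N))
    for k in range(N):
        H -= site_op(Zp, k, N) @ site_op(Zp, (k + 1) % N, N)
        H -= t * site_op(Xp, k, N)
    return np.linalg.eigvalsh(H)[0]
```

For example, at the critical point $t=1$ and $N=4$ both routes give $E_0=-2(\sqrt{2+\sqrt{2}}+\sqrt{2-\sqrt{2}})\approx-5.226$.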
Because of the free fermion mapping, we can not only obtain the exact ground state of the system but also justify the success of the modified \ac{QAOA} circuit for the \ac{TFIM}.
As mentioned in sec.~\ref{sec:qaoa}, the original \ac{QAOA} proposal would use the system Hamiltonian and a mixing term as generators for the parametrized gates.
For the \ac{TFIM}, however, separating the nearest-neighbour interaction terms $H_S$ from the transverse field terms $H_B$ recombines the latter with the mixing unitary next to it absorbing one variational parameter per block.
The resulting parametrized circuit contains two types of translationally invariant layers, $L_x(\varphi)$ and $L_{zz}(\vartheta)$, of one- and two-qubit rotation gates, respectively.
Starting in the ground state of $H_B$, that is $\ket{\bar{\psi}}=\ket{+}^{\otimes N}$, we alternatingly apply these two layers $p$ times starting with $L_{zz}$.
In the free fermion picture this translates to $\ket{\bar{\psi}}=\ket{0}^{\otimes r}$ and to rotations of the $r$ fermionic states about the z-axis ($L_x$) and an axis $e_q=(0,\sin\alpha_q, \cos\alpha_q)^T$ which depends on the fermion momentum $q$ ($L_{zz}$).
For $t=0$ one can prove that this circuit can prepare the ground state exactly if and only if $p\geq r$ \cite{Mbeng_Santoro_19}, whereas for the case $t\neq 0$ only numerical evidence and a non-rigorous explanation support this claim \cite{Ho_Hsieh_19}.
This explanation compares the number of independent parameters, $2p$, to the number of constraints from fixing the state of $r$ free fermions, $2r$.
While solvability would be implied for a linear system, the given problem is non-linear and the argument remains on a non-rigorous level.
Finally, the equivalence to a system of free fermions has a practical implication for our simulations of the \ac{QAOA} circuit:
Storing the state of $r$ free fermions just requires memory for $2r$ complex numbers.
Applying the entire circuit needs $2pr$ two-dimensional matrix-vector multiplications, which is contrasted by $2pN$ matrix-vector multiplications in $2^N$ dimensions for a full circuit simulation in the qubit picture.
Using the fermionic basis for numerical simulations, results on the \ac{VQE} optimization problem for up to $N=200$ and $p>120$ have been obtained for $t=0$ \cite{Mbeng_Santoro_19}.
\subsubsection{\acl{XXZM}}\label{sec:xxz}
As a second model we consider the 1D \ac{XXZM} with \ac{PBC} which is defined by
\begin{equation}
H_\text{XXZ}=\s{k}{1}{N} \left[X^{(k)}X^{(k+1)}+Y^{(k)}Y^{(k+1)}+\Delta Z^{(k)}Z^{(k+1)}\right] .
\end{equation}
$\Delta$ is the anisotropy parameter. As in the \ac{TFIM}, the sites $1$ and $N+1$ are identified.
The Bethe ansatz reduces the eigenvalue problem for the \ac{XXZM} to a system of $N/2$ non-linear equations that can be solved numerically with an iterative scheme \cite{Karbach_Tobochnik_97, Karbach_Mueller_98}.
This results in polynomial cost for computing the ground state energy but does not yield a simple ansatz class to construct the ground state on a quantum computer or a simulation scheme of reduced complexity.
We therefore use the \ac{XXZM} as a second benchmark which models the application case more closely:
We do not know a finite gate sequence that contains the ground state but instead employ circuits composed of symmetry-preserving layers which we found to be relatively successful in experiments.
The ansatz we choose is the first-order Trotterized version of the unitary time evolution with the system Hamiltonian, applied to an antiferromagnetic ground state:
\begin{align}\label{eq.def:xxz_ansatz}
\ket{\psi(\theta)}&=\p{j}{L}{1} L_{zz}(\vartheta_j)L_{yy}(\kappa_j)L_{xx}(\varphi_j)\ket{\bar{\psi}} \\ \ket{\bar{\psi}}&=\frac{1}{\sqrt{2}}\left(\ket{01}^{\otimes N/2}\pm\ket{10}^{\otimes N/2}\right)
\end{align}
where we only treat even $N$ and $\ket{\bar{\psi}}$ is chosen symmetric under translation for $(N\mod 4)=0$ and antisymmetric for $(N\mod 4)=2$ in anticipation of the exact solution via the Bethe ansatz.
We found this circuit to be more successful at finding the ground state than the \ac{QAOA} circuit.
Even though the terms $\s{k}{1}{N} X^{(k)}X^{(k+1)}$ and $\s{k}{1}{N} Y^{(k)}Y^{(k+1)}$ do not preserve the magnetization in the $Z$-basis in general, they do so within the sector described by the above ansatz.
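The symmetry choice for the initial state in eqn.~(\ref{eq.def:xxz_ansatz}) can be verified directly for small $N$ (the helper names are our own): a one-site translation maps the two Néel components into each other, so the superposition picks up exactly the chosen sign.

```python
import numpy as np

e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def basis_state(bits):
    """Tensor product |b_1 b_2 ... b_N> in the site ordering used here."""
    v = np.array([1.0])
    for b in bits:
        v = np.kron(v, e1 if b else e0)
    return v

def psi_bar(N):
    """(|0101...> +- |1010...>)/sqrt(2), with the sign fixed by N mod 4
    as described in the text (even N only)."""
    sign = 1.0 if N % 4 == 0 else -1.0
    return (basis_state([0, 1] * (N // 2))
            + sign * basis_state([1, 0] * (N // 2))) / np.sqrt(2)

def translate(vec, N):
    """One-site cyclic translation acting on the 2^N amplitudes."""
    out = np.zeros_like(vec)
    for i in range(2 ** N):
        s = format(i, f"0{N}b")
        out[int(s[-1] + s[:-1], 2)] = vec[i]   # shift every site by one
    return out
```

For $N=4$ the state is a translation eigenstate with eigenvalue $+1$, for $N=6$ with eigenvalue $-1$, matching the symmetric and antisymmetric choices above.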
\subsection{Simulation Details}\label{sec:sim_details}
The simulations of the \ac{QAOA} circuit for the \ac{TFIM} are done in the free fermion picture yielding a quadratic scaling in $N$ for the cost function evaluation.
The circuits including $L_y$ layers and for the \ac{XXZM} do not obey the same symmetries and therefore are implemented as a full circuit simulation using \textsc{ProjectQ} \cite{ProjectQ}.
The depth of the \ac{QAOA} circuit for the \ac{TFIM} is fixed to the smallest value containing the exact ground state, $p=N/2$, which gives us $N$ variational parameters, with one added per $L_y$ layer in the second main experiment.
For the \ac{XXZ} model we choose $p=N$ resulting in $3N$ variational parameters.
All circuit simulations are performed exactly, i.\,e.~ without noise or sampling.
Furthermore we use the \textsc{SciPy} implementation of the \ac{BFGS} algorithm \cite{SciPy} and in-house routines for \ac{ADAM} and \ac{NatGrad}.
All variational parameters are initialized uniformly i.i.d. over the interval $[0.0001,0.05]$, as this corresponds to initializing the circuit close to the identity; symmetric randomization around $0$ has shown slightly worse performance in our experiments.
We bound the \ac{BFGS} optimization to one period of the rotation parameters, as this improves the line search efficiency, and found only a small dependence on the position of the interval.
For the \ac{ADAM} optimizer we fixed $\beta_1=0.9$, $\beta_2=0.999$ and $\varepsilon=10^{-7}$ and vary $\eta$ in $[0.005,0.5]$ trying to build heuristics for the particular problems.
We found non-trivial behaviour of \ac{ADAM} w.\,r.\,t.~ the learning rate, observing a strong influence on the optimization duration, for details see sec.~\ref{sec:res_tfi_qaoa}.
Furthermore, an increased regularization constant $\varepsilon$ did not yield any improvements of \ac{ADAM}.
For \ac{NatGrad} we fix the Tikhonov regularization to $\varepsilon_T=10^{-4}$ and use learning rates of $0.5$, $0.05$ and $0.1$.
Employing block diagonal approximations to the Fubini-Study matrix as suggested in \cite{Stokes_Carleo_19} was not successful due to long-range correlations between the variational parameters in the circuit.
\section{Conclusion}\label{sec:conclusion}
Our first main result shows that the \ac{BFGS} optimizer, while quick and reliable for small systems, has an increased chance of getting stuck in local minima already in medium-sized \acp{VQE}, in the range of present day and near future \ac{NISQ} devices.
This may be surprising as it has access to non-local information due to its line search subroutine.
We suspect that this aspect of the algorithm becomes less helpful for finding a global minimum because of its sparsity in high-dimensional parameter spaces.
The \ac{ADAM} optimizer on the other hand is able to find global minima also in larger parameter spaces (up to $42$ parameters) for suitably small learning rates, but this comes at the cost of a quickly increasing number of epochs to complete the optimization.
In particular we observed two effects of the learning rate $\eta$ on the runtime of \ac{ADAM}:
On the one hand, there is a threshold size of the parameter space that depends on $\eta$ above which the epoch count rapidly increases, which means that a small enough value of the learning rate is essential to avoid extremely long runtimes.
On the other hand, the optimization duration for sizes below the threshold is significantly increased when reducing $\eta$ such that it is undesirable to choose it smaller than strictly necessary.
It thus appears that tedious hyperparameter tuning may be necessary to balance these two effects.
The \ac{NatGrad} optimizer recently proposed for \acp{VQE} shows very reliable convergence to a global minimum for all tested system sizes within fewer epochs but at high per-epoch cost.
We found that Tikhonov regularization can fix the problem of getting lost in barren plateaus even after a suitable initialization.
This makes the algorithm a promising, although expensive, candidate for the optimization of future \acp{VQE}.
The increased cost for determining the Fubini matrix at each step has a particularly strong effect on the estimated quantum runtime for spin chain systems, such that for other systems with more favourable scaling \ac{NatGrad} might not only be more reliable but additionally exhibit competitive cost.
Our second main experiment treats overparametrization in \ac{VQE} ansatz classes using the example of additional rotation gates that break the symmetry of the Hamiltonian.
The \ac{BFGS} optimizer fails to find a global minimum in some instances even for small systems and in general exhibits a strongly fluctuating performance which decreases with the number of additional gate layers.
The simulation cost restricted the maximal system size for this second experiment but there is no reason to assume that a stronger overparametrization with more symmetry breaking layers would resolve these problems.
Also \ac{ADAM} showed strong susceptibility to the additional degrees of freedom.
Beyond the implications for applications, this is interesting because overparametrization is heavily used in machine learning to make the cost function tractable for optimizers like \ac{ADAM}, and we therefore appear to observe a fundamental difference between classical machine learning and \acp{VQE}.
Finally, \ac{NatGrad} showed some failed optimization runs for selected system sizes as well but mostly remained successful even for multiple additional gate layers.
It therefore rewards its increased cost per epoch with higher success rates and is the only tested optimization strategy that showed resilience to both big search spaces and local minima caused by overparametrization.
The extension of our analysis to the \ac{XXZM} confirmed the problems of the \ac{BFGS} optimizer with big search spaces and the rapid runtime growth for \ac{ADAM}.
\ac{NatGrad} performed less reliably on the \ac{XXZM}, and the per-epoch costs dominate the reduced number of epochs.
The convergence issues might be due either to local minima or to optimization interrupts triggered by a series of updates with only small improvements; preliminary insights suggest that the latter is the case and that \ac{NatGrad} could be improved by tailoring it to \acp{VQE}.
Our investigations have shown that \ac{NatGrad} might enable \acp{VQE} to solve more complex and bigger problems as it performs well on a test model with challenges representative of those in potential future applications of \acp{VQE}.
Caution is in order, however, when generalizing this result to other models as we saw in the case of the \ac{XXZM}.
\section{Acknowledgements}
We would like to thank Chae-Yeun Park, David Gross, Gian-Luca Anselmetti and Thorben Frank for helpful discussions.
We acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 – 390534769.
The authors would like to thank Covestro Deutschland AG, Kaiser Wilhelm Allee 60, 51373 Leverkusen, for the support with computational resources.
The work was conducted while all three authors were affiliated with the Institute for Theoretical Physics of the University of Cologne.
\section{Introduction}
Complementary slackness is a fundamental optimality condition, and
hence ubiquitous in optimization. In~the most general setting of
nonlinear programming (formulated below in the classical language of
nonlinear optimization), it requires a pair \(\pdpair{\xb}{\yb}\) of
primal-dual feasible solutions to an optimization problem
\begin{equation}
\label{opt-general}
\min\setst[\big]{
f(x)
}{
g_i(x) \geq 0\,
\forall i \in \set{1,\dotsc,m}
}
\end{equation}
and its (Lagrangean) dual to satisfy \(\yb_i g_i(\xb) = 0\) for every
\(i \in [m] \coloneqq \set{1,\dotsc,m}\); that~is, at least one of the
feasibility conditions \(g_i(\xb) \geq 0\) (in the primal) and \(\yb_i
\geq 0\) (in the dual) must be tight, i.e., they cannot both have a
slack. In this case, we say that \(\pdpair{\xb}{\yb}\) is
\emph{complementary}. This condition can be stated very conveniently
in structured convex optimization. For a linear program (LP)
\begin{equation}
\label{eq:generic-lp}
\max\setst{
\iprodt{c}{x}
}{
Ax = b,\,
x \geq 0
}
\end{equation}
and its dual
{\(
\min\setst{
\iprodt{b}{y}
}{
s = A^{\transp} y - c,\,
s \geq 0
}\),
}
where \(A \in \Reals^{m \times n}\) is a matrix, and \(b \in
\Reals^m\) and \(c \in \Reals^n\) are vectors, a pair
\(\pdpair{\xb}{\dualpair{\yb}{\sbar}}\) of primal-dual feasible
solutions is \emph{complementary} if \(\xb_i\sbar_i = 0\) for every
\(i \in [n]\). One can similarly consider a semidefinite program
(SDP)
\begin{equation}
\label{eq:generic-sdp}
\max\setst{
\Tr(CX)
}{
\Acal(X) = b,\,
X \succeq 0
};
\end{equation}
here, as usual, we equip the space~\(\Sym{n}\) of symmetric \(n\)-by-\(n\)
matrices with the trace inner-product \(\iprod{C}{X} \coloneqq
\Tr(CX^{\transp}) = \sum_{i,j} C_{ij} X_{ij}\), the map \(\Acal \colon
\Sym{n} \to \Reals^m\) is linear, and \(X \succeq 0\) denotes that
\(X \in \Sym{n}\) is positive semidefinite; most of our notation can
be found
in~\Cref{tbl:special-sets,tbl:linear-algebra,tbl:convex,tbl:measure,tbl:graphs}.
The dual SDP is
{\(\min\setst{
\iprodt{b}{y}
}{
S = \Acal^*(y) - C,\,
S \succeq 0
}\),
}
where \(\Acal^* \colon \Reals^m \to \Sym{n}\) is the adjoint
of~\(\Acal\), and a pair \(\pdpair{\Xb}{\dualpair{\yb}{\Sb}}\) of primal-dual
feasible solutions is called \emph{complementary} if
\(\Tr(\Xb\Sb) = 0\); equivalently, if \(\Xb\Sb = 0\), since
\(\Xb,\Sb \succeq 0\).
Strict complementarity is a refinement of the notion of complementary
slackness where we require \emph{precisely~one} of the feasibility
conditions involved to be tight, which forces the other one to have a
slack. A pair \(\pdpair{\xb}{\yb}\) of primal-dual feasible solutions
for the optimization problem in~\eqref{opt-general} and its dual is
\emph{strictly complementary} if \(\yb_i g_i(\xb) = 0\) and \(\yb_i +
g_i(\xb) > 0\) for every \(i \in [m]\). A pair
\(\pdpair{\xb}{\dualpair{\yb}{\sbar}}\) of primal-dual feasible
solutions for the LP in~\eqref{eq:generic-lp} and its dual is
\emph{strictly complementary} if \(\xb_i \sbar_i = 0\) and \(\xb_i +
\sbar_i > 0\) for every \(i \in [n]\). Finally, a pair
\(\pdpair{\Xb}{\dualpair{\yb}{\Sb}}\) of primal-dual feasible
solutions for the SDP in~\eqref{eq:generic-sdp} and its dual is
\emph{strictly complementary} if \(\Xb\Sb = 0\) and \(\Xb + \Sb \succ
0\), i.e., \(\Xb + \Sb\) is positive definite. The latter two notions
can be neatly unified in the context of convex conic optimization via
the concept of faces (see~\cite{Pataki00a}).
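To make the LP definitions above concrete, here is a minimal numerical sketch on a tiny LP instance of our own choosing (not taken from the literature), checking feasibility, complementarity, and strict complementarity of a primal-dual pair:

```python
# A tiny LP:  max { x_1 : x_1 + x_2 = 1, x >= 0 },
# with A = [1 1], b = 1, c = (1, 0), and dual
#            min { b*y : s = A^T y - c, s >= 0 }.
A = [1.0, 1.0]
b, c = 1.0, [1.0, 0.0]

xbar = [1.0, 0.0]        # primal optimal solution
ybar = 1.0               # dual optimal solution
sbar = [A[i] * ybar - c[i] for i in range(2)]   # dual slack: s = (0, 1)

assert sum(A[i] * xbar[i] for i in range(2)) == b and min(xbar) >= 0
assert min(sbar) >= 0                                 # dual feasible
assert all(xbar[i] * sbar[i] == 0 for i in range(2))  # complementary
assert all(xbar[i] + sbar[i] > 0 for i in range(2))   # strictly so
assert c[0] * xbar[0] + c[1] * xbar[1] == b * ybar    # equal objectives
```

Equality of the objective values certifies optimality of the pair, in line with the discussion of complementary slackness below.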
Complementary slackness characterizes optimality whenever Strong
Duality holds, in both LPs and SDPs: a~primal-dual pair of feasible
solutions is optimal if and only if it is complementary. This is
sometimes described by saying that \emph{complementary slackness
holds} for the (primal-dual pair of) programs. In the case of LPs,
whenever primal and dual are both feasible, there exists a primal-dual
pair of optimal solutions that is strictly
complementary~\cite{GoldmanT56a}; i.e., \emph{strict complementarity
holds} for every primal-dual pair of feasible~LPs. However, there
exist primal-dual pairs of SDPs (which satisfy strong regularity
conditions sufficient for SDP Strong Duality) that have no strictly
complementary primal-dual pair of optimal solutions
(see~\cite{ShapiroS00a}); in~such~cases, we say that \emph{strict
complementarity fails} for said primal-dual pair of SDPs. In fact,
failure of strict complementarity is deeply related to failure of
Strong Duality in the context of convex conic
optimization~\cite{TuncelW12a}.
Existence of a strictly complementary pair of optimal solutions for an SDP is
crucial for some key properties of interior-point methods used to
solve such an optimization problem; see, e.g.,
\cite{AlizadehHO98a,JiPS99a,LuoSZ98a,KojimaSS98a} for superlinear convergence
and~\cite{HalickaKR02a} for convergence of the central path to the analytic center of the
optimal face. Strict complementarity is also very useful in the
identification of optimal faces (in the primal and dual problems), for
detection of infeasibility and unboundedness as well as efficient
recovery of certificates of these \cite{YeTM94a,NesterovTY99a}. Hence,
it is important to determine whether strict complementarity holds for
a given SDP.
It is known that strict complementarity holds generically for
SDPs~\cite{AlizadehHO97a}; for a generalization to convex optimization
problems, see~\cite{PatakiT01a}. However, there are some generic
properties of LPs that fail in some natural, highly structured
formulations arising in combinatorial optimization. For instance,
whereas systems of linear inequalities are well-known to be
generically nondegenerate, the natural description of many classical
polytopes is degenerate (e.g., for the matching polytope,
see~\cite[Theorem~25.4]{Schrijver03a}), and ``\ldots most real-world
LP problems are degenerate'' according to~\cite{YeGTZ93a}. Thus, one
ought to be careful about strict complementarity when approaching
combinatorial optimization problems via SDP relaxations.
In this paper, we study how often strict complementarity holds or
fails for the MaxCut SDP and its dual, when an optimal solution of the
primal occurs at a vertex of its feasible region. Recall that the
\emph{MaxCut problem} for a given graph \(G = (V,E)\) on \(V = [n]\)
and weight function \(w \colon E \to \Reals\) can be cast as the
optimization problem
{\(
\max\setst{
\qform{C}{x}
}{
x \in \set{\pm1}^n
}\),
}
where \(C \in \Sym{n}\) is defined as
\begin{equation}
\label{eq:1}
4C
\coloneqq
\Lcal_G(w)
\coloneqq
\sum_{\set{i,j} \in E} w_{\set{i,j}} \oprodsym{(e_i-e_j)}
\end{equation}
and \(\set{e_1,\dotsc,e_n}\) is the standard basis of~\(\Reals^n\).
The matrix \(\Lcal_G(w)\) is known as (a weighted) \emph{Laplacian}
matrix of~\(G\), and it is simple to check that \(\Lcal_G(w) \succeq
0\) if \(w \geq 0\). The natural SDP relaxation for this problem is
the following \emph{MaxCut SDP}, which we write along with its dual:
\begin{equation}
\label{eq:maxcut-sdp}
\begin{array}[!h]{rl}
\max & \Tr(CX) \\
& \diag(X) = \ones, \\
& X \succeq 0,
\end{array}
\begin{array}[!h]{lrl}
= & \min & \iprodt{\ones}{y} \\
& & S = \Diag(y) - C, \\
& & S \succeq 0;
\end{array}
\end{equation}
here, \(\diag \colon \Sym{n} \to \Reals^n\) extracts the diagonal,
\(\Diag \colon \Reals^n \to \Sym{n}\) is the adjoint of~\(\diag\), and
\(\ones\) is the vector of all-ones. Strong Duality holds for every
\(C \in \Sym{n}\) since both SDPs have \emph{Slater points}, i.e.,
feasible solutions that are positive definite.
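As a sanity check on~\eqref{eq:1}, the identity \(\tfrac{1}{4}\qform{\Lcal_G(w)}{x} = \sum_{\set{i,j} \in E \colon x_i \neq x_j} w_{\set{i,j}}\) for \(x \in \set{\pm1}^n\), i.e., that the quadratic form computes cut weights, can be verified numerically; the weighted triangle below is a toy instance of our own:

```python
from itertools import product

def laplacian(n, weights):  # weights: dict {(i, j): w} with i < j
    L = [[0.0] * n for _ in range(n)]
    for (i, j), wij in weights.items():
        L[i][i] += wij; L[j][j] += wij
        L[i][j] -= wij; L[j][i] -= wij
    return L

w = {(0, 1): 1.0, (0, 2): 2.0, (1, 2): 3.0}   # a weighted triangle
L = laplacian(3, w)
for x in product((1, -1), repeat=3):
    qform = sum(L[i][j] * x[i] * x[j] for i in range(3) for j in range(3))
    cut = sum(wij for (i, j), wij in w.items() if x[i] != x[j])
    assert abs(qform / 4 - cut) < 1e-12       # (1/4) x^T L x = cut weight
```

The identity follows from \(\qform{\oprodsym{(e_i - e_j)}}{x} = (x_i - x_j)^2 \in \set{0, 4}\) for \(x \in \set{\pm1}^n\).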
The feasible region of the MaxCut SDP, called the \emph{elliptope} and
denoted by \(\Elliptope{n}\), is a compact convex set in~\(\Sym{n}\)
and its vertices are precisely its elements that are rank-one
matrices~\cite{LaurentP95a}, i.e., matrices of the form
\(\oprodsym{x}\) with \(x \in \set{\pm 1}^n\). Thus, they correspond
precisely to the \emph{exact} solutions of the MaxCut problem,
for~which the SDP is a relaxation. The vertices of~\(\Elliptope{n}\)
are by definition the points of~\(\Elliptope{n}\) whose normal cones
are full-dimensional (we postpone the definition of normal cone to
\Cref{sec:vertices}). It~is known~\cite{CarliT15a} that strict
complementarity holds in~\eqref{eq:maxcut-sdp} precisely when \(C\)
lies in the relative interior of the normal cone of \emph{some} \(X
\in \Elliptope{n}\). In~particular, if~\(\Xb\) is a vertex
of~\(\Elliptope{n}\), then strict complementarity holds
for~\eqref{eq:maxcut-sdp} whenever \(C\) is in the interior of the
normal cone of~\(\Elliptope{n}\) at~\(\Xb\). However, when \(C\) lies
in the boundary of this normal cone, it is not clear whether strict
complementarity holds.
In this paper, we prove that, when \(C\) is chosen from the boundary
of the normal cone at a vertex of the elliptope, strict
complementarity almost always fails for~\eqref{eq:maxcut-sdp}; in this
regard, surprisingly, the MaxCut SDP displays the worst possible
behavior for a convex optimization problem. In~order to make the statement ``almost always fails''
rigorous, we~shall make use of Hausdorff measures. However, our
treatment is self-contained and it does not require in-depth knowledge of the theory of
Hausdorff measures.
We also focus on two classes of objective functions
for~\eqref{eq:maxcut-sdp}. We prove that, when \(C\) is sampled
uniformly from (a normalization of) the negative semidefinite rank-one
matrices in the normal cone at a vertex of the elliptope, the
probability that strict complementarity fails
for~\eqref{eq:maxcut-sdp} is in~\((0,1)\). Naturally, we shall also
use Hausdorff measures to achieve this. Finally, we also extend a
construction due to Laurent and Poljak~\cite{LaurentP96a}, who proved
that strict complementarity may fail for~\eqref{eq:maxcut-sdp} when
\(C\) is a weighted Laplacian matrix. Their construction works for
complete graphs, and we extend it to graphs that are cosums in which
one of the summands is connected, under a mild condition relating the
maximum eigenvalues of the summands' Laplacians.
The order in which our results are presented is different from what we
described above. Since the weighted Laplacian construction
generalized from Laurent and Poljak involves only matrix analysis and
spectral graph theory, and no measure theory, we start with that
result. Only then do we delve into measure theory tools to prove
the other results. Hence, the rest of this paper is organized as
follows. \Cref{sec:prelim} contains some preliminaries, such as
notation and background results about the MaxCut
SDP~\eqref{eq:maxcut-sdp}. In \Cref{sec:maxcut-sc-failure} we discuss
failure of strict complementarity for~\eqref{eq:maxcut-sdp} using
previous results by Laurent and Poljak and we extend their Laplacian
construction to cosums of graphs. In~\Cref{sec:generic-sc-failure}, we
develop some Hausdorff measure basics and use them to prove that
strict complementarity fails generically (``almost everywhere'') for
the MaxCut~SDP~\eqref{eq:maxcut-sdp} when the objective function is in
the boundary of the normal cone of a vertex of the~elliptope.
Finally, in \Cref{sec:rank-one2}, we zoom into the set of negative
semidefinite rank-one matrices in the latter boundary, and prove that
in this case the probability that strict complementarity holds is
in~\((0,1)\).
\section{Preliminaries}
\label{sec:prelim}
We refer the reader to
\Cref{tbl:special-sets,tbl:linear-algebra,tbl:convex,tbl:measure,tbl:graphs} for
our mostly standard notation and terminology. In order to treat
\(\Reals^n\) and \(\Sym{n}\) uniformly, we adopt the language of
Euclidean spaces, i.e., finite-dimensional real vector spaces
equipped with an inner product. We denote arbitrary Euclidean spaces
by~\(\Ebb\) and~\(\Ybb\). We adopt Minkowski's notation; for
instance,
{\(
\Cscr + \Lambda \Dscr
\coloneqq
\setst{
x + \lambda y
}{
x \in \Cscr,\,
\lambda \in \Lambda,\,
y \in \Dscr
}\)
}
for \(\Cscr,\Dscr \subseteq \Ebb\) and \(\Lambda \subseteq \Reals\).
Also, whenever possible we shorten singletons to their single
elements, e.g., we write \(\Reals_+(1 \oplus \Cscr)\) to denote the
conic homogenization of the set~\(\Cscr\) in a space of one dimension higher.
\bgroup
\renewcommand{\arraystretch}{1.2}
\begin{table}[!ht]
\caption{Notation for special sets.}
\centering
\begin{tabular}{r c p{12cm} }
\toprule
\([n]\)
& \(\coloneqq\) &
\(\set{1,\dotsc,n}\) for each \(n \in \Naturals\)
\\
\(\Powerset{X}\)
& \(\coloneqq\) &
the power set of~\(X\)
\\
\(\Reals_+\)
& \(\coloneqq\) &
\(\setst{x \in \Reals}{x \geq 0}\), the set of nonnegative reals
\\
\(\Reals_{++}\)
& \(\coloneqq\) &
\(\setst{x \in \Reals}{x > 0}\), the set of positive reals
\\
\(\Reals^{n \times n}\)
& \(\coloneqq\) &
the space of \(n \times n\) real-valued matrices
\\
\(\Sym{n}\)
& \(\coloneqq\) &
\(\setst{X \in \Reals^{n \times n}}{X = X^{\transp}}\),
the space of symmetric \(n \times n\) matrices
\\
\(\Psd{n}\)
& \(\coloneqq\) &
\(\setst{X \in \Sym{n}}{\qform{X}{h} \geq 0\,\forall h \in \Reals^n}\),
the cone of \emph{positive semidefinite} matrices
\\
\(\Pd{n}\)
& \(\coloneqq\) &
\(\setst{X \in \Sym{n}}{\qform{X}{h} > 0\,\forall h \in \Reals^n \setminus \set{0}}\),
the cone of \emph{positive definite} matrices
\\
\(\Elliptope{n}\)
& \(\coloneqq\) &
the \emph{elliptope}; see~\eqref{eq:4}
\\
\bottomrule
\end{tabular}
\label{tbl:special-sets}
\end{table}
\egroup
\bgroup
\renewcommand{\arraystretch}{1.2}
\begin{table}[!ht]
\caption{Notation for linear algebra.}
\centering
\begin{tabular}{r c p{12cm} }
\toprule
\(\Acal^*\)
& \(\coloneqq\) &
the \emph{adjoint} of a linear map~\(\Acal\) between Euclidean spaces
\\
\(\Tr(X)\)
& \(\coloneqq\) &
\(\sum_{i=1}^n X_{ii}\), the \emph{trace} of \(X \in \Reals^{n \times n}\)
\\
\(I\)
& \(\coloneqq\) &
the identity matrix in the appropriate space
\\
\(\ones\)
& \(\coloneqq\) &
the vector of all-ones in the appropriate space
\\
\(\set{e_1,\dotsc,e_n}\)
& \(\coloneqq\) &
the set of canonical basis vectors of~\(\Reals^n\)
\\
\(\Image(A)\)
& \(\coloneqq\) &
the range of \(A \in \Reals^{n \times n}\)
\\
\(\Null(A)\)
& \(\coloneqq\) &
the nullspace of \(A \in \Reals^{n \times n}\)
\\
\(\supp(x)\)
& \(\coloneqq\) &
\(\setst{i \in [n]}{x_i \neq 0}\),
the \emph{support} of \(x \in \Reals^n\)
\\
\(\diag(X)\)
& \(\coloneqq\) &
\(\sum_{i=1}^n X_{ii}e_i\) for each \(X \in \Reals^{n \times n}\)
so \(\diag \colon \Reals^{n \times n} \to \Reals^n\) extracts the diagonal
\\
\(\Diag(x)\)
& \(\coloneqq\) &
\(\sum_{i=1}^n x_i \oprodsym{e_i} \in \Reals^{n \times n}\)
for each \(x \in \Reals^n\),
so \(\Diag\) is the adjoint of \(\diag\)
\\
\(\Cscr^{\perp}\)
& \(\coloneqq\) &
\(\setst{x \in \Ebb}{\iprod{x}{s} = 0\,\forall s \in \Cscr}\)
for each subset \(\Cscr\) of an Euclidean space~\(\Ebb\)
\\
\(\oplus\)
& \(\coloneqq\) &
the direct sum of two vectors or two sets of vectors
\\
\(x \perp y\)
& \(\coloneqq\) &
denotes that \(x,y \in \Ebb\) are orthogonal, i.e., \(\iprod{x}{y} = 0\)
\\
\(\succeq\)
& \(\coloneqq\) &
the \emph{Löwner partial order} on \(\Sym{n}\), i.e., \(A \succeq B \iff A-B \in \Psd{n}\) for \(A,B \in \Sym{n}\)
\\
\(\succ\)
& \(\coloneqq\) &
the partial order on~\(\Sym{n}\) defined as \(A \succ B \iff A-B \in \Pd{n}\) for \(A,B \in \Sym{n}\)
\\
\(\lambda_{\max}(A)\)
& \(\coloneqq\) &
the largest eigenvalue of \(A \in \Sym{n}\)
\\
\(A^{\MPinv}\)
& \(\coloneqq\) &
the Moore-Penrose pseudoinverse of \(A \in \Reals^{m \times n}\);
see~\cite{HornJ90a}
\\
\(\matvec\)
& \(\coloneqq\) &
the map that sends a matrix in \(\Reals^{n \times n}\) to a vector
indexed by \([n] \times [n]\)
\\
\bottomrule
\end{tabular}
\label{tbl:linear-algebra}
\end{table}
\egroup
\bgroup
\renewcommand{\arraystretch}{1.2}
\begin{table}[!ht]
\caption{Notation for convex analysis on an Euclidean space~\(\Ebb\).}
\centering
\begin{tabular}{r c p{12cm} }
\toprule
\(\cl(\Cscr)\)
& \(\coloneqq\) &
the \emph{closure} of \(\Cscr \subseteq \Ebb\)
\\
\(\interior(\Cscr)\)
& \(\coloneqq\) &
the \emph{interior} of \(\Cscr \subseteq \Ebb\)
\\
\(\ri(\Cscr)\)
& \(\coloneqq\) &
the \emph{relative interior} of a convex set \(\Cscr \subseteq \Ebb\)
\\
\(\bd(\Cscr)\)
& \(\coloneqq\) &
\(\cl(\Cscr) \setminus \interior(\Cscr)\), the \emph{boundary} of \(\Cscr \subseteq \Ebb\)
\\
\(\rbd(\Cscr)\)
& \(\coloneqq\) &
\(\cl(\Cscr) \setminus \ri(\Cscr)\),
the \emph{relative boundary} of a convex set \(\Cscr \subseteq \Ebb\)
\\
\(\Fscr \faceeq \Cscr\)
& \(\coloneqq\) &
denotes that \(\Fscr\) is a face of a convex set \(\Cscr \subseteq \Ebb\); see \Cref{sec:bdstruct}
\\
\(\Fscr \faceneq \Cscr\)
& \(\coloneqq\) &
denotes that \(\Fscr\) is a proper face of a convex set \(\Cscr \subseteq \Ebb\); see \Cref{sec:bdstruct}
\\
\(\Faces(\Cscr)\)
& \(\coloneqq\) &
the set of faces of a convex set \(\Cscr \subseteq \Ebb\); see \Cref{sec:bdstruct}
\\
\(\Normal{\Cscr}{x}\)
& \(\coloneqq\) &
the normal cone of a convex set \(\Cscr \subseteq \Ebb\) at \(x \in \Cscr\); see \eqref{eq:3}
\\
\(\Ball\)
& \(\coloneqq\) &
the unit ball in the appropriate Euclidean space
\\
\(\Ball_{\infty}\)
& \(\coloneqq\) &
the unit ball for the \(\infty\)-norm in the appropriate \(\Reals^n\)
\\
\bottomrule
\end{tabular}
\label{tbl:convex}
\end{table}
\egroup
\bgroup
\renewcommand{\arraystretch}{1.2}
\begin{table}[!ht]
\caption{Notation for the theory of Hausdorff measures in a normed space~\(\Vscr\).}
\centering
\begin{tabular}{r c p{12cm} }
\toprule
\(H_d(\Xscr)\)
& \(\coloneqq\) &
the \(d\)-dimensional Hausdorff outer measure of \(\Xscr \subseteq \Vscr\); see \eqref{eq:15}
\\
\(\lambda_d(\Xscr)\)
& \(\coloneqq\) &
the \(d\)-dimensional Lebesgue outer measure of \(\Xscr \subseteq \Reals^d\)
\\
\(\dim_H(\Xscr)\)
& \(\coloneqq\) &
the Hausdorff dimension of \(\Xscr \subseteq \Vscr\); see~\eqref{eq:17}
\\
\bottomrule
\end{tabular}
\label{tbl:measure}
\end{table}
\egroup
\bgroup
\renewcommand{\arraystretch}{1.2}
\begin{table}[!ht]
\caption{Notation for a graph \(G = (V,E)\).}
\centering
\begin{tabular}{r c p{12cm} }
\toprule
\(V(G)\)
& \(\coloneqq\) &
the vertex set of~\(G\)
\\
\(E(G)\)
& \(\coloneqq\) &
the edge set of~\(G\)
\\
\(\Lcal_G(w)\)
& \(\coloneqq\) &
the weighted Laplacian matrix of~\(G\) with weights \(w \in \Reals^E\); see~\eqref{eq:1}
\\
\(G \cosum H\)
& \(\coloneqq\) &
the cosum of graphs \(G\) and~\(H\); see~\eqref{eq:6}
\\
\bottomrule
\end{tabular}
\label{tbl:graphs}
\end{table}
\egroup
\subsection{Uniqueness of Dual Optimal Solutions}
Delorme and Poljak~\cite{DelormeP93b} proved that the dual SDP
in~\eqref{eq:maxcut-sdp} has a unique optimal solution. We shall
state a slightly generalized version of their result with some changes
and include a proof for the sake of completeness.
\begin{proposition}[{\cite[Theorem~2]{DelormeP93b}}]
\label{prop:unique}
Consider the primal-dual pair of SDPs
\begin{align}
\label{eq:unique-primal}
& \max\setst{
\Tr(CX)
}{
\Acal(X) = b,\,
X \succeq 0
}
\qquad\text{and}
\\
\label{eq:unique-dual}
& \min\setst{
\iprodt{b}{y}
}{
S = \Acal^*(y) - C,\,
S \succeq 0
},
\end{align}
where \(\Acal \colon \Sym{n} \to \Reals^m\) is a surjective linear
map, \(C \in \Sym{n}\), and \(b \in \Reals^m\). Assume there exist
\(\Xcirc \in \Pd{n}\) and \(\ycirc \in \Reals^m\) such that
\(\Acal(\Xcirc) = b\) and \(\Acal^*(\ycirc) \in \Pd{n}\). Suppose
that, for every nonzero \(y \in \Reals^m\), there exists \(z \in
\Reals^m\) such that \(\iprodt{b}{z} \neq 0\) and
\(\Null(\Acal^*(y)) \subseteq \Null(\Acal^*(z))\). Then
\eqref{eq:unique-dual} has a unique optimal solution.
\end{proposition}
\begin{proof}
Since \(\Xcirc\) is a Slater point for~\eqref{eq:unique-primal},
there exists an optimal solution for~\eqref{eq:unique-dual}.
Suppose for the sake of contradiction that \(\dualpair{y_1}{S_1}\)
and \(\dualpair{y_2}{S_2}\) are distinct optimal solutions
for~\eqref{eq:unique-dual}. Set \(\yb \coloneqq
\tfrac{1}{2}(y_1+y_2)\) and \(\Sb \coloneqq \Acal^*(\yb) - C =
\tfrac{1}{2}(S_1+S_2) \succeq 0\). We have \(\Sb \neq 0\): otherwise
\(S_1 = S_2 = 0\), so \(\Acal^*(y_1) = C = \Acal^*(y_2)\), whence
\(y_1 = y_2\) since \(\Acal^*\) is injective by the surjectivity
of~\(\Acal\); a contradiction. Then \(\dualpair{\yb}{\Sb}\) is also optimal
in~\eqref{eq:unique-dual}. Let \(\zb \in \Reals^m\) such that
\(\iprodt{b}{\zb} \neq 0\) and \(\Null(\Acal^*(y_1-y_2)) \subseteq
\Null(\Acal^*(\zb))\), which exists by assumption. Then
\begin{equation}
\label{eq:2}
\Null(\Sb) \subseteq \Null(\Acal^*(\zb));
\end{equation}
indeed, if \(h\) lies in \(\Null(\Sb) = \Null(S_1) \cap
\Null(S_2)\), then we get \(\Acal^*(y_1) h = C h = \Acal^*(y_2) h\),
whence \(h \in \Null(\Acal^*(y_1-y_2)) \subseteq
\Null(\Acal^*(\zb))\).
Define
\begin{equation*}
\beta
\coloneqq
-
\dfrac{
\iprodt{b}{\ycirc}
}{
\iprodt{b}{\zb}
},
\qquad
d
\coloneqq
\ycirc + \beta \zb,
\end{equation*}
and note that \(\iprodt{b}{d} = 0\). Let \(\mu > 0\) be the
smallest positive eigenvalue of \(\Sb \in \Psd{n} \setminus
\set{0}\). Let \(\norm{\cdot}_2\) denote the operator \(2\)-norm.
If \(\beta\norm{\Acal^*(\zb)}_2 = 0\), set \(\eps \coloneqq 1\);
otherwise set
\begin{gather*}
\eps \coloneqq \frac{\mu}{\abs{\beta}\norm{\Acal^*(\zb)}_2} > 0.
\end{gather*}
Also, set \(\yt \coloneqq \yb + \eps d\) and \(\St \coloneqq
\Acal^*(\yt) - C\). Let \(h \in \Reals^n\). Write \(h = h_1 +
h_2\) with \(h_1 \in \Null(\Sb)\) and \(h_2 \in
[\Null(\Sb)]^{\perp}\). By~\eqref{eq:2} we have
\begin{equation*}
\begin{split}
\qform{\St}{h}
& =
\qform{\Sb}{h}
+
\eps\qform{\Acal^*(d)}{h}
\\
& \geq
\mu \norm{h_2}^2
+
\eps\qform{\Acal^*(\ycirc)}{h}
+
\eps\beta\qform{\Acal^*(\zb)}{h}
\\
& \geq
\mu \norm{h_2}^2
+
\eps\qform{\Acal^*(\ycirc)}{h}
-
\eps\abs{\beta}\norm{\Acal^*(\zb)}_2\norm{h_2}^2
\\
& \geq
\eps\qform{\Acal^*(\ycirc)}{h}.
\end{split}
\end{equation*}
Thus, \(\St \succeq \eps\Acal^*(\ycirc) \succ 0\). Since \(\St \succ
0\) and \(b \neq 0\) (as \(\iprodt{b}{\zb} \neq 0\)), moving \(\yt\)
slightly in the direction \(-b\) preserves dual feasibility, yielding
a feasible solution for~\eqref{eq:unique-dual} with objective value
strictly smaller than \(\iprodt{b}{\yt} = \iprodt{b}{\yb}\), a
contradiction.
\end{proof}
\begin{corollary}[{\cite[Theorem~2]{DelormeP93b}}]
\label{cor:unique-maxcut}
The dual SDP in~\eqref{eq:maxcut-sdp} has a unique optimal solution.
\end{corollary}
\begin{proof}
We shall apply Proposition~\ref{prop:unique}
to~\eqref{eq:maxcut-sdp}. Let us see that the map \(\Acal \coloneqq
\diag\) satisfies the required properties. Take \(\Xcirc \coloneqq
I\) and \(\ycirc \coloneqq \ones\). Let \(y \in \Reals^n\) be
nonzero. Define \(z \in \Reals^n\) as \(z_i \coloneqq \abs{y_i}\)
for every \(i \in [n]\), and note that \(\Null(\Diag(y)) =
\Null(\Diag(z))\) and that \(\iprodt{\ones}{z} > 0\) since \(y \neq
0\).
\end{proof}
\subsection{Vertices of the Elliptope}
\label{sec:vertices}
Let \(\Cscr\) be a convex set in an Euclidean space~\(\Ebb\). The
\emph{normal cone} of~\(\Cscr\) at \(\xb \in \Cscr\) is
\begin{equation}
\label{eq:3}
\Normal{\Cscr}{\xb}
\coloneqq
\setst{
a \in \Ebb
}{
\iprod{a}{x} \leq \iprod{a}{\xb}\,
\forall x \in \Cscr
},
\end{equation}
i.e., it is the set of all normals to supporting halfspaces
of~\(\Cscr\) at~\(\xb\). Note that we are identifying the dual
space~\(\Ebb^*\) of~\(\Ebb\) with~\(\Ebb\). We say that \(\xb \in
\Cscr\) is a \emph{vertex} of~\(\Cscr\) if \(\Normal{\Cscr}{\xb}\) is
full-dimensional. The set of vertices of the \emph{elliptope}
\begin{equation}
\label{eq:4}
\Elliptope{n}
\coloneqq
\setst{X \in \Psd{n}}{\diag(X) = \ones}
\end{equation}
was determined by Laurent and Poljak~\cite{LaurentP95a}:
\begin{theorem}[{\cite[Theorem~2.5]{LaurentP95a}}]
\label{thm:vertices-elliptope}
The set of vertices of~\(\Elliptope{n}\) is
\(\setst[\big]{\oprodsym{x}}{x \in \set{\pm1}^n}\).
\end{theorem}
An \emph{automorphism} of~\(\Elliptope{n}\) is a nonsingular linear
operator~\(\Tcal\) on~\(\Sym{n}\) that preserves~\(\Elliptope{n}\),
i.e., \(\Tcal(\Elliptope{n}) = \Elliptope{n}\). For \(s \in
\set{\pm1}^n\), the map \(X \in \Sym{n} \mapsto \Diag(s) X \Diag(s)\)
is easily checked to be an automorphism of~\(\Elliptope{n}\). If
\(x,y \in \set{\pm1}^n\), then \(y = \Diag(s) x\) for \(s \in
\set{\pm1}^n\) defined by \(s_i \coloneqq x_i y_i\) for each \(i \in
[n]\). Hence, any vertex of~\(\Elliptope{n}\) can be mapped into the
vertex \(\oprodsym{\ones}\) by an automorphism of~\(\Elliptope{n}\);
i.e., the automorphism group of~\(\Elliptope{n}\) acts transitively on
the vertices of~\(\Elliptope{n}\). This allows us to prove many
linear properties about the vertices of~\(\Elliptope{n}\) by just
proving them for the vertex~\(\oprodsym{\ones}\). \emph{We shall make
extensive use of this fact without further mention.}
Laurent and Poljak~\cite{LaurentP95a} also provided formulas for the
normal cones of the elliptope. Here we shall use slightly different
formulas from~\cite[Proposition~2.1]{CarliT15a}:
\begin{equation}
\label{eq:normal}
\begin{split}
\Normal{\Elliptope{n}}{X}
& =
\Image(\Diag) - (\Psd{n} \cap \set{X}^{\perp})
\\
& =
\Image(\Diag) - \setst{Y \in \Psd{n}}{\Image(Y) \subseteq \Null(X)}
\end{split}
\qquad
\forall X \in \Elliptope{n}.
\end{equation}
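As an illustrative numerical check of the inclusion ``\(\supseteq\)'' in~\eqref{eq:normal} at the vertex \(\Xb = \oprodsym{\ones}\) for \(n = 3\), the sketch below samples points of the elliptope as Gram matrices of random unit vectors and verifies the defining inequality in~\eqref{eq:3}; the vectors \(y\) and \(z\) are arbitrary choices of our own:

```python
import random

random.seed(0)
n = 3
# C = Diag(y) - zz^T with z orthogonal to the all-ones vector, so that
# C lies in the normal cone at 11^T by the formula in the text.
y = [0.3, -1.0, 2.0]
z = [1.0, -2.0, 1.0]            # entries sum to 0, so z is orthogonal to 1
C = [[(y[i] if i == j else 0.0) - z[i] * z[j] for j in range(n)]
     for i in range(n)]

def iprod(A, B):                # trace inner product on Sym(3)
    return sum(A[i][j] * B[i][j] for i in range(n) for j in range(n))

Xbar = [[1.0] * n for _ in range(n)]   # the vertex 11^T
for _ in range(1000):
    # a random elliptope point: Gram matrix of random unit vectors
    vs = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
    vs = [[vi / sum(t * t for t in u) ** 0.5 for vi in u] for u in vs]
    X = [[sum(vs[i][k] * vs[j][k] for k in range(n)) for j in range(n)]
         for i in range(n)]
    assert abs(X[0][0] - 1.0) < 1e-12            # diag(X) = 1
    assert iprod(C, X) <= iprod(C, Xbar) + 1e-9  # supporting inequality
```

The inequality holds because \(\iprod{\Diag(y)}{X} = \iprodt{\ones}{y}\) is constant over the elliptope, while \(\qform{X}{z} \geq 0 = \qform{\Xb}{z}\).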
When \(\Xb\) is a vertex of~\(\Elliptope{n}\), every element of
\(\Normal{\Elliptope{n}}{\Xb}\) can be described in a unique way as an
element of the Minkowski sum on the RHS of~\eqref{eq:normal}:
\begin{lemma}
\label{le:unique}
Let \(\Xb\) be a vertex of~\(\Elliptope{n}\). Let \(\yb,\yt \in
\Reals^n\) and \(\Sb,\St \in \Psd{n} \cap \set{\Xb}^{\perp}\) be
such that \(\Diag(\yb) + \Sb = \Diag(\yt) + \St\). Then \(\yb =
\yt\) and \(\Sb = \St\).
\end{lemma}
\begin{proof}
We may assume that \(\Xb = \oprodsym{\ones}\). Then \(\Sb \in
\Psd{n} \cap \set{\oprodsym{\ones}}^{\perp}\) implies that \(\Sb
\ones = 0\). Analogously, \(\St \ones = 0\). Thus \(\yb =
\Diag(\yb)\ones = (\Diag(\yb)+\Sb)\ones = (\Diag(\yt)+\St)\ones =
\Diag(\yt)\ones = \yt\), so \(\Sb = \St\).
\end{proof}
\section{Failure of Strict Complementarity with Laplacian Objectives}
\label{sec:maxcut-sc-failure}
Existence of strictly complementary optimal solutions is known to be
equivalent to membership of the objective vector in the relative
interior of some normal cone:
\begin{proposition}[{\cite[Proposition~4.2]{CarliT15a}}]
\label{prop:sc-ri}
If the feasible region \(\Cscr\) of the SDP~\eqref{eq:generic-sdp} has a
positive definite matrix, then strict complementarity holds
for~\eqref{eq:generic-sdp} and its dual if and only if \(C \in
\ri(\Normal{\Cscr}{X})\) for some \(X \in \Cscr\).
\end{proposition}
Hence, strict complementarity is locally generic when the objective
function is chosen in the normal cone of a given feasible solution;
see~\cite[Corollary~4.3]{CarliT15a}.
By~\eqref{eq:normal} and standard convex analysis,
\begin{equation}
\label{eq:5}
\begin{split}
\ri(\Normal{\Elliptope{n}}{X})
& =
\Image(\Diag) - \ri(\Psd{n} \cap \set{X}^{\perp})
\\
& =
\Image(\Diag) - \setst{Y \in \Psd{n}}{\Image(Y) = \Null(X)}
\end{split}
\qquad
\forall X \in \Elliptope{n}.
\end{equation}
When \(\Xb\) is a vertex of~\(\Elliptope{n}\), we may
combine~\eqref{eq:normal} with \eqref{eq:5} and Lemma~\ref{le:unique}
to conclude that
\begin{equation}
\label{eq:bd-normal-vx}
\begin{split}
\bd(\Normal{\Elliptope{n}}{\Xb})
& =
\Image(\Diag) - \rbd(\Psd{n} \cap \set{\Xb}^{\perp})
\\
& =
\Image(\Diag) - \setst{Y \in \Psd{n}}{\Image(Y) \subsetneq \Null(\Xb)}.
\end{split}
\end{equation}
In~\cite{CarliT15a}, we noted that strict complementarity holds
for~\eqref{eq:maxcut-sdp} for every \(C\) of the form \(C =
\tfrac{1}{4} \Lcal_G(w)\) with \(w \geq 0\) provided that the
polar~\(\Elliptope{n}^{\circ} \coloneqq \setst{Y \in \Sym{n}}{\Tr(YX)
\leq 1 \,\forall X \in \Elliptope{n}}\) of the elliptope is facially
exposed, and we (implicitly) asked whether the latter holds. It turns
out, Laurent and Poljak~\cite[Example~5.10]{LaurentP96a} showed, even
before we raised the question, in a different context and using a
slightly different terminology, that strict complementarity may fail
for~\eqref{eq:maxcut-sdp} for every \(n \geq 3\), hence answering the
question in the negative. For each complete graph \(G = K_n\) with
\(n \geq 3\), they provided a weight function \(w \geq 0\) for which
strict complementarity fails for~\eqref{eq:maxcut-sdp} with \(C =
\tfrac{1}{4} \Lcal_G(w)\).
We generalize their construction showing that strict complementarity
may fail with a weighted Laplacian objective for graphs which are
cosums, with mild conditions on the (co-)summands. Recall that, if
\(G = (V,E)\) and \(H = (U,F)\) are graphs such that \(V \cap U =
\emptyset\), the \emph{cosum} of~\(G\) and~\(H\) is the graph
\begin{equation}
\label{eq:6}
G \cosum H
\coloneqq
(V \cup U, E \cup F \cup \setst{\set{v,u}}{(v,u) \in V \times U}).
\end{equation}
We shall use a characterization of positive semidefinite matrices
partitioned in blocks using Schur complements and the Moore-Penrose
pseudoinverse:
\begin{lemma}[{see~\cite[Theorem~4.3]{Gallier10a}}]
For \(A \in \Sym{m}\), \(C \in \Sym{n}\), and \(B \in \Reals^{m
\times n}\), we have
\begin{equation}
\label{eq:7}
\begin{bmatrix}
A & B \\
B^{\transp} & C \\
\end{bmatrix}
\succeq 0
\iff
A \succeq 0,
\quad
(I-AA^{\MPinv})B = 0,
\quad
\text{and }
C \succeq B^{\transp}A^{\MPinv}B.
\end{equation}
\end{lemma}
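The characterization~\eqref{eq:7} can be sanity-checked on a small instance with singular~\(A\); the \(3 \times 3\) matrices below are toy examples of our own. With \(A = \oprodsym{\ones}\) (\(2 \times 2\)), we have \(A^{\MPinv} = \tfrac{1}{4}A\); the column \(B = (2,2)^{\transp}\) lies in \(\Image(A)\) with \(B^{\transp}A^{\MPinv}B = 4\), so \(C = 5\) gives a positive semidefinite block matrix, while \(B = (1,-1)^{\transp}\) violates \((I - AA^{\MPinv})B = 0\), so no choice of~\(C\) helps:

```python
import random

def quad(M, h):  # quadratic form h^T M h
    n = len(M)
    return sum(M[i][j] * h[i] * h[j] for i in range(n) for j in range(n))

# B in range(A) and C >= B^T A^+ B: the block matrix is PSD.
M_good = [[1, 1, 2], [1, 1, 2], [2, 2, 5]]
# B = (1, -1)^T violates (I - A A^+) B = 0: never PSD.
M_bad = [[1, 1, 1], [1, 1, -1], [1, -1, 5]]

random.seed(1)
for _ in range(2000):
    h = [random.uniform(-1, 1) for _ in range(3)]
    assert quad(M_good, h) >= -1e-12        # M_good is PSD
assert quad(M_bad, [1.0, -1.0, -0.1]) < 0   # explicit negative direction
```

Indeed, \(\qform{M_{\text{good}}}{h} = (h_1 + h_2 + 2h_3)^2 + h_3^2 \geq 0\).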
\begin{theorem}
\label{thm:cosum}
Let \(G\) and~\(H\) be graphs with \(n_G \geq 1\) and \(n_H \geq 1\)
vertices, respectively. Let \(w_G \colon E(G) \to \Reals_{++}\) and
\(w_H \colon E(H) \to \Reals_{++}\) be weight functions, and denote
the respective weighted Laplacians by \(L_G \coloneqq \Lcal_G(w_G)\)
and \(L_H \coloneqq \Lcal_H(w_H)\). Set \(\mu_G \coloneqq
\lambda_{\max}(L_G)\) and \(\mu_H \coloneqq \lambda_{\max}(L_H)\).
Suppose that \(n_G \mu_G > n_H \mu_H\) and that \(H\) is connected.
Define \(\wb \colon E(G \cosum H) \to \Reals_{++}\) as \(\wb
\coloneqq w_G \oplus w_H \oplus \alpha \ones\) where \(\alpha
\coloneqq \mu_G/n_H\). For clarity, we denote the vectors of
all-ones in \(\Reals^{V(G)}\) and~\(\Reals^{V(H)}\) by \(\ones_G\)
and \(\ones_H\), respectively. Then the unique pair of primal-dual
optimal solutions for~\eqref{eq:maxcut-sdp} with \(4C \coloneqq
\Lcal_{G \cosum H}(\wb)\) is \(\pdpair{X^*}{\dualpair{y^*}{S^*}}\)
where
\begin{equation}
X^* \coloneqq
\oprodsym{
\begin{bmatrix*}[r]
-\ones_G\, \\
\ones_H\, \\
\end{bmatrix*}
}
\quad
y^* \coloneqq 2\alpha
\begin{bmatrix*}[r]
\,n_H \ones_G\, \\
\,n_G \ones_H\, \\
\end{bmatrix*},
\quad
S^* \coloneqq
\begin{bmatrix}
\mu_G I - L_G & \alpha \oprod{\ones_G}{\ones_H} \\
\alpha \oprod{\ones_H}{\ones_G} & \alpha n_G I - L_H \\
\end{bmatrix}.
\end{equation}
In particular, since \((X^*+S^*)(h \oplus 0) = 0\) for any
\(\mu_G\)-eigenvector~\(h\) of~\(L_G\), there is no strictly
complementary pair of primal-dual optimal solutions
for~\eqref{eq:maxcut-sdp}.
\end{theorem}
\begin{proof}
It is easy to check that \(X^*\) is feasible in the primal. We have
\[
S^*
=
2
\begin{bmatrix}
\mu_G I & 0^{\transp}\, \\
0 & \alpha n_G I \\
\end{bmatrix}
-
\begin{bmatrix}
L_G + \mu_G I & -\alpha\oprod{\ones_G}{\ones_H} \\
-\alpha\oprod{\ones_H}{\ones_G} & L_H + \alpha n_G I \\
\end{bmatrix}
=
\Diag(y^*) - \Lcal_{G \cosum H}(\wb),
\]
and the condition \(S^* \succeq 0\) is equivalent to the conditions
\begin{subequations}
\label{eq:8}
\begin{gather}
\label{eq:9}
A \coloneqq \mu_G I - L_G \succeq 0,
\\
\label{eq:10}
(I-AA^{\MPinv})\ones_G = 0,
\\
\label{eq:11}
\alpha n_G I \succeq L_H + \alpha^2 \qform{A^{\MPinv}}{\ones_G} \oprodsym{\ones_H}.
\end{gather}
\end{subequations}
Note that~\eqref{eq:9} holds trivially. Also \(A\ones_G =
\mu_G\ones_G\), so \(\ones_G \in \Image(A)\) and \eqref{eq:10} holds
since \(I-AA^{\MPinv}\) is the orthogonal projector onto \(\Null(A)
= \Image(A)^{\perp}\). Finally, \(A^{\MPinv}\ones_G =
\mu_G^{-1}\ones_G\) so \eqref{eq:11} is equivalent to \(\alpha n_G I
\succeq L_H + \alpha n_G \frac{1}{n_H} \oprodsym{\ones_H}\), which
holds since \(\alpha n_G > \mu_H\) by assumption. It follows that
\(\dualpair{y^*}{S^*}\) is feasible in the dual. It is easy to
check that \(\Tr(X^*S^*) = 0\), so~\(X^*\) and
\(\dualpair{y^*}{S^*}\) are optimal solutions. By
Corollary~\ref{cor:unique-maxcut}, \(\dualpair{y^*}{S^*}\) is the
unique optimal solution for the dual.
It remains to show that \(X^*\) is the unique optimal solution for
the primal. Let
\begin{equation*}
X =
\begin{bmatrix}
X_G & B \\
B^{\transp} & X_H \\
\end{bmatrix}
\end{equation*}
be an optimal solution for the primal. Complementary slackness
yields
\begin{equation}
\label{eq:12}
0
=
X S^*
=
\begin{bmatrix}
X_G(\mu_{G}I-L_G)+{\alpha}B\oprod{\ones_H}{\ones_G} & {\alpha}X_G\oprod{\ones_G}{\ones_H}+B({\alpha}n_{G}I-L_H) \\
B^{\transp}(\mu_{G}I-L_G)+{\alpha}X_H\oprod{\ones_H}{\ones_G} & {\alpha}B^{\transp}\oprod{\ones_G}{\ones_H}+X_H({\alpha}n_{G}I-L_H) \\
\end{bmatrix}.
\end{equation}
If \(h \perp \ones_H\) is an eigenvector of~\(L_H\), multiplying the
bottom right block in~\eqref{eq:12} on the right by~\(h\) yields
\(X_{H}h = 0\), where we used the assumption that \({\alpha}n_G >
\mu_H\). Since \(H\) is connected, this implies that \(X_H\) is a
nonnegative scalar multiple of~\(\oprodsym{\ones_H}\), and so, since
\(\diag(X_H) = \ones_H\),
\begin{equation*}
X_H = \oprodsym{\ones_H}.
\end{equation*}
Next apply \(\iprodt{\ones_G}{\cdot\ones_H}\) and
\(\iprodt{\ones_H}{\cdot\ones_G}\) to the top right block and bottom
left block of~\eqref{eq:12}, respectively, to get
\begin{gather}
\label{eq:13}
0 = n_H\qform{X_G}{\ones_G} + n_G\iprodt{\ones_G}{B\ones_H},
\\
\label{eq:14}
0 = n_H\iprodt{\ones_H}{B^{\transp}\ones_G} + n_G\qform{X_H}{\ones_H}.
\end{gather}
Hence,
\begin{equation*}
\frac{
\qform{X_G}{\ones_G}
}{
n_G^2
}
=
\frac{
\qform{X_H}{\ones_H}
}{
n_H^2
}
= 1,
\end{equation*}
so \(\qform{X_G}{\ones_G} = n_G^2\); since \(X_G \succeq 0\) and
\(\diag(X_G) = \ones_G\), every entry of~\(X_G\) lies in \([-1,1]\),
whence \(X_G = \oprodsym{\ones_G}\).
Finally, by~\eqref{eq:13} we get \(\iprodt{\ones_G}{B\ones_H} =
-n_{G}n_{H}\), and so \(B = -\oprod{\ones_G}{\ones_H}\). Hence, \(X = X^*\).
\end{proof}
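As a concrete check of \Cref{thm:cosum}, instantiated with an example of our own choosing (\(G = K_3\) with unit weights, so \(\mu_G = 3\), and \(H = K_2\), so \(\mu_H = 2\) and \(n_G\mu_G = 9 > 4 = n_H\mu_H\)), the exact-arithmetic sketch below verifies \(S^* = \Diag(y^*) - \Lcal_{G \cosum H}(\wb)\) via \(X^*S^* = 0\) and \((X^* + S^*)(h \oplus 0) = 0\) for a \(\mu_G\)-eigenvector~\(h\) of~\(L_G\):

```python
from fractions import Fraction as F

def laplacian(n, edges, w):
    # weighted Laplacian, cf. the definition of L_G(w) in the paper
    L = [[F(0)] * n for _ in range(n)]
    for (i, j), wij in zip(edges, w):
        L[i][i] += wij; L[j][j] += wij
        L[i][j] -= wij; L[j][i] -= wij
    return L

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# G = K_3 with unit weights (mu_G = 3), H = K_2 (mu_H = 2):
# n_G * mu_G = 9 > 4 = n_H * mu_H, and H is connected.
nG, nH = 3, 2
muG = F(3)
alpha = muG / nH                                   # alpha = mu_G / n_H
edges = ([(0, 1), (0, 2), (1, 2)]                  # E(G)
         + [(3, 4)]                                # E(H)
         + [(i, j) for i in range(3) for j in (3, 4)])  # cosum edges
w = [F(1)] * 4 + [alpha] * 6                       # w_G, w_H, alpha*ones
L = laplacian(5, edges, w)

v = [-1, -1, -1, 1, 1]                             # X* = v v^T
X = [[F(vi * vj) for vj in v] for vi in v]
y = [2 * alpha * nH] * nG + [2 * alpha * nG] * nH  # y* from the theorem
S = [[(y[i] if i == j else F(0)) - L[i][j] for j in range(5)]
     for i in range(5)]                            # S* = Diag(y*) - L

XS = matmul(X, S)
assert all(XS[i][j] == 0 for i in range(5) for j in range(5))  # X* S* = 0
h = [F(1), F(-1), F(0), F(0), F(0)]  # a mu_G-eigenvector of L_G, padded
M = [[X[i][j] + S[i][j] for j in range(5)] for i in range(5)]
assert all(sum(M[i][k] * h[k] for k in range(5)) == 0 for i in range(5))
```

The last assertion exhibits the common null vector of \(X^* + S^*\) that witnesses the failure of strict complementarity; positive semidefiniteness of~\(S^*\) is not re-verified here, as it is established in the proof.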
Note that the dimension of the \(\lambda_{\max}(\Lcal_G(w_G))\)-eigenspace
controls the ``degree'' to which strict complementarity fails
in~\Cref{thm:cosum}. In~particular, when \(G\) is the complete graph
and \(w_G = \ones\), we have \(\rank(X^*) + \rank(S^*) = 1 + n_H\).
\Cref{thm:cosum} shows that, if \(F\) is a graph which is a cosum
(i.e., the complement of~\(F\) is not connected) \(F = G \cosum H\),
where \(G\) has at least one edge and \(H\) is connected, then there
is a positive weight function \(w \colon E(F) \to \Reals_{++}\) such
that strict complementarity fails for~\eqref{eq:maxcut-sdp} with \(C =
\tfrac{1}{4}\Lcal_F(w)\); one may just fix \(w_H \in
\Reals_{++}^{E(H)}\) arbitrarily, e.g., \(w_H = \ones\), and set \(w_G
\coloneqq M \ones\) for large enough~\(M\) so that \(n_{G}\mu_{G} >
n_{H}\mu_{H}\). A natural question following from this is:
\begin{problem}
Characterize the set of graphs for which there exists a positive
weight function on the edges such that strict complementarity fails
for~\eqref{eq:maxcut-sdp} when \(4C\) is the corresponding weighted
Laplacian matrix.
\end{problem}
\section{Generic Failure of Strict Complementarity on the Boundaries
of Normal Cones}
\label{sec:generic-sc-failure}
In this section, we consider how often strict complementarity holds
for~\eqref{eq:maxcut-sdp} when \(C\) lies in the (relative) boundary
of \(\Normal{\Elliptope{n}}{\Xb}\) for some vertex \(\Xb\)
of~\(\Elliptope{n}\). Note that this boundary is described as a
Minkowski sum in~\eqref{eq:bd-normal-vx}.
We start by considering the case \(n = 3\),
where~\eqref{eq:bd-normal-vx} simplifies to
\begin{equation}
\label{eq:bd-normal-vx-3}
\bd(\Normal{\Elliptope{3}}{\oprodsym{\xb}})
=
\Image(\Diag) - \setst{\oprodsym{z}}{z \in \set{\xb}^{\perp}}
\end{equation}
for every \(\xb \in \set{\pm1}^3\).
\begin{proposition}
\label{prop:sc-fail3}
Let \(\xb \in \set{\pm1}^3\), and let \(C = \Diag(\yb) -
\oprodsym{\zb}\) for some \(\yb \in \Reals^3\) and \(\zb \in
\set{\xb}^{\perp}\), so that \(C \in
\bd(\Normal{\Elliptope{3}}{\oprodsym{\xb}})\). Then strict
complementarity holds for~\eqref{eq:maxcut-sdp} if and only if
\(\zb_i = 0\) for some \(i \in [3]\).
\end{proposition}
\begin{proof}
Set \(\Sb \coloneqq \Diag(\yb) - C = \oprodsym{\zb}\) and \(\Xb
\coloneqq \oprodsym{\xb}\). Clearly, \(\dualpair{\yb}{\Sb}\) is
feasible in the dual and \(\Tr(\Sb\Xb) = (\iprodt{\zb}{\xb})^2 =
0\), so \(\pdpair{\Xb}{\dualpair{\yb}{\Sb}}\) is a pair of
primal-dual optimal solutions. By
Corollary~\ref{cor:unique-maxcut}, \(\dualpair{\yb}{\Sb}\) is the
unique optimal solution in the dual.
Suppose that \(\zb_i \neq 0\) for every \(i \in [3]\). We claim
that \(\Xb\) is the unique optimal solution in the primal. Indeed,
let \(X \in \Elliptope{3}\) be optimal in the primal. Then \(0 =
\Tr(\Sb X) = \qform{X}{\zb}\) so \(X\zb = 0\). Thus,
\[
0 =
\begin{bmatrix}
1 & X_{12} & X_{13} \\
X_{12} & 1 & X_{23} \\
X_{13} & X_{23} & 1
\end{bmatrix}
\begin{bmatrix}
\zb_1 \\ \zb_2 \\ \zb_3
\end{bmatrix}
=
\begin{bmatrix}
\zb_1 + \zb_2 X_{12} + \zb_3 X_{13} \\
\zb_1 X_{12} + \zb_2 + \zb_3 X_{23} \\
\zb_1 X_{13} + \zb_2 X_{23} + \zb_3
\end{bmatrix},
\]
so
\[
\begin{bmatrix}
\zb_2 & \zb_3 & 0 \\
\zb_1 & 0 & \zb_3 \\
0 & \zb_1 & \zb_2
\end{bmatrix}
\begin{bmatrix}
X_{12} \\ X_{13} \\ X_{23}
\end{bmatrix}
=
-\zb.
\]
The determinant of the matrix defining the latter linear system is
\(-2\zb_1\zb_2\zb_3 \neq 0\), so the system has a unique solution;
since the off-diagonal entries of \(\Xb\) satisfy it, we conclude
that \(X = \Xb\).
Suppose now that \(\zb_i = 0\) for some \(i \in [3]\). If \(\zb =
0\) then \(\pdpair{I}{\dualpair{\yb}{0}}\) satisfies strict
complementarity, so assume \(\zb \neq 0\). Set \(\xt \coloneqq
\Diag(\ones-e_i)\xb\) and \(\Xt \coloneqq \oprodsym{\xt} +
\oprodsym{e_i} \in \Elliptope{3}\). Then \(\Tr(\Sb\Xt) =
\qform{(\oprodsym{\xt} + \oprodsym{e_i})}{\zb} =
(\iprodt{\zb}{\xt})^2 + \zb_i^2 = 0\) since \(\iprodt{\zb}{\xt} =
\iprodt{\zb}{\xb} = 0\). Hence,
\(\pdpair{\Xt}{\dualpair{\yb}{\Sb}}\) is a strictly complementary
pair of primal-dual optimal solutions for~\eqref{eq:maxcut-sdp}.
\end{proof}
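The \(3 \times 3\) linear system above can be checked numerically; the
following sketch (an illustrative check assuming NumPy, not part of the
formal argument) takes \(\xb = \ones\) and \(\zb = (1,1,-2)^{\transp}
\perp \xb\), with all entries of~\(\zb\) nonzero:

```python
import numpy as np

xbar = np.array([1.0, 1.0, 1.0])
z = np.array([1.0, 1.0, -2.0])          # z perpendicular to xbar, no zero entry
M = np.array([[z[1], z[2], 0.0],
              [z[0], 0.0, z[2]],
              [0.0, z[0], z[1]]])
# determinant of the system matrix equals -2 z_1 z_2 z_3
assert np.isclose(np.linalg.det(M), -2 * z[0] * z[1] * z[2])
v = np.linalg.solve(M, -z)
# the unique solution is (X_12, X_13, X_23), the off-diagonals of xbar xbar^T
expected = np.array([xbar[0]*xbar[1], xbar[0]*xbar[2], xbar[1]*xbar[2]])
assert np.allclose(v, expected)
```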
For \(n \geq 4\), strict complementarity in~\eqref{eq:maxcut-sdp} is
not as easily characterized. However, we can prove the following
sufficient condition for the failure of strict complementarity, which
will be enough for our purposes.
\begin{theorem}
\label{thm:sc-failure-high-rank}
Let \(n \geq 3\). Let \(C = \Diag(\yb) - \Sb\) for some \(\yb \in
\Reals^n\) and \(\Sb \in \Psd{n}\), so that \(C \in
\bd(\Normal{\Elliptope{n}}{\oprodsym{\ones}})\). Suppose that
\(\Null(\Sb) = \linspan\set{\ones,h}\) for some \(h \in
\set{\ones}^{\perp}\) and that \(h\) has at least three distinct
coordinates. Then strict complementarity fails
for~\eqref{eq:maxcut-sdp}.
\end{theorem}
\begin{proof}
Set \(y^* \coloneqq \yb\) and \(S^* \coloneqq \Diag(y^*) - C =
\Sb\). Set \(\Xb \coloneqq \oprodsym{\ones}\). Clearly,
\(\dualpair{y^*}{S^*}\) is feasible in the dual and \(\Tr(S^*\Xb) =
0\), so \(\pdpair{\Xb}{\dualpair{y^*}{S^*}}\) is a pair of
primal-dual optimal solutions. By
Corollary~\ref{cor:unique-maxcut}, \(\dualpair{y^*}{S^*}\) is the
unique optimal solution in the dual. We shall prove that \(\Xb\) is
the unique optimal solution in the primal.
Let \(X \in \Elliptope{n}\) be an optimal solution in the primal.
By complementary slackness, \(\Tr(XS^*) = 0\), so \(S^*X = 0\) and
\(\Image(X) \subseteq \Null(S^*) = \linspan\set{\ones,h}\). Hence,
\(X = \alpha_1 \oprodsym{\ones} + \alpha_2 \oprodsym{h} + \alpha_3
(\oprod{h}{\ones} + \oprod{\ones}{h})\) for some \(\alpha \in
\Reals^3\). Since \(\diag(X) = \ones\), we find that \(\alpha_1 +
\alpha_2 h_i^2 + 2\alpha_3 h_i = 1\) for every \(i \in [n]\). Let
\(i,j,k \in [n]\) such that \(|\set{h_i,h_j,h_k}| = 3\). Then
\[
\begin{bmatrix}
1 & 2 h_i & h_i^2\thinspace \\[2pt]
1 & 2 h_j & h_j^2\thinspace \\[2pt]
1 & 2 h_k & h_k^2\thinspace \\
\end{bmatrix}
\begin{bmatrix}
\alpha_1 \\
\alpha_3 \\
\alpha_2
\end{bmatrix}
= \ones. \] The determinant of the matrix defining this linear
system is a column-scaled Vandermonde determinant, equal to \(2 (h_j -
h_i) (h_k - h_i) (h_k - h_j) \neq 0\) by assumption. Hence,
\(\alpha = e_1\) is its unique solution. Thus, \(X =
\oprodsym{\ones}\).
\end{proof}
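The determinant formula and the conclusion \(\alpha = e_1\) can be
confirmed numerically; the sketch below (assuming NumPy, for
illustration only) uses \(h = (2,-1,-1,0)^{\transp} \perp \ones\),
which has three distinct coordinates:

```python
import numpy as np

h = np.array([2.0, -1.0, -1.0, 0.0])    # h perpendicular to 1, three distinct values
i, j, k = 0, 1, 3                        # indices with pairwise distinct h-values
V = np.array([[1.0, 2*h[i], h[i]**2],
              [1.0, 2*h[j], h[j]**2],
              [1.0, 2*h[k], h[k]**2]])
# column-scaled Vandermonde: det = 2 (h_j-h_i)(h_k-h_i)(h_k-h_j)
assert np.isclose(np.linalg.det(V),
                  2*(h[j]-h[i])*(h[k]-h[i])*(h[k]-h[j]))
alpha = np.linalg.solve(V, np.ones(3))
# unique solution alpha = e_1, so X = 1 1^T
assert np.allclose(alpha, [1.0, 0.0, 0.0])
```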
\Cref{thm:sc-failure-high-rank} seems to indicate that strict
complementarity fails ``almost everywhere'' on the boundary of
\(\Normal{\Elliptope{n}}{\oprodsym{\ones}}\), since the high rank
matrices make up the bulk of the boundary (consider that the set of
nonsingular matrices is open and dense) and for ``most'' of them the
extra vector \(h\) in the nullspace has at least three distinct
coordinates. Unfortunately, we are dealing with somewhat complicated
sets (e.g., the high rank matrices in the boundary of a normal cone).
In order to make our previous statements precise, we shall make use of
the theory of Hausdorff measures, which we introduce next.
\subsection{Preliminaries on Hausdorff Measures}
We refer the reader to~\cite{Rogers98a}, though we use different
notation and more standard terminology. See
also~\cite{DrusvyatskiyL11a,PatakiT01a} for a somewhat similar presentation. We
focus our presentation on finite-dimensional normed spaces (over the
reals) but most of it could be developed for arbitrary metric spaces.
Our main normed spaces are (subspaces of) \(\Reals^n\) and
\(\Sym{n}\). Since these are Euclidean spaces, they are equipped with
a norm induced by their inner-products, and that is the norm that we
will consider unless explicitly stated otherwise. We shall only use
other norms in~\Cref{sec:rank-one2}.
Let \(\Vscr\) be a finite-dimensional normed space. Let \(d \in
\Reals_+\) and \(\eps \in \Reals_{++}\). For each \(\Xscr \subseteq
\Vscr\), define
\[
H_d^{\eps}(\Xscr)
\coloneqq
\inf\setst*{
\sum_{i=0}^{\infty}
\sqbrac[\big]{\diam(\Uscr_i)}^d
}{
\set{\Uscr_i}_{i \in \Naturals} \subseteq \Powerset{\Vscr},\,
\Xscr \subseteq \bigcup_{i=0}^{\infty} \Uscr_i,\,
\diam(\Uscr_i) < \eps\,\forall i \in \Naturals
},
\]
where the \emph{diameter} of \(\Uscr \subseteq \Vscr\) is
\(\diam(\Uscr) \coloneqq \sup_{x,y \in \Uscr} \norm{x-y}\). The
function \(H_d \colon \Powerset{\Vscr} \to [0,+\infty]\) defined by
\begin{equation}
\label{eq:15}
H_d(\Xscr)
\coloneqq
\sup_{\mathclap{\eps>0}} H_d^{\eps}(\Xscr)
=
\lim_{\eps \downarrow 0} H_d^{\eps}(\Xscr)
\qquad
\forall \Xscr \subseteq \Vscr
\end{equation}
is an outer measure on~\(\Vscr\).
Hence, the restriction of~\(H_d\) to the \(H_d\)-measurable subsets
of~\(\Vscr\) is a complete measure on~\(\Vscr\), called the
\emph{\(d\)-dimensional Hausdorff measure} on~\(\Vscr\). The
\(0\)-dimensional Hausdorff measure~\(H_0\) of a set is its
cardinality, \(H_1\) is its length, \(H_2\) is its area, and so on.
Let \(d\) be a positive integer and set \(\Vscr \coloneqq \Reals^d\).
Let \(\lambda_d \colon \Powerset{\Reals^d} \to [0,+\infty]\) denote the
\(d\)-dimensional Lebesgue outer measure on~\(\Reals^d\). It can be
proved~\cite[Theorem~30]{Rogers98a} that
\begin{equation}
\label{eq:Lebesgue-Hausdorff}
\frac{\lambda_d(\Xscr)}{\lambda_d(\Ball)}
=
\frac{H_d(\Xscr)}{2^d}
\qquad
\forall \Xscr \subseteq \Reals^d.
\end{equation}
In~particular, the \(H_d\)-measurable subsets of~\(\Reals^d\) are the
same as the \(\lambda_d\)\nobreakdash-measurable sets.
Let \(a,b \in \Reals_+\) with \(a < b\) and let \(\Xscr \subseteq \Vscr\).
It is not hard to prove from the definition that
\begin{alignat}{2}
\label{eq:16}
H_a(\Xscr) < \infty & \implies H_{b}(\Xscr) = 0, \\
H_b(\Xscr) > 0 & \implies H_{a}(\Xscr) = \infty.
\end{alignat}
Hence,
\begin{equation}
\label{eq:17}
\sup\setst{d \in \Reals_+}{H_d(\Xscr) = \infty}
=
\inf\setst{d \in \Reals_+}{H_d(\Xscr) = 0},
\end{equation}
and the common value in~\eqref{eq:17} is the \emph{Hausdorff dimension}
of~\(\Xscr\), denoted by \(\dim_H(\Xscr)\). In~particular,
\begin{equation}
\label{eq:18}
\text{
%
if~\(d \in \Reals_+\) and \(\Xscr \subseteq \Vscr\)
satisfy \(H_d(\Xscr) \in
(0,\infty)\), then \(\dim_H(\Xscr) = d\).%
}
\end{equation}
We may now define \emph{genericity} precisely. Let \(\Xscr\) be a subset of
a finite-dimensional normed space~\(\Vscr\). Let \(P\) be a property that may hold or
fail for points in~\(\Xscr\), i.e., \(P(x)\) is either true or false
for each \(x \in \Xscr\). We say that \(P\) \emph{holds generically
on~\(\Xscr\)} if \(H_d(\setst{x \in \Xscr}{\text{\(P(x)\) is
false}}) = 0\) for \(d \coloneqq \dim_H(\Xscr)\). We say that
\(P\) \emph{fails generically on~\(\Xscr\)} if the negation of~\(P\)
holds generically on~\(\Xscr\). In~\Cref{sec:generic-failure}, we
will use \Cref{thm:sc-failure-high-rank} to prove that strict
complementarity fails generically at the boundary of the normal cone
of any vertex of~\(\Elliptope{n}\), for \(n \geq 3\), modulo some
qualification on the ambient space. In the remainder of this section
and in the next one, we will describe a few more measure-theoretic
tools that we shall use towards this goal.
Let \(\Vscr\) and~\(\Uscr\) be finite-dimensional normed spaces. Let
\(\Xscr \subseteq \Vscr\). Recall that a function \(\varphi \colon \Xscr \to
\Uscr\) is \emph{Lipschitz continuous} with Lipschitz constant \(L >
0\) if
\begin{equation}
\label{eq:19}
\norm{\varphi(x)-\varphi(y)} \leq L\norm{x-y}
\qquad
\forall x,y \in \Xscr.
\end{equation}
The following is well known and easy to prove:
\begin{theorem}
\label{thm:Lipschitz}
Let \(\Vscr\) and~\(\Uscr\) be finite-dimensional normed spaces. Let \(\Xscr \subseteq
\Vscr\) and \(d \in \Reals_+\). Let \(\varphi \colon \Xscr \to \Uscr\) be
Lipschitz continuous with Lipschitz constant~\(L\). Then
\begin{equation}
\label{eq:20}
H_d\paren[\big]{\varphi(\Xscr)} \leq L^d H_d(\Xscr).
\end{equation}
\end{theorem}
\Cref{thm:Lipschitz} is especially useful to determine some Hausdorff
dimensions via bi-Lipschitz maps. We recall the definition here. Let
\(\Vscr\) and~\(\Uscr\) be finite-dimensional normed spaces. Let \(\Xscr \subseteq \Vscr\),
and let \(\varphi \colon \Xscr \to \Uscr\) be a one-to-one function with
range \(\Yscr \coloneqq \varphi(\Xscr)\). We say that \(\varphi\) is
\emph{bi-Lipschitz continuous} with Lipschitz constants \(L_1 > 0\)
and \(L_2 > 0\) if \(\varphi\) is Lipschitz continuous with Lipschitz
constant~\(L_1\) and \(\varphi^{-1} \colon \Yscr \to \Vscr\) is Lipschitz
continuous with Lipschitz constant~\(L_2\).
\begin{corollary}
\label{cor:bi-Lipschitz}
Let \(\Vscr\) and~\(\Uscr\) be finite-dimensional normed spaces. Let \(\Xscr \subseteq
\Vscr\) and \(d \in \Reals_+\). Let \(\varphi \colon \Xscr \to \Uscr\) be
bi-Lipschitz continuous with Lipschitz constants~\(L_1\)
and~\(L_2\). Then
\begin{equation}
\label{eq:21}
L_2^{-d} H_d(\Xscr) \leq H_d(\varphi(\Xscr)) \leq L_1^d H_d(\Xscr).
\end{equation}
In~particular, if \(H_d(\Xscr) \in (0,\infty)\), then
\(\dim_H(\varphi(\Xscr)) = d\).
\end{corollary}
This corollary may be used, for~instance, to regard any
\(d\)-dimensional Euclidean space~\(\Vscr\) as~\(\Reals^d\) by
considering the coordinate map \(\varphi \colon \Vscr \to \Reals^d\)
with respect to a fixed orthonormal basis of~\(\Vscr\). Another
frequent use of \Cref{cor:bi-Lipschitz} goes as follows. Equip the
space \(\Sym{n}\) with the trace inner-product. If \(Q \in \Reals^{n
\times n}\) is an orthogonal matrix, the map \(X \in \Sym{n} \mapsto
QXQ^{\transp}\) preserves inner-products, and hence norms and
distances; hence, the map is Lipschitz continuous with Lipschitz
constant~1. Its inverse is \(X \in \Sym{n} \mapsto Q^{\transp}XQ\)
and so the map \(X \in \Sym{n} \mapsto QXQ^{\transp}\) is bi-Lipschitz
continuous with Lipschitz constants~\(1\) and~\(1\).
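This isometry property is easy to confirm numerically; the sketch
below (an illustrative check assuming NumPy) builds a random
orthogonal matrix \(Q\) and verifies that conjugation preserves the
trace inner-product, and hence distances:

```python
import numpy as np

rng = np.random.default_rng(0)
# random orthogonal Q via a QR factorization
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
X = rng.standard_normal((4, 4)); X = (X + X.T) / 2
Y = rng.standard_normal((4, 4)); Y = (Y + Y.T) / 2
# conjugation by Q preserves the trace inner-product ...
assert np.isclose(np.trace(Q @ X @ Q.T @ Q @ Y @ Q.T), np.trace(X @ Y))
# ... hence also Frobenius norms and distances
assert np.isclose(np.linalg.norm(Q @ (X - Y) @ Q.T), np.linalg.norm(X - Y))
```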
The next result is useful for determining the Hausdorff dimension of
some simple unbounded sets in the \(\sigma\)-finite case,
when~\eqref{eq:18} is not directly applicable:
\begin{proposition}
\label{prop:dim-countable}
Let \(\Xscr\) be a subset of a finite-dimensional normed space~\(\Vscr\). For each \(i
\in \Naturals\), let \(\Yscr_i\) be a subset of a finite-dimensional normed
space~\(\Uscr_i\), and let \(\varphi_i \colon \Yscr_i \to \Vscr\) be a
Lipschitz continuous function with Lipschitz constant~\(L_i\). If
\(\Xscr \subseteq \bigcup_{i \in \Naturals} \varphi_i(\Yscr_i)\), then
\(\dim_H(\Xscr) \leq \sup_{i \in \Naturals} \dim_H(\Yscr_i)\).
\end{proposition}
\begin{proof}
Set \(d \coloneqq \sup_{i \in \Naturals} \dim_H(\Yscr_i)\). Let \(\dbar
> d\). Then~\eqref{eq:17} yields \(H_{\dbar}(\Yscr_i) = 0\) for each \(i
\in \Naturals\), so by \Cref{thm:Lipschitz} we have \(H_{\dbar}(\Xscr)
\leq \sum_{i \in \Naturals} L_i^{\dbar} H_{\dbar}(\Yscr_i) = 0\).
Since \(\dbar > d\) was arbitrary, \(\dim_H(\Xscr) \leq d\)
by~\eqref{eq:17}.
\end{proof}
For instance, \(\Reals^d = \bigcup_{M \in \Naturals} M\Ball\) and the
ball \(M\Ball \subseteq \Reals^d\) with nonzero~\(M\) has Hausdorff
dimension~\(d\) by~\eqref{eq:18} and~\eqref{eq:Lebesgue-Hausdorff}, so
\Cref{prop:dim-countable} shows that \(\dim_H(\Reals^d) \leq d\).
Since \(\Reals^d \supseteq \Ball\) shows that \(H_d(\Reals^d) \geq
H_d(\Ball) > 0\) by~\eqref{eq:Lebesgue-Hausdorff}, we conclude by~\eqref{eq:18} that
\(\dim_H(\Reals^d) = d\). Together with \Cref{cor:bi-Lipschitz}, this
shows that Hausdorff dimension and the usual (linear) dimension
coincide on linear subspaces, and hence also for convex sets by
translation invariance.
\subsection{Hausdorff Measures and the Boundary Structure of Convex Sets}
\label{sec:bdstruct}
In this section we collect some results relating Hausdorff measures
and the boundary structure of convex sets, including a quick review of
basic facts about faces.
The following result is well known:
\begin{theorem}
\label{thm:dimH-bd-compact}
Let \(\Ebb\) be a Euclidean space. If \(\Cscr \subseteq \Ebb\) is a
compact convex set with dimension \(d \geq 1\), then
\(\dim_H(\rbd(\Cscr)) = d-1\).
\end{theorem}
\begin{proof}
We may assume that \(\dim(\Ebb) = d\) so that \(\Cscr\) has nonempty
interior. By choosing an orthonormal basis for~\(\Ebb\), we may
assume that \(\Ebb = \Reals^d\). We may also assume that \(0 \in
\interior(\Cscr)\) by translation invariance of Hausdorff measure.
Set \(X \coloneqq
\bd(\Ball_{\infty})\), and note that \(H_{d-1}(X) \in (0,+\infty)\)
by~\eqref{eq:Lebesgue-Hausdorff} and \Cref{cor:bi-Lipschitz}. Let
\(\eps,M \in \Reals_{++}\) such that \(2 \eps \Ball_{\infty}
\subseteq \Cscr \subseteq \tfrac{1}{2}M\kern .5pt \Ball_{\infty}\).
Let \(p_\Cscr \colon \Reals^d \to \Cscr\) be the metric projection
onto~\(\Cscr\), i.e., \(\set{p_\Cscr(x)} = \argmin_{y \in
\Cscr}\norm{y-x}\) for each \(x \in \Reals^d\). Then \(p_\Cscr\)
is Lipschitz continuous (with Lipschitz constant~1).
\Cref{thm:Lipschitz} applied
to~\(p_\Cscr\mathord{\restriction}_{MX}\) and positive homogeneity
of~\(H_{d-1}\) (of degree~\(d-1\)) yield \(H_{d-1}(\bd(\Cscr)) <
\infty\). Similarly, applying \Cref{thm:Lipschitz} to the
restriction to~\(\bd(\Cscr)\) of metric projection onto~\(\eps
\Ball_{\infty}\) yields \(H_{d-1}(\bd(\Cscr)) > 0\). The theorem
now follows from~\eqref{eq:18}.
\end{proof}
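Nonexpansiveness of metric projection, as used in the proof above, can
be illustrated for a box \(\eps\Ball_{\infty}\), where the projection
is coordinatewise clipping; a small numerical sketch (assuming NumPy,
purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# metric projection onto the box [-eps, eps]^d is coordinatewise
# clipping, and it is nonexpansive (Lipschitz with constant 1)
eps = 1.0
proj = lambda x: np.clip(x, -eps, eps)
for _ in range(100):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    assert np.linalg.norm(proj(x) - proj(y)) <= np.linalg.norm(x - y) + 1e-12
```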
Since we are dealing with convex cones, the previous result will be
more useful to us when stated in a lifted form about pointed closed
convex cones:
\begin{corollary}
\label{cor:dimH-bd-cone}
Let \(\Ebb\) be a Euclidean space. If \(\Kscr \subseteq \Ebb\) is a
pointed closed convex cone with dimension \(d \geq 1\), then
\(\dim_H(\rbd(\Kscr)) = d-1\).
\end{corollary}
\begin{proof}
We may assume that \(\Ebb = \Reals^d\). Since \(\Kscr\) is pointed,
after applying some rotation, which preserves Hausdorff measures by
\Cref{cor:bi-Lipschitz}, we may assume that \(\Kscr = \Reals_+(1
\oplus \Cscr)\) for some compact convex set \(\Cscr \subseteq
\Reals^{\dbar}\) where \(\dbar \coloneqq d-1\). For each \(N \in
\Naturals\), define the compact convex set \(\Kscr_N \coloneqq \Kscr
\cap \paren[\big]{[N,N+1] \oplus \Reals^{\dbar}}\).
Since
\begin{equation}
\label{eq:22}
\rbd(\Kscr)
\subseteq
\bigcup_{N=0}^{\infty}
\rbd(\Kscr_N),
\end{equation}
the result follows from \Cref{prop:dim-countable} and
\Cref{thm:dimH-bd-compact}.
\end{proof}
The next result refers to faces of a convex set, so before we state it
we shall briefly recall the basic theory;
see~\cite[Sec.~18]{Rockafellar97a}. Let \(\Ebb\) be a Euclidean
space. Let \(\Cscr \subseteq \Ebb\) be a convex
set. A convex subset~\(\Fscr\) of~\(\Cscr\) is a \emph{face} of~\(\Cscr\) if, for
each \(x,y \in \Cscr\) such that the open line segment \((x,y) \coloneqq
\setst{(1-\lambda)x+\lambda y}{\lambda \in (0,1)}\) between~\(x\)
and~\(y\) meets~\(\Fscr\), we have \(x,y \in \Fscr\). We use the notation \(\Fscr
\faceeq \Cscr\) to denote that~\(\Fscr\) is a face of~\(\Cscr\), and \(\Fscr \faceneq
\Cscr\) to denote that~\(\Fscr\) is a \emph{proper} face of~\(\Cscr\), i.e., \(\Fscr
\faceeq \Cscr\) and \(\Fscr \neq \Cscr\). Denote \(\Faces(\Cscr) \coloneqq
\setst{\Fscr}{\Fscr \faceeq \Cscr}\).
Faces of closed convex sets are closed, and faces of convex cones are
convex cones. An arbitrary intersection of faces of~\(\Cscr\) is a face
of~\(\Cscr\) and, since the faces of a convex set are partially ordered by
inclusion and \(\Cscr \faceeq \Cscr\), every point~\(x\) of~\(\Cscr\) lies in a
unique minimal face~\(\Fscr\) of~\(\Cscr\); this face~\(\Fscr\) is characterized
by the property \(x \in \ri(\Fscr)\). Also, it can be proved that
\(\setst{\ri(\Fscr)}{\emptyset \neq \Fscr \faceeq \Cscr}\) is a partition
of~\(\Cscr\). If \(\Cscr\) is a compact convex set, it is not hard to prove
that the faces of the homogenization of~\(\Cscr\) are described by:
\begin{equation}
\label{eq:25}
\Faces\paren[\big]{
\Reals_+(1 \oplus \Cscr)
}
=
\set[\big]{\emptyset,\set{0}}
\cup
\setst[\big]{
\Reals_+(1 \oplus \Fscr)
}{
\emptyset \neq \Fscr \faceeq \Cscr
}.
\end{equation}
\begin{theorem}[Larman~\cite{Larman71a}]
\label{thm:Larman}
Let \(\Ebb\) be a Euclidean space. If \(\Cscr \subseteq \Ebb\) is a
compact convex set with dimension \(d \geq 1\), then
\begin{equation*}
H_{d-1}\paren[\Big]{\,
\bigcup_{\Fscr \faceneq \Cscr}{\rbd(\Fscr)}
} = 0.
\end{equation*}
\end{theorem}
As before, we shall need a conic version of Larman's Theorem.
We apply tools similar to the ones used to lift
\Cref{thm:dimH-bd-compact} to \Cref{cor:dimH-bd-cone}:
\begin{theorem}
\label{thm:conic-Larman}
Let \(\Ebb\) be a Euclidean space. If \(\Kscr \subseteq \Ebb\) is a
pointed closed convex cone with dimension \(d \geq 1\), then
\begin{equation*}
H_{d-1}\paren[\Big]{\,
\bigcup_{\Fscr \faceneq \Kscr}{\rbd(\Fscr)}
} = 0.
\end{equation*}
\end{theorem}
\begin{proof}
The case \(d = 1\) is easy to verify; assume that \(d \geq 2\). We
may assume that \(\Ebb = \Reals \oplus \Reals^{\dbar}\) for \(\dbar
\coloneqq d-1\) and, as in the beginning of the proof of
\Cref{cor:dimH-bd-cone}, we may assume that \(\Kscr =
\Reals_+(1\oplus \Cscr)\) for some compact convex set \(\Cscr
\subseteq \Reals^{\dbar}\) with nonempty interior. For each \(N \in
\Naturals\), define the compact convex set \(\Kscr_N \coloneqq \Kscr
\cap \paren[\big]{[N,N+1] \oplus \Reals^{\dbar}}\). By elementary
convex analysis,
\begin{equation}
\label{eq:63}
\bigcup_{\Fscr \faceneq \Kscr} \rbd(\Fscr)
\subseteq
\bigcup_{N=0}^{\infty}
\bigcup_{\Fscr_N \faceneq \Kscr_N}
\rbd(\Fscr_N).
\end{equation}
Hence,
\begin{equation*}
H_{d-1}\paren*{
\bigcup_{\Fscr \faceneq \Kscr} \rbd(\Fscr)
}
\leq
\sum_{N=0}^{\infty}
H_{d-1}\paren*{
\bigcup_{\Fscr_N \faceneq \Kscr_N}
\rbd(\Fscr_N)
}
= 0,
\end{equation*}
where we used the fact that each summand is zero by
\Cref{thm:Larman}.
\end{proof}
\subsection{Generic Failure of Strict Complementarity}
\label{sec:generic-failure}
In this section, we prove one of our main results: strict
complementarity fails generically in the relative boundary of the
normal cone of the elliptope at any of its vertices.
We shall apply \Cref{thm:conic-Larman} to~\(\Psd{n}\). Let us briefly
recall some well-known descriptions of the faces of the positive
semidefinite cone~\(\Psd{n}\). Let \(\Lfrak_n\) denote the set of
linear subspaces of~\(\Reals^n\). For each \(\Lscr \in \Lfrak_n\), denote
\begin{equation}
\label{eq:26}
\Fscr_\Lscr
\coloneqq
\setst{
X \in \Psd{n}
}{
\Null(X) \supseteq \Lscr
}
\end{equation}
and note that
\begin{equation}
\label{eq:27}
\ri(\Fscr_\Lscr)
=
\setst{
X \in \Psd{n}
}{
\Null(X) = \Lscr
}.
\end{equation}
Then
\begin{gather}
\label{eq:28}
\Faces(\Psd{n})
=
\set{\emptyset}
\cup
\setst{
\Fscr_\Lscr
}{
\Lscr \in \Lfrak_n
}.
\end{gather}
Note that, for \(\Lscr \in \Lfrak_n\) such that \(\Lscr \neq \Reals^n\), there
is an orthogonal matrix \(Q \in \Reals^{n \times n}\) such that
\begin{equation}
\label{eq:29}
\Fscr_\Lscr
=
\setst[\bigg]{
Q
\begin{bmatrix}
U & 0 \\
0 & 0 \\
\end{bmatrix}
Q^{\transp}
}{
U \in \Psd{r}
},
\end{equation}
where \(r \coloneqq n - \dim(\Lscr)\).
\begin{lemma}
\label{lem:low-rank-negligible}
Let \(n \geq 2\) be an integer. Then the property ``\(C \mapsto
\rank(C) = n-1\)'' holds generically in \(\bd(\Psd{n})\).
\end{lemma}
\begin{proof}
Set \(d \coloneqq \dim_H(\Psd{n})\). Note that \(d-1 =
\dim_H(\bd(\Psd{n}))\) by \Cref{cor:dimH-bd-cone}. Let \(X \in
\bd(\Psd{n})\) be such that \(\rank(X) = n-1\) fails, i.e.,
\(\rank(X) \leq n-2\). Pick a nonzero \(h \in \Null(X)\), let
\(\Lscr\) be the linear subspace of~\(\Reals^n\) spanned by~\(h\),
and note that \(X \in \rbd(\Fscr_\Lscr)\) in the notation
of~\eqref{eq:26}, since \(\Null(X) \supsetneq \Lscr\). Hence,
\begin{equation*}
\setst{
X \in \bd(\Psd{n})
}{
\rank(X) \neq n-1
}
=
\setst{
X \in \Psd{n}
}{
\rank(X) \leq n-2
}
\subseteq
\bigcup_{\mathclap{\Fscr \faceneq \Psd{n}}} \rbd(\Fscr).
\end{equation*}
The \((d-1)\)-dimensional Hausdorff measure of the set on the RHS
above is zero by \Cref{thm:conic-Larman}.
\end{proof}
We are ready to prove one of our main results:
\begin{theorem}
\label{thm:generic-sc-failure}
Let \(n \geq 3\), and let \(\Xb\) be a vertex of~\(\Elliptope{n}\).
Then the property ``\(C \mapsto\) strict complementarity holds for
\eqref{eq:maxcut-sdp}'' fails generically on \(\rbd(\Psd{n} \cap
\set{\Xb}^{\perp})\).
\end{theorem}
\begin{proof}
By \Cref{thm:vertices-elliptope} and the discussion of linear
automorphisms of~\(\Elliptope{n}\) from \Cref{sec:vertices}, we may
assume that \(\Xb = \oprodsym{\ones}\). Set \(m \coloneqq n-1\).
Let \(Q \in \Reals^{n \times n}\) be an orthogonal matrix such that
\(Q^{\transp} e_n = n^{-1/2}\ones\) and \(Q^{\transp} e_{m} =
2^{-1/2}(e_1-e_2)\). Using the map \(M \in \Sym{n} \mapsto
QMQ^{\transp}\) and \Cref{cor:bi-Lipschitz}, we find that
\(\rbd(\Psd{n} \cap \set{\Xb}^{\perp})\) and \(\rbd(\Psd{n} \cap
\set{\oprodsym{e_n}}^{\perp})\) have the same Hausdorff dimension.
Since the cone \(\Psd{n} \cap \set{\oprodsym{e_n}}^{\perp}\) is an
embedding of \(\Psd{m}\) into~\(\Psd{n}\), the Hausdorff dimension
of \(\rbd(\Psd{n} \cap \set{\oprodsym{e_n}}^{\perp})\) is
\(\dim_H(\Psd{m})-1\) by \Cref{cor:dimH-bd-cone}. Hence,
\begin{equation}
\label{eq:30}
d \coloneqq
\dim_H\paren*{
\rbd\paren*{
\Psd{n} \cap \set{\Xb}^{\perp}
}
}
= \binom{n}{2}-1.
\end{equation}
Set
%
{\(
\Cscr \coloneqq \setst[\big]{
C \in \rbd(\Psd{n} \cap \set{\Xb}^{\perp})
}{
\text{strict complementarity holds in~\eqref{eq:maxcut-sdp}}
}\)}.
%
By \Cref{thm:sc-failure-high-rank},
\begin{equation}
\label{eq:31}
\Cscr \subseteq
\Dscr_0
\cup
\Dscr_{12} \cup \Dscr_{13} \cup \Dscr_{23}
\end{equation}
where
\begin{gather*}
\Dscr_0 \coloneqq
\setst[\big]{
C \in \Psd{n} \cap \set{\Xb}^{\perp}
}{
\rank(C) \leq n-3
},
\\
\Dscr_{ij} \coloneqq
\setst[\big]{
C \in \Psd{n}
}{
\exists h \in \set{\ones,e_i-e_j}^{\perp},\,
h \neq 0,\,
\Null(C) = \linspan\set{\ones,h}
},
\end{gather*}
for all distinct \(i,j \in [n]\); the inclusion~\eqref{eq:31} holds
since, if \(h \perp \ones\) spans the extra direction of
\(\Null(C)\) and has at most two distinct coordinates, then two of
\(h_1,h_2,h_3\) must coincide. Clearly all the sets \(\Dscr_{ij}\)
have the same \(d\)-dimensional Hausdorff measure, so it suffices to
prove that
\begin{gather}
\label{eq:32}
H_d(\Dscr_0) = 0,
\\
\label{eq:33}
H_d(\Dscr_{12}) = 0.
\end{gather}
By using the map \(M \in \Sym{n} \mapsto QMQ^{\transp}\) and
\Cref{cor:bi-Lipschitz}, \(\Dscr_0\) and \(\setst{C \in
\Psd{m}}{\rank(C) \leq m-2}\) have the same \(d\)-dimensional
Hausdorff measure. Hence, \eqref{eq:32} follows from
\Cref{lem:low-rank-negligible} and \Cref{cor:dimH-bd-cone}. Again
using the map \(M \in \Sym{n} \mapsto QMQ^{\transp}\) and
\Cref{cor:bi-Lipschitz}, we find that \(H_d(\Dscr_{12}) =
H_d(\Dscr')\) where
\begin{equation*}
\Dscr' \coloneqq
\setst{
U \in \Psd{m}
}{
\rank(U) = m-1, e_m \in \Image(U)
}.
\end{equation*}
Hence, to prove~\eqref{eq:33} and thus the theorem, it suffices to
prove that
\begin{equation}
\label{eq:34}
H_d(\Dscr') = 0.
\end{equation}
For each \(k \in [m-1]\) define the permutation matrix \(P_k
\coloneqq \sum_{i \in [m] \setminus \set{k,m}} \oprodsym{e_i} +
\oprod{e_k}{e_m} + \oprod{e_m}{e_k} \in \Sym{m}\). Set \(P_m
\coloneqq I\). For each \(k \in [m]\) define the map \(\varphi_k
\colon \Pd{m-1} \oplus \Reals^{m-1} \to \Sym{m}\) by setting
\begin{equation*}
\varphi_k(A \oplus c)
\coloneqq
P_k^{\transp}
\begin{bmatrix}
A & Ac \\
c^{\transp} A & \qform{A}{c} \\
\end{bmatrix}
P_k.
\end{equation*}
It is easy to verify that
\begin{gather}
\label{eq:35}
\setst{U \in \Psd{m}}{\rank(U) = m-1}
=
\bigcup_{k \in [m]} \varphi_k(\Pd{m-1} \oplus \Reals^{m-1}),
\\
\Null(\varphi_k(A \oplus c))
=
P_k\linspan\set{-c \oplus 1}
\qquad
\forall A \oplus c \in \Pd{m-1} \oplus \Reals^{m-1}.
\end{gather}
Let \(U \in \Psd{m}\) with \(\rank(U) = m-1\), and let \(k \in [m]\)
and \(A \oplus c \in \Pd{m-1} \oplus \Reals^{m-1}\) such that \(U =
\varphi_k(A \oplus c)\). Then \(e_m \in \Image(U)\) is equivalent
to \(e_m \perp P_k (-c \oplus 1)\), which is equivalent to \(k \in
[m-1]\) and \(c \perp e_k\). Hence,
\begin{equation}
\label{eq:36}
\Dscr'
=
\bigcup_{\mathclap{k \in [m-1]}} \varphi_k(\Pd{m-1} \oplus \set{e_k}^{\perp}).
\end{equation}
Let \(k \in [m-1]\). Since each entry of \(\varphi_k(A \oplus c)\)
is a polynomial function of the entries of the input, the map
\(\varphi_k\) is Lipschitz continuous on every compact subset of its
domain. It
follows from \Cref{prop:dim-countable} that
\begin{equation}
\label{eq:37}
\dim_H(\varphi_k(\Pd{m-1} \oplus \set{e_k}^{\perp}))
\leq
\binom{m}{2} + m - 2
=
d - 1;
\end{equation}
note that the subspace \(\set{e_k}^{\perp}\) in the LHS is
\((m-2)\)-dimensional, as this subspace is the set of vectors
in~\(\Reals^{m-1}\) orthogonal to~\(e_k\). Now~\eqref{eq:34}
follows from~\eqref{eq:36} and~\eqref{eq:37}.
\end{proof}
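The rank and nullspace claims for the parametrization \(\varphi_k\)
used in the proof can be checked numerically for \(k = m\) (so \(P_k =
I\)); the following sketch assumes NumPy and uses random data:

```python
import numpy as np

m = 4
rng = np.random.default_rng(2)
B = rng.standard_normal((m - 1, m - 1))
A = B @ B.T + np.eye(m - 1)              # A positive definite
c = rng.standard_normal(m - 1)
Ac = (A @ c).reshape(-1, 1)              # note c^T A = (A c)^T since A = A^T
U = np.block([[A, Ac], [Ac.T, np.array([[c @ A @ c]])]])
# U = phi_m(A + c) with P_m = I: rank m-1, nullspace spanned by (-c, 1)
assert np.linalg.matrix_rank(U) == m - 1
assert np.allclose(U @ np.append(-c, 1.0), 0.0)
```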
\section{Failure of Strict Complementarity for Rank-One Objectives}
\label{sec:rank-one2}
In \Cref{sec:generic-sc-failure}, we zoomed into the boundary of the
normal cone of an arbitrary vertex of the elliptope and proved that
strict complementarity fails generically there. \emph{Informally}, we
might say that with zero ``probability'' a ``uniformly chosen''
objective function in the boundary of such normal cone yields an SDP
that satisfies strict complementarity. Now we zoom in even
further within that boundary, into the set of negative semidefinite
rank-one objectives, and consider again how often strict
complementarity holds. We will state and prove a self-contained
result in \Cref{thm:rank-one} below. However, in order to motivate
the objects of the construction and the intermediate results, we
start with an informal discussion.
Assume throughout this discussion that \(n \geq 4\). We will
normalize the ``sample space'' so that we can have a
probability space. For the sake of discussion, let us focus our
attention on the vertex \(\oprodsym{\ones}\) of~\(\Elliptope{n}\) and
consider the sample space to be
\begin{equation}
\Omega_M
\coloneqq
\setst{
C \in \bd(\Normal{\Elliptope{n}}{\oprodsym{\ones}})
}{
C \preceq 0,\,
\rank(C) = 1,\,
\pnorm[\infty]{\matvec(C)} = 1
}.
\end{equation}
Accordingly, equip \(\Sym{n}\) with the norm \(X \in \Sym{n} \mapsto
\pnorm[\infty]{\matvec(X)}\). Set \(d \coloneqq \dim_H(\Omega_M)\).
In~order to obtain a probability space on~\(\Omega_M\), we will define
a probability measure
\begin{equation}
\label{eq:38}
\Prob_M(\Ascr_M)
\coloneqq
\frac{H_d(\Ascr_M)}{H_d(\Omega_M)}
\end{equation}
over all \(H_d\)-measurable subsets~\(\Ascr_M\) of~\(\Omega_M\); we
shall prove that \(H_{n-2}(\Omega_M) \in (0,\infty)\), so
that~\eqref{eq:38} is properly defined and \(d=n-2\). Our goal is to
prove that the probability of the event
\begin{equation}
\label{eq:39}
\Gscr_M
\coloneqq
\setst{
C \in \Omega_M
}{
\text{strict complementarity holds for~\eqref{eq:maxcut-sdp} with~\(C\)}
}
\end{equation}
lies in \((0,1)\).
In order to achieve this, we shall reduce the problem from the matrix
space to the space of vectors that generate the rank-one tensors
in~\(\Omega_M\) and \(\Gscr_M\). In order to carry results back and
forth between these spaces, we rely on \Cref{cor:bi-Lipschitz}. For
each \(s \in \set{\pm1}^n\), define
\begin{gather}
\label{eq:40}
\Reals_s^n \coloneqq \Diag(s)\Reals_+^n,
\\
\label{eq:41}
\varphi_s \colon b \in \Reals_s^n \cap \bd(\Ball_{\infty}) \mapsto -\oprodsym{b}.
\end{gather}
Equip \(\Reals^n\) with the norm \(x \in \Reals^n \mapsto
\pnorm[\infty]{x}\). We shall split our analysis across the
\(2^n\) bi-Lipschitz maps~\(\varphi_s\), one for each chamber/orthant
of~\(\Reals^n\), indexed by its sign vector:
\begin{theorem}
\label{thm:bi-Lipschitz-tensor}
Let \(s \in \set{\pm1}^n\). Then the map \(\varphi_s\) defined
in~\eqref{eq:41} is bi-Lipschitz continuous with Lipschitz
constants~2 and~1, where we equip the domain with the
\(\infty\)-norm, and we equip the range with the norm
\(\pnorm[\infty]{\matvec(\cdot)}\).
\end{theorem}
\begin{proof}
To see that \(\varphi_s\) is Lipschitz continuous with Lipschitz
constant~2, let \(x,y \in \Reals_s^n \cap \bd(\Ball_{\infty})\) and
note that
\begin{equation*}
\pnorm[\infty]{2\matvec(\oprodsym{x}-\oprodsym{y})}
=
\pnorm[\infty]{
\matvec\sqbrac{\oprod{(x-y)}{(x+y)} + \oprod{(x+y)}{(x-y)}
}
}
\leq
2 \pnorm[\infty]{x+y}\pnorm[\infty]{x-y}
\leq
4 \pnorm[\infty]{x-y}.
\end{equation*}
The proof that \(\varphi_s^{-1}\) is Lipschitz continuous with
Lipschitz constant~1 is also simple but it involves case analysis.
Set \(A \coloneqq \oprodsym{x}-\oprodsym{y}\). Let \(k \in [n]\)
such that \(\abs{x_k} = 1\), so \(x_k = s_k\). Similarly, let
\(\ell \in [n]\) such that \(\abs{y_{\ell}} = 1\), so \(y_{\ell} =
s_{\ell}\). Let \(j \in [n]\). We shall make use of the following
facts:
\begin{gather*}
\alpha_k
\coloneqq
\frac{y_k}{s_k} \in [0,1],
\qquad
\beta_{\ell}
\coloneqq
\frac{x_{\ell}}{s_{\ell}} \in [0,1],
\qquad
\abs{A_{kj}}
=
\abs*{x_j - \alpha_k y_j},
\qquad
\abs{A_{\ell j}}
=
\abs*{\beta_{\ell} x_j - y_j}.
\end{gather*}
We consider 4 cases, according to which of \(x_j\) or \(y_j\) is
larger, and according to their signs; note that \(x_j\) and
\(y_j\) have the same sign, since \(x,y \in \Reals_s^n\).
We have
\begin{align*}
x_j \geq y_j \geq 0
& \implies
0 \leq \abs{x_j-y_j} = x_j - y_j \leq x_j - \alpha_k y_j = \abs{A_{kj}};
\\
y_j \geq x_j \geq 0
& \implies
0 \leq \abs{x_j-y_j} = y_j - x_j \leq y_j - \beta_{\ell} x_j = \abs{A_{\ell j}};
\\
0 \geq x_j \geq y_j
& \implies
0 \leq \abs{x_j-y_j} = x_j - y_j \leq \beta_{\ell} x_j - y_j = \abs{A_{\ell j}};
\\
0 \geq y_j \geq x_j
& \implies
0 \leq \abs{x_j-y_j} = y_j - x_j \leq \alpha_k y_j - x_j = \abs{A_{kj}}.
\end{align*}
Hence, \(\pnorm[\infty]{x-y} \leq
\pnorm[\infty]{\matvec(\oprodsym{x}-\oprodsym{y})}\).
\end{proof}
Note that restricting the domain of~\(\varphi_s\) in
\Cref{thm:bi-Lipschitz-tensor} to chambers of~\(\Reals^n\) is
necessary. Indeed, consider \(x \coloneqq (1,-1,\eps)^{\transp}\) and
\(y \coloneqq (-1,1,0)^{\transp}\), for an arbitrary \(\eps \in
(0,1)\). Then \(\pnorm[\infty]{x-y} = 2\) but
\(\pnorm[\infty]{\matvec(\oprodsym{x}-\oprodsym{y})} = \eps\).
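This cross-orthant failure is easy to confirm numerically; the following sketch (illustrative, not part of the paper) reproduces the computation in plain Python:

```python
# Counterexample check: x and y lie in different orthants; their vector
# distance is 2, yet their symmetric rank-one tensors are only eps apart.
def outer(u):
    return [[ui * uj for uj in u] for ui in u]

eps = 0.25  # any eps in (0, 1) exhibits the same gap
x = [1.0, -1.0, eps]
y = [-1.0, 1.0, 0.0]

vec_dist = max(abs(a - b) for a, b in zip(x, y))             # ||x - y||_inf
X, Y = outer(x), outer(y)
tensor_dist = max(abs(X[i][j] - Y[i][j])                     # ||vec(xx^T - yy^T)||_inf
                  for i in range(3) for j in range(3))

print(vec_dist, tensor_dist)  # 2.0 0.25
```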
Next we relate the description for~\(\Omega_M\) to the vectors that
appear in the rank-one tensors:
\begin{proposition}
\label{prop:omega}
For \(n \geq 3\), we have
\begin{equation}
\label{eq:42}
\Omega_M
=
\setst*{
-\oprodsym{b}
}{
b \in \Reals^n
\text{ and }
\begin{array}[!h]{l}
\text{%
either
\(b = e_i - \alpha e_j\) for some distinct \(i,j \in [n]\)
and \(\alpha \in [0,1]\),
}
\\
\text{or }
(b \perp \ones \text{ and }
\card{\supp(b)} \geq 3 \text{ and }
\pnorm[\infty]{b} = 1)
\end{array}
}
\end{equation}
\end{proposition}
\begin{proof}
We first prove the inclusion `\(\supseteq\)'. If \(b \perp \ones\)
and \(\pnorm[\infty]{b} = 1\), it follows from~\eqref{eq:normal}
that \(-\oprodsym{b} \in \Omega_M\). Suppose that \(b = e_i -
\alpha e_j\) for distinct \(i,j \in [n]\) and \(\alpha \in [0,1]\).
Set \(\beta \coloneqq 1 - \alpha \in [0,1]\) and \(y \coloneqq
-\beta b\). It is easy to verify that \(S \coloneqq \Diag(y) +
\oprodsym{b} \succeq 0\) and \(S\ones = 0\); now \(-\oprodsym{b} =
\Diag(y) - S \in \Omega_M\) follows from~\eqref{eq:normal}. In both
cases, we rely on \(n \geq 3\) to ensure that \(-\oprodsym{b}\) lies
in the boundary.
Now we prove the inclusion `\(\subseteq\)'. Let \(b \in \Reals^n\)
such that \(-\oprodsym{b} \eqqcolon C \in \Omega_M\). Clearly
\(\pnorm[\infty]{b} = 1\). We may assume that \(\beta \coloneqq
\iprodt{\ones}{b} \geq 0\) and that \(b_1 > 0\).
Use~\eqref{eq:normal} to write \(C = \Diag(y) - S\) for some \(y \in
\Reals^n\) and \(S \in \Psd{n}\) such that \(S\ones = 0\). Then
\(-\beta b = -\oprodsym{b}\ones = C\ones = y - S\ones = y\), so
\begin{equation}
\label{eq:43}
0 \preceq S = \Diag(y) + \oprodsym{b} = -\beta \Diag(b) + \oprodsym{b}.
\end{equation}
We claim that
\begin{equation}
\label{eq:44}
b_i < 0
\qquad
\forall i \in \supp(b) \setminus \set{1}.
\end{equation}
Indeed, by restricting~\eqref{eq:43} to a principal submatrix we get
\begin{equation}
\begin{bmatrix}
b_1^2 & b_1 b_i \\
b_1 b_i & b_i^2 \\
\end{bmatrix}
\succeq
\beta
\begin{bmatrix}
b_1 & 0 \\
0 & b_i \\
\end{bmatrix}.
\end{equation}
If \(b_i > 0\), then the RHS is positive definite, whereas the LHS
is singular. This proves~\eqref{eq:44}.
Suppose first that \(\card{\supp(b)} \leq 2\). Then \(b = e_1 -
\alpha e_j\) for some \(j \in \supp(b) \setminus \set{1}\) and
\(\alpha \in [-1,1]\). By~\eqref{eq:44}, we have \(\alpha \in
[0,1]\), and so \(-\oprodsym{b}\) lies in the RHS of~\eqref{eq:42}.
Suppose next that \(\card{\supp(b)} \geq 3\). We must prove that
\begin{equation}
\label{eq:45}
b \perp \ones.
\end{equation}
Suppose for the sake of contradiction that \(\beta > 0\). Next let
\(i,j \in \supp(b) \setminus \set{1}\) be distinct. Again
by~\eqref{eq:43} we have
\begin{equation}
\begin{bmatrix}
b_1(b_1-\beta) & b_1 b_i \\
b_1 b_i & b_i(b_i-\beta) \\
\end{bmatrix}
\succeq 0,
\end{equation}
so its determinant is nonnegative. Using \(\beta > 0\), this
yields \(b_1 + b_i \leq \beta\). But
\eqref{eq:44} implies that \(\beta \leq b_1 + b_i + b_j < b_1 +
b_i\), contradiction. This concludes the proof of~\eqref{eq:45},
and hence \(-\oprodsym{b}\) lies in the RHS of~\eqref{eq:42}.
\end{proof}
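As a numeric illustration of the `\(\supseteq\)' construction in the proof (a sketch assuming, per~\eqref{eq:normal}, that membership amounts to \(S \coloneqq \Diag(y) + \oprodsym{b} \succeq 0\) together with \(S\ones = 0\)):

```python
import random

# For b = e_i - alpha * e_j and y = -(1 - alpha) * b, verify that
# S = Diag(y) + b b^T has zero row sums (S 1 = 0) and a nonnegative
# quadratic form on random samples (a sampled stand-in for S psd).
n, i, j, alpha = 5, 0, 2, 0.6
b = [0.0] * n
b[i], b[j] = 1.0, -alpha
y = [-(1.0 - alpha) * v for v in b]

S = [[(y[r] if r == c else 0.0) + b[r] * b[c] for c in range(n)] for r in range(n)]

row_sums = [sum(row) for row in S]
random.seed(0)
quad = []
for _ in range(1000):
    x = [random.uniform(-1.0, 1.0) for _ in range(n)]
    quad.append(sum(x[r] * S[r][c] * x[c] for r in range(n) for c in range(n)))

print(max(abs(s) for s in row_sums), min(quad))
```

In exact arithmetic the quadratic form equals \(\alpha(x_i - x_j)^2 \geq 0\), so the sampled minimum is nonnegative up to rounding.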
Finally, we need to relate \(\Gscr_M\) with the vectors that appear in
the rank-one tensors. A vector \(b \in \Reals^n\) is \emph{strictly
balanced} if \(\abs{b_i} < \sum_{j \in [n]\setminus\set{i}}
\abs{b_j}\) for every \(i \in [n]\). It is easy to verify that,
\begin{equation}
\label{eq:46}
\text{%
if \(b \in \Reals^n\) and \(i \in [n]\) is such that \(\abs{b_i} =
\pnorm[\infty]{b}\), then \(b\) is strictly balanced \(\iff
\abs{b_i} < \textstyle\sum_{j \in [n] \setminus \set{i}} \abs{b_j}\).
}
\end{equation}
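The definition transcribes directly into code; the two helpers below are hypothetical, the second implementing the shortcut just stated:

```python
# Strictly balanced: |b_i| < sum of the other |b_j| for every coordinate i.
def is_strictly_balanced(b):
    total = sum(abs(v) for v in b)
    return all(abs(v) < total - abs(v) for v in b)

# Per the observation above, testing one coordinate attaining the
# infinity norm suffices.
def is_strictly_balanced_fast(b):
    m = max(abs(v) for v in b)
    return m < sum(abs(v) for v in b) - m

for b in ([1.0, -0.6, -0.6], [1.0, -0.5, -0.5], [1.0, -0.3, -0.2]):
    print(is_strictly_balanced(b), is_strictly_balanced_fast(b))
```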
We shall rely on yet another result by Laurent and Poljak:
\begin{theorem}[{\cite[Theorem~2.6]{LaurentP96a}}]
\label{thm:strictly-balanced}
Let \(b \in \Reals^n\) such that \(b \perp \ones\) and \(\supp(b) =
[n]\). Then there exists \(X \in \Elliptope{n}\) such that
\(\Null(X) = \linspan\set{b}\) if and only if \(b\) is strictly
balanced.
\end{theorem}
\begin{proposition}
\label{prop:sc-iff-sb}
Let \(b \in \Reals^n\) such that \(b \perp \ones\) and \(\supp(b) =
[n]\). Then strict complementarity holds for~\eqref{eq:maxcut-sdp}
with \(C = -\oprodsym{b}\) if and only if \(b\) is strictly
balanced.
\end{proposition}
\begin{proof}
Note that \(\oprodsym{\ones}\) is an optimal solution
for~\eqref{eq:maxcut-sdp} if \(C = -\oprodsym{b}\). By Proposition~\ref{prop:sc-ri}, we must
show that existence of \(X \in \Elliptope{n}\) such that
\(-\oprodsym{b} \in \ri(\Normal{\Elliptope{n}}{X})\) is equivalent
to strict balancedness of~\(b\). We will show that, for each \(X
\in \Elliptope{n}\),
\begin{equation}
\label{eq:reduction-balancedness}
-\oprodsym{b} \in \ri(\Normal{\Elliptope{n}}{X})
\iff
\oprodsym{b} \in \setst{Z \in \Psd{n}}{\Image(Z)=\Null(X)}.
\end{equation}
Since existence of \(X \in \Elliptope{n}\) such that the RHS
of~\eqref{eq:reduction-balancedness} holds is equivalent to \(b\)
being strictly balanced by Theorem~\ref{thm:strictly-balanced}, the
result will follow.
The proof of sufficiency in~\eqref{eq:reduction-balancedness}
follows from~\eqref{eq:5} and \(\ri(\Psd{n} \cap \set{X}^{\perp}) =
\setst{Z \in \Psd{n}}{\Image(Z) = \Null(X)}\). For the proof of
necessity, recall~\eqref{eq:5} and suppose that there exists \(X \in
\Elliptope{n}\) such that \(-\oprodsym{b} = \Diag(y) - S\) for some
\(y \in \Reals^n\) and \(S \in \ri(\Psd{n} \cap \set{X}^{\perp})\).
Then \(0 = -\oprodsym{b}\ones = (\Diag(y)-S)\ones=y-S\ones\) shows
that
\begin{equation}
\label{eq:47}
y=S\ones.
\end{equation}
Since \(X\) and \(\oprodsym{\ones}\) are optimal solutions
for~\eqref{eq:maxcut-sdp}, we find that \(0 =
\Tr(-\oprodsym{b}\oprodsym{\ones}) = \Tr(-\oprodsym{b}X) =
\iprodt{y}{\diag(X)}-\Tr(SX)\) so \(\iprodt{\ones}{y} = \Tr(SX) =
0\). By~\eqref{eq:47}, \(\qform{S}{\ones} = \iprodt{\ones}{y} =
0\), so \(\ones \in \Null(S)\) and \(y = 0\).
\end{proof}
We are now in position to present the main result of this section:
\begin{theorem}
\label{thm:rank-one}
Let \(n \geq 4\) be an integer. Equip \(\Sym{n}\) with the norm
\(\pnorm[\infty]{\matvec(\cdot)}\). Set
\begin{gather*}
\Omega_M
\coloneqq
\setst{
C \in \bd(\Normal{\Elliptope{n}}{\oprodsym{\ones}})
}{
C \preceq 0,\,
\rank(C) = 1,\,
\pnorm[\infty]{\matvec(C)} = 1
}
\subseteq \Sym{n},
\\
\Gscr_M
\coloneqq
\setst{
C \in \Omega_M
}{
\text{strict complementarity holds for~\eqref{eq:maxcut-sdp} with~\(C\)}
},
\\
d \coloneqq \dim_H(\Omega_M).
\end{gather*}
Let \(\Sigma_d\) be the \(\sigma\)-algebra of \(H_d\)-measurable
subsets of~\(\Sym{n}\) and set \(\Sigma_M \coloneqq \setst{\Ascr_M
\in \Sigma_d}{\Ascr_M \subseteq \Omega_M}\). Then
\begin{enumerate}[(i)]
\setlength{\itemsep}{4pt}
\item \(\Omega_M \in \Sigma_d\) and \(\Gscr_M \in \Sigma_M\),
\item \(H_{n-2}(\Omega_M) \in (0,\infty)\), so \(d = n-2\),
\item \(H_d(\Gscr_M) > 0\) and \(H_d(\overline{\Gscr_M}) > 0\),
where \(\overline{\Gscr_M} \coloneqq \Omega_M \setminus \Gscr_M\).
\end{enumerate}
In~particular, if we set
\begin{equation}
\label{eq:48}
\Prob_M(\Ascr_M)
\coloneqq
\frac{H_d(\Ascr_M)}{H_d(\Omega_M)}
\qquad
\forall \Ascr_M \in \Sigma_M,
\end{equation}
then \((\Omega_M,\Sigma_M,\Prob_M)\) is a probability space and the
event \(\Gscr_M\) satisfies \(\Prob_M(\Gscr_M) \in (0,1)\).
\end{theorem}
\begin{proof}
We start by proving that
\begin{equation}
\label{eq:49}
\Omega_M \in \Sigma_d.
\end{equation}
By standard Hausdorff measure theory, \(\Sigma_d\) contains every
Borel set of~\(\Sym{n}\); see, e.g., \cite[Theorem~27]{Rogers98a}.
Recall that the Borel sets of~\(\Sym{n}\) are the elements of the
smallest \(\sigma\)-algebra on~\(\Sym{n}\) that contains all the
open subsets of~\(\Sym{n}\). For distinct \(i,j \in [n]\), set
\(\Bscr_{ij} \coloneqq e_i - [0,1]e_j\). For each \(S \in
\tbinom{[n]}{3}\) and \(m \in \Naturals \setminus \set{0}\), define
\begin{equation*}
\Bscr_{S,m}
\coloneqq
\setst{
b \in \Reals^n
}{
b \perp \ones,\,
\pnorm[\infty]{b} = 1,\,
\abs{b_i} \geq \tfrac{1}{m}\,\forall i \in S
}.
\end{equation*}
Clearly, each \(\Bscr_{ij}\) and \(\Bscr_{S,m}\) is compact. Let
\(\varphi \colon b \in \Reals^n \mapsto -\oprodsym{b} \in \Sym{n}\).
By \Cref{prop:omega},
\begin{equation}
\label{eq:50}
\Omega_M
=
\bigcup_{i \in [n]}
\bigcup_{j \in [n]\setminus\set{i}}
\varphi(\Bscr_{ij})
\cup
\bigcup_{m=1}^{\infty}
\bigcup_{S \in \tbinom{[n]}{3}}
\varphi(\Bscr_{S,m}).
\end{equation}
Since each \(\varphi(\Bscr_{ij})\) and each \(\varphi(\Bscr_{S,m})\)
is compact, \eqref{eq:50} shows that \(\Omega_M\) is an
\(F_{\sigma}\), i.e.,~a~countable union of closed sets, and hence a
Borel set. This proves~\eqref{eq:49}.
Next we prove that
\begin{equation}
\label{eq:51}
H_{n-2}(\Omega_M) \in (0,\infty)
\end{equation}
from which it will follow via~\eqref{eq:18} that
\begin{equation}
\label{eq:52}
d = n-2.
\end{equation}
Again we shall use \Cref{prop:omega}. By \Cref{cor:bi-Lipschitz}
and \Cref{thm:bi-Lipschitz-tensor},
\begin{equation}
\label{eq:53}
H_1\paren[\Big]{
\bigcup_{i \in [n]}
\bigcup_{j \in [n]\setminus\set{i}}
\varphi(\Bscr_{ij})
}
\in (0,\infty).
\end{equation}
Moreover,
\begin{equation*}
\Omega_M
\supseteq
\setst{
-\oprodsym{b}
}{
b = -1 \oplus c,\,
c \in \Reals_+^{n-1},\,
\iprodt{\ones}{c} = 1
}
\implies
H_{n-2}(\Omega_M) > 0.
\end{equation*}
For each \(s \in \set{\pm1}^n\) and \(i \in [n]\), the polytope
\(\Bscr_{s,i} \coloneqq \setst{b \in \Reals_s^n}{b \perp
\ones,\, -\ones\leq b\leq\ones,\, b_i = s_i}\) has dimension less
than or equal to \(n-2\). Since
\begin{equation*}
\Omega_M
\subseteq
\Nscr \cup
\bigcup_{s \in \set{\pm1}^n} \bigcup_{i \in [n]} \varphi(\Bscr_{s,i})
\end{equation*}
for some set \(\Nscr\) of zero \(d\)-dimensional Hausdorff measure,
and each \(\varphi(\Bscr_{s,i})\) has finite \(d\)-dimensional
Hausdorff measure by \Cref{cor:bi-Lipschitz} and
\Cref{thm:bi-Lipschitz-tensor}, the proof of~\eqref{eq:51} is
complete.
In the remainder of the proof we shall use subsets of~\(\Reals^n\)
with constraints on the coordinates that are zero:
\begin{equation*}
\Zscr_i
\coloneqq
\setst{b \in \Reals^n}{b_i = 0}
\quad
\forall i \in [n],
\qquad\text{and}\qquad
\Zscr_{\emptyset}
\coloneqq
\Reals^n \setminus \bigcup_{i \in [n]} \Zscr_i
=
\setst{b \in \Reals^n}{\supp(b) = [n]}.
\end{equation*}
Define also
\begin{gather*}
\Omega_V
\coloneqq
\setst{
b \in \Reals^n
}{
b \perp \ones,\,
\card{\supp(b)} \geq 3,\,
\pnorm[\infty]{b} = 1
},
\\
\Gscr_V
\coloneqq
\setst{
b \in \Omega_V
}{
-\oprodsym{b} \in \Gscr_M
},
\\
\overline{\Gscr_V}
\coloneqq
\Omega_V \setminus \Gscr_V,
\\
\Bscr_{\textrm{bal}}
\coloneqq
\setst{
b \in \Omega_V
}{
b \text{ is strictly balanced}
},
\\
\overline{\Bscr_{\textrm{bal}}}
\coloneqq
\Omega_V \setminus \Bscr_{\textrm{bal}}.
\end{gather*}
\Cref{prop:sc-iff-sb} implies that
\begin{gather}
\label{eq:54}
\Gscr_V \cap \Zscr_{\emptyset} = \Bscr_{\textrm{bal}} \cap \Zscr_{\emptyset},
\\
\overline{\Gscr_V} \cap \Zscr_{\emptyset} = \overline{\Bscr_{\textrm{bal}}} \cap \Zscr_{\emptyset}.
\end{gather}
For each \(i \in [n]\), we have \(\Gscr_V \cap \Zscr_i \subseteq
\Omega_V \cap \Zscr_i\) and the set on the RHS has zero
\(d\)-dimensional Hausdorff measure. Hence,
\begin{equation}
\label{eq:55}
H_d(\Gscr_V \cap \Zscr_i) = 0
\qquad
\forall i \in [n].
\end{equation}
Define \(\varphi_s\) as in~\eqref{eq:41} for each \(s \in
\set{\pm1}^n\). By putting together \eqref{eq:53}, \eqref{eq:55},
and~\eqref{eq:54}, we find that
\begin{equation}
\label{eq:56}
\Gscr_M
=
\Nscr \cup \bigcup_{s \in \set{\pm1}^n} \varphi_s(\Gscr_V \cap \Zscr_{\emptyset} \cap \Reals_s^n)
=
\Nscr \cup \bigcup_{s \in \set{\pm1}^n} \varphi_s(\Bscr_{\textrm{bal}} \cap \Zscr_{\emptyset} \cap \Reals_s^n)
\end{equation}
for some subset \(\Nscr \subseteq \Omega_M\) such that \(H_d(\Nscr) = 0\).
Let us prove that
\begin{equation}
\label{eq:57}
\Gscr_M \in \Sigma_M.
\end{equation}
For each \(m \in \Naturals \setminus \set{0}\) and each \(U \in \tbinom{[n]}{3}\),
define
\begin{equation*}
\Bscr_{\textrm{bal},m,U}
\coloneqq
\setst[\Big]{
b \in \Reals^n
}{
b \perp \ones,\,
\pnorm[\infty]{b} = 1,\,
\abs{b_i} \geq \tfrac{1}{m}\,\forall i \in U,\,
\abs{b_i} + \tfrac{1}{m} \leq \sum_{j \in [n]\setminus\set{i}} \abs{b_j}
\,\forall i \in [n]
}.
\end{equation*}
Clearly, \(\Bscr_{\textrm{bal}} = \bigcup_{m=1}^{\infty} \bigcup_{U \in \tbinom{[n]}{3}} \Bscr_{\textrm{bal},m,U}\). Hence,
by~\eqref{eq:56},
\begin{equation}
\label{eq:58}
\Gscr_M = \Nscr \cup \bigcup_{m=1}^{\infty} \bigcup_{U \in
\tbinom{[n]}{3}} \bigcup_{s \in \set{\pm1}^n}
\varphi_s(\Bscr_{\textrm{bal},m,U} \cap \Zscr_{\emptyset} \cap \Reals_s^n).
\end{equation}
Since each \(\varphi_s(\Bscr_{\textrm{bal},m,U} \cap \Zscr_{\emptyset} \cap \Reals_s^n)\) is
compact, it follows that \(\Gscr_M\) is the union of a null set with
an~\(F_{\sigma}\), and hence \(\Gscr_M \in \Sigma_d\). This
proves~\eqref{eq:57}.
Set
\begin{equation*}
\xcirc \coloneqq 1 \oplus \frac{1}{n-1} \oplus \frac{-n}{(n-1)(n-2)} \ones
\in \Reals^n,
\quad
\eps \coloneqq \frac{3}{4(n-1)(n-2)},
\quad
\text{and}
\quad
s(x) \coloneqq 1 \oplus 1 \oplus -\ones \in \set{\pm1}^n.
\end{equation*}
It is not hard to verify that
\begin{equation}
\label{eq:59}
\xcirc + \eps(\Ball_{\infty} \cap \set{e_1,\ones}^{\perp})
\subseteq
\Bscr_{\textrm{bal}} \cap \Zscr_{\emptyset} \cap \Reals_{s(x)}^n.
\end{equation}
Since the set in the LHS of~\eqref{eq:59} has positive
\(d\)-dimensional measure, so does the set in the RHS of~\eqref{eq:59},
whence
\begin{equation}
\label{eq:60}
H_d(\Gscr_M) > 0
\end{equation}
by \Cref{cor:bi-Lipschitz}, \Cref{thm:bi-Lipschitz-tensor},
and~\eqref{eq:56}.
Set
\begin{equation*}
\ycirc \coloneqq 1 \oplus -\frac{1}{n-1}\ones \in \Reals^n,
\quad
\delta \coloneqq \frac{1}{2(n-1)},
\quad
\text{and}
\quad
s(y) \coloneqq 1 \oplus -\ones \in \set{\pm1}^n.
\end{equation*}
It is not hard to verify that
\begin{equation}
\label{eq:61}
\ycirc + \delta(\Ball_{\infty} \cap \set{e_1,\ones}^{\perp})
\subseteq
\overline{\Bscr_{\textrm{bal}}} \cap \Zscr_{\emptyset} \cap \Reals_{s(y)}^n.
\end{equation}
Hence,
\begin{equation*}
\overline{\Gscr_M}
\supseteq
\varphi(\overline{\Gscr_V} \cap \Zscr_{\emptyset})
=
\varphi(\overline{\Bscr_{\textrm{bal}}} \cap \Zscr_{\emptyset})
\supseteq
\varphi_{s(y)}\paren{
\overline{\Bscr_{\textrm{bal}}} \cap \Zscr_{\emptyset} \cap \Reals_{s(y)}^n
}
\supseteq
\varphi_{s(y)}\paren{
\ycirc + \delta(\Ball_{\infty} \cap \set{e_1,\ones}^{\perp})
}.
\end{equation*}
Thus,
\begin{equation*}
H_d(\overline{\Gscr_M}) > 0
\end{equation*}
by \Cref{cor:bi-Lipschitz} and \Cref{thm:bi-Lipschitz-tensor}.
\end{proof}
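The two `It is not hard to verify' claims in the proof can be spot-checked at the centers \(\xcirc\) and \(\ycirc\) of the two balls; the sketch below does so for \(n = 5\) with exact rational arithmetic:

```python
from fractions import Fraction as F

n = 5
xcirc = [F(1), F(1, n - 1)] + [F(-n, (n - 1) * (n - 2))] * (n - 2)
ycirc = [F(1)] + [F(-1, n - 1)] * (n - 1)

def strictly_balanced(b):
    total = sum(abs(v) for v in b)
    return all(abs(v) < total - abs(v) for v in b)

# Both centers are orthogonal to the all-ones vector and fully supported;
# xcirc is strictly balanced, while ycirc achieves balance only with equality.
print(sum(xcirc), sum(ycirc), strictly_balanced(xcirc), strictly_balanced(ycirc))
```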
\section{Conclusion}
We proved in \Cref{sec:generic-sc-failure} that the MaxCut
SDP~\eqref{eq:maxcut-sdp} has the worst possible behavior with respect
to strict complementarity when the objective function is in the
boundary of the normal cone of the elliptope at any of its vertices.
At first glance, this may seem surprising since the MaxCut SDP is so
elementary and has so many favorable
properties. However, as we explain next, from a properly chosen
viewpoint this bad behavior is not so surprising.
Consider, for instance, the convex set \(\Cscr \subseteq \Reals^2\) in
\Cref{fig:football}. For concreteness, an explicit description
of~\(\Cscr\) is given by
\begin{equation}
\label{eq:62}
\Cscr
\coloneqq
\setst{x \in \Reals^2}{\norm{x}+\abs{x_1}\leq 1}
=
\setst[\big]{
x \in \Reals^2
}{
\abs{x_1}\leq1/2,\,\abs{x_2} \leq \sqrt{1-2\abs{x_1}}\,
},
\end{equation}
and it is not hard to show that \(\Cscr\) is the projection of the
feasible region of an SDP. It is intuitive and simple to verify that
\(\ones\) lies in (the boundary of) the normal cone of~\(\Cscr\) at
its vertex~\(e_2\), but \(\ones\) is not in the relative interior of
any normal cone of~\(\Cscr\). We can trace this phenomenon to the
smooth, nonpolyhedral boundary of~\(\Cscr\) around~\(e_2\). It is
straightforward to extend this example to~\(\Reals^3\) by considering
the solid of revolution obtained by rotating~\(\Cscr\) around the
\(e_2\) axis, i.e., an American football.
\begin{figure}
\centering
\begin{tikzpicture}
\draw[fill=gray!30!white]
plot [scale=1,domain=-0.5:0,smooth,samples=200,variable=\x] ({\x},{sqrt(1+2*\x)}) --
plot [scale=1,domain=0:-0.5,smooth,samples=200,variable=\x,rotate=180] ({\x},{-sqrt(1+2*\x)}) --
plot [scale=1,domain=-0.5:0,smooth,samples=200,variable=\x,rotate=180] ({\x},{sqrt(1+2*\x)}) --
plot [scale=1,domain=0:-0.5,smooth,samples=200,variable=\x] ({\x},{-sqrt(1+2*\x)});
\shade[bottom color=gray!30, top color=white] (0,1) -- (1.2,2.2) -- (-1.2,2.2) -- cycle;
\draw[->, help lines, dashed, font=\scriptsize] (-1.2,0) -- (1.2,0) node[right] {$x_1$};
\draw[->, help lines, dashed, font=\scriptsize] (0,-1.2) -- (0,2.2) node[above] {$x_2$};
\foreach \x in {-1,1} {
\draw[help lines,font=\scriptsize] (\x,2pt)--(\x,-2pt) node[below] {$\x$};
}
\draw[scale=1,domain=-0.5:0,smooth,samples=200,variable=\x] plot ({\x},{sqrt(1+2*\x)});
\draw[scale=1,domain=-0.5:0,smooth,samples=200,variable=\x,rotate=180] plot ({\x},{sqrt(1+2*\x)});
\draw[scale=1,domain=-0.5:0,smooth,samples=200,variable=\x] plot ({\x},{-sqrt(1+2*\x)});
\draw[scale=1,domain=-0.5:0,smooth,samples=200,variable=\x,rotate=180] plot ({\x},{-sqrt(1+2*\x)});
\draw[->, dashed] (0,1) -- (1.2,2.2);
\draw[->, dashed] (0,1) -- (-1.2,2.2);
\draw[->, thick] (0,1) -- (0.5,1.5) node[below right] {\(\mathbbm{1}\)};
\node at (0,-0.5) {\(\mathscr{C}\)};
\node[font=\scriptsize] at (0,1.8) {\(\mathcal{N}(\mathscr{C};e_2)\)};
\end{tikzpicture}
\caption{The set \(\Cscr\) defined in~\eqref{eq:62} and its normal cone \(\Ncal(\Cscr;e_2)\) at \(e_2\).}
\label{fig:football}
\end{figure}
The elliptope looks somewhat similar to~\(\Cscr\) in the following
sense. Let us consider the projection \(\Elliptope{n}' \subseteq
\Reals^{\tbinom{n}{2}}\) of the elliptope \(\Elliptope{n}\) onto its
off-diagonal entries. For \(n \geq 3\), the set \(\Elliptope{n}'\) is
a compact nonpolyhedral convex set with \(2^{n-1}\) vertices by
\Cref{thm:vertices-elliptope}. Intuitively, \(\Elliptope{n}'\) can be
thought of as being obtained from the polytope which is the convex
hull of these \(2^{n-1}\) vertices by inflating it like a balloon,
while keeping the vertices fixed. (In~fact, by
\cite[Proposition~2.9]{LaurentP95a}, the line segments between the
\(2^{n-1}\) vertices are also kept fixed.) In~this~way,
\(\Elliptope{n}'\) is a round, plump convex set, whose boundary is
smooth almost everywhere, and the neighborhood of \(\Elliptope{n}'\)
around any vertex looks like (a generalization of) what is depicted by
the set~\(\Cscr\) from the previous paragraph. Thus, when one
considers that the elliptope around a vertex ``locally'' looks
like~\(\Cscr\) around~\(e_2\), the poor behavior of the MaxCut SDP
described in~\Cref{sec:generic-sc-failure} makes more intuitive sense.
The discussion above indicates a natural direction for future
research: extending \Cref{thm:generic-sc-failure} to more general
SDPs by requiring the feasible region to be ``locally
nonpolyhedral'' around its vertices.
\section{Introduction}
Search engine users want relevant documents returned quickly. Searching is often performed using an inverted index \cite{yan2009inverted}, which stores a list of occurring terms and, for each term, records the documents in which it occurs, along with the term frequency within each document and the corresponding position information.
The search process for conjunctive queries goes through two main phases: list intersection and ranking \cite{constantinos2013a}. We first find the documents that contain all the query terms, then rank them according to their relevance to the query; ideally, the most relevant documents are returned. Traditional methods for assessing if a document is relevant to a query use two kinds of features: (a) term-independent features, e.g., PageRank; and (b) term-dependent features. Term-dependent features focus on term frequency and inverted document frequency. BM25 \cite{robertson1976relevance} is one of the most widely used; given query $q$, the document $d$ is assigned the score:
\begin{equation}\label{eq:BM25_score}
S_{\mathrm{BM}25}(q,d)=\sum_{\text{term } t \in q} w_t\frac{f_{d,t}(k_1+1)}{f_{d,t}+k_1(1-b+\frac{b|d|}{\mathrm{avg}_d})},
\end{equation}
where $k_1$ and $b$ are predefined constants, $|d|$ is the length of document $d$, $\mathrm{avg}_d$ is the average document length in the collection, $f_{d,t}$ is the frequency of $t$ in document $d$, and $w_t$ is the inverse document frequency (idf) of term $t$.
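As a concrete (hedged) sketch of~\eqref{eq:BM25_score} in Python, with \(w_t = \log(N/\mathrm{df}_t)\) as one common idf choice and the usual default constants \(k_1 = 1.2\), \(b = 0.75\):

```python
import math

# Minimal BM25 scorer following eq. (1); df maps terms to document frequencies.
def bm25(query, doc, df, n_docs, avg_len, k1=1.2, b=0.75):
    score = 0.0
    for t in query:
        f_dt = doc.count(t)
        if f_dt == 0:
            continue
        w_t = math.log(n_docs / df[t])                   # idf of term t
        norm = k1 * (1 - b + b * len(doc) / avg_len)     # length normalization
        score += w_t * f_dt * (k1 + 1) / (f_dt + norm)
    return score

d1 = "word search engine word word word word search engine word".split()
d2 = "word word word word word word word word word word".split()
df = {"search": 50, "engine": 80}
print(bm25(["search", "engine"], d1, df, 1000, 10.0))   # positive
print(bm25(["search", "engine"], d2, df, 1000, 10.0))   # 0.0
```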
The order in which terms appear in the document and the distance between their locations are both important ranking criteria. Consider the two-term query ``search engine'' for ranking the following toy example documents:
\begin{itemize}[leftmargin=0.7cm]
\item[$d_1$:] \textit{$\ldots$ word \textbf{search engine} word word word word \textbf{search engine} word word word word word word $\ldots$}
\item[$d_2$:] \textit{$\ldots$ \textbf{search} word word \textbf{search} word word \textbf{engine} \textbf{search} word \textbf{engine search} word \textbf{engine} word $\ldots$}
\end{itemize}
In this example, document $d_1$ can be regarded as more relevant, despite $f_{d_2,t}>f_{d_1,t}$ for both terms $t \in \{\text{search},\text{engine}\}$. Consequently, there is active research on methods of integrating term proximity (TP) into the usual ``bag of words'' ranking \cite{yan2010efficient}.
TP score has been demonstrated to have an overall positive effect on search quality \cite{tao2007exploration}.
However, there are two caveats: (a) some queries return inferior rankings when utilizing TP score, and (b) incorporating TP score increases computational time. This motivates us to propose a model that predicts which queries are likely to benefit from incorporating TP score into their ranking.
The remainder of the paper is organized as follows: Section~\ref{se:related} summarizes related work on proximity ranking models. Section~\ref{se:TP_model} introduces our proposed method. Section~\ref{se:exp} details the experiment setup and presents the results. In Section~\ref{se:conc}, we present our conclusions and suggest future research directions.
\section{Related Work}\label{se:related}
There are two types of models using proximity in ranking: (a) complex ranking functions that combine hundreds of features (TP being one of them) using sophisticated machine learning techniques \cite{liu2007letor}, and (b) variations of
the classic ranking models. While the former achieves more effective results than the latter, it is sometimes too computationally expensive to use. Recent work has explored approaches to achieve a better balance between retrieval effectiveness and efficiency \cite{wang2011cascade}. However, in this paper, we aim to treat each query flexibly, so we focus on the latter.
Rasolofo and Savoy \cite{rasolofo2003term} proposed a TP-based ranking scheme BM25TP, a modified version of BM25 \eqref{eq:BM25_score}, which incorporates term proximity (a similar scheme was presented by B{\"u}ttcher \textit{et al}.\ \cite{buttcher2006term}). In BM25TP, the rank of document $d$ is given by:
\begin{equation}\label{eq:BM25TP_score}
\mathrm{S}^\text{RS}_\text{BM25TP}(q,d)=S_\text{BM25}(q,d)+\mathrm{S}^\text{ACC}_\text{TP}(q,d)
\end{equation}
\begin{equation}\label{eq:ACC_score}
S^{\text{ACC}}_{\text{TP}}(q,d)=\sum_{t\in q} \min\{1,w_t\} \frac{\mathrm{acc}_d(t) (k_1+1)}{\mathrm{acc}_d(t)+k_1\big(1-b+\frac{b|d|}{\mathrm{avg}_d}\big)},
\end{equation}
where $\mathrm{acc}_d(t)$ denotes the proximity accumulator for term $t$:
\begin{equation*}
\mathrm{acc}_d(t)=\sum_{s \neq t} w_t\,\mathrm{tpi}_d(t,s)
\end{equation*}
and
\begin{equation*}
\mathrm{tpi}_d(t,s)=\sum_{\substack{\text{occurrence } o(t) \text{ of } t \\ \text{in document } d}}\mathrm{dist}(o(t),s)^{-2}
\end{equation*}
for each given term pair $(t,s)$, where $t \neq s$, and $\mathrm{dist}(o(t),s)$ is the number of terms between the position of $o(t)$ and the position of the preceding occurrence of the term $s$.
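A sketch of these accumulators (hedged: we read \(\mathrm{dist}(o(t),s)\) as the positional gap to the nearest preceding occurrence of \(s\), and drop occurrences of \(t\) with no preceding \(s\)):

```python
# tpi_d(t, s): sum of dist^-2 over occurrences of t, measured to the closest
# preceding occurrence of s; positions are 0-based token indices in d.
def tpi(pos_t, pos_s):
    total = 0.0
    for p in pos_t:
        preceding = [q for q in pos_s if q < p]
        if preceding:
            total += (p - max(preceding)) ** -2
    return total

# acc_d(t) = sum over s != t of w_t * tpi_d(t, s), with w_t the idf of t
# as in the formula above.
def acc(t, positions, w):
    return sum(w[t] * tpi(positions[t], positions[s])
               for s in positions if s != t)

positions = {"search": [0, 3, 7, 10], "engine": [6, 9, 12]}  # toy document d2
w = {"search": 4.0, "engine": 4.5}
print(acc("search", positions, w))  # 4.0 * (1 + 1) = 8.0
```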
An assortment of other methods of utilizing TP in ranking have been studied. Akritidis \textit{et al}.\ \cite{akritidis2012improved} not only takes into account term proximity, but also the order of terms in the query. Zhu \textit{et al}.\ \cite{zhu2009effective} put forward some new ideas based on web page structure and set estimation rules for early termination to speed up top-$k$ computation.
Svore \textit{et al}.\ \cite{svore2010good} and Song \textit{et al}.\ \cite{song2008viewing} utilized TP in the form of ``spans'' to improve the accuracy of ranking functions. Tao and Zhai \cite{tao2007exploration} introduced five measures and combined them with an existing retrieval model with two newly designed heuristic constraints; their experiments showed significant performance improvement on the KL-divergence language model and the BM25 model. Lv and Zhai \cite{lv2009positional} presented four proximity-based density functions to estimate different positional language models (PLMs), namely the Gaussian, triangular, cosine, and circle. Metzler \textit{et al}.\ \cite{bendersky2010learning,metzler2005markov} developed a general Markov random field (MRF) retrieval model that captures various kinds of term dependencies: full independence, sequential dependence, and full dependence. Cummins and O'Riordan \cite{cummins2009learning} outlined an extensive list of possible term proximity measures, and incorporated them to the original framework by machine learning methods.
Different queries benefit from different proximity features and methods \cite{cummins2009learning,lu2014effective,svore2010good}. Moreover, a too-complicated ranking formula may be a burden to use, both for the operator and by requiring too much overhead. This motivates us to propose a method where TP statistics are utilized only when they are most useful.
\section{Selective TP model}\label{se:TP_model}
\subsection{Features considered}
Table~\ref{ta:features} lists the features considered in this work. These features can be roughly divided into two categories: (a) query dependent, and (b) term dependent. Term dependent features are divided into two subcategories: frequency-based and position-based.
The only query-dependent feature we include is the number of documents relevant to the query. The inverted document frequency (idf) indicates the overall importance of a term, and we utilize its statistics: mean, min, max, and sum; the sum of squared idfs; and the sum of squared differences between ascendant or descendant idf values between consecutive terms in a query\footnote{Ascendant (resp.\ descendant) idfs refer to consecutive term pairs whose first idf value is less (resp.\ greater) than the second.}. The position-based features include the average position of a term in a document, averaged over all documents, which we call the general position (abbreviated pos). Most pos statistics used are analogous to the idf statistics. These features vary in their ability to distinguish queries from each other; we specify the features actually used in Section~\ref{se:exp}.
\begin{table}[t]
\caption{Summary of query features used}
\label{ta:features}
\centering
\begin{tabular}{|l|}
\hline
\textbf{query features} \\
\hline
number of relevant documents\\
\cline{1-1}
\textbf{term frequency features} \\
\cline{1-1}
mean; min; max; sum idf \\
sum of squared idfs \\
sum of squares of ascendant idfs \\
sum of squares of descendant idfs \\
\cline{1-1}
\textbf{term position features} \\
\cline{1-1}
mean; min; max; sum pos \\
square statistics \\
\cline{1-1}
\end{tabular}
\end{table}
\subsection{Term proximity score}\label{se:term_prox}
Three of the most popular TP ranking functions, which we test our selective model based on, are introduced here. Two are BM25TP, given by \eqref{eq:BM25TP_score}, and MRF by Metzler \textit{et al}. \cite{metzler2005markov}. MRF is defined by the following ranking function (full details are omitted for space reasons):
\begin{align}
\mathrm{S}_\text{MRF}(q,d) &:= \mathrm{S}_\text{TF}(q,d) + \sum_{c\in O}{\lambda}_Of_O(c) + \sum_{c\in {O \cup U}}{\lambda}_Uf_U(c); \label{eq:MRF_rank} \\
\mathrm{S}_\text{TF}(q,d) &:= \sum_{c\in T}{\lambda}_Tf_T(c). \label{eq:TERM_rank}
\end{align}
where $T$ is the set of 2-cliques involving a query term and a document, $O$ is the set of cliques containing the document node and two or more query terms that appear contiguously within the query, and $U$ is the analogous set of cliques for query terms appearing non-contiguously within the query. In this paper, we use an extended model proposed in \cite{bendersky2010learning}.
The third one we use is from Tao and Zhai \cite{tao2007exploration} who instead calculate a term-proximity-based rank by:
\begin{align}
\mathrm{S}^\text{TZ}_\text{EXP}(q,d) &:= \mathrm{S}^\text{TZ}_\text{TP}(q,d)+\mathrm{S_{BM25}}(q,d); \label{eq:TaoZhai_rank}\\
\mathrm{S}^\text{TZ}_\text{TP}(q,d) &:= \log\big(\alpha+\exp(-\mathrm{min\_dist}(q,d))\big) \nonumber
\end{align}
where $\mathrm{min\_dist}(q,d)$ is the minimum distance between any occurrence of any two query terms in document $d$, and $\alpha$ is a parameter. Tao and Zhai state that \eqref{eq:TaoZhai_rank} provides stable performance when $\alpha$ is set to 0.3, which we use for our experiments.
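A sketch of this scheme (hedged: \(\mathrm{min\_dist}\) is taken as the smallest positional gap between occurrences of two distinct query terms):

```python
import math
from itertools import combinations

def min_dist(positions):
    # smallest gap between occurrences of two distinct query terms
    return min(abs(p - q)
               for t, s in combinations(positions, 2)
               for p in positions[t] for q in positions[s])

def s_tp_tz(positions, alpha=0.3):
    # Tao-Zhai proximity score with the recommended alpha = 0.3
    return math.log(alpha + math.exp(-min_dist(positions)))

d_close = {"search": [5], "engine": [6]}   # adjacent query terms
d_far = {"search": [5], "engine": [20]}    # terms far apart
print(s_tp_tz(d_close) > s_tp_tz(d_far))   # True: closer terms score higher
```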
Several studies model the strength of association between two words with a function that decays with the distance between them \cite{gao2002resolving,vechtomova2006study,yuret1998discovery}.
\subsection{Our approach}
As our ranking functions, we use \eqref{eq:MRF_rank} and a generalized combination of \eqref{eq:BM25TP_score} and \eqref{eq:TaoZhai_rank}:
\begin{align}
\mathrm{S_{EXP}}(q,d) &:= \epsilon\,\mathrm{S^{\text{TZ}}_{TP}}(q,d)+(1-\epsilon)\,\mathrm{S_{BM25}}(q,d); \nonumber
\\
\mathrm{S_{BM25TP}}(q,d) &:= \beta\,\mathrm{S^{ACC}_{TP}}(q,d)+(1-\beta)\,\mathrm{S_{BM25}}(q,d).\label{eq:our_rank}
\end{align}
Parameters $\epsilon$ and $\beta$ are used to adjust the weighting of BM25 and the TP score. We test $\epsilon,\beta \in \{0.1,0.2,\ldots,0.9\}$ for the two query sets in our experiments, and choose the parameters leading to the best mean average precision (MAP).
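For reference, average precision and MAP can be computed from ranked binary relevance judgments as follows (a standard sketch; here AP normalizes by the number of relevant documents for the query):

```python
# Average precision of one ranked result list, given 0/1 relevance flags in
# rank order and the number of relevant documents for the query; MAP is the
# mean of AP over all queries.
def average_precision(flags, n_relevant):
    hits, total = 0, 0.0
    for rank, rel in enumerate(flags, start=1):
        if rel:
            hits += 1
            total += hits / rank
    return total / n_relevant if n_relevant else 0.0

def mean_average_precision(runs):
    return sum(average_precision(f, n) for f, n in runs) / len(runs)

print(average_precision([1, 0, 1, 0], 2))  # (1/1 + 2/3) / 2
```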
The MAP of each query is used to evaluate the performance of the two ranking models. If a query gets better results using \eqref{eq:our_rank} than \eqref{eq:BM25_score} or \eqref{eq:MRF_rank} than \eqref{eq:TERM_rank}, its features will be labeled as 1, otherwise 0. These results are used to train a (supervised) classifier, determining whether or not using TP score is likely to benefit the document rankings for arbitrary queries.
We use a Back Propagation Artificial Neural Network (BP-ANN) to build our selective TP model, because of its powerful learning ability and rapid forecasting speed.
BP-ANN \cite{rumelhart1988learning}
uses the back-propagation algorithm to modify the internal network weights during training. In our experiments, we build a BP-ANN with a single-node output layer (denoting the query type), one hidden layer, and input nodes corresponding to the query features.
\section{Experiments}\label{se:exp}
All experiments are performed on the GOV2 data set using Porter stemming. We use the query set MQ2007 and MQ2008 for evaluation. The BM25 scores used in this paper are extracted from LETOR4.0\footnote{\url{http://research.microsoft.com/en-us/um/beijing/projects/letor/}}.
Figure~\ref{fig:figuremap} shows the MAP values of the rankings as the proportion of queries using TP (EXP-score) varies. The queries are sorted by how beneficial using TP scores in their ranking would be, with the most benefited coming first. Figure~\ref{fig:figuremap} shows that using TP scores does not always improve retrieval quality (assigning more than around 40\% has no benefit). We also test methods assigning a label $0$ or $1$ to a query randomly, which again shows that naively increasing the proportion of queries utilizing TP will not necessarily improve performance.
\begin{figure}[htp]
\centering
\includegraphics[width=5.5cm]{figure1_left}
\includegraphics[width=5.5cm]{figure1_right}
\caption{MAP on the two query sets as the proportion of queries using TP score varies; Sorted difference in average precision of each query}
\label{fig:figuremap}
\end{figure}
Only relevant features are necessary for the BP-ANN model construction, so we remove unnecessary features. To determine feature importance, we combine statistical methods (rank-sum, z-score, and $\chi^2$), a search algorithm (decision tree), and a feature weighting algorithm (relief).
We find that max idf, sum idf, and sum of squared idfs have relatively more importance among the term frequency features, and max pos, min pos, sum pos, and mean pos have relatively more importance among the term position features. As such, we primarily use max pos, min pos, sum pos, and mean pos for EXP; sum idf, max idf, and min pos for MRF; and min idf, sum of squared idfs, sum of squared differences between descendant idfs, and sum pos for BM25TP.
As other researchers have observed \cite{cummins2009learning,huston2014comparison,lu2014effective,svore2010good}, we also find that query length is an important factor in distinguishing whether or not using the TP score will be beneficial, so we train independent models for different query lengths. Because of their effective performance in a more systematic study \cite{azzopardi2009query}, we consider queries with 3 to 5 terms.
After each query is labeled, we train a neural network on 70\% of all the data and test the effectiveness and efficiency of the model on the remaining data. In our BP networks, the input layer has 3 to 5 features and the output layer has 1 node. The maximum number of iterations is set to 1000 and the learning rate of the network is 0.01. We choose the sigmoid function
\[f(x) = \frac{1}{1 + e^{-x}}\]
as the activation function and test the performance of different numbers of hidden layer nodes. All the networks aim to predict as many queries labeled 1 correctly as possible, motivated by Figure~\ref{fig:figuremapdeg}, which indicates that mispredicting queries labeled 1 is consistently worse than other mispredictions. This bias on mispredictions is used as a reference for parameter adjustment in the network. The number of hidden layer nodes used and the momentum coefficient $\alpha$ are listed in Table~\ref{ta:netParameter}, along with the precision and recall values on the training and test data.
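The network shape described above can be sketched as follows (NumPy only; the weights here are random placeholders for illustration, whereas in our experiments they are learned by back-propagation with at most 1000 iterations and learning rate 0.01):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class OneHiddenBPANN:
    """Minimal sketch of the network shape used here: n_in query
    features, one sigmoid hidden layer, a single sigmoid output node
    whose value is thresholded into the query label (1 = "query is
    likely to benefit from the TP score")."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Placeholder weights; training by back-propagation is omitted.
        self.W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, 1))
        self.b2 = np.zeros(1)

    def predict_proba(self, X):
        h = sigmoid(X @ self.W1 + self.b1)
        return sigmoid(h @ self.W2 + self.b2).ravel()

    def predict(self, X, threshold=0.5):
        return (self.predict_proba(X) >= threshold).astype(int)
```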
\begin{figure}[htp]
\centering
\includegraphics[width=7.5cm]{MAP_degradation}
\caption{MAP degradation caused by wrong judgement for different query sets}
\label{fig:figuremapdeg}
\end{figure}
We compare three TP-based rankings for each TP score: \texttt{\_tpAll} (where TP is always used for ranking), \texttt{\_tpS} which calculates TP score depending on BP-ANN predictions (the proposed ranking method), and \texttt{\_oracle}, a theoretically perfect situation where we know \textit{a priori} whether or not a query would benefit from TP score. For comparison, we also include a non-TP-based ranking \texttt{\_tpNo} given by \eqref{eq:BM25_score} and \eqref{eq:TERM_rank}.
\begin{table*}[htp]
\centering
\caption{Parameters of BP-ANN and the precision and recall values on the training and test data.}
\label{ta:netParameter}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
& \multicolumn{3}{|c|}{\texttt{EXP}}& \multicolumn{3}{|c|}{\texttt{MRF}}& \multicolumn{3}{|c|}{\texttt{BM25TP}}\\
\hline
\hline
Query len.&3&4&5&3&4&5&3&4&5 \\
\hline
\hline
Prec.\ on train&59.29\%&61.74\%&61.15\%&53.48\%&60.26\%&56.96\%&54.02\%&49.77\%&42.18\%\\
\hline
Recall on train&89.73\%&99.39\%&94.12\%&91.25\%&90.97\%&91.84\%&90.38\%&87.60\%&91.18\% \\
\hline
Prec.\ on test&69.30\%&72.53\%&64.62\%&53.98\%&62.89\%&55.88\%&51.40\%&51.06\%&47.37\% \\
\hline
Recall on test&98.80\%&95.65\%&95.45\%&88.41\%&92.42\%&95.00\%&82.09\%&92.31\%&90.00\% \\
\hline
\hline
\#hidden nodes&43&58&47&45&39&47&52&10&20\\
\hline
$\alpha$&1&0.85&0.85&0.35&0.9&0.85&0.95&0.25&0.75 \\
\hline
\end{tabular}
\end{table*}
We use three methods for measuring the quality of the rankings: MAP, precision for top-$k$ results, and Mean Normalized Discounted Cumulative Gain (Mean NDCG), given in Table~\ref{ta:GOV_res}. We also list the number of queries benefiting from calculation of TP score and the throughput.
Table~\ref{ta:GOV_res} shows that the TP rankings (\texttt{\_tpAll} and \texttt{\_tpS}) consistently yield significantly better rankings than without TP (\texttt{\_tpNo}). We also see that the selective model (\texttt{\_tpS}) returns slightly better rankings than \texttt{\_tpAll} while having better throughput. In terms of MAP, MRF is consistently superior to the other ranking formulas; however, \texttt{\_tpS} used with EXP is the nearest to the corresponding \texttt{\_oracle} (the best possible MAP). Further, \texttt{\_tpS} also performs better than \texttt{\_tpAll} in terms of precision at $k=1$, which is a critical measure for exact queries, such as queries restricted to web sites.
\begin{table*}[htp]
\centering
\caption{Performance under the various ranking methods in terms of query length (Q len.), MAP, mean NDCG, precision, and throughput. We also include the number of queries which uses TP ranking (\#TP-Q).}\label{ta:GOV_res}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|}
\hline
& \multirow{2}{*}{Q len.} &\multirow{2}{*}{MAP}&\multirow{2}{*}{Mean NDCG}&{Prec.}&{Prec.}&{Prec.}&\multirow{2}{*}{\#TP-Q}&{Throughput}\\
& & & &{$k=1$}&{$k=3$}&{$k=10$}& & (Q/s) \\
\hline
\multirow{12}{2cm}{\texttt{EXP\_BM25\_tpAll} \texttt{EXP\_BM25\_tpS} \texttt{EXP\_oracle} \texttt{EXP\_tpNo}}
&\multirow{4}{*}{3}&0.4358&0.4497&0.3427&0.3683&0.3615&143&334.56\\
&&{0.4400}&{0.4582}&{0.3776}&{0.3893}&{0.3622}&107&379.71\\
&&0.4481&0.4733&0.3986&0.4033&0.3748&80&477.82\\
&&0.3772&0.3967&0.3287&0.3287&0.3182&---&974.75\\
\cline{2-9}
&\multirow{4}{*}{4}&0.3294&0.3524&0.2632&0.2544&0.2649&114&96.42\\
&&{0.3340}&{0.3599}&{0.2807}&{0.2632}&{0.2667}&91&100.06\\
&&0.3470&0.3763&0.2895&0.2749&0.2833&69&111.35\\
&&0.3026&0.3264&0.2193&0.2368&0.2377&---& 440.33\\
\cline{2-9}
&\multirow{4}{*}{5}&0.3397&0.3667&0.2568&0.2523&0.2662&74&80.84\\
&&{0.3422}&{0.3687}&{0.2703}&{0.2568}&{0.2676}&65&81.22\\
&&0.3525&0.3782&0.2703&0.2703&0.2743&44& 98.57\\
&&0.3295&0.3482&0.2568&0.2613&0.2527&---&300.75\\
\hline
\multirow{12}{2cm}{\texttt{MRF\_tpAll} \texttt{MRF\_tpS} \texttt{MRF\_oracle} \texttt{MRF\_tpNo}}
&\multirow{4}{*}{3} &0.4791&0.4883&0.4056&0.3916&0.3811&143&330.00\\
&&{0.4855}&{0.4924}&{0.4266}&{0.4103}&{0.3916}&113&346.53\\
&&0.5261&0.5362&0.4965&0.4592&0.4238&69&443.31\\
&&0.4558&0.4480&0.3706&0.3473&0.3608&---&903.79\\
\cline{2-9}
&\multirow{4}{*}{4} &0.4201&0.4365&0.4298&0.3333&0.3246&114&77.37\\
&&{0.4238}&{0.4416}&0.4123&{0.3450}&{0.3289}&97&77.92\\
&&0.4516&0.4869&0.4561&0.3919&0.3526&66&112.86\\
&&0.3397&0.3514&0.1930&0.2427&0.2693&---& 397.00\\
\cline{2-9}
&\multirow{4}{*}{5}&0.3760&0.3953&0.2535&0.2864&0.2958&71& 80.91\\
&&0.3810&{0.4003}&{0.2958}&{0.2911}&0.2887&68&81.01\\
&&0.4107&0.4395&0.3380&0.3521&0.3268&40&110.77\\
&&0.3338&0.3502&0.2394&0.2488&0.2549&---& 321.98\\
\hline
\multirow{12}{2.2cm}{\texttt{BM25TP\_tpAll} \texttt{BM25TP\_tpS} \texttt{BM25TP\_oracle} \texttt{BM25TP\_tpNo}}
&\multirow{4}{*}{3}&0.4128&0.4302&0.3310&0.3615&0.3493&142&352.71\\
&&0.4145&0.4354&0.3380&0.3662&0.3514&107&424.49\\
&&0.4300&0.4575&0.4085&0.4061&0.3683&67& 489.07\\
&&0.3807&0.4008&0.3380&0.3404&0.3275&---&912.59\\
\cline{2-9}
&\multirow{4}{*}{4}&0.3460&0.3542&0.2870&0.2580&0.2635&115&97.84\\
&&0.3506&0.3621&0.3130&0.2696&0.2730&94&150.02\\
&&0.3679&0.3931&0.3130&0.3014&0.2939&52&206.76\\
&&0.3052&0.3273&0.2261&0.2435&0.2426&---&410.79\\
\cline{2-9}
&\multirow{4}{*}{5}&0.3533&0.3580&0.2162&0.2703&0.2905&74&85.35\\
&&0.3712&0.3832&0.2703&0.2973&0.2986&57 &91.94\\
&&0.3894&0.4036&0.3378&0.3288&0.3149&30&140.26\\
&&0.3425&0.3587&0.2838&0.2928&0.2662&---&306.36\\
\hline
\end{tabular}
\end{table*}
\section{Concluding remarks}\label{se:conc}
Recent studies have achieved promising retrieval performance by taking term proximity into consideration in relevance scoring. In this work, we propose a modified TP score ranking scheme which predicts which queries will benefit from using TP score in their rankings. In this way, we can: (a) achieve a better ranking from utilizing TP scores, and (b) achieve rankings with slightly better quality than the rankings given when always incorporating the TP score, but with better throughput. In essence, we utilize TP score only when it's helpful.
Our work could be extended in several directions, e.g.: (a) The use of more features, particularly those that capture a notion of term proximity, could be explored. (b) We could use a more complicated weighting of the queries' benefit from using TP score (here we use a simple $1$ vs.\ $0$ weighting). This would enable us to use a linear regression model, which may achieve more effective results. (c) Since different features benefit different types of queries, we could train a collection of models, individually designed for a single query type.
\section{Acknowledgments}
This work is partially supported by NSF of China (grant numbers: 61373018, 11301288, 11550110491), Program for New Century Excellent Talents in University (grant number: NCET130301) and the Fundamental Research Funds for the Central Universities (grant number: 65141021).
\bibliographystyle{abbrv}
\section{Introduction}
In recent years, deep neural network based semantic segmentation models have achieved considerable success. This success relies heavily on the large pixel-level annotated datasets over which these models are trained.
However, like many other deep neural network based models, semantic segmentation models suffer considerable performance degradation when tested on images from a domain different from the one used in training.
This problem, attributed to domain shift, is exacerbated in semantic segmentation algorithms since many of them are trained on synthetic datasets, due to the lack of large annotated real-world datasets, and are tested on real-world images.
Retraining or fine-tuning for new domains is expensive, time consuming, and in many cases not possible, due to the large number of ever-changing domains, especially in the case of autonomous vehicles, and the unavailability of annotated data.
To overcome domain shift, unsupervised domain adaptation (UDA) has been employed with reasonable success \cite{zou2018unsupervised,zou2019crst, mlsl2020}, but the state of the art still lacks the desired accuracy.
Many unsupervised domain adaptation algorithms for semantic segmentation \cite{hoffman2017cycada, chen2017no,clan_2019_CVPR, Yunsh2019bidirect, structure_2019_CVPR, dada_2019_ICCV, iqbal2022leveraging} perform global marginal distribution alignment through adversarial learning, translating the input image, feature volume, or output probability tensor from one domain to the other.
The adversarial loss looks at the whole tensor (image/feature or output probability) even when the objective is to improve pixel-level label assignments \cite{clan_2019_CVPR}; moreover, aligning marginal distributions does not guarantee that discriminative information is preserved across the domains \cite{zhang2019category}.
Self-supervised learning methods \cite{mlsl2020, pan2020unsupervised, Lian_2019_pycda, LSE_2020_Naseer, zou2019crst, Yunsh2019bidirect,munir2021ssal} (either independently or along with adversarial learning) try to overcome this challenge by back-propagating the cross-entropy loss computed over pixel-level pseudo-labels generated by the source model.
The quality of these pseudo-labels depends on the generalization capacity of the classifier and affects the overall adaptation process.
A deep neural network based semantic segmentation model, when trained by minimizing the cross-entropy loss, greedily learns representations that capture inter-class variations.
When optimally trained, these inter-class variations should yield an accurate decision boundary, projecting pixels from different classes to different sides of it.
However, due to the domain shift, the decision boundary is not aligned in the target domain, resulting in noisy pseudo-labels and hence poor self-supervised domain adaptation.
Previous works \cite{chen2020homm, kumagai2019unsupervised} have shown that discriminative clustering on target data and moment matching across domains help in adaptation.
CAG-UDA \cite{zhang2019category} and \cite{Deng_2019_ICCV} tried to align class-aware cluster centers across domains for better adaptation.
However, visual semantic classes exhibit a large set of variations due to differences in texture, style, color, pose, illumination, etc. These variations are generally assumed to occur across instances, e.g., two different types of cars, but they frequently manifest within the same instance too, e.g., pixels belonging to different road locations or to different parts of a car.
Class-aware single-cluster alignment might align the centers of the source and target domains without aligning the overall distributions, leaving classes with large variations vulnerable to misclassification in the target domain.
Learning to capture intra-class variations by representing each class with multiple modes, and aligning the modes across domains, might overcome these challenges.
Therefore, we propose a novel class-aware multi-modal distribution alignment method for unsupervised domain adaptation of semantic segmentation models.
We combine together the ideas of distribution alignment and pseudo-label based adaptation, however, instead of just using discriminatively learned features during the adaptation, we explicitly learn representations separately.
In addition to learning inter-class variations by minimizing the cross-entropy loss, the pixel-level intra-class feature variations are captured by learning a multi-modal distribution for each class (Fig. \ref{img:teaser}), resulting in a much more generalized representation.
Both of these tasks have competing requirements: minimizing the cross-entropy loss results in learning inter-class discriminative representations with intra-class consistency, whereas multi-modal distribution learning intends to preserve information that models intra-class variations.
We disentangle these two information requirements by developing a class-aware multi-modal distribution learning (MMDL) module, parallel to the standard segmentation head.
MMDL extracts the spatially low-resolution feature volume from the encoding block and maps it to a spatially high-resolution embedding.
Class-aware multi-modal modeling is performed over these embeddings using distance metric learning \cite{repmet2019}.
Since both heads share the backbone, simultaneously decreasing the loss on both acts as a regularizer over the learned features, resulting in less noisy pseudo-labels.
During domain adaptation, the high quality pseudo-labels allow us to learn domain-invariant class discriminative feature representations in the discriminative space.
At the same time, stochastic mode alignment is performed across domains by minimizing the distance between representations of source and target pixels mapping to the same mode, thus preserving intra-class variations.
The modes themselves are updated by increasing the posterior probability of a target pixel belonging to its closest identified mode.
During adaptation too, these losses, computed in parallel, act as regularizers over each other, hence dampening each other's noise.
Our contributions are summarized as follows.
First, we propose a multi-modal distribution alignment strategy for self-supervised domain adaptation.
By designing a multi-modal distribution learning (MMDL) module parallel to the standard segmentation head, with a shared backbone, we disentangle inter-class discriminative information from intra-class variation information, allowing them to be used separately during adaptation.
We show that, due to the regularization by MMDL, the pseudo-labels generated over the target domain are more accurate. Lastly, to perform stochastic mode alignment, we introduce the \textit{cross domain consistency loss}.
We present state-of-the-art performance for benchmark synthetic to real, e.g., GTA-V/SYNTHIA to Cityscapes adaptation.
\begin{figure*}[t]
\centering
\includegraphics[width=1 \textwidth]{images/model_plus_ma.pdf}
\scriptsize
\caption{The proposed DRSL approach (a) Base features extracted from the base network are used for two separate tasks. The MMDL-FR module captures intra-class variations through multi-modal distribution learning. Semantic Segmentation head estimates the discriminative class boundaries necessary for the primary segmentation task. This disentanglement allows us simultaneous alignment in discriminative and multi-modal space, allowing MMDL-FR module to act as a regularizer over the Segmentation Head. (b) The proposed \textit{Stochastic mode alignment}: Minimizing $\mathcal{L}_{mcl}$ brings the source and target embeddings of the same mode of the same class closer than any source pixel’s embedding belonging to different class. $\mathcal{L}_{ma}$ decreases the in-mode variance for the target samples by forcing them to come closer to the assigned mode and move away from other class's modes.
}
\label{img:model}
\end{figure*}
\section{Related Work}
The domain shift between testing and training data deteriorates the model performance in most of the computer vision tasks like classification \cite{tzeng2017adversarial, pinheiro2018unsupervised, xu2019larger, deng2019cluster, belal2021knowledge, schrom2021improved}, object detection \cite{chen2018domain, khodabandeh2019robust,hsu2020progressive} and semantic segmentation \cite{chen2017no, chen2017road, clan_2019_CVPR, curr2017_ICCV, dai2019curriculum, hoffman2017cycada, iqbal2020weakly, LSE_2020_Naseer, yang2021exploring}. In this work, we focus on the domain shift problem for semantic segmentation with self-supervised learning. Our work is related to semantic segmentation, domain adaptation, and self-supervised learning.
\noindent \textbf{Domain Adaptation for Semantic segmentation:}
Recent works \cite{vu2019advent, mlsl2020, LSE_2020_Naseer, chen2017road, chen2017no, tsai2018learning, Lian_2019_pycda, iqbal2020weakly, zou2018unsupervised, Cordts2016Cityscapes, dada_2019_ICCV, guo2021metacorrection} aiming to minimize the distribution gap between the source and target domains are focused in two main directions: 1) adversarial learning and 2) self-supervised learning for unsupervised domain adaptation (UDA) of semantic segmentation.
\noindent \textbf{Adversarial Domain Adaptation:} Adversarial learning is the most explored area for output space \cite{tsai2018learning, wang2020differential, vu2019advent, pan2020unsupervised,dada_2019_ICCV}, latent/feature space \cite{chen2017no, mancini2018boosting} and input space adaptation \cite{hoffman2017cycada, clan_2019_CVPR, zhang2018fully, Yunsh2019bidirect}. We briefly describe the feature space/feature alignment, as our work is related to it.
The authors in \cite{kim2020learning, hoffman2017cycada, zhang2020towards} used adversarial loss to minimize the distribution gap between the high level features representations of the source and target domain images.
However, these methods do not align class-wise distribution shifts but instead match the global marginal distributions. To overcome this, \cite{chen2017no, clan_2019_CVPR} combined category level adversarial loss (by defining class discriminators) with domain discriminator at feature space. \cite{iqbal2020weakly} tried to regularize the segmentation network using weak labels along with latent space marginal distribution alignment for domain adaptation of semantic segmentation.
\textcolor{black}{Similarly, the authors in \cite{yang2021exploring} investigated the robustness of UDA for semantic segmentation and proposed self-training augmented adversarial learning to improve robustness to adversarial examples. Their approach yields better performance in the presence of adversarial examples, at the cost of reduced performance on normal input images.
}
\noindent \textbf{Self-supervised learning:}
Self-supervised learning for UDA is recently studied for major computer vision tasks like semantic segmentation and object detection \cite{tri2018fully, mlsl2020, khodabandeh2019robust, Lian_2019_pycda}.
The authors in \cite{zou2018unsupervised} proposed a self-paced self-training approach by generating class balanced pseudo-labels and class spatial priors extracted from the source dataset used to condition the pseudo-label generation. Zou et al. \cite{zou2019crst} extended the \cite{zou2018unsupervised} with confidence regularization strategies and soft pseudo-labels for self-training based UDA for semantic segmentation. LSE \cite{LSE_2020_Naseer} further worked with self-generated scale-invariant examples and entropy based dynamic selection for self-supervised learning.
\textcolor{black}{The authors in \cite{guo2021metacorrection} proposed a domain-aware meta-learning approach (MetaCorrection) to correct the segmentation loss and condition the pseudo-labels based on noise transition matrix. They report considerable mIoU gain especially when applied on pre-adapted model.
}
In this work, we exploit a strategy similar to \cite{zou2018unsupervised} to generate pseudo-labels for target domain images during adaptation.
\noindent \textbf{Clustering Based Features Regularization:}
Some previous works also explored the effect of discriminative clustering on target data and moment matching across domains for target data adaptation \cite{chen2020homm, kumagai2019unsupervised}.
Recently, \cite{zhang2019category, Deng_2019_ICCV} tried to define category anchors on the last feature volume of the segmentation model to align class aware centers across the source and target domains. Tsai et al. \cite{tsai2019domain} tried to match the clustering distribution of discriminative patches from source and target domain images. Similarly, \cite{mlsl2020} and \cite{Lian_2019_pycda} exploited latent space and output space respectively by defining category based classification modules, forcing towards class-aware adaptation.
However, these methods do not explore the intra-class variations present in the source or target data but instead
leverage the discriminative property to align the inter-class clusters. We specifically focus on capturing the intra-class variations present in the source and target data by learning class-aware mixture models to aid adaptation.
\section{Distribution Regularised Self-supervised Learning}
\label{sec:method}
In this section, we provide details of our distribution regularized self-supervised learning (DRSL) architecture. It employs DeepLab-v2 \cite{chen2018deeplab} as a baseline and embeds new components that enable the semantic segmentation model to be robust to domain shift.
\subsection{Preliminaries}
For supervised semantic segmentation, we have access to source domain images $\{\mathrm{x_s, y_s}\}$ from $X_s \in \mathbb{R} ^{H\times W\times 3}$ with corresponding ground truth labels $Y_s \in \mathbb{R} ^{H\times W\times K}$, where $\{H, W\}$ are the height and width of the source domain images and $K$ is the number of classes. Let $\mathcal{G}$ be a segmentation model with weights $\mathrm{w_g}$ that predicts the $K$-channel softmax probability outputs. For a given source image $\mathrm{x_s}$, the segmentation probability of class $c$ at any pixel location $({i,j})$ is obtained as $p(c|\mathrm{x_s, w_g})_{i,j} = \mathcal{G}(\mathrm{x_s})_{i,j}$.
For fully labeled source data, the network parameters $\mathrm{w_g}$ are learned by minimizing the cross entropy loss (Eq. \ref{eqn:2}),
\begin{equation}
\small
\mathcal{L}^s_{seg} (\mathrm{x_s, y_s}) = -\sum_{i=1}^H \sum_{j=1}^W \sum_{c=1}^K \mathrm{y_s^{(c,i,j)}} ~\log(p(c|\mathrm{x_s, w_g})_{c,i,j})
\label{eqn:2}
\end{equation}
where $\mathcal{L}^s_{seg}$ is the source domain segmentation loss.
For unsupervised domain adaptation of the target domain, we have access to the target domain images $\{\mathrm{x_t, -}\}$ from $X_t \in \mathbb{R} ^{H_t\times W_t\times 3}$ with no ground truths available.
Thus, we adapt the iterative process used by \cite{mlsl2020, zou2018unsupervised} to first generate pseudo-labels $\mathrm{\hat{y}_t}$ using the source trained model and then fine-tune the source trained model on target data using Eq. \ref{eqn:3}.
\begin{equation}
\small
\mathcal{L}^t_{seg} (\mathrm{x_t, \hat{y}_t}) = -\sum_{i=1}^{H_t} \sum_{j=1}^{W_t} \mathrm{b^{(i,j)}_t} \sum_{c=1}^K \mathrm{\hat{y}_t^{(c,i,j)}} \log(p(c|\mathrm{x_t, w_g})_{c,i,j})
\label{eqn:3}
\end{equation}
where $\mathcal{L}^t_{seg}$ is the segmentation loss for target domain images
with respect to the generated pseudo-labels $\mathrm{\hat{y}_t}$. Here $\mathrm{b_t}$ is a binary mask with the same resolution as $\mathrm{\hat{y}_t}$, used to back-propagate the loss only for pixels that are assigned pseudo-labels.
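A minimal NumPy sketch of this masked target loss $\mathcal{L}^t_{seg}$ (shapes are illustrative; one-hot pseudo-labels and the binary mask $\mathrm{b_t}$ as defined above):

```python
import numpy as np

def masked_ce_loss(probs, pseudo_labels, mask, eps=1e-12):
    """Target-domain segmentation loss: cross-entropy summed over
    pixels, gated by the binary mask so only pixels with an assigned
    pseudo-label contribute.
    probs, pseudo_labels: (H, W, K); mask: (H, W)."""
    per_pixel = -(pseudo_labels * np.log(probs + eps)).sum(axis=-1)
    return float((mask * per_pixel).sum())
```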
The total loss for the segmentation model is the combination of true labels based source domain loss and pseudo-labels based target domain loss and is given by Eq. \ref{eqn:loss_seg_total},
\begin{equation}
\small
\mathcal{L}_{\mathcal{G}}(\mathrm{x_s, y_s,x_t, \hat{y}_t}) = \mathcal{L}^s_{seg} (\mathrm{x_s, y_s}) + \mathcal{L}^t_{seg} (\mathrm{x_t, \hat{y}_t})
\label{eqn:loss_seg_total}
\end{equation}
\subsection{Multi-Modal Distribution Learning}
\label{sec:drsl}
We propose to learn the complex intra-class variations through a multi-modal distribution learning (MMDL) framework where instead of a single cluster/anchor, each class is represented by multiple modes. This diverse representation of each class is used in the adaptation process to align the domains on fine-grained level. Furthermore, we disentangle the task of learning these intra-class variations (MMDL) from the main segmentation task by designing a separate module for it called multi-modal distribution learning based feature regularization (MMDL-FR). The proposed MMDL-FR module is model agnostic and can be appended at the encoder of any segmentation network.
The MMDL-FR module consists of mixture models based per-pixel classification augmented with distance metric learning (DML) based per-pixel embedding block.
The input to the MMDL-FR module is the feature volume $F \in \mathbb{R} ^{h\times w\times d}$, where $h$, $w$, and $d$ are the spatial height, width, and depth of the encoder output (base features), as shown in Fig. \ref{img:model}(a).
The embedding block is comprised of 4 fully convolutional layers with different dilation rates (similar to ones used in the last layer of the segmentation network) followed by an upsampling layer.
The output of the embedding block $\mathcal{E}$ is a feature volume $E = \mathcal{E}(F) \in \mathbb{R} ^{h_o\times w_o\times \hat{d}}$, where $(h_o, w_o)=(H/2, W/2)$ (Sec.~\ref{sec:ablation}) and $d \gg \hat{d}$, for any randomly selected source image.
To train the MMDL-FR module, we adapt a formulation similar to \cite{repmet2019}.
For each class $c$, a multi-modal distribution with $M$ modes is learned.
Let $e=E(i,j)$ be the embedding at location $(i, j)$, and let the vector $V^{c}_m$ represent the center of mode $m$ $(m=1,\ldots,M)$ of class $c$ $(c=1,\ldots,K)$ of the mixture models. In this work, these mode centers are formulated as the weights of a fully connected layer of size $K \cdot M \cdot \hat{d}$, reshaped into a $(K \times M) \times \hat{d}$ matrix, so that a $K \times M$ matrix of mode distances is produced for each input embedding vector $e$. This simple formulation makes it easy to flow gradients back to the fully connected layer and train the segmentation backbone.
To compute the classification probability for each embedding vector $e$, we compute the Euclidean distance $D^{c}_m(e) = ||e-V^{c}_m||_2$ between $e$ and the representative $V^{c}_m$, and obtain the posterior probabilities $q^c_m(e) \propto \exp(- (D^{c}_m(e))^2 / 2\sigma^2)$,
\textcolor{black}{where $\sigma^2$ is the variance of each mode and is set to 0.5.}
For the class-$c$ posterior probability, we take the maximum over the $M$ modes of class $c$: $Q(C=c|e)=\max_{m=1,\ldots,M} q^c_m(e)$,
where $C=c$ denotes class $c$.
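This classification step can be sketched in NumPy (the $(K, M, d)$ layout of the mode-center tensor is an illustrative assumption):

```python
import numpy as np

def mode_posteriors(e, V, sigma2=0.5):
    """Class posteriors Q(C=c|e) for one embedding.
    e: (d,) embedding; V: (K, M, d) mode centers.
    q^c_m(e) is proportional to exp(-||e - V^c_m||^2 / (2 sigma^2));
    we normalize over all K*M modes, then take the max over the M
    modes of each class."""
    sq_dist = ((V - e) ** 2).sum(axis=-1)      # (K, M) squared distances
    q = np.exp(-sq_dist / (2.0 * sigma2))
    q = q / q.sum()                            # normalize over all modes
    return q.max(axis=1)                       # (K,) class posteriors
```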
\noindent \textbf{Loss Functions: }
To train the MMDL-FR module, two losses are used, i.e., triplet loss and the cross entropy loss. The triplet loss for \textit{embedding block} is defined by Eq. \ref{eqn:6},
\begin{equation}
\small
\mathcal{L}_{emb}(E) =\sum_{e \in E} |\min_m D^{c^*}_m(e) - \min_{m, c^* \neq c}D^{c}_m(e) + \alpha|_+
\label{eqn:6}
\end{equation}
where $|\cdot|_+$ is the ReLU function and $\alpha$ is the minimum margin between the distance from an embedding $e$ to the closest mode representative $V^{c^*}_m$ of the true class $c^*$ and the distance from $e$ to the closest mode representative $V^{c}_m$ of an incorrect class.
Similarly, the cross entropy loss for mixture models based classification is given by Eq. \ref{eqn:7},
\begin{equation}
\small
\mathcal{L}_{cls} (\mathrm{E, u_f}) = - \sum_{e \in E} \sum_{c=1}^K \mathrm{u_f^{(c)}} \log(Q(C=c|e))
\label{eqn:7}
\end{equation}
where $\mathrm{u_f^{(c)}}$ is the embedding classification label obtained from $\mathrm{y_s^{c}}$ or $\mathrm{\hat{y}_t^{c}}$ for class $c$.
The triplet loss enforces the embedding block to learn representations that capture intra-class variation information, while the cross-entropy loss pushes it not to lose the necessary class-specific information.
Due to these two losses, the MMDL-FR module acts as a regularizer in the latent space over the shared backbone, so that the shared features are more informative than if only the segmentation head were used.
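The triplet loss $\mathcal{L}_{emb}$ can be sketched as follows (NumPy, plain Euclidean distance in the hinge; a per-sample loop is used for clarity rather than speed):

```python
import numpy as np

def embedding_triplet_loss(E, labels, V, alpha=1.0):
    """Triplet loss over mode centers.
    E: (N, d) embeddings; labels: (N,) true classes; V: (K, M, d)
    mode centers. For each embedding, hinge on (distance to the
    closest true-class mode) minus (distance to the closest
    wrong-class mode) plus margin alpha."""
    loss = 0.0
    K = V.shape[0]
    for e, c in zip(E, labels):
        D = np.sqrt(((V - e) ** 2).sum(axis=-1))  # (K, M) distances
        d_pos = D[c].min()
        d_neg = min(D[k].min() for k in range(K) if k != c)
        loss += max(d_pos - d_neg + alpha, 0.0)
    return loss
```

The hinge is zero once the closest true-class mode is nearer than any wrong-class mode by at least the margin, so well-separated embeddings contribute nothing.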
\subsection{Stochastic Mode Alignment}
A characteristic of domain generalization is that multi-modal distribution learning over one domain should result in modes that are very close to the modes learned in the other domain.
However, due to the domain shift, this is generally not true.
That is, in the target domain, the features of pixels assigned pseudo-label $c$ might not be close to any of the modes belonging to class $c$.
In addition, features in the target domain mapping to the same mode might not be close to each other, resulting in low posterior probabilities.
We minimize two loss functions to perform \textit{stochastic mode alignment}.
First, we apply a \textit{domain invariant consistency loss}, ensuring that features of pixels mapped to the same mode of the same class are near each other regardless of the domain they are sampled from.
Assume a batch consisting of an arbitrary number of source and target images, $\{(x_s^i, y_s^i)|i=0,1\dots,N_s,(x_t^i,\hat{y}_t^i)|i=0,1\dots,N_t \}$, where $\hat{y}_t^i$ are the pseudo-labels assigned to $x_t^i$.
Embedding $E_t^i=\mathcal{E}(x_t^i)$ and $E_s^i=\mathcal{E}(x_s^i)$ are computed for all the target and source images in the batch.
We randomly sample $N_e$ embeddings, $\{e_t^i|i=0,1\dots,N_e\}$, from $\{E_t^i|i=0,1\dots,N_t \}$, choosing only from those having a valid pseudo-label.
For \textit{domain invariant consistency}, we create a triplet $(e_t^i, e_s^i, \hat{e}_{s}^i)$ such that the pseudo-label of $e_t^i$ and the ground-truth label of $e_s^i$ are the same class $c$, and both map to the same mode $m$ of class $c$; $\hat{e}_{s}^i$, on the other hand, is a source pixel's embedding of any class $c^{+} \ne c$.
Minimizing this loss brings $e_t^i$ closer to $e_s^i$ than to any source pixel's embedding belonging to a different class.
\begin{equation}
\small
\mathcal{L}_{mcl} = \sum_{i}^{N_e} \left| \, \| e_t^i - e_s^i \|_2^2 - \| e_t^i - \hat{e}_{s}^i \|_2^2 + \alpha_1 \right|_+
\label{eqn:ada}
\end{equation}
Note that we could have chosen the closest source sample as the negative; however, this would have been computationally prohibitive. The margin $\alpha_1$ is set to 1 for all experiments.
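A minimal sketch of the domain-invariant consistency loss $\mathcal{L}_{mcl}$ in Eq. \ref{eqn:ada}, where each triplet holds a target anchor, a same-mode source positive, and a different-class source negative:

```python
def mcl_loss(triplets, alpha1=1.0):
    """Each triplet is (e_t, e_s, e_s_neg): a target anchor, a source
    positive from the same mode of the same class, and a source negative
    from a different class."""
    def sq(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return sum(max(sq(e_t, e_s) - sq(e_t, e_s_neg) + alpha1, 0.0)
               for e_t, e_s, e_s_neg in triplets)
```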
The in-mode variance of the target samples is decreased by forcing them to come closer to the assigned mode and move away from the modes of the other classes. We sample $T_e$ embeddings per image per class from both source and target images, creating the sets $E_s$ and $E_t$ respectively. Eq. \ref{eqn:6-sma} minimizes the triplet loss for both the source and target embeddings simultaneously.
\begin{equation}
\small
\mathcal{L}_{ma}(E_s, E_t) = \frac{1}{T^s_e} \mathcal{L}_{emb}(E_s) + \frac{1}{T^t_e} \mathcal{L}_{emb}(E_t)
\label{eqn:6-sma}
\end{equation}
where $T^s_e$ and $T^t_e$ represent the cardinalities of $E_s$ and $E_t$, which might differ since samples from all classes might not be available.
\subsection{Total Loss for Training and Adaptation}
The DRSL model is trained using a combination of the segmentation losses, the mode consistency loss, and the MMDL-FR module losses.
Let $\mathcal{L}_{cls}^s$ and $\mathcal{L}_{cls}^t$ denote Eq. \ref{eqn:7} evaluated on source and target embeddings, respectively.
The source model with the MMDL module is trained using Eq. \ref{eqn:loss-src}.
\begin{equation}
\begin{split}
\small
\mathcal{L}_{src} = \mathcal{L}^s_{seg} + \beta ~\mathcal{L}_{emb} +\eta \mathcal{L}_{cls}^s
\label{eqn:loss-src}
\end{split}
\end{equation}
During adaptation to target domain the loss functions in Eq.\ref{eqn:drsl} and Eq.\ref{eqn:drsl+} are used.
\begin{equation}
\begin{split}
\small
\mathcal{L}_{DRSL} = \mathcal{L}_{\mathcal{G}} + \beta ~\mathcal{L}_{ma} +\eta (\mathcal{L}_{cls}^s+\mathcal{L}_{cls}^t)
\label{eqn:drsl}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\small
\mathcal{L}_{DRSL+} = \mathcal{L}_{\mathcal{G}} + \beta ~\mathcal{L}_{ma} +\eta (\mathcal{L}_{cls}^s+\mathcal{L}_{cls}^t) + \gamma ~\mathcal{L}_{mcl}
\label{eqn:drsl+}
\end{split}
\end{equation}
where $\beta$, $\eta$, and $\gamma$ are hyper-parameters that limit the effect of the MMDL-FR module loss values.
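Putting the pieces together, the DRSL+ objective of Eq. \ref{eqn:drsl+} is a weighted sum of the individual loss terms; a sketch with the hyper-parameter values used in our experiments ($\beta=0.25$, $\eta=0.1$, $\gamma=0.1$):

```python
def drsl_plus_loss(l_seg, l_ma, l_cls_s, l_cls_t, l_mcl,
                   beta=0.25, eta=0.1, gamma=0.1):
    # total DRSL+ objective: segmentation loss plus weighted
    # mode-alignment, classification, and mode-consistency terms
    return l_seg + beta * l_ma + eta * (l_cls_s + l_cls_t) + gamma * l_mcl
```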
\begin{table*}[h]
\centering
\caption{Semantic segmentation performance for GTA-V to Cityscapes adaptation. The abbreviations “$A_I$”, “$A_F$” and “$A_O$” stand for adversarial training at input space, latent space, and output space. Similarly, “$S_T$” represents self-supervised learning.}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|c|c|ccccccccccccccccccc|c|c}
\hline
\multicolumn{23}{c}{GTA-V $\rightarrow$ Cityscapes}\\
\hline
Methods & \rot{Baseline} & \rot{Appr.} & \rot{Road} & \rot{Sidewalk} & \rot{Building} & \rot{Wall} & \rot{Fence} & \rot{Pole} & \rot{T. Light} & \rot{T. Sign} & \rot{Veg.} & \rot{Terrain} & \rot{Sky} & \rot{Person} & \rot{Rider} & \rot{Car} & \rot{Truck} & \rot{Bus} & \rot{Train} & \rot{M.cycle} & \rot{Bicycle} & \rot{mIoU}& \rot{mIoU Gain} \\ \hline \hline
Source \cite{chen2018deeplab} & \multirow{8}{*}{\rotHalf{DeepLab-v2}} & -& 75.8 & 16.8 & 77.2 & 12.5 & 21.0 & 25.5 & 30.1 & 20.1 & 81.3 & 24.6 & 70.3 & 53.8 & 26.4 & 49.9 & 17.2 & 25.9 & 6.5 & 25.3 & 36.0 & 36.6 & - \\
MinEnt \cite{vu2019advent} & & $A_O + S_T$ & 86.6 & 25.6 & 80.8 & 28.9 & 25.3 & 26.5 & 33.7 & 25.5 & 83.3 & 30.9 & 76.8 & 56.8 & 27.9 & 84.3 & \textbf{33.6} & 41.1 & 1.2 & 23.9 & 36.4 & 43.6 &7.0\\
FCAN \cite{zhang2018fcan} & & $A_I + A_O$ & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & 46.6 &10.0\\
IntraDA \cite{pan2020unsupervised} & & $A_O + S_T$ & 90.6 & 37.1 & 82.6 & 30.1 & 19.1 & 29.5 & 32.4 & 20.6 & {\ul 85.7} & {\ul 40.5} & 79.7 & 58.7 & {\ul 31.1} & \textbf{86.3} & 31.5 & {\ul 48.3} & 0.0 & 30.2 & 35.8 & 46.3 & 9.7 \\
PyCDA \cite{Lian_2019_pycda} & & $S_T$ & 90.5 & 36.3 & \textbf{84.4} & {\ul 32.4} & \textbf{28.7} & 34.6 & 36.4 & 31.5 & \textbf{86.8} & 37.9 & 78.5 & 62.3 & 21.5 & {\ul 85.6} & 27.9 & 34.8 & {\ul 18.0} & 22.9 & \textbf{49.3} & 47.4 & 10.8 \\
LSE \cite{LSE_2020_Naseer} & & $S_T$ & 90.2 & 40.0 & {\ul 83.5} & 31.9 & {\ul 26.4} & 32.6 & 38.7 & 37.5 & 81.0 & 34.2 & \textbf{84.6} & 61.6 & \textbf{33.4} & 82.5 & {\ul32.8} & 45.9 & 6.7 & 29.1 & 30.6 & 47.5 & 10.9 \\ \hline
Source \cite{wu2019Resnet38} & \multirow{3}{*}{\rotHalf{ResNet-38}} & - & 70.0 & 23.7 & 67.8 & 15.4 & 18.1 & 40.2 & 41.9 & 25.3 & 78.8 & 11.7 & 31.4 & {\ul 62.9} & 29.8 & 60.1 & 21.5 & 26.8 & 7.7
& 28.1 & 12.0 & 35.4 & - \\
CBST \cite{zou2018unsupervised} & & $S_T$ & 86.8 & 46.7 & 76.9 & 26.3 & 24.8 & 42.0 & { \ul 46.0} & \textbf{38.6} & 80.7 & 15.7 & 48.0 & 57.3 & 27.9 & 78.2 & 24.5 & \textbf{49.6} & 17.7 & 25.5 & 45.1 & 45.2 & 9.8 \\
CRST \cite{zou2019crst} & & $S_T$ & 84.5 & 47.7 & 74.1 & 27.9 & 22.1 & \textbf{43.8} & \textbf{46.5} & {\ul 37.8} & 83.7 & 22.7 & 56.1 & 56.8 & 26.8 & 81.7 & 22.5 & 46.2 & \textbf{27.5} & \textbf{32.3} & {\ul 47.9} & 46.8 & 11.4 \\ \hline
Source \cite{chen2018deeplab} & \multirow{5}{*}{\rotHalf{DeepLab-v2}} & -& 71.7 & 18.5 & 67.9 & 17.4 &10.2 &36.5 &27.6 &6.3 & 78.4 &21.8 &67.6 &58.3 &20.7 &59.2 & 16.4 & 12.5 & 7.9 & 21.2 & 13.0 & 33.8 & -\\
MRENT \cite{zou2019crst} & & $S_T$ & 91.8 & 53.4 & 80.6 & \textbf{32.6} & 20.8 & 34.3 & 29.7 & 21.0 & 84.0 & 34.1 & {\ul 80.6} & 53.9 & 24.6 & 82.8 & 30.8 & 34.9 & 16.6 & 26.4 & 42.6 & 46.1 & 12.3\\
Ours (DRSL) & & $A_I + S_T$ & \textbf{92.8} & \textbf{57.5} & 82.8 & 28.7 & 17.7 & 40.6 & 34.3 & 27.0 & 85.5 & \textbf{42.7} & 77.8 & 62.3 & 30.8 & 82.2 & 24.3 & 38.5 & 8.4 & {\ul 31.1} & 39.6 & {\ul 47.6} & {\ul 13.8}\\
Ours (DRSL+) & & $A_I + S_T$ & {\ul 92.6} & {\ul 55.9} & 82.4 & 29.0 & 24.6 & {\ul 42.7} & 38.3 & 35.7 & 85.5 & 39.5 & 77.0 & \textbf{64.2} & 26.2 & 83.9 & 19.5 & 31.6 & 9.3 & 27.1 & 42.5 & \textbf{47.8} & \textbf{14.0}
\\
\hline
\end{tabular}
}
\label{table:gta2city}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[width= \textwidth]{images/gta2city-ISA-01.pdf}\\
\footnotesize
\begin{tabular}{P{2cm}P{2cm}P{2cm}P{2cm}P{2cm}}
Target Image & Ground Truth & DeepLab-v2 \cite{chen2018deeplab} & DRSL (Ours) & DRSL+ (Ours)
\end{tabular}
\caption{Semantic segmentation qualitative results for Cityscapes validation set when adapted from GTA-V dataset.}
\label{img:gta2city}
\end{figure*}
\section{Experiments and Results}
We perform multiple experiments on domain adaptation for semantic segmentation and compare the obtained results with state-of-the-art methods.
\subsection{Experimental Setup}
\textbf{Datasets: }
Following \cite{Lian_2019_pycda, zou2018unsupervised, mlsl2020}, we use the standard \textit{synthetic-to-real} benchmark setup for our experiments. Specifically, we adapt \textit{GTA-V to Cityscapes} and \textit{SYNTHIA to Cityscapes}, where the former in each pair is the source domain dataset and the latter is the target domain dataset.
\noindent \textbf{Cityscapes} \cite{Cordts2016Cityscapes} is a well-known benchmark dataset for semantic segmentation and domain adaptation.
The dataset has 5000 high-resolution labeled images partitioned into training (2975), validation (500), and testing (1525) sets. However, the annotations are only available for the training and validation sets.
\noindent \textbf{GTA-V} \cite{Richter_2016_ECCV} is a dataset obtained from the eponymous video game, with images densely labeled with classes similar to Cityscapes. There are 24966 images with spatial resolution $1052 \times 1914$. Like Cityscapes, the GTA-V dataset covers road scene imagery.
\noindent \textbf{SYNTHIA} \cite{Ros_2016_CVPR} is another collection of synthetic labeled images, with 16 classes similar to Cityscapes. The dataset has 9400 images, each with spatial size $760 \times 1280$. Contrary to GTA-V and Cityscapes, SYNTHIA has more viewpoint variations, as the camera is not always mounted on top of a vehicle.
\textbf{Network Architecture: }
\label{sec:network}
Following \cite{vu2019advent, tsai2018learning}, we use ResNet-101 \cite{he2016deep} backbone based DeepLab-v2 \cite{chen2018deeplab} as our baseline segmentation model.
Parallel to the segmentation head is the multi-modal distribution learning based feature regularization (MMDL-FR) module consisting of a combination of DML based Embedding Block (EB) and multi-modal distribution learning.
We refer to the last block of DeepLab-v2 as the encoder (base network) and to its output feature map as the base features.
For segmentation, these features are passed to the segmentation layer, while for MMDL-FR, they are passed to the embedding block (Fig. \ref{img:model}).
The embedding block consists of 4 fully convolutional layers with different dilation rates (similar to ones used in the segmentation layer of the segmentation network), producing an aggregated output.
Unlike the fully-connected-layer-based DML embedding generation of \cite{repmet2019}, our strategy preserves the spatial structure necessary for segmentation and requires much less memory.
The modes of the multi-modal distribution are modeled with a fully connected layer, as described in Sec. 3.2 and shown in Fig. \ref{img:model}.
For each input the embedding block of the MMDL-FR module outputs an embedding volume $E$ of size $(h\times w\times \hat{d})$.
For an input image, we select a maximum of $T_e$ embedding vectors per-class at random for further processing.
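The per-class sampling of at most $T_e$ embedding vectors can be sketched as follows (a simplification; in practice the vectors come from the $h\times w\times \hat{d}$ embedding volume $E$, grouped by their labels):

```python
import random

def sample_embeddings_per_class(embeddings_by_class, t_e=300, seed=0):
    # keep at most t_e randomly chosen embedding vectors for every class
    rng = random.Random(seed)
    return {c: (list(vecs) if len(vecs) <= t_e else rng.sample(vecs, t_e))
            for c, vecs in embeddings_by_class.items()}
```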
\textbf{Implementation Details: }
\label{sec:implement-details}
To implement the proposed approach and conduct the experiments, we use the PyTorch deep learning framework and a single GTX 1080ti GPU on a Core-i5 machine with 32GB RAM. ImageNet \cite{russakovsky2015imagenet} pre-trained weights for ResNet-101 \cite{he2016deep} are used to train DeepLab-v2 on the source dataset. We use the SGD optimizer with a weight decay of $5\times 10^{-4}$, momentum of 0.9, and an initial learning rate of $2.5 \times 10^{-4}$ for source domain training and $5 \times 10^{-5}$ during adaptation. In both source training and adaptation, we apply random scaling (0.5-1.5) and random horizontal flipping. For DML and mixture-model based classification, the loss weights are set to $\beta = 0.25$ and $\eta = 0.1$ to limit excessive gradient flow into the segmentation model. Similarly, for the mixture models, the number of modes $M$ is set to 3, and the number of embeddings $T_e$ per class per image is set to 300. Due to GPU memory limitations, for both source and target domain images, small patches of size $512 \times 512$ cropped at random from the original high-resolution images are processed.
The baseline segmentation model and the MMDL-FR module are initially trained with the original source domain images, \textcolor{black}{generally referred to as the source-only model.}
For self-supervised domain adaptation, the selection of pixels as pseudo-labels is an important step, as the adaptation process depends on the quality of the pseudo-labels. We adopt an approach similar to \cite{zou2018unsupervised} to generate pseudo-labels using the model trained on the original source data. For a given class $c$, we select the $\delta$ most confident pixels as pseudo-labels in the first round ($\delta=20\%$) and increase this ratio by 5\% in each additional round.
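The class-wise pseudo-label selection schedule can be sketched as below (the per-pixel confidences are illustrative inputs; in practice they are the segmentation softmax scores):

```python
def select_pseudo_labels(confidences_by_class, round_idx,
                         delta0=0.20, step=0.05):
    """Keep the top (delta0 + step * round_idx) fraction of most-confident
    pixels of each class; the rest remain unlabeled. `confidences_by_class`
    maps class id -> list of (pixel_id, confidence)."""
    frac = min(delta0 + step * round_idx, 1.0)
    selected = {}
    for c, pixels in confidences_by_class.items():
        ranked = sorted(pixels, key=lambda p: p[1], reverse=True)
        k = max(1, int(frac * len(ranked)))
        selected[c] = [pid for pid, _ in ranked[:k]]
    return selected
```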
To further help the adaptation, we obtain translated versions of the source domain datasets using CycleGAN \cite{hoffman2017cycada} and use these alongside the original source images during adaptation.
\begin{comment}
\begin{table}[H]
\footnotesize
\caption{Performance (mIoU) gain comparison between the GTA-V trained source models and the respective GTA-V to Cityscapes adapted models.}
\centering
\begin{tabular}{c|ccc}
\hline
Dataset & \multicolumn{3}{c}{GTA-V $\rightarrow$ Cityscapes} \\
\hline
Methods & Source only & UDA Algo. & mIoU gain \\ \hline
\hline
FCN in the wild \cite{hoffman2016fcns}& 21.2 & 27.1 & 5.9\\
Curriculam DA \cite{curr2017_ICCV} & 22.3 & 28.9 & 6.6\\
AdaptSetNet \cite{tsai2018learning} & 36.6 & 42.4 & 5.8 \\
MinEnt \cite{vu2019advent} & 36.6 & 42.3 & 5.7 \\
CLAN \cite{clan_2019_CVPR} & 36.6 & 43.2 & 6.6 \\
All Structure \cite{structure_2019_CVPR} & 36.6 & 45.4 & 8.8 \\
CBST \cite{zou2018unsupervised} & 35.4 & 46.2 & 10.8 \\
PyCDA \cite{Lian_2019_pycda} & 36.6 & 47.4 & 10.8 \\
LSE \cite{LSE_2020_Naseer} & 36.6 & 47.5 & 10.9 \\
MRENT \cite{zou2019crst} & 33.6 & 46.1 & 12.5 \\
\hline
Ours (DRSL) & 33.6 & {\ul 47.6} & {\ul 14.0} \\
Ours (DRSL+) & 33.6 & \textbf{47.8} & \textbf{14.2} \\ \hline
\end{tabular}%
\label{table:gain}
\end{table}
\end{comment}
\begin{table*}[h]
\centering
\caption{Semantic segmentation performance of DRSL for SYNTHIA to Cityscapes adaptation.
We present the mIoU (16-classes) and mIoU* (13-classes) comparison with existing state-of-the-art domain adaptation methods for the Cityscapes validation set.
}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|c|c|cccccccccccccccc|c|c}
\hline
\multicolumn{19}{c}{SYNTHIA $\rightarrow$ Cityscapes}\\
\hline
Methods & \rot{Baseline} & \rot{Appr.} & \rot{Road} & \rot{Sidewalk} & \rot{Building} & \rot{Wall} & \rot{Fence} & \rot{Pole} & \rot{T. Light} & \rot{T. Sign} & \rot{Veg.} & \rot{Sky} & \rot{Person} & \rot{Rider} & \rot{Car} & \rot{Bus} & \rot{M.cycle} & \rot{Bicycle} & \rot{mIoU} & \rot{mIoU*} \\ \hline \hline
Source \cite{chen2018deeplab} & \multirow{8}{*}{\rotHalf{DeepLab-v2}} & - & 64.3 & 21.3 & 73.1 & 2.4 & 1.1 & 31.4 & 7.0 & 27.7 & 63.1 & 67.6 & 42.2 & 19.9 & 73.1 & 15.3 & 10.5 & 38.9 & 34.9 & 40.3 \\
CLAN \cite{clan_2019_CVPR} & & $A_O$ & 81.3 & 37.0 & 80.1 & - & - & - & 16.1 & 13.7 & 78.2 & 81.5 & 53.4 & 21.2 & 73.0 & 32.9 & {\ul 22.6} & 30.7 & - & 47.8 \\
Structure \cite{structure_2019_CVPR} & & $A_F + A_O$ & \textbf{91.7} & \textbf{53.5} & 77.1 & 2.5 & 0.2 & 27.1 & 6.2 & 7.6 & 78.4 & 81.2 & 55.8 & 19.2 & 82.3 & 30.3 & 17.1 & 34.3 & 41.5 & 48.7 \\
LSE \cite{LSE_2020_Naseer} & & $S_T$ & {\ul 82.9} & {\ul 43.1} & 78.1 & 9.3 & 0.6 & 28.2 & 9.1 & 14.4 & 77.0 & 83.5 & 58.1 & 25.9 & 71.9 & \textbf{38.0} & \textbf{29.4} & 31.2 & 42.6 & 49.4\\
CRST \cite{zou2019crst} & & $S_T$ & 67.7 & 32.2 & 73.9 & 10.7 & {\ul 1.6} & 37.4 & 22.2 & 31.2 & 80.8 & 80.5 & 60.8 & {\ul 29.1} & {\ul 82.8} & 25.0 & 19.4 & 45.3 & 43.8 & 50.1 \\ \hline
Source \cite{wu2019Resnet38} & \multirow{3}{*}{\rotHalf{ResNet-38}} & - & 32.6 & 21.5 & 46.5 & 4.81 & 0.03 & 26.5 & 14.8 & 13.1 & 70.8 & 60.3 & 56.6 & 3.5 & 74.1 & 20.4 & 8.9 & 13.1 & 29.2 & 33.6 \\
CBST \cite{zou2018unsupervised} & & $S_T$ & 53.6 & 23.7 & 75.0 & 12.5 & 0.3 & 36.4 & {\ul 23.5} & 26.3 & 84.8 & 74.7 & \textbf{67.2} & 17.5 & \textbf{84.5} & 28.4 & 15.2 & \textbf{55.8} & 42.5 & 48.4 \\
MLSL \cite{mlsl2020} & & $S_T$ &73.7 &34.4 &78.7 &{\ul 13.7} &\textbf{2.9} &36.6 &\textbf{28.2} &22.3 &\textbf{86.1} &76.8 &{\ul 65.3} &20.5 &81.7 &31.4 &13.9 &47.3 &44.4 &50.8 \\
\hline
Source \cite{chen2018deeplab} & \multirow{3}{*}{\rotHalf{DeepLab-v2}} & - & 69.2 & 26.6 & 66.5 & 6.5 & 0.1 & 33.2 & 4.1 & 18.0 & 80.5 & 80.0 & 55.3 & 15.1 & 67.5 & 20.1 & 6.8 & 14.0 & 35.2 & 40.3\\
DRSL & & $A_I + S_T$ & 70.1 & 30.1 & \textbf{81.6} & \textbf{15.6} & 1.0 & {\ul 40.9} & 20.9 & \textbf{36.4} & {\ul 85.4} & {\ul 84.0} & 59.4 & 26.9 & 81.8 & {\ul 35.9} & 16.7 & 48.1 &{\ul 45.9} & {\ul 52.0}\\
DRSL+ & & $A_I + S_T$ & 82.8 & 40.1 & {\ul 81.3} & 13.0 & 1.6 & \textbf{41.6} & 19.8 & {\ul 33.1} & 85.3 & \textbf{84.3} & 59.5 & \textbf{30.1} &
78.6 & 25.3 & 19.8 & {\ul 51.7} & \textbf{46.7} &\textbf{53.2} \\ \hline
\end{tabular}
}
\label{table:syn2city}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[width= \textwidth]{images/syn2city-ISA-01.pdf}\\
\footnotesize
\begin{tabular}{P{2cm}P{2cm}P{2cm}P{2cm}P{2cm}}
Target Image & Ground Truth & DeepLab-v2 \cite{chen2018deeplab} & DRSL (Ours) & DRSL+ (Ours)
\end{tabular}
\caption{Semantic segmentation qualitative results for SYNTHIA to Cityscapes adaptation.}
\label{img:syn2city}
\end{figure*}
\subsection{Experimental Results}
In this section, we present experimental results of the proposed approach for semantic segmentation. We follow the standard synthetic to real adaptation setup.
\subsubsection{Results on GTA-V to Cityscapes Adaptation}
Table \ref{table:gta2city} presents the domain adaptation performance of the proposed DRSL approach for semantic segmentation, compared to existing adversarial learning and self-supervised learning architectures. For a fair comparison, the methods are divided into three groups, where each compared model is listed with its respective source model and backbone network.
Fig. \ref{img:gta2city} shows example images to qualitatively highlight the performance of the proposed DRSL. DRSL improves the performance for both object and stuff classes, as shown in Fig. \ref{img:gta2city} (column 4). Small and faraway objects like person, traffic light, and signboard are better adapted, alongside near-camera objects and large-area stuff classes like road, bus, and sidewalk.
The cross-domain mode alignment loss further constrains the adaptation of small objects, improving the performance for classes like bicycle, traffic sign, traffic light, pole, fence, and person, as shown in Table \ref{table:gta2city} (DRSL+).
Overall, the proposed DRSL+ outperforms the latest self-supervised learning frameworks by clear margins, surpassing the source-trained model with a 14.0$\%$ gain in mIoU (last column of Table \ref{table:gta2city}).
DRSL+ performs well on both object and stuff classes, in contrast to previous methods, which may perform better on some classes but fail on others.
Compared to CRST and MRENT \cite{zou2019crst}, which regularize the labels and models toward high-confidence predictions, the proposed approach achieves mIoU gains of 1.0\% and 1.7\%, respectively. Similarly, DRSL outperforms PyCDA \cite{Lian_2019_pycda}, which works on pyramid-level labeling, and LSE \cite{LSE_2020_Naseer}, which incorporates scale invariance with class-balancing strategies on top of higher-mIoU baseline models. Compared to composite adversarial learning-based methods like FCAN \cite{zhang2018fcan} and IntraDA \cite{pan2020unsupervised}, DRSL shows a minimum improvement of 1\% in mIoU, with especially high margins on small objects.
Similarly, compared to CAG-UDA \cite{zhang2019category} (mIoU=43.9\% without warm-up training), DRSL+ gains 3.9\% in mIoU.
\subsubsection{Results on SYNTHIA to Cityscapes Adaptation}
Table \ref{table:syn2city} presents the segmentation performance of the proposed DRSL approach for SYNTHIA to Cityscapes adaptation. For a fair comparison with existing methods, the compared methods are divided into three groups and the respective source model results for different setups are shown. Moreover, for SYNTHIA to Cityscapes, we report both mIoU (16 classes) and mIoU* (13 classes), following \cite{mlsl2020, zou2018unsupervised}.
Fig. \ref{img:syn2city} shows qualitative results for DRSL and DRSL+ compared to the baseline. Rows 1 and 2 of Fig. \ref{img:syn2city} focus on objects like rider, bicycle, and person, and on the stuff classes, while row 3 highlights faraway objects and the segmentation of road scene imagery.
The DRSL approach performs well on the adaptation of both stuff and object classes, showing an improvement of 11.7\% in mIoU and 12.9\% in mIoU* compared to the baseline (source) model. Compared to the strong self-supervised learning approaches CBST \cite{zou2018unsupervised} and MLSL \cite{mlsl2020}, DRSL shows minimum improvements of 2.3\% in mIoU and 2.4\% in mIoU*, respectively. Similarly, DRSL shows significant improvement over existing regularization-based models like CRST \cite{zou2019crst} and entropy-based methods, e.g., LSE \cite{LSE_2020_Naseer} and MinEnt \cite{vu2019advent}. Compared to CAG-UDA \cite{zhang2019category} (44.5\% mIoU and 51.4\% mIoU*), DRSL+ gains 2.2\% in mIoU and 1.9\% in mIoU*, respectively. The gap is even larger when compared with CAG-UDA without warm-up training.
\subsubsection{Ablation Experiments}
Ablation experiments are performed for GTA-V to Cityscapes.
\label{sec:ablation}
\noindent \textbf{Multi-Modal Distribution Learning based Regularization Module (MMDL-FR): }
During training and adaptation, it is essential to balance the segmentation objective against the different elements of MMDL-FR.
We search over a range of values to identify empirically optimal values for the loss scaling factors $\beta$ and $\eta$ (Table \ref{table:cfr-values}).
Based on these experiments, $\beta$ and $\eta$ are set to 0.25 and 0.1, respectively, for all experiments including SYNTHIA to Cityscapes.
\begin{table}[!htb]
\small
\centering
\caption{Effect of $(\beta, \eta)$ values of the MMDL-FR module.}
\resizebox{3.5in}{!}{
\begin{tabular}{c|ccccc}
\hline
$\beta, \eta$& (0.0, 0.0) & (0.1, 0.1) &(0.25, 0.1) & (0.5, 0.5) & (1.0, 1.0)\\ \hline
DRSL (mIoU) & 44.9 & 46.1 &\textbf{47.6} &45.9 & 46.0\\ \hline
\end{tabular}
}
\label{table:cfr-values}
\end{table}
\noindent \textbf{Effect of MMDL-FR Module on Adaptation Process: }
As described in Sec. \ref{sec:drsl} and Fig. \ref{img:model}, the MMDL-FR module regularizes the encoder (base network) of the segmentation model with a DML-based embedding block and MMDL-based classification. Overall, MMDL-FR enhances the adaptation performance compared to the non-regularized version of the proposed method, as shown in Table \ref{table:drsl-effect}.
\begin{table}[!htb]
\small
\centering
\caption{Effect of MMDL-FR module on adaptation.}
\resizebox{3.5in}{!}{
\begin{tabular}{c|ccc}
\hline
Methods & Source~\cite{chen2018deeplab} &Without MMDL-FR & With MMDL-FR \\ \hline
mIoU & 33.6 &44.9 & \textbf{47.6} \\ \hline
\end{tabular}
}
\label{table:drsl-effect}
\end{table}
\noindent \textbf{Effect of Modes: }
As described in Sec. \ref{sec:drsl}, it is critical to select the correct number of modes for the multi-modal distribution in MMDL. We experimented with different numbers of modes (Table \ref{table:drsl-modes}) and selected $M=3$ for all experiments.
\begin{table}[!htb]
\small
\centering
\caption{Effect of the number of modes (M) in MMDL.}
\begin{tabular}{c|ccc}
\hline
Number of Modes (M) & M=1 &M=3 & M=5 \\ \hline
mIoU & 44.7 &\textbf{47.6} & 46.2 \\ \hline
\end{tabular}
\label{table:drsl-modes}
\end{table}
\noindent \textbf{Effect of Labels Reduction for MMDL-FR Module: }
The output of the embedding block in the MMDL-FR module is 8 times smaller than the input image size.
If the labels are not reduced, the embeddings need to be upsampled 8 times, requiring a lot of memory. Conversely, reducing the labels 8 times introduces a blocking effect. Based on these observations, a label reduction ratio of 2 is used. A comparative performance analysis of label reduction is shown in Table \ref{table:embedding}.
\begin{table}[H]
\footnotesize
\centering
\caption{Effect of label reduction ratio on mIoU.}
\begin{tabular}{ccccc}
\hline
\multicolumn{5}{c}{GTA-V $\rightarrow$ Cityscapes} \\
\hline
Label Reduction Ratio & 1 & 2 & 4 & 8 \\
Embeddings Upsampling Ratio & 8 & 4 & 2 & 1 \\
Adaptation Performance (mIoU) & 47.1 & \textbf{47.6} & 46.8 & 46.4 \\
\hline
\end{tabular}
\label{table:embedding}
\end{table}
\noindent \textbf{Pseudo-label Accuracy: }
To understand how MMDL-FR results in more accurate pseudo-labels during the adaptation process, we compute the mIoU of the pseudo-labels when MMDL-FR is not used (A) and when it is used (B).
At the start of adaptation (round 0), the mIoU is the same for both A and B (Table \ref{table:pl-ious}), since MMDL-FR only starts to contribute once adaptation begins, i.e., \textit{during} round 0.
Due to MMDL-FR, the predictions of B after round 0 have much lower self-entropy and its pseudo-labels have a higher mIoU than those generated by model A, thus improving self-supervised domain adaptation.
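The self-entropy values reported in Table \ref{table:pl-ious} are averages of the per-pixel prediction entropy; a minimal sketch:

```python
import math

def mean_self_entropy(prob_maps):
    # average entropy of per-pixel class-probability vectors;
    # lower values indicate more confident predictions
    total = 0.0
    for probs in prob_maps:
        total += -sum(p * math.log(p) for p in probs if p > 0)
    return total / len(prob_maps)
```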
\begin{table}[!htb]
\small
\centering
\caption{Pseudo-labels with $\&$ without MMDL-FR module}
\resizebox{3.5in}{!}{
\begin{tabular}{c|c|c|c|c}
\hline
\multirow{2}{*}{Method}& \multicolumn{2}{c}{Start of Round-0} & \multicolumn{2}{|c}{Start of Round-1}\\ \cline{2-5}
& mIoU & Self-Entropy & mIoU & Self-Entropy \\ \hline
A: Without MMDL-FR \{ST, ISA\} & 73.9 &6.56 $\times 10^{-2}$ & 76.4 &1.57$\times 10^{-2}$\\
B: With MMDL-FR \{ST, ISA, MMDL-FR\} & 73.9& 6.56$\times 10^{-2}$& \textbf{78.7}& \textbf{1.14}$\boldsymbol{\times 10^{-2}}$\\ \hline
\end{tabular}
}
\label{table:pl-ious}
\end{table}
\noindent \textbf{Effect of Consistency Loss Weight: }
The cross-domain mode consistency loss brings the embeddings of source and target pixels belonging to the same mode of the same class closer, helping to better adapt the small object classes. However, its contribution to the total loss needs to be limited to keep the training stable. Our experiments suggest $\gamma=0.1$ suits DRSL+, as shown in Table \ref{table:drsl-cons-loss}.
\begin{table}[!htb]
\small
\centering
\caption{Effect of cross domain mode consistency loss.}
\begin{tabular}{c|ccc}
\hline
Loss weight $\gamma$ & 0.01 &0.1 & 0.25 \\ \hline
mIoU & 46.0 &\textbf{47.8} & 45.3 \\ \hline
\end{tabular}
\label{table:drsl-cons-loss}
\end{table}
\noindent \textbf{Effect of Input Space Adaptation (ISA): }
Removing the ISA module decreases mIoU by 1.6 points, from 47.6 (DRSL) to 46.0 (DRSL w/o ISA), indicating that ISA is helpful but not vital to the effectiveness of the proposed model.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we propose a distribution-regularized self-supervised learning approach for domain adaptation of semantic segmentation.
Parallel to the semantic segmentation decoding head, we employ a clustering-based feature regularization (MMDL-FR) module.
While the segmentation head identifies what differentiates a class, MMDL-FR explicitly models intra-class pixel-level feature variations, allowing the model to capture a much richer representation of each class at the pixel level and thus improving generalization.
Moreover, this task-wise disentanglement of information improves task-dependent representation learning and allows performing separate domain alignments.
The shared base network enables MMDL-FR to act as a regularizer over the segmentation head, thus reducing noisy pseudo-labels. Extensive experiments on standard synthetic-to-real adaptation show that the proposed DRSL outperforms state-of-the-art approaches.
{\small
\section{Introduction}
Large scale studies of the human microbiome have become increasingly common thanks to advances in next generation sequencing (NGS) technologies \citep{MetaHIT, HMP}.
A relevant task in these studies is to measure the association between a sample's microbial composition and individual characteristics, such as biomarkers and aspects of the sample's environment \citep{xochi2012,quince2013impact,kostic2015dynamics}.
The abundances of microbial taxa are measured by assigning DNA reads to reference genomes. Some experiments target specific genes, such as the 16S rRNA gene, while others sample the entire bacterial genome. In all cases, the resulting count data for a collection of samples are organized into a contingency table known as the operational taxonomic unit (OTU) table.
Several methods for association studies with microbial data apply ideas from RNA-seq and other high-throughput genomic experiments \citep{edger,deseq, metagenomeseq}.
These methods use raw or transformed counts of microbial species to test the association of a single species with relevant covariates. Typically, these tests are carried out one species at a time by using generalized linear models (GLMs) combined with families of distributions that are over-dispersed and zero-inflated \citep{xu2015} to accommodate well-known characteristics of microbial abundance data \citep{HZLi-review}. The major drawback of this approach is that it models species independently. This approach does not take into account correlations across microbial species and does not allow borrowing of information across species.
The outlined limitation has prompted the introduction of joint models of microbial abundance \citep{multinomial-dirichlet, logistic-normal, wadsworth2017integrative, MIMIX}. These methods model the counts of $I$ microbial species $(n_{i,j};i=1,\ldots,I)$ of a specific sample $j$, say a saliva sample, with a multinomial distribution.
To account for the overdispersion, these methods assume the multinomial parameter $P^j=(P_{1}^j,\ldots,P_{I}^j)$ is random and distributed according to a parametric model. For example, in \cite{multinomial-dirichlet} and \cite{wadsworth2017integrative}, $P^j$'s follow independent Dirichlet distributions and in \cite{logistic-normal} and \cite{MIMIX}, $P^j$'s follow multivariate logistic-normal distributions.
To associate the covariates to the microbial compositions, all models link the parameters of each distribution of $P^j$ (Dirichlet or logistic-normal) to covariates of sample $j$ via a regression function. Inference on the regression coefficients indicates whether a covariate is associated with the abundance of a species or not.
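As a concrete illustration of this two-stage sampling scheme (composition first, then counts), the sketch below draws a Dirichlet composition $P^j$ via normalized Gamma variates and then assigns reads multinomially; it is a generic Dirichlet-multinomial sampler, not the exact model of any of the cited methods:

```python
import random

def sample_dirichlet(alpha, rng):
    # P^j ~ Dirichlet(alpha), via normalized Gamma(alpha_i, 1) draws
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def sample_counts(alpha, n_reads, rng=None):
    # draw a composition, then assign n_reads reads to the I species
    rng = rng or random.Random(0)
    p = sample_dirichlet(alpha, rng)
    cum = [sum(p[:i + 1]) for i in range(len(p))]
    cum[-1] = 1.0  # guard against floating-point round-off
    counts = [0] * len(p)
    for _ in range(n_reads):
        u = rng.random()
        counts[next(i for i, c in enumerate(cum) if u <= c)] += 1
    return counts
```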
Although these joint models overcome the limitations of modeling single species separately, the assumed distributions of the $P^j$'s are restrictive. For instance, $P_i^j$ is strictly positive for all $i$ and $j$, which does not reflect the fact that some species can be completely absent in sample $j$. In addition, the variation of the $P^j$'s across samples might be mainly associated with latent characteristics that are not observed. In this case, methods which link model parameters exclusively to covariates do not capture dependence across species-specific residuals.
Bayesian nonparametric methods that jointly model the compositions $P^j$ offer flexible alternatives. A widely used class of nonparametric models stems from the Hierarchical Dirichlet process \citep{hdp}. In its simplest form, the Hierarchical Dirichlet process (HDP) assumes samples are exchangeable and the compositions $P^j$ over $I=\infty$ species are identically distributed. The exchangeability assumption in the HDP does not capture potential association between the $P^j$'s and covariates. Nonparametric models with covariates explicitly embedded are ideal candidates for modeling dependence of the compositions $P^j$ on covariates. There are only a few such models discussed in the literature. A relevant class of nonparametric models embedding covariates utilizes the Chinese restaurant process representation \citep{johnson2013bayesian}. A second class of such models utilizes completely random measures \citep{lijoi2014bayesian}. A third class of models follows the idea in \cite{DDP}. Among this class of models, \cite{rodriguez2011nonparametric}, \cite{muller2011product}, \cite{griffin2013comparing}, and \cite{arbel} construct dependent random measures using stick-breaking processes with atoms and weights specified through covariate-indexed stochastic processes.
Recently, a Bayesian nonparametric model for microbiome data specified through sample-specific latent factors has been discussed in \cite{boyuren}. This construction induces a marginal Dirichlet process prior for each composition $P^j$ and introduces dependences across samples by associating microbial compositions $P^j$ to linear combinations of latent factors. In addition, the authors introduced a link function with hard-truncation at zero to model zero-inflation in microbiome data. This model employs a shrinkage prior on the latent factors to produce parsimonious estimates that concentrate on a low-dimensional space.
This manuscript builds on the model of \cite{boyuren}, linking the microbial composition $P^j$ to covariate effects as well as to the latent factors. The resulting extended model takes into account overdispersion and zero-inflation in microbiome data.
More importantly, it can also enable association studies for microbiome data with efficient computations.
By estimating coefficients for linear combinations of relevant covariates, we visualize and infer whether a given covariate is associated with the microbial compositions or not.
We performed an extensive simulation analysis to compare the performance of the proposed model with that of a parametric latent factor model \citep{MIMIX} used in microbiome studies. The simulation study suggests that our model can accurately recover population-level trends of microbial abundances over covariates even when the model is misspecified. Our model outperforms that of \cite{MIMIX} in estimating the relationship between covariates and microbial abundances when the level of zero-inflation in the data increases. We also discuss the interpretation of model parameters and propose approaches to visualize covariates' effects.
The paper is organized as follows. In Section \ref{sec:2} we specify the Bayesian model and discuss the identifiability of relevant model parameters. Section \ref{sec:3} is dedicated to computational aspects and provides an overview of the sampling algorithm for posterior inference. Section \ref{sec:4} presents simulation studies and in Section \ref{application} we discuss an application of the model to data from type 1 diabetes studies which collected longitudinal measurements from a cohort of infants.
Section \ref{sec:6} concludes and discusses possible extensions of the proposed analyses.
\section{Prior model}
\label{sec:2}
In this section, we first review the construction of the Dependent Dirichlet processes in \cite{boyuren}, and then provide a new version of the model which incorporates covariates. We also discuss the identifiability of the model parameters, including the parameters that correspond to the covariates' effects.
The model will be used in the next sections to analyze the OTU table $\bn=(n_{i,j}; i\leq I,j\leq J)$, where $n_{i,j}$ is the observed count of the microbial species $i$ in sample $j$. $I$ and $J$ are the total number of species and samples respectively. Our aim is to extract from the OTU table information on the relationships between microbial composition and observed samples' characteristics.
\subsection{Dependent Dirichlet processes}
\label{prev.model}
In Table \ref{otu.table.example}, we illustrate a subset of the OTU table from the DIABIMMUNE project \citep{tommi}. The goal of the DIABIMMUNE project is to compare microbiome communities in
infants with type 1 diabetes (T1D) or serum auto-antibodies (markers predicting
the onset of T1D) and healthy controls in three countries: Finland (FIN), Estonia (EST) and Russia (RUS). The study is
prospective and longitudinal, and the microbial abundances are measured with shotgun sequencing. Table \ref{otu.table.example} records the counts of 10 microbial species in three Russian samples and three Finnish samples based on 16S rRNA sequencing. We denote the $i$th recorded species by $Z_i$. For instance, $Z_1$ is Bifidobacterium longum in Table \ref{otu.table.example}.
For sample $j$, we assume the vector $(n_{1,j},\ldots,n_{I,j})$ follows a multinomial distribution with unknown parameters.
Our analyses extend easily to the case in which the counts $n_{i,j}$ are Poisson random variables with unknown means.
The sequencing depth $n_j=\sum_{i=1}^I n_{i,j}$ and the sample-specific multinomial probabilities $(P_{1}^j,\ldots,P_{I}^j)$ determine the distribution of $(n_{i,j};i\leq I)$. The probabilities $(P_{1}^j,\ldots, P_{I}^j)$ represent the microbial composition of sample $j$. We use $P^j(\{Z_i\})=P_{i}^{j}$ to denote the relative abundance of $Z_i$ in sample $j$. The vectors $P^j$ vary across samples according to heterogeneity of either measured or unknown characteristics of the $J$ samples. For example, in Table \ref{otu.table.example}, the maximum likelihood estimates (MLE) of abundances of Bifidobacterium longum ($P^j(\{Z_1\})$) tend to be higher in Russian samples than in the Finnish samples.
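The per-sample maximum likelihood estimate of the multinomial probabilities mentioned above is just the vector of observed proportions. A minimal sketch (the counts below are made-up values, not those of Table \ref{otu.table.example}):

```python
import numpy as np

# Illustrative 3-species OTU counts for two samples (columns); values are made up.
counts = np.array([[10, 0],
                   [30, 5],
                   [60, 5]], dtype=float)

def composition_mle(counts):
    """MLE of the multinomial probabilities P^j: observed proportions n_ij / n_j."""
    depths = counts.sum(axis=0)   # sequencing depths n_j
    return counts / depths

P_hat = composition_mle(counts)
print(P_hat[:, 0])   # -> [0.1 0.3 0.6]
```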
\begin{table}
\footnotesize
\centering
\caption[A subset of DIABIMMUNE dataset]{{An example of OTU table \citep{tommi}.}}
\label{otu.table.example}
\begin{tabular}{@{}ccccccccccccc@{}}
\toprule
Species & RUS1 & RUS2 & RUS3 & FIN1 & FIN2 & FIN3 \\
\midrule
Bifidobacterium longum & 0 & 73222 & 3014074 & 14294 & 7291 & 9228 \\
Bifidobacterium bifidum & 3594189 & 49223 & 0 & 11177 & 11656816 & 14759 \\
Escherichia coli & 4210380 & 23025 & 635855 & 29700 & 7508 & 556208 \\
Bifidobacterium breve & 0 & 136 & 245827 & 19312 & 7223273 & 0 \\
Bacteroides fragilis & 0 & 88751 & 0 & 6257732 & 343 & 75506 \\
Bacteroides vulgatus & 0 & 7454 & 0 & 4745 & 0 & 25859 \\
Bacteroides dorei & 0 & 0 & 0 & 0 & 0 & 0 \\
Bifidobacterium adolescentis & 0 & 111248 & 1626357 & 735715 & 1194 & 0 \\
Bacteroides uniformis & 0 & 3901 & 0 & 5859 & 1633 & 28638 \\
Ruminococcus gnavus & 145485 & 33004 & 92101 & 253830 & 29 & 1186774 \\
\bottomrule
\end{tabular}
\end{table}
We describe the Bayesian model for the unknown compositions $P^j$, $j=1,\ldots,J$ in \cite{boyuren}.
Let $\Zsc$ be the set of all microbial species and $Z_i\in\Zsc, i\geq 1$ be a sequence of distinct species.
The model does not constrain {\it a priori} the number of species present in the $J$ samples. The relative abundance of OTU $Z_i$ in sample $j$ is defined as
\begin{equation}\label{old.model}
P^j(\{Z_i\}) = \frac{\sigma_i\langle\bX_i,\bY_j\rangle_+^{2}}{\sum_{i'} \sigma_{i'}\langle\bX_{i'},\bY_j\rangle_+^{2}}
\end{equation}
where $\sigma_i\in (0,1)$, $\sigma_1>\sigma_2>\sigma_3>\ldots$, and $\bX_i,\bY_j\in \mathbb R^K$. The $k$-th components of $\bX_i$ and $\bY_j$ are denoted as $X_{k,i}$ and $Y_{k,j}$. We will explain the definitions of $\sigma_i$, $\bX_i$, $\bY_j$ and $K$ in the next paragraph. $\mathbb I(\cdot)$ is the indicator function and $x_+ = x\times \mathbb I(x>0)$. $\langle\cdot,\cdot\rangle$ denotes the standard inner product in $\mathbb R^K$. We define $Q_{i,j} = \langle \bX_i,\bY_j\rangle$. In addition, $\bsigma=(\sigma_i;i\geq 1)$, $\bX=(\bX_i;i\geq 1)$, $\bY=(\bY_j;j\leq J)$ and $\bQ = (Q_{i,j};i\geq 1, j\leq J)$.
We can interpret $\sigma_i>0$ as a summary of the overall abundance of species $i$ across samples. We call $\bX_i$ and $\bY_j$ species vector and sample vector, respectively. $\bX_i$ and $\bY_j$ are latent components of the probability model. Differences across compositions $P^j$ are determined by the $\bY_j$ latent vectors. Vectors $\bY_j$ can be interpreted as latent characteristics of the samples that affect their microbial compositions.
The model assumes that there are $K$ latent characteristics and $\bX_i$ corresponds to the effects of these $K$ latent characteristics on the abundance of the species $Z_i$.
The construction above implies that the angle $\phi_{j,j'}$ between $\bY_j$ and $\bY_{j'}$ determines the degree of similarity between compositions $P^j$ and $P^{j'}$.
Specifically, small $\phi_{j,j'}$ indicates that $P^j$ and $P^{j'}$ are similar. When $\phi_{j,j'}=0$, compositions $P^j$ and $P^{j'}$ are identical.
Symmetrically, the angle $\varphi_{i,i'}$ between $\bX_i$ and $\bX_{i'}$ can be viewed as a measure of similarity between species $Z_i$ and $Z_{i'}$. When $\varphi_{i,i'}$ decreases towards zero, the correlation between $(P^j(\{Z_i\});j\leq J)$ and $(P^j(\{Z_{i'}\});j\leq J)$ increases to one.
The prior specification in the model is as follows. First, $\sigma_1>\sigma_2>\sigma_3>\ldots$ are {\it a priori} ordered points from a Poisson process on $(0,1)$ with intensity $\nu(\sigma) = \alpha\sigma^{-1}(1-\sigma)^{-1/2}$. Second, the $X_{k,i}$ random variables are independent Gaussian $\Nsc(0,1)$, $i=1,2,\ldots$, $k=1,2,\ldots,K$. We can assume for the moment that the $\bY_j$'s are fixed.
The resulting marginal prior distribution on the composition $P^j$ is a Dirichlet process \citep{boyuren}. In addition, $P^j$ and $P^{j'}$ are dependent for $j\neq j'$. To provide some intuition on this construction of the Dirichlet process we consider a similar model with $I<\infty$ species. For simplicity we set $\|\bY_j\|=1$, where $\|\cdot\|$ is the Euclidean norm of a real vector.
The prior on the $\bX_i$'s induces a standard normal distribution on $(Q_{1,j},\ldots,Q_{I,j})$. The prior distribution of $(Q_{i,j})_+^{2}$ is therefore a mixture of a point mass at zero and a $\text{Gamma}(1/2,1/2)$ distribution. Assume $(\sigma_1,\ldots,\sigma_I)$ are independent $\text{Beta}(\alpha/I, 1/2-\alpha/I)$ variables. It can be verified via moment generating functions that the joint law of these Beta random variables, once ordered, converges to the law of a Poisson process on $(0,1)$ with intensity $\alpha\sigma^{-1}(1-\sigma)^{-1/2}$ when $I\to \infty$. The products $(\sigma_i(Q_{i,j})_+^{2},i=1,\ldots,I)$ then follow a mixture distribution of a point mass at zero and a $\text{Gamma}(\alpha/I,1/2)$.
The normalized vector $(\sigma_i(Q_{i,j})_+^{2}/\sum_{i'}\sigma_{i'}(Q_{i',j})_+^{2},i=1,\ldots,I)$, conditioned on $(\mathbb I(Q_{1,j}>0), \ldots, \mathbb I(Q_{I,j}>0))$, follows a Dirichlet distribution with weights proportional to $\mathbb I(Q_{i,j}>0)$. As $I\to\infty$, the ordered $(\sigma_1,\ldots,\sigma_I)$ converge in distribution to the Poisson process with intensity $\nu$, and $(\sigma_i(Q_{i,j})_+^{2}/\sum_{i'}\sigma_{i'}(Q_{i',j})_+^{2},i=1,\ldots,I)$ becomes a Dirichlet process \citep{ferguson1973bayesian}. This holds also when $\|\bY_j\|\neq 1$ because the distribution of $(\sigma_i(Q_{i,j})_+^{2}/\sum_{i'}\sigma_{i'}(Q_{i',j})_+^{2},i=1,\ldots,I)$ does not depend on $\|\bY_j\|$.
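The finite-$I$ construction above is straightforward to simulate. A minimal sketch (the values of $I$ and $\alpha$ are illustrative) draws independent Beta weights and Gaussian scores, and normalizes; note that roughly half of the species receive exactly zero mass, which is the mechanism generating zero-inflation:

```python
import numpy as np

rng = np.random.default_rng(0)
I, alpha = 500, 2.0                                   # illustrative truncation and concentration

sigma = rng.beta(alpha / I, 0.5 - alpha / I, size=I)  # independent Beta(alpha/I, 1/2 - alpha/I)
Q = rng.standard_normal(I)                            # Q_{i,j} = <X_i, Y_j> with ||Y_j|| = 1
weights = sigma * np.clip(Q, 0.0, None) ** 2          # sigma_i (Q_{i,j})_+^2

P = weights / weights.sum()                           # normalized composition P^j
print(P.sum(), np.mean(P == 0))                       # sums to 1; about half the atoms are zero
```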
For inferential and visualization purposes it is desirable that the $\bY_{j}$ latent vectors concentrate approximately on a low dimensional space. The resulting $\bY_j$ are parsimonious latent factors that capture the variability of observed species abundances across samples. To this end, the model applies the prior studied in \cite{dunsonfactor},
$$
\bY_j\sim \Nsc(\bzero,\text{diag}\{\gamma_1,\ldots,\gamma_K\}),
$$
where $\gamma_k$ rapidly decrease with $k$. The prior formalizes the desideratum of having the norm $\|\bY_j\|$ mostly driven by the first few components of $\bY_j$, say the first three components $(Y_{1,j},Y_{2,j},Y_{3,j})$, while the remaining components, $(Y_{4,j},\ldots,Y_{K,j})$, are negligible. In other words, only a small set of $\bY_j$ entries---three in this example---are relevant. This approach is preferable to a hyper-prior on the dimensionality of $\bY_j$ mainly for computational convenience.
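A sketch in the spirit of the multiplicative shrinkage prior of \cite{dunsonfactor}: the variances $\gamma_k$ are formed as inverse cumulative products of Gamma draws, so later components shrink stochastically toward zero. The hyperparameter values below are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
K, J = 8, 5                               # illustrative latent dimension and sample count

# Multiplicative-gamma-style shrinkage: gamma_k = 1 / prod_{h<=k} delta_h,
# with delta_h ~ Gamma(3, 1) (illustrative shape), so gamma_k decays in k.
delta = rng.gamma(3.0, 1.0, size=K)
gamma = 1.0 / np.cumprod(delta)
Y = rng.standard_normal((K, J)) * np.sqrt(gamma)[:, None]   # Y_j ~ N(0, diag(gamma))
print(gamma[0] > gamma[-1])               # first components carry most of ||Y_j||
```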
\subsection{Fixed effects}
\label{dp.fixed}
The goal of this subsection is to model relationships between microbial compositions and samples' characteristics. For example, in studies of Inflammatory Bowel Disease (IBD) \citep{xochi2012,greenblum2012metagenomic,gevers2014treatment}, researchers were interested in identifying microbes that correlate with the onset of IBD to develop therapeutic hypotheses. These analyses typically utilize regression models where the outcomes coincide with OTU abundances.
Following a similar strategy, we expand the model in Section \ref{prev.model}.
Assume there are $L\geq 1$ observed covariates. We use $\bw_j=(w_{l,j};l=1,\ldots,L)$ to denote the covariates' values for sample $j$. The effects of this set of covariates on species $i$ are $\bv_i=(v_{l,i};l=1,\ldots,L)$. The collection of all $\bw_j$ and $\bv_i$ are $\bw=(\bw_1,\ldots,\bw_J)$ and $\bv = (\bv_1,\ldots,\bv_I)$. Our extended model directly modifies the random variables $Q_{i,j}$'s introduced in the definition of the model (\ref{old.model}), by adding a linear function of $\bw_j$ and an error term:
\begin{equation}
Q_{i,j} = \langle\bX_i, \bY_j\rangle + \langle\bv_i,\bw_j\rangle + \epsilon_{i,j},
\label{expand.model}
\end{equation}
where $\epsilon_{i,j}\overset{iid}{\sim} \Nsc(0,1)$ is the error term. Thus,
\begin{displaymath}
P^j(\{Z_i\}) = \frac{\sigma_{i} (Q_{i,j})_+^{2}}{\sum_{i'} \sigma_{i'}(Q_{i',j})_+^{2}}.
\end{displaymath}
The inner product $\langle\bv_i,\bw_j\rangle$ represents the fixed effects of our model, whereas $\langle\bX_i,\bY_j\rangle$ represents the random effects. We fix the variance of the errors to one since the model for $P^j$ is invariant if we rescale all $Q_{i,j}$ variables by a fixed multiplicative term.
In this construction, $\bv_i$ and $\bw_j$ can be viewed as additional dimensions of $\bX_i$ and $\bY_j$ respectively.
%
The angle between $(\bw_j,\bY_j)$ and $(\bw_{j'},\bY_{j'})$, denoted as $\tilde\phi_{j,j'}$, measures the similarity between the microbial compositions $P^j$ and $P^{j'}$. As in model (\ref{old.model}), one can verify that the correlation $\text{cor}(P^j(A),P^{j'}(A))$ is monotone with respect to $\tilde\phi_{j,j'}$. Similarly, the angle between $(\bv_i,\bX_i)$ and $(\bv_{i'},\bX_{i'})$, $\tilde\varphi_{i,i'}$, is representative of the correlation between abundances of species $i$ and $i'$ across samples. A small $\tilde\varphi_{i,i'}$ value makes the correlation between vectors $(P^j(\{Z_i\});j\leq J)$ and $(P^j(\{Z_{i'}\});j\leq J)$ close to one.
The coefficients $\bv_i$ are \textit{a priori} independent normal random variables with mean zero and variance one.
When the latent factors $\bY$ are fixed, and the prior for $\bX_i$ and $\sigma_i$ remains the same as in Section \ref{prev.model}, the microbial composition $P^j$, for each $j=1,\ldots,J$, retains a marginal Dirichlet Process distribution. More precisely, $P^j$ is a Dirichlet process with concentration parameter $\alpha$. This can be shown using the same argument as in Section \ref{prev.model}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.6]{linear_eff.pdf}
\caption[Observed data generated from model in Chapter 2]{
Effect of a single covariate $w_{1,j}$ on microbial species abundances.
We illustrate the expected abundances of species 1 and 2 when $w_{1,j}$ varies (\textbf{Left}) and the observed microbial abundances species 1-10 in one simulated dataset as $w_{1,j}$ changes (\textbf{Right}).
We focus on a single sample $j$ and fix the random effects $\langle \bX_i,\bY_j\rangle$ in all simulations.
Only the value of $w_{1,j}$ and the error terms $\epsilon_{i,j}$'s vary.
The expected abundances are calculated by averaging over 1000 simulation replicates.
We consider the cases where $Q_{i,j}=v_{1,i}w_{1,j}+\langle\bX_i,\bY_j\rangle+\epsilon_{i,j}$ with $v_{1,1}=5,$ $v_{1,2}=-5$ and $v_{1,i}=0$ for $i>2$ (\textbf{Top}) and
$Q_{i,j}=v_{1,i}\sin(w_{1,j})+\langle\bX_i,\bY_j\rangle + \epsilon_{i,j}$ with $v_{1,1}=5$ and $v_{1,i}=0$ for $i>1$ (\textbf{Bottom}).
The covariate $w_{1,j}$ varies from $-5$ to $5$ with $0.1$ increments.}
\label{fig1}
\end{figure}
It is important not to misinterpret the coefficients $\bv_i$. The species abundances are not linear functions of the covariates (see Figure \ref{fig1}), and in certain cases the relationship between the covariates and the species abundances is not even monotone. Consider a single covariate $w_{1,j}$ and assume $Q_{i,j}=v_{1,i} w_{1,j} + \langle\bX_i,\bY_j\rangle +\epsilon_{i,j}$, where $v_{1,1}=5$, $v_{1,2}=1$ and $v_{1,i}=0$ when $i>2$. For simplicity, assume in addition that the $\sigma_i$ are all equal to $0.5$.
When $w_{1,j}$ is small, say $w_{1,j}\in (0,0.5)$, the abundances of species 1 and 2 increase with $w_{1,j}$. However, as $w_{1,j}$ gets larger, say $w_{1,j}>5$, species 1 dominates all other species and the abundance of species 2 decreases to nearly zero.
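The non-monotone behavior can be checked by Monte Carlo. The sketch below averages compositions over replicated error draws at two covariate values; for simplicity the random effects $\langle\bX_i,\bY_j\rangle$ are fixed at zero, and all other values match the example above.

```python
import numpy as np

rng = np.random.default_rng(2)
I, reps = 10, 4000
v1 = np.zeros(I); v1[0], v1[1] = 5.0, 1.0      # v_{1,1} = 5, v_{1,2} = 1, rest 0
sigma = np.full(I, 0.5)                        # sigma_i all equal to 0.5

def expected_abundance(w):
    """Average composition over replicated error draws, with <X_i, Y_j> = 0."""
    Q = v1 * w + rng.standard_normal((reps, I))
    wgt = sigma * np.clip(Q, 0.0, None) ** 2
    return (wgt / wgt.sum(axis=1, keepdims=True)).mean(axis=0)

small, large = expected_abundance(0.5), expected_abundance(10.0)
print(large[0] > 0.9)        # species 1 dominates for large w
print(large[1] < small[1])   # species 2's abundance eventually decreases
```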
\subsubsection{Models for data analysis}
\label{sub.spec.model}
In our analyses we considered longitudinal data with repeated measurements over time for each individual.
Assume samples $j=1,\ldots,J$ are partitioned into $U$ groups, i.e. $U$ distinct individuals.
We use $u_j$ to identify the individual associated to sample $j$. We enforce the samples $j$ and $j'$ from the same individual $u$ ($u_j=u_{j'}=u$) to share common latent factors $\bY_u$. The \textit{longitudinal} version of model (\ref{expand.model}) utilizes
\begin{equation}
Q_{i,j} = \langle \bX_i,\bY_{u_j}\rangle + \langle\bv_i,\bw_j\rangle + \epsilon_{i,j}.
\label{expand.model.long}
\end{equation}
The rationale for this model is that samples derived from the same individual tend to be similar.
The covariates $\bw_j$ will include time information (e.g. individual's age) for each sample $j$.
This version of the model is tailored towards longitudinal data and studies with repeated measurements, and it allows one to visualize time trends of microbial compositions.
We will use a truncated version of model (\ref{expand.model}) or (\ref{expand.model.long}) in data analyses, which we call the \textit{finite-species} model. Truncating the stick-breaking representation of the Dirichlet process has been studied extensively in the literature \citep{ishwaran2002exact}. The truncated process can be made arbitrarily close to the Dirichlet process in total variation distance when the number of retained atoms is large. In our case, we truncate the model (\ref{expand.model.long}) at the number of observed species, $I$. This is sufficient for data analysis as the sequencing depth in microbiome studies is generally large enough to capture most of the microbial species of interest. With $I<\infty$ species the finite-species model is defined by
\begin{equation}
\begin{aligned}
Q_{i,j} &= \langle \bX_i, \bY_j \rangle + \langle\bv_i,\bw_j\rangle + \epsilon_{i,j}, &\qquad
P^j(\{Z_i\})&= \frac{\sigma_i (Q_{i,j})_+^{2}}{\sum_{i'=1}^I \sigma_{i'} (Q_{i',j})_+^{2}},
\qquad i=1,\ldots,I.
\end{aligned}
\label{expand.model.finite}
\end{equation}
The priors for $\bX_i$ and $\bY_j$ remain the same, while the prior for the $\sigma_i$'s becomes
$
\sigma_i\overset{iid}{\sim}\text{Beta}(\alpha/I, 1/2-\alpha/I).
$
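A generative sketch of the finite-species model, with illustrative dimensions and an assumed geometric decay for the factor variances $\gamma_k$:

```python
import numpy as np

rng = np.random.default_rng(3)
I, J, K, L, alpha = 50, 20, 4, 2, 2.0      # illustrative dimensions

sigma = rng.beta(alpha / I, 0.5 - alpha / I, size=I)
gamma = 0.5 ** np.arange(K)                # assumed decaying factor variances
X = rng.standard_normal((K, I))            # species vectors X_i
Y = rng.standard_normal((K, J)) * np.sqrt(gamma)[:, None]   # sample vectors Y_j
v = rng.standard_normal((L, I))            # covariate effects v_i
w = rng.standard_normal((L, J))            # observed covariates w_j

Q = X.T @ Y + v.T @ w + rng.standard_normal((I, J))   # finite-species model
wgt = sigma[:, None] * np.clip(Q, 0.0, None) ** 2
P = wgt / wgt.sum(axis=0, keepdims=True)              # compositions P^j (columns)

# OTU table: multinomial counts with read depth 100,000 per sample.
n = np.array([rng.multinomial(100_000, P[:, j]) for j in range(J)]).T
print(np.allclose(P.sum(axis=0), 1.0))
```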
\subsection{Identifiability}
\label{sec.identify}
In this subsection we consider the identifiability of the proposed model.
Since the model is invariant under simultaneous rotations of the vectors $\bY_j$ and $\bX_i$, we cannot learn $\bY$ from the data.
We discuss the identifiability of the correlation matrix $\bS$ associated with $\bSigma = \bY^\intercal \bY + \bI$, where $\bI$ is the $J\times J$ identity matrix.
Similarly, since the composition $P^j$ is invariant to scale transformation of $\bsigma$ we will discuss identifiability of the ratios $\sigma_i/\sigma_{i'}$ for $i\neq i'$.
We assume that the number of samples is finite and that covariates $\bw_j$'s are independent with $\mathbb E(\bw_j\bw_j^\intercal)$ of full rank.
We proceed assuming initially that $(P^j(\{Z_i\});i\geq 1, j\leq J)$ are observable random variables. Recall that
\begin{equation}\label{eq:gauss}
(Q_{i,j};j\leq J)|\bv_i,\bY,\bw \sim \Nsc(\bw^\intercal\bv_i,\bSigma).
\end{equation}
Since we assume that $P^j(\{Z_i\})$ is observable, we have that $P^j(\{Z_i\})=0$ implies $Q_{i,j}\leq 0$.
Consider a set of new random variables, denoted as $(\Ptilde^j(\{Z_i\});i\leq I,j\leq J)$, where $\Ptilde^j(\{Z_i\})=\mathbb I(P^j(\{Z_i\})>0)$. From \eqref{eq:gauss}, the conditional distribution of $(\Ptilde^j(\{Z_i\});i\geq 1,j\leq J)$ given $(\bsigma,\bY, \bv, \bw)$ is
\begin{equation}
\begin{aligned}
&p(\Ptilde^j(\{Z_i\}),i\geq 1,\;\; j\leq J|\bsigma,\bY, \bv, \bw) \\
\propto& \prod_{i}\left[\int_{\Asc_{i}}(2\pi)^{-J/2}|\bSigma|^{-1/2}\times\exp\left(-\frac{1}{2}(\bQ_i-\bmu_i)^\intercal\bSigma^{-1}(\bQ_i-\bmu_i)\right)d\bQ_i\right].
\end{aligned}
\label{lik.fnc}
\end{equation}
Here $\bQ_i=(Q_{i,1},\ldots,Q_{i,J})$, $\bmu_{i} = \bw^\intercal\bv_i$, $\Asc_{i}=\bigtimes_{j=1}^J \Asc_{i,j}$ and $\Asc_{i,j}=(-\infty,0]$ if $\Ptilde^j(\{Z_i\})=0$, while $\Asc_{i,j}=[0,\infty)$ when $\Ptilde^j(\{Z_i\})=1$.
To illustrate the identifiability of the parameters $(\sigma_i/\sigma_{i'};i\neq i'),\bS$ and $\bv$, we start with two simplified cases and then give a proposition.
\begin{enumerate}
\item {\it Without random effects} ($\bY = \bzero$). We first note that, conditioning on $\bw$, for a fixed $i$ the indicators $(\Ptilde^j(\{Z_i\});j\leq J)$ are samples from a standard probit model \citep{albert1993bayesian}, where $\bv_i$ serves as the regression coefficients and the sample covariates are $\bw_j$. Based on the theory of generalized linear models, $\bv_i$ is identifiable when $\mathbb E(\bw_j \bw_j^\intercal)$ is of full rank.
We then consider $(\sigma_i/\sigma_{i'};i\neq i')$.
By construction, $$
\frac{P^j(\{Z_i\})}{P^j(\{Z_{i'}\})}=\frac{\sigma_i}{\sigma_{i'}}\frac{(Q_{i,j})_+^{2}}{(Q_{i',j})_+^{2}}.
$$
Here we use the convention that the ratio is zero whenever the denominator is zero. To ensure the identifiability of $(\sigma_i/\sigma_{i'};i\neq i')$, we want to show that if $$(P^j(\{Z_i\});i\ge 1, j\leq J),\bw|\bv,\bsigma\overset{d}{=} (P^j(\{Z_i\});i\ge 1, j\leq J),\bw|\bv',\bsigma',$$ then $\sigma_i/\sigma_{i'}=\sigma'_i/\sigma'_{i'}$ for all $i\neq i'$. Using the identifiability of $\bv_i$,
the above equality in distribution implies $\bv_i=\bv_i'$, and in turn the equality of the conditional distributions
$p(((Q_{i,j})_+^{2},(Q_{i',j})_+^{2}),\bw_j|\bv,\bsigma)$ and $p(((Q_{i,j})_+^{2},(Q_{i',j})_+^{2}),\bw_j|\bv',\bsigma').$
This directly implies $\sigma_i/\sigma_{i'}=\sigma_i'/\sigma'_{i'}$ for all $i\neq i'$.
\item {\it Without fixed effects } ($\bv_i=\bzero$). We consider $\bsigma$ and $\bS$. The distribution of $(\Ptilde^j(\{Z_i\}),\Ptilde^{j'}(\{Z_i\}))$ is
$$
p(\Ptilde^j(\{Z_i\}),\Ptilde^{j'}(\{Z_i\})|\bsigma,\bY) = \frac{1}{2\pi} \int_{\Asc_{i,j}\times \Asc_{i,j'}} \!\!\!\!\!\!\!\!(1-S_{j,j'}^2)^{-1/2}\exp\left(-\frac{1}{2}\bq^\intercal\bS_{j:j'}^{-1}\bq\right)d\bq,
$$
where $S_{j,j'}$ is the correlation between $Q_{i,j}$ and $Q_{i,j'}$, and $\bS_{j:j'}$ is the correlation matrix of $(Q_{i,j},Q_{i,j'})$. $\Asc_{i,j}=(-\infty,0]$ if $\Ptilde^j(\{Z_i\})=0$, while $\Asc_{i,j}=[0,\infty)$ if $\Ptilde^j(\{Z_i\})=1$.
Corollary 3.12 in \cite{slepian} shows that $p(\Ptilde^j(\{Z_i\}),\Ptilde^{j'}(\{Z_i\})|\bv_i,\bY) $, when $\bv_i=\bzero$, is monotone with respect to $S_{j,j'}$. This implies that $S_{j,j'}$ is identifiable.
Using the same arguments as in the case where no random effect is present, one can show that the ratios $(\sigma_i/\sigma_{i'};i\neq i')$ remain identifiable.
\end{enumerate}
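The monotonicity behind the second case has a closed form for a pair of samples: for a standard bivariate normal with correlation $S_{j,j'}$, the orthant probability is $P(Q_{i,j}>0,\,Q_{i,j'}>0) = 1/4 + \arcsin(S_{j,j'})/(2\pi)$, which is strictly increasing in $S_{j,j'}$. A quick Monte Carlo check of this identity (with an illustrative correlation value):

```python
import numpy as np

rng = np.random.default_rng(4)
S = 0.6                                                # illustrative correlation S_{j,j'}
cov = np.array([[1.0, S], [S, 1.0]])

Q = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
both_present = np.mean((Q[:, 0] > 0) & (Q[:, 1] > 0))  # empirical orthant probability
theory = 0.25 + np.arcsin(S) / (2 * np.pi)             # closed-form orthant probability
print(round(theory, 3))   # -> 0.352
```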
In the general case the identifiability of the model parameters, with both fixed and random effects, is described through Proposition 1 in Section S1 of the Supplementary Material.
\section{Posterior simulations and visualization of covariates' effects}
\label{sec:3}
In this section we focus on posterior inference and computational aspects.
In Section \ref{comp.1} we introduce an algorithm for posterior simulations with the model described in Section \ref{dp.fixed}. Then, in Section \ref{comp.2} we propose graphical visualizations to illustrate associations between microbial compositions and covariates. These representations are relevant for the analysis of microbial abundances because, as we mentioned in Section \ref{dp.fixed}, a positive (or negative) element of the vector $\bv_i$, say the $l$-th element, does not imply a monotone relation between the $l$-th covariate and the abundances of species $i$. To illustrate the relation between the $l$-th covariate and species $i$, we estimate how the abundance of species $i$ would vary at hypothetical values of the $l$-th covariate.
\subsection{Posterior simulations}
\label{comp.1}
We proceed with the finite-species model (\ref{expand.model.finite}). The likelihood function is
$$
p(\bn|\bQ,\bsigma) \propto \left(\prod_{j=1}^J\prod_{i=1}^I(\sigma_i(Q_{i,j})_+^{2})^{n_{i,j}}\right)\times\prod_{j=1}^J\left(\sum_{i=1}^I\sigma_i(Q_{i,j})_+^{2}\right)^{-n_j},
$$
and
\begin{equation}
\begin{aligned}
p(\bsigma,\bQ,\bX,\bY,\bv|\bn,\bw)\propto \left(\prod_{j=1}^J\prod_{i=1}^I(\sigma_i(Q_{i,j})_+^{2})^{n_{i,j}}\right)\times\prod_{j=1}^J\left(\sum_{i=1}^I\sigma_i(Q_{i,j})_+^{2}\right)^{-n_j}&\times \\
\pi(\bsigma,\bQ|\bX,\bY,\bv,\bw) \pi(\bX,\bY,\bv)&,
\end{aligned}
\label{post.den.raw}
\end{equation}
where $\pi$ indicates the prior. By introducing positive latent random variables $\bT=(T_1,\ldots,T_J)$ as in \cite{james-latent}, we rewrite the conditional distribution,
\begin{equation}
\begin{aligned}
p(\bsigma,\bQ,\bX,\bY,\bv|\bn,\bw)\propto& \int \pi(\bsigma,\bQ,\bX,\bY,\bv|\bw)\times\\
&\prod_{j=1}^J\left\{\left(\prod_{i=1}^I(\sigma_i(Q_{i,j})_+^{2})^{n_{i,j}}\right)T_j^{n_j-1}\exp\left(-T_j\sum_i\sigma_i(Q_{i,j})_+^{2}\right)\right\}d\bT.
\end{aligned}
\label{post.density}
\end{equation}
We use a Gibbs sampler to perform posterior simulations.
The algorithm iteratively samples $\bsigma,\bT,\bQ,\bX,\bY$ and $\bv$ from the full conditional distributions. We describe the two components of the algorithm.
\begin{enumerate}
\item The first component samples $\bsigma,\bT$ and $\bQ$ from the full conditional distributions. We note that $\sigma_1,\ldots,\sigma_I$, given the remaining variables, are conditionally independent. The sampling of $(\sigma_1,\ldots,\sigma_I)$ from the full conditional distribution is the same as in \cite{boyuren}. The random variables $T_1,\ldots,T_J$, given $(\bQ,\bn,\bsigma)$, are conditionally independent with Gamma distributions. These random variables can be straightforwardly generated from the full conditional distribution. To complete this part of the algorithm we can write
\begin{equation}
\begin{aligned}
p(Q_{i,j}|&\bn,\bQ_{-i,-j},\bT,\bsigma,\bX,\bY,\bw,\bv)\propto\\
&(Q_{i,j})_+^{2n_{i,j}}\times\exp(-T_j\sigma_i(Q_{i,j})_+^{2})\times\exp\left(-\frac{\left(Q_{i,j}-\langle\bX_i,\bY_j\rangle-\langle\bv_i,\bw_j\rangle\right)^2}{2}\right),
\end{aligned}
\label{post.Q}
\end{equation}
where $\bQ_{-i,-j}$ denotes $\bQ$ without the entry $Q_{i,j}$. The density (\ref{post.Q}) indicates that the random variables $Q_{i,j}$ are conditionally independent. We also note that the density in (\ref{post.Q}) is log-concave. We use these two properties to sample $\bQ$ from the full conditional distribution.
\item The second component considers the sampling of $\bY,\bX$ and $\bv$ from the full conditional distributions. Using expression (\ref{post.density}) we write
$$
p(\bX|\bn,\bsigma,\bT,\bQ,\bY,\bv,\bw)\propto \exp\left(-\sum_{i,j}\frac{(Q_{i,j}-\langle\bX_i,\bY_j\rangle-\langle\bv_i,\bw_j\rangle)^2}{2}\right)\times \pi(\bX).
$$
Recall that the $\bX_i$'s are {\it a priori} independent normal random variables. Therefore the full conditional distribution of $\bX_i$ coincides with the conjugate posterior distribution in a standard linear model \citep{lindley1972bayes}. Sampling of $\bY$ and $\bv$ from the full conditional distributions follows identical arguments. Indeed the prior model studied in \cite{dunsonfactor}, which we use for $\bY$, is conditionally conjugate.
\end{enumerate}
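A univariate sketch of sampling $Q_{i,j}$ from the full conditional (\ref{post.Q}): since the density is log-concave one could use adaptive rejection sampling; for simplicity we use a grid-based inverse-CDF draw here instead. All parameter values below are illustrative.

```python
import numpy as np

# Illustrative values for the quantities entering the full conditional of Q_{i,j}.
n_ij, T_j, sigma_i = 3, 0.8, 0.4
m = 0.5   # conditional mean <X_i, Y_j> + <v_i, w_j>

def log_density(q):
    """Unnormalized log full conditional of Q_{i,j} (log-concave)."""
    qp = np.maximum(q, 0.0)
    lp = -0.5 * (q - m) ** 2 - T_j * sigma_i * qp ** 2
    if n_ij > 0:   # (Q)_+^{2 n_ij} vanishes for q <= 0 when the count is positive
        lp = np.where(q > 0, lp + 2 * n_ij * np.log(np.maximum(q, 1e-300)), -np.inf)
    return lp

grid = np.linspace(-6.0, 8.0, 4001)
logp = log_density(grid)
p = np.exp(logp - logp.max())
cdf = np.cumsum(p); cdf /= cdf[-1]

rng = np.random.default_rng(5)
draws = np.interp(rng.random(5000), cdf, grid)   # inverse-CDF draws on the grid
print(draws.min() > 0)                           # support is (0, inf) when n_ij > 0
```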
\subsection{Visualization of covariate effects}
\label{comp.2}
We consider the partial derivatives
$$
\frac{\partial P^j(\{Z_i\})}{\partial w_{l,j}} := \partial\left[\frac{\sigma_i(\langle\bX_i,\bY_j\rangle + \langle\bv_i,\bw_j\rangle + \epsilon_{i,j})_+^{2}}{\sum_{i'} \sigma_{i'}(\langle\bX_{i'},\bY_j\rangle + \langle\bv_{i'},\bw_j\rangle + \epsilon_{i',j})_+^{2}}\right]\bigg/\partial w_{l,j}.
$$
The derivative $\partial P^j(\{Z_i\})/\partial w_{l,j}$ quantifies the abundance variation of species $i$ in sample $j$ in response to an infinitesimal increment of the $l$th component of $\bw_j$. We can estimate these derivatives from the data using the posterior approximation obtained by the algorithm in Section \ref{comp.1}. We use the estimates $\mathbb E\left(\partial P^j(\{Z_i\})/\partial w_{l,j}|\bn,\bw\right).$ For example, the top row of Figure \ref{dist_pw} summarizes the posterior distributions of $\partial P^j(\{Z_i\})/\partial w_{1,j}$, $j=1,\ldots,300$, for three species. Details on the figure, including a description of the simulated data that generated the panels, are provided in Section \ref{deriv.trend}. For species 1, the estimates of the derivatives are positive for the majority of the samples and tend to be large when $w_{1,j}>0$. We also note that the estimates of $\partial P^j(\{Z_i\})/\partial w_{l,j}$ are larger for samples in the subgroup $w_{2,j}=1$ than in the subgroup $w_{2,j}=0$. These results indicate that, for any $j=1,\ldots, 300$, if we could increase (decrease) the value of $w_{1,j}$ while holding $w_{2,j}$ fixed, then we would expect an increase (decrease) of the relative abundances of species 1, and this trend appears more pronounced in those samples with $w_{1,j}>0$ and $w_{2,j}=1$.
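In practice the derivatives can be evaluated by central finite differences for each posterior draw. A standalone sketch with made-up stand-ins for one posterior draw; note that the derivatives of a composition always sum to zero, a useful numerical check:

```python
import numpy as np

rng = np.random.default_rng(6)
I, L = 20, 2
# Made-up stand-ins for one posterior draw, for a single sample j.
sigma = rng.beta(1.0, 1.0, size=I)
rand_eff = rng.standard_normal(I)          # <X_i, Y_j>
v = rng.standard_normal((L, I))            # covariate effects v_i
eps = rng.standard_normal(I)               # epsilon_{i,j}, held fixed as w varies

def composition(w):
    Q = rand_eff + v.T @ w + eps
    wgt = sigma * np.clip(Q, 0.0, None) ** 2
    return wgt / wgt.sum()

w0, h, l = np.array([0.3, 1.0]), 1e-6, 0
w_plus, w_minus = w0.copy(), w0.copy()
w_plus[l] += h; w_minus[l] -= h
dP = (composition(w_plus) - composition(w_minus)) / (2 * h)   # central difference
print(abs(dP.sum()) < 1e-6)   # derivatives of a composition sum to zero
```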
We also define
$$
P^j(\{Z_i\};\bw_0) := \frac{\sigma_i\left(\langle\bX_i,\bY_j\rangle + \langle\bv_i,\bw_0\rangle + \epsilon_{i,j}\right)_+^{2}}{\sum_{i'}\sigma_{i'}\left(\langle\bX_{i'},\bY_j\rangle + \langle\bv_{i'},\bw_0\rangle + \epsilon_{i',j}\right)_+^{2}};
$$
it is the abundance of species $i$ if the covariates values of sample $j$ could be enforced to be equal to $\bw_0$.
When estimating the effect of a binary covariate $w_{l,j}\in\{0,1\}$ on microbial compositions, we replace derivatives by differences:
\begin{equation}
\begin{gathered}
\frac{\Delta P^j(\{Z_i\})}{\Delta w_{l,j}}
:= P^j(\{Z_i\};\bw_{l,j}^{1})-P^j(\{Z_i\};\bw_{l,j}^{0}),
\end{gathered}
\label{discrete.effect}
\end{equation}
where $\bw_{l,j}^1$ is identical to $\bw_j$ except that the $l$-th component $w_{l,j}$ is set to one, and symmetrically $\bw_{l,j}^0$ has $w_{l,j}$ set to zero.
Therefore $\Delta P^j(\{Z_i\})/\Delta w_{l,j}$ is the variation of $P^j(\{Z_i\})$
that one would observe by changing the value of a binary covariate.
We also consider the population-level associations between microbial compositions and a specific covariate, say the $l$-th covariate, when adjusting for all other covariates. To this end, we first define $\Pbar(\{Z_i\};\bw_0)$, the \textit{population average abundance} of species $i$ at a covariate value $\bw_0$, by
$$
\Pbar(\{Z_i\};\bw_0) := \frac{1}{J}\left(\sum_{j=1}^J P^j(\{Z_i\};\bw_0)\right),
$$
which quantifies the average abundance of species $i$ when all $J$ samples in the study have the same hypothetical covariates values $\bw_0$. We estimate $\Pbar(\{Z_i\};\bw_0)$ from the data with $\mathbb E\left(\Pbar(\{Z_i\};\bw_0)|\bn,\bw\right)$.
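The population average is computed by forcing every sample to the same covariate value $\bw_0$ and averaging the resulting compositions. A standalone sketch for one posterior draw (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
I, J, L = 15, 40, 2
sigma = rng.beta(1.0, 1.0, size=I)
rand_eff = rng.standard_normal((I, J))     # <X_i, Y_j> for every sample
v = rng.standard_normal((L, I))            # covariate effects v_i
eps = rng.standard_normal((I, J))          # error draws

def P_at(w0):
    """Columns are P^j(.; w0): every sample forced to covariate value w0."""
    Q = rand_eff + (v.T @ w0)[:, None] + eps
    wgt = sigma[:, None] * np.clip(Q, 0.0, None) ** 2
    return wgt / wgt.sum(axis=0, keepdims=True)

w0 = np.array([0.0, 1.0])
P_bar = P_at(w0).mean(axis=1)              # population average abundance
print(np.isclose(P_bar.sum(), 1.0))        # averaging compositions preserves the sum
```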
To illustrate the association between the abundance of species $i$ and the $l$-th covariate, we visualize the variation of $\Pbar(\{Z_i\};\bw_0)$ as $w_{l,0}$
(the $l$th entry of $\bw_0$) varies and all other covariates remain fixed at $\bw_{-l,0}.$
This visualization is obtained by plotting the estimated $\Pbar(\{Z_i\};\bw_0)$ against $w_{l,0}$.
We call the resulting curve the \textit{population trend} of species $i$ with respect to the $l$-th covariate at $\bw_{-l,0}$.
In Figure \ref{dist_pw}, bottom row, we illustrate population trends of three species with respect to the first covariate at $w_{2,0}=0$ and at $w_{2,0}=1$.
Interaction terms for pairs of covariates, and more generally functions of the covariates, can be included in the proposed model. We specify a function $\mathbf f:\mathbb R^L\to\mathbb R^{L'}$
for the interaction terms; one example is $\mathbf f(\bw_j)=w_{1,j}w_{2,j}$ with $L'=1$. The definition of $Q_{i,j}$ in (\ref{expand.model}) when interactions are incorporated becomes
$$
Q_{i,j} = \langle\bX_i,\bY_j\rangle + \langle\bv_i,(\bw_j,\mathbf f(\bw_j))\rangle + \epsilon_{i,j},
$$
where $\bv_i\in\mathbb R^{L+L'}$. In this case a variation of the $l$-th coordinate of $\bw_j$ also affects $\mathbf f(\bw_j)$, and its effect on the composition is still summarized by $\partial P^j(\{Z_i\})/\partial w_{l,j}$ or $\Delta P^j(\{Z_i\})/\Delta w_{l,j}$.
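Constructing the augmented design with an interaction term is a one-line operation; a minimal sketch with made-up covariate values:

```python
import numpy as np

# Augmenting the covariates with an interaction f(w_j) = w_1j * w_2j (L' = 1).
w = np.array([[0.5, -1.0, 2.0],
              [1.0,  1.0, 0.0]])           # L x J covariate matrix, L = 2, J = 3
f_w = (w[0] * w[1])[None, :]               # interaction row: elementwise products
w_aug = np.vstack([w, f_w])                # (L + L') x J design entering Q_{i,j}
print(w_aug.shape)                         # -> (3, 3)
```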
\section{Simulation study}
\label{sec:4}
In this section we focus on the model introduced in Section \ref{dp.fixed}, and we illustrate that the model parameters can be transformed into interpretable results on the relationship between covariates and microbial compositions. We also provide a comparison between our model and a recently published latent factor model, MIMIX \citep{MIMIX}, which uses the logistic-normal distribution to link covariates to the relative abundances of species. We illustrate in simulation scenarios that the proposed model performs similarly to the logistic-normal model even when the data are generated from MIMIX. When the degree of zero-inflation is large, our model tends to outperform MIMIX regression regardless of the underlying data-generating model. The code for replicating the simulation studies is available from the GitHub repository \url{https://github.com/boyuren158/DirFactor-fix}.
In our simulation study we include $I=100$ species and $J=300$ samples. The 300 samples are taken from $U=50$ individuals (see Section \ref{sub.spec.model}), each measured six times. The read depth of each sample is $n_j=10^{5}$. We simulate $\bsigma$ using independent Beta densities with mean $0.2$ and variance $0.1$. As discussed in Section \ref{prev.model}, $\sigma_i$ represents the average abundance of species $i$ across all samples. We include in the simulation a continuous covariate $w_{1,j}$, generated from independent $\Nsc(0,1)$ distributions, and a binary covariate $w_{2,j}$, generated from independent $\text{Bernoulli}(0.5)$ distributions. We also use the interaction term $w_{1,j}\times w_{2,j}$ to specify scenarios where the effects of $w_{1,j}$ differ between the groups $w_{2,j}=0$ and $w_{2,j}=1$. We discuss this type of interaction later, in Section \ref{application}, in a microbiome study for type 1 diabetes.
For the latent factors $\bY$ we assumed $\bY_u\in\mathbb R^4$. For the first half of the individuals, $u=1,\ldots,25$, we set $Y_{3,u}=Y_{4,u}=0$ while for the other half, $u=26,\ldots,50$, we set symmetrically $Y_{1,u}=Y_{2,u}=0$. The non-zero components in $\bY_u$ were simulated independently from a $\Nsc(0,1)$ density. This specification of $\bY$ makes the correlation matrix $\bS$ block diagonal (see Figure \ref{control.corr}(b)).
We simulate the first eight species with positive $v_{1,i}$'s and the next eight species ($i=9,\ldots,16$) with negative $v_{1,i}$'s. As detailed in Table \ref{v.spec}, the abundances of the first 16 species correlate with $w_{2,j}$. Moreover, we assume that some of the trends with respect to $w_{1,j}$ are either amplified or reversed when we contrast the two groups $w_{2,j}=1$ and $w_{2,j}=0$. All other species ($i>16$) have the corresponding $\bv_i$ coefficients equal to $\bzero$ (Table \ref{v.spec}).
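The covariate part of this design can be reproduced in a few lines; the sketch below is illustrative (variable names are ours), with the Beta parameters solved from the stated mean and variance.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, U = 100, 300, 50               # species, samples, individuals

# Beta parameters matching mean 0.2 and variance 0.1:
# with s = mean*(1 - mean)/var - 1, alpha = mean*s, beta = (1 - mean)*s.
mean, var = 0.2, 0.1
s = mean * (1 - mean) / var - 1
sigma = rng.beta(mean * s, (1 - mean) * s, size=I)

w1 = rng.normal(0.0, 1.0, size=J)    # continuous covariate
w2 = rng.binomial(1, 0.5, size=J)    # binary covariate
W = np.column_stack([w1, w2, w1 * w2])   # design with interaction term
```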
\begin{table}
\centering
\small
\caption{Specification of $\bv$ in the simulation study.}
$$
\left[\begin{tabular}{c|ccccccccccccccccccc}
Species $(i)$&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15&16&17&\ldots&100\\
\hline
$v_{1,i}$(for $w_{1,j}$)&5&5&5&5&5&5&5&5&-5&-5&-5&-5&-5&-5&-5&-5&0&\ldots&0\\
$v_{2,i}$(for $w_{2,j}$)&5&5&5&5&-5&-5&-5&-5&5&5&5&5&-5&-5&-5&-5&0&\ldots&0\\
$v_{3,i}$(for $w_{1,j}\cdot w_{2,j}$)&10&-5&-5&-10&10&-5&-5&-10&-10&5&5&10&-10&5&5&10&0&\ldots&0
\end{tabular}\right]
$$
\label{v.spec}
\end{table}
We further examine the robustness of our method by checking its performance when the link function between $P^j$ and $Q_{i,j}$ is misspecified. In particular, we apply our method to data simulated using the following specification of $(P^j(\{Z_i\});i\leq I, j\leq J)$:
\begin{equation}
P^j(\{Z_i\}) = \frac{\sigma_i Q_{i,j}^{+}}{\sum_{i'}\sigma_{i'} Q_{i',j}^{+}}.
\label{expand.model.mis}
\end{equation}
The specification of $\bsigma$, $\bv$, $\bY$ and $\bw$ remains the same as described in the previous paragraphs.
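A sketch of the misspecified link (\ref{expand.model.mis}), assuming the linear predictors are stored as an $I\times J$ matrix and every sample has at least one species with a positive predictor:

```python
import numpy as np

def misspecified_link(sigma, Q):
    """Relative abundances sigma_i * max(Q_{i,j}, 0), normalized over
    species.  Assumes each sample (column of Q) has at least one
    species with a positive linear predictor."""
    weights = sigma[:, None] * np.clip(Q, 0.0, None)
    return weights / weights.sum(axis=0)
```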
\subsection{Estimating species and samples parameters $\bv$ and $\bS$}
\label{bv.bS.sec}
We first consider estimation of $\bv$ and $\bS$ between individuals when the model is correctly specified. Recall from Proposition 1 in the Supplementary Material that $\bv$ is identifiable when $\text{trace}(\bSigma)$ is fixed at a constant value. We assume $\text{trace}(\bSigma)=1$ and compute the posterior distribution of $\bv/\sqrt{\text{trace}(\bSigma)}$.
The performance of the estimate of $\bS$ is measured by the RV-coefficient \citep{rvcoef} between the estimated $\bS$ and the actual value of $\bS$; this coefficient is bounded between zero and one. An RV-coefficient close to one indicates that the estimate is close to the parameter $\bS$ used in simulations.
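For symmetric positive semi-definite matrices, the RV-coefficient is a normalized trace inner product; a minimal sketch:

```python
import numpy as np

def rv_coefficient(A, B):
    """RV-coefficient between two symmetric PSD matrices A and B:
    trace(A B) / sqrt(trace(A A) * trace(B B)).  Equals one when the
    matrices are proportional; bounded between zero and one."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    num = np.trace(A @ B)
    den = np.sqrt(np.trace(A @ A) * np.trace(B @ B))
    return num / den
```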
In Figure \ref{control.corr}(a) we illustrate the estimates of $\bv_i$, $i=1,\ldots,16$, in one simulation. The posterior means of $\bv_i$ for the first 16 species are in general close to the corresponding values of the simulation scenario.
One exception is species 16, whose average relative abundance is the lowest ($8.1\times 10^{-5}$) among the first 16 species.
In the left panel of Figure \ref{control.corr}(b), we illustrate the posterior mean of $\bS$ between individuals in one simulation. The estimate is close to the actual value of $\bS$, with an RV coefficient between them equal to 0.98, although the estimate indicates a weak correlation between the two independent subgroups (subjects 1-25 and subjects 26-50).
When the model is misspecified (see equation (\ref{expand.model.mis})), the estimates of $\bv$ are not comparable to the corresponding values of the simulation scenarios. However, this result does not discourage the application of model (\ref{expand.model}) when estimating effects of covariates on microbial compositions. The model can still capture the derivatives and population trends (see Section \ref{deriv.trend}), which directly describe the covariates' effects. The estimate of $\bS$, on the other hand, is only minimally affected by model misspecification and preserves its closeness to the actual value of $\bS$ (Figure \ref{control.corr}(b), right panel). The RV coefficient between the estimate and the actual value of $\bS$ is 0.96 in this case.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.645]{p2f1_new.pdf}
\caption{Estimates of $\bv_i$, $i=1,\ldots,16$, and $\bS$ between individuals. (\textbf{a}) Posterior distributions of $\bv_i/\sqrt{\text{trace}(\bSigma)}$, $i=1,\ldots,16$. The posterior distributions are visualized by boxplots. The corresponding values of $\bv_i/\sqrt{\text{trace}(\bSigma)}$ used for data simulation are indicated by dots. (\textbf{b}) Posterior mean of the correlation matrix $\bS$ between individuals (values above the main diagonal) compared to the truth (values below the main diagonal) in one simulation when the model is correctly specified (\textbf{Left}) and misspecified (\textbf{Right}).}
\label{control.corr}
\end{figure}
We then repeat the simulation 50 times under the correctly specified model as well as under the misspecified model to verify the observed accuracy levels. We fix the values of $\bsigma$ across all simulation replicates. When the model is correctly specified, the mean squared errors (MSEs) between the posterior means of the rescaled $\bv_i$ coefficients and the values of the simulation scenario across the 50 simulation replicates are comparable across species. The smallest average MSE across the 50 replicates is $4.3\times 10^{-6}$ for species 18 (the standard deviation of the estimate is $5.8\times 10^{-6}$) and the largest average MSE is $5.2\times 10^{-3}$ for species 14 (the standard deviation of the estimate is $2.3\times 10^{-2}$). The RV coefficients between the posterior means of $\bS$ and the actual value of $\bS$ across the 50 replicates are close to one, whether the model is correctly specified or not. When the model is correctly specified, the mean and the standard deviation of the RV-coefficients are 0.964 and 0.009. When the model is misspecified, the mean and the standard deviation are 0.960 and 0.012. We diagnose the mixing of the MCMC sampler for our model with $\hat R$ statistics \citep{Rhat}. The $\hat R$ statistics indicate that, when the model is correctly specified, mixing is reached for the rescaled parameters $v_{l,i}$ and the eigenvalues of $\bS$ after 60,000 iterations. See Section S2 of the Supplementary Material for details.
The Bayesian model can be embedded into a permutation procedure to detect whether a covariate $w_{l,j}$ is associated with the microbial composition or not. The null and alternative hypotheses that we consider are
$
H_0: \bv_{l}=\bzero_I\;\;\text{vs.}\;\;H_A: \bv_l\neq \bzero_I,
$
where $\bzero_I$ is a vector of zeros.
We permute covariate values $w_{l,j}$ across samples and estimate, under $H_0$, the distribution of $\|\hat\bv_l\|$, where $\hat\bv_l$ is the posterior mean of $\bv_l$.
Permutation is one possible approach to estimate the $\|\hat\bv_l\|$ distribution under $H_0$, which is applicable if covariates are independent or nearly independent.
We finally compare the actual $\|\hat\bv_l\|$ value, with the estimated distribution (see Section S3 of the Supplementary Material for an example).
One could apply other approaches to generate in silico datasets under $H_0$. For example, the parametric bootstrap can replace the observed $w_{l,j}$ values with samples from estimates of the conditional distributions $p(w_{l,j}|w_{-l,j})$, where $w_{-l,j}$ indicates the values of all covariates except $w_{l,j}$.
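The permutation procedure can be sketched as follows; here \texttt{fit\_posterior\_mean\_norm} is a hypothetical stand-in for the full model refit that returns $\|\hat\bv_l\|$, which in practice is the expensive MCMC step.

```python
import numpy as np

def permutation_null(w_l, fit_posterior_mean_norm, n_perm=200, seed=0):
    """Null distribution of ||hat v_l|| obtained by refitting the model
    with the values of covariate l permuted across samples.
    `fit_posterior_mean_norm` is a hypothetical stand-in for the full
    model fit returning the norm of the posterior mean of v_l."""
    rng = np.random.default_rng(seed)
    null_norms = np.empty(n_perm)
    for b in range(n_perm):
        null_norms[b] = fit_posterior_mean_norm(rng.permutation(w_l))
    return null_norms

def permutation_pvalue(observed_norm, null_norms):
    # One-sided p-value with the usual +1 correction.
    return (1 + np.sum(null_norms >= observed_norm)) / (1 + len(null_norms))
```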
\subsection{Visualizing the relationship between covariates and microbial compositions}
\label{deriv.trend}
As we mentioned in Section \ref{dp.fixed}, the values of $\bv$ do not directly express the sign and the magnitude of the covariates' effects on microbial compositions. Recall for example that a positive $v_{l,i}$ might correspond to a decreasing trend with respect to the covariate $w_{l,j}$.
This can happen when the $v_{l,i'}$ of another species $i'$ is larger than $v_{l,i}$.
The goal of this subsection is to evaluate if we can estimate responses of species abundances to variations of covariates of interest.
We consider the visualization approaches described in Section \ref{comp.2}. We first focus on the estimates of the derivatives $\partial P^j(\{Z_i\})/\partial w_{l,j}$.
These provide, for each individual sample, estimates of the variation in microbial abundance resulting from an infinitesimal increment of a specific covariate $w_{l,j}$, while the other covariates remain fixed.
The results for three representative species are summarized in the top panels of Figure \ref{dist_pw}.
The X-axes indicate the value of $w_{l,j}$ and the Y-axes the value of $\partial P^j(\{Z_i\})/\partial w_{l,j}$.
Each solid curve in these figures is generated
by computing the posterior means of $\partial P^j(\{Z_i\})/\partial w_{l,j}$, for each sample $j$, which then become the input of a LOWESS algorithm.
We also calculate the actual values of $\partial P^j(\{Z_i\})/\partial w_{l,j}$ using the $\bsigma,\bX,\bY$ and $\bv$ parameters that generated the data.
The actual values of the partial derivatives are visualized with dashed lines.
In Section S4 of the Supplementary Material, we also plot the posterior mean of $\partial P^j(\{Z_i\})/\partial w_{l,j}$ versus the actual value of $\partial P^j(\{Z_i\})/\partial w_{l,j}$ for each sample $j$, along with the 95\% credible intervals for $\partial P^j(\{Z_i\})/\partial w_{l,j}$.
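As an illustration of the smoothing step, the following minimal tricube-weighted local-linear smoother can stand in for the LOWESS procedure (it omits the robustness iterations of full LOWESS; names are ours):

```python
import numpy as np

def lowess_curve(x, y, frac=0.4):
    """Minimal tricube-weighted local-linear smoother.  For each point
    x0, fit a weighted line using the nearest frac*n neighbours and
    return the fitted value at x0 (no robustness iterations)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(2, int(frac * n))
    fitted = np.empty(n)
    for i, x0 in enumerate(x):
        d = np.abs(x - x0)
        idx = np.argsort(d)[:k]
        h = d[idx].max()
        h = h if h > 0 else 1.0
        w = (1.0 - (d[idx] / h) ** 3) ** 3      # tricube weights
        sw = np.sqrt(w)
        X = np.column_stack([np.ones(k), x[idx] - x0])
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y[idx] * sw, rcond=None)
        fitted[i] = beta[0]
    return fitted
```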
We then focus on the population level estimates of covariates' effects by visualizing the population trend of species $i$ with respect to a given covariate (see Section \ref{comp.2}). Population trends of three representative species with respect to $w_{1,0}$ at different values of $w_{2,0}$ are summarized in Figure \ref{dist_pw}, bottom panels.
The X-axes indicate the value of $w_{1,0}$ and the Y-axes the population average abundance $\Pbar(\{Z_i\};\bw_0).$ The shaded areas indicate the pointwise 95\% credible bands of population trends.
\begin{figure}
\centering
\includegraphics[scale=0.4]{deriv_correct.pdf}
\includegraphics[scale=0.4]{poptrend.pdf}
\caption{Posterior estimates of individual-level and population-level relationship between covariate $l=1$ and relative abundances when the model is correctly specified.
(\textbf{Left}) Increasing trend for the group $w_{2,j}=0$ and the group $w_{2,j}=1$. (\textbf{Middle}) Increasing trend for the group $w_{2,j}=0$ and non-monotone trend for the group $w_{2,j}=1$. (\textbf{Right}) Decreasing trend for the group $w_{2,j}=0$ and the group $w_{2,j}=1$.
Each curve in the \textbf{top} panels is generated
by computing the individual posterior estimates of $\partial P^j(\{Z_i\})/\partial w_{l,j}$, for each sample $j$, which then become the input of a LOWESS procedure.
The \textbf{bottom} panels illustrate the posterior distribution of the population trends.}
\label{dist_pw}
\end{figure}
When the model is misspecified, the comparisons of estimated derivatives and population trends to the truth are shown in Figure \ref{dist_pw_mis}.
To compute the actual derivatives and population trends, we use the specification of $P^j(\{Z_i\})$ in (\ref{expand.model.mis}). From the top panels of Figure \ref{dist_pw_mis}, we observe that the estimates of the derivatives capture the sign of the actual values. However, the estimates are not as close to the actual values of the derivatives as in the case where the model is correctly specified.
This result is expected, as we erroneously assume that $P^{j}(\{Z_i\})$ depends on $(Q_{i,j})_+^{2}$ instead of $(Q_{i,j})_+$. The bottom panels of Figure \ref{dist_pw_mis} illustrate that the estimated population trends follow the actual trends, but the posterior credible bands do not always cover the truth, unlike in the previous example where the model is correctly specified.
\begin{figure}
\centering
\includegraphics[scale=0.4]{deriv_mis.pdf}
\includegraphics[scale=0.4]{poptrend_mis.pdf}
\caption{Posterior estimates of individual-level and population-level relationship between covariate $l=1$ and relative abundances when the model is misspecified. }
\label{dist_pw_mis}
\end{figure}
We repeat the simulation 50 times under the correctly specified model as well as under the misspecified model.
For each species $i$, we compute the MSE between the posterior means of the derivatives $(\partial P^j(\{Z_i\})/\partial w_{l,j};j\leq J)$ and the corresponding values under our simulation model.
In Supplementary Figure S5.1 (top panel), we plot the distributions of MSEs across simulation replicates.
This figure confirms the results in Figure \ref{dist_pw} and Figure \ref{dist_pw_mis}.
The estimates of derivatives in the correctly specified model are closer to the truth compared to the estimates with the misspecified model.
For both, correctly specified and misspecified models,
the mean MSE across replicates reaches its maximum for species 9, with value $8.7\times 10^{-4}$ under the correctly specified model and $3.8\times 10^{-2}$ under the misspecified model.
We then consider the estimates of population trends in the 50 replicates. In the middle and bottom panels of Supplementary Figure S5.1, we illustrate the estimated population trends of three species when the model is correctly specified and misspecified. When the model is misspecified, the overall shape of each band still mirrors the actual trend, but the credible band does not cover the actual trend in a few intervals of $w_{1,0}$.
\subsection{The Logistic-normal model}
\label{dir.mimix.comp}
We conclude the simulation study with a comparison of our model (referred to as DirFactor) to MIMIX \citep{MIMIX}, a logistic-normal model with latent factors. MIMIX employs a low-dimensional latent structure that is shared by both the fixed effects and the random effects to highlight the relationships between microbial species. The major difference between MIMIX and our model lies in the distribution assumption for $P^j$. In MIMIX, the distribution of $P^j$ follows a logistic-normal distribution:
\begin{equation}
P^j(\{Z_i\}) = \frac{\exp(Q_{i,j})}{\sum_{i'}\exp(Q_{i',j})}.
\label{MIMIX.link}
\end{equation}
A characteristic of this specification is that the relative abundances of species are strictly positive, so the model is not tailored to zero-inflated microbiome data. By contrast, our specification of $P^j$ assigns positive probability to zero abundances, which means that our model allows for explicit zero-inflation. In this subsection, we are interested in comparing the estimation performance of our model to that of MIMIX.
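The link (\ref{MIMIX.link}) is a softmax over species; a numerically stabilized sketch (names are ours):

```python
import numpy as np

def mimix_link(Q):
    """Logistic-normal (softmax) link over species: columns of the
    returned I x J matrix sum to one and every entry is strictly
    positive, so exact zeros are impossible under this link."""
    expQ = np.exp(Q - Q.max(axis=0))   # stabilize before exponentiating
    return expQ / expQ.sum(axis=0)
```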
We focus on the accuracy of the estimated population trends for the continuous covariates $w_{1,j}$. The accuracy is evaluated in two aspects: root mean-squared errors (RMSE) of the estimated population trends and coverages of the estimated credible bands of the population trends. The first metric is a universal summary of the bias and variance of the estimates while the second metric is used to evaluate the reported uncertainty on the estimates. We generate two sets of simulation datasets. The first set of data is generated using the link function (\ref{old.model}) in our model whereas the second set uses the link function (\ref{MIMIX.link}) in MIMIX. The specifications of $\bv, \bw, \bX, \bY$ are the same for both sets and are described at the beginning of Section \ref{sec:4}.
For each simulated dataset, we impose additional zero-inflation via hard truncation of $P^j$ at $10^{-3}$ and $10^{-2}$. The larger the threshold, the higher the degree of zero-inflation introduced in the simulated dataset. We also examine the effect of overdispersion. Specifically, for fixed $\bv, \bw, \bX, \bY$, we generate three datasets with $\text{var}(\epsilon_{i,j})=1$, $\text{var}(\epsilon_{i,j})=5$ and $\text{var}(\epsilon_{i,j})=10$ to represent low, medium and high overdispersion. We finally consider the effect of overdispersion in the distribution of the read depths $n_j$. Once the relative abundances $(P^j(\{Z_i\});j\leq J, i\leq I)$ are simulated, we generate the OTU counts with three different distributions of $n_j$: a Poisson distribution with mean $10^5$, a negative binomial distribution with mean $10^5$ and variance $10^9$ (moderate overdispersion) and a negative binomial distribution with mean $10^5$ and variance $4\times 10^{10}$ (large overdispersion).
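The read-depth distributions can be matched to the stated means and variances by choosing the negative binomial size parameter as $r=\text{mean}^2/(\text{variance}-\text{mean})$; an illustrative sketch (names are ours):

```python
import numpy as np

def read_depths(J, mean, variance, rng):
    """Sample J read depths n_j.  When variance <= mean we fall back
    to a Poisson; otherwise a negative binomial with size
    r = mean^2 / (variance - mean) and p = r / (r + mean) matches the
    requested mean and variance exactly."""
    if variance <= mean:
        return rng.poisson(mean, size=J)
    r = mean ** 2 / (variance - mean)
    p = r / (r + mean)
    return rng.negative_binomial(r, p, size=J)
```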
We use a B-spline basis of $w_{1,j}$ both when we produce inference based on our model and based on MIMIX in the simulation study (i.e.\ we do not directly incorporate the $w_{1,j}$ values within the models). This adds flexibility in the relation between covariates and microbial compositions. We recommend the use of splines or other transformations when the number of covariates is considerably lower than the number of samples, as in our simulation study. The B-spline basis we use is of degree three with internal knots at $-1$, $0$ and $1$ and two boundary knots at $-2$ and $2$. We simulate 50 instances of $\bv,\bw,\bX$ and $\bY$. For each simulation replicate of $\bv,\bw,\bX$ and $\bY$, we generate datasets based on combinations of the two link functions (\ref{old.model}) and (\ref{MIMIX.link}), three truncation levels, three overdispersion levels and three distributions of $n_j$.
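For concreteness, the design matrix of this basis can be built directly from the Cox-de Boor recursion; the sketch below reproduces the stated knot layout (degree three, internal knots at $-1$, $0$, $1$, boundary knots at $-2$ and $2$ repeated degree+1 times) and is meant as an illustration, not the implementation used for the analyses.

```python
import numpy as np

def bspline_basis(x, degree=3, internal_knots=(-1.0, 0.0, 1.0),
                  boundary=(-2.0, 2.0)):
    """B-spline design matrix via the Cox-de Boor recursion.  Boundary
    knots are repeated degree+1 times, giving
    len(internal_knots) + degree + 1 basis columns."""
    x = np.asarray(x, float)
    t = np.r_[[boundary[0]] * (degree + 1), internal_knots,
              [boundary[1]] * (degree + 1)]
    # degree-0 bases: indicators of half-open knot intervals
    B = np.array([(x >= t[i]) & (x < t[i + 1])
                  for i in range(len(t) - 1)], float).T
    for d in range(1, degree + 1):
        nxt = np.zeros((len(x), B.shape[1] - 1))
        for i in range(B.shape[1] - 1):
            left = t[i + d] - t[i]
            right = t[i + d + 1] - t[i + 1]
            if left > 0:
                nxt[:, i] += (x - t[i]) / left * B[:, i]
            if right > 0:
                nxt[:, i] += (t[i + d + 1] - x) / right * B[:, i + 1]
        B = nxt
    return B
```

With this layout the basis has $3+3+1=7$ columns and sums to one at every interior point (partition of unity).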
In each simulation replicate, we estimate the population average abundance (see Section \ref{comp.2} for its definition) of each species at 20 values of $w_{1,0}$ equally spaced between -2 and 2. We report the RMSE between the resulting vector of estimates and the simulation scenario parameters, averaged across all species, the 50 simulation replicates and the two values of $w_{2,0}$. We also report the coverage of the 95\% credible intervals of the population trends for $w_{1,0}\in(-2,2)$ in the 50 replicates, averaged across all species and the two values $w_{2,0}=0,1$.
For $n_j$ generated from the Poisson distribution, the RMSEs are shown in Table \ref{sim.compare.mse} and the coverage probabilities are included in Table \ref{sim.compare.ci}. For the other two distributions of the $n_j$ counts we illustrate the results in Section S6 of the Supplementary Material.
\begin{table}
\centering
\begin{tabular}{c|ccc|ccc|ccc|ccc|}
& \multicolumn{6}{c|}{Simulated from DirFactor} & \multicolumn{6}{c|}{Simulated from MIMIX} \bigstrut[b]\\
\cline{2-13} & \multicolumn{3}{c|}{DirFactor} & \multicolumn{3}{c|}{MIMIX} & \multicolumn{3}{c|}{DirFactor} & \multicolumn{3}{c|}{MIMIX} \bigstrut\\
\hline
Threshold & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ \bigstrut[t]\\
\hline
$\var(\epsilon_{i,j})=1$ &1.0&1.2&1.4&15.3&24.3&44.8&1.3&2.5&3.8&1.3&1.5&1.8\\
$\var(\epsilon_{i,j})=5$ &3.5&3.5&3.4&35.4&65.9&74.1&5.4&7.9&8.8&1.4&2.9&4.1\\
$\var(\epsilon_{i,j})=10$ &4.5&4.6&4.7&66.0&96.6&154.3&10.6&11.3&11.7&1.7&3.0&4.7\bigstrut[b]\\
\hline
\end{tabular}%
\caption{Average RMSE of estimated population mean abundances at 20 different values of $w_{1,0}$ equally spaced between -2 and 2 across simulation replicates for our model (DirFactor) and MIMIX. The threshold parameter indicates at which value we truncate the simulated $P^j(\{Z_i\})$'s to zero. We consider two scenarios where the data is generated from DirFactor and MIMIX respectively. The read depths are generated from a Poisson distribution with mean $10^5$. All RMSEs in the table are multiplied by $10^3$.}
\label{sim.compare.mse}
\end{table}
\begin{table}
\centering
\begin{tabular}{c|ccc|ccc|ccc|ccc|}
& \multicolumn{6}{c|}{Simulated from DirFactor} & \multicolumn{6}{c|}{Simulated from MIMIX} \bigstrut[b]\\
\cline{2-13} & \multicolumn{3}{c|}{DirFactor} & \multicolumn{3}{c|}{MIMIX} & \multicolumn{3}{c|}{DirFactor} & \multicolumn{3}{c|}{MIMIX} \bigstrut\\
\hline
Threshold & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ \bigstrut[t]\\
\hline
$\var(\epsilon_{i,j})=1$ &0.99&0.97&0.95&0.92&0.86&0.80&0.95&0.89&0.89&1.00&0.93&0.89\\
$\var(\epsilon_{i,j})=5$ &0.97&0.97&0.95&0.94&0.87&0.80&0.96&0.91&0.90&1.00&0.95&0.90\\
$\var(\epsilon_{i,j})=10$ &0.94&0.91&0.90&0.94&0.86&0.77&0.90&0.88&0.83&0.94&0.90&0.84\bigstrut[b]\\
\hline
\end{tabular}
\caption{Coverage of the posterior distribution of the population trend (defined in Section \ref{comp.2}). We average across species, across $w_{1,0}$ values between -2 and 2, and across $w_{2,0}=0$ and $w_{2,0}=1$. The threshold parameter indicates at which value we truncate the simulated $P^j(\{Z_i\})$'s to zero. The coverage is calculated using simulation replicates. The read depths are generated from a Poisson distribution with mean $10^5$. We consider two scenarios where the data is generated from DirFactor and MIMIX respectively.}
\label{sim.compare.ci}
\end{table}
From the results we find that the proposed DirFactor model shows little sensitivity to the degree of zero-inflation.
Setting $P^j(\{Z_i\})$ to zero when its value is below a given threshold does not affect accuracy. On the other hand, when the threshold for truncating $P^j(\{Z_i\})$ increases, the accuracy of MIMIX tends to decrease. The RMSE of MIMIX increases with this threshold parameter, regardless of the data-generating model, (\ref{old.model}) or (\ref{MIMIX.link}), and of the level of overdispersion $\text{var}(\epsilon_{i,j})$. In terms of coverage, the performance of the two models appears comparable even when $\text{var}(\epsilon_{i,j})$ is large. These findings are confirmed when the distribution of the $n_j$ counts is a negative binomial distribution.
However, prediction accuracy and coverage of both models decrease significantly when the overdispersion of the negative binomial distribution is large (mean $=10^5$ and variance $=4\times10^{10}$).
See Supplementary Tables S6.3 and S6.4 for details.
We conclude this subsection with a posterior predictive procedure to evaluate and compare Bayesian models.
For distinct Bayesian models, we compute leave-one-out 95\% posterior predictive intervals of the relative abundance ($P^j(\{Z_i\})$) of a species $i$ in sample $j$ using the available data, with sample $j$ excluded.
The predictive intervals are generated using Pareto smoothed importance sampling \citep{Vehtari1,Vehtari2}.
We calculate the predictive intervals for all samples and all species in the data. We then derive the proportion of samples whose observed relative abundances $n_{i,j}/n_j$ of species $i$ are covered by the corresponding leave-one-out predictive intervals. We define the mean coverage probability of the model by averaging these proportions across species. In Section S7 of the Supplementary Material, we illustrate the approach in the comparison of our Bayesian model and MIMIX \citep{MIMIX}. Limitations of leave-one-out cross-validation in terms of stability have been previously discussed (Kohavi, 1995); the use of the procedure in our work serves the main purpose of producing interpretable summaries that integrate our evaluations and comparisons of regression methods.
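The final coverage summary can be sketched as follows, assuming the leave-one-out predictive intervals have already been computed (the \texttt{lower} and \texttt{upper} arrays are hypothetical inputs, here standing in for the PSIS-based intervals):

```python
import numpy as np

def mean_coverage(counts, depths, lower, upper):
    """Proportion of samples whose observed relative abundance
    n_{i,j}/n_j lies inside the leave-one-out predictive interval
    [lower_{i,j}, upper_{i,j}], averaged across species.
    counts, lower, upper: I x J arrays; depths: length-J vector."""
    obs = np.asarray(counts, float) / np.asarray(depths, float)[None, :]
    inside = (obs >= lower) & (obs <= upper)
    return inside.mean(axis=1).mean()
```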
\section{Microbiome analyses for type 1 diabetes in early infancy}
\label{application}
We use the longitudinal model in Section \ref{sub.spec.model} to evaluate associations between gut microbiome compositions, clinical variables and demographic characteristics of infants in the DIABIMMUNE project \citep{tommi}.
The DIABIMMUNE project collected longitudinal microbiome data in 157 infants over a period up to 1600 days after birth.
Infants were enrolled from Finland, Estonia and Russia.
Dietary information has been collected from each participant.
The main goal of this project is to examine the relationship between type 1 diabetes (T1D) associated autoantibody seropositivity (seroconversion), an indicator of T1D onset, and the infants' gut microbiome.
In this project, seven out of 157 infants are seroconverted.
The dataset contains a total of 55 microbial genera and 762 samples from 157 infants.
A large collection of potential associations between relative abundances of microbial taxa and covariates has been previously discussed in \cite{tommi}.
Among these associations, the most significant ones link nationality and age to 44 microbial genera.
Due to the moderate sample size, only limited evidence of variations of the microbiome profile associated with seroconversion has been reported.
We present analyses based on the proposed Bayesian model. The set of covariates comprises nationality, age, seroconversion and the interaction between age and nationality. We want to verify the consistency of our posterior inference with the results discussed in the literature. Additionally, we want to quantify the uncertainty of the estimated relationship between seroconversion and microbial compositions in the human gut.
\subsection{Estimating the effects of age}
We estimate the effects of age on microbial compositions using the visualization approaches of Section \ref{comp.2}.
In the top panels of Figure \ref{bug.time}, we illustrate the estimated derivatives of microbial abundances with respect to age, $\partial P^j(\{Z_i\})/\partial w_{3,j}$, for two genera, Bifidobacterium and Bacteroides. We only plot the $\partial P^j(\{Z_i\})/\partial w_{3,j}$'s for 150 randomly selected samples for visual clarity. We show the 95\% credible intervals for derivatives with bars, and the sizes of points are proportional to the observed abundances.
In the bottom panels of Figure \ref{bug.time}, we plot the estimated population trends of the same genera with respect to age.
We consider the population trends for Estonian, Finnish and Russian infants and assume that the infants are not seroconverted.
Posterior credible bands for the population trends are visualized by shaded areas.
The observed abundances of Bifidobacterium and Bacteroides in all samples are illustrated by scatter plots together with the estimated population trends.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.55]{p2f3.pdf}
\caption[
Estimated relationship between age and species abundances in DIABIMMUNE dataset]{(\textbf{Top}) Estimated $\partial P^j(\{Z_i\})/\partial w_{3,j}$ for two genera. Each point represents a sample. Colors indicate nationalities and the sizes of points are proportional to the observed abundances.
The error bars indicate 95\% credible intervals. We only plot 150 randomly selected samples. (\textbf{Bottom}) Estimated population trends of Bifidobacterium and Bacteroides for Estonian, Finnish and Russian infants. The infants are assumed to be nonseroconverted. Curves represent the estimated population trends and the shaded areas illustrate pointwise 95\% credible bands. Points indicate the observed abundance of Bifidobacterium or Bacteroides in all samples. We use colors to indicate nationalities.}
\label{bug.time}
\end{figure}
The estimated derivatives with respect to age for Bifidobacterium are significantly smaller than zero in most of the samples, indicating that the abundances of Bifidobacterium in infants' gut microbiome tend to decrease with age. This is to some extent expected, since bacteria from this genus are associated with breastfeeding \citep{fanaro2003intestinal}. The results on derivatives are consistent with the estimated population trends. In all three populations (Estonian, Finnish and Russian), Bifidobacterium is estimated to have a decreasing population trend with respect to age. The trends for Finnish and Estonian infants are similar, while for Russian infants the decrease is faster for infants that are less than 600 days old.
The association between genus Bacteroides and age is less pronounced.
The derivatives of Bacteroides tend to be positive in samples taken before 300 days.
When the infants get older the derivatives become slightly negative in Estonian and Finnish infants but remain positive in the Russian group.
The population trends in this case are also consistent with the estimated derivatives.
For nonseroconverted Estonian and Finnish infants, the estimated population abundances of Bacteroides increase with age when the infants are less than 450 days old and start to decrease slowly afterward. In Russian infants, the initial increasing trend is more pronounced, with a narrower credible band than in the other two populations until 900 days. After 900 days, the population average abundance reaches a plateau and the credible band widens.
\subsection{Estimating effects of nationalities and seroconversion}
\label{nation.effect}
We make inference about the associations between the gut microbial compositions and nationalities using the differences $\Delta P^j(\{Z_i\})/\Delta w_{l,j}$ defined in (\ref{discrete.effect}).
For each sample, we estimate $\Delta P^j(\{Z_i\})/\Delta w_{1,j}$, which is the difference associated to the change of nationality from Finland (FIN) to Estonia (EST), as well as $\Delta P^j(\{Z_i\})/\Delta w_{2,j}$, the difference associated to the change from Finland (FIN) to Russia (RUS). We consider the averages of $\Delta P^j(\{Z_i\})/\Delta w_{1,j}$ and $\Delta P^j(\{Z_i\})/\Delta w_{2,j}$ in each of five consecutive age groups. The posterior distributions of these
population averages (Figure \ref{bug.c.s}) illustrate the effect of nationality.
In both panels of Figure \ref{bug.c.s}, the X-axis identifies age groups and the Y-axis indicates the value of $\Delta P^j(\{Z_i\})/\Delta w_{1,j}$ and $\Delta P^j(\{Z_i\})/\Delta w_{2,j}$. Each box-plot approximates, using posterior simulations, the posterior distribution of the average $\Delta P^j(\{Z_i\})/\Delta w_{l,j},l=1,2$.
These averages are computed over the samples within each age group.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.6]{p2f4_new.pdf}
\caption[Estimated relationship between country and species abundances in DIABIMMUNE dataset]{
Posterior distributions of the average difference $\Delta P^j(\{Z_i\})/\Delta w_{1,j}$ (red) and $\Delta P^j(\{Z_i\})/\Delta w_{2,j}$ (green) in five consecutive age groups. We plot the results for Bifidobacterium (\textbf{Left}) and Bacteroides (\textbf{Right}).}
\label{bug.c.s}
\end{figure}
There is an increase of Bifidobacterium abundance when we compare FIN to RUS nationalities. This increase diminishes with age.
In the last age group (670-1160 days), the posterior distribution of $\Delta P^j(\{Z_i\})/\Delta w_{2,j}$ indicates that the abundances of Bifidobacterium in samples collected from infants older than 670 days remain comparable across nationalities. In the second comparison of nationalities, FIN to EST, only minor changes of Bifidobacterium abundance levels are observed. The abundances of Bacteroides are smaller in RUS than in FIN samples. This difference again diminishes with age. The difference of Bacteroides abundances between EST and FIN is also minor.
We also explored associations between microbial compositions and seroconversion status. In this case we again examine the posterior distributions of average $\Delta P^j(\{Z_i\})/\Delta w_{4,j}$ in five consecutive age groups.
We do not find evidence in our analyses of any genus associated to seroconversion, due to high uncertainty of the estimated average $\Delta P^j(\{Z_i\})/\Delta w_{4,j}$ in all age groups.
\subsection{Similarities of microbial genera}
In this subsection, we focus on similarities between microbial genera.
We first consider the simple approach where the similarity of two genera is measured by the correlation between their observed relative abundances across all samples.
The result of this approach is a correlation matrix, denoted by $\bS_{\text{raw}}=(S_{\text{raw}}(i,i');i,i'\leq I)$, where $S_{\text{raw}}(i,i') = \text{cor}[(n_{i,j}/n_j;j\leq J),(n_{i',j}/n_j;j\leq J)]$. We then consider two approaches which utilize the proposed model.
The first approach uses the cosine of the angle between $\bv_i$ and $\bv_{i'}$ to quantify the similarity of genera $i$ and $i'$,
whereas the second approach uses the cosine of the angle between $\bX_i$ and $\bX_{i'}$. The results of these two approaches are normalized Gram matrices, denoted as $\bS_{\bv}$ and $\bS_{\bX}$ respectively. In the top panels of Figure \ref{bug.bug}, we illustrate the estimates of $\bS_{\text{raw}}$, $\bS_{\bv}$ and $\bS_{\bX}$ by heat-maps. Each row or column of the heat-map represents a specific genus and the color of each tile represents the estimated similarity of two genera.
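As a concrete sketch, the three similarity matrices could be computed along the following lines; the array names are hypothetical stand-ins for the read counts $n_{i,j}$, the read depths $n_j$, and the matrices whose rows are the $\bv_i$ or $\bX_i$ vectors, not objects defined in any package:

```python
import numpy as np

def correlation_similarity(counts, depths):
    """S_raw: correlations of relative abundances n_{i,j}/n_j across samples.

    counts: (I, J) array of read counts; depths: (J,) array of read depths."""
    rel = counts / depths              # relative abundances, one row per genus
    return np.corrcoef(rel)            # (I, I) correlation matrix

def cosine_gram(M):
    """Normalized Gram matrix: cosine of the angle between rows of M.

    The rows play the role of the vectors v_i (for S_v) or X_i (for S_X)."""
    U = M / np.linalg.norm(M, axis=1, keepdims=True)
    return U @ U.T
```

Both `cosine_gram` outputs are symmetric with unit diagonal, so they can be compared directly to the correlation matrix in a common heat-map color scale.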
We then examine the concordance of $\bS_{\text{raw}}$, $\bS_{\bv}$ and $\bS_{\bX}$ with the phylogenetic relations of the observed genera. To this end, we compare the phylogenetic tree of the observed genera published in \cite{segata2013phylophlan} to the heat-maps. If an estimated correlation matrix indicates clusters of genera that share similarities with the phylogenetic tree, then we conclude that the estimate is consistent with phylogenetic relations.
The heat-maps show that $\bS_{\text{raw}}$ indicates little between-genera similarity and does not recover the phylogenetic relations of the observed genera. On the other hand, both $\bS_{\bX}$ and $\bS_{\bv}$ indicate clusters of genera that are consistent with the phylogenetic tree.
For instance, the cluster in the middle of the heat-maps of $\bS_{\bX}$ and $\bS_{\bv}$ corresponds to 13 genera from phylum Firmicutes (Clostridium, Ruminococcus, etc).
These results suggest that both $\bS_{\bX}$ and $\bS_{\bv}$ capture the phylogenetic relations of the observed genera. The ordination plot of genera based on $\bS_{\bX}$ in the bottom panel of Figure \ref{bug.bug} further confirms this conclusion. We generate the ordination plot using the method in \cite{boyuren}, which represents each genus by a region instead of a single point. In the ordination plot we find that genera from the same cluster in $\bS_{\bX}$ or $\bS_{\bv}$ are close to each other.
We also quantitatively verify the consistency of $\bS_{\bX}$ and $\bS_{\bv}$ with the phylogenetic relations.
We first calculate the pair-wise phylogenetic distance matrix of the observed genera using unweighted-Unifrac dissimilarity \citep{lozupone2011unifrac}. We then convert this distance matrix into a normalized Gram matrix $\bS_{\text{unifrac}}$ by Torgerson Classical Scaling \citep{borg2005modern} and compare $\bS_{\text{unifrac}}$ to $\bS_{\text{raw}}$, $\bS_{\bX}$ and $\bS_{\bv}$. The estimated $\bS_{\bX}$ and $\bS_{\bv}$ are both similar to $\bS_{\text{unifrac}}$ with RV-coefficients 0.66 and 0.76 respectively, while the RV-coefficient between $\bS_{\text{raw}}$ and $\bS_{\text{unifrac}}$ is 0.32.
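The conversion and comparison steps can be sketched as follows: `torgerson_gram` double-centers the squared distances (the core of Torgerson Classical Scaling) and rescales the result to unit diagonal, and `rv_coefficient` is the standard normalized inner product of two Gram matrices. The function names are ours, not from any package:

```python
import numpy as np

def torgerson_gram(D):
    """Convert a distance matrix D into a normalized Gram matrix by
    double-centering the squared distances (Torgerson Classical Scaling)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # Gram matrix of centered points
    d = np.sqrt(np.clip(np.diag(B), 1e-12, None))
    return B / np.outer(d, d)                    # rescale to unit diagonal

def rv_coefficient(S1, S2):
    """RV coefficient between two symmetric Gram matrices."""
    num = np.trace(S1 @ S2)
    den = np.sqrt(np.trace(S1 @ S1) * np.trace(S2 @ S2))
    return num / den
```

For Euclidean distances, the double-centered matrix recovers exactly the Gram matrix of the centered point configuration, which is what makes it comparable to $\bS_{\bX}$ and $\bS_{\bv}$.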
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.45]{p2f5_new.pdf}
\caption{{Estimated similarities of genera. (\textbf{Top}) Estimates of $\bS_{\bX}$, $\bS_{\bv}$ and $\bS_{\text{raw}}$. Each row or column in the heat-maps corresponds to a specific genus. The color of each entry is determined by the estimated pair-wise similarity. The rows and columns in each heat-map are reordered so that adjacent rows or columns correspond to genera that are close phylogenetically.
The phylogenetic tree for these genera is plotted at the right side of the figure. (\textbf{Bottom})
Ordination of genera based on $\bS_{\bX}$. The contour lines indicate uncertainty regions in the ordination configuration. The contour line of a genus is colored according to the phylum of the genus.}}
\label{bug.bug}
\end{figure}
\subsection{Goodness-of-fit of the model}
We conducted goodness-of-fit analyses for our model based on the model evaluation approach proposed in Section \ref{dir.mimix.comp}; see, for example, the results shown in Section S8 of the Supplementary Material.
We use posterior predictive evaluations to examine whether the observed distributions of reads for the two genera discussed in this section, Bacteroides and Bifidobacterium, are close to the corresponding posterior predictive distributions. We construct the leave-one-out 95\% posterior predictive intervals of the relative abundances of Bacteroides and Bifidobacterium in biological sample $j$ based on data with biological sample $j$ excluded. We then check whether the leave-one-out posterior predictive intervals cover the observed abundances of Bacteroides and Bifidobacterium. In our case, the predictive intervals cover the observed relative abundance of Bacteroides for 93.2\% of all biological samples and that of Bifidobacterium for 96.3\%. The high proportions of coverage for both genera indicate that there is no systematic discrepancy between the observed data and the fitted model.
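A minimal sketch of this check, assuming the held-out posterior predictive draws are already available as arrays (the model refitting itself is omitted):

```python
import numpy as np

def predictive_interval(draws, level=0.95):
    """Equal-tailed posterior predictive interval from draws simulated
    with the focal biological sample held out."""
    alpha = (1.0 - level) / 2.0
    lo, hi = np.quantile(draws, [alpha, 1.0 - alpha])
    return lo, hi

def coverage(observed, intervals):
    """Fraction of samples whose observed relative abundance lies
    inside its leave-one-out predictive interval."""
    hits = [lo <= y <= hi for y, (lo, hi) in zip(observed, intervals)]
    return sum(hits) / len(hits)
```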
\section{Discussion}
\label{sec:6}
We proposed a Bayesian mixed effects regression model to perform multivariate analyses of microbiome data. This regression analysis estimates the effects of covariates on microbial composition while allowing for correlations of the residuals. We showed that the model parameters are identifiable, a result consistent with our simulation study. The model allows us to infer the relationship between covariates and microbial compositions with two visualization approaches.
In simulations we showed that both the individual-level and the population-level relationships between covariates and microbial compositions can be accurately estimated.
Moreover, our model is more robust against zero-inflation than a latent factor model based on the logistic-normal distribution.
We finally applied the model to a longitudinal microbiome dataset and compared our results with those previously reported in the literature.
The current posterior computation is implemented with a Gibbs sampler. This can be inefficient when the number of parameters is large. The computation time increases approximately linearly with the number of samples and, similarly, with the number of microbial species. For the longitudinal microbial dataset that we analyzed, the computation time of one chain with $10^5$ iterations is around 90 minutes. A substantial improvement in computation time could likely be obtained with Hamiltonian Monte Carlo or variational Bayes methods.
In the future we would also like to investigate appropriate variable selection techniques for the fixed effects.
This is particularly helpful in settings with large collections of covariates.
A more flexible model for the fixed effects is also desirable. Currently, the relationship between abundances and covariates is modeled by linear functions
of the sample characteristics, possibly augmented by pre-specified transformations of the covariates.
Finally, the current prior specification ignores potential relationships across regression vectors $\bv_i$ associated with similar microbial species.
A systematic way to incorporate such information would involve the specification of a prior distribution on $\bv$ that mirrors the phylogeny of microbial species.
\section{Supplementary Materials}
We provide the proof of the proposition for model identifiability in the general setting. We also include additional supporting plots and tables for the simulation studies and data application.
\bibliographystyle{biom}
\section{Model identifiability in the general setting}
\label{iden.proof.sec}
\begin{prop}
\label{prop1}
Assume that $\bw_j$, $j=1,\ldots,J$, are independently distributed with $\mathbb E(\bw_j\bw_j^\intercal)$ of full rank.
Consider two sets of parameters $(\bv,\bY,\bsigma) $ and $ (\bv',\bY',\bsigma')$ having $\text{trace}(\bSigma)=\text{trace}(\bSigma')$, where $\bSigma = \bY^\intercal\bY + \bI $ and symmetrically $\bSigma' = (\bY')^\intercal \bY' + \bI$. If $[\bv,\bY,(\sigma_i/\sigma_{i'};i\neq i')]$ and $[\bv',\bY',(\sigma'_i/\sigma'_{i'};i\neq i')]$ are different, then there exists an integer $n_0>0$, such that when $n_j\ge n_0$, $j=1,\ldots,J$, under the two sets of parameters, the joint distributions of $[(n_{i,j};i\geq 1, j\leq J),\bw]$ are different.
\end{prop}
We note that the requirement $\text{trace}(\bSigma)=\text{trace}(\bSigma')$ is not restrictive in data analysis. This is because the model is invariant to scale transformations of $\bQ$. By scaling $\bQ$ we can make $\text{trace}(\bSigma)$ equal to any pre-specified value.
To prove the above proposition, we first show that for parameter values $(\bsigma,\bY,\bv)$ and $(\bsigma',\bY',\bv')$, the
equality, under the two sets of parameters, of the joint distribution of $[(P^j(\{Z_i\});i\ge 1,j\leq J),\bw]$
implies
\begin{equation}
\sigma_i/\sigma_{i'} = \sigma_i'/\sigma_{i'}', i\neq i',\;\;\;\;
\bS = \bS',\;\;\;\;
\bv_i=\bv_i', i\ge 1.
\label{param.eq}
\end{equation}
\begin{enumerate}
\item Denote $\Ptilde^j(\{Z_i\})=\mathbb I(P^j(\{Z_i\})>0)$,
$$
p(\Ptilde^j(\{Z_i\})|\bsigma,\bv,\bw_j,\bY)f(\bw_j) = \Phi((1-2\Ptilde^j(\{Z_i\}))\bv_i^\intercal\bw_j/\sqrt{\bSigma_{j,j}})f(\bw_j),
$$
where $\Phi$ is the CDF of the standard normal distribution and $f(\bw_j)$ is the density of $\bw_j$.
If the joint distribution of $[(P^j(\{Z_i\});i\ge 1,j\leq J),\bw]$ is identical under the two sets of parameters,
\begin{equation}
[(P^j(\{Z_i\});i\ge 1,j\leq J),\bw] \mid \bv,\bsigma,\bY \overset{d}{=} [(P^j(\{Z_i\});i\ge 1,j\leq J),\bw]\mid \bv',\bsigma',\bY',
\label{dist.eq}
\end{equation}
it follows that $\bv_i^\intercal\bw_j\Sigma_{j,j}^{-1/2}=(\bv_i')^\intercal\bw_j(\Sigma'_{j,j})^{-1/2}$ almost surely. With the assumption that $\mathbb E(\bw_j\bw_j^\intercal)$ is of full rank, we get $\bv_i\Sigma^{-1/2}_{j,j}=\bv_i'(\Sigma_{j,j}')^{-1/2}$ for $i\geq 1$ and $j=1,\ldots,J$.
If $\bv=\bzero$, it is straightforward to verify that (\ref{dist.eq}) implies $\bv=\bv'$.
If $\bv_i\neq \bzero$, since we know that for any $j\neq j'$,
\begin{align*}
\bv_i\Sigma_{j,j}^{-1/2} = \bv_i'(\Sigma_{j,j}')^{-1/2},\;\;\;\;\;
\bv_i\Sigma_{j',j'}^{-1/2} = \bv_i'(\Sigma_{j',j'}')^{-1/2},
\end{align*}
we have $\Sigma_{j,j}/\Sigma_{j',j'}=\Sigma_{j,j}'/\Sigma_{j',j'}'.$ This equality, combined with the assumption that $\text{trace}(\bSigma)=\text{trace}(\bSigma')$, implies that $\Sigma_{j,j}=\Sigma_{j,j}'$ for all $j\leq J$, and therefore $\bv_i=\bv_i'$ for all $i\ge 1$ if (\ref{dist.eq}) holds.
\item We then prove that (\ref{dist.eq}) implies $\bS=\bS'$.
We write
\begin{equation}
\begin{aligned}
&f(\bw_j)f(\bw_{j'})p(\Ptilde^j(\{Z_i\}),\Ptilde^{j'}(\{Z_i\})|\bw,\bsigma,\bv,\bY)= \\
&\;\;\;\;\;\;\;\;\;\;\;\;\;\;f(\bw_j)f(\bw_{j'})\int_{\Asc} \frac{1}{2\pi}(1-S_{j,j'}^2)^{-1/2}\exp\left(-\frac{1}{2}\bq^\intercal\bS_{j:j'}^{-1} \bq \right)d\bq,
\end{aligned}
\label{slepian.prob}
\end{equation}
where $\Asc=\Asc_j\times\Asc_{j'}$. $\Asc_j=(-\infty,\bv_i^\intercal\bw_j\Sigma_{j,j}^{-1/2}]$ if $\Ptilde^j(\{Z_i\})=0$ and $\Asc_j=[\bv_i^\intercal\bw_j\Sigma_{j,j}^{-1/2},\infty)$ if $\Ptilde^j(\{Z_i\})=1$. Using Corollary 3.12 in \cite{slepian}, we know that
the probability in (\ref{slepian.prob}) is monotone with respect to $S_{j,j'}$. Based on the previous paragraphs, $\Asc=\Asc'$ for every $\bw$. Therefore (\ref{dist.eq}) implies that $S_{j,j'}=S_{j,j'}'$ for all $j,j'\leq J$. When $\bv\neq \bzero$, we further get $\bSigma = \bSigma'$.
\item Finally, we prove that (\ref{dist.eq}) implies $\sigma_i/\sigma_{i'} = \sigma'_i/\sigma'_{i'}$ for all $i\neq i'$. By construction,
$$
\frac{P^j(\{Z_i\})}{P^j(\{Z_{i'}\})}=\frac{\sigma_i}{\sigma_{i'}}\frac{(Q_{i,j})_+^{2}}{(Q_{i',j})_+^{2}}.
$$
We use the convention that the ratio is zero whenever the denominator is zero. By combining (\ref{dist.eq}) and the previous paragraphs, the joint distribution of $((Q_{i,j})_+^{2}/(Q_{i',j})_+^{2},\bw_j)$ remains the same when the parameter values change from $(\bsigma,\bv,\bY)$ to $(\bsigma',\bv',\bY')$. This directly implies that $\sigma_i/\sigma_{i'}=\sigma_i'/\sigma_{i'}'$ for all $i\neq i'$ if (\ref{dist.eq}) holds.
\end{enumerate}
If (\ref{param.eq}) does not hold, then the joint distribution of $[(P^j(\{Z_i\});i\ge1, j\leq J),\bw]$ is different under the two sets of parameters. By de Finetti's theorem \citep{hewitt1955symmetric}, the joint distribution of the observable variables $(n_{i,j};i\geq 1,j\leq J)$ and $\bw$ is uniquely determined by the mixing distribution, which in our case is the law of $(P^j(\{Z_i\});i\ge 1, j\leq J)|\bw,\bv,\bsigma,\bY$, and vice versa. This fact completes the proof.
\section{MCMC sampler for DirFactor}
\setcounter{figure}{0}
\label{mixing.sec}
We focus on identifiable components of our model. These are the normalized regression coefficients $v_{l,i}/\sqrt{\text{trace}(\bSigma)}$ and the correlation matrix $\bS$. We illustrate diagnostic summaries of the MCMC for two scenarios (considered in Figure \ref{simple.mix} and Figure \ref{hard.mix}) defined in Section 4.3.
In the first (simpler) scenario, the model is correctly specified
with no additional zero-inflation, $\var(\epsilon_{i,j})=1$
and read depths generated from a Poisson distribution (see Section 4.3 for the description of this scenario).
In the second scenario, the dataset is generated from MIMIX with a moderate addition of zeros in the microbial compositions (Threshold$=10^{-3}$), $\var(\epsilon_{i,j})=10$ and read depths are generated from a negative binomial distribution with mean $10^5$ and variance $10^9$ (see Section 4.3).
For the correlation matrix, we illustrate the trace-plots of the first two eigenvalues. For the regression coefficients, we illustrate the trace-plots for $v_{1,1}/\sqrt{\text{trace}(\bSigma)}$ and $v_{1,2}/\sqrt{\text{trace}(\bSigma)}$.
We computed the upper confidence limit of the $\hat R$ statistics \citep{Rhat}, as recommended in \cite{CODA}, and performed Geweke's diagnostics \citep{geweke1991evaluating} for all these parameters based on three MCMC chains to evaluate whether the pre-specified number of Markov chain transitions is sufficient for approximate posterior inference.
In the first simulation scenario, we consider 100,000 transitions for the Markov chains. The upper confidence limits of the $\hat R$
statistics for the first two eigenvalues of $\bS$ and the scaled regression coefficients $\bv$ are all close to one (between 1.00 and 1.15), which suggests that approximately 60,000 MCMC iterations might be sufficient for approximate posterior inference (Figure \ref{simple.mix}). This is confirmed by Geweke's convergence diagnostic with truncation of the chains set at 60,000 iterations. The absolute Geweke's z-scores for all parameters and all chains are smaller than 0.5.
In the second scenario, our model is misspecified and we consider longer Markov chains (200,000 transitions) to approximate posterior inference.
The upper confidence limits of $\hat R$ for all parameters are close to one (between 1.00 and 1.05). The analyses of the traceplots suggest that approximately 100,000 iterations (Figure \ref{hard.mix}) might be sufficient for approximate posterior inference. We verify this by calculating the absolute Geweke's z-scores for all four parameters across all chains when truncation of the chains is set at 100,000 iterations. All resulting absolute z-scores are smaller than 1.96.
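For reference, the basic (non-split) $\hat R$ point estimate for a scalar parameter can be computed as below; the upper confidence limits reported above additionally require the sampling-variability corrections implemented in CODA, which this sketch omits:

```python
import numpy as np

def gelman_rubin_rhat(chains):
    """Potential scale reduction factor R-hat for one scalar parameter.

    chains: (m, n) array holding m chains of length n."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)            # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_plus = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_plus / W)
```

Values close to one indicate that the chains have mixed; values well above one, as produced below by shifting one chain, signal non-convergence.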
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.45]{traceplot.pdf}
\caption{DirFactor MCMC, first simulation scenario. (\textbf{Top}) Trace-plots of two normalized regression coefficients $v_{1,1}/\sqrt{\text{trace}(\bSigma)}$ and $v_{1,2}/\sqrt{\text{trace}(\bSigma)}$. \textbf{(Bottom)} Trace-plots of the first two eigenvalues of the correlation matrix $\bS$. The dashed line in each figure indicates the actual value of the parameter. The upper bounds of the $\hat R$
statistics based on the MCMC results from three chains are reported.}
\label{simple.mix}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.45]{traceplot_hard.pdf}
\caption{DirFactor MCMC, second simulation scenario. (\textbf{Top}) Trace-plots of two normalized regression coefficients $v_{1,1}/\sqrt{\text{trace}(\bSigma)}$ and $v_{1,2}/\sqrt{\text{trace}(\bSigma)}$. \textbf{(Bottom)} Trace-plots of the first two eigenvalues of the correlation matrix $\bS$. We do not visualize the actual value of the parameter since the data are not generated by our model. The upper bounds of the $\hat R$ statistics based on the MCMC results from three chains are reported.}
\label{hard.mix}
\end{figure}
\section{Permutation test}
\label{perm.test.sec}
We explore the use of the permutation procedure described in Section 4.1 in a simulation study.
The permutation test is computationally intensive, since we need approximate posterior inference for each permutation of the $w_{l,j}$ values.
In the example that we describe, we used parallel computing.
We consider the same simulation scenario as in Section 4.1 and generate the data using our DirFactor model.
As previously mentioned, we use the posterior mean $\hat \bv_l$ and the Euclidean norm $\|\hat\bv_l\|$ as summary statistics for testing.
We generated 500 permutations and derived a permutation-based p-value to test the null hypothesis $\bv_1=\bzero_I$.
The p-value is smaller than 0.002, which indicates strong evidence of a relation between the microbial composition and the covariate $l=1$.
The test correctly detects the association between $w_{1,j}$ and the microbial composition. The null distribution of $\|\hat \bv_1\|$
and the posterior mean $\|\hat\bv_1\|$ from the original data are shown in Figure \ref{perm.test}.
We also examine the behavior of the test under the null hypothesis, to evaluate if the Type-I error is controlled. We generate 50 datasets from the same scenario but we set $\bv_1=\bzero_I$. We then check if the distribution of the permutation p-values is uniform with a Q-Q plot (Figure \ref{perm.test}, right panel).
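The permutation p-value itself is simple to sketch. The expensive part in our procedure is that each permutation requires refitting the model to obtain $\|\hat\bv_1\|$; in the sketch below that step is replaced by an arbitrary stand-in statistic, so only the p-value bookkeeping is faithful:

```python
import numpy as np

def permutation_pvalue(w, stat_fn, n_perm=500, seed=0):
    """Permutation p-value for the null of no association: permute the
    covariate values across samples and recompute the summary statistic.

    In the paper, stat_fn would refit the model on the permuted covariate
    and return ||v_hat_1||; here it is any function of the covariate."""
    rng = np.random.default_rng(seed)
    observed = stat_fn(w)
    null_stats = [stat_fn(rng.permutation(w)) for _ in range(n_perm)]
    exceed = sum(s >= observed for s in null_stats)
    return (1 + exceed) / (1 + n_perm)   # add-one to avoid a zero p-value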
\setcounter{figure}{0}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{perm_test.pdf}
\includegraphics[scale=0.5]{perm_null.pdf}
\caption{(\textbf{Left}) Estimated null distribution of $\|\hat\bv_1\|$. The red line indicates the Euclidean norm of $\hat\bv_1$ from the original data. (\textbf{Right}) Q-Q plot compares the uniform distribution and the distribution of the permutation p-values for 50 independent datasets where $\bv_1=\bzero_I$.}
\label{perm.test}
\end{figure}
\section{Comparison of estimated derivatives to their actual values}
\label{deriv.comp.sec}
We illustrate additional results on the simulation study presented in Section 4.2, with scatter plots that compare the estimated values of the partial derivatives $\partial P^j(\{Z_i\})/\partial w_{l,j}$ and their 95\% credible intervals to the actual values.
\setcounter{figure}{0}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.45]{deriv_comp_correct.pdf}
\includegraphics[scale=0.45]{deriv_comp_mis.pdf}
\caption{Comparison between posterior estimates of partial derivatives $\partial P^j(\{Z_i\})/\partial w_{l,j}$
and the corresponding actual values when our model is correctly specified (\textbf{Top}) and misspecified (\textbf{Bottom}). The bar for each point indicates the 95\% credible interval of the partial derivative. Panels include a 45-degree dashed line.}
\label{deriv.comp}
\end{figure}
\section{Derivatives and population trends in 50 simulation replicates}
\hspace{20mm}
\setcounter{figure}{0}
\begin{figure}[hb]
\centering
\includegraphics[scale=0.38]{p2f2_mse.pdf}
\caption{Summary of the estimates of derivatives and population trends in 50 simulation replicates. (\textbf{Top}) Distributions of the MSEs of the estimated derivatives for the first 16 species, when the model is correctly specified and misspecified. (\textbf{Middle}) Bands that include
94\% of the simulation-specific estimates of the population trends, along with the actual trends, when the model is correctly specified. (\textbf{Bottom}) The same population-trend figures for the misspecified model.}
\label{dist_pw_mse}
\end{figure}
\section{Prediction accuracy and coverage}
\setcounter{figure}{0}
\label{comp.res.read.depth.sec}
We summarize the results on prediction accuracy and coverage of DirFactor and MIMIX when $P^j(\{Z_i\})$'s are simulated as in Section 4.3 from either DirFactor or MIMIX and the read depths $n_j$ are generated from negative binomial distributions with moderate overdispersion (Table \ref{rmse.nb.1}, \ref{cov.nb.1}) and large overdispersion (Table \ref{rmse.nb.2}, \ref{cov.nb.2}).
We find that when the overdispersion of the read depths is large (mean $=10^5$ and variance $=4\times 10^{10}$), the prediction accuracy and coverage of both DirFactor and MIMIX decrease, while when the degree of overdispersion is moderate (mean $=10^5$ and variance $= 10^9$), the prediction accuracy and coverage remain similar to those in the simulations where the read depths follow a Poisson distribution.
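For concreteness, the two read-depth regimes can be reproduced by converting the target (mean, variance) pair to the $(n, p)$ parameterization used by NumPy's negative binomial sampler; the mapping below assumes the variance exceeds the mean:

```python
import numpy as np

def nb_params(mean, var):
    """Map (mean, variance) to numpy's negative binomial (n, p)
    parameterization; requires var > mean, since var = mean + mean**2 / n."""
    n = mean ** 2 / (var - mean)
    p = n / (n + mean)
    return n, p

rng = np.random.default_rng(5)
n, p = nb_params(1e5, 1e9)                 # moderate overdispersion regime
depths = rng.negative_binomial(n, p, size=20000)
```

The large-overdispersion regime is obtained with `nb_params(1e5, 4e10)`; the same mapping applies.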
\begin{table}[htbp]
\centering
\begin{tabular}{c|ccc|ccc|ccc|ccc|}
& \multicolumn{6}{c|}{Simulated from DirFactor} & \multicolumn{6}{c|}{Simulated from MIMIX} \bigstrut[b]\\
\cline{2-13} & \multicolumn{3}{c|}{DirFactor} & \multicolumn{3}{c|}{MIMIX} & \multicolumn{3}{c|}{DirFactor} & \multicolumn{3}{c|}{MIMIX} \bigstrut\\
\hline
Threshold & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ \bigstrut[t]\\
\hline
$\var(\epsilon_{i,j})=1$ & 1.0&1.0&1.3&14.3&26.1&54.8&1.4&2.5&4.9&1.2&2.0&2.8\\
$\var(\epsilon_{i,j})=5$ &3.0&3.4&3.4&34.3&66.6&75.3&4.7&7.3&7.1&1.1&2.1&2.4\\
$\var(\epsilon_{i,j})=10$ &4.4&4.3&4.6&56.4&105.3&155.8&10.9&10.9&11.3&1.8&2.8&3.6\bigstrut[b]\\
\hline
\end{tabular}%
\caption{Average RMSE of estimated population mean abundances at 20 different values of $w_{1,0}$ equally spaced between -2 and 2 across simulation replicates for our model (DirFactor) and MIMIX. The threshold parameter indicates at which value we truncate the simulated $P^j(\{Z_i\})$'s to zero. We consider two scenarios where the dataset is generated from DirFactor and MIMIX respectively. The read depths are generated from a negative binomial distribution with mean $10^5$ and variance $10^9$. All RMSEs in the table are multiplied by $10^3$.}
\label{rmse.nb.1}
\end{table}
\begin{table}[htbp]
\centering
\begin{tabular}{c|ccc|ccc|ccc|ccc|}
& \multicolumn{6}{c|}{Simulated from DirFactor} & \multicolumn{6}{c|}{Simulated from MIMIX} \bigstrut[b]\\
\cline{2-13} & \multicolumn{3}{c|}{DirFactor} & \multicolumn{3}{c|}{MIMIX} & \multicolumn{3}{c|}{DirFactor} & \multicolumn{3}{c|}{MIMIX} \bigstrut\\
\hline
Threshold & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ \bigstrut[t]\\
\hline
$\var(\epsilon_{i,j})=1$&0.99&0.94&0.91&0.92&0.84&0.79&0.97&0.97&0.89&1.00&0.94&0.87\\
$\var(\epsilon_{i,j})=5$&0.99&0.93&0.88&0.94&0.84&0.80&0.96&0.93&0.89&0.99&0.93&0.84\\
$\var(\epsilon_{i,j})=10$&0.96&0.95&0.89&0.93&0.82&0.75&0.90&0.88&0.84&0.95&0.93&0.85\bigstrut[b]\\
\hline
\end{tabular}
\caption{Coverage of the posterior distribution of the population trend (defined in Sec 3.2). We average across species and across $w_{1,0}$ values between -2 and 2 and $w_{2,0}=0$ and $w_{2,0}=1$. The threshold parameter indicates at which value we truncate the simulated $P^j(\{Z_i\})$'s to zero. The coverage is calculated using simulation replicates. The read depths are generated from a negative binomial distribution with mean $10^5$ and variance $10^9$. We consider two scenarios where the dataset is generated from DirFactor and MIMIX respectively.}
\label{cov.nb.1}
\end{table}
\begin{table}[htbp]
\centering
\begin{tabular}{c|ccc|ccc|ccc|ccc|}
& \multicolumn{6}{c|}{Simulated from DirFactor} & \multicolumn{6}{c|}{Simulated from MIMIX} \bigstrut[b]\\
\cline{2-13} & \multicolumn{3}{c|}{DirFactor} & \multicolumn{3}{c|}{MIMIX} & \multicolumn{3}{c|}{DirFactor} & \multicolumn{3}{c|}{MIMIX} \bigstrut\\
\hline
Threshold & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ \bigstrut[t]\\
\hline
$\var(\epsilon_{i,j})=1$&2.2&3.0&4.5&24.4&35.9&53.0&2.2&4.3&6.1&2.2&2.7&2.9\\
$\var(\epsilon_{i,j})=5$&7.0&8.4&11.1&45.8&85.2&95.7&6.1&7.7&9.2&3.2&3.5&6.4\\
$\var(\epsilon_{i,j})=10$&8.4&10.5&12.6&75.5&126.0&185.6&15.7&18.7&21.7&4.6&6.3&7.2\bigstrut[b]\\
\hline
\end{tabular}%
\caption{Average RMSE of estimated population mean abundances at 20 different values of $w_{1,0}$ equally spaced between -2 and 2 across simulation replicates for our model (DirFactor) and MIMIX. The threshold parameter indicates at which value we truncate the simulated $P^j(\{Z_i\})$'s to zero. We consider two scenarios where the dataset is generated from DirFactor and MIMIX respectively. The read depths are generated from a negative binomial distribution with mean $10^5$ and variance $4\times 10^{10}$. All RMSEs in the table are multiplied by $10^3$.}
\label{rmse.nb.2}
\end{table}
\begin{table}[htbp]
\centering
\begin{tabular}{c|ccc|ccc|ccc|ccc|}
& \multicolumn{6}{c|}{Simulated from DirFactor} & \multicolumn{6}{c|}{Simulated from MIMIX} \bigstrut[b]\\
\cline{2-13} & \multicolumn{3}{c|}{DirFactor} & \multicolumn{3}{c|}{MIMIX} & \multicolumn{3}{c|}{DirFactor} & \multicolumn{3}{c|}{MIMIX} \bigstrut\\
\hline
Threshold & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ & 0 & $10^{-3}$ & $10^{-2}$ \bigstrut[t]\\
\hline
$\var(\epsilon_{i,j})=1$&0.98&0.81&0.72&0.96&0.61&0.49&0.99&0.86&0.76&0.96&0.77&0.70\\
$\var(\epsilon_{i,j})=5$&0.95&0.8&0.69&0.98&0.62&0.49&0.96&0.83&0.66&0.94&0.72&0.69\\
$\var(\epsilon_{i,j})=10$&0.9&0.77&0.65&0.99&0.64&0.5&0.90&0.8&0.65&0.94&0.72&0.64\bigstrut[b]\\
\hline
\end{tabular}
\caption{Coverage of the posterior distribution of the population trend (defined in Sec 3.2). We average across species and across $w_{1,0}$ values between -2 and 2 and $w_{2,0}=0$ and $w_{2,0}=1$. The threshold parameter indicates at which value we truncate the simulated $P^j(\{Z_i\})$'s to zero. The coverage is calculated using simulation replicates. The read depths are generated from a negative binomial distribution with mean $10^5$ and variance $4\times 10^{10}$. We consider two scenarios where the dataset is generated from DirFactor and MIMIX respectively.}
\label{cov.nb.2}
\end{table}
\section{Model comparison}
\setcounter{figure}{0}
We use the same setting as in Section 4.3, with no additional zero-inflation, Poisson distributed read depths and $\var(\epsilon_{i,j})=1$ for the simulation of $Q_{i,j}$. After simulation of the $Q_{i,j}$ variables, we use them to generate two datasets,
one consistent with the distribution of our model ($P^j(\{Z_i\})\propto \sigma_i(Q_{i,j})_+^2$) and the other consistent with MIMIX ($P^j(\{Z_i\})\propto \exp(Q_{i,j})$).
We perform posterior predictive evaluations of our model and MIMIX as outlined in Section 4.3. We use binary regression to illustrate the coverage probability of the 95\% predictive intervals as a function of $w_{1,j}$ for Species 1. An example is visualized in Figure \ref{pp.sim.check}. The mean coverage probabilities, for data generated from MIMIX, are 0.98 and 0.97 with predictive analyses based on MIMIX and our model, respectively. Similarly, the mean coverage probabilities, for data generated from our model, are 0.92 and 0.99 with predictive analyses based on MIMIX and our model, respectively.
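A minimal stand-in for this binary regression is a two-parameter logistic fit of the covered/uncovered indicator on $w_{1,j}$ by Newton's method; the exact binary regression used for the figure is not specified here, so this is only illustrative:

```python
import numpy as np

def coverage_curve(w, covered, n_iter=50):
    """Fit P(covered = 1 | w) = sigmoid(a + b * w) by Newton's method,
    a minimal stand-in for the binary regression behind the figure."""
    X = np.column_stack([np.ones_like(w), w])
    beta = np.zeros(2)
    for _ in range(n_iter):
        prob = 1.0 / (1.0 + np.exp(-X @ beta))
        weight = prob * (1.0 - prob)
        H = X.T @ (X * weight[:, None]) + 1e-8 * np.eye(2)  # ridged Hessian
        beta += np.linalg.solve(H, X.T @ (covered - prob))
    return beta   # coverage curve: w -> sigmoid(beta[0] + beta[1] * w)
```

With an intercept in the model, the fitted probabilities average exactly to the empirical coverage proportion, which is the property used when reporting mean coverage.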
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{DF_data_pp.pdf}
\includegraphics[scale=0.5]{MI_data_pp.pdf}
\caption{Estimated coverage probabilities of posterior leave-one-out 95\% predictive intervals as a function of $w_{1,j}$ values. The curves indicate estimated coverage probabilities and the points illustrate the observed relative abundances of all samples.
The curves are obtained by fitting a binary regression model to the covered or uncovered status of each sample.
The color indicates which model is used to generate the posterior predictive intervals. The data are generated from the DirFactor model (Left) and the MIMIX model (Right). The two summaries indicated as ``DF'' and ``MI'' are the proportions of samples that are covered by the predictive intervals generated by our Dirichlet model and the MIMIX model.}
\label{pp.sim.check}
\end{figure}
\section{Goodness-of-fit analysis for our model on the DIABIMMUNE data}
\hspace{20mm}
\setcounter{figure}{0}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{pp_bacte.pdf}
\includegraphics[scale=0.5]{pp_bifido.pdf}
\caption{Leave-one-out posterior predictive intervals for the abundances of Bacteroides and Bifidobacterium. We only illustrate 150 randomly sampled biological samples in this plot. Black dots indicate the observed relative abundance and lines indicate 95\% posterior predictive intervals. The biological samples are sorted by the posterior means of the associated predictive distributions of the relative abundances.}
\label{goodness.of.fit}
\end{figure}
\bibliographystyle{biom}
\section{Introduction}
\label{sec:Introduction}
The inherent atomic properties, like mass or polarizability, and the very high sensitivities achievable with atom interferometers make them a very versatile tool for high-precision measurements of, for instance, fundamental constants, internal forces, accelerations or rotations \cite{Cronin_2009,Bouchendira_2011,Geiger_2011,Stockton_2011}; they are also used in general relativity tests \cite{Dimopoulos_2007,Hohensee_2011} and have even been proposed for gravitational wave detection \cite{Dimopoulos_2008,Graham_2013}.
Atomic Bose-Einstein condensates (BECs) are promising candidates to increase the phase sensitivity of atom interferometers due to their large coherence length \cite{Andrews_1997,Hagley_1999,Bloch_2000}. However, elastic collisions in BECs can produce phase diffusion, reducing the phase coherence \cite{Lewenstein_1996,Javanainen_1997,Castin_1997}. Phase diffusion can be reduced, for example, by using Feshbach resonances \cite{Fattori_2008,Cheng_2010} or by taking advantage of the interactions to introduce non-classical correlations between the two arms of the interferometer \cite{Jo_2007,Berrada_2013}. Nonlinear interactions also give rise to squeezed states, which make it possible to surpass the standard quantum limit \cite{Kitagawa_1993,Esteve_2008,Riedel_2010,Gross_2010,Gross_2011,Lucke_2011,Hamley_2012}.
In BECs with attractive interactions, the use of matter wave bright solitons \cite{Khaykovich_2002,Hulet_2002,Wieman_2006} for interferometry was already proposed in \cite{Hulet_2002} and their potential to increase phase sensitivity has recently been discussed \cite{Kasevich_2012}. The main advantages of matter wave bright solitons are that they can be described by a single macroscopic wavefunction of large mass, are well localized in space and do not disperse. Several methods have been proposed for implementing the beam splitter required in a matter wave bright soliton interferometer \cite{Malomed_2013}, such as applying a resonant $\pi/2$ pulse to an internal state transition of the soliton \cite{Gardiner_2011}, using an accurate control of the scattering length in space or in time to split the soliton into two or more pieces \cite{Paredes_2012}, or collisions with a potential barrier in different scenarios, like using a rectangular barrier \cite{Carr_2012,Damgaard_2012,Sakaguchi_2005}, Gaussian and delta type potential barriers \cite{Gardiner_2012,Holmer_2007,Holmer_20071,Gertjerenken_2012,Gertjerenken_2012b,Weiss_2012}, in a quasi one dimensional external harmonic confinement in the presence of quantum fluctuations \cite{Martin_2012} or considering three dimensional dynamics \cite{Cuevas_2013}.
Here, we consider a matter wave bright soliton interferometer composed of a harmonic potential trap with a Rosen--Morse barrier at its center on which an incident soliton collides and splits into two solitons. The two split solitons recombine after a dipole oscillation in the trap at the position of the barrier producing two output solitons.
The number of atoms of these two outputs provides a measure of the phase difference between the two arms of the interferometer. The phase difference acquired by the two solitons during the splitting process in a collision with a potential barrier is often assumed to be $\pi/2$ even for finite width barriers. Here, we show that this is only the case in the limit of very high velocities of the incident soliton and extremely narrow barriers. In general, the phase difference between the split solitons strongly depends on the velocity of the incident soliton, the nonlinearity and the width of the barrier. We also point out the limitations to achieving a symmetric splitting of the incident soliton by scattering on a potential barrier. Although the two split solitons can be obtained with the same number of atoms, in general the reflected soliton has a lower velocity than the transmitted one.
The paper is organized as follows. In section \ref{sec:Physical system} we describe the considered matter wave bright soliton interferometer. In section \ref{sec:Transmission coefficient} we study the transmission coefficient as a function of the kinetic energy of the soliton for different nonlinearities. Section \ref{sec:Splitting} is devoted to the analysis of the splitting process focusing on the case of equal-sized splitting. First, in section \ref{sec:RMvsDelta} the area of the Rosen--Morse barrier necessary to obtain two split solitons with the same number of atoms is analyzed. Then, in section \ref{Velocity of the split solitons}, we study the velocity of the split solitons and finally, in section \ref{sec:phase difference} the phase difference acquired between the two split solitons is characterized. Section \ref{sec:Recombination} is dedicated to the recombination process, and finally in section \ref{conclusions} we present the conclusions.
\section{Physical system}
\label{sec:Physical system}
Within the mean field approach the dynamics of a Bose-Einstein condensate at zero temperature in one dimension (1D) is described by the time-dependent 1D Gross--Pitaevskii equation (GPE):
\begin{equation}
\label{GP}
i\hbar\frac{\partial}{\partial t}\Psi({z},t)=\left(-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial z^2}+V({z})+g_{1D}|\Psi({z},t)|^2\right)\Psi({z},t),
\end{equation}
where $V(z)$ is the external potential, $m$ the atomic mass, and $g_{1D}=2N\hbar\omega_r a_s$ the parameter that determines the strength of the atom--atom interactions, with $N$, $\omega_{r}$ and $a_{s}$ the atom number, the frequency of the radial confinement and the s-wave scattering length, respectively. The wavefunction is normalized to $1$, and we consider negative scattering lengths, $a_{s}<0$, corresponding to attractive interactions.
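The dynamics governed by Eq. (\ref{GP}) can be integrated numerically. The following is a minimal split-step Fourier sketch (not the authors' code; grid, time step and soliton parameters are illustrative assumptions, in units with $\hbar = m = 1$):

```python
import numpy as np

# Illustrative split-step Fourier integrator for the 1D GPE,
# in units with hbar = m = 1; grid and time step are assumptions.
hbar = m = 1.0
L, n = 80.0, 1024
z = np.linspace(-L / 2, L / 2, n, endpoint=False)
dz = z[1] - z[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dz)
dt = 1e-3

def gpe_step(psi, V, g1d):
    # Strang splitting: half kinetic step in k-space, full
    # potential + nonlinear step in real space, half kinetic step.
    half_kin = np.exp(-1j * hbar * k**2 * dt / (4 * m))
    psi = np.fft.ifft(half_kin * np.fft.fft(psi))
    psi = psi * np.exp(-1j * (V + g1d * np.abs(psi)**2) * dt / hbar)
    return np.fft.ifft(half_kin * np.fft.fft(psi))

# Bright soliton of the free GPE (g1d < 0, norm 1) moving at v0:
# psi ~ sech(z / xi) with xi = 2 * hbar**2 / (m * |g1d|).
g1d, v0 = -1.0, 2.0
xi = 2 * hbar**2 / (m * abs(g1d))
psi = np.sqrt(1 / (2 * xi)) / np.cosh(z / xi) * np.exp(1j * m * v0 * z / hbar)
```

Strang splitting conserves the norm exactly (each factor is unitary), which makes it a convenient choice for long interferometric sequences.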
For the implementation of the considered interferometer, first a matter wave bright soliton is created in a harmonic external potential trap. Then, the potential trap is suddenly displaced a distance $d$ in the $z$ direction and the soliton acquires potential energy (Fig. \ref{fig:scheme} (a)). According to the particle models \cite{Poletti_2008,Martin_2007}, the potential energy given by the displacement is fully converted into kinetic energy once the soliton reaches the center of the trap: $E^{v}_{k}= E_{p}=\frac{1}{2}m\omega_{z}^{2}d^{2}$. At this time, a Rosen--Morse potential barrier, with which the soliton will collide, is located at the center of the harmonic potential, and the external potential reads:
\begin{equation}
\label{external_potential}
V(z)=\frac{1}{2}m\omega_{z}^{2}z^{2} + V_{b}\sech^{2}\left(\frac{z}{\sigma}\right),
\end{equation}
where $\omega_{z}$ is the frequency of the axial confinement, and $V_b$ and $\sigma$ are the strength and the width of the Rosen--Morse potential barrier, respectively. By scattering on the barrier, the incident soliton splits into two solitons which propagate in opposite directions undergoing a dipole oscillation in the harmonic potential (Fig. \ref{fig:scheme} (b)). Finally, the two solitons recombine at the position of the barrier (Fig. \ref{fig:scheme} (c)) producing two output solitons. The number of atoms of these two output solitons provides a measure of the phase difference between the two arms of the interferometer.
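The potential of Eq. (\ref{external_potential}) and the particle-model energy conversion can be sketched numerically. Parameter values below follow the figure captions ($^{7}$Li, $\omega_z = 2\pi\times78$ Hz, etc.) and should be treated as illustrative:

```python
import numpy as np

# External potential (harmonic trap plus Rosen--Morse barrier) and the
# particle-model incident velocity: E_k = (1/2) m wz^2 d^2 => v0 = wz*d.
hbar = 1.054571817e-34
m = 7 * 1.66053906660e-27       # 7Li atomic mass, kg
wz = 2 * np.pi * 78.0           # axial trap frequency, rad/s
Vb = 16.09 * hbar * wz          # barrier height, J
sigma = 0.67e-6                 # barrier width, m
d = 20e-6                       # initial displacement, m

def V(z):
    # Harmonic trap plus sech-squared (Rosen--Morse) barrier.
    return 0.5 * m * wz**2 * z**2 + Vb / np.cosh(z / sigma)**2

v0 = wz * d                     # incident velocity at the trap center
print(v0 * 1e3)                 # ~9.8 mm/s for these parameters
```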
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{Fig_1.pdf}
\caption{Schematics of the interferometer sequence: (a) initial situation in which the soliton is displaced with respect to the center of the harmonic potential trap by
a distance $d$; (b) the two split solitons obtained after the collision with the barrier separate from each other and (c) the two solitons return to the position of the barrier and collide.}
\label{fig:scheme}
\end{figure}
\section{Transmission coefficient}
\label{sec:Transmission coefficient}
The first requirement for the implementation of a matter wave bright soliton interferometer is a mechanism to coherently split the incident soliton into two identical solitons. In our system, such a mechanism is provided by the interaction with a Rosen--Morse potential barrier, as described in section \ref{sec:Physical system}. The Rosen--Morse potential is a sech-squared-shaped potential with an analytical solution in the linear regime, and it provides a good approximation to the potential created by a focused light beam by means of the dipole light force \cite{Lee_2006,Yuri_2011}. Also, the absence of sharp edges in the Rosen--Morse barrier, in contrast to the delta and square potentials, avoids sharp point effects \cite{Lee_2006}.
Fig. \ref{fig:transmission} shows the transmission coefficient as a function of the kinetic energy of the incident soliton, $E^{v}_{k}$, for different values of the nonlinear interaction term. $E^{v}_{k}$ does not contain the quantum pressure term, i.e., it is only due to the gradient of the soliton phase (see Appendix A). The transmission coefficient is defined as:
\begin{equation}
T=\int^{\infty}_{0}{\left|\Psi(z,t=\pi/\omega_{z})\right|^{2}dz},
\end{equation}
and it is obtained numerically at a time such that the two split solitons are well separated from each other ($t=\pi/\omega_{z}$).
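Numerically, this definition amounts to summing the density on the transmitted side of the barrier. A small helper (the grid and wavefunction are hypothetical; any normalized $\Psi$ sampled on a grid works) might look like:

```python
import numpy as np

# Transmission coefficient as defined in the text: the norm carried by
# the z > 0 side of the grid once the split solitons are well separated.
def transmission(psi, z, dz):
    return float(np.sum(np.abs(psi[z > 0])**2) * dz)
```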
\begin{figure}[htbp]
\centering
\includegraphics[width=0.50\textwidth]{Fig_2.pdf}
\caption{(Color online) Transmission coefficient as a function of the kinetic energy of the incident soliton for different values of the nonlinear interaction term. The solid line corresponds to the analytical solution of the Rosen--Morse potential barrier obtained in the linear regime. The parameter values are: $V_{b}=17.14\hbar\omega_z$, $\sigma=0.67$ $\mu$m and $\omega_{z}=2\pi\times78$ Hz.}
\label{fig:transmission}
\end{figure}
From Fig. \ref{fig:transmission}, it can be seen that the nonlinearity dominates the behavior of the transmission coefficient. The kinetic energy of the incident soliton necessary to obtain a fixed value of the transmission coefficient decreases (increases) with the nonlinearity for $T>\overline{T}$ ($T<\overline{T}$), where $\overline{T}$ is a value close to $0.5$; $\overline{T}=0.57$ for the case shown in Fig. \ref{fig:transmission}. Since the nonlinearity tends to hold all the atoms together, as $g_{1D}$ increases the shape of the transmission coefficient becomes sharper, favoring the transmission (reflection) for $T>\overline{T}$ ($T<\overline{T}$). For a large enough strength of the nonlinear interactions, the transmission coefficient presents a step-like behavior in which the incident soliton is either completely transmitted or completely reflected. This step-like behavior has also been reported for square barriers \cite{Sakaguchi_2005,Damgaard_2012}. The analytical linear transmission coefficient of the Rosen--Morse potential, which for $\frac{8m V_{b}\sigma^{2}}{\hbar^{2}}>1$ reads \cite{Landau_1965}:
\begin{equation}
T_{RM}=\frac{\sinh^{2}\left(\sigma\pi\sqrt{\frac{2m E^{v}_{k}}{\hbar^{2}}}\right)}{\sinh^{2}\left(\sigma\pi\sqrt{\frac{2m E^{v}_{k}}{\hbar^{2}}}\right)+\cosh^{2}\left(\frac{\pi}{2}\sqrt{\frac{8mV_{b}\sigma^{2}}{\hbar^{2}}-1}\right)},
\end{equation}
is also plotted in Fig. \ref{fig:transmission} (solid line), showing good agreement with the numerical results in the limit of low nonlinearity.
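For reference, the linear transmission formula quoted above is straightforward to evaluate. A sketch in units with $\hbar = m = 1$ (the parameter values are illustrative, not the paper's):

```python
import numpy as np

# Linear (noninteracting) transmission of a Rosen--Morse barrier,
# valid for 8*m*Vb*sigma**2/hbar**2 > 1; units with hbar = m = 1.
def T_RM(Ek, Vb, sigma, hbar=1.0, m=1.0):
    s = np.sinh(np.pi * sigma * np.sqrt(2 * m * Ek) / hbar)**2
    c = np.cosh(0.5 * np.pi * np.sqrt(8 * m * Vb * sigma**2 / hbar**2 - 1))**2
    return s / (s + c)
```

As expected for tunneling, $T_{RM}$ grows monotonically with the kinetic energy and saturates at 1 well above the barrier.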
\section{Splitting Process}
\label{sec:Splitting}
In this section we focus on the case where the two split solitons have the same number of atoms, i.e., $T=0.5$. For a fixed width of the barrier and a fixed nonlinearity, the potential strength of the barrier, $V_{b}$, is modified to obtain the equal-sized splitting for different velocities of the incident soliton. We consider velocities of the incident soliton and widths of the barrier achievable in current experiments \cite{Hulet}, but we also study very high incident velocities and very narrow barriers to approach the limit of the delta potential barrier. In all cases $E^{v}_{k}<V_{b}$, i.e., the system is in the tunneling regime.
For the analysis of the splitting mechanism we switch off the external harmonic potential trap, in order to take into account only the effects produced by the interaction between the soliton and the barrier, and we focus on three main issues: in section \ref{sec:RMvsDelta} we calculate the area of the potential barrier required to obtain the equal-sized splitting; in section \ref{Velocity of the split solitons} we study the difference between the velocities of the transmitted and reflected solitons and its relation with the velocity of the incident soliton; and in section \ref{sec:phase difference} we characterize the phase difference between the transmitted and reflected solitons.
\subsection{Rosen--Morse vs. Delta barrier}
\label{sec:RMvsDelta}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.50\textwidth]{Fig_3.pdf}
\caption{(Color online) Area of the Rosen--Morse barrier as a function of the velocity of the incident soliton, while keeping the transmission coefficient fixed at $T=0.5$, for different values of the width of the barrier and a fixed $g_{1D}/\hbar=25.74$ $\mu$m/ms (a) and for different values of the nonlinear interactions and a fixed width of the barrier $\sigma=0.085$ $\mu$m (b).}
\label{fig:velo_area}
\end{figure}
In Fig.~\ref{fig:velo_area} we show the area of the Rosen--Morse potential barrier, $A=2V_b\sigma$, required to obtain a fixed transmission coefficient of $0.5$, as a function of the velocity of the incident soliton, for different values of the width of the barrier and a fixed nonlinearity (Fig. \ref{fig:velo_area} (a)) and for different values of the nonlinear interaction parameter and a fixed width of the barrier (Fig. \ref{fig:velo_area} (b)). For high velocities of the incident soliton and very narrow barriers (Fig. \ref{fig:velo_area} (a)), the area of the barrier depends approximately linearly on the velocity of the incident soliton, thus retrieving the behavior of a delta potential barrier, for which the transmission coefficient of a free particle reads \cite{Bohm_1951}:
\begin{equation}
\label{delta}
T_{\bot}=\frac{1}{1+\frac{\lambda^{2}}{\hbar^{2}v_{0}^{2}}},
\end{equation}
with $\lambda$ being the strength of the delta potential barrier, $V_{\bot}=\lambda\delta(z)$, and $v_0$ the velocity of the incident particle. From Eq. (\ref{delta}), in order to keep $T_{\bot}$ equal to $0.5$, the strength of the delta potential barrier must depend linearly on the incident velocity, $\lambda=\hbar v_0$. For the Rosen--Morse barrier with $T=0.5$, we recover a linear dependence of the area on the incident velocity for $V_b\rightarrow\infty$ and $\sigma\rightarrow0$ while keeping the product $V_b\sigma$ constant, but with a different slope than in the delta potential case. For wider barriers (Fig. \ref{fig:velo_area} (a)), we have found an approximately quadratic behavior of the area of the Rosen--Morse potential barrier with respect to the incident velocity for a fixed width of the barrier and a fixed nonlinearity. We can also see that the growth of the area of the barrier with the velocity of the incident soliton is steeper as the width of the barrier increases. Note that here we have considered very thin barriers because, for high incident velocities, the width of the barrier is limited from above, since for wide enough barriers the incident soliton splits into more than two pieces.
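A quick numerical check of the delta-barrier expression of Eq. (\ref{delta}) and of the condition $\lambda = \hbar v_0$ (a sketch in units with $\hbar = 1$):

```python
# Free-particle transmission across a delta barrier V = lambda*delta(z);
# note the particle mass cancels in this form.  Units with hbar = 1.
def T_delta(lam, v0, hbar=1.0):
    return 1.0 / (1.0 + lam**2 / (hbar**2 * v0**2))
```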
In order to analyze the effects of the nonlinearity, in Fig. \ref{fig:velo_area} (b) we study low incident velocities and we observe that as $g_{1D}$ increases, for a fixed width of the barrier, the area of the barrier necessary to keep $T=0.5$ decreases. This effect is consistent with the dependence on the nonlinearity of the transmission coefficient as a function of $E^{v}_{k}$ (shown in Fig. \ref{fig:transmission}). For a fixed $E_{k}^{v}$ and for $T<\overline{T}=0.57$, as $g_{1D}$ increases, the transmission coefficient decreases. Thus, the potential strength of the barrier should decrease in order to maintain the equal-sized splitting. Note that the effect of the nonlinearity decreases as the incident velocity increases.
\subsection{Velocity of the split solitons}
\label{Velocity of the split solitons}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.50\textwidth]{Fig_4.pdf}
\caption{(Color online) Ratio between the modulus of the velocity of the reflected (transmitted) soliton and the velocity of the incident soliton as a function of the velocity of the incident soliton for a fixed $T=0.5$ for different values of the width of the barrier for a fixed $g_{1D}/\hbar=25.74$ $\mu$m/ms (a) and for different values of the nonlinear interactions for a fixed width of the barrier $\sigma=0.668$ $\mu$m (b). The velocities of the transmitted and reflected solitons are represented with the same line type; in each case, the lower curve corresponds to the reflected soliton.}
\label{fig:velocity}
\end{figure}
In the equal-sized splitting, even though the reflected and transmitted solitons have the same number of atoms, they do not behave symmetrically. In general, we find that the reflected soliton is slower than the transmitted one and their velocities depend on the width of the barrier and on the strength of the nonlinear interaction. Fig. \ref{fig:velocity} shows the numerically calculated ratio between the absolute value of the velocity of the reflected (transmitted) soliton and the velocity of the incident soliton as a function of the velocity of the incident soliton, for different values of the width of the barrier for a fixed $g_{1D}$ (Fig. \ref{fig:velocity} (a)) and for different values of the nonlinearity for a fixed $\sigma$ (Fig. \ref{fig:velocity} (b)). In each case, the lower curve corresponds to the reflected soliton. The difference between the absolute values of the velocities of the transmitted and reflected solitons increases as the width of the barrier increases (Fig. \ref{fig:velocity} (a)). We also observe that, for low velocities of the incident soliton, as the nonlinearity increases (Fig. \ref{fig:velocity} (b)), the split solitons are slowed down and eventually they become trapped at the position of the barrier. This effect also appears in rectangular barriers \cite{Damgaard_2012} and limits the maximum value of the nonlinearity in order to maintain the 50-50 splitting.
Notice also that the mean of the absolute values of the velocities of the two split solitons, $\overline{\Delta v}=(|v_T|+|v_R|)/2$, is practically independent of the width of the barrier (Fig. \ref{fig:velocity} (a)) but is strongly affected by the nonlinearity (Fig. \ref{fig:velocity} (b)). Also, $\overline{\Delta v}$ approaches the velocity of the incident soliton for high incident velocities, i.e., the ratio $\overline{\Delta v}/v_0$ tends to one as the incident velocity increases. In Fig. \ref{fig:velocity} we can also see that the velocity of the transmitted soliton is, in some cases, larger than the velocity of the incident one. Nevertheless, the increase of velocity of the transmitted soliton is accompanied by a decrease of the velocity of the reflected soliton, and therefore the total energy is conserved. The difference in velocity of the split solitons introduces an accumulated phase difference between the two arms of the interferometer, which will be discussed in section \ref{sec:Recombination}.
\subsection{Phase difference}
\label{sec:phase difference}
Here we analyze the phase difference introduced during the splitting process when the two split solitons have the same number of atoms. Performing a detailed analysis of the phase evolution during the splitting of a soliton colliding with a Rosen--Morse potential barrier, we observe a strong dependence on the width of the barrier, the velocity of the incident soliton and the nonlinearity. These dependences go beyond the one-soliton solution (Eqs. (\ref{soliton}) and (\ref{phase_soliton}) in Appendix A) of the GPE and require considering the $n$-soliton solution obtained by Zakharov and Shabat \cite{ZAKHAROV_1972,Gordon_1983}, which shows that neighboring solitons make their presence felt through phase and position shifts (Eq. (\ref{phaseshift}) in Appendix A). In fact, we find that the phase difference introduced during the splitting of a matter wave bright soliton into two solitons by colliding with a potential barrier arises from two main sources: the interaction between the soliton and the barrier, and the interaction between the reflected and transmitted solitons. The influence of the soliton-soliton interactions when two solitons collide at the position of a potential barrier has been discussed recently \cite{Gardiner_2012}.
\subsubsection{High incident velocities}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.50\textwidth]{Fig_5.pdf}
\caption{(Color online) Phase difference between the transmitted and reflected solitons as a function of the mean of the absolute values of the velocities of the two split solitons for a fixed transmission coefficient of $0.5$, high velocities of the incident soliton and for different values of the width of the barrier and a fixed $g_{1D}/\hbar=25.74$ $\mu$m/ms (a) and for different values of the nonlinear interactions and a fixed $\sigma=0.085$ $\mu$m (b). Solid lines correspond to the semi-analytical fit given by Eq. (\ref{phase}) with the parameters $C_i$ ($i=1,2$) adjusted numerically.}
\label{fig:phase2}
\end{figure}
Fig. \ref{fig:phase2} shows the phase difference between transmitted and reflected solitons for a fixed transmission coefficient, $T=0.5$, as a function of the mean of the absolute values of the velocities of the two split solitons, $\overline{\Delta v}$, for high incident velocities and for different widths of the barrier and a fixed nonlinearity (Fig. \ref{fig:phase2} (a)) and for different nonlinearities and a fixed width of the barrier (Fig. \ref{fig:phase2} (b)). We compute the phase difference of the two split solitons when they are separated by $10$ $\mu$m in order to avoid the self-interferences that appear in the reflected soliton just after the splitting. We observe that the phase difference increases as $\overline{\Delta v}$ increases, and its growth depends on the width of the barrier (Fig. \ref{fig:phase2} (a)). For a fixed width of the barrier (Fig. \ref{fig:phase2} (b)), the phase difference introduced during the splitting process in the case of high incident velocities increases as the nonlinearity increases. Taking into account these dependences on the parameters of the system and the phase shift due to the presence of two neighboring solitons, we approximate the phase difference introduced during the splitting of a matter wave bright soliton by interacting with a Rosen--Morse barrier for high velocities of the incident soliton (solid line of Fig. \ref{fig:phase2}) as:
\begin{equation}
\label{phase}
\Delta\phi(\overline{\Delta v})=-2\arctan\left(\frac{g_{1D}}{2\hbar \overline{\Delta v}}\right) + C_1 + C_2\sqrt{\overline{\Delta v}}
\end{equation}
where $C_i$, with $i=1,2$, are independent of $\overline{\Delta v}$ but depend on the parameters of the system, and in our case have been adjusted numerically.
The first term on the right hand side of Eq. (\ref{phase}) corresponds to the phase shift associated with the soliton-soliton interaction (derived from Eq. (\ref{phaseshift}) in Appendix A), which is highly affected by $g_{1D}$. The second term provides a phase difference due to the interaction of the incident soliton with the barrier; it does not depend on $\overline{\Delta v}$ but in general is strongly affected by the nonlinearity, growing as the nonlinearity increases. The third term depends on $\sqrt{\overline{\Delta v}}$, and its growth is determined by $C_{2}$, which is highly affected by the width of the barrier. Our results recover the analytical phase difference predicted for a delta potential barrier in the limit of $\sigma\rightarrow0$ and very high incident velocities (see Fig. \ref{fig:phase2} (a) with $\sigma=0.011$ $\mu$m). In this limit, the first term of Eq. (\ref{phase}) tends to zero due to its inverse dependence on $\overline{\Delta v}$, the last term, which depends on the width of the barrier, also tends to zero, and only the term $C_1$ remains, which tends to $\pi/2$ for high enough incident velocities, thus retrieving the delta potential barrier behavior \cite{Holmer_2007,Holmer_20071}.
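The fit of Eq. (\ref{phase}) is straightforward to evaluate once the constants are known. In the sketch below $C_1$ and $C_2$ are the numerically adjusted fit constants (the values used in the assertions are placeholders, not the paper's):

```python
import numpy as np

# Semi-analytical phase-difference fit between transmitted and reflected
# solitons; C1, C2 are fit constants adjusted numerically.  hbar = 1 here.
def dphi(dv, g1d, C1, C2, hbar=1.0):
    return -2.0 * np.arctan(g1d / (2.0 * hbar * dv)) + C1 + C2 * np.sqrt(dv)
```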
\subsubsection{Low incident velocities}
Here we focus on velocities of the incident soliton and widths of the barrier compatible with the range of parameter values available in current experimental setups \cite{Hulet}. In this regime, we obtain similar general dependences on the width of the barrier and the nonlinearity as the ones described for high incident velocities, but with a considerable deviation with respect to the semi-analytical fit given by Eq. (\ref{phase}). This may be due to the increase of the interaction time between the soliton and the barrier for low velocities. As for the case of high velocities of the incident soliton, the width of the barrier determines the growth of the phase difference with the mean of the absolute values of the velocities of the two split solitons (Fig. \ref{fig:phase3} (a)), while the nonlinearity mainly affects the arctangent dependence of the phase difference, as shown in Fig. \ref{fig:phase3} (b). From Fig. \ref{fig:phase3} (b) we can also notice that the range of represented points is shifted to higher values of the mean of the absolute values of the velocities of the split solitons as the nonlinearity decreases. This is related to the slowing down of the split solitons (discussed in section \ref{Velocity of the split solitons}) for large nonlinearities. Thus, the same velocity of the incident soliton, $v_0$, leads to different $\overline{\Delta v}$ for different nonlinearities.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.50\textwidth]{Fig_6.pdf}
\caption{(Color online) Phase difference between the transmitted and reflected solitons as a function of the mean of the absolute values of the velocities of the two split solitons for low velocities of the incident soliton, for a fixed transmission coefficient of $0.5$, and for different values of the width of the barrier and a fixed $g_{1D}/\hbar=25.74$ $\mu$m/ms (a) and for different values of the nonlinear interactions and a fixed width of the barrier $\sigma=0.668$ $\mu$m (b).}
\label{fig:phase3}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.50\textwidth]{Fig_7.pdf}
\caption{(Color online) Phase evolution of a matter wave bright soliton of $^{7}$Li with $4\times10^3$ atoms (solid line) and the two split solitons (dashed and dotted lines) as a function of time during the complete interferometric sequence. The parameter values are: $V_{b}=16.09\hbar\omega_z$, $\omega_{z}=2\pi\times78$ Hz, $\sigma=0.67$ $\mu$m, $\omega_r=2 \pi\times800$ Hz, $d=-20$ $\mu$m and $a_s=-0.16$ nm.}
\label{fig:phase_evolution}
\end{figure}
\section{Recombination}
\label{sec:Recombination}
\begin{figure*}[htbp]
\centering
\includegraphics[width=1.0\textwidth]{Fig_8.png}
\caption{(Color online) Contour plot of the atomic density, $|\Psi(z,t)|^{2}$, during the splitting and recombination processes for an incident soliton of $^{7}$Li with $4\times10^3$ atoms and $a_s=-0.16$ nm centered at $d=-20$ $\mu$m in a harmonic external potential with $\omega_z=2 \pi\times78$ Hz, $\omega_r=2 \pi\times800$ Hz and a Rosen-Morse barrier with height $V_{b}=16.09\hbar\omega_{z}$ and width $\sigma=0.67$ $\mu$m. We plot three different imprinted phases (a) $\varphi=0$ rad, (b) $\varphi=1.7$ rad, (c) $\varphi=2.4$ rad, which produce 92\%, 75\% and 50\% of the initial number of atoms at the right output, respectively.}
\label{fig:recombination}
\end{figure*}
In this section we study the complete evolution of the interferometric sequence described in Fig. \ref{fig:scheme}. Fig. \ref{fig:phase_evolution} shows the phase evolution of the incident soliton (solid line) and of the two equal-sized split solitons (dashed and dotted lines) as a function of time during the interferometric sequence: (i) the incident soliton (solid line) moves towards the potential barrier, during the first $2.8$ ms of the evolution, converting its potential energy into kinetic energy; (ii) the soliton collides with the Rosen--Morse barrier (gray area between $2.8$ and $3.7$ ms) and splits into two solitons with different phases and different velocities (dashed and dotted lines); (iii) the two solitons perform a dipole oscillation in the trap for approximately $5.4$ ms, which provides an oscillatory behavior of the phase, different for each of the solitons due to their different velocities after the splitting (see section \ref{Velocity of the split solitons}); (iv) finally, the two solitons recombine at the position of the barrier (gray area starting at $9.1$ ms), giving two output solitons. The number of atoms at the two outputs of the interferometer depends on the phase difference between the transmitted and reflected solitons at the position of the barrier during the recombination process. We have tested the implementation of the interferometer by means of the phase imprinting method \cite{Dobrek_1999}, instantaneously modifying the phase of one of the arms of the interferometer. Fig. \ref{fig:recombination} (a) shows the density evolution of the incident and split solitons without any imprinted phase. Fig. \ref{fig:recombination} (b) and (c) correspond to the cases of imprinted phases of $\varphi= 1.7$ rad and $\varphi= 2.4$ rad, respectively. Clearly, the number of atoms at the outputs of the interferometer is dominated by the phase difference between the two solitons in the recombination stage.
\section{Conclusions}
\label{conclusions}
We have studied a matter wave bright soliton interferometer composed of a harmonic external potential trap with a Rosen--Morse potential barrier at its center. We have focused on the analysis of the splitting process for the case where the two split solitons have exactly the same number of atoms. First, we have shown that the area of the Rosen--Morse barrier necessary to obtain the equal-sized splitting recovers the delta-barrier behavior for very thin barriers and very high incident velocities. Otherwise, a quadratic behavior of the area of the barrier with the velocity of the incident soliton appears. We have also reported that the velocities of the reflected and transmitted solitons are strongly affected by the nonlinearity, with both solitons being slowed down, and eventually trapped at the position of the barrier, for high enough nonlinearities. In addition, we have found that, in general, the reflected soliton is slower than the transmitted one. We have also characterized the phase difference between the two split solitons. For high velocities of the incident soliton we have derived a semi-analytical fit that reproduces the main dependences on the velocity, the width of the barrier and the nonlinearity for the equal-sized splitting. We have also recovered the delta-barrier behavior in the limit of high incident velocities and extremely thin barriers. Finally, we have analyzed the recombination process, studying first the phase evolution in the full interferometric sequence; we have then tested the performance of the interferometer by introducing a phase difference between its two arms by means of the phase imprinting method, showing that the number of atoms at each of the outputs is strongly affected by the introduced phase difference.
\begin{acknowledgments}
We thank Albert Benseny, Juan Luis Rubio, and Daniel Viscor for fruitful discussions and comments. We acknowledge support from the Spanish Ministry of Economy and Competitiveness under contract FIS2011-23719 and from the Catalan Government under contract SGR2009-00347. J. Polo also acknowledges financial support from the FPI grant with reference BES-2012-053447.
\end{acknowledgments}
\section{Introduction}
For quite a long time there has been a stable interest in the
physics of the massive neutrino. Investigations in this area
touch, to a certain extent, the feasible phenomenon of lepton
mixing and its manifestations in neutrino physics and,
consequently, in astrophysics and cosmology (neutrino
oscillations~[1] and their connection with the problem of the
solar neutrino~[2,3], the massive neutrino decay~[4]
and its influence on the relict radiation spectrum~[5],
the nature of the Supernova 1987A neutrino outburst~[6],
$\beta$-decay and the problem of the 17-keV neutrino~[7]).
On the other hand, it is well known that an intensive
electromagnetic field can significantly influence the properties
of the massive neutrino itself~[8]
and even induce novel lepton transitions with flavor violation,
forbidden in a vacuum~[9].
A curious effect, in our opinion, namely the enhancing influence of the
magnetic field on the probability of the radiative decay
$\nu_i \rightarrow \nu_j \gamma$ ($i \neq j$; $i$~and~$j$
enumerate the definite-mass neutrino species) of
the massive neutrino, was discovered in our recent work~[10]
in the framework of the Glashow-Weinberg-Salam (GWS) theory with
lepton mixing. We stress that lepton mixing, similarly to quark
mixing, appears quite natural if the neutrinos have a
non-degenerate mass spectrum, and, in itself, does not go beyond
the framework of the standard electroweak theory.
In addition, the development of techniques for generating intense
electromagnetic fields and the current possibility of obtaining
waves with a high electromagnetic field strength, namely
${\cal E} \sim 10^{9}$~V/cm, stimulate the investigation of quantum
processes in strong external fields. Indeed, the parameter of the
wave intensity
\begin{equation}
x^2_e = - \; {e^2 a^2 \over m^2_e}
\end{equation}
\noindent (where {\it a} is the amplitude of the wave, $m_e$ is the
electron mass and $e$ is the elementary charge), characterizing
the effect of the electromagnetic wave, should not be neglected.
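For a monochromatic wave the amplitude of the four-potential relates to the field strength as $a = {\cal E}/\omega$, so $x_e \simeq e{\cal E}/(m_e \omega)$ in natural units. A rough SI order-of-magnitude estimate (the optical frequency is an illustrative assumption, not taken from the text):

```python
import math

# Order-of-magnitude estimate of the wave intensity parameter
# x_e = e*E/(m_e*omega*c) in SI units, for the field strength
# E ~ 1e9 V/cm quoted in the text and an assumed optical frequency.
e = 1.602176634e-19          # elementary charge, C
m_e = 9.1093837015e-31       # electron mass, kg
c = 2.99792458e8             # speed of light, m/s
E_field = 1e9 * 1e2          # 1e9 V/cm expressed in V/m
omega = 2 * math.pi * 3e14   # optical angular frequency, rad/s
x_e = e * E_field / (m_e * omega * c)
```

With these numbers $x_e$ comes out of order a few percent, i.e., not yet in the strong-field regime $x_e \sim 1$, but no longer negligible.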
In the present work we investigate the effect of a circularly
polarized wave on radiative decay $\nu_i \rightarrow \nu_j \gamma$
in the framework of the GWS theory with lepton mixing.
\section{The amplitude of the process}
In the lowest order of the perturbation theory, a matrix
element of the radiative decay of the massive neutrino in
the Feynman gauge is described by diagrams of three types,
represented in Fig.1, where double lines imply the influence of
the external field. For the propagators of intermediate particles
(the $W$-boson, charged scalar and charged lepton) exact solutions
are used of the corresponding wave equations in the field of a
monochromatic circularly polarized wave with the four-potential
\begin{equation}
A_\mu = a_{1\mu} \cos\varphi + \xi \; a_{2\mu} \sin\varphi ,
\qquad \varphi = kx
\end{equation}
\noindent where $k^\mu = ( \omega,\vec k )$ is the four-wavevector;
$k^2 = ( a_1 k ) = ( a_2 k ) = ( a_1 a_2 ) = 0, \; a^2_1 = a^2_2 =
a^2$; the parameter $\xi = \pm 1$ indicates the direction of the
circular polarization (left- or rightward). Note that vectors
$\vec a_1$, $\vec a_2$ and $\vec k$ form a right-handed coordinate
system. Provided that $e{\cal E}/m^2_W \ll 1$, the main contributions
are made by the diagrams with the virtual $W$-boson in Fig.$1a$ and
the virtual $Z$-boson in Fig.$1c$. We stress that the diagram
represented in Fig.$1c$ contributes to the amplitude with
$i = j$ only. This is due to the fact that flavour-changing
neutral currents are absent in the Standard Model.
The $S$-matrix element of the given process can be represented
in the following form:
\begin{equation}
S = S_0 + \Delta S
\end{equation}
\noindent where $S_0$ is the well known matrix element of the
radiative decay of the massive neutrino in vacuum [4],
and $\Delta S$ is the contribution, induced by the wave field:
\begin{equation}
\Delta S = {i (2\pi)^4 \over \sqrt{2E_1 V \cdot 2E_2 V \cdot 2q_0 V}}
\sum^{+2}_{n=-2} {\cal M}^{(n)} \delta^{(4)}
\big ( nk + p_1 - p_2 - q \big )
\end{equation}
\noindent Here $p_1, p_2, q$ and $E_1, E_2, q_0$ are the four-momenta
and energies of the initial neutrino, the final neutrino and the photon, respectively.
$n = 0, \pm 1, \pm 2$ is the difference between the numbers of absorbed
and emitted photons of the wave field.
Note that the matrix element of a process in the field of
an electromagnetic wave usually has the form of a sum over $n$ of the
type (4), with $- \infty < n < + \infty$ [11].
The fact that only five values of $n$ are possible in our case is
remarkable and is due to the following reasons. The process $\nu_i \rightarrow
\nu_j \gamma$ is local with the typical scale $\Delta x \le 1/m_f$
($m_{f}$ is the mass of the virtual fermion). In this case the angular
momentum conservation degenerates to spin conservation. Since the total
spin of the particles participating in this process is no greater
than 2 ($1/2 + 1/2 + 1$), $\mid n\mid_{\max} = 2$ is the maximum
difference between the numbers of absorbed and emitted photons of the
external field (the photons of a monochromatic circularly polarized
wave have a definite spin $\xi = \pm 1$). The direct calculation
supports this conclusion. A similar phenomenon has been discovered
before [9] in studies of the effect of a circularly polarized wave
on flavor-changing transitions of the massive neutrinos
$\nu_i \rightarrow \nu_j$ ( $i \neq j$ ) with $\mid n\mid_{\max} = 1$.
Notice that in uniform constant fields only the decay
$\nu_i \rightarrow \nu_j \gamma$ with $m_i > m_j$
is allowed [10]. This is due to the fact that the energy-momentum
conservation law in these fields coincides with that in vacuum.
On the other hand, as follows from expression (4), the external
electromagnetic wave field can also induce the radiative decay with
$m_i \le m_j$, which is forbidden without the field. Indeed, from the
energy-momentum conservation law in the wave field
\begin{eqnarray}
nk + p_1 = p_2 + q \nonumber
\end{eqnarray}
\noindent one obtains the relation
\begin{eqnarray}
m^2_i - m^2_j \ge - \; 2n ( k p_1 ) \nonumber
\end{eqnarray}
\noindent Thus, the radiative decay $\nu_i \rightarrow
\nu_j \gamma$ with $m_i \le m_j$ is possible provided that $n > 0$.
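For the reader's convenience, we spell out the step leading to this bound (it uses only $k^2 = q^2 = 0$ and the mass-shell conditions $p^2_1 = m^2_i$, $p^2_2 = m^2_j$). Squaring the conservation law gives
\begin{eqnarray}
( nk + p_1 )^2 = ( p_2 + q )^2 \quad \Longrightarrow \quad
m^2_i - m^2_j = 2 ( p_2 q ) - 2n ( k p_1 ) , \nonumber
\end{eqnarray}
\noindent and since $( p_2 q ) = E_2 q_0 - \vec p_2 \vec q \ge 0$, the bound $m^2_i - m^2_j \ge - 2n ( k p_1 )$ follows. As $( k p_1 ) > 0$, the right-hand side is negative only for $n > 0$, so a decay with $m_i \le m_j$ indeed requires the absorption of wave photons.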
The exact invariant amplitudes ${\cal M}^{(n)}$ in the expression (4)
have cumbersome forms and will be published elsewhere. Here we
present the amplitudes of this process in the physically more
interesting case of ultrarelativistic neutrinos ($E_\nu \gg m_\nu$).
In this case the amplitudes ${\cal M}^{(n)}$ corresponding to $n \le 0$
are suppressed by powers of the small neutrino mass $m_i$, while
the other amplitudes simplify considerably and may be represented
as follows:
\begin{eqnarray}
{\cal M}^{(+1)} & \simeq & {G_F e^2 a^2 \over 2 \sqrt 2 \pi^2} \;
( j k ) \; {( \tilde f^{\ast} F ) \over ( k q )^2} \bigg \lbrace
\sum_\ell \Big ( K_{i \ell} K^\ast_{j \ell} - {1 \over 2} \;
\delta_{i j} \Big ) J_1 ( m_\ell ) \nonumber \\
& + & {3 \over 2} \; \delta_{i j} \sum_q \Big ( 2 T_{3q} Q^4_q \Big )
J_1 ( m_q ) \bigg \rbrace , \nonumber \\
{\cal M}^{(+2)} & \simeq & - \; { G_F \over 16 \sqrt 2 \pi^2} \;
( j F q ) \; {( f^{\ast} F ) \over ( k q )^2} \bigg \lbrace \sum_\ell
\Big ( K_{i \ell} K^{\ast}_{j \ell} + {1 \over 2} \; \delta_{i j} \;
g_\ell \Big ) J_2 ( m_\ell ) \nonumber \\
& - & {3 \over 2} \; \delta_{i j} \sum_q \Big ( Q^3_q g_q \Big )
J_2 ( m_q ) \bigg \rbrace , \\
j_\mu & = & \bar \nu_j ( p_2 ) \gamma_\mu ( 1 + \gamma_5 ) \nu_i
( p_1 ), \nonumber \\
F_{\mu \nu} & = & e ( k_\mu a_\nu - k_\nu a_\mu ) , \qquad
a_\mu = ( a_1 + i \xi \, a_2 )_\mu , \nonumber \\
f_{\mu \nu} & = & e ( q_\mu \varepsilon_\nu - q_\nu \varepsilon_\mu ),
\qquad \tilde f_{\mu \nu} = {1 \over 2} \; \varepsilon_{\mu \nu \alpha
\beta} f_{\alpha \beta} , \nonumber \\
g_{f} & = & 2 T_{3f} - 4 Q_f \sin^2\theta_W , \qquad f = \ell , q ,
\nonumber
\end{eqnarray}
\noindent where index $\ell$ indicates charged leptons
($\ell = e, \mu, \tau$)
and index $q$ indicates quark flavours ($q = u, c, t, d, s, b$),
$T_{3f}$ is the third component of the weak isospin and $Q_f$ is the
electric charge in units of the elementary charge, $\varepsilon_\mu$
is the polarization four-vector of the photon, $m_\ell$ and $m_q$ are
the masses of the virtual lepton and quark, respectively, $K_{i \ell}$
is the unitary lepton mixing matrix, which can be parameterized similarly
to the Kobayashi-Maskawa quark mixing matrix,
\begin{eqnarray}
J_1 ( m_f ) & = & \int \limits^1_0 {dy \over 1-y^2}
\int \limits^\infty_0 d\tau \; \tau^2 j_0 ( j^2_0 + j^2_1 ) \,
\exp ( -i ( \Phi ( m_f ) + \tau ) ) , \nonumber \\
J_2 ( m_f ) & = & \int \limits^1_0 dy \int \limits^\infty_0 d\tau \;
\tau ( j^2_0 + j^2_1 ) \, \exp ( -i ( \Phi ( m_f ) + 2 \tau ) ) , \\
\Phi ( m_f ) & = & { 4 \tau \over 1-y^2} \; {m^2_f \over ( k q )}
\Big [ 1 + x^2_f \Big ( 1 - j^2_0 ( \tau ) \Big ) \Big ] , \nonumber \\
x^2_f & = & - \; Q^2_f \; {e^2 a^2 \over m^2_f} , \nonumber
\end{eqnarray}
\noindent where $j_0 ( \tau ) = \sin\tau / \tau, \;
j_1 ( \tau ) = - j'_0 ( \tau )$
are spherical Bessel functions, $x^2_f$ is the parameter of the wave
intensity. It is easy to see that the expressions for the amplitudes
(5) and (6) have no divergences and are gauge-invariant, as they are
expressed in terms of the electromagnetic field tensor of the photon
$f_{\mu \nu}$ and the external field tensor $F_{\mu \nu}$.
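As a small numerical cross-check (ours, not part of the original text), one can verify that $-j'_0(\tau)$ coincides with the standard spherical Bessel function $j_1(\tau) = \sin\tau/\tau^2 - \cos\tau/\tau$ entering the integrals (6):

```python
import math

def j0(t):
    # spherical Bessel function of order 0: sin(t)/t
    return math.sin(t) / t

def j1(t):
    # closed form of the order-1 spherical Bessel function
    return math.sin(t) / t**2 - math.cos(t) / t

def minus_j0_prime(t, h=1e-6):
    # -j0'(t) via a central finite difference
    return -(j0(t + h) - j0(t - h)) / (2.0 * h)

# the identity j1 = -j0' holds to the accuracy of the finite difference
for t in (0.5, 1.0, 3.0, 10.0):
    assert abs(j1(t) - minus_j0_prime(t)) < 1e-8
```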
Previously we have investigated the radiative decay
$\nu_i \rightarrow \nu_j \gamma$ in a homogeneous magnetic field [10].
It is known that the weak field case with large dynamical parameter
$\chi^2_f = e^2 ( p_1 F F p_1 ) / m^6_f$ corresponds to the crossed
field limit.
This circumstance provides a peculiar kind of test of the correctness
of our calculations in the wave field, since the monochromatic wave
also admits the crossed field limit ($\omega \rightarrow 0$ with
fixed field strengths). As expected, the amplitudes in these
two cases indeed coincide. In fact, the amplitude ${\cal M}^{(+1)}$
in Eqn.(5), describing the ultrarelativistic neutrino decay ($E_\nu
\gg m_\nu$), is consistent in the crossed field limit with the
corresponding expression (8) of Ref. [10]
(${\cal M}^{(+2)} \rightarrow 0$ in this limit).
\section{The decay probability of the ultrarelati\-vis\-tic neutrino}
The decay probability $\nu_i \rightarrow \nu_j \gamma$ in the wave
field
\begin{equation}
w = \sum^{+2}_{n=-2} w^{(n)}
\end{equation}
\noindent in the ultrarelativistic limit ( $E_\nu \gg m_\nu$ ) has
the form:
\begin{eqnarray}
E_\nu w^{(-2)} & \sim & O \bigg ( \alpha \; {G^{2}_{F} m^{10}_\nu
\over m^{4}_{e}} \; x^4_e \bigg ) , \nonumber \\
E_\nu w^{(-1)} & \sim & O \bigg ( \alpha \; {G^{2}_{F} m^{8}_\nu
\over m^{2}_{e}} \; x^2_e \bigg ) , \nonumber \\
E_\nu w^{(0)} & \sim & O \bigg ( \alpha \; G^{2}_{F} \; m^{2}_\nu \;
m^{4}_{e} \; x^{4}_{e} \bigg ) , \\
E_\nu w^{(+1)} & \simeq & {4 \alpha \over \pi } \; {G^{2}_{F} \over
\pi^{3}} \; m^{6}_{e} \; x^{6}_{e} \; \Big \vert K_{i e} K^{*}_{j e}
- {1 \over 2} \; \delta_{i j} \Big \vert^2 \nonumber \\
& \times & \int \limits^{+1}_{-1} dz {1-z \over (1+z)^{2}} \;
\big \vert J_{1} ( m_{e} ) \big \vert^2 , \nonumber \\
E_\nu w^{(+2)} & \simeq & {\alpha \over 4 \pi } \; {G^{2}_{F} \over
\pi^{3}} \; ( p_{1} k ) \; m^{4}_{e} \; x^{4}_{e} \; \Big \vert
K_{i e} K^{*}_{j e} + {1 \over 2} \; \delta_{i j} \; g_{e} \Big
\vert^2 \nonumber \\
& \times & \int \limits^{+1}_{-1} {dz \over 1+z} \; \bigg [
{( 1 - \xi ) \over 2} + {( 1 - z )^{2} \over 4} \; {( 1 + \xi )
\over 2} \bigg ] \big \vert J_{2} ( m_{e} ) \big \vert^2 , \nonumber
\end{eqnarray}
\noindent where $z = \cos\theta$, $\theta$ is the angle between the
photon momentum $\vec q$ and the wavevector $\vec k$ in the
centre-of-mass frame of the $\nu_{j}$ and $\gamma$. Consequently, in the
ultrarelativistic limit $( q k ) \simeq ( p_{1} k ) ( 1 + z ) / 2$
needs to be substituted into expressions (5) and (6). Notice that
there is no singularity in the lower limit $z \rightarrow -1$
because the integrals $J_{1}, J_{2}$ tend to zero sufficiently
fast. Only the contribution of the virtual electron in the loop is kept
in the expressions (8), since this contribution dominates over the
others in the region under consideration,
\begin{eqnarray}
E_\nu \omega < 10^{16} \; (eV)^{2} . \nonumber
\end{eqnarray}
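As an aside (our own numerical remark, not from the original text), the quoted bound is numerically close to the muon mass squared, which suggests it marks the region where the muon and heavier loops remain subdominant; a quick check in Python:

```python
# Muon mass (PDG value, about 105.66 MeV) expressed in eV
M_MU_EV = 1.0566e8

# The bound E_nu * omega < 1e16 eV^2 quoted in the text is numerically
# close to m_mu^2 (our reading of the condition)
m_mu_squared = M_MU_EV**2
```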
It should be pointed out that the decay probabilities (8) practically
do not depend on the mass of the neutrino. Consequently, the radiative
decay probabilities of a lighter neutrino into a heavier one and
of a heavier neutrino into a lighter one are equal.
It is of interest to compare the expressions (8) with the well known
decay probability $\nu_i \rightarrow \nu_j \gamma$ without the field
[4]:
\begin{equation}
w_{0} \simeq {27 \alpha \over 32 \pi } \; {G^{2}_{F} m^{5}_\nu \over
192 \pi^{3}} \; {m_\nu \over E_\nu} \;
\Big ( {m_{\tau} \over m_{W}} \Big )^4 \big \vert K_{i \tau}
K^{*}_{j \tau} \big \vert^{2} .
\end{equation}
This comparison shows that the very small GIM suppression factor
$\sim ( m_{\ell } / m_{W} )^{4}$ is absent in the probability of the
radiative decay (8). Furthermore, for $n = 1, 2$ there is no
suppression caused by the smallness of the neutrino mass.
To illustrate the enhancing influence of the wave field on the
decay probability $\nu_i \rightarrow \nu_j \gamma$, we present a
numerical estimate of the ratio of the decay probability
$\nu_i \rightarrow \nu_j \gamma$ for neutrinos from a high-energy
accelerator in a wave field of laser type to the decay probability in vacuum:
\begin{equation}
R = {w \over w_0} \sim 10^{33} \bigg ( {1 eV \over m_\nu} \bigg )^6
\bigg ( {E_\nu \omega \over m^2_e} \bigg )^5 \Big ( 10^3 x^2_e
\Big )^2 ,
\end{equation}
\noindent where the parameter of the wave intensity (1) can be
represented in the following form:
\begin{equation}
x^2_e \simeq 10^{-3} \bigg ( {{\cal E} \over 10^9 V/cm} \bigg )^2
\bigg ( {1eV \over \omega} \bigg )^2 .
\end{equation}
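To make the scalings in Eqs. (10) and (11) concrete, the following Python sketch (ours; it assumes the electric-field amplitude ${\cal E} = \omega a$, and the sample values ${\cal E} = 10^9$ V/cm, $\omega = 1$ eV, $E_\nu = 100$ GeV, $m_\nu = 1$ eV are purely illustrative) evaluates $x^2_e$ in natural units and the resulting enhancement factor $R$:

```python
HBARC_EV_CM = 1.9732697e-5   # hbar*c in eV*cm, converts V/cm to eV^2
M_E_EV = 5.10999e5           # electron mass in eV

def x_e_squared(field_V_per_cm, omega_eV):
    # x_e^2 = (e*a/m_e)^2 with the wave amplitude a = E/omega,
    # i.e. x_e^2 = (e*E / (omega * m_e))^2 in natural units
    eE = field_V_per_cm * HBARC_EV_CM       # e*E in eV^2
    return (eE / (omega_eV * M_E_EV)) ** 2

def ratio_R(m_nu_eV, E_nu_eV, omega_eV, x2):
    # Eq. (10): R ~ 1e33 (1 eV/m_nu)^6 (E_nu*omega/m_e^2)^5 (1e3*x_e^2)^2
    return (1e33 * (1.0 / m_nu_eV) ** 6
            * (E_nu_eV * omega_eV / M_E_EV**2) ** 5
            * (1e3 * x2) ** 2)

x2 = x_e_squared(1e9, 1.0)        # ~1.5e-3, the ~1e-3 scale of Eq. (11)
R = ratio_R(1.0, 100e9, 1.0, x2)  # huge enhancement over the vacuum rate
```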
Such a significant enhancement of the decay probability
$\nu_i \rightarrow \nu_j \gamma$ is, in our opinion, of great interest,
even in a relatively weak electromagnetic field ($x^2_e \sim 10^{-3}$).
The results obtained in this work may find applications
in astrophysics and cosmology. For example, the crossed process $\gamma
\rightarrow \nu_i \tilde \nu_j$ of photon splitting into a neutrino
pair is possible in the wave field (the amplitude of this process is
described by Eq. (5)). The probability of this process has the form:
\begin{eqnarray}
w & = & \frac{\alpha}{3 \pi} \; \frac{G_F^2}{8 \pi^3} \;
\frac{m_e^4}{q_0} \; x_e^4 \bigg \lbrace 8 m_e^2 x_e^2
\big \vert J_1 ( m_e ) \big \vert^2 \; \big \vert K_{i e} K_{j e}^* -
\frac{1}{2} \; \delta_{i j} \big \vert^2 \nonumber \\
& + & ( q k ) \big \vert J_2 ( m_e ) \big \vert^2 \;
\big \vert K_{i e} K_{j e}^* + \frac{1}{2} \; \delta_{i j}
\big \vert^2 \bigg \rbrace .
\end{eqnarray}
It can be treated as an additional mechanism of energy loss by
stars.
\medskip
\medskip
The authors are grateful to A.V.Borisov and V.Ch.Zhukovskii
for fruitful discussions of the results obtained.
The work was supported by the Russian Foundation of
Fundamental Research under Grant No.93-02-14414.
\newpage
\section{Introduction}
HESS\,J1303$-$631\ is one of the most prominent examples of the so-called very
high energy (VHE; $E>100$\,GeV) $\gamma$-ray\ \rr{``dark''} sources, those
which were detected in the VHE band but do not have counterparts
at other energy bands. It was discovered in 2005
\citep{2005A&A...439.1013A} but the nature of the source was unclear
until 2012, when a detailed study of the energy-dependent morphology
provided evidence of the association with the pulsar PSR\,J1301$-$6305\
\citep{2012A&A...548A..46H}. With the increase of the energy threshold
a very extended emission region ($\sim 0.4$$^{\circ}$$\times 0.3$$^{\circ}$\ at the
$[0.84 - 2]$ TeV band) of VHE $\gamma$-rays\ \rr{``shrinks''} towards the
position of the pulsar at $E> 10$\,TeV. \rr{While at lower energies the peak
position of the extended emission region is significantly offset from the
position of the pulsar, at energies above 10\,TeV the pulsar is coincident with
the peak of the $\gamma$-ray\ emission region \citep{2012A&A...548A..46H}.}
Such an energy-dependent morphology is expected
for ancient pulsar wind nebulae \RR{\citep[PWNe;][and references therein]{2012A&A...548A..46H}}.
Young electrons located close to the pulsar are not cooled yet and, thus,
very energetic. These energetic electrons generate
the VHE emission around the pulsar via inverse Compton (IC)
scattering on the cosmic microwave background photons (CMB).
Older, cooled-down, lower-energy electrons might be spread farther
away from the pulsar for several reasons (e.g. the proper motion of the
pulsar, which leaves older particles behind, and/or particle
diffusion), but they can still produce $\gamma$-rays\ via IC scattering,
albeit at lower energies than the young electrons.
The association of HESS\,J1303$-$631\ with the pulsar is further supported
by the detection of its X-ray counterpart with \textit{XMM-Newton}\
\citep{2012A&A...548A..46H}. The size of the X-ray PWN is much
smaller than the size of the VHE source, extending $2^\prime-3^\prime$ from the
pulsar position towards the centre of the VHE $\gamma$-ray\ emission region. The
much smaller size of the X-ray emitting region can be explained by an effective
synchrotron cooling of older electrons to energies too low to generate
synchrotron emission in the X-ray energy range and/or due to the decreasing
magnetic field strength in the PWN with time
\rr{\citep[see e.g.][]{2009arXiv0906.2644D, 2013ApJ...773..139V}}. The tail-like extension of the
X-ray source might be an indication of the proper motion direction of the
pulsar triggering speculations about its possible birth-place
\citep{2012A&A...548A..46H}.
An analysis of archival data from the
Parkes-MIT-NRAO (PMN) survey at 4.85 GHz
\citep{1993AJ....106.1095C} revealed also
a hint of radio emission at the pulsar position with size
comparable to the X-ray emission region \RR{\citep{2012A&A...548A..46H}}. Data analysis showed a
$\sim3\sigma$ feature with a peak flux \R{density} of \rrr{$30$ mJy/beam} which is at the
detection limit of the survey \RR{and was considered as an upper limit}. This hint of a
radio counterpart of HESS\,J1303$-$631\ triggered new dedicated observations with the
Australian Telescope Compact Array (ATCA), which were conducted in
September 2013. Results of these observations are presented in this paper.
Recently, the counterpart of HESS\,J1303$-$631\ was finally detected at GeV
energies with \textit{Fermi}-LAT\ \citep{2013ApJ...773...77A}. \rr{The morphology of the source
is consistent with a Gaussian of width $0.45^{\circ}$.} The source is
contaminated by the emission of the nearby Supernova remnant (SNR)
Kes\,17 \rr{(G$304.6+0.1$)}, but it is clearly seen above $31$\,GeV. The emission region of the
GeV counterpart of HESS\,J1303$-$631\ is as expected larger than the TeV source, but
the morphology of the emission region is very similar and features an
extension in the same direction as the TeV source. \rr{It should be noted, however,
that the size of the GeV source might be slightly overestimated due to the contamination
from Kes\,17. Kes\,17 is the closest known SNR to the pulsar PSR\,J1301$-$6305\ located at the angular distance of
$37^{\prime}$ \citep{2011ApJ...740L..12W, 2013ApJ...777..148G}. Assuming a distance to the pulsar of $6.6$\,kpc
\citep{2012A&A...548A..46H}, this corresponds to the projected distance between the pulsar and the
SNR of 71\,pc. This large distance makes the association of the SNR with the pulsar very unlikely as it would require an
unrealistically high pulsar velocity of $\sim6,000$\,km/s for the
characteristic age of the pulsar of 11 \RRR{kyr} \citep{2005AJ....129.1993M}.}
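The projected distance and the implied velocity quoted above can be reproduced with a small-angle estimate (a Python sketch of ours, using only the values stated in the text):

```python
import math

KM_PER_PC = 3.0857e13        # kilometres in one parsec
SEC_PER_YR = 3.156e7         # seconds in one year

# values from the text: 37 arcmin separation, 6.6 kpc distance, 11 kyr age
sep_rad = (37.0 / 60.0) * math.pi / 180.0
proj_pc = sep_rad * 6.6e3                          # projected separation, ~71 pc
v_kms = proj_pc * KM_PER_PC / (11e3 * SEC_PER_YR)  # required velocity, ~6000 km/s
```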
\RR{\section{Observations}}
The ATCA\ observations of the \rrr{region surrounding} PSR\,J1301$-$6305\
were conducted on September 5th, 2013. Observations were performed \rr{using the CABB receiver}
with the $1.5$A configuration of the array \rrr{(minimum and maximum baselines of 153\,m and 3000\,m, respectively)}
at $5.5$ and $7.5$\,GHz frequencies and
centred at $\alpha =
13^{\mathrm{h}}02^{\mathrm{m}}10.00^{\mathrm{s}}$, $\delta =
-63^{\circ}05^{\prime}34.8^{\prime\prime}$ (J2000.0), \rr{at the angular distance of about $3^\prime$ from the pulsar position.}
\rrr{The array configuration was chosen in order to match the resolution of the \textit{XMM-Newton}\ observations of $\sim 4^{\prime\prime}$,
while at the same time remaining sensitive to structures comparable in size to the X-ray PWN of $\sim2^{\prime}$. \R{However, the maximum
angular scales to which the observations are sensitive are slightly smaller than the size of the X-ray PWN, namely $\sim1.7^{\prime}$ at 5.5\,GHz and $\sim1.3^{\prime}$ at 7.5\,GHz, respectively.
}
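Both numbers follow from the baseline range via a rough $\lambda/B$ estimate (a sketch of ours; the maximum angular scales quoted in the text presumably follow the observatory's own convention and come out somewhat larger than this naive number):

```python
import math

C_M_PER_S = 2.99792458e8
ARCSEC_PER_RAD = 3600.0 * 180.0 / math.pi

def angular_scale_arcsec(freq_hz, baseline_m):
    # naive lambda/B angular scale for an interferometer baseline
    return (C_M_PER_S / freq_hz) / baseline_m * ARCSEC_PER_RAD

res_55ghz = angular_scale_arcsec(5.5e9, 3000.0)  # max baseline -> ~3.7" beam
las_55ghz = angular_scale_arcsec(5.5e9, 153.0)   # min baseline -> ~1.2' scale
```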
The observations were carried out in two modes: {\it CFB 1M (no zooms)} - a bandwidth of 2 GHz with 2048 1-MHz channels in each intermediate frequency
(IF) band and {\it pulsar binning} - the same but with the addition of pulsar binning according to the provided ephemerides. The on-source scan time was \R{656.5 min.}
The {\it pulsar binning} mode was used in order to be able to correctly subtract the pulsar contribution \R{to the total emission and thus to determine the intrinsic
emission from HESS\,J1303$-$631}. However, since no significant \R{emission corresponding to HESS\,J1303$-$631}
was detected (see Section \ref{data_analysis}), \R{the subtraction of the data taken in pulsar binning mode was not performed and thus these data were not used in this study.}
Primary \rr{(flux density)} and secondary \rr{(phase)} calibrators
were J$1934-638$ and J$1352-63$ respectively. \rrr{The flux density of J$1934-638$ is $5.00$\,Jy at $5.5$\,GHz and $2.97$\,Jy at $7.5$\,GHz.}
\rr{The phase calibrator was observed every \rrr{$\sim30$\,min}. The observation recorded all four linear polarization modes.}
Details of the collected data are listed in Table \ref{archive_data}.
In this paper we also considered archival ATCA\ data obtained during
observations of HESS\,J1303$-$631\, centred at $\alpha = 13^{\mathrm{h}}03^{\mathrm{m}}0.400^{\mathrm{s}}$, $\delta = -63^{\circ}11^{\prime}11.55^{\prime\prime} $
\rr{(the position of the peak VHE $\gamma$-ray\ emission)},
and performed in the $1.384$\,GHz and $2.368$\,GHz bands.
The archival data used in the analysis, taken as part of the
Reinfrank\,et\,al. project C1557, are presented in Table \ref{archive_data}.
Only the archival data taken with all 6 antennas and with observational time
longer than 100 \rrr{min} were used in the analysis. \rrr{The source J$1934-638$ was used as a
primary calibrator with flux densities of $14.95$\,Jy at $1.384$\,GHz and $11.59$\,Jy at $2.368$\,GHz. The sources J$1421-490$ and J$1329-665$ were used
for phase calibration. The maximum angular scale to which observations are sensitive is \R{$\sim18^{\prime}$} at $1.384$\,GHz and \R{$\sim11^{\prime}$} at $2.368$\,GHz.}
\begin{table*}
\centering
\caption{Details of the ATCA\ data of the HESS\,J1303$-$631\ \rrr{region} analysed in
this paper}
\label{archive_data}
\begin{tabular}{l l l l l l l l l}
\hline
\hline
\\
Date & Right Ascension& Declination& Time & Array& Frequencies & \rr{Bandwidth} & \rr{Primary} & \rr{Secondary}\\
& & & [min] & & [MHz] & \rr{[MHz]} & \rr{calibrator} & \rr{calibrator}\\
\hline
\\
2013-Sep-05& $13^\mathrm{h}2^\mathrm{m}10.00^\mathrm{s}$& $-63^{\circ}5^\prime34.8^{\prime\prime}$&\R{656.5}& 1.5A& 5500, 7500& \rr{2048}& \rr{$1934-638$}& \rr{$1352-63$}\\
2006-Oct-25& $13^\mathrm{h}3^\mathrm{m}0.400^\mathrm{s}$& $-63^{\circ}11^\prime11.55^{\prime\prime}$&433.8&EW352&1384, 2368& \rr{128}& \rr{$1934-638$}& \rr{$1421-490$}\\
2007-Mar-13& $13^\mathrm{h}3^\mathrm{m}0.400^\mathrm{s}$& $-63^{\circ}11^\prime11.55^{\prime\prime}$&618.1&750D&1384, 2368& \rr{128}& \rr{$1934-638$}& \rr{$1329-665$}\\
2007-Apr-24& $13^\mathrm{h}3^\mathrm{m}0.400^\mathrm{s}$& $-63^{\circ}11^\prime11.55^{\prime\prime}$&651.8&1.5C&1384, 2368& \rr{128}& \rr{$1934-638$}& \rr{$1329-665$}\\
\hline
\end{tabular}
\end{table*}
\RR{\section{Data Analysis \rr{and Results}}}
\label{data_analysis}
\rr{The data reduction and image analysis was performed using the \texttt{miriad}
\citep{1995ASPC...77..433S} and \texttt{karma} \citep{1995ASPC...77..144G} packages.} The resulting clean
primary beam corrected (restricted to the area of primary beam
response above $30\%$, \rr{which corresponds to the radial distance of $5.8^{\prime}$}) image \rr{(Stokes I)}
at $5.5$\,GHz is shown in Fig.\,\ref{radiomap}.
The pulsar PSR\,J1301$-$6305\ is detected at the position $\alpha = 13^\mathrm{h}01^\mathrm{m}45.678^\mathrm{s} \pm
0.013^\mathrm{s}$, $\delta = -63^{\circ}05^\prime34.85^{\prime\prime}
\pm 0.20^{\prime\prime}$. No significant extended emission
coincident with the pulsar position was detected. The fitted image
root mean square (RMS) noise, calculated using the \texttt{imsad} task, is at
the level of \rrr{$0.011$\,mJy/beam}. \rr{The synthesised beam is an ellipse with
the major and minor axes of $3.79^{\prime\prime}$ and
$3.65^{\prime\prime}$ respectively and the positional angle of $-7.6^{\circ}$}.
No extended emission coincident with the pulsar position is
detected at $7.5$\,GHz either. The fitted image RMS noise,
calculated using the \texttt{imsad} task, is at the level of \rrr{$0.011$\,mJy/beam}. \rr{The major and minor axes of the beam are
$3.06^{\prime\prime}$ and $2.90^{\prime\prime}$,
respectively, and the positional angle is $-6.2^{\circ}$}.
The analysis of the archival data at $1.384$\,GHz and $2.368$\,GHz
which combine all the observations listed in Table \ref{archive_data}
also does not reveal any significant emission coincident with the pulsar.
The observations at $1.384$\,GHz, however, reveal a \rr{detection} of a shell-like structure to
the east of the pulsar position which might
potentially be an SNR \RR{(see discussion in Section \ref{snr})}. Figure \ref{radiomap1384} shows the cleaned and
primary beam corrected (restricted to the area of primary beam
response above $20\%$, \rr{which corresponds to the radial distance of $25.3^{\prime}$}) image at $1.384$\,GHz. The \rrr{SNR candidate G304.4$-$0.2} is
positioned \rr{within} the black circle. \rr{The centre of the structure is at $\alpha = 13^\mathrm{h}04^\mathrm{m}31.1^\mathrm{s}$,
$\delta = -63^{\circ}02^\prime08^{\prime\prime}$.} The fitted image RMS noise is estimated
to be \rrr{$0.697$\,mJy/beam}. \rr{The major and minor axes of the
beam are \rr{$51.3^{\prime\prime}$} and \rr{$43.8^{\prime\prime}$}, respectively, and the positional angle is \rr{$20.9^\circ$}.}
The brightest parts of the \rrr{SNR candidate}
reach the significance of $13\,\sigma$.
\rrr{It is difficult to draw any conclusions about a possible emission from the SNR candidate
at $2.368$\,GHz, as these observations are less sensitive to large scale structures, and only a fraction
of the SNR candidate is located within the primary beam and the image is distorted
by artefacts produced by the strong source MGPS\,J$130237-625718$.}
\begin{table*}
\centering
\caption{New point-like radio sources detected in these observations}
\label{newsources}
\begin{threeparttable}
\rrr{
\begin{tabular}{l l l l l l l l}
\hline
\hline
\\
Identifier & Right Ascension& Declination& $F_{5.5\,\mathrm{GHz}}$& $F_{7.5\,\mathrm{GHz}}$& $F_{1.384\,\mathrm{GHz}}$& $F_{2.368\,\mathrm{GHz}}$ & X-ray counterparts\tnote{a} \\
& & & [$\mu$Jy] & [$\mu$Jy] &[mJy] & [mJy] & \\
\hline
\\
J1301-6306 & $13^\mathrm{h}01^\mathrm{m}40.27^\mathrm{s}$& $-63^{\circ}06^\prime48.6^{\prime\prime}$&127.2& 118.3 & ... & ... & 3XMM J130138.2-630654\tnote{b} \\
J1302-6304 & $13^\mathrm{h}02^\mathrm{m}8.74^\mathrm{s}$& $-63^{\circ}04^\prime52.1^{\prime\prime}$ &1448.0 & 1172.0& ... & ... & none\\
J1301-6304 & $13^\mathrm{h}01^\mathrm{m}55.86^\mathrm{s}$& $-63^{\circ}04^\prime13.5^{\prime\prime}$&191.2& 119.1& ... & ... & none\\
J1300-6311 & $13^\mathrm{h}0^\mathrm{m}9.23^\mathrm{s}$& $-63^{\circ}11^\prime37.8^{\prime\prime}$&...& ...& 11.7 & ... & 3XMM J130006.2-631207\tnote{b}\\
J1304-6258 & $13^\mathrm{h}4^\mathrm{m}36.23^\mathrm{s}$& $-62^{\circ}58^\prime22.7^{\prime\prime}$&...& ...& 11.3 & 5.7 & none\\
\hline
\end{tabular}
\begin{tablenotes}
\item[a] \R{There are also multiple infrared and/or optical sources that are consistent with the position of these radio sources.}
\item[b] \citet{2016A&A...590A...1R}
\end{tablenotes}
}
\end{threeparttable}
\end{table*}
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{5500_paper_new_final_small.pdf}}
\caption{Radio map of the HESS\,J1303$-$631\ \rrr{region} at $5.5$\,GHz
overlaid with
contours of the X-ray PWN (red) as detected with \textit{XMM-Newton}\ \citep{2012A&A...548A..46H}. \RRR{X-ray contours show the peak of the emission close to the pulsar with a trail expanding in the eastern direction.} The blue circle indicates the position of the pulsar PSR J1301-6305. The black
box determines the region used for the flux \R{density} upper limit calculation. \rr{New point-like sources are indicated with index numbers. The synthesised beam is determined by an ellipse with
the major and minor axes of $3.79^{\prime\prime}$ and $3.65^{\prime\prime}$ respectively and the positional angle of $-7.6^{\circ}$.}}
\label{radiomap}
\end{figure}
\begin{figure*}[ht!]
\centering
\resizebox{\hsize}{!}{\includegraphics{1384_map_paper_v3.pdf}}
\caption{Radio map of the HESS\,J1303$-$631\ \rrr{region} at $1.384$\,GHz.
Red contours represent the X-ray PWN as detected with \textit{XMM-Newton}\ and blue contours represent HESS\,J1303$-$631\ as detected by H.E.S.S.\ \citep{2012A&A...548A..46H}. The black circle
indicates the SNR candidate G304.4$-$0.2\ \rr{and the black cross indicates its centre}. \rrr{MGPS-2 catalogue sources \citep{2007MNRAS.382..382M} are shown as red circles, PMN survey sources \citep{1993AJ....106.1095C}, excluding those which are coincident with MGPS-2 sources, are shown as green crosses and two ATCA sources \citep{2015ApJS..217....4S} are shown as blue squares. The two new point-like radio sources reported in this paper are marked with black diamonds.}
\rr{The synthesised beam is determined by an ellipse with the major and minor axes of $51.3^{\prime\prime}$ and $43.8^{\prime\prime}$ respectively and the positional angle of $20.9^{\circ}$.}}
\label{radiomap1384}
\end{figure*}
\RR{\subsection{Other sources in the observed region}}
\rrr{\R{Most of the} sources detected in the field at $1.384$\,GHz (Fig.\,\ref{radiomap1384}), both compact and extended, have counterparts at other radio frequencies
\citep{2007MNRAS.382..382M, 1993AJ....106.1095C, 2015ApJS..217....4S}. However, most of these sources are not classified, except
MGPS J$130422-624859$ (G$304.5-0.1$) which is identified as a HII region \citep{2002MNRAS.335..114M} and MGPS J$130037-625324$ which
is coincident with an infrared bubble \citep{2012MNRAS.424.2442S}. Extended emission at the south-western edge of the field is most probably
related to the very extended source PMN\,J$1259-6337$ \citep{1994ApJS...91..111W}.}
\rrr{Observations at $1.384$\,GHz and $2.368$\,GHz might shed some light on the nature of the unidentified source MGPS\,J$130237-625718$, which is
also visible at other radio wavelengths (e.g. it is detected in the PMN survey). Both $1.384$\,GHz and $2.368$\,GHz \R{data} reveal a complex morphology with
a central source and two lobes,
strongly
suggesting that MGPS\,J$130237-625718$ is an active galactic nucleus (AGN).
\R{Figure\,\ref{agn} shows the map at $2.368$\,GHz overlaid with contours indicating
significant emission from the source at $1.384$\,GHz.}
Only the observations with the best angular resolution (2007-Apr-24)
were used in this analysis.
\R{The fitted image RMS noise is $0.448$\,mJy/beam at $1.384$\,GHz (beam size: major and minor axes of $10.98^{\prime\prime}$ and $9.78^{\prime\prime}$ respectively with
the positional angle of $1.7^\circ$) and $0.322$\,mJy/beam at $2.368$\,GHz (beam size: major and minor axes of $7.15^{\prime\prime}$ and $6.65^{\prime\prime}$ respectively with
the positional angle of $3.7^\circ$).}
A fit to the observed emission at $1.384$\,GHz and $2.368$\,GHz was done
using the \texttt{imfit} task with three Gaussian components. The integrated flux \R{density} from the central component is $136.8 \pm 5.0$\,mJy and $80.8\pm3.7$\,mJy
at $1.384$\,GHz and $2.368$\,GHz, respectively. The flux \R{densities} from the southern and northern components are $260.1\pm7.3$\,mJy and $192.5\pm6.1$\,mJy at $1.384$\,GHz
and $156.1\pm7.4$\,mJy and $99.9\pm5.5$\,mJy at $2.368$\,GHz. Unfortunately, the source is outside the primary beam for observations at $5.5$\,GHz and $7.5$\,GHz
with much better angular resolution.}
\rrr{In the ATCA\ data presented here, 5 new point-like sources were detected in the region of HESS\,J1303$-$631\ (Fig.\,\ref{radiomap} and \ref{radiomap1384}).
Their locations and flux densities estimated using the \texttt{imsad} task are collected in
Table\,\ref{newsources}. Only sources with significance above $10\,\sigma$ at both $5.5$\,GHz and at $1.384$\, GHz were considered.
Each of these sources has one or more potential counterparts in infrared and/or optical catalogues \R{\citep[see e.g.][]{2003yCat.2246....0C, 2010AJ....140.1868W, 2003PASP..115..953B, 2003AJ....125..984M}}.
Two of them, J$1301-6306$ and J$1300-6311$, have X-ray counterparts in the \textit{XMM-Newton}\ catalog
\citep{2016A&A...590A...1R}. The source J$1304-6258$ is actually visible in the MGPS-2 (see Fig.\,\ref{snr_counterpart} right) but is
not listed in the catalogue probably because it is very difficult to separate its emission from the extended emission coincident with G304.4$-$0.2.}
\begin{figure}[h!]
\centering
\resizebox{\hsize}{!}{\includegraphics{agn_2368_cont1384_paper.pdf}}
\caption{\rrr{Radio map of MGPS\,J$130237-625718$ at $2.368$\,GHz overlaid with contours indicating the significance of the emission at $1.384$\,GHz \R{at the level of $10$, $40$ and $80$ times the RMS noise}. The synthesised beam is shown with a black ellipse in the left bottom corner.}}
\label{agn}
\end{figure}
\section{Discussion}
\rrr{The new radio observations reported in this paper were triggered by a hint of a signal ($\sim3\,\sigma$)
from a feature coincident with the X-ray PWN in the analysis of archival data from the PMN survey
at 4.85\,GHz \citep{2012A&A...548A..46H}. This feature is compatible with the \R{resolution} of the survey
of $\sim5^{\prime}$ \R{\citep{1993AJ....106.1095C}}.
These deeper observations at 5.5\,GHz and 7.5\,GHz with ATCA\ were optimised for the detection of the
putative radio PWN with a size comparable to the size of the X-ray PWN ($\sim2^{\prime}$)
while providing a resolution comparable to that of \textit{XMM-Newton}. The assumption
of the size was motivated by the size of the possible radio feature and by the hypothesis that the size of both radio
and X-ray PWNe is constrained by the region of high magnetic field around the pulsar. Indeed, 3D magnetohydrodynamic simulations
of the Crab Nebula \citep{2014MNRAS.438..278P} show that the magnetic field strength close to the termination
shock is an order of magnitude higher than the average magnetic field strength in the rest of the PWN.
Moreover, in the left-behind \RRR{relic} nebula the magnetic field is expected to be relaxed with the magnetic field strength
comparable to the interstellar medium (ISM) magnetic field of about $3\,\mu$G.
However, neither new observations at
5.5/7.5\,GHz nor the analysis of the archival data at 1.384/2.368\,GHz show any evidence of extended emission
coincident with PSR\,J1301$-$6305\ and/or the X-ray PWN.
\R{It should be noted that the largest scales to which the observations at 5.5/7.5\,GHz are sensitive, $1.7^\prime$ at 5.5\,GHz and $1.3^\prime$ at 7.5\,GHz, are slightly smaller than the
size of the X-ray PWN, and therefore a putative radio PWN as large as the X-ray PWN could have escaped detection
at these frequencies. }
\R{However, observations at 1.384/2.368\,GHz allow us to detect structures with an extension up to $\sim18^{\prime}$ (1.384\,GHz) and $\sim11^{\prime}$
(2.368\,GHz). }
Assuming the size of the X-ray PWN as reported in
\citet{2012A&A...548A..46H}, the upper limit on the radio flux
\R{density} at
$1.384$\,GHz was estimated
\rr{at the level of 3 times the RMS noise}
in the region of the X-ray PWN defined by
a box (black box in Figs.\,\ref{radiomap} and \ref{radiomap1384}).
\R{The upper limit on the flux density at $1.384$\,GHz is $2.6$\,mJy.}
However, the size of the putative radio PWN might exceed the size of the largest angular structure resolved in the observations
or even the size of the primary beam. GeV observations \citep{2013ApJ...773...77A}, which reveal a large size
of the PWN ($\sim0.9^\circ$), indicate that a large amount of relatively low-energy relativistic electrons
is spread out to large distances from the pulsar. If the size of the putative radio PWN is constrained
by the existence of relativistic electrons, i.e. the magnetic field is strong enough for efficient
synchrotron emission across the whole PWN, the radio PWN would be at least as large as the GeV
PWN. The same electrons which emit GeV $\gamma$-rays\ via inverse Compton scattering on ambient
photon fields are also responsible for the generation of the radio emission via the synchrotron mechanism
\citep[see e.g.][and references therein]{2013ApJ...773..139V}.}
\RR{\subsection{G304.4$-$0.2\ - an SNR?}}
\label{snr}
\begin{figure*}[ht!]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{atca_snr_karma.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{pmn_snr_karma_ellipse_v5.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{mgps_snr_karma.pdf}
\end{subfigure}
\caption{\rrr{Left: radio map of the G304.4$-$0.2\ region at $1.384$\,GHz. Middle: radio map of the G304.4$-$0.2\ region at $4.85$\,GHz obtained in the PMN survey
\citep{1993AJ....106.1095C}. The blue ellipse shows the extension of the PMN survey source PMN\,J1303$-$6259\ \R{as presented in the catalogue \citep{1994ApJS...91..111W}. The resolution
of this map is $\sim5^\prime$ \citep{1993AJ....106.1095C}}.
Right: radio map of the G304.4$-$0.2\ region at $843$\,MHz obtained in the MGPS-2 survey \citep{2007MNRAS.382..382M}}. \R{The size of the beam is $\sim45^{\prime\prime} \times 50^{\prime\prime}$.}
\R{The black circle in all the panels indicates the extension
of the shell-like structure and the black cross shows its centre. Middle and right images were obtained using the NASA SkyView online tool (\url{http://skyview.gsfc.nasa.gov}).}}
\label{snr_counterpart}
\end{figure*}
\RR{The extended structure, G304.4$-$0.2, detected to the east of the pulsar at 1.384 GHz exhibits a shell-like morphology.
Such a morphology naturally suggests that this might be an SNR.} \rrr{
\RR{G304.4$-$0.2} is coincident with PMN\,J1303$-$6259, an extended source detected in the PMN survey at $4.85$\,GHz \R{\citep{1993AJ....106.1095C, 1994ApJS...91..111W}}.
The emission from PMN\,J1303$-$6259\ was fitted with a two-dimensional asymmetric Gaussian with major and minor axis widths of $\sigma_\mathrm{x} = 14.28^{\prime}$ and
$\sigma_\mathrm{y} = 5.88^{\prime}$ with a position angle of $1.5^{\circ}$ measured eastwards from the north direction \R{\citep{1993AJ....105.1666G}}. The total flux \R{density} from the source is
$235\pm14$\,mJy at $4.85$\,GHz \R{\citep{1994ApJS...91..111W}}. Although PMN\,J1303$-$6259\ is coincident with the western part of the detected shell-like structure the map also reveals
extended radio emission coincident with the southern part of the shell (see Fig.\,\ref{snr_counterpart} middle). Overall, the $4.85$\,GHz map shows good agreement
with the $1.384$\,GHz morphology of the SNR candidate. To estimate the spectral index of the western part of the SNR candidate, we smoothed
the $1.384$\,GHz map with a Gaussian with a width equal to the angular resolution of the PMN survey of $4.2^\prime$ and fitted the region
of PMN\,J1303$-$6259\ assuming the same extension of the source as obtained for PMN\,J1303$-$6259\ using the \texttt{miriad} task \texttt{imfit}. The obtained
flux \R{density} at $1.384$\,GHz is $398\pm3$\,mJy resulting in a spectral index of $\alpha = 0.42\pm0.05$\footnote{\R{Flux density, $S_\nu$,
scales with frequency, $\nu$, as $S_{\nu} \propto \nu^{-\alpha}$}}, which is
in good agreement with the range of values observed for SNRs \RR{from $\sim0.2$ to $\sim0.8$ with a peak at around $0.5$ \citep[][]{2011Ap&SS.336..257R, 2014BASI...42...47G}}
and very close to the canonical $\alpha = 0.5$ expected from diffusive shock acceleration at strong shocks with a compression
ratio of 4 \citep[see e.g.][and references therein]{2015A&ARv..23....3D}.}
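As a simple cross-check (our own arithmetic, not part of the original fit), the quoted index follows directly from the two flux densities:

```latex
% With S_nu \propto nu^{-alpha}, two flux densities measured at two
% frequencies determine the spectral index:
\alpha \;=\; \frac{\ln\left(S_{1.384}/S_{4.85}\right)}{\ln\left(4.85/1.384\right)}
       \;=\; \frac{\ln(398/235)}{\ln(3.50)} \;\simeq\; 0.42,
```

consistent with the fitted $\alpha = 0.42\pm0.05$.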
\rrr{Neither the compact source catalogue nor the SNR catalogue of the second epoch Molonglo Galactic Plane
Survey (MGPS-2) at $843$\,MHz \citep{2007MNRAS.382..382M, 2014PASA...31...42G} provide any counterpart for
the SNR candidate.
However, the MGPS-2 image
of the G304.4$-$0.2\ region at $843$\,MHz reveals a faint extended radio emission coincident with the SNR candidate,
roughly following the $1.384$\,GHz contours in the west and south regions but also exhibiting emission towards north-east
(Fig.\,\ref{snr_counterpart}, right). The peak flux \R{density} reaches $17.5$\,mJy/beam, well above the sensitivity of the
survey of $2$\,mJy/beam. The image is strongly distorted by artefacts produced by nearby bright sources which makes it
impossible to classify the morphology of the extended emission. This might be the reason why the source does not
appear in the list of SNR candidates detected in the MGPS-2 survey, as the search criteria require a
shell-like or composite morphology \citep{2014PASA...31...42G}.}
\rr{Infrared and optical} observations, e.g. the Two Micron All-Sky Survey (2MASS)
in the H-band \citep[$1.65\,\mu$m;][]{2006AJ....131.1163S}, do not show any
extended emission from the region of \rrr{G304.4$-$0.2}. This \rrr{further} suggests
that the radio emission from the \rrr{SNR candidate} is most probably
non-thermal as expected for SNRs.
\RR{Available X-ray observations\footnote{\RR{All observations of the region of interest available at
NASA's HEASARC archive (https://heasarc.gsfc.nasa.gov/docs/archive.html) were examined
including \textit{XMM-Newton}, \textit{Swift}, \textit{ROSAT}, \textit{Chandra}, and \textit{ASCA}.}} of this region show no evidence
of large-scale structures coincident with G304.4$-$0.2. This, however, does not
contradict the SNR hypothesis. The absence of the non-thermal X-ray emission can
be explained by a \RRR{potentially} old age of the SNR candidate (see below). \RRR{Indeed,
X-ray synchrotron emission from SNRs requires relatively high shock velocities of
$\gtrsim2000$\,km/s to accelerate electrons to sufficiently high energies, and these shock
velocities are \RRRR{believed to be associated with} young SNRs of $\sim1000$\,yr \citep[see e.g.][]{2012A&ARv..20...49V}.
Old age \RRRR{might} also be a reason for the lack of thermal X-ray emission. The slowing
down of the shock to $<200$\,km/s in old SNRs of $\gtrsim10$\,kyr results in cooling
of the post-shock region to temperatures lower than required for the X-ray emission
\citep[see e.g.][]{2012A&ARv..20...49V}. Some old SNRs, however, feature thermal X-ray emission from their interior which might be due to, for example, interaction with dense cloudlets, which survive the forward shock crossing and slowly evaporate inside the remnant due to saturated thermal conduction \citep[see e.g.][]{1991ApJ...373..543W, 2011A&A...525A.154S}. Another possible reason for the lack of the thermal emission might be \RRR{a} low density of the ambient medium around the remnant. How low this density needs to be is, however, model-dependent and depends on whether ionization equilibrium is sustained. Non-equilibrium ionization models coupled to efficient particle acceleration show that X-ray thermal emission would dominate over synchrotron emission only for relatively high densities of about $1$\,cm$^{-3}$ \citep{2007ApJ...661..879E}. The estimate of the density also depends on the distance to the remnant, which is not clear in this case. \RRRR{Finally, the lack of detection of X-ray emission from the source, both thermal and non-thermal, might be due to insufficient sensitivity or exposure.}}
At $\gamma$-ray\ energies most of the field of view is covered
by HESS\,J1303$-$631\ and it is impossible to distinguish any emission associated with the
possible SNR.}
\RR{It should be noted, however, that other possible interpretations of the detected
extended emission cannot be firmly ruled out. Due to the faint emission, the image of
G304.4$-$0.2\ appears to be patchy, making it difficult to firmly identify the shell-like morphology.
The emission could also be contaminated by unrelated point sources and the image might be
distorted by artefacts created by bright sources in the field of view. Since the shell-like
morphology of the source is the strongest argument in favor of its SNR nature, we have to treat
our conclusions with caution. The observed large scale structure might still be
a composition of individual background sources and/or HII regions.
}
\RR{\subsection{Possible birth place of PSR\,J1301$-$6305}}
\rrr{\RR{If G304.4$-$0.2\ is indeed an SNR it} could be
a birth place of the pulsar PSR\,J1301$-$6305.}
\rrr{This hypothesis is supported by the direction of the \RR{X-ray} ``tail'' which roughly points to the position of the SNR candidate (see Fig.\,\ref{radiomap1384}).}
The pulsar is located outside the shell-like structure, which means
that, if the SNR candidate is indeed the birth place of PSR\,J1301$-$6305, the pulsar has already escaped the remnant and continues to
propagate in the ambient medium. \rr{The larger size of the PWN does not contradict this hypothesis as the
present time nebula could have formed after the pulsar escaped the SNR. While the pulsar is still inside the SNR, its
nebula is strongly disrupted by the reverse shock of the remnant \citep[see e.g.][]{2001ApJ...563..806B, 2013A&A...551A.127V}
and at the moment of interaction with the shell it is very small. Moving outside the remnant, the pulsar builds up a new nebula which can become very large
due to the proper motion of the pulsar, i.e. the electrons left behind along its trajectory, and the diffusion of electrons in the ambient medium.
When escaping the SNR, the pulsar should also damage the shell of the remnant.}
\rrr{Although the emission in the direction of the pulsar is slightly fainter exhibiting a gap in the shell,
there is no \RR{clear} evidence of distortion.} This can be naturally explained if the pulsar is not moving in the projected plane but its
velocity has a considerable component perpendicular to the projected plane. In this case the distorted part
of the shell is facing the observer and is thus not visible. The angular distance between the pulsar and
\rr{the centre of} the SNR candidate of about \rr{$19^{\prime}$} corresponds to a projected distance of $36$\,pc
assuming the distance to the pulsar of $6.6$\,kpc. This corresponds
to a \rr{projected} pulsar velocity of \rrr{$V_{\mathrm{p}}\simeq 3100$\,km/s for the characteristic age of the pulsar of 11 kyr \citep{2005AJ....129.1993M}}. \rrr{This velocity
would make PSR\,J1301$-$6305\ the fastest known pulsar. The highest pulsar velocity detected so far is
$\sim1600$\,km/s \citep{1998ApJ...505..315C}.}
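The quoted projected velocity follows from a simple back-of-the-envelope estimate (our own arithmetic, using the separation, distance, and characteristic age given above):

```latex
% Projected separation and velocity, with 1 pc = 3.086e13 km
% and 1 kyr = 3.156e10 s:
\ell \;\simeq\; 19^{\prime}\times 6.6\,\mathrm{kpc} \;\simeq\; 36\,\mathrm{pc},
\qquad
V_{\mathrm{p}} \;\simeq\; \frac{\ell}{\tau_{\mathrm{c}}}
 \;\simeq\; \frac{36\,\mathrm{pc}}{11\,\mathrm{kyr}}
 \;\simeq\; 3\times10^{3}\,\mathrm{km\,s^{-1}}.
```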
\rr{However, there are indications that
some pulsars might be much faster\rrr{, with velocities comparable to the estimate presented here for PSR\,J1301$-$6305}.
The estimate of the kick velocity of the possible pulsar IGR\,J$11014-6103$ is
$2400-2900$\,km/s \citep{2012ApJ...750L..39T}, but the source nature cannot be unambiguously proven yet, as no pulsations
have been detected so far.}
\rrr{Also, the real age of PSR\,J1301$-$6305\ might be higher than the characteristic age if the braking index is lower than $3$,
which is the case for 8 out of 9 pulsars for which the braking index has been measured reliably \citep{2015MNRAS.446..857L, 2016ApJ...819L..16A}.
In this case the estimate of the pulsar velocity would be lower.}
\RR{
Alternatively, the real age of the pulsar can be estimated as the age of its SNR.
In case G304.4$-$0.2\ is indeed an SNR and the birth place of the pulsar PSR\,J1301$-$6305, its size can
provide an estimate for the pulsar age.
Assuming a distance to the pulsar of $6.6$\,kpc \citep{2012A&A...548A..46H}, the angular size of the SNR candidate of about $16^{\prime}$
corresponds to 30\,pc in diameter. The Sedov solution \citep{1959sdmm.book.....S}, which describes
the hydrodynamical expansion of an SNR in the adiabatic stage of evolution into the homogeneous medium,
provides an estimate of the SNR age for a given size of the remnant
\begin{equation}
t_{\mathrm{age}} = 16 \left(\frac{E}{10^{51}\,[\mathrm{erg}]}\right)^{-1/2}\left(\frac{n_{\mathrm{ISM}}}{1\,[\mathrm{cm}^{-3}]}\right)^{1/2}\left(\frac{R}{15\,[\mathrm{pc}]}\right)^{5/2}\,[\mathrm{kyr}],
\end{equation}
where $E$ is the explosion energy, $n_{\mathrm{ISM}}$ is the number density of the interstellar
medium and $R$ is the radius of the remnant. For the fiducial $E=10^{51}$\,erg and $n_{\mathrm{ISM}}=1$\,cm$^{-3}$, a radius of $R=15$\,pc yields $t_{\mathrm{age}}\simeq16$\,kyr. This value is somewhat higher than
the characteristic age of the pulsar of 11 kyr \citep{2005AJ....129.1993M}, but does not contradict it if the
braking index is lower than 3 (see above). If we assume that the real age of the pulsar is 16 kyr then
the projected velocity would be $2100$\,km/s, which is still very high.
It should be noted, however, that the SNR age estimate depends on the ambient medium density, which is often
considerably lower than $1\,\mathrm{cm}^{-3}$ (even by \RRR{orders of} magnitude), and it is also very sensitive to
the estimated physical size of the SNR. Therefore, this age estimate should be taken with caution.}
\RR{A pulsar with such a high velocity will be moving supersonically in the interstellar medium, creating a
bow shock. A bow shock driven through the neutral gas can generate optical emission in the Balmer lines
at the forward shock \citep{2001A&A...375.1032B, 2002A&A...393..629B}, and such emission has already been
discovered for a few pulsars in H$_\alpha$. There is no evidence for such emission around PSR\,J1301$-$6305, which, however, does not
necessarily contradict our hypothesis, since many pulsars are believed to be moving supersonically while
only for a few of them is a bow-shock structure detected at optical wavelengths. It should also be noted that
H$_\alpha$ bowshocks and X-ray tails are rarely seen together \citep{2015SSRv..191..391K}.
Inside the bow shock, pulsar wind particles will be accelerated at the termination shock, subsequently generating non-thermal
synchrotron emission which can be detected in radio and X-rays \citep{2006ARA&A..44...17G}. Although no
extended radio emission associated with PSR\,J1301$-$6305\ was detected (which may simply be due to the potentially
large size of the radio nebula), the X-ray nebula does exhibit a bow-shock
morphology with a tail pointing in the direction of G304.4$-$0.2\ strongly suggesting that the pulsar is moving
with high velocity.}
\section{Summary}
\label{summary}
ATCA\ observations of the HESS\,J1303$-$631\ \rr{region} at $5.5$\,GHz and $7.5$\,GHz do not
reveal any significant extended emission associated with PSR\,J1301$-$6305. Archival
$1.384$\,GHz and $2.368$\,GHz data also do not show any evidence of a radio
counterpart of HESS\,J1303$-$631. Non-detection of this evolved PWN at radio wavelengths
suggests that either the putative radio PWN is comparable in size to the
GeV counterpart and, thus, larger than the \rrr{largest reliably imaged structure
and even the primary beam} of radio observations or that
the magnetic field is rather low, which is in agreement with the evolved
PWN identification as the magnetic field is expected to decrease with time
in PWNe. \RRR{The comparison of the X-ray emission to the TeV emission implies an
average magnetic field in the PWN of $\sim 0.2$--$2\,\mu$G \citep{2012A&A...548A..46H}, which, however, does not exclude the possibility of an enhanced magnetic field around the pulsar.}
Archival $1.384$\,GHz observations reveal a detection of an \RRRR{extended} structure \R{centred} at the angular
\R{distance} of \rr{$19^\prime$} from the pulsar. This \RRRR{extended} structure might be an
SNR and a potential birth place of the pulsar. If this is the case \R{then}
the projected velocity of the pulsar \R{would be} \rrr{$3100$\,km/s \RR{assuming the characteristic age
of the pulsar. This would make PSR\,J1301$-$6305\ the fastest known pulsar. However, the uncertainty of the
true age of the pulsar can significantly change this estimate.}}
\acknowledgements
We would like to thank the anonymous referee for valuable comments which strongly improved the paper.
\R{We would like to acknowledge the help of Marek Jamrozy (Astronomical Observatory of the Jagiellonian University of Krak\'ow, Poland) and Michael Bietenholz (Hartebeesthoek Radio Observatory, South Africa; York University, Toronto, Canada) on understanding certain aspects of the ATCA analysis performed in the paper.}
The Australia Telescope Compact Array is part of the Australia Telescope National Facility which is funded by the Australian Government for operation as a National Facility managed by CSIRO.
This paper includes archived data obtained through the Australia Telescope Online Archive (\url{http://atoa.atnf.csiro.au}).
\bibliographystyle{aa}
\section{Motivation and Overview}\label{sec:introduction}
Since their creation in the 1940s, formal languages and grammars have found many applications in, for instance, programming languages and artificial intelligence.
With the rapid development of the Internet, in particular, one-way finite automata may be viewed as a meaningful model of hand-held communication devices with a small amount of fast cache memory, which process incoming information streamed through a communication channel.
Such types of automata are closely associated with the classical notions of {\em regular languages} and {\em context-free
languages}, which were also classified respectively as
type 3 and type 2 by Chomsky. These notions, which
are foundational to formal language theory,
have been a key subject of many core undergraduate
curricula in computer science. The beauty of these notions comes from their simplicity and applicability.
Among the most useful tools in formal language theory are the so-called {\em pumping lemmas} \cite{BPS61} for regular and context-free languages. Most undergraduate textbooks in this classic theory describe these lemmas (and their variants) as a powerful but fundamental tool. A standard methodology of proving the non-regularity of a target language, for instance, is to apply the pumping lemma to the language, deriving a desired contradiction after first assuming the regularity of the language (see, \textit{e.g.},\hspace*{2mm} \cite{HU79} for such an argument). For certain natural properties of languages, however, the pumping lemmas are not in the most useful form, and we thus need to develop another form of lemma to prove those properties. A typical example concerns ``advised computation.''
One way of enhancing the power of machine's language recognition is to provide a piece of {\em advice}, which is a string depending only on the length of input strings, to the original input strings. Karp and Lipton \cite{KL82} formulated a mechanism of such advice for polynomial time-bounded computation. Inspired by the work of Karp and Lipton, Damm and Holzer \cite{DH95} and Tadaki, Yamakami, and
Lin \cite{TYL04} discussed the computational power of finite automata when advice is provided as supplemental information to their single read-only input tapes. See Section \ref{sec:notation} for the formal definition of automata that take advice. As in \cite{TYL04}, we use the notation $\mathrm{REG}/n$ to denote the family of languages recognized by deterministic finite automata if advice of size exactly $n$ is provided {\em in parallel} to an input of size $n$. Similarly, we define $\mathrm{CFL}/n$ for the language family characterized by nondeterministic pushdown automata with such advice.
As is known, the advised family $\mathrm{REG}/n$ is quite different from $\mathrm{REG}$, the family of regular languages.
Typical context-sensitive languages, such as
$L_{3eq} = \{a^nb^nc^n \mid n\geq0\}$, naturally fall into this advised family $\mathrm{REG}/n$. On the contrary, certain deterministic context-free languages, such as $Equal = \{w\in\{0,1\}^*\mid \#_0(w)=\#_1(w)\}$ and $GT = \{w\in\{0,1\}^*\mid \#_0(w)>\#_1(w)\}$, where $\#_a(w)$ denotes the number of $a$'s in $w$, do not belong to $\mathrm{REG}/n$.
{\em How can we prove this fact?}
Now, let us try to prove that, for example, $Equal$ is not in $\mathrm{REG}/n$ using the pumping lemma for regular languages. Assume, towards a contradiction, that $Equal$ is characterized by a certain regular language $L$ with advice strings over an alphabet $\Gamma$.
Now, we apply the pumping lemma by picking a pumping-lemma constant $m$ and then choosing an appropriate string $w$
(together with an advice string) in $L$
of length at least $m$. Using its decomposition $w=xyz$, the pumping lemma pumps this chosen string $w$ with advice given in parallel to $w$, generating a new series of strings of the
form $xy^iz$. These pumped strings must belong to $L$; however, are they still in $Equal$ with the appropriate advice strings?
At this point, we encounter a serious flaw in our argument.
The pumping process unwisely pumps the original string as well as the valid advice string.
Since the pumping lemma forces the size of pumped strings to change, their associated pumped advice might no longer be ``valid'' advice for $Equal$.
Therefore, we cannot conclude that the pumped strings are actually in $Equal$ with advice. To avoid this pitfall, we need to develop a new lemma, which keeps advice valid before and after the application of the lemma.
In this paper, we shall present such a desired lemma, which we refer to as the {\em swapping lemma}, encompassing an essential nature of regular languages. In many cases, this new lemma is as powerful as the pumping lemma is. As examples, we shall later demonstrate, by a direct application of the swapping lemma, that the context-free language $Pal = \{ww^{R} \mid w\in\{0,1\}^*\}$ (even-length palindromes), where $w^R$ is $w$ in reverse, and the aforementioned languages $Equal$ and $GT$ cannot be in $\mathrm{REG}/n$ (and therefore, they are not in $\mathrm{REG}$). The last two examples show a separation of $\mathrm{DCFL}$, the family of deterministic context-free languages, from the advised class $\mathrm{REG}/n$. This immediately yields the class separation
$\mathrm{DCFL}/n\neq \mathrm{REG}/n$, which has not been known so far. Our proof of the swapping lemma for regular languages is considerably simple and can be obtained from a direct application of the pigeonhole principle.
Likewise, we also introduce a similar form of swapping lemma for context-free
languages to deal with the non-membership to the advised family $\mathrm{CFL}/n$.
With help of this swapping lemma, as an example, we prove that the language $Dup=\{ww\mid w\in\{0,1\}^*\}$ (duplicating strings) is not in $\mathrm{CFL}/n$ (and therefore not in $\mathrm{CFL}$, the family of context-free languages).
Another (slightly contrived) example is the language $Equal_{6}$ consisting of all strings $w$ over an alphabet of $6$ symbols together with a special separator $\#$ for which each symbol except $\#$ appears the same number of times in $w$.
Since $Equal_{6}$ is in the complement of $\mathrm{CFL}$, denoted $\mathrm{co}\mbox{-}\mathrm{CFL}$, we obtain a strong separation between $\mathrm{CFL}/n$ and $\mathrm{co}\mbox{-}\mathrm{CFL}/n$; in other words,
$\mathrm{CFL}/n$ is not closed under complementation.
Our proof of the swapping lemma for context-free languages is quite different from a standard proof of the pumping lemma for context-free languages. Rather than using context-free grammars, our proof deals with a certain restricted form of nondeterministic pushdown automata to track down their behaviors in terms of transitions of first-in last-out stack contents.
The main purposes of this paper are summarized as follows: (i)
to introduce the two swapping lemmas for regular and context-free languages, (ii) to give their proofs by exploiting certain structural properties of finite automata, and (iii) to demonstrate the strong separations between $\mathrm{DCFL}/n$ and $\mathrm{REG}/n$ and between $\mathrm{CFL}/n$ and $\mathrm{co}\mbox{-}\mathrm{CFL}/n$.
We hope that the results of this paper will contribute to fundamental progress in formal language theory and revive fresh interest in the basic notions of regular languages and context-free languages.
\section{Notions and Notation}\label{sec:notation}
The {\em natural numbers} are nonnegative integers and we write $\mathbb{N}$ to denote the set of all such numbers. For any two integers $m,n$ with $m\leq n$, the notation $[m,n]_{\mathbb{Z}}$ stands for the integer interval
$\{m,m+1,m+2,\ldots,n\}$.
An {\em alphabet} is a nonempty finite set and our alphabet is denoted by either $\Sigma$ or $\Gamma$. A {\em string} over an alphabet $\Sigma$ is a series of symbols from $\Sigma$. In particular, the {\em empty string} is always denoted $\lambda$.
The notation $\Sigma^*$ expresses the set of all strings over $\Sigma$.
The {\em length} of a string $w$, denoted $|w|$, is the total number of symbols in $w$. For each length $n\in\mathbb{N}$, we write $\Sigma^n$ (resp., $\Sigma^{\leq n}$) for the set of all strings over $\Sigma$ of length exactly $n$ (resp., at most $n$). For any non-empty string $w$ and any number $i\in[0,|w|]_{\mathbb{Z}}$, $pref_{i}(w)$ denotes the first $i$ symbols of $w$; namely, the substring $s$ of $w$ such that $|s|=i$ and $sx=w$ for a certain string $x$. In particular, $pref_{0}(w)=\lambda$ and $pref_{|w|}(w)=w$.
Similarly, let $suf_{i}(w)$ be the last $i$ symbols of $w$.
For any string $x$ of length $n$ and two arbitrary indices $i,j\in[0,n]_{\mathbb{Z}}$ with $i\leq j$, the notation $midd_{i,j}(x)$ denotes the string obtained from $x$ by deleting the first $i$ symbols and the last $n-j$ symbols of $x$; thus, $midd_{i,j}(x)$ contains exactly $j-i$ symbols taken from $x$. As a special case, $midd_{i-1,i}(x)$ expresses the $i$th symbol of $x$. It always holds that $x= pref_{i}(x)\,midd_{i,j}(x)\,suf_{n-j}(x)$.
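In terms of ordinary 0-indexed string slicing, these three operations can be illustrated as follows (a small sketch of ours; the function names simply mirror the notation above):

```python
def pref(i, w):
    # pref_i(w): the first i symbols of w
    return w[:i]

def suf(i, w):
    # suf_i(w): the last i symbols of w (suf(0, w) is the empty string)
    return w[len(w) - i:]

def midd(i, j, w):
    # midd_{i,j}(w): delete the first i and the last n-j symbols,
    # leaving exactly j-i symbols of w
    return w[i:j]
```

In particular, concatenating `pref(i, x)`, `midd(i, j, x)`, and `suf(len(x) - j, x)` recovers $x$ for any $0\leq i\leq j\leq n$.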
For any language $L$ over $\Sigma$, the {\em complement} $\Sigma^* - L$ of $L$ is denoted $\overline{L}$ whenever $\Sigma$ is clear from the context.
The {\em complement} of a family
${\cal C}$ of languages is
the collection of all languages whose complements are in ${\cal C}$.
We use the conventional notation $\mathrm{co}\mbox{-}{\cal C}$ to describe the complement of ${\cal C}$.
We denote by $\mathrm{REG}$ the family of {\em regular languages}. Similarly, the notation $\mathrm{CFL}$ represents the family of {\em context-free languages}.
We also use the notation $\mathrm{DCFL}$ for the {\em deterministic
context-free language} family.
We assume the reader's familiarity with fundamental mechanisms of {\em one-tape one-head finite-state automata} and their variant, {\em pushdown automata}
(see, \textit{e.g.},\hspace*{2mm} \cite{HU79} for their formal definitions).
Concerning these machine models, $\mathrm{REG}$, $\mathrm{CFL}$, and $\mathrm{DCFL}$ can
be characterized by deterministic finite automata
(or dfa's), nondeterministic pushdown automata (or npda's), and deterministic
pushdown automata (or dpda's), respectively.
Now, let us briefly state the notion of {\em advice} in the form given in \cite{TYL04}, which is slightly different from \cite{DH95}.
First, we explain how to provide advice strings to finite automata using a ``track'' notation. Consider a finite automaton with one scanning head moving on a read-only input tape, which consists of tape cells indexed with integers. For simplicity, the leftmost symbol of any input string is always placed in the cell indexed $1$. Now, we split the tape into two tracks. The upper track contains the original input $x$ given to the machine and the lower track carries a piece of advice, which is a string $w$ (over a possibly different alphabet) of length $|x|$.
More precisely, the tape contains $n$ tape cells consisting of the string $\track{x}{w} = \track{x_1}{\sigma_1}\track{x_2}{\sigma_2}\cdots \track{x_n}{\sigma_n}$, where $x=x_1x_2\cdots x_n$ and $w=\sigma_1\sigma_2\cdots\sigma_n$, in such a way that the $i$th cell contains the symbol $\track{x_i}{\sigma_i}$. The machine takes advantage of this advice $w$
to determine whether it accepts the input $x$ or not.
To deal with all different lengths,
advice is generally given in the form of a function\footnote{In the original definition of advice functions by Karp and Lipton \cite{KL82}, these functions are not necessarily computable. This is mainly because we are interested in how much ``information'' each piece of advice provides to an underlying machine, rather than in how to generate such information.} $h$ (which is called an {\em advice function}) mapping $\mathbb{N}$ to $\Gamma^*$, where $\Gamma$ is another alphabet, such that $|h(n)|=n$ for any length $n\in\mathbb{N}$.
The succinct notation $\mathrm{REG}/n$, given in \cite{TYL04}, denotes the collection of all languages $L$ over an alphabet $\Sigma$ such that there are an advice function $h$ and a dfa $M$ for which,
for all strings $x\in\Sigma^*$, $x\in L$ iff $M$ accepts $\track{x}{h(|x|)}$.
Since each dfa $M$ characterizes a certain regular language, say, $L'$, the above definition of $\mathrm{REG}/n$ can be made machine-independent by replacing the dfa $M$ by the regular language $L'$: let $L\in\mathrm{REG}/n$ if
there exist an advice function $h$ and another language $L'$ in $\mathrm{REG}$ for which,
for all strings $x\in\Sigma^*$, $x\in L$ iff $\track{x}{h(|x|)}\in L'$. Similarly, we can define the advised families $\mathrm{CFL}/n$ and $\mathrm{DCFL}/n$ from $\mathrm{CFL}$ and $\mathrm{DCFL}$, respectively.
To help the reader grasp the concept of advice, we shall see a quick example of how to prepare such advice and use it to accept our target strings. Consider the context-sensitive language $L_{3eq} = \{a^nb^nc^n\mid n\in\mathbb{N}\}$. It is obvious that $L_{3eq}$ is not a regular language. Let us claim that $L_{3eq}$ belongs to $\mathrm{REG}/n$.
\begin{example}\label{3eq-advice}
Consider the non-regular language $L_{3eq} = \{a^nb^nc^n\mid n\in\mathbb{N}\}$. It is easy to check that $L_{3eq}$ belongs to $\mathrm{REG}/n$ by choosing an advice function $h$ defined as $h(n)= a^{n/3}b^{n/3}c^{n/3}$ if $n\equiv0\;(\mathrm{mod}\;3)$ and $h(n)=0^{n}$ otherwise.
How can we recognize this language with advice?
Consider a dfa $M$ that behaves as follows.
On input $\track{x}{h(|x|)}$ with advice $h(|x|)$, if
$x=\lambda$, then accept the input immediately. Otherwise, check whether the first symbol of $h(|x|)$ is $a$. If so, check whether $x=h(|x|)$ by moving the tape head to the right cell by cell. This is possible by scanning the upper and lower tracks at once in each cell.
If the first symbol of $h(|x|)$ is $0$ instead, reject the input. It is obvious that $M$ accepts $\track{x}{h(|x|)}$ iff $x\in L_{3eq}$. Therefore, $L_{3eq}$ belongs to $\mathrm{REG}/n$.
\hfill$\Box$ \bs
\end{example}
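The behavior described in Example \ref{3eq-advice} can be sketched in ordinary code. The following Python fragment (ours, purely illustrative; the helper names are not part of the formal model) simulates the one-pass, two-track comparison that the dfa $M$ performs.

```python
def advice_3eq(n):
    # Advice for L_3eq = {a^k b^k c^k : k >= 0}: for admissible lengths the
    # advice spells out the unique candidate member; otherwise it is 0^n.
    if n % 3 == 0:
        k = n // 3
        return "a" * k + "b" * k + "c" * k
    return "0" * n

def accepts_with_advice(x):
    # One left-to-right pass over the two-track word [x / h(|x|)]: accept
    # iff the two tracks agree cell by cell, which a dfa can check by
    # reading one two-track symbol at a time.
    h = advice_3eq(len(x))
    if x == "":
        return True
    if h[0] != "a":          # advice signals a length not divisible by 3
        return False
    return all(xi == hi for xi, hi in zip(x, h))
```

The point of the sketch is that the per-cell comparison requires no memory beyond a constant number of states, so a dfa suffices once the advice is supplied.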
Another example is a context-free language $Pal = \{ww^R\mid w\in\{0,1\}^*\}$ (even-length palindromes), where $w^R$ denotes $w$ in reverse. It is well-known that $Pal$ is located outside $\mathrm{DCFL}$; however, as we show below,
advice helps $Pal$ sit inside $\mathrm{DCFL}/n$.
\begin{example}\label{pal-advice}
The non-``deterministic context-free'' language $Pal$ belongs to $\mathrm{DCFL}/n$. This claim is shown as follows. It is well-known that the ``marked'' language $Pal_{\#} =\{w\#w^{R}\mid w\in\{0,1\}^*\}$, where $\#$ is a center marker, can be recognized by a certain dpda, say, $M$ (see, \textit{e.g.},\hspace*{2mm} \cite{HU79} for a proof). The center marker in $Pal_{\#}$ gives $M$ a cue to switch its inner mode at the moment its head moves from $w$ to $w^R$. More precisely, the dpda $M$ stores the left substring $w$ in its stack and, upon the cue from the marker, $M$ checks whether this stack content matches the rest of the tape content. Since there is no center marker in $Pal$, we instead use an advice function $h$ to mark the boundary between $w$ and $w^{R}$ in $ww^{R}$. We define $h(0)=\lambda$, $h(n) = 0^{n/2-1}101^{n/2-1}$ if $n$ is even with $n\geq 2$, and $h(n)=1^n$ if $n$ is odd. The first occurrence of $10$ in $h(n)$ in the even case signals the transition from $w$ to $w^R$, just as the center marker does for $Pal_{\#}$. This advice places $Pal$ in $\mathrm{DCFL}/n$.
\hfill$\Box$ \bs
\end{example}
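The advised dpda of Example \ref{pal-advice} admits a similar informal sketch. The Python fragment below (ours, illustrative only; it abstracts away the dpda formalism) pushes input symbols until the advice track reads its first $1$ and then pops and compares, which is exactly the push/pop switch triggered by the advice pattern $10$.

```python
def advice_pal(n):
    # Advice marking the center of an even-length input: the first "10"
    # straddles the boundary between w and w^R.
    if n == 0:
        return ""
    if n % 2 == 0:
        half = n // 2
        return "0" * (half - 1) + "10" + "1" * (half - 1)
    return "1" * n

def dpda_with_advice(x):
    # Deterministic one-pass simulation: push input symbols while reading
    # the leading 0s of the advice, switch to popping right after the
    # advice's first 1, and compare popped symbols with the rest of x.
    h = advice_pal(len(x))
    if len(x) % 2 == 1:
        return False           # odd-length strings are never of the form ww^R
    stack = []
    pushing = True
    for xi, hi in zip(x, h):
        if pushing:
            stack.append(xi)
            if hi == "1":      # advice says: this was the last symbol of w
                pushing = False
        else:
            if not stack or stack.pop() != xi:
                return False
    return not stack
```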
\section{Swapping Lemma for Regular Languages}
Our goal in this section is to develop a new and useful lemma, which can substitute for the well-known {\em pumping lemma} \cite{BPS61} for
regular languages, even in the presence of advice.
We have seen the power of advice in Examples \ref{3eq-advice} and \ref{pal-advice}: advice helps dfa's recognize non-``regular'' languages and also helps dpda's recognize non-``deterministic context-free'' languages.
When we want to show that a certain language $L$, such as $Equal$ and $Pal$, does not belong to $\mathrm{REG}/n$, a standard way (stated in many undergraduate textbooks) is an application of the pumping lemma for
regular languages.
A basic scheme of the standard pumping lemma (and many of its variants) states that, for any infinite regular language $L$ and any string $w$ in $L$, as long as the string is sufficiently long (at least a certain constant that depends only on $L$), we can always pump the string $w$ (by repeating a middle portion of $w$) while keeping the pumped string within the language $L$. Unfortunately, as discussed in Section \ref{sec:introduction}, the pumping lemma is not as useful as we hope it should be when we wish to prove that certain languages are located outside the advised family $\mathrm{REG}/n$. To achieve our goal, we need to develop a different type of lemma, which we call the {\em swapping lemma} for regular languages.
We begin with a simpler form of our swapping lemma.
\begin{lemma}\label{swapping-lemma}{\rm [Swapping
Lemma for Regular Languages]}\hs{1}
Let $L$ be any infinite regular language over an alphabet
$\Sigma$ with $|\Sigma|\geq2$.
There exists a positive integer $m$ (called a swapping-lemma constant) such that, for any integer $n\geq 1$ and any subset $S$ of $L\cap\Sigma^n$ of cardinality more than $m$, the following condition holds: for any integer $i\in[0,n]_{\mathbb{Z}}$, there exist two strings $x=x_1x_2$ and $y=y_1y_2$ in $S$ with $|x_1|=|y_1|=i$ and $|x_2|=|y_2|$ satisfying that (i) $x\neq y$, (ii) $y_1x_2\in L$, and (iii) $x_1y_2\in L$.
\end{lemma}
\begin{proof}
We prove the lemma by a simple counting argument with use of the pigeonhole principle. Let $L$ be any infinite regular language over an alphabet $\Sigma$. Choose a dfa $M = (Q,\Sigma,\delta,q_0,F)$ that {\em recognizes} $L$, where $Q$ is a finite set of inner states, $\delta$ is a transition function, $q_0\in Q$ is the initial state, and $F\subseteq Q$ is a set of final states. We define our swapping-lemma constant $m$ as $|Q|$. Let $n$ be any integer at least $1$ and let $S$ be any subset of $L\cap\Sigma^n$ with $|S|> m$. Clearly, $|S|\geq 2$. Choose an arbitrary index $i\in[0,n]_{\mathbb{Z}}$. If either $i=0$ or $i=n$, then the lemma is trivially true (by choosing any two distinct strings $x,y$ in $S$). Henceforth, we assume that $n\geq 2$ and $1\leq i\leq n-1$.
Consider internal states just after scanning the $i$th cell.
Since $|S|>|Q|$, there are two distinct strings $x,y\in S$ for which $M$ enters the same internal state, say $q$, after reading the $i$th symbol of $x$ as well as that of $y$. Since the dfa cannot distinguish between $pref_{i}(x)$ and $pref_{i}(y)$ after reading these prefixes, $M$
must accept the swapped strings $pref_{i}(x)suf_{n-i}(y)$ and $pref_{i}(y)suf_{n-i}(x)$. This completes the proof.
\end{proof}
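The counting argument in this proof is easy to mechanize. The following Python sketch (ours, using a toy encoding of a dfa as a transition dictionary) groups the strings in $S$ by the state reached at intercell boundary $i$ and returns the first colliding pair, exactly as the pigeonhole principle guarantees whenever $|S|>|Q|$.

```python
def find_swap_pair(delta, start, S, i):
    # Pigeonhole search from the proof: group the strings in S by the dfa
    # state reached at boundary i; any two strings in the same group can
    # exchange suffixes without leaving the recognized language.
    by_state = {}
    for s in S:
        q = start
        for c in s[:i]:
            q = delta[(q, c)]
        if q in by_state:
            return by_state[q], s      # x, y sharing the boundary state
        by_state[q] = s
    return None

# toy 2-state dfa for L = {x in {0,1}* : #_1(x) is even}
delta = {(q, c): q ^ (c == "1") for q in (0, 1) for c in "01"}
S = ["0011", "0110", "1001"]           # |S| = 3 > |Q| = 2
x, y = find_swap_pair(delta, 0, S, 2)
```

Here the returned pair satisfies the lemma's conclusion: both swapped strings keep an even number of $1$s.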
Notice that, in the no-advice case, our swapping lemma can be used as a substitute for the pumping lemma when a target language $L$ is not ``slim'' enough (\textit{i.e.},\hspace*{2mm} $|L\cap\Sigma^n|>m$).
Let us demonstrate two simple examples of how to use our swapping lemma.
The first example is the context-free language $Pal = \{ww^R\mid w\in\{0,1\}^*\}$.
\begin{example}\label{pal-case}
The context-free language $Pal$ is not in $\mathrm{REG}/n$ (and thus not in $\mathrm{REG}$). Assume that $Pal$ belongs to $\mathrm{REG}/n$ and apply the swapping lemma for regular languages. Since $Pal\in\mathrm{REG}/n$, there are a language $L\in\mathrm{REG}$
over an alphabet $\Sigma$ and an advice function $h$ such that,
for every string $x\in\{0,1\}^*$, $\track{x}{h(|x|)}\in L$
iff $x\in Pal$.
Take a swapping-lemma constant $m$ that satisfies the swapping lemma for $L$.
Choose $n = 2m$ and $i= n/2$.
Let a subset $S$ of $L\cap\Sigma^n$ be
$S = \left\{\track{x}{h(n)}\in L \mid |x|=n\right\}$.
Notice that $|S|\geq 2^{n/2}>m$. By the lemma, there are two distinct strings $x,y\in\{0,1\}^n$
for which both $\track{x}{h(n)}$ and $\track{y}{h(n)}$ fall into $S$.
By letting $u_1 = pref_{i}(x)$ and $u_2 = pref_{i}(y)$, the strings $x$ and $y$ are written as $x = u_1u_1^{R}$ and $y = u_2u_2^{R}$.
Now, let us consider the two swapped strings $u_1u_2^{R} = pref_{n/2}(x)suf_{n/2}(y)$ and $u_2u_1^{R} = pref_{n/2}(y)suf_{n/2}(x)$.
These strings are clearly not of the form $ww^{R}$ because $u_1\neq u_2$, and thus the swapped strings $\track{u_1u_2^R}{h(n)}$ and $\track{u_2u_1^R}{h(n)}$ cannot belong to $L$. This contradicts the swapping lemma. Therefore, $Pal$ is not in $\mathrm{REG}/n$.
\hfill$\Box$ \bs
\end{example}
The use of the subset $S$ in the swapping lemma, Lemma \ref{swapping-lemma}, is of great importance in dealing with the advised family $\mathrm{REG}/n$ because, for instance, $S$ in the above example cannot be defined as $S= L\cap\Sigma^n$ in order to lead to a desired contradiction. There are also cases that require more dexterous choices of $S$. One of those cases is
the non-regular language
$Equal = \{ w\in\{0,1\}^* \mid \#_0(w)=\#_1(w) \}$.
\begin{example}\label{equal-case}
The deterministic context-free language $Equal$ is not in $\mathrm{REG}/n$. This result was first proved in \cite{TYL04}. Our purpose here is to apply our swapping lemma to reprove this known result. Assume that $Equal$ is in $\mathrm{REG}/n$.
There are a regular language $L$ and an advice function $h$ such that, for every binary string $x$, $x\in Equal$ iff $\track{x}{h(|x|)}\in L$. Take a swapping-lemma constant $m$ and set $n = 2m$
as well as $i=n/2$. In this example, we cannot take the subset
$S = \left\{\track{x}{h(n)}\in L \mid |x|=n\right\}$ as we did
in Example \ref{pal-case}; instead, we choose $n/2+1$
distinct strings $w_0,w_1,w_2,\ldots,w_{n/2}\in\{0,1\}^n$, where $w_k = 0^{k}1^{n/2-k}0^{n/2-k}1^{k}$ for each index $k\in[0,n/2]_{\mathbb{Z}}$,
and we then define $S=\left\{\track{w_0}{h(n)},\ldots,
\track{w_{n/2}}{h(n)}\right\}$.
Clearly, the cardinality $|S|$ is more than $m$. The crucial property of the chosen $w_{k}$'s is that $\#_{0}(pref_{n/2}(w_k)) =k$ for every number $k\in[0,n/2]_{\mathbb{Z}}$.
The swapping lemma provides two distinct strings
$x=w_j$ and $y=w_k$ ($j\neq k$) such that
$\track{x}{h(n)},\track{y}{h(n)}\in S$ and
$\track{u_1}{h(n)},\track{u_2}{h(n)}\in L$, where
the swapped strings $u_1$ and $u_2$ are of the form
$u_1= pref_{n/2}(x)suf_{n/2}(y)$ and
$u_2 = pref_{n/2}(y)suf_{n/2}(x)$. It easily follows that
\[
\#_0(u_1) = \#_0(pref_{n/2}(x)) + \#_0(suf_{n/2}(y)) = j + \frac{n}{2}-k \neq \frac{n}{2}
\]
since $j\neq k$. Hence, $u_1\not\in Equal$, and so $\track{u_1}{h(n)}$ cannot belong to $L$; this contradicts the conclusion of the swapping lemma that
$\track{u_1}{h(n)}\in L$.
Therefore, $Equal$ cannot be in $\mathrm{REG}/n$.
\hfill$\Box$ \bs
\end{example}
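The arithmetic behind Example \ref{equal-case} can be checked mechanically. The following Python sketch (ours, purely illustrative) builds the strings $w_k$ and verifies that a swap at position $n/2$ changes the number of $0$s exactly when $j\neq k$.

```python
def w(k, n):
    # w_k = 0^k 1^(n/2-k) 0^(n/2-k) 1^k; each w_k has exactly n/2 zeros
    # and n/2 ones, hence lies in Equal.
    h = n // 2
    return "0" * k + "1" * (h - k) + "0" * (h - k) + "1" * k

def zeros_after_swap(j, k, n):
    # u_1 = pref_{n/2}(w_j) suf_{n/2}(w_k); by construction,
    # #_0(u_1) = j + n/2 - k, which equals n/2 only when j = k
    u1 = w(j, n)[: n // 2] + w(k, n)[n // 2 :]
    return u1.count("0")
```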
Next, we present a more general form of our swapping lemma. In Lemma \ref{swapping-lemma}, two strings $x$ and $y$ are both split into two blocks and one of these blocks is used for swapping. In the next lemma, in contrast, these strings are split into any fixed number of blocks, one of which is actually used for swapping. This form
is useful when we want to show that, for instance,
the non-regular language $GT = \{w\in\{0,1\}^*\mid \#_0(w)>\#_1(w)\}$ does not belong to $\mathrm{REG}/n$.
\begin{lemma}\label{2nd-swapping-lemma}{\rm [Swapping Lemma
for Regular Languages]}\hs{1}
Let $L$ be any infinite regular language over an
alphabet $\Sigma$ with $|\Sigma|\geq 2$. There is a positive number $m$ such that, for any number $n\geq 1$, any number $k\in[1,n]_{\mathbb{Z}}$, any series $(i_1,i_2,\ldots,i_k)\in([1,n]_{\mathbb{Z}})^{k}$ with $\sum_{j=1}^{k}i_j\leq n$, and any set $S\subseteq L\cap\Sigma^{n}$,
the following statement holds. If $|S| > m$, then there exist two strings $x = x_1x_2\cdots x_{k+1}$ and $y=y_1y_2\cdots y_{k+1}$ in $S$ with
$|x_{k+1}|=|y_{k+1}|$ and $|x_{j'}|=|y_{j'}|=i_{j'}$ for each index $j'\in[1,k]_{\mathbb{Z}}$
such that, for every index $j\in[1,k]_{\mathbb{Z}}$, (i) $x\neq y$, (ii)
$x_1\cdots x_{j-1}y_{j}x_{j+1}\cdots x_{k+1}\in L$, and (iii) $y_1\cdots y_{j-1}x_{j}y_{j+1}\cdots y_{k+1}\in L$.
\end{lemma}
\begin{proof}
Note that, when $k=1$, this lemma is indeed Lemma \ref{swapping-lemma}. Consider a dfa $M=(Q,\Sigma,\delta,q_0,F)$ that recognizes $L$. Let $S\subseteq L\cap\Sigma^{n}$ satisfy $|S| > m$, where $m = |Q|^k$. To each string $s\in S$, we assign a $k$-tuple $(q_1,q_2,\ldots,q_k)$ of internal states of $M$ such that, for each $j\in[1,k]_{\mathbb{Z}}$, $M$ enters state $q_j$ after scanning the $(\sum_{e=1}^{j}i_e)$th cell. There are at most $|Q|^k$ such tuples. Since $|S| > |Q|^k$, there are two distinct strings $x,y$ in $S$ such that they correspond to the same series of internal states, say $(q_1,q_2,\ldots,q_k)$.
Write $x = x_1x_2\cdots x_{k+1}$ and $y=y_1y_2\cdots y_{k+1}$, where $|x_{k+1}|=|y_{k+1}|$ and $|x_{j'}|=|y_{j'}|=i_{j'}$ for
every index $j'\in[1,k]_{\mathbb{Z}}$.
Notice that, for each index $j\in[1,k]_{\mathbb{Z}}$, $M$ enters the same internal state $q_{j}$ after scanning $x_{j}$ as well as $y_{j}$ on the inputs $x$ and $y$, respectively. Fix an index $j\in[1,k]_{\mathbb{Z}}$ arbitrarily. {}From the choice of $x$ and $y$, we can swap the two blocks $x_{j}$ and $y_{j}$ in $x$ and $y$ without changing the acceptance condition of $M$. Therefore, the swapped strings $x_1\cdots x_{j-1}y_{j}x_{j+1}\cdots x_{k+1}$ and $y_1\cdots y_{j-1}x_{j}y_{j+1}\cdots y_{k+1}$ are both accepted by $M$.
This gives the conclusion of the lemma.
\end{proof}
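The tuple-based pigeonhole in this proof can also be made concrete. The Python sketch below (ours, with a toy two-state parity dfa) records the boundary states at the cut positions $i_1, i_1+i_2, \ldots$ and exhibits two strings whose tuples coincide, so that swapping a single block preserves membership.

```python
def state_tuple(delta, start, s, cuts):
    # Boundary states of a dfa run on s, recorded after each prefix length
    # listed in cuts (the partial sums i_1, i_1+i_2, ..., as in the proof).
    q, out = start, []
    for t, c in enumerate(s, start=1):
        q = delta[(q, c)]
        if t in cuts:
            out.append(q)
    return tuple(out)

# toy 2-state dfa for even parity of 1s; split length-6 strings into
# three blocks of length 2 (k = 2, boundaries after cells 2 and 4)
delta = {(q, c): q ^ (c == "1") for q in (0, 1) for c in "01"}
x, y = "001100", "110000"
same = state_tuple(delta, 0, x, {2, 4}) == state_tuple(delta, 0, y, {2, 4})
```

Since the tuples agree, exchanging, say, the middle blocks of $x$ and $y$ leaves both strings inside the parity language.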
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=13cm]{gt-string.eps}
\caption{A string $w^{(j)}$ with $j=3$ and $m=7$}\label{fig:GT-string}
\end{center}
\end{figure}
Let us demonstrate how to apply Lemma \ref{2nd-swapping-lemma} to the deterministic context-free language $GT$.
\begin{example}\label{GT-regular}
The deterministic context-free language $GT$ is not in $\mathrm{REG}/n$. Assuming that $GT\in \mathrm{REG}/n$, we choose an advice function $h$ and a regular language $L$ over an alphabet $\Sigma$ such that,
for every binary string $x$, $x\in GT$ iff $\track{x}{h(|x|)}\in L$. Since $L$ is an infinite regular language, we can apply Lemma \ref{2nd-swapping-lemma} to $L$. Let $m$ be a swapping-lemma constant. Without loss of generality, we can assume that $m$ is odd and at least $3$. Define $n = m^2$
and focus on the set $L\cap\Sigma^n$.
Let $(i_1,i_2,\ldots,i_{m-1})$ be the series defined by $i_j=m$ for
every index $j\in[1,m-1]_{\mathbb{Z}}$. This series partitions each string of length $n$ into $m$ blocks of equal size $m$. For each index $j\in[1,m]_{\mathbb{Z}}$, let $w^{(j)}$ denote the string $w^{(j)}_1w^{(j)}_2\cdots w^{(j)}_m$ of the following form: (i) the $j$th block $w^{(j)}_j$ equals $01^{m-1}$ and (ii) every other block $w^{(j)}_i$ equals $0^{m'+1}1^{m'}$, where $m' = \floors{m/2}$.
Figure \ref{fig:GT-string} gives an example of $w^{(j)}$
when $j=3$ and $m=7$.
Since $\#_0(w^{(j)}) = \#_1(w^{(j)})+1$, this string $w^{(j)}$
belongs to $GT$.
The desired set $S$ is thus defined as $\{\track{w^{(1)}}{h(n)},\ldots,\track{w^{(m)}}{h(n)}\}$.
Clearly, $|S|\geq m$.
By Lemma \ref{2nd-swapping-lemma}, there are two distinct
strings $w^{(k)}$ and $w^{(l)}$ ($k\neq l$) such that $\track{w^{(k)}}{h(n)},\track{w^{(l)}}{h(n)}\in S$ and $\track{\tilde{w}^{(k)}}{h(n)}, \track{\tilde{w}^{(l)}}{h(n)}\in L$,
where $\tilde{w}^{(k)}$ and $\tilde{w}^{(l)}$ are obtained respectively
from $w^{(k)}$ and $w^{(l)}$ by swapping their $l$th blocks.
Notice that, for each $i\in\{0,1\}$,
$\#_i(\tilde{w}^{(k)}) = \#_i(w^{(k)}) - \#_i(w^{(k)}_{l}) + \#_i(w^{(l)}_{l})$.
Since $\#_0(w^{(k)}_{l}) = \#_1(w^{(k)}_{l})+1$, $\#_0(w^{(l)}_{l}) = 1$,
and $\#_1(w^{(l)}_{l}) = m-1$, it immediately follows that
$\#_{1}(\tilde{w}^{(k)}) = \#_{0}(\tilde{w}^{(k)}) + m-2$,
which is greater than $\#_{0}(\tilde{w}^{(k)})$.
This implies that $\tilde{w}^{(k)}\not\in GT$,
contradicting the conclusion of Lemma \ref{2nd-swapping-lemma}. Therefore, $GT$ cannot be in $\mathrm{REG}/n$.
\hfill$\Box$ \bs
\end{example}
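The block counts used in Example \ref{GT-regular} are easy to verify by direct computation. The following Python sketch (ours, illustrative only) checks both the membership claim $\#_0(w^{(j)})=\#_1(w^{(j)})+1$ and the effect of swapping the $l$th blocks.

```python
def w_blocks(j, m):
    # Blocks of w^(j) for odd m: block j is 0 1^(m-1); every other block
    # is 0^(m'+1) 1^(m'), where m' = floor(m/2).
    mp = m // 2
    normal = "0" * (mp + 1) + "1" * mp
    special = "0" + "1" * (m - 1)
    return [special if t == j else normal for t in range(1, m + 1)]

def counts(j, m):
    s = "".join(w_blocks(j, m))
    return s.count("0"), s.count("1")

def counts_after_swap(k, l, m):
    # replace the l-th block of w^(k) by the l-th block of w^(l)
    blocks = w_blocks(k, m)
    blocks[l - 1] = w_blocks(l, m)[l - 1]
    s = "".join(blocks)
    return s.count("0"), s.count("1")
```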
{}From Examples \ref{equal-case} and \ref{GT-regular}, since $Equal$ and $GT$ are both deterministic context-free, we can obtain a separation between $\mathrm{DCFL}$ and $\mathrm{REG}/n$. This gives a strong separation between $\mathrm{REG}/n$ and $\mathrm{DCFL}/n$, because $\mathrm{REG}/n=\mathrm{DCFL}/n$ implies $\mathrm{DCFL}\subseteq \mathrm{REG}/n$ (using the fact that $\mathrm{DCFL}\subseteq \mathrm{DCFL}/n$).
\begin{proposition}
$\mathrm{DCFL}\nsubseteq \mathrm{REG}/n$.
Or equivalently, $\mathrm{REG}/n \neq \mathrm{DCFL}/n$.
\end{proposition}
\section{Swapping Lemma for Context-Free Languages}\label{sec:CFL}
We have shown the usefulness of our swapping lemma for regular languages
by proving that three typical languages cannot belong to $\mathrm{REG}/n$. Now, we turn our attention to $\mathrm{CFL}/n$, the family of context-free languages with advice. A standard\footnote{There are several well-known variants of pumping lemma for context-free languages. Ogden's lemma \cite{Ogd69}
is one such variant.} pumping lemma for context-free languages helps us place, for example, the language $Dup =\{ww \mid w\in\{0,1\}^*\}$ (the set of duplicated strings) outside of $\mathrm{CFL}$.
As in the case of regular languages, this pumping lemma is also of no use when we try to prove that $Dup$ is not in $\mathrm{CFL}/n$.
This situation urges us to develop a new form of lemma, the {\em swapping lemma} for context-free languages.
To state our swapping lemma, we introduce the following notation
for each fixed subset $S$ of $\Sigma^n$. For any two indices
$i,j\in[1,n]_{\mathbb{Z}}$ with $i+j\leq n$ and any string $u\in\Sigma^{j}$, the notation $S_{i,u}$ denotes the set $\{x\in S\mid u = midd_{i,i+j}(x)\}$. It thus follows that $S = \bigcup_{u\in\Sigma^j}S_{i,u}$ for each fixed pair of indices $i,j$.
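The notation $S_{i,u}$ is straightforward to compute. The Python sketch below (ours, assuming $midd_{i,i+j}(x)$ denotes the length-$j$ substring of $x$ following position $i$) also illustrates the partition property $S=\bigcup_{u\in\Sigma^j}S_{i,u}$.

```python
def midd(x, i, j):
    # midd_{i,i+j}(x): the length-j substring of x following position i
    # (our reading of the paper's indexing)
    return x[i : i + j]

def S_iu(S, i, u):
    # S_{i,u} = {x in S : midd_{i,i+j}(x) = u} with j = |u|
    return {x for x in S if midd(x, i, len(u)) == u}
```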
\begin{lemma}\label{swapping-lemma-CFL}{\rm
[Swapping Lemma for Context-Free Languages]}\hs{1}
Let $L$ be any infinite context-free language over an
alphabet $\Sigma$ with $|\Sigma|\geq 2$. There is a positive number $m$ that satisfies the following. Let $n$ be any positive number at least $2$,
let $S$ be any subset of
$L\cap\Sigma^{n}$, and let $j_0,k\in[2,n]_{\mathbb{Z}}$ be any two indices satisfying that $k \geq 2j_0$ and $|S_{i,u}|< |S|/m(k-j_0+1)(n-j_0+1)$ for any index $i\in[1,n-j_0]_{\mathbb{Z}}$ and any string $u\in\Sigma^{j_0}$.
There exist two indices $i\in[1,n]_{\mathbb{Z}}$ and $j\in[j_0,k]_{\mathbb{Z}}$ with $i+j\leq n$ and two strings $x =x_1x_2x_3$ and $y=y_1y_2y_3$ in $S$ with $|x_1|=|y_1|=i$, $|x_2|=|y_2|=j$, and $|x_3|=|y_3|$ such that
(i) $x_2\neq y_2$, (ii) $x_1y_2x_3\in L$, and
(iii) $y_1x_2y_3\in L$.
\end{lemma}
The above form of our swapping lemma is similar to Lemma \ref{2nd-swapping-lemma}; however, we can no longer choose a pair $(i,j)$ at will. Moreover, the cardinality of the subset $S$ must be much larger than in the case of regular languages. Since the proof of this lemma is more involved than that of Lemma \ref{2nd-swapping-lemma}, we postpone it until the next section.
Meanwhile, we see how to use the swapping lemma for context-free languages. First, it is not difficult to show that $Dup$ does not belong to $\mathrm{CFL}/n$ by applying the lemma directly.
\begin{example}\label{duplicated-cfl}
The language $Dup$ is not in $\mathrm{CFL}/n$ (and thus not in $\mathrm{CFL}$). Let us assume that $Dup\in\mathrm{CFL}/n$ to lead to a contradiction.
First, choose an advice function $h$ and a context-free language $L$ such that, for any binary string $x$, $x\in Dup$ iff $\track{x}{h(|x|)}\in L$. Let $m$ be a swapping-lemma constant for $L$.
Second, choose any sufficiently large even number $n$ satisfying that $2^{n/2}> 2mn^2$. Now, let us define a subset $S$ as $S = \left\{\track{x}{h(n)}\in L\mid |x|=n\right\}$.
It suffices to satisfy the condition that $|S_{i,u}|\leq |S|/kmn$ for any index $i\in[1,n-j_0]_{\mathbb{Z}}$ and any string $u\in\Sigma^{j_0}$. Since $|S|=2^{n/2}$ and
$|S_{i,u}|\leq 2^{n/2-|u|}$ for any string $u\in\Sigma^{\leq n/2}$, it suffices to set $k=n/2$ and
$j_0 = \ceilings{\log_{2}{mn^2}}+1$.
By the swapping lemma for context-free languages,
there are two indices $j\in[j_0,k]_{\mathbb{Z}}$ and $i\in[1,n-j]_{\mathbb{Z}}$ and two strings $x = x_1x_2x_3$ and $y=y_1y_2y_3$ in $S$ with $|x_1|=|y_1|=i$, $|x_2|=|y_2|=j$, and $|x_3|=|y_3|$
such that (i) $x_2\neq y_2$, (ii) $x_1y_2x_3\in L$,
and (iii) $y_1x_2y_3\in L$.
There are three cases to consider: (a) $i+j\leq n/2$, (b) $i<n/2<i+j$, and (c) $n/2\leq i$. Let us consider Case (a).
Since $i\geq1$ and $2\leq j\leq n/2$, both $x_2$ and $y_2$ are respectively in the first half portion of $x$ and $y$. Therefore, it is obvious that the swapped strings $x_1y_2x_3$ and $y_1x_2y_3$ are not of the form $\track{ww}{h(n)}$.
This is a contradiction. The other cases are similar. Therefore, $Dup$ cannot be in $\mathrm{CFL}/n$.
\hfill$\Box$ \bs
\end{example}
In the above example, the choice of $k$ is crucial. For instance, when $k=n/2+1$, there is a case where we cannot derive any contradiction. Consider the following case. Take two strings $x= x_1x_2x_3$ and $y=y_1y_2y_3$ satisfying that $x_1 = y_1 =0^{n/4-1}$, $x_3=y_3=1^{n/4}$,
$x_2=0x_3x_10$, and $y_2=1y_3y_11$. Clearly, $x$ and $y$ are in $Dup\cap\{0,1\}^{n}$,
and the swapped strings $x_1y_2x_3$ and
$y_1x_2y_3$, which coincide with $y$ and $x$ respectively, are in $Dup$ as well.
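This failure can be confirmed by direct computation. The small Python sketch below (ours, with, say, $n=16$) verifies that the two swapped strings coincide with $y$ and $x$ and hence remain inside $Dup$.

```python
n = 16
x1 = y1 = "0" * (n // 4 - 1)
x3 = y3 = "1" * (n // 4)
x2 = "0" + x3 + x1 + "0"     # middle block of length n/2 + 1
y2 = "1" + y3 + y1 + "1"
x, y = x1 + x2 + x3, y1 + y2 + y3

def is_dup(s):
    # membership in Dup = {ww : w in {0,1}*}
    return s[: len(s) // 2] == s[len(s) // 2 :]
```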
Our next example is a slightly artificial language $Equal_{6}$, which consists of all strings $w$ over the alphabet $\Sigma=\{a_1,a_2,\ldots,a_{6},\#\}$ such that each symbol except $\#$ in $\Sigma$ appears the same number of times in $w$; that is, $\#_{a}(w)= \#_{b}(w)$ for any pair $a,b\in\Sigma-\{\#\}$.
Note that the complement $\overline{Equal_{6}}$
is in $\mathrm{CFL}$.
This containment is shown by considering an npda that behaves as follows: on input $w$, nondeterministically choose two distinct symbols, say $a$ and $b$, in $\Sigma-\{\#\}$ and check whether $\#_{a}(w) \neq \#_{b}(w)$. In other words, $Equal_{6}$ is in $\mathrm{co}\mbox{-}\mathrm{CFL}$.
On the contrary, we can show that $Equal_{6}$ cannot belong to $\mathrm{CFL}/n$.
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=13.5cm]{equal6-string.eps}
\caption{A form of $w$ with $\#_{a_i}(w)=4$ and $\#_{\#}(w)=24$ when $n=48$}\label{fig:Equal6-string}
\end{center}
\end{figure}
\begin{example}\label{3eqaul-CFL}
The language $Equal_{6}$ is not in $\mathrm{CFL}/n$.
Assuming that $Equal_{6}\in\mathrm{CFL}/n$, we choose an advice function $h$ and a language $L\in\mathrm{CFL}$ such that, for every string $x\in\Sigma^*$,
$x\in Equal_{6}$ iff $\track{x}{h(|x|)}\in L$. Since $L\in\mathrm{CFL}$, take a swapping-lemma
constant $m$ for $L$.
Let $n = 864 m$. For each symbol $a_{i}$ in $\Sigma$, we use the notation $a_{(i,e)}$ for the string $(a_{i})^{e}\#^{n/12-e}$ of length $n/12$.
As a special case, we have $a_{(i,0)} = \#^{n/12}$ and $a_{(i,n/12)} = (a_i)^{n/12}$. For convenience, we denote by $w_{(e_1,e_2,\ldots,e_{6})}$
the string made up of the following $6$ blocks:
$a_{(1,e_1)}a_{(2,e_2)}\cdots a_{(6,e_{6})}$.
Let $S$ be the set consisting of all strings $\track{w}{h(n)}$, where $w$ is of the form $w_{(e_1,\ldots,e_{6})}w_{(n/12-e_1,\ldots,n/12-e_{6})}$ for any
six indices $e_1,e_2,\ldots,e_{6}\in [0,n/12]_{\mathbb{Z}}$.
An example of such $w$ is shown in Figure \ref{fig:Equal6-string}.
Notice that, for any symbol $a\in \Sigma-\{\#\}$, if $w\in S$ then $\#_{a}(w) = n/12$. Moreover, $\#_{\#}(w) = n/2$. Now, we choose $j_0=n/4$ and $k=n/2$. A simple observation gives $|S| = \left(n/12+1\right)^6$. Let $u$ be an arbitrary string in $\Sigma^{j_0}$.
To estimate $|S_{i,u}|$, we note that $|S_{i,u}| \leq |S_{1,\#^{n/4}}|$
for any index $i\in[1,n-j_0]_{\mathbb{Z}}$. This gives a simple upper bound: $|S_{i,u}| \leq \left(n/12+1\right)^3$. Obviously, since $n=864m$, we have
\[
|S_{i,u}| \cdot kmn \leq \left(\frac{n}{12}+1\right)^3
\cdot \frac{mn^2}{2}
\leq \frac{m(n+12)^5}{3456} < \left(\frac{n}{12}+1\right)^6 = |S|.
\]
The swapping lemma provides an index pair $i,j$ with $n/4 \leq j \leq n/2$ and $i+j\leq n$ and a string pair $x,y\in\Sigma^n$ with $midd_{i,i+j}(x)\neq midd_{i,i+j}(y)$ such that $\track{x}{h(n)},\track{y}{h(n)}\in S$ and $\track{x'}{h(n)},\track{y'}{h(n)}\in L$, where $x'$ and $y'$ are two swapped strings defined as
$x' = pref_{i}(x)midd_{i,i+j}(y)suf_{n-i-j}(x)$ and $y' = pref_{i}(y)midd_{i,i+j}(x)suf_{n-i-j}(y)$.
\sloppy Since the substrings $midd_{i,i+j}(x)$ and $midd_{i,i+j}(y)$ have length $j$, $midd_{i,i+j}(x)\neq midd_{i,i+j}(y)$ implies that $\#_{a}(midd_{i,i+j}(x)) \neq \#_{a}(midd_{i,i+j}(y))$ for a certain symbol $a\in\Sigma-\{\#\}$. Hence, we conclude that $\#_{a}(x')\neq n/12$. This contradicts the fact that $\track{x'}{h(n)}\in L$.
Therefore, $Equal_{6}$ cannot be in $\mathrm{CFL}/n$.
\hfill$\Box$ \bs
\end{example}
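The cardinality estimate in Example \ref{3eqaul-CFL} simplifies nicely: with $n=864m$ and $k=n/2$, we have $kmn=(72m)^3$ and $(n/12+1)^3=(72m+1)^3$, so the required inequality reduces to $72m<72m+1$. The following Python sketch (ours) confirms it numerically for small values of $m$.

```python
def estimate_holds(m):
    # |S_{i,u}| <= (n/12+1)^3 and |S| = (n/12+1)^6 with n = 864m, k = n/2;
    # the lemma's cardinality condition then amounts to
    # (n/12+1)^3 * kmn < (n/12+1)^6
    n = 864 * m
    k = n // 2
    s_iu_bound = (n // 12 + 1) ** 3
    size_S = (n // 12 + 1) ** 6
    return s_iu_bound * k * m * n < size_S
```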
Since the language $Equal_{6}$ belongs to $\mathrm{co}\mbox{-}\mathrm{CFL}$, as shown in Example \ref{3eqaul-CFL}, we can
derive the following strong separation between $\mathrm{CFL}/n$ and $\mathrm{co}\mbox{-}\mathrm{CFL}/n$.
\begin{proposition}
$\mathrm{co}\mbox{-}\mathrm{CFL}\nsubseteq \mathrm{CFL}/n$. Or equivalently, $\mathrm{co}\mbox{-}\mathrm{CFL}/n\neq \mathrm{CFL}/n$.
\end{proposition}
The second part of the above proposition follows from the fact that
$\mathrm{co}\mbox{-}\mathrm{CFL}\subseteq \mathrm{co}\mbox{-}\mathrm{CFL}/n$.
This proposition also indicates that $\mathrm{CFL}/n$ is not closed under complementation.
\section{Proof of Lemma \ref{swapping-lemma-CFL}}
As we have seen in Examples \ref{duplicated-cfl} and \ref{3eqaul-CFL}, the swapping lemma for context-free languages is a powerful tool in proving non-context-freeness in the presence of advice.
This section describes in detail the proof of Lemma \ref{swapping-lemma-CFL}. The proof requires an analysis of the stack behavior of a nondeterministic pushdown automaton (or an npda).
\subsection{Nondeterministic Pushdown Automata}\label{sec:npda}
Although there are several models to describe context-free languages, here, we use the machine model of npda's. We first review certain facts regarding the npda's.
Let $L$ be any infinite context-free language over an alphabet $\Sigma$ with $|\Sigma|\geq2$. Since Lemma \ref{swapping-lemma-CFL} targets only inputs of length at least $2$,
it is harmless for us to assume that $L$ contains no empty string $\lambda$. Now, consider a
context-free grammar $G=(V,T,S,P)$ that generates $L$ with $T= \Sigma$,
where $V$ is a set of variables, $T$ is a set of terminal symbols, $S\in V$ is the start variable, and $P$ is a set of productions. Without loss of generality, we can assume that $G$ is in {\em Greibach normal form}; that is, $P$ consists of the production rules of the form $A\,\rightarrow au$, where $A\in V$, $a\in\Sigma$, and $u\in V^*$.
A process of transforming a context-free grammar into Greibach normal form is described in many undergraduate textbooks (\textit{e.g.},\hspace*{2mm} \cite{HU79}).
Closely associated with the grammar $G$, we want to build an npda $M =(Q,\Sigma,\Gamma,\delta,q_0,z,F)$, where $Q$ is a set of internal states, $\Gamma$ is a stack alphabet, $\delta$ is a transition function, $q_0\in Q$ is the initial state, $z\in \Gamma$ is the stack start symbol, and $F\subseteq Q$ is a set of final states. For our npda $M$, we define $Q=\{q_0,q_1,q_f\}$, $\Gamma = V\cup\{z\}$ with $z\not\in V$, and $F=\{q_f\}$.
To make our later argument simpler, we include two special {\em end-markers}
$\cent$ and $\$$, which mark the left end and the right end of an input, respectively. Hereafter, we consider only inputs of the form $\cent x \$$, where $x\in\Sigma^*$, and we sometimes treat the endmarkers as an integrated part of the input. Notice that $|\cent x \$|=|x|+2$. For convenience, every tape cell is indexed with integers and the left endmarker $\cent$ is always written in the $0$th cell. The input string $x$ of length $n$ is written in the cells indexed between $1$ and $n$ and the right endmarker $\$$ is written in the $n+1$st cell.
When we express the content of the stack of $M$ as a series $s = s_1s_2s_3\cdots s_m$, we understand that the leftmost symbol $s_1$ is located at the top of the stack and $s_m$ is at the bottom of the stack. Finally, the transition function $\delta$ is defined as follows:
\begin{enumerate}\vs{-1}
\item[(1)] $\delta(q_0,\cent,z) = \{ (q_1,Sz) \}$;
\vs{-2}
\item[(2)] $\delta(q_1,a,A) = \{ (q_1,u) \mid u\in V^{*}, \text{$P$ contains } A\rightarrow au \}$ for every $a\in\Sigma$ and $A\in V$; and
\vs{-2}
\item[(3)] $\delta(q_1,\$,z) = \{(q_f,z)\}$.
\end{enumerate}\vs{-1}
It is important to note that $M$ is always in the internal state $q_1$ while the tape head scans any cell located between $1$ and $n$. Note also that, during an accepting computation, the stack of the npda $M$ never becomes empty because of the form of production rules in $P$. Therefore, we can demand that $\delta$ should satisfy the following requirement.
\begin{enumerate}\vs{-1}
\item[(4)] For any symbol $a\in\Sigma$, $\delta(q_1,a,z) = \mathrm{\O}$.
\end{enumerate}\vs{-1}
Additionally, we modify the npda $M$ so that each transition pushes at most two stack symbols, by encoding several consecutive stack symbols (except for $z$)
into one new stack symbol.
For instance, provided that the original npda $M$ increases its stack size by at most $3$, we introduce a new stack alphabet $\Gamma'$ consisting of $(v_1)$, $(v_1v_2)$, and $(v_1v_2v_3)$, where $v_1,v_2,v_3\in\Gamma$. A new transition $\delta'$ is defined as follows. Initially, we define $\delta'(q_0,\cent,z')=\{(q_1,S'z')\}$, where $S'=(S)$ and $z'=(z)$. Consider the case where the top of a new stack contains a new stack symbol $(v_1v_2v_3)$, which indicates that the top three stack symbols of the original computation are $v_1v_2v_3$. If $M$ applies a transition of the form $(q_1,w_1w_2w_3) \in\delta(q_1,a,v_1)$, then we instead apply
$(q_1,(w_1w_2)(w_3v_2v_3)) \in\delta'(q_1,a,(v_1v_2v_3))$. In case of $(q_1,\lambda)\in\delta(q_1,a,v_1)$, we now apply $(q_1,(v_2v_3))\in\delta'(q_1,a,(v_1v_2v_3))$.
The other cases of $\delta'$ are similarly defined.
See, \textit{e.g.},\hspace*{2mm} \cite{HU79} for details.
Overall, we can assume the following extra condition.
\begin{enumerate}\vs{-1}
\item[(5)] for any $a\in\Sigma$, any $v\in\Gamma$, and any $w\in\Gamma^*$, if $(q_1,w)\in \delta(q_1,a,v)$, then $|w|\leq 2$.
\end{enumerate}\vs{-1}
The aforementioned five conditions significantly simplify the proof of Lemma \ref{swapping-lemma-CFL}. In the rest of this paper, we assume that our npda $M$ satisfies these conditions. For each string $x\in S$, we write $ACC(x)$ for the set of all accepting computation paths of $M$ on the input $x$. Moreover, let $ACC_n = \bigcup_{x\in S}ACC(x)$.
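To make the construction of Section \ref{sec:npda} concrete, the following Python sketch (ours, with an illustrative Greibach grammar for $\{0^n1^n\mid n\geq1\}$; the grammar and function names are not taken from the formal development) simulates the npda's nondeterministic stack behavior: reading a symbol $a$ with top variable $A$ applies a rule $A\rightarrow au$ with $|u|\leq 2$, and acceptance requires an exhausted variable stack at the right endmarker, mirroring rules (3) and (4).

```python
def npda_accepts(rules, start, x):
    # Nondeterministic search over a Greibach-form npda: the stack holds
    # variables only; reading symbol a with top variable A applies a rule
    # A -> a u, replacing A by u (with |u| <= 2, as in condition (5)).
    # An empty stack before the end of the input blocks, as in rule (4).
    def run(i, stack):
        if i == len(x):
            return stack == ""    # simulates the $-move of rule (3)
        if stack == "":
            return False
        top, rest = stack[0], stack[1:]
        return any(run(i + 1, u + rest) for u in rules.get((top, x[i]), []))
    return run(0, start)

# illustrative Greibach grammar for {0^n 1^n : n >= 1}:
#   S -> 0SB | 0B,   B -> 1
rules = {("S", "0"): ["SB", "B"], ("B", "1"): [""]}
```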
\subsection{Stack Transitions, Intervals, and Heights}\label{sec:stack-interval}
For the proof of
Lemma \ref{swapping-lemma-CFL}, we wish to present our key lemma, Lemma \ref{height-interval}. To describe this lemma, we need to introduce several necessary notions and notations. An {\em intercell boundary} $i$ refers to a border between two adjacent cells---the $i$th cell and the $i+1$st cell---on our npda's input tape.
We sometimes call the intercell boundary $-1$ the {\em initial intercell boundary} and the intercell boundary $n+1$ the {\em final intercell boundary}.
In what follows, we fix a subset $S\subseteq L\cap\Sigma^n$, a string $x$ in $S$, and a computation path $p$ of $M$ in $ACC(x)$.
Along this path $p$, we assign to intercell boundary $i$
a stack content produced after scanning the $i$th cell and before scanning the $i+1$st cell. For convenience, such a stack content is referred to as the ``stack content at intercell boundary $i$.'' For instance, the stack contents at the initial and final intercell boundaries are both $z$, independent of the choice of accepting paths.
Figure \ref{fig:intercell-boundary} illustrates intercell boundaries and a transition of stack contents.
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=12cm]{intercell-boundary.eps}
\caption{An example of intercell boundaries and stack contents}\label{fig:intercell-boundary}
\end{center}
\end{figure}
An accepting computation path of the npda generates a length-$(n+2)$ series $(s_{-1},s_0,s_1,\ldots,s_n,s_{n+1})$ of stack contents with $s_{-1}= s_n = s_{n+1} =z$ and $s_0=Sz$. We refer to this series as a {\em stack transition} associated with the interval $I_0=[-1,n+1]_{\mathbb{Z}}$. More generally, when $I=[i_0,i_1]_{\mathbb{Z}}$ is a subinterval of $I_0$, we call an associated subsequence $\gamma = (s_{i_0},s_{i_0+1},\ldots,s_{i_1})$
a {\em stack transition} with the interval $I$. We define the {\em height} at intercell boundary $b$ of $\gamma$ to be the length $|s_{b}|$ of the stack content $s_b$ at $b$. By our choice of the npda $M$ given in Section \ref{sec:npda},
the minimal height is $1$.
For our purpose, we hereafter focus our attention only on stack transitions $\gamma$ with intervals $[i_0,i_1]_{\mathbb{Z}}$ in which (i) $\gamma$ has the same height $\ell$ at both of the intercell boundaries $i_0$ and $i_1$ and (ii) all heights within this interval are at least $\ell$. We briefly call such $\gamma$ {\em ideal}.
Let $I=[i_0,i_1]_{\mathbb{Z}}$ be any subinterval of $I_0$ and let $\gamma= (s_{i_0},s_{i_0+1},\ldots,s_{i_1})$ be any ideal stack transition with this interval $I$. For each possible height $\ell$, we define the {\em minimal width}, denoted $minwid_{I}(\ell)$ (resp., the {\em maximal width}, denoted $maxwid_{I}(\ell)$), to be the minimal value (resp., maximal value) $|I'|=i'_1-i'_0$ for which (i) $I'=[i'_0,i'_1]_{\mathbb{Z}}\subseteq I$, (ii) $\gamma$ has height $\ell$ at both intercell boundaries $i'_0$ and $i'_1$, and (iii) at no intercell boundary $i\in I'$, $\gamma$ has height less than $\ell$. Such a pair $(i'_0,i'_1)$ produces a subsequence $\gamma'=(s_{i'_0},s_{i'_0+1},\ldots,s_{i'_1})$ of $\gamma$. In such a case, we say that $\gamma'$ {\em realizes} the minimal width $minwid_{I}(\ell)$ (resp., maximal width $maxwid_{I}(\ell)$).
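These two widths can be computed mechanically from the sequence of heights. The following sketch (Python, purely illustrative; the function name and the brute-force search are ours, not part of the proof) enumerates all qualifying boundary pairs:

```python
def widths(h, ell):
    """Minimal and maximal widths for height ell in a height sequence h.

    A boundary pair (a, b) with a < b qualifies if h[a] == h[b] == ell
    and no boundary in [a, b] has height below ell.  Returns the pair
    (minwid, maxwid), or None when no boundary pair qualifies.
    """
    cand = [b - a
            for a in range(len(h))
            for b in range(a + 1, len(h))
            if h[a] == ell and h[b] == ell and min(h[a:b + 1]) >= ell]
    return (min(cand), max(cand)) if cand else None
```

For the height sequence $(2,3,3,2,4,2)$, for instance, the widths for height $2$ are $(2,5)$ and those for height $3$ are $(1,1)$.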
We say that a stack transition $\gamma$ has a {\em peak at $i$} if $|s_{i-1}|<|s_i|$ and $|s_{i+1}|<|s_i|$. Moreover, $\gamma$ has a {\em flat peak in $(i'_0,i'_1)$} if $|s_{i'_0-1}|<|s_{i'_0}|=|s_{i'_0+1}|=\cdots = |s_{i'_1}|$ and $|s_{i'_1+1}|<|s_{i'_1}|$.
By contrast, we say that $\gamma$ has a {\em base at $i$} if
$|s_{i-1}|>|s_i|$ and $|s_{i+1}|>|s_i|$; $\gamma$ has a {\em flat base in $(i'_0,i'_1)$} if $|s_{i'_0-1}|>|s_{i'_0}|=|s_{i'_0+1}|=\cdots = |s_{i'_1}|$ and $|s_{i'_1+1}|>|s_{i'_1}|$. See Figure \ref{fig:stack-transition} for an example of (flat) peaks and (flat) bases.
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=11.0cm]{stack-transition.eps}
\caption{An example of stack transition with interval $I=[i_0,i_1]_{\mathbb{Z}}$ and height $\ell$}\label{fig:stack-transition}
\end{center}
\end{figure}
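The four notions just defined can also be checked mechanically; the Python sketch below (illustrative only; the naming is ours) classifies the maximal flat runs of a height sequence as peaks, flat peaks, bases, or flat bases:

```python
def peaks_and_bases(h):
    """Classify peaks, flat peaks, bases, and flat bases of a height sequence.

    Returns four lists: peak positions i, flat-peak intervals (i0, i1),
    base positions i, and flat-base intervals (i0, i1), following the
    definitions in the text (strict rise before, strict drop after, etc.).
    """
    peaks, flat_peaks, bases, flat_bases = [], [], [], []
    a = 0
    while a < len(h):
        b = a
        while b + 1 < len(h) and h[b + 1] == h[a]:
            b += 1                     # h[a..b] is a maximal flat run
        if 0 < a and b < len(h) - 1:   # runs touching either end are skipped
            if h[a - 1] < h[a] and h[b + 1] < h[b]:
                (peaks.append(a) if a == b else flat_peaks.append((a, b)))
            if h[a - 1] > h[a] and h[b + 1] > h[b]:
                (bases.append(a) if a == b else flat_bases.append((a, b)))
        a = b + 1
    return peaks, flat_peaks, bases, flat_bases
```

For example, the sequence $(1,2,3,2,2,1,2,1)$ has peaks at boundaries $2$ and $6$ and a base at boundary $5$.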
Finally, we state our key lemma, which holds for any accepting computation path $p$ without any assumption other than $j_0\geq2$ and $2j_0\leq k \leq n$.
\begin{lemma}{\rm [key lemma]}\label{height-interval}
Let $S\subseteq L\cap\Sigma^n$, let $x$ be any string in $S$, and let $p$ be any computation path of $M$ in $ACC(x)$. Assume that $j_0\geq2$ and $2j_0\leq k \leq n$. For any interval $I =[i_0,i_1]_{\mathbb{Z}} \subseteq[-1,n+1]_{\mathbb{Z}}$ with $|I|>k$ and for any ideal stack transition $\gamma$ with the interval $I$ having height $\ell_0$ at the two
intercell boundaries $i_0$ and $i_1$, there are a subinterval $I'=[i'_0,i'_1]_{\mathbb{Z}}$ of $I$ and a height $\ell\in[1,n]_{\mathbb{Z}}$
such that $\gamma$ has height $\ell$ at both intercell boundaries $i'_0$ and $i'_1$, $j_0\leq |I'|\leq k$, and $minwid_{I}(\ell)\leq |I'| \leq maxwid_{I}(\ell)$.
\end{lemma}
\begin{proof}
Fix the five parameters $(x,p,\gamma,\ell_0,I)$ given in the premise of the lemma. We prove the lemma by induction on the number of peaks or flat peaks along the computation path $p$ of $M$ in $ACC(x)$.
\smallskip
(Basis Case) In this particular case, there is either one peak or one flat peak
in $\gamma$ in the interval $I=[i_0,i_1]_{\mathbb{Z}}$. First, we consider the case where there is a peak.
Let $\ell_1$ be the height of such a peak.
Note that $minwid_{I}(\ell) = maxwid_{I}(\ell+1) +2$ for any height $\ell$ with $\ell_0\leq \ell<\ell_1$, because of condition 5
imposed on the npda's transition function $\delta$.
Now, let $\ell'$ be the maximal height satisfying that $minwid_{I}(\ell'+1)\leq j_0 < minwid_{I}(\ell')$.
Such $\ell'$ exists because $|I|>k > j_0$.
Let $I_{min}=[i'_0,i'_1]_{\mathbb{Z}}$ be the {\em minimal} interval such that $\gamma$ has height $\ell'+1$ at the two
intercell boundaries $i'_0$ and $i'_1$. Similarly, let $I_{max} = [i''_0,i''_1]_{\mathbb{Z}}$ be the {\em maximal} interval that satisfies a similar condition with $\ell'+1$, $i''_0$, and $i''_1$.
If $j_0= minwid_{I}(\ell'+1)$, then we choose
the desired interval $I'=I_{min}$ and the height $\ell=\ell'+1$ for the lemma.
If $j_0\leq maxwid_{I}(\ell'+1)$, then we pick
an interval $I'$ satisfying that $I_{min}\subseteq I'\subseteq I_{max}$ and $|I'|=j_0$. We also define $\ell=\ell'+1$ for the lemma.
The remaining case to consider is that $maxwid_{I}(\ell'+1) < j_0 < minwid_{I}(\ell')$. In this case, it follows that
\[
j_0< minwid_{I}(\ell') = maxwid_{I}(\ell'+1)+2 < j_0 + 2 \leq 2j_0 \leq k
\]
since $j_0\geq 2$. Let $I'_{min}=[\hat{i}_0,\hat{i}_1]_{\mathbb{Z}}$ be the minimal interval such that $\gamma$ has height $\ell'$ at the two intercell boundaries $\hat{i}_0$ and $\hat{i}_1$. It is thus enough to define $I'=I'_{min}$ and $\ell= \ell'$ for the lemma.
Next, we consider the case where there is a flat peak in $(i_2,i_3)$ with height $\ell_1$.
If $i_3-i_2 \geq j_0$, then we choose $I'=[i_2,i_2+j_0]_{\mathbb{Z}}$ and $\ell=\ell_1$ for the lemma. The other case where $i_3-i_2 < j_0$ is similar to the ``peak'' case discussed above.
\smallskip
(Induction Step) First, let $c>1$ and consider the case where there are $c$ peaks and/or flat peaks in the given interval $I=[i_0,i_1]_{\mathbb{Z}}$ with $|I|>k$. Choose the lowest base or flat base within this interval.
If we have more than one such base and/or flat base, then we always choose the leftmost one. Now, consider the case where
there is the lowest base at $i_2$ and let $\ell_2$ be the height at $i_2$. Since $\gamma$ is an ideal stack transition, we have $\ell_2\geq \ell_0$. Let $I^*=[i'_0,i'_1]_{\mathbb{Z}}$ be the {\em largest} interval for which the height at both $i'_0$ and $i'_1$ equals $\ell_2$. The choice of $I^*$ implies that $|I^*| = maxwid_{I}(\ell_2)$. We split $I^*$ into two subintervals $I_1=[i'_0,i_2]_{\mathbb{Z}}$ and $I_2 = [i_2,i'_1]_{\mathbb{Z}}$.
If $j_0\leq |I^*|\leq k$, then we set $I'=I^*$ and $\ell=\ell_2$ for the lemma.
If $|I^*|<j_0$, then a similar argument used for the basis case proves the lemma.
Now, assume that $|I^*|> k$. Since $k\geq 2j_0$, at least one of $I_1$ and $I_2$ has size more than $j_0$. We pick such an interval, say $I_3$. Let $\gamma'$ be the unique subsequence of $\gamma$ defined on the interval $I_3$.
If $|I_3|\leq k$,
then we choose $I'=I_3$ and $\ell=\ell_2$ for the lemma.
Let us assume that $|I_3|>k$. By the choice of $I_3$,
$\gamma'$ is an ideal stack transition. Since $\gamma'$ has fewer than $c$ peaks and/or flat peaks, we can apply the induction hypothesis to obtain the lemma.
Next, we consider the other case where there is the lowest flat base in $(i_2,i_3)$. We define $I^*=[i'_0,i'_1]_{\mathbb{Z}}$ as before. Unlike the previous ``lowest base'' case, we need to split $I^*$ into three intervals $I_1=[i'_0,i_2]_{\mathbb{Z}}$, $I_2=[i_2,i_3]_{\mathbb{Z}}$, and $I_3=[i_3,i'_1]_{\mathbb{Z}}$.
If either $|I^*|<j_0$ or $j_0\leq |I^*|\leq k$, then it suffices to apply a similar argument used for the ``lowest base'' case. Now, assume that $|I^*|>k$. Since $k\geq 2j_0$, at least one of the two intervals $I_1\cup I_2$ and $I_3$ has size more than $j_0$.
We pick such an interval. The rest of our argument is similar to the one for the ``lowest base'' case.
\end{proof}
\subsection{Technical Tools}
Let $M=(Q,\Sigma,\Gamma,\delta,q_0,z,F)$ be our npda for $L$ defined in Section \ref{sec:npda}. We have already seen fundamental properties of our npda in Section \ref{sec:stack-interval}. Now, let us begin proving Lemma \ref{swapping-lemma-CFL} by contradiction.
First, we set our swapping-lemma constant $m$ to be $|\Gamma|^2$ and
assume that the conclusion of Lemma \ref{swapping-lemma-CFL} is false for this $m$;
that is, the following assumption (a) holds for the four fixed parameters $(n,j_0,k,S)$ given in the premise of the lemma. We fix these parameters throughout this subsection and the subsequent one.
\begin{itemize}\vs{-1}
\item[(a)] There are no indices $i\in[1,n]_{\mathbb{Z}}$ and $j\in[j_0,k]_{\mathbb{Z}}$ with $i+j\leq n$ and no strings $x =x_1x_2x_3$ and $y=y_1y_2y_3$ in $S$ with $|x_1|=|y_1|=i$, $|x_2|=|y_2|=j$, and $|x_3|=|y_3|$ such that
(i) $x_2\neq y_2$, (ii) $x_1y_2x_3\in L$, and
(iii) $y_1x_2y_3\in L$.
\end{itemize}
Recall that $S$ is a fixed subset of $L\cap\Sigma^n$. Meanwhile, we fix five additional
parameters $x\in S$, $j\in[j_0,k]_{\mathbb{Z}}$, $i\in[1,n-j]_{\mathbb{Z}}$, $v\in\Gamma$, and $p\in ACC(x)$.
As a technical tool, we introduce the notation $G_{i,j,p}(x:v)$.
Roughly speaking, $G_{i,j,p}(x:v)$ expresses the part of the stack content that is newly produced from its original content $vs$ while the head scans the cells indexed between $i+1$ and $i+j$, provided that the npda scans no symbol in $s$. Note that, when the npda is deterministic, the information on $p$ can be discarded, because $p$ is completely determined
by $x$. More precisely, $G_{i,j,p}(x:v)$ denotes
a unique string $t\in\Gamma^*$ (if any) that satisfies the following three conditions, along the computation path $p$ with the input $x$.
\begin{enumerate}\vs{-1}
\item The stack consists of $vs$ at the intercell boundary $i$, where $s\in \Gamma^*$.
\vs{-2}
\item At the intercell boundary $i+j$, the stack consists of $ts$.
\vs{-2}
\item While the head scans any cell indexed between $i+1$ and $i+j$, the npda never accesses any symbol in $s$; that is, no transition of the form $(q_2,w)\in \delta(q_1,a,r)$, where $r$ is a symbol in $s$, is applied.
\end{enumerate}\vs{-1}
With the fixed parameters $(i,j,v,t,p)$ described above,
we use the notation
$T^{(i)}_{j,v,t,p}$ to denote the set $\{x\in S\mid G_{i,j,p}(x:v)=t\}$. A crucial property of $T^{(i)}_{j,v,t,p}$ is stated in the following lemma.
\begin{lemma}\label{change-middle}
We fix $j\in[j_0,k]_{\mathbb{Z}}$, $i\in[1,n-j]_{\mathbb{Z}}$, $p,p'\in ACC_n$, $v\in \Gamma$, and $t\in\Gamma^*$. For any two strings $x,y$ in $S$, if $x\in T^{(i)}_{j,v,t,p}$ and $y\in T^{(i)}_{j,v,t,p'}$, then the two swapped strings $pref_{i}(x)midd_{i,i+j}(y)suf_{n-i-j}(x)$ and $pref_{i}(y)midd_{i,i+j}(x)suf_{n-i-j}(y)$ are both in $L$.
\end{lemma}
\begin{proof}
Assume that the npda's stack consists of $vs$ (resp., $vs'$) at an intercell boundary $i$ along an accepting path $p$ (resp., $p'$) on an input $x$ (resp., $y$). Since $x\in T^{(i)}_{j,v,t,p}$ (resp., $y\in T^{(i)}_{j,v,t,p'}$), the npda generates a stack content $ts$
(resp., $ts'$) at the intercell boundary $i+j$.
Note that, while the head scans the cells indexed between
$i+1$ and $i+j$, the npda never accesses $s$ (resp., $s'$) along the path $p$ (resp., $p'$).
Hence, we
can swap the two substrings $midd_{i,i+j}(x)$ and $midd_{i,i+j}(y)$ written in the cells indexed between
$i+1$ and $i+j$ in $x$ and $y$, respectively, without changing the acceptance condition of the npda. Therefore, the npda accepts the two strings $pref_{i}(x)midd_{i,i+j}(y)suf_{n-i-j}(x)$ and $pref_{i}(y)midd_{i,i+j}(x)suf_{n-i-j}(y)$. This implies the conclusion of the lemma.
\end{proof}
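The swap used in this proof is a purely combinatorial operation on strings. As a small illustration (assuming, as the statement of the lemma suggests, that $pref_i$, $midd_{i,i+j}$, and $suf_{n-i-j}$ denote the length-$i$ prefix, the block written in cells $i+1,\ldots,i+j$, and the length-$(n-i-j)$ suffix, respectively):

```python
def swap_middles(x, y, i, j):
    """Exchange the middle blocks midd_{i,i+j} of two equal-length strings.

    pref_i is the length-i prefix, midd_{i,i+j} occupies cells i+1..i+j
    (i.e., x[i:i+j]), and suf_{n-i-j} is the remaining suffix (n = |x|).
    Returns (pref_i(x) midd_{i,i+j}(y) suf_{n-i-j}(x),
             pref_i(y) midd_{i,i+j}(x) suf_{n-i-j}(y)).
    """
    assert len(x) == len(y) and 0 <= i and i + j <= len(x)
    return (x[:i] + y[i:i + j] + x[i + j:],
            y[:i] + x[i:i + j] + y[i + j:])
```

For instance, swapping the middle blocks of "abcdef" and "uvwxyz" with $i=j=2$ yields "abwxef" and "uvcdyz".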
Recall from Section \ref{sec:CFL} the notation $S_{i,w}$. Now, we consider the following statement.
\begin{itemize}\vs{-1}
\item[(b)] There are no $j\in[j_0,k]_{\mathbb{Z}}$, $i\in[1,n-j]_{\mathbb{Z}}$, $p,p'\in ACC_n$, $t\in\Gamma^*$, or $u,w\in\Sigma^{j}$ such that $u\neq w$, $T^{(i)}_{j,v,t,p}\cap S_{i,u} \neq\mathrm{\O}$, and $T^{(i)}_{j,v,t,p'}\cap S_{i,w} \neq\mathrm{\O}$.
\end{itemize}\vs{-1}
\begin{lemma}\label{statement-b}
The statement (a) implies the statement (b).
\end{lemma}
\begin{proof}
Assume the statement (a). To show the statement (b),
let us assume on the contrary
that the statement (b) does not hold.
This means that certain parameters $(i,j,p,p',t,u,w)$ satisfy
the following conditions:
$u\neq w$, $T^{(i)}_{j,v,t,p}\cap S_{i,u} \neq\mathrm{\O}$, and $T^{(i)}_{j,v,t,p'}\cap S_{i,w} \neq\mathrm{\O}$.
Now, we take two strings $x\in T^{(i)}_{j,v,t,p}\cap S_{i,u}$ and $y\in T^{(i)}_{j,v,t,p'}\cap S_{i,w}$. Notice that $|u|=|w|=j$. Since $x\in S_{i,u}$ and $y\in S_{i,w}$, it follows that $u = midd_{i,i+j}(x)$ and $w = midd_{i,i+j}(y)$. Lemma \ref{change-middle} then implies that the swapped strings $x' = pref_{i}(x)midd_{i,i+j}(y) suf_{n-i-j}(x)$ and $y' = pref_{i}(y) midd_{i,i+j}(x) suf_{n-i-j}(y)$ are both in $L$. This contradicts the statement (a). Therefore, the statement (b) holds.
\end{proof}
{}From Lemmas \ref{change-middle} and \ref{statement-b}, the choice of accepting paths for strings in $S$ is of little importance. Hence, for later convenience, we write $T^{(i)}_{j,v,t}$ to denote the union $\bigcup_{p\in ACC_n} T^{(i)}_{j,v,t,p}$.
\subsection{Closing Argument}
Under our assumption that Lemma \ref{swapping-lemma-CFL} is false,
we want to derive a contradiction,
which immediately proves the lemma. To achieve this goal, we utilize Lemma \ref{height-interval} given in Section \ref{sec:stack-interval}.
Notice that, by Lemma \ref{statement-b},
the statement (b) now holds. Recall also that the four parameters $(n,j_0,k,S)$ are fixed throughout our proof.
For our convenience, we write $\Delta$ for the index set $\{(i,j,v,w)\mid j\in[j_0,k]_{\mathbb{Z}},i\in[1,n-j]_{\mathbb{Z}},v,w\in\Gamma\}$. The cardinality of this set $\Delta$ is bounded as
\[
|\Delta| \leq (k-j_0+1)(n-j_0+1)|\Gamma|^2 = m(k-j_0+1)(n-j_0+1)
\]
since $m=|\Gamma|^2$.
We want to assign each string $x$ in $S$ to a certain element $(i,j,v,w)$ in $\Delta$. For this purpose, we first show the following lemma, which can be obtained immediately from Lemma \ref{height-interval}.
\begin{lemma}\label{good-interval}
Assume that the statement (a) holds.
Let $j_0$ and $k$ satisfy that $j_0\geq 2$ and $2j_0 \leq k\leq n$.
For any string $x\in S$, there is an element $(i,j,v,w)\in \Delta$ for which $x\in T^{(i)}_{j,v,w}$.
\end{lemma}
\begin{proof}
Let $x$ be any string in $S$ and consider any computation path $p$ of the npda $M$ in $ACC(x)$.
We denote by $\gamma$ the stack transition of $M$ with the interval $I_0=[-1,n+1]_{\mathbb{Z}}$.
Lemma \ref{height-interval} guarantees, in the stack transition $\gamma$, the existence of a subinterval $I'=[i'_0,i'_1]_{\mathbb{Z}}$ and a height $\ell$ satisfying that $j_0\leq |I'| \leq k$, $minwid_{I_0}(\ell)\leq |I'| \leq maxwid_{I_0}(\ell)$, and $\gamma$ has height $\ell$ at both of the intercell boundaries $i'_0$ and $i'_1$.
We define the desired index values as $i=i'_0$ and $j = |I'|$ ($=i'_1-i'_0$).
Now, let us consider the changes of stack contents while $M$'s head scans through the interval $I'$. Let $vs$ be the stack content at the intercell boundary $i'_0$ and let $ws'$ be the stack content at $i'_1$,
where $v,w\in\Gamma$ and $s,s'\in\Gamma^*$. Note that $\ell=|vs|=|ws'|$. Since all heights in $I'$ are at least $\ell$, within this interval $I'$, $M$ does not access any symbol in $s$; hence, we conclude that $s=s'$. This implies that $G_{i,j,p}(x:v)$ equals $w$. Therefore, $x$ should be in $T^{(i)}_{j,v,w}$.
\end{proof}
Lemma \ref{good-interval} establishes a key association of strings in $S$ with elements in $\Delta$. Using this association, we introduce a map $e$ from $S$ to $\Delta$. For each string $x$ in $S$, assuming an appropriate lexicographic order for $\Delta$, we denote by $e(x)$ the {\em minimal} element $(i,j,v,w)\in \Delta$ satisfying that $x\in T^{(i)}_{j,v,w}$.
Notice that this minimality requirement
makes $e(x)$ uniquely determined by $x$.
With this map $e$, we define $A_{i,j,v,w}$ as the set
$\{x\in S\mid e(x) = (i,j,v,w)\}$. Obviously, it follows that $A_{i,j,v,w}\subseteq T^{(i)}_{j,v,w}$.
Now, we claim the following property of the map $e$.
\begin{claim}\label{encoding}
There exist two strings $x,y\in S$ and also
two strings $u,z\in\Sigma^{j}$ such that $u\neq z$,
$x\in S_{i,u}$, $y\in S_{i,z}$, and $e(x)=e(y)=(i,j,v,w)$.
\end{claim}
If this claim is true, then we take the four strings $(x,y,u,z)$ given in the claim. Since $e(x)=e(y)=(i,j,v,w)$, we obtain $x,y\in A_{i,j,v,w}$.
Since $A_{i,j,v,w}\subseteq T^{(i)}_{j,v,w}$, it follows that
$x\in T^{(i)}_{j,v,w}\cap S_{i,u}$ and $y\in T^{(i)}_{j,v,w}\cap S_{i,z}$.
This obviously contradicts the statement (b)
and hence the statement (a).
Therefore, Lemma \ref{swapping-lemma-CFL} should hold.
It remains to verify Claim \ref{encoding}. Let us prove the claim.
Since $e(\cdot)$ is a map from $S$ to $\Delta$, there is a certain element $(i,j,v,w)\in\Delta$ satisfying that $|A_{i,j,v,w}|\geq |S|/|\Delta|$.
Fix such an element $(i,j,v,w)$.
For any string $u\in\Sigma^{j}$, since $|\Delta| \leq m(k-j_0+1)(n-j_0+1)$,
we obtain
\[
|S_{i,u}|< \frac{|S|}{m(k-j_0+1)(n-j_0+1)} \leq \frac{|S|}{|\Delta|} \leq |A_{i,j,v,w}|,
\]
where the first inequality is one of the premises of Lemma \ref{swapping-lemma-CFL}.
Since $S=\bigcup_{u\in\Sigma^j} S_{i,u}$ and $A_{i,j,v,w}\subseteq S$, the above inequality implies that $A_{i,j,v,w}$ cannot be contained in any single set $S_{i,u}$; hence, there are at least two distinct strings $u,z\in\Sigma^j$ for which certain strings $x\in S_{i,u}$ and $y\in S_{i,z}$ are mapped by $e$ to the same element $(i,j,v,w)$. This completes the proof of the claim and therefore completes the proof of Lemma \ref{swapping-lemma-CFL}.
\bigskip\bs
\noindent{\bf Acknowledgments.}
The author thanks Satoshi Okawa and Francois Le Gall for a helpful
discussion on a fundamental structure of context-free languages.
\bibliographystyle{alpha}
\section{Introduction}
In the standard formulation of quantum mechanics, the statistical outcomes of
an ideal measurement of an observable $\hat{A} |a\rangle = a |a\rangle$ are
described by the spectral measure [1]:
\begin{equation}
p_{\psi}(a) =|\langle a | \psi \rangle|^{2}
\end{equation}
where $|\psi\rangle \in H$ is the
state vector of the measured system. The spectral measure contains all the
relevant statistical information about the system, but it makes no reference
to the apparatus employed in the actual measurement. Because of this property
we shall refer to $\hat A$ as an intrinsic quantum observable.
A realistic experiment necessarily involves additional degrees of freedom
[2] which eventually enable the experimenter to convert the raw data into an
operational {\em propensity\/} density,
$\Pr ({a})$ of a classical variable $a$ [3-4]. This propensity
depends on the state of the system and on all the devices used in a realistic
detection scheme. All these additional devices will be referred to as a
filter $\cal F$, that represents the experimental setup required for the
measurement of the observable $\hat{A}$. The measuring device is
described by the following positive and hermitian operator $\hat{\cal F}(a)$
that satisfies the relation:
\begin{equation}
\int da\ \hat{\cal F}(a) =1.
\end{equation}
In terms of this filter operator the propensity is
\begin{equation}
\Pr ({a})=\langle\hat{\cal F}(a) \rangle .
\end{equation}
We see that in a realistic measurement the spectral decomposition of $\hat A$
is effectively replaced by a positive operator valued measure (POVM) $ da\
\hat{\cal F}(a)$ [5]. In view of the linear relation between the propensity
and the POVM, the operational statistical moments of the measured quantity
are:
\begin{equation}
\label{oodef}
\overline{{a}^n}
=\int\! {d}{a}\,{a}^n\Pr ({a})\ =\int\! {d}{a}\,{a}^n\langle\hat{\cal
F}(a)\rangle = \langle\hat A^{(n)}_{\cal F}\rangle
\end{equation}
where
\begin{equation}
\hat A^{(n)}_{\cal F}=\int\! {d}{a}\,{a}^n\hat{\cal F}(a)\,,
\end{equation}
defines a unique set of operational observables associated with a given POVM
for a given filter $\cal{F}$ [6].
As a rule, the algebraic properties of the $\hat A^{(n)}_{\cal F}$ operators
are quite different from those of the powers of $\hat A$. In particular, a
factorization is typically impossible, so that, for instance,
$\hat A^{(2)}_{\cal F}$ does not equal $(\hat A^{(1)}_{\cal F})^2$.
It is the purpose of this work to provide an explicit construction of the
POVM and the associated set of operational observables for two distinct
systems, both leading to an operational algebra of sine and cosine
operators. The first system will be related to phases of the spin [7] probed
by the so-called Malus filter [8], and the second system will be an optical
field probed by the so-called homodyne filter [9].
In both cases we shall derive operational operators corresponding to the
phases of the spin $s$ or of the optical field. These operational
observables will define an operational quantum trigonometry of the
corresponding phase measurements.
\section{Spin Operational Observables}
In this section we derive operational operators of the spin phases and
describe a simple idealized experimental scheme leading to such operational
observables.
This experiment is based on the Malus law for spin. This law predicts that
the transmission of a spin-$\frac{1}{2}$ through a measuring apparatus is
given
by $\cos^{2}\frac{ \alpha}{2}$, where $\alpha$ is the relative angle between
the orientation of the detected spin and the orientation of the Stern-Gerlach
polarizer. This
property can be generalized to a system with arbitrary spin $s$. We shall
assume that such a system is described by spin coherent states
$|\Omega\rangle$, where the solid angle characterizes an arbitrary spin
orientation on a unit sphere. These spin $s$ coherent states are obtained by
a rotation of the maximum ``down'' spin state $|s,-s\rangle$ [10]:
\begin{equation}
|\Omega \rangle =\exp( \tau {\hat S}_{+} -
\tau^{*} {\hat S}_{-})|s,-s\rangle ,
\end{equation}
where $\tau=\frac{1}{2}\theta e^{-i\phi}$ and ${\hat S}_{\pm}$ are
the spin-$s$ ladder operators.
The spin coherent states form an overcomplete set of states on the Bloch
sphere:
\begin{equation}
\frac{2s+1}{4\pi} \int d \Omega\
|\Omega\rangle \langle \Omega|= I.
\end{equation}
Using these formulas, it is easy to obtain the
Malus probability for a transmission of
such a state through a Stern-Gerlach apparatus with orientation
$\Omega^{\prime}$. As a result one obtains:
\begin{equation}
p =|\langle \Omega|\Omega^{\prime}\rangle|^{2} =
(\cos \frac{\alpha}{2})^{4s} .
\end{equation}
This quantum mechanical expression for
the transmission function provides a generalization of the
spin-$\frac{1}{2}$ Malus Law to the case of an arbitrary
spin $s$. A measurement leading to the Malus law can be easily constructed at
least in principle. Let us assume that the Hilbert space of the system is
extended by a filtering device (another spin-$s$) initially in the ``down''
spin state. A measurement is described by a dynamical process which generates
a correlation between the system being detected and the measuring filter. Due
to the unitarity of the interaction with the filter, it is possible to
select the interaction parameters in such a way, that the wave function of
the combined system evolves in the following way:
\begin{equation}
|s,-s\rangle_{\cal F} \otimes|\Omega\rangle\ \rightarrow \
|\Omega\rangle_{\cal F} \otimes|s,-s\rangle
\end{equation}
From this relation it is clear that a measurement of the filter orientation
leads to the spin Malus law, which in the space of the detected spin is
equivalent to the following propensity:
\begin{equation}
\label{propensity-spin}
\Pr(\Omega^{\prime})=\frac{2s+1}{4\pi}
|\langle\Omega^{\prime}|\Omega\rangle|^{2}.
\end{equation}
This relation shows that the corresponding POVM is just:
\begin{equation}
\hat{\cal F}(\Omega)= \frac{2s+1}{4\pi}
|\Omega \rangle \langle \Omega | \ \ {\rm with} \ \ \int d\Omega \ \hat{\cal
F}(\Omega) =1.
\end{equation}
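The generalized Malus law above lends itself to a direct numerical check. The sketch below (an illustration; the helper names, the basis ordering $m=-s,\ldots,s$, and the use of an eigendecomposition in place of a matrix exponential are our choices) builds the spin coherent states from the ladder operators and verifies $p=\cos^{4s}(\alpha/2)$:

```python
import numpy as np

def ladder_plus(s):
    """Matrix of S_+ in the basis |s,m>, m = -s, ..., s."""
    dim = int(round(2 * s)) + 1
    m = -s + np.arange(dim)
    Sp = np.zeros((dim, dim))
    for k in range(dim - 1):
        Sp[k + 1, k] = np.sqrt(s * (s + 1) - m[k] * (m[k] + 1))
    return Sp

def coherent(s, theta, phi):
    """|Omega> = exp(tau S_+ - tau* S_-)|s,-s> with tau = (theta/2) e^{-i phi}."""
    Sp = ladder_plus(s)
    tau = 0.5 * theta * np.exp(-1j * phi)
    M = tau * Sp - np.conj(tau) * Sp.T      # anti-Hermitian generator
    lam, V = np.linalg.eigh(1j * M)         # 1j*M is Hermitian
    expM = V @ np.diag(np.exp(-1j * lam)) @ V.conj().T
    e0 = np.zeros(len(Sp), dtype=complex)
    e0[0] = 1.0                             # |s,-s> is the first basis vector
    return expM @ e0

# Malus probability against an analyzer pointing "down" (theta' = 0):
alpha = 0.7
for s in (0.5, 1.0, 1.5):
    p = abs(coherent(s, alpha, 0.0)[0]) ** 2
    print(s, np.isclose(p, np.cos(alpha / 2) ** (4 * s)))
```

The overlap with the ``down'' analyzer state reproduces $\cos^{4s}(\alpha/2)$ for every spin tested.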
Having this simple picture of spin measurement we will look for
quantum operational observables connected with the Malus experiment.
There is a variety of operational operators that can be associated with
such phase measurements. For example the statistical moments of the azimuthal
orientation are given by:
\begin{equation}
\overline{\cos^{n}\theta}
=\int\! {d}{\Omega}\, \cos^{n}\theta \Pr ({\Omega})\ = \langle\hat
{\Theta}^{(n)}_{\cal F}\rangle
\end{equation}
where the operational azimuthal cosine operators
\begin{equation}
\hat {\Theta}^{(n)}_{\cal F}=\int\! {d}{\Omega}\, \cos^{n}\theta\hat{\cal
F}(\Omega)\,,
\end{equation}
define a unique set of operational observables associated with this POVM.
All integrals in this expression can be calculated and we obtain:
\begin{equation}
\hat{\Theta}^{(n)}_{\cal F}={\rm F}(-n,s+\hat S_{3}+1,2s+2;2),
\end{equation}
where ${\rm F}(a,b,c;z)$ is a hypergeometric
function. The first two operational azimuthal cosine operators are:
\begin{eqnarray}
\hat{\Theta}^{(1)}_{\cal F}&=&-\frac{1}{1+s}\hat{S}_{3}, \\
\hat{\Theta}^{(2)}_{\cal F}&=&\frac{2}{(1+s)(3+2s)}\hat{S}_{3}^{2}+
\hat{1}\frac{1}{3+2s}.
\end{eqnarray}
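For $s=\frac{1}{2}$ these two formulas can be checked by direct quadrature of the defining integral, using the explicit coherent-state components $\langle\uparrow|\Omega\rangle=e^{-i\phi}\sin(\theta/2)$ and $\langle\downarrow|\Omega\rangle=\cos(\theta/2)$ (a numerical illustration only; the grid sizes and tolerances are our choices):

```python
import numpy as np

s, N = 0.5, 400
theta = (np.arange(N) + 0.5) * np.pi / N        # midpoint rule in theta
phi = np.arange(N) * 2 * np.pi / N              # periodic sum in phi
T, P = np.meshgrid(theta, phi, indexing="ij")
comp = [np.exp(-1j * P) * np.sin(T / 2),        # <up|Omega>
        np.cos(T / 2)]                          # <down|Omega>
w = (2 * s + 1) / (4 * np.pi) * np.sin(T) * (np.pi / N) * (2 * np.pi / N)

def moment(f):
    """The 2x2 matrix of \\int dOmega f(theta) F(Omega) for s = 1/2."""
    return np.array([[np.sum(f(T) * w * comp[a] * np.conj(comp[b]))
                      for b in range(2)] for a in range(2)])

S3 = np.diag([0.5, -0.5])
ok1 = np.allclose(moment(np.cos), -S3 / (1 + s), atol=1e-3)
ok2 = np.allclose(moment(lambda t: np.cos(t) ** 2),
                  2 * (S3 @ S3) / ((1 + s) * (3 + 2 * s)) + np.eye(2) / (3 + 2 * s),
                  atol=1e-3)
print(ok1, ok2)
```

Both quadratures agree with the closed-form operators above to the accuracy of the grid.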
In the same way one can construct a corresponding set of operators describing
the operational properties of the polar coordinate of the spin system.
Statistical moments of the polar angle
\begin{equation}
\overline{\exp(in\varphi)}
=\int\! {d}{\Omega}\, \exp(in\varphi) \Pr ({\Omega})\ = \langle\hat
{E}^{(n)}_{\cal F}\rangle
\end{equation}
lead to the following
operational set of polar phasors defined as
\begin{equation}
\hat{E}^{(n)}_{\cal F}= \int d\Omega\,
e^{in\varphi}\,\hat{{\cal F}}(\Omega)\, .
\end{equation}
We assume that $n$ is a positive integer and that $\hat{E}^{(-n)}_{\cal F}=\hat{E}^{(n)\:\dagger}_{\cal F}$. Simple calculations give
\begin{equation}
\label{exp}
\hat{E}^{(n)}_{\cal F}=\hat{S}^{n}_{+}
\frac{\Gamma(s-\hat{S}_{3} +1-n/2)\Gamma(s+\hat{S}_{3}+1+n/2)}
{\Gamma(s+\hat{S}_{3}+n+1) \Gamma(s-\hat{S}_{3}+1)},
\end{equation}
with $\hat{E}^{(n)}_{\cal F}=0$ for $n>2s$ .
The first two moments are given by
\begin{eqnarray}
\hat{E}^{(1)}_{\cal F}&=&\hat{S}_{+}\frac{\Gamma(s-\hat{S}_{3}+1/2)
\Gamma(s+\hat{S}_{3}+3/2)}
{\Gamma(s+\hat{S}_{3}+2) \Gamma(s-\hat{S}_{3}+1)}, \\
\hat{E}^{(2)}_{\cal F}&=&
\hat{S}^{2}_{+}\frac{1}{(s+\hat{S}_{3}+2)(s-\hat{S}_{3})}.
\end{eqnarray}
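As a consistency check, for $s=\frac{1}{2}$ the only nonvanishing matrix element of $\hat{E}^{(1)}_{\cal F}$ is $\langle\uparrow|\hat{E}^{(1)}_{\cal F}|\downarrow\rangle$; the sketch below (illustrative) evaluates it both from the $\Gamma$-function formula and from the defining integral, obtaining $\pi/4$ in each case:

```python
import numpy as np
from math import gamma, pi, isclose

s = 0.5
m = -0.5            # the S_3 eigenvalue acted on by S_+
# Gamma-function formula (times <up|S_+|down> = 1):
formula = gamma(s - m + 0.5) * gamma(s + m + 1.5) \
          / (gamma(s + m + 2) * gamma(s - m + 1))

# Defining integral: the e^{i phi} factor cancels the phase of <up|Omega>,
# so the phi integral contributes 2*pi; a midpoint rule handles theta.
N = 2000
theta = (np.arange(N) + 0.5) * pi / N
quad = (2 * s + 1) / (4 * pi) * 2 * pi * np.sum(
    np.sin(theta / 2) * np.cos(theta / 2) * np.sin(theta)) * pi / N

print(isclose(formula, pi / 4), np.isclose(quad, pi / 4, atol=1e-5))
```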
So far we have derived operational operators associated only with polar
$\varphi$ and azimuthal $\theta$ directions of the spin. In the same way,
from the statistical properties of the spin propensity, it is possible to
derive operational spin operators. These operators correspond to Malus
measurements of unit directions with
a filter defined by a spin coherent state POVM.
We can parameterize the three spin coordinates by a solid angle on a
unit sphere in the following way
\begin{eqnarray}
\hat{S}_{1}\longrightarrow\cos\varphi\sin\theta,\,\,\,\,
\hat{S}_{2}\longrightarrow\sin\varphi\sin\theta,\,\,\,\,
\hat{S}_{3}\longrightarrow -\cos\theta.
\end{eqnarray}
The corresponding spin operational operators may be naturally defined as
follows
\begin{equation}
\nonumber
\hat{\Sigma}^{(n)}_{i}=\int d\Omega\,(n_{i})^{n}\,
\hat{{\cal F}}(\Omega)\,,\,\,\,\,\, i=1,2,3\,,
\end{equation}
where $\vec{n}=(\cos\varphi\sin\theta,\,\sin\varphi\sin\theta,\,-\cos\theta)$
is a unit vector.
In further discussion we concentrate only on the first two operators from
the whole operational spin algebra
\begin{eqnarray}
\hat{\Sigma}^{(1)}_{i}&=&\frac{1}{1+s}\hat{S}_{i}, \\
\hat{\Sigma}^{(2)}_{i}&=&\frac{2}{(1+s)(3+2s)}\hat{S}_{i}^{2}+
\hat{1}\frac{1}{3+2s}.
\end{eqnarray}
The operational spin observables are proportional to the intrinsic spin
operators; however, they are modified due to the noise imposed by
the measuring apparatus forming the Malus analyzer.
\section{Squeezed Quantum Trigonometry}
As a second example of a possible application of the presented
formalism we give a theoretical description and generalization of the
experiments recently
performed by Noh, Foug\`eres and Mandel [9].
The authors have used an eight-port homodyne detector (NFM apparatus)
in order to measure the relative phase between two classical or quantum
light fields.
In such an experiment we measure the difference of the photon counts on two
pairs of detectors. This quantity is either related
to the sine and cosine of the phase difference of two
classical electromagnetic fields (classical case) or may be used to
define the set of operational operators associated with an arbitrary
classical function of the phase (quantum case).
Particularly, we can find quantum analogs of trigonometric
functions and their powers obtaining the so called ``quantum trigonometry''
[11].
Below we derive such a ``quantum trigonometry'' for a modified NFM
apparatus. We will assume here that the additional noise
coming through the two free unused ports of the NFM experimental device is
described by a squeezed vacuum state (in the original NFM experiment this
field was the vacuum state). Because of this, the resulting operational
algebra will be represented by a ``squeezed quantum trigonometry''.
In our case the propensity $\Pr(\varphi;s,\phi)=\Pr(\varphi+2\pi;s,\phi)$
is a periodic function of the phase, normalized to unity in the following way
\begin{equation}
\label{normalization}
\int \frac{d\varphi}{2\pi}\Pr(\varphi;s,\phi)=1\,.
\end{equation}
By $s$ and $\phi$ we denote the amplitude and the phase of the squeezed
vacuum.
\begin{figure}[t]
\begin{center}
\begin{tabular}{rcl}
\put(84,45){$I$}
\put(225,80){$\varphi$}
\leavevmode \epsfxsize=9.0cm \epsffile{cos1.eps} \\
\end{tabular}
\end{center}
\centerline{Fig. 1: Wigner function of operational operator
$\hat{C}^{(2)}(s=0.5,\phi=\pi/2)$.}
\label{c1}
\end{figure}
\begin{figure}[t]
\begin{center}
\begin{tabular}{rcl}
\put(84,45){$I$}
\put(225,80){$\varphi$}
\leavevmode \epsfxsize=9.0cm \epsffile{cos2.eps} \\
\end{tabular}
\end{center}
\centerline{Fig. 2: Wigner function of operational operator
$\hat{C}^{(2)}(s=1.5,\phi=0)$.}
\label{c2}
\end{figure}
In accordance with the general scheme (\ref{oodef}) the phasor operators
corresponding to an operational squeezed quantum trigonometry are defined
as follows
\begin{equation}
\label{meanv-phasor-def}
\overline{e^{in\varphi}}\equiv\langle \hat{E}^{(n)}(s,\phi)\rangle,
\;\;\; n=\pm1, \pm2, \ldots\,.
\end{equation}
As in [11], we assume from the beginning that the local
oscillator (reference field) is a strong coherent laser field and therefore
we are allowed to neglect the noise of the input field mixed
by the beam splitter with the reference signal.
We start with the formula obtained in [11]
\begin{equation}
\label{phasor}
\hat{E}^{(n)}(s,\phi)=
{\rm Tr}_{v}\left\{\frac{(\hat{b}+\hat{v}^{\dagger})^{n}}
{\left((\hat{b}+\hat{v}^{\dagger})
(\hat{b}^{\dagger}+\hat{v})\right)^{\frac{n}{2}}}
\hat{S}(s,\phi)|0_{v}\rangle\langle 0_{v}|\hat{S}^{\dagger}(s,\phi)
\right\}\,,
\end{equation}
and adapt it for our purpose using the unitary operator $\hat{S}(s,\phi)$,
which generates the squeezed vacuum state $|s\rangle\langle s|$ from the
vacuum state $|0\rangle\langle 0|$ [12].
The bosonic creation and annihilation operators
$\hat{v}^{\dagger}$, $\hat{v}$ represent an additional degree of freedom
associated with the squeezed vacuum input at the beam splitter,
by $\hat{b},\;\hat{b}^{\dagger}$ we denote the creation and the annihilation
operators of the signal field. Because of the fact, that the trace
${\rm Tr}_v$ in (\ref{phasor}) is invariant under any unitary transformation,
variables connected with the local oscillator can be removed and the only
place in (\ref{phasor}), where the reference field contributes is the
operator $\hat{S}(s,\phi)$ with $\phi$ shifted by an unimportant phase that
we shall ignore.
Using various properties of coherent and squeezed states
[12], the trace in (\ref{phasor}) can
be calculated and we have
\begin{equation}
\label{phasor-int}
\hat{E}^{(n)}(s,\phi)=
\int\frac{d^{2}w}{\pi}\frac{w^{n}}{(w^{\ast}w)^{\frac{n}{2}}}
|w,s\rangle\langle w,s|,
\end{equation}
where $|w,s\rangle ={\hat D}(w) \hat{S}(s,\phi) |0\rangle$ is the squeezed
coherent state with amplitude $w$ and squeezing parameters $s$ and $\phi$.
According to our terminology,
$\hat{{\cal F}}(s,\phi)=|w,s\rangle\langle w,s|$
is the positive operator valued measure (POVM) associated with
the described experimental scheme.
An exact formula for the phasor may be derived straight
from (\ref{phasor-int}). Recalling the unity decomposition for
the (squeezed) coherent states we obtain
\begin{equation}
\label{phasor-formula}
\hat{E}^{(n)}(s,\phi)=\hat{S}(s,-\phi)
\vdots\frac{(\hat{b}\cosh s-\hat{b}^{\dagger}e^{i\phi}\sinh s)^{n}}
{\left[(\hat{b}\cosh s-\hat{b}^{\dagger}e^{i\phi}\sinh s)
(\hat{b}^{\dagger}\cosh s-\hat{b}e^{-i\phi}\sinh s)\right]^{\frac{n}{2}}}
\vdots\hat{S}^{\dagger}(s,-\phi).
\end{equation}
When the squeezing parameter $s$ tends to zero, the above formula reduces to
the results obtained in reference [11].
The propensity density can be simply derived from (\ref{phasor-int}). If
we set $ w= \sqrt{I} \exp(i\varphi)$, we obtain the propensity in the form of
the following marginal integration
\begin{equation}
\Pr(\varphi;s,\phi)=\int_{0}^{\infty}dI\ \langle w,s|\hat{\rho}|w,s\rangle.
\end{equation}
It is clear that the propensity, contrary to the quantum-mechanical
probability density, depends on the experimental device. In fact,
for each value of the squeezing parameter $s$ we obtain a
different propensity $\Pr(\varphi;s,\phi)$ and a different phasor basis,
even though the probe field remains unchanged.
As is easy to see, the phasors (\ref{phasor-formula}) are not Hermitian
operators, so they cannot correspond to observable quantities.
Nevertheless, using the phasor basis (\ref{phasor-formula})
it is possible to define natural ``trigonometric operators'', whose
mean values can be measured in a real experiment --- for $s=0$ they
have actually been measured by Noh et al.\ [9].
For example, the first two ``cosine'' operators are defined in the following way
\begin{eqnarray}
\label{trigfun-def}
\hat{C}^{(1)}\equiv\frac{1}{2}(\hat{E}^{(1)}+\hat{E}^{(-1)}), \nonumber \\
\label{trigfun}
\hat{C}^{(2)}\equiv\frac{1}{2}+\frac{1}{4}(\hat{E}^{(2)}+\hat{E}^{(-2)}).
\end{eqnarray}
In a similar manner we can find moments of ``sine'' operators
or, if needed, of any periodic function of the phase, provided
we know its Fourier decomposition. Replacing the Fourier components
$\exp{(in\varphi)}$ by the corresponding $n$-th phasors, we construct in
this way the operational operator corresponding to an arbitrary function of
the phase.
In order to investigate the properties of the phasors we evaluate
(numerically) the corresponding Wigner functions of these operators.
Examples of such Wigner functions are presented in
fig. 1 and fig. 2.
First we notice that, according to the terminology introduced in [13],
the phasors have a proper classical limit. If the incoming intensity of the
signal field tends to infinity, the Wigner functions of the operational
phasors reproduce a classical trigonometry.
This limit can be seen in fig. 1 and fig. 2.
Comparing both figures, we observe that an increase of $s$ causes
a reduction of the phasor amplitude.
As might have been expected, the dependence on
the squeezing parameters disappears in the classical limit.
In the limit of very small $I$, and with the squeezed phase $\phi$ equal to
$0$ or $\pi$ we have
\begin{equation}
C^{(1)}_{W}(s,\phi)\stackrel{I\rightarrow 0}{=}
\sqrt{I}{\cal A}_{0,\pi}(s)\cos\theta
\end{equation}
where ${\cal A}_{0,\pi}$ are two amplitudes of the Wigner function
that depend only on the squeezing parameter $s$ and on whether $\phi=0$ or $\phi=\pi$.
This result shows that the amplitude of the cosine Wigner function is
literally ``squeezed''.
For arbitrary values of $\phi$ the separation of the cosine Wigner function
into an amplitude and a purely angle-dependent part is no longer possible,
but a clear squeezing of the amplitude is also observed [14]. For $I=0$ the
Wigner cosine function is zero, which is in agreement with the property that
the phase of a light field in the vacuum state is randomly distributed.
Similarly we can find an asymptotic expression for $C^{(2)}_{W}(s,\phi)$.
In the limit of small $I$
\begin{equation}
C^{(2)}_{W}(s,\phi)=\frac{1}{2}(1-c(s,\phi)),
\end{equation}
where $c(s,\phi)$ is an $I$-independent function of the squeezed parameters.
It is easy to check that $c(s,\phi)$ tends either to unity ($\phi=0$)
or to minus unity ($\phi=\pi$).
As a result, for small intensities $C^{(2)}_{W}(s,\phi)$
becomes zero ($\phi=0$) or one ($\phi=\pi$). For
$\phi=\pi/2,\,3\pi/2$ $c(s,\phi)=0$ and
$\lim_{I\rightarrow 0}C^{(2)}_{W}(s,\phi)=1/2$.
For small $I$ the squeezing
influences the system very strongly. If the squeezed phase $\phi$ equals
$\pi/2$, the Wigner
function of the phasor $\hat{C}^{(2)}$ tends to $1/2$
(fig. 1), whereas for $\phi=0$ it takes values near zero
(fig. 2). Such a dramatic change of the cosine quadratures occurs because
in the limit of small $I$, purely quantum effects of the squeezed vacuum are
important. The squeezing allows one of the quadratures to be reduced below
the vacuum level represented by a uniformly distributed random phase.
The uniform distribution of the phase corresponding to the vacuum state
leads to an operational quadrature of $\frac{1}{2}$.
For a squeezed vacuum, this uniform random-phase distribution is modified
[15] and a significant change of the operational quadrature is possible.
In fact fluctuations below $\frac{1}{2}$ in the Wigner
function exhibit the quantum nature of the squeezed vacuum.
Another interesting observation can be made if we look at the phasor's
squeezed coherent state POVM decomposition (\ref{phasor-int}) and
recall that the Glauber $P$-representation of a
squeezed coherent state does not exist. This property is related to the
dynamical ordering of the creation and
annihilation operators induced by the measuring device.
For the modified NFM apparatus, with a squeezed
vacuum in the unused port, the antinormal ordering of the operational phasor is
impossible to achieve.
\section{Acknowledgment}
This work has been partially supported by the Polish KBN grant 2 PO3B 006 11.
\section*{References}
\begin{description}
\item[{[1]}]
J. von Neumann, {\sl Mathematische Grundlagen der Quantenmechanik},
Springer--Verlag, Berlin 1932;
\item[{[2]}]
See, for example, {\sl Quantum Theory of Measurement}, edited by
J.~H.~Wheeler, W.~H.~Zurek, Princeton University Press, Princeton 1983;
\item[{[3]}]
K. W\'odkiewicz, Phys. Rev. Lett. {\bf 52}, 1064 (1984);
\item[{[4]}]
K. W\'odkiewicz, Phys. Lett. A {\bf 115}, 304 (1986);
\item[{[5]}]
See, for example, P. Busch, P. J. Lahti, and P. Mittelstaedt,
{\sl The Quantum Theory of Measurement}, Springer--Verlag,
Berlin 1991;
\item[{[6]}]
B.--G. Englert, K. W\'odkiewicz, Phys. Rev. A {\bf 51}, R2661 (1995);
\item[{[7]}]
A. Vourdas, Phys. Rev. A {\bf 41}, 1653 (1990);
\item[{[8]}]
K. W\'odkiewicz, Phys. Rev. A {\bf 51}, 2785 (1995);
\item[{[9]}]
J.W. Noh, A. Foug\`eres, L. Mandel, Phys. Rev. Lett. {\bf 71}, 2579 (1993);
\item[{[10]}]
F.T. Arecchi, E. Courtens, R. Gilmore, H. Thomas,
Phys. Rev. A {\bf 6}, 2211 (1972);
\item[{[11]}]
B.--G. Englert, K. W\'odkiewicz, P. Riegler,
Phys. Rev. A {\bf 52}, 1704 (1995);
\item[{[12]}]
H. Yuen, Phys. Rev. A {\bf 13}, 2226 (1976);
\item[{[13]}]
J. Bergou, B.--G. Englert, Ann. Phys. (NY) {\bf 209}, 479 (1991);
\item[{[14]}]
P. Kocha\'nski and K. W\'odkiewicz, to be published (1996);
\item[{[15]}]
D. Burak and K. W\'odkiewicz, Phys. Rev. A {\bf 46}, 2774 (1992);
\end{description}
\end{document}
\section{Introduction}
The Lightning Network (LN) \cite{poon2016bitcoin} is emerging as the most popular layer-2 scaling technology for Bitcoin.
We model the LN by a graph $G$;
every vertex represents a user,
and every edge is a bi-directional payment channel between two nodes\footnote{We omit private channels and focus on publicly known ones that are broadcast in the peer-to-peer (p2p) layer of the LN.}.
The LN is implemented via a cryptographic protocol executed by the nodes. Effectively, pairs of nodes implement a payment channel that allows them to exchange promises to pay a certain amount to each other. These promises are valid, as long as they are redeemable on layer-1, i.e.~the Bitcoin blockchain.
The LN is \emph{permissionless}, that is, any user can join the protocol pseudonymously.
Therefore, users can exhibit arbitrarily adversarial behavior, so the LN must be able to handle Byzantine failures.
This implies that each node in a channel $e\in E(G)$ must be able to unilaterally close $e$ by performing layer-1 transactions.
However, in order to prevent double-spending, the protocol must prohibit a node from closing a channel using a layer-1 transaction from an earlier state of the channel.
This implies that each closing transaction for a channel $e\in E(G)$ must come with a delay, $D_e$, measured in block height, that is chosen during the creation of $e$ by the two participating nodes.
Specifically, suppose that a node, $u$, broadcasts its intention to unilaterally close a channel, $e=\{u,v\}$, by confirming on the blockchain a transaction, $\tau$, that corresponds to an old state of $e$.
Then, the node $v$ can get all the funds in the channel by confirming a transaction, $\tau'$, that effectively proves that $\tau$ has expired, within a time window of $D_e$ blocks since the confirmation of $\tau$.
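As a toy illustration of this timing constraint, the check below decides whether the disputing transaction $\tau'$ beats the delay $D_e$; the function name and the exact boundary convention of the window are our assumptions, not part of the protocol specification.

```python
# Illustrative sketch of the dispute window described above: after an
# offending transaction tau confirms at `close_height`, the honest node
# must confirm its disputing transaction tau' within `delay` (= D_e)
# blocks. The boundary convention (<=) is an assumption of this sketch.

def dispute_in_time(close_height: int, justice_height: int, delay: int) -> bool:
    """True iff the disputing transaction is confirmed in time."""
    return justice_height - close_height <= delay
```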
\subsection{The attacks}
The above inherent timing limitations of the LN protocol renders it susceptible to attacks that try to force a mass-exit event that violates the delay bounds of many edges, thus forcing a deviation from the intended protocol behaviour.
We study the following two attacks of this type, that are performed by an adversary that controls a small coalition of $k$ nodes:
\begin{description}
\item{\textbf{(1) Zombie attack:}}
The adversary controls a set of $k$ nodes, $Z\subseteq V(G)$, that hold exactly one side of many channels; that is, $E' = E(G) \cap (Z\times (V(G)\setminus Z))$.
The adversary renders all channels in $E'$ unresponsive, simply by not participating in the protocol.
This can be implemented either when $Z$ is a coalition of adversarial nodes, or when an eclipse-type attack (see, e.g.~\cite{heilman2015eclipse, marcus2018low}) is performed on all nodes in $Z$.
This forces many honest nodes to submit layer-1 channel closing transactions.
We show experimentally that under realistic layer-1 congestion conditions
the zombie attack can cause a significant amount of funds to be locked for a large window of time.
This deviates from the protocol which attempts to impose a much smaller delay, $D$, for closing a channel.
This attack is similar to the \textit{griefing attack}: the user is forced to broadcast a transaction on Bitcoin Layer 1 to unilaterally close the channel, and potentially pay a high fee due to the congestion generated by the attack.
This is an attack that does not aim at stealing funds, but rather to ``vandalize'' both layer-2 (unusable channels) and layer-1 (congestion).
The attack also causes an implicit monetary cost due to the loss of income from serving LN payments during the period the funds are locked.
\item{\textbf{(2) Mass double-spend attack:}}
The adversary again controls $Z$ as in the zombie attack (in contrast to the zombie attack, for a mass double-spend attack it is not enough to simply eclipse $Z$).
The adversary attempts to perform multiple double-spends by submitting closing transactions for all channels in $E'$ that correspond to earlier states of the protocol.
The honest nodes that monitor the blockchain respond by submitting disputing transactions on the blockchain, as soon as any offending transaction is confirmed.
The adversary succeeds in stealing the funds when an honest disputing transaction fails to get confirmed on the blockchain before the delay $D$ has passed.
We experimentally show that, under realistic congestion conditions, if the watchtower algorithms monitoring the blockchain are not configured correctly, the adversary can succeed in stealing significant amounts of funds.
\end{description}
\subsection{Our main results}
Informally, our main findings can be summarized as follows.
\begin{itemize}
\item
\textbf{Susceptibility to vandalism by a small coalition:}
We demonstrate experimentally that the zombie attack can be used to lock a significant amount of funds for long periods of time.
Our simulations are performed
under historically-plausible congestion conditions.
The attack succeeds in forcing the funds of honest protocol participants to be locked for a duration that is much longer than the intended upper bound.
A detailed exposition of our findings
is given in Section \ref{sec:zombie}.
\item
\textbf{Incentive-incompatibility in the presence of a small coalition:}
We demonstrate experimentally that,
assuming an adversarial coalition of $k=30$ nodes,
and under a plausible model for the expected profit of the adversary,
the LN is not incentive compatible.
Our simulations are performed on a current LN topology,
under historically-realistic congestion conditions,
and assuming a realistic watchtower algorithm.
Our mathematical modeling makes mild statistical assumptions on channel balances.
The full details of our mathematical modeling are given in Section \ref{sec:coalition},
and our experimental findings are presented in Section \ref{sec:mass-double-spend}.
In fact, under reasonable model parameters, the adversarial coalition has a strategy for stealing funds from the honest protocol participants, with an expected profit of more than 750 BTC. This is clearly a simplification, since we assume that the coalition of attackers has nothing to lose from the attack, or at least that the attackers' business has a volume that is smaller than the expected profit of the attack. Nonetheless, we believe that the \emph{trustless} nature of the network is called into question by the possibility of performing these attacks.
We emphasize that our results do not imply that \emph{any} $k$-coalition can perform this strategy, but rather that such a $k$-coalition \emph{exists} and can be computed efficiently in practice.
\end{itemize}
\subsection{Theoretical justification}
Our experimental findings outlined above suggest that the LN is potentially susceptible to mass exit attacks by a small coalition of adversarial nodes.
This phenomenon can be explained by the topological properties of the graph $G$ of the LN.
In particular, it has been recently observed that
$G$ is scale-free, and follows a power-law degree distribution \cite{martinazzi2020evolving}.
This is to be expected due to the complex social dynamics that lead to the formation of $G$.
It has been shown in \cite{gast2016approximation} that in power-law degree graphs, by splitting the nodes into sets of high and low degree, $V(G)=V_{\text{high}} \cup V_{\text{low}}$, we obtain a nearly-optimal max-cut.
In fact, this is true even for very small sets $V_{\text{high}}$ (depending on the power-law exponent; see \cite{martinazzi2020evolving} for precise bounds).
As we explain in Section \ref{sec:coalition},
for our attacks this implies that if the adversary controls $Z=V_{\text{high}}$, then both attacks can be performed effectively by targeting all the channels between $V_{\text{high}}$ and $V_{\text{low}}$.
The above observation implies that the effects of the attacks that we study should be expected to get \emph{worse} for large sizes of the LN, in the following sense: the worst-case $k$-coalition, for the \emph{same size}, $k$, can cause more damage in a larger network (either by causing higher total delay in closing channels, or in total funds stolen, depending on the attack).
\subsection{Organization}
The rest of the paper is organized as follows.
In Section \ref{sec:coalition} we show how both the zombie and the mass double-spend attacks can be modeled mathematically so that the problem of selecting a $k$-coalition of adversarial nodes that maximizes the efficiency of the attack,
can be expressed as a variant of the Max-Cut problem.
We also present a simple greedy heuristic for solving this problem.
In Section \ref{sec:experimental} we discuss the main statistical assumptions for our simulations.
Section \ref{sec:zombie} presents our experimental results on the zombie attack.
Section \ref{sec:mass-double-spend} presents our experimental results on the mass double-spend attack.
In Section \ref{sec:mitigations} we discuss possible measures for mitigating the attacks.
Section \ref{sec:related} reviews related work.
We conclude in Section \ref{sec:conclusions}.
\section{Selecting an adversarial coalition as a graph cut problem}
\label{sec:coalition}
We now describe our methodology for choosing an adversarial $k$-coalition, i.e.~a coalition of $k$ nodes.
To that end, we first formulate an auxiliary graph-cut combinatorial optimization problem.
We then explain how this problem can be used to compute near-optimal coalitions.
\subsection{The graph cut problem}
In a graph, $G$, a \emph{$k$-lopsided cut} is a bipartition, $(Z, V(G)\setminus Z)$, of $V(G)$, with $|Z|=k$.
The \emph{$k$-Lopsided Max-Cut} problem ($k$-LMC) is defined as follows. The input consists of a graph $G$ and some $k\in \mathbb{N}$, and the goal is to find a $k$-lopsided cut, $(Z, V(G)\setminus Z)$, maximizing the number of edges in the cut, i.e.~$|E(Z,V(G)\setminus Z)|$.
Similarly, in the \emph{$k$-Lopsided Weighted Max-Cut} problem ($k$-LWMC), the input consists of a graph, $G$, with non-negative edge capacities, and some $k\in \mathbb{N}$, and the goal is to find a $k$-lopsided cut maximizing the total capacity of the edges in the cut, i.e.~$\sum_{e\in E(Z,V(G)\setminus Z)} c(e)$, where $c(e)$ denotes the capacity of $e$.
\subsection{From graph cuts to mass exit attacks}
\subsubsection{From $k$-LMC to zombie attacks}
The problem of computing a worst-possible $k$-coalition $Z\subseteq V(G)$ of adversarial nodes to perform the zombie attack is precisely the problem of computing a $k$-LMC in $G$.
This is because any edge can be attacked precisely when exactly one of its endpoints is in the adversarial coalition (an adversary cannot attack itself, and an honest node cannot attack another honest node).
Moreover, the effectiveness of a zombie attack is maximized when the average delay to close a channel is maximized.
This occurs when the number of channels under attack, i.e.~the number of edges in the cut, is maximized.
\subsubsection{From $k$-LWMC to mass double-spend attacks}
We now argue that, under mild assumptions, the problem of computing a worst-possible $k$-coalition of adversarial nodes to perform the mass double-spend attack, can be reduced to the problem of solving $k$-LWMC on $G$.
For any channel $e=\{u,v\}\in E(G)$ under attack, fix an orientation $(u,v)$ such that $u\in Z$, and $v\notin Z$;
let $b_t(e)\in [-c(e),c(e)]$ denote the balance of the channel at time $t$; in particular, $b_t(e)=\alpha$ when at time $t$ node $u$ owns $\alpha$ of the capacity of $e$ and $v$ owns $c(e)-\alpha$.
The quantity $b_t(e)$ as a function of $t$ is difficult to model and to predict during the attack.
This is a challenge for our modeling because the amount of funds that the attacker can steal from a channel depends on the current balance.
We avoid the issue of estimating $b_t(e)$ by introducing a mild technical assumption.
Suppose that the adversary participates honestly in the protocol for some amount of time before the attack begins.
Let $e$ be some channel where the adversary controls exactly one of its endpoints.
Assuming that funds in the LN are allocated efficiently, it follows that in any long enough window of time, we expect to observe times $t$ and $t'$ where $b_t(e) \geq c(e)-\varepsilon$, and $b_{t'}(e) \leq -c(e)+\varepsilon$, for any $\varepsilon>0$.
This is because, otherwise, we could remove some of the capacity in $e$ and still be able to route the same set of LN transactions, which would violate capital efficiency.
Before the attack begins, the adversary participates in the protocol honestly.
During this period, for all of its channels, the adversary collects closing transactions that give almost all the funds to the adversary.
When the attack begins at time $t$, the adversary stops participating in the LN protocol.
Thus, the balances of all channels under attack remain fixed throughout the duration of the attack.
By the above discussion, it follows that if the adversary succeeds in stealing the funds from a channel, $e$, then the adversary profits a total of at least $c(e)-b_t(e)-c(e)/2-\varepsilon = c(e)/2-b_t(e)-\varepsilon$, where the $-c(e)/2$ term accounts for the cost of creating the channel.
On the other hand, the adversary fails to steal the funds of channel $e$ precisely when an honest node manages to get the justice transaction confirmed before the expiration;
when this happens, the adversary gets penalized by losing all the funds in the channel, thus the adversary gains $-c(e)/2$, due to the cost of creating the channel. In practice, we first assume that attacked channels are equally funded by both parties. We believe that this can be considered a reasonable assumption, since dual funding has been implemented by one of the major LN client implementations \cite{dual-funding}.
Unfortunately, the balances of the channels are not publicly known.
There has been some work on estimating balances \cite{tikhomirov2020probing, dam2020improvements}, but it is not clear how to model them accurately. In addition, we cannot probe channel balances, because we make use of historical data in our simulations.
Moreover, it is possible for the two participating nodes to contribute different amounts of capacity during channel creation.
For these reasons, we make the following simplifying assumption:
\begin{description}
\item{\textbf{Assumption 1:}}
For any channel $e$, conditioned on the adversary successfully stealing the funds from $e$, the expected profit of the adversary is at least $c(e)/2$.
\end{description}
In particular, Assumption 1 holds if we assume that at the time of channel creation, $t_0$, we have $b_{t_0}(e)=0$, that at the time $t$ when the attack happens we have $\mathbf{E}[b_t(e)]=0$, and setting $\varepsilon=0$.
To support Assumption 1, we performed a brief analysis of the nodes in our solution to $30$-LWMC: they are well-connected nodes (in terms of the number of open channels), and most of them appear to be sellers of inbound liquidity, routing nodes, exchanges, and wallet providers. This empirically supports the validity of Assumption 1: if many of the nodes in the attacker coalition, for example, sell liquidity, this means that they open channels with their customers, and the customers then use these channels in order to get paid for goods or services. Therefore, the coalition of attackers would hold the favorable commitment (i.e., the first one), since they opened the channel providing the liquidity, but it is expected that, over time, these channels will be used to acquire services, so the balance of the channels will eventually shift toward the victim node.
The total profit of the adversary depends on the probability of stealing the funds of any channel $e$.
To that end, it is reasonable to assume that all honest nodes use the same strategy.
Therefore, by symmetry, we arrive at the following assumption:
\begin{description}
\item{\textbf{Assumption 2:}}
There exists some $p>0$ such that for all $e\in E(G)$ under attack, the adversary succeeds in stealing the funds from $e$, independently, and with probability $p$.
\end{description}
Let $P_e$ denote the profit that the adversary makes by attacking $e$, and let $P=\sum_e P_e$ be the total profit of the adversary.
By Assumptions 1 \& 2 and the linearity of expectation, we get that
$\mathbf{E}[P] = \sum_{e\in E(Z,V(G)\setminus Z)} \mathbf{E}[P_e] = \sum_{e\in E(Z,V(G)\setminus Z)} (p-1/2)\cdot c(e) = (p-1/2) \sum_{e\in E(Z,V(G)\setminus Z)} c(e)$.
In other words, the expected profit of the adversary is equal to the total capacity of the computed cut, times a scaling factor of $p-1/2$.
The following summarizes the above discussion.
\begin{proposition}
Under Assumptions 1--2, for any $p>1/2$, the mass double-spend attack is profitable in expectation.
Moreover, the $k$-coalition that maximizes the profit of the adversary is precisely an optimal solution to $k$-LWMC.
\end{proposition}
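The expectation derived above reduces to a one-line formula; the following sketch makes the scaling explicit (the numbers in the comment are illustrative, not taken from our data).

```python
# E[P] = (p - 1/2) * total capacity of the cut, as derived above
# under Assumptions 1-2.

def expected_profit(cut_capacities, p):
    """Expected adversarial profit, given the per-channel success
    probability p and the capacities of the attacked channels."""
    return (p - 0.5) * sum(cut_capacities)

# Three attacked channels of 2, 4 and 6 BTC with p = 0.75 give an
# expected profit of 0.25 * 12 = 3 BTC.
```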
\subsection{Computing lopsided max-cuts}
Since the $k$-LMC problem is a generalization of Max-Cut, it follows that it is also NP-hard \cite{karp1972reducibility}.
Perhaps surprisingly, the generalization of Max-Cut that we consider does not appear to have been studied in this precise form previously.
We observe empirically that a simple greedy algorithm computes very good solutions.
The greedy algorithm is as follows.
We start with the cut $(\emptyset, V(G))$, which contains all vertices on the right side.
For $k$ steps, we repeat the following: we greedily move to the left side the right-side vertex that maximizes the total capacity of the edges in the cut.
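A minimal sketch of this greedy procedure follows; it assumes the graph is given as an adjacency map from each vertex to a dict of neighbor capacities (for the unweighted $k$-LMC, set all capacities to 1). The function name and data layout are our choices, not taken from our implementation.

```python
def greedy_k_lwmc(graph, k):
    """Greedily build a set Z of k vertices approximately maximizing
    the total capacity of the edges crossing the cut (Z, V \\ Z)."""
    # gain[v] = change in cut capacity if v is moved to the left side Z
    gain = {v: sum(nbrs.values()) for v, nbrs in graph.items()}
    Z = set()
    for _ in range(k):
        # move the right-side vertex that increases the cut the most
        best = max((v for v in graph if v not in Z), key=lambda v: gain[v])
        Z.add(best)
        # each edge {best, u} now counts against moving u as well:
        # it would leave the cut instead of entering it
        for u, cap in graph[best].items():
            gain[u] -= 2 * cap
    return Z
```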
We compared this greedy algorithm with
the implementation of approximate max-cut solver in the
Neo4j \textit{Graph Data Science} library,
which uses a greedy randomized adaptive search procedure parallelized (GRASP) style algorithm \cite{max-cut-neo4j}.
When taking the maximum over all $k\in \{1,\ldots,|V(G)|-1\}$, the simple greedy algorithm we consider outperforms the results of the Neo4j library.
For this reason, we have chosen the simple greedy algorithm for our experiments.
We remark, however, that obtaining a better solver for $k$-LMC and $k$-LWMC can only strengthen our results, since better solutions directly improve the effectiveness of our attacks.
The values of the solutions for $k$-LMC and $k$-LWMC that we computed using the greedy algorithm, as a function of $k$, are presented in Figure \ref{fig:lopsided}.
We remark that the global solution that we compute for Max-Cut contains $\mathsf{OPT} = 63251$ edges.
Similarly, the best solution computed for weighted Max-Cut has total capacity of $\mathsf{OPT_W} = 2464.37$ BTC, while covering $58531$ edges.
Table \ref{tab:lopsided} gives the greedy solutions for $k$-LWMC for some representative values of $k$.
We observe that even for very small coalition sizes, $k$, there exist solutions that are close to the global optimum.
For example, even with $k=30$ nodes, the adversary can attack a number of channels that is $31\%$ of the global max-cut, which is $23\%$ of all channels in the network.
This phenomenon agrees with the behavior expected from the theory of scale-free networks, as discussed in the previous Section.
\begin{table}[H]
\begin{tabular}{ccc}
\toprule
$k$ & capacity in BTC (\% of $\mathsf{OPT_W}$) & \# edges (\% of $\mathsf{OPT}$) \\
\midrule
10 & 1199.89 (48.69\%) & 10911 (17.25\%) \\
30 & 1685.13 (68.38\%) & 20084 (31.75\%) \\
100 & 2107.70 (85.53\%) & 35447 (56.04\%) \\
300 & 2312.47 (93.84\%) & 44522 (70.39\%)\\
\bottomrule
\end{tabular}
\caption{Results of the greedy solutions computed for $k$-LMC and $k$-LWMC on the LN graph $G$.}
\label{tab:lopsided}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=0.33\textwidth]{final-paper-images/lopsided-max-cut/lopsided-edges-unweighted.png}
\includegraphics[width=0.33\textwidth]{final-paper-images/lopsided-max-cut/lopsided-capacity-weighted.png}
\caption{The value of the greedy solutions for $k$-LMC (left) and $k$-LWMC (right), as a function of $k$.}
\label{fig:lopsided}
\end{figure*}
\section{The experimental setup}
\label{sec:experimental}
We now describe the modeling of the LN used in our experiments.
The graph $G$ of the LN used in our experiments is obtained using the lnd lightning node implementation.
The snapshot of the network that we use was taken on May 2022,
and contains $n=|V(G)|=17813$ nodes and $|E(G)|=84927$ channels.
\subsection{Modeling layer-1 congestion}
The effectiveness of the attacks we consider depends on the confirmation times of the transactions performed by the adversary and the honest nodes.
Therefore, the results depend on the congestion on the Bitcoin blockchain during the attack.
One way to estimate confirmation times is to use statistical models, such as \cite{gundlach2021predicting}.
However, this approach requires estimating the parameters of the models involved, which can introduce bias into the results.
For this reason, we decided to perform simulations using historical data on the Bitcoin mempool\footnote{The code implementing the simulations and the experiments can be found at the following URL: \url{https://drive.google.com/file/d/1jPZHvm2OCovhKUHso6TkPvmJM89vQS5F/view?usp=sharing}}.
All of our simulations are performed under two different congestion scenarios:
\begin{description}
\item{\textbf{Scenario 1:}} The attack starts on Dec. 7, 2017 at 08:15 (CDT), which marks the beginning of a period of high congestion.
\item{\textbf{Scenario 2:}} The attack
starts on Jan. 1, 2022 at 00:00 (CDT), which is during a period of typical congestion.
\end{description}
Scenario 1 represents a worst-case situation for the victims, since the attack takes place during a period of high congestion on layer-1.
The data used to run the experiments was obtained from \cite{mempool-data}, with a snapshot taken every minute regarding the number of transactions in the mempool for a number of predefined fee ranges.
Figure \ref{fig:congestion} depicts the congestion during the selected period.
The mempool congestion in the selected period lasted approximately until the end of January (about 8000 blocks).
\begin{figure}[H]
\begin{center}
\includegraphics[width=.45\textwidth]{final-paper-images/congestion-watermark.png}
\end{center}
\caption{Levels of congestion between December 2017 and February 2018, showing the end of the period of congestion that started around Dec. 6, 2017. Different lines represent number of transactions for different fee ranges in the mempool, with blue and green lines representing lower fee transactions, and so on. The red square represents approximately the time window considered for experiments.}
\label{fig:congestion}
\end{figure}
\subsection{Modeling the strategy of honest nodes}
In both of the attacks presented in this paper, we consider two different strategies for the honest players:
\begin{itemize}
\item \emph{Static strategy}:
In this case, all honest nodes use a fixed fee when transmitting their transactions.
This is the optimistic case for the adversary, since it is harder for honest players to get their transactions confirmed during periods of high congestion.
\item \emph{Dynamic strategy}:
The honest nodes increase the fee of their transactions until they get confirmed or the delay expires.
The rate of increase is controlled by three parameters: The initial fee, the \emph{step} parameter, $t$, and a parameter $\beta>1$.
Every $t$ blocks, any honest node increases the fee of any pending transaction via the replace-by-fee (RBF) mechanism, by multiplying its fee by $\beta$.
Therefore, the fees increase exponentially.
\end{itemize}
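Under our parameter names, the dynamic strategy yields the following fee schedule for a still-unconfirmed transaction (a sketch; the function name and fee units are illustrative):

```python
def fee_at_block(initial_fee: float, step: int, beta: float,
                 blocks_elapsed: int) -> float:
    """Fee attached after `blocks_elapsed` blocks: every `step` blocks
    the fee is multiplied by beta via replace-by-fee (RBF), so it
    grows exponentially until the transaction confirms."""
    bumps = blocks_elapsed // step
    return initial_fee * (beta ** bumps)

# e.g. with an initial fee of 10, step t = 2 and beta = 2, after 5
# blocks the fee has been bumped twice, to 40.
```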
The dynamic strategy requires some considerations regarding the various types of on-chain transactions that both the attacker and the victims have to broadcast. First of all, justice transactions only require the honest node's signature, so RBF can be used, and the dynamic case is feasible in the mass double-spend attack. In the zombie attack, the fee increase concerns channel force-closing transactions: channel commitments are signed by both parties when the commitment is created, so in general the fee is static and decided when the transaction is signed. However, with the implementation of \emph{anchor output-based channels} \cite{anchor-outputs}, the fee of force-closing transactions can be bumped, so the dynamic strategy of the honest nodes can also be implemented during the zombie attack.
In the zombie attack, if the strategy is static, the fee remains the same for the whole duration of the simulation, until all the zombie channels have been closed. When the dynamic strategy is employed, the fee is increased every $t$ blocks for all the transactions that are still unconfirmed. For example, if $t=2$, then every 2 blocks the fee of the still-unconfirmed zombie-channel closing transactions is bumped.
In the mass double-spend attack, honest nodes' transactions are submitted as soon as the corresponding malicious transaction is confirmed (effectively simulating the behaviour of \emph{watchtowers}). The initial fee for victims' justice transactions is set as the average fee at the moment of submission. After submission, fee increments in the dynamic scenario follow the same mechanism as in the zombie attack simulation.
\section{Results on the zombie attack}
\label{sec:zombie}
\subsection{Outline of the experiments}
The simulation of the zombie attack works as follows.
\begin{enumerate}
\item At the beginning of the simulation, there are $n$ zombie channels that must be closed. These would typically be the channels in the solution to $k$-LMC.
\item We compute the number of transactions contained in any new block from historical blockchain data.
For example, in Scenario 1, since the time window starts on Dec. 7, 2017 at 08:15 (CDT), the first block that will be mined is block \#498084\footnote{\url{https://www.blockchain.com/btc/block/498084}}.
\item We check if there is enough space in the block for any transaction by counting the number of transactions in the mempool with a higher fee (also considering transactions which were already in the mempool with the same fee rate as our closing channel transactions).
\item If the block can fit some of our channel-closing transactions, we subtract them from the remaining number of zombie channels to be closed.
\item When we reach a state in which all the LN channel closing transactions have been confirmed, the simulation ends.
\end{enumerate}
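The loop above can be sketched as follows. This is a deliberately simplified model, under the assumption that two arrays have been precomputed from historical data: \texttt{block\_capacity} (transactions per historical block) and \texttt{competing} (mempool transactions that outrank our closing transactions at each block, which already depends on the chosen fee); both names are illustrative.

```python
def blocks_to_close_all(n_channels, block_capacity, competing):
    """Toy sketch of the zombie-attack simulation loop.

    n_channels     -- zombie channels still to be closed
    block_capacity -- block_capacity[b]: number of txs in historical block b
    competing      -- competing[b]: mempool txs at block b that outrank our
                      closing txs (higher fee, or same fee but earlier)
    Returns the number of blocks until all channels are closed, or None if
    the data window ends first.
    """
    remaining = n_channels
    for b in range(len(block_capacity)):
        # space left in this block after higher-priority transactions
        free_slots = max(0, block_capacity[b] - competing[b])
        remaining -= min(remaining, free_slots)
        if remaining == 0:
            return b + 1
    return None
```

In the static case the \texttt{competing} array is fixed once per fee level; in the dynamic case it would shrink over time as fees are bumped.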
In the results we do not consider the delay set at channel opening (\texttt{to\_self\_delay}).
However, as we will see in section \ref{sec:mass-double-spend-implementations}, an average delay can be considered to be about 500 blocks.
\subsection{Results}
\subsubsection{Static Case}
\begin{figure*}
\begin{center}
\includegraphics[width=.33\textwidth]{final-paper-images/zombie-attack/remaining-zombie-channels-to-close-vs-time-scenario1.png}
\includegraphics[width=.33\textwidth]{final-paper-images/zombie-attack/remaining-zombie-channels-to-close-vs-time-scenario1-k-30.png}
\includegraphics[width=.33\textwidth]{final-paper-images/zombie-attack/remaining-zombie-channels-to-close-vs-time-scenario2.png}
\caption{Number of remaining Zombie channels to be closed as a function of time measured in blocks, for various fee ranges, with static honest player strategy, in Scenario 1 (attacking the total LMC on the left and 30-LMC in the middle) and Scenario 2 (attacking the total LMC, on the right).}
\label{fig:zombie_channels_remaining}
\end{center}
\end{figure*}
Figure \ref{fig:zombie_channels_remaining} shows the number of remaining zombie channels to be closed as a function of time (measured in blocks) elapsed during the simulation, for various fees used by honest nodes (which are fixed, since we are in the static case). The results in the high congestion situation are consistent with the duration of the congestion: when the fee is less than 50 sat/vByte, all transactions tend to be confirmed at approximately the same time, as congestion decreases significantly (around the 8000th block from the beginning, just under two months).
This implies that the bottleneck is caused by the congestion in the mempool. In the same figure (middle), it is possible to notice that, even for small coalition sizes, the zombie attack causes high delays. In fact, the difference between attacking the MC and the 30-LMC is not very pronounced: this is because in this case the layer-1 congestion is not dominated by LN transactions, but rather by standard transactions that were in the mempool in the analyzed time window.
For Scenario 2 (low congestion), in Figure \ref{fig:zombie_channels_remaining} we observe that the delays to close a channel are significantly smaller.
Figure \ref{fig:zombie_channels_remaining_various_k} depicts the number of blocks needed to close all the zombie channels as a function of the fee used by the victims for the closing channel transactions, for different values of $k$.
We observe that even when attacking only a few nodes ($k=10$), the attacker causes significant damage to users in terms of the time their funds remain locked, when the congestion is high.
The number of blocks that victims need to wait in order to unlock their funds varies significantly based on the fee used to close the channels.
In fact, users are forced to pay a high fee of about 100 sat/vByte to redeem funds within two weeks ($\approx 2000$ blocks) of the attack.
\begin{figure*}
\begin{center}
\includegraphics[width=.45\textwidth]{final-paper-images/zombie-attack/blocks-to-close-all-zombie-channels-vs-victim-fee-scenario1.png}
\includegraphics[width=.45\textwidth]{final-paper-images/zombie-attack/blocks-to-close-all-zombie-channels-vs-victim-fee-scenario2.png}
\end{center}
\caption{Time measured in blocks needed to close all the channels during a zombie attack, as a function of the fee used by the victims (in sat/vByte), for different numbers of adversarial nodes, with static honest player strategy, in Scenario 1 (left) and Scenario 2 (right).} \label{fig:zombie_channels_remaining_various_k}
\end{figure*}
\subsubsection{Dynamic Case}
In the dynamic case, the fees used by victims of the attack can increase over time (using RBF) in response to a delay in the confirmation of the transactions.
We consider $\beta=1.01$, so in every bump the fee increases by $1\%$, and various step values.
Figure \ref{fig:zombie_channels_remaining_various_fee_and_step_dynamic_scenario1} depicts the number of blocks needed to close all the zombie channels, as a function of the parameters of the dynamic strategy.
We observe that, even for a coalition of size $k=30$, if honest users wish to close their channels in significantly less than 1000 blocks, then either the initial fee must be set high (about 80 sat/vByte), or the fee must be bumped aggressively (at least once every 10 blocks).
Figures \ref{fig:zombie_channels_remaining_various_fee_and_step_dynamic_scenario1} and \ref{fig:zombie_channels_remaining_various_fee_and_step_dynamic_scenario2} also consider a hypothetical attack with 1 million zombie channels, which corresponds to a much larger graph in a possible future snapshot of the LN.
\begin{figure*}
\begin{center}
\includegraphics[width=.33\textwidth]{final-paper-images/zombie-attack/zombie-attack-dynamic-all-scenario1.png}
~
\includegraphics[width=.33\textwidth]{final-paper-images/zombie-attack/zombie-attack-dynamic-k-30-scenario1.png}
~
\includegraphics[width=.33\textwidth]{final-paper-images/zombie-attack/zombie-attack-dynamic-1M-scenario1.png}
\end{center}
\caption{Number of blocks needed to close all the Zombie channels, with dynamic honest player strategy, as a function of the step parameter, with $\beta=1.01$, in Scenario 1, under three different attacks: total LMC (left), $30$-LMC (middle), and 1 million channels (right).}
\label{fig:zombie_channels_remaining_various_fee_and_step_dynamic_scenario1}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=.33\textwidth]{final-paper-images/zombie-attack/zombie-attack-dynamic-all-scenario2.png}
~
\includegraphics[width=.33\textwidth]{final-paper-images/zombie-attack/zombie-attack-dynamic-k-30-scenario2.png}
~ \includegraphics[width=.33\textwidth]{final-paper-images/zombie-attack/zombie-attack-dynamic-1M-scenario2.png}
\end{center}
\caption{Number of blocks needed to close all the Zombie channels, with dynamic honest player strategy, as a function of the step parameter, with $\beta=1.01$, in Scenario 2, under three different attacks: total LMC (left), $30$-LMC (middle), and 1 million channels (right).}
\label{fig:zombie_channels_remaining_various_fee_and_step_dynamic_scenario2}
\end{figure*}
\section{Results on the Mass double-spend attack}
\label{sec:mass-double-spend}
We now present our results on the mass double spend attack.
\subsection{Outline of the experiments}
The simulation is performed as follows.
\begin{enumerate}
\item When a new block is mined, we check whether some of the LN transactions can be included in it, by counting the number of transactions with a higher fee and checking if this number is less than the number of transactions historically contained in the block at the current index.
\item If there is some potential space for LN transactions in the new block, we start by iterating over them: before including them into the block, we check their position in their specific fee range in the mempool. If, at the moment of insertion in the mempool, there were already some transactions with the same fee rate, these transactions would need to be confirmed before those that we are monitoring.
\item At the beginning, there are only attacker transactions: we fit as many as we can in the first block that has available space for them. For each confirmed attacker transaction, we submit to the mempool the corresponding victim's justice transaction. The fee of the justice transactions is computed as the average fee from the mempool data at the moment of submission of the transaction.
\end{enumerate}
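The bookkeeping above can be sketched in a strongly simplified form. The sketch below abstracts fees away entirely (confirmation order is FIFO within the block space available to our transactions) and, unlike the real simulation, allows a justice transaction to confirm in the same block as its commitment; the function name and inputs are illustrative only.

```python
def failed_justice_txs(n_attacker, delay, free_slots):
    """Toy sketch of the mass double-spend simulation.

    n_attacker -- adversarial commitment transactions, all in the mempool
                  at block 0; the matching justice transaction is submitted
                  when its commitment confirms
    delay      -- to_self_delay: blocks a victim has to confirm its justice tx
    free_slots -- free_slots[b]: block space our transactions get at block b
    Returns the number of justice transactions confirmed after the deadline.
    """
    pending_attacker = n_attacker
    deadlines = []            # deadlines of pending justice transactions
    failed = 0
    for b, slots in enumerate(free_slots):
        # attacker transactions confirm first (they were broadcast first)
        confirmed = min(pending_attacker, slots)
        pending_attacker -= confirmed
        slots -= confirmed
        # each confirmation triggers a justice tx with its own deadline
        deadlines.extend([b + delay] * confirmed)
        # remaining space goes to the oldest pending justice transactions
        for _ in range(min(slots, len(deadlines))):
            if deadlines.pop(0) < b:
                failed += 1
    # justice txs whose deadline passed within the window also fail
    failed += sum(1 for d in deadlines if d < len(free_slots))
    return failed
```

The key difference from the zombie-attack sketch is that here victims' transactions enter the mempool at different times, each with its own deadline, which is why per-transaction tracking is needed.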
We remark that, unlike the zombie attack simulation, the victims' justice transactions are submitted at different times and potentially with different fees.
Therefore, we must keep track of the position of each justice transaction in the mempool in its fee range, to detect its confirmation block as accurately as possible.
Another detail is that when we decide that, for example, $n$ attacker transactions are included in a block, we are actually replacing $n$ transactions which, according to historical mempool data, were removed from the mempool, but which in our simulation should still be there as unconfirmed. For simplicity, we consider only the transactions that are directly replaced by LN transactions, and we do not consider the ``cascade'' of replaced transactions. This is an optimistic assumption for the victims, because it implies fewer transactions in the mempool and thus less congestion; therefore it does not weaken our results.
\subsection{Results}
\subsubsection{Static Case}
\begin{figure*}
\centering
\includegraphics[width=.45\textwidth]{final-paper-images/justice-bottleneck/failed-justice-transactions-vs-attacker-fee-scenario1.png}
\includegraphics[width=.45\textwidth]{final-paper-images/justice-bottleneck/failed-justice-transactions-vs-attacker-fee-scenario2.png}
\caption{Number of justice transactions that cannot be confirmed before the \texttt{to\_self\_delay} expires, as a function of the fee used by the attacker (in sat/vByte), for various values of \texttt{to\_self\_delay}, with static honest player strategy, in Scenario 1 (left) and Scenario 2 (right), attacking the total LWMC.}
\label{fig:mass-double-spend-static}
\end{figure*}
Figure \ref{fig:mass-double-spend-static} shows the number of justice transactions that get confirmed after the predefined delay (which are therefore unsuccessful in avoiding the loss of funds), in Scenario 1, as a function of the fee paid by the attacker.
We observe that, even for large delays (e.g.~1000 blocks), the number of exploited channels is high, especially if the attacker is willing to pay a high fee for its transactions.
We also note that increasing the attacker's fee does not necessarily lead to a better attack outcome, as can be seen with a delay of 500 blocks.
This is due to the fact that a higher fee also implies that the victim transaction is submitted earlier, and the effect on the attack results depends on the level of congestion at the moment each victim transaction is submitted.
The level of congestion in the considered time window is not constant; it changes significantly and non-monotonically over time. Therefore, it can happen that using a higher fee causes victim transactions to be submitted at a moment in which the mempool is less overloaded, hence they can also be confirmed earlier, actually reducing the effect of the attack.
Figure \ref{fig:mass-double-spend-static} also shows consistent results with respect to a low congestion period: a small \texttt{to\_self\_delay} is generally enough to minimize the risks for LN users.
\subsubsection{References to LN nodes implementation}
\label{sec:mass-double-spend-implementations}
We now investigate how the most popular LN node implementation currently handles the delay agreed at channel opening, and how this affects our attack. Currently, the most used LN node implementation is LND (Lightning Network Daemon). In the Lightning Network specification (BOLT), \texttt{to\_self\_delay} is the parameter that sets the delay (in blocks) that the other end of the channel must wait before redeeming the funds in case of uncooperative channel closure. LND offers users the option to set a fixed custom value for this parameter in the \texttt{lnd.conf} configuration file (the \texttt{bitcoin.defaultremotedelay} entry). By default, if this parameter is not set, it is handled dynamically and scaled according to the capacity of the channel, with a maximum delay of $2016$ blocks and a minimum delay of $144$ blocks. With respect to the attack that we are describing, it is interesting to see the effect of this algorithm on the choice of the delay. As of April 2022, the average capacity of an LN channel is 0.044 BTC\footnote{Data from \url{1ML.com}}. This means that, with the default LND algorithm, a channel of average capacity has
$$\mathit{delay} = 2016\;\text{blocks} \cdot \frac{\mathit{chanAmt}}{\mathit{MaxFundingAmount}} \approx 529\;\text{blocks}$$
According to the LND documentation, \texttt{MaxFundingAmount} is defined in the BOLT-0002 specification and is a soft limit on the maximum channel size at the protocol level: it is currently set to $2^{24} - 1$ satoshis. In our case, \texttt{chanAmt} is set to 0.044 BTC.
For the dynamic case we will make use of the delay computed above ($529$ blocks).
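The delay formula above is easy to reproduce numerically. In the sketch below, the rounding and the clamping between the minimum and maximum delay are our assumptions based on the bounds stated in the text; the actual LND code may differ in these details.

```python
MAX_FUNDING_AMOUNT = 2 ** 24 - 1  # satoshis, BOLT #2 soft limit

def lnd_default_delay(chan_amt_sat, max_delay=2016, min_delay=144):
    """Sketch of LND's capacity-scaled default to_self_delay (in blocks)."""
    delay = max_delay * chan_amt_sat / MAX_FUNDING_AMOUNT
    return max(min_delay, min(max_delay, round(delay)))

# average channel of 0.044 BTC = 4,400,000 sat
print(lnd_default_delay(4_400_000))  # prints 529
```

Small channels hit the 144-block floor, while a channel at the funding cap gets the full 2016 blocks.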
\subsubsection{Dynamic Case}
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{final-paper-images/justice-bottleneck/failed-justice-transactions-vs-attacker-fee-dynamic-scenario1.png}
\caption{Number of justice transactions that are not able to be confirmed before the \texttt{to\_self\_delay} as a function of attacker fee (in sat/vByte)
with dynamic honest player strategy, for various values of the step parameter (Scenario 1), attacking the total LWMC.}
\label{fig:mass-double-spend-dynamic-scenario1}
\end{figure}
The dynamic case, in which victims also increase their fee if the corresponding justice transaction is not confirmed after the number of blocks determined by the \emph{step} parameter, has been implemented and is shown in Figure \ref{fig:mass-double-spend-dynamic-scenario1}. Similarly to the zombie attack, the study of the dynamic case is useful for proposing possible strategies to defend against this type of attack.
As a remark, the curves shown in Figure \ref{fig:mass-double-spend-dynamic-scenario1} are not monotonic, because the congestion level in the considered time window is not constant. Since the initial victim's fee is computed as the average fee at the moment of submission to the mempool, and each victim's transaction is sent when the corresponding attacker's transaction is included in a block, even using a higher fee does not necessarily optimize the attacker's strategy. As a matter of fact, it is possible that using a fee of 40 sat/vByte leads to victims' transactions with a higher initial fee, which allows a greater number of justice transactions to be confirmed on time, compared to using 30 sat/vByte as the attacker fee.
\subsubsection{Considerations regarding the feasibility of the attack}
If the attackers' coalition is able to steal funds from all the attacked channels, then the attack has no cost for the attacker other than transaction fees. Nonetheless, in a realistic setting it is possible that some of the justice transactions submitted by the victims will be confirmed before the expiration of the \texttt{to\_self\_delay}. These transactions cause a penalty for the attacker, as the balance of the channel will be claimed by the victim.
By Assumption 1,
for any successfully compromised channel, $e$,
the attacker will gain $c(e)/2$, in expectation.
Therefore, the whole attack is profitable when the attacker steals funds from a set of channels that contain at least half of the total capacity of all channels under attack.
The profit of the attacker under different conditions is depicted in Figures \ref{fig:attacker_profits1}, \ref{fig:attacker_profits2} and \ref{fig:attacker_profits3}.
We observe that even with a coalition of $k=30$ nodes, under heavy congestion, using an adversarial transaction fee of 50 sat/vByte, when the honest nodes bump their fee every 3 blocks (i.e.~about every 30 minutes), the adversary has a total expected profit of more than 750 BTC.
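The profitability condition can be made explicit with a small sketch. Here the penalty model is our simplified reading of the text: a successfully compromised channel $e$ yields the victim's half $c(e)/2$, while a failed attempt forfeits the attacker's own half to the victim via the justice transaction. All names are illustrative.

```python
def attacker_profit(capacities, compromised):
    """Expected attacker profit under Assumption 1 (balances split evenly).

    capacities  -- dict mapping channel -> capacity c(e)
    compromised -- set of channels whose justice tx confirmed too late
    """
    gain = sum(capacities[e] / 2 for e in compromised)
    loss = sum(capacities[e] / 2 for e in capacities if e not in compromised)
    return gain - loss
```

Under this model the profit is positive exactly when the compromised channels hold more than half of the total attacked capacity, matching the profitability condition stated above.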
\begin{figure*}
\begin{center}
\includegraphics[width=.4\textwidth]{final-paper-images/justice-bottleneck/profits-static-all-max-cut.png}
\includegraphics[width=.4\textwidth]{final-paper-images/justice-bottleneck/profits-dynamic-all-max-cut.png}
\caption{Attacker profits as a function of the fee used by the attacker in Scenario 1 (in sat/vByte), attacking all the edges in the weighted Max-Cut in the static case (left) and dynamic case (right), for various values of the step parameter.}
\label{fig:attacker_profits1}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=.4\textwidth]{final-paper-images/justice-bottleneck/profits-static-k-30.png}
\includegraphics[width=.4\textwidth]{final-paper-images/justice-bottleneck/profits-dynamic-k-30.png}
\caption{Attacker profits as a function of the fee used by the attacker in Scenario 1 (in sat/vByte), attacking 30-LWMC in the static case (left) and the dynamic case (right), for various values of the step parameter.}
\label{fig:attacker_profits2}
\end{center}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=.4\textwidth]{final-paper-images/justice-bottleneck/profit_k_fee_60_step_6_beta_101.png}
\caption{Attacker profits as a function of the parameter $k$, in the dynamic case, with attacker fee in the 60--69 sat/vByte range, $\beta=1.01$ and step equal to 6.}
\label{fig:attacker_profits3}
\end{figure}
\section{Mitigations}
\label{sec:mitigations}
We now discuss potential mitigations for the attacks considered.
\subsection{Mitigations for Mass Double-Spend Attack}
\subsubsection{Increase \texttt{to\_self\_delay}}
As we observed in the simulation results, increasing the \texttt{to\_self\_delay} helps in reducing the damage made by the attack.
However, this comes at a high cost in terms of usability:
if the other party is unresponsive, an honest user has to wait for that increased delay to be able to recover and use the funds locked in the channel.
We note also that this modification would worsen the delay in the zombie attack.
\subsubsection{Watchtowers}
Watchtowers are services that offer protection against fraudulent commitments to prevent loss of funds when users are offline. They act in response to an invalid commitment posted on-chain, by broadcasting a justice transaction as soon as the attacker's transaction has been confirmed. The exact parameters of the watchtower's strategy should be carefully set by the operator:
our simulations suggest that during periods of high layer-1 congestion, the watchtowers should bump the fees of unconfirmed justice transactions aggressively.
\subsubsection{Avoid waiting until the attacker transaction is confirmed}
A different watchtower strategy would be to monitor the mempool for adversarial transactions.
In this case, when an adversarial transaction is detected in the mempool, the watchtower attempts to frontrun the adversary by submitting a channel-closing transaction with a higher fee.
\subsection{Mitigation for Both Attacks}
\subsubsection{Increased block size}
Increasing the block size allows more transactions to be included, and therefore it increases the throughput of the network, reducing the effects of the attacks just described.
However, increasing the block size decreases decentralization, since it leads to a higher cost for running a Bitcoin node. Given the historical implications of such a modification of the layer-1 protocol, this can only be considered a theoretical mitigation technique.
\subsubsection{Account-based blockchains}
The attacks we consider exploit the fact that the Bitcoin blockchain uses UTXOs to keep track of users' funds.
If a different blockchain is used, such as Ethereum, where user funds are stored in account balances, then the following mitigation is possible.
The payment-channel protocol can use a smart contract for depositing funds during channel creation, and for withdrawing them during channel closing.
The same smart contract is used for all the channels in the network.
If, within a short time period, honest users prove to the smart contract that several invalid channel closures occurred, then the smart contract pauses channel closures for a specified period of time.
This mechanism can be used to pause the network during an attack occurring in high congestion conditions. This solution has the drawback of introducing the risk of \emph{griefing}-like attacks: a user could open useless channels and then broadcast past commitments in order to pause channel closures for all users. To avoid this, the mechanism could be driven by the votes of a DAO instead of an automatic pause, in order to evaluate each case and understand whether a real attack is going on in the network.
\subsubsection{Incentivize low-degree nodes}
Our attacks can be effective even in the presence of small adversarial coalitions.
This is because the LN has a power-law degree distribution, which implies that there exist large $k$-LMCs and $k$-LWMCs for small values of $k$.
If LN routing algorithms were to give priority to paths that use low-degree nodes, then it is reasonable to expect that the degree distribution of the LN would become closer to uniform, and thus the sizes of the maximum $k$-LMCs and $k$-LWMCs should decrease.
This would render both attacks less effective, for the same coalition size.
However, it is unclear how this modification would affect the routing properties of the LN.
\section{Related Work}
\label{sec:related}
LN attacks which exploit layer-1 congestion have already been described in the literature: \cite{harris2020flood} presents the \emph{Flood \& Loot} attack. This attack makes use of HTLCs and multi-hop payments: the attacker controls two nodes that are not directly connected, the source and the target. The paths between them are flooded with a high number of low-value HTLC payments from the source to the target. When all the HTLC payments have been completed, the source node refuses to resolve the payments with the first node in the payment path, which is the victim. The victim is therefore forced to unilaterally close the channel, since the source node is unresponsive. The high number of unresolved HTLCs and the consequent congestion on layer-1 cause some of the victim transactions not to be confirmed in time, allowing the attacker to steal some of the funds.
The exploited mechanisms partially differ from those used by the attacks presented in this paper, since we do not make use of multi-hop payments, but rather direct channels between nodes.
An interesting property of this attack is that it causes a lot of congestion on layer-1 with low cost for the attacker: therefore it is possible to use it to amplify the effects of zombie attacks and mass double-spend attacks.
Similar results are obtained in \cite{mizrahi2021congestion}: that work, however, targets the entire network, and describes an attack similar to the zombie attack. A greedy algorithm is designed to select the channels to be flooded in order to maximize the locked capacity. This is also similar to what is proposed in \cite{rohrer2019discharged}, which analyzes attack strategies based on the topological properties of the LN: for example, it presents strategies for choosing the nodes to be removed in partitioning attacks.
They propose strategies such as removing nodes by decreasing degree, by decreasing betweenness and eigenvector centrality, and by highest ranked minimum cuts.
This is similar to our strategy for selecting an adversarial coalition for the zombie attack, but with a different objective.
A well-known attack on the LN is the \emph{griefing attack} \cite{mazumdar2020griefing}. In this case the attacker saturates multi-hop HTLC payment paths, making the channels that constitute the paths unusable for new payments. The corresponding channels must be force-closed on-chain, therefore the victims' funds are locked for the delay specified at channel creation. In general, griefing attacks do not aim to steal funds, but users are forced to pay high fees to close the channels in a small number of blocks. Moreover, layer-2 is disrupted, since potentially many channels are out of service and cannot route payments.
The \emph{time-dilation attacks} described in \cite{riard2020time} aim at eclipsing LN nodes to delay the time at which they become aware of new blocks. They target layer-2 mechanisms that rely on timely reactions, such as LN justice transactions. To make detection difficult, they exploit the random arrival of new Bitcoin blocks (a Poisson process). Some countermeasures are proposed, such as watchtowers, which do not prevent the attack but raise its difficulty, especially if watchtowers are operated by third-party providers. The authors mention that this attack can also be amplified if the attacker performs a DoS attack against Bitcoin: this leads us to the conclusion that time-dilation attacks, and eclipse attacks in general, can be used to amplify the mass double-spend attack, since they further reduce the time honest nodes have to react.
Finally, mass exit attacks on Plasma are studied in \cite{dziembowski2021lower}.
\section{Conclusions}
\label{sec:conclusions}
In this paper we have analyzed two attacks that exploit congestion on the Bitcoin blockchain to cause damage to the Lightning Network. Using historical mempool data from periods of high and low congestion, we have simulated both attacks: in the zombie attack, funds remain locked for many blocks before users are able to retrieve them, while in the mass double-spend attack funds are at risk of being stolen.
We presented a theoretical justification for the effectiveness of the attacks, even when the adversary controls only a very small coalition.
This is based on the observation that the LN network is scale-free, and scale-free graphs have large $k$-lopsided cuts, for small values of $k$.
This suggests that our attacks should become more effective for larger network sizes, for the same coalition size, $k$.
In particular, the average expected profit of a node participating in the worst-possible $k$-coalition should grow in the future with network size.
Our simulation results suggest that watchtower algorithms should be configured carefully.
Ideally, they should monitor layer-1 congestion, and respond aggressively in the case of high congestion.
We leave as future work the study of both attacks under a more detailed model that incorporates transaction fees in the costs of the participants.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
\label{sec:intro}
Polynomials with coefficients over a finite field $\mathbb{F}_q$ play an important role in several areas of computer science, including cryptography and coding theory~\cite{shparlinski13}. The notion of \emph{coprimality} between two polynomials $f,g \in \mathbb{F}_q[x]$---i.e. that the greatest common divisor of $f$ and $g$ is 1---is of particular interest for many applications. For example, pairs of coprime polynomials are required in \emph{Coppersmith's algorithm} to efficiently compute discrete logarithms~\cite{coppersmith84}, which is relevant in attacking the Diffie-Hellman key exchange protocol when it is based on a finite field of characteristic 2. Further, in coding theory, coprime polynomials are used to solve the so-called \emph{key equation} for the decoding of alternant codes, which include well-known families such as BCH and Reed-Solomon codes~\cite{fitzpatrick95}.
The problem of counting how many pairs of coprime polynomials exist, or equivalently of determining what is the probability that two polynomials drawn at random are relatively prime, has been investigated in multiple previous works. Corteel et al.~\cite{corteel98} described a sieve for pentagonal numbers used to count pairs of integer partitions with no parts in common. Interestingly, this sieve allowed them to determine also the number of $m$-tuples of monic coprime polynomials of degree $n$ over $\mathbb{F}_q$, which is $q^{mn} - q^{(n-1)m+1}$. A nice consequence of this fact is that, over the binary field $\mathbb{F}_2$, there are as many coprime as non-coprime polynomial pairs of degree $n$. Indeed, setting $m=2$ and $q=2$ in the previous formula yields $2^{2n} - 2^{2n-1}$. Reifegerste~\cite{reifegerste00} showed an involution based on the use of resultant matrices that proved this particular case. Later, Benjamin and Bennett~\cite{benjamin07} described a simple bijection by exploiting Euclid's algorithm and its reversed variant, which they aptly named \emph{dilcuE's algorithm}. Their method is also easy to generalize to the case of coprime pairs over a generic finite field $\mathbb{F}_q$, and to the case of $m$-tuples of relatively prime polynomials.
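The binary case of the counting formula is easy to verify by brute force. The sketch below (names and bitmask encoding are ours) represents a polynomial over $\mathbb{F}_2$ as an integer whose bit $i$ is the coefficient of $x^i$, and computes GCDs with Euclid's algorithm, where subtraction is XOR.

```python
def poly_gcd2(a, b):
    """GCD of two GF(2)[x] polynomials encoded as bitmasks."""
    while b:
        # reduce a modulo b by cancelling the leading term of a
        while a.bit_length() >= b.bit_length():
            a ^= b << (a.bit_length() - b.bit_length())
        a, b = b, a
    return a

def count_coprime_pairs(n):
    """Ordered pairs of monic degree-n polynomials over GF(2) with gcd 1."""
    monic = range(1 << n, 1 << (n + 1))  # ints with bit n set
    return sum(poly_gcd2(f, g) == 1 for f in monic for g in monic)
```

Running this for small $n$ matches $q^{mn} - q^{(n-1)m+1}$ with $q=m=2$, i.e. $2^{2n} - 2^{2n-1}$: exactly half of the ordered pairs of monic degree-$n$ binary polynomials are coprime.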
All of the works mentioned above target the \emph{counting} aspect of the problem, i.e. determining the number of coprime pairs of polynomials over a finite field. Comparatively, there seem to be fewer works addressing \emph{enumeration}, i.e. exhaustively generating all coprime pairs once the ground field and the degrees of the polynomials are fixed. Clearly, a trivial solution is to generate all pairs of polynomials of a given degree and retain those that are relatively prime. However, the smaller the ground field, the less efficient this method becomes. When exhaustive enumeration is not needed, random generation is usually an efficient method. To the best of our knowledge, the only deterministic approach to generate coprime pairs has been proposed by Fragneto et al. in~\cite{fragneto05}, which leverages a Gr\"{o}bner basis technique.
In this paper, we are interested in a special case of the problem above. Namely, we aim to enumerate coprime polynomial pairs of fixed degree $n$ where \emph{both polynomials have a nonzero constant term}. The motivation for this research comes from a work that we recently published with two other co-authors in~\cite{mariot20}. There, we addressed the construction of orthogonal Latin squares via linear cellular automata (CA). In particular, we showed that the local rule of a linear CA which generates a Latin square of order $q^n$ is described by a (monic) polynomial of degree $n$ over $\mathbb{F}_q$ with a nonzero constant term. Moreover, we proved that the Latin squares induced by two such CA are orthogonal if and only if their associated polynomials are relatively prime. Therefore, to determine the total number of Latin squares generated by linear CA and to generate them, one has to respectively count and enumerate the pairs of coprime polynomials of equal degree with a nonzero constant term. This is relevant in further applications of the CA-based Latin square construction, such as the design of pseudorandom number generators~\cite{mariot21} and bent functions~\cite{gadouleau20}. Notice that the bijections provided in the previous literature cannot be applied ``as is'' to this specific problem, since they do not give any control over the constant terms of the coprime pairs.
We remark that the counting question has already been settled in~\cite{mariot20}, where we proved a recurrence formula for any such pair of coprime polynomials over a generic finite field $\mathbb{F}_q$. However, up to now finding an enumeration algorithm remained an open question. In this work we consider the enumeration problem for the specific case of polynomials over $\mathbb{F}_2$.
The core of our work exploits Benjamin and Bennett's bijection of Euclid/dilcuE's algorithm to enumerate and count the \emph{sequences of quotients} that characterize all pairs of coprime polynomials with a nonzero constant term. More precisely, we show that the study of these sequences can be broken up into three parts, namely centering on their \emph{constant terms}, \emph{degrees} and \emph{intermediate terms}. For the first part, we unveil an interesting connection between sequences of constant terms and formal languages. In particular, we prove that the desired sequences of constant terms form a regular language recognized by a simple finite state automaton, whose transitions are described by a de Bruijn graph. Then, leveraging the classic \emph{Chomsky-Sch\"{u}tzenberger theorem}~\cite{chomsky59}, we derive the generating function of this language and the corresponding closed form, which allows us to count all sequences of constant terms of a fixed length. For the second part, we remark that the sequences of quotients' degrees are equivalent to \emph{compositions} of the degree $n$ for the final coprime pair. This in turn allows us to enumerate and count all desired sequences in terms of the power set lattice, expunged of its top element (since the trivial composition $n+0$ cannot occur during dilcuE's algorithm). Finally, for the intermediate terms of the quotients we observe that they can be chosen freely, once the composition of degrees has been fixed. Hence, this is equivalent to enumerating binary strings of length $n-k$, where $n$ is the degree and $k$ the length of the quotients' sequence.
As a straightforward application of the results above, we present the pseudocode of a combinatorial algorithm that generates all pairs of coprime polynomials of degree $n$ and nonzero constant term by independently enumerating the sequences of constant terms, degrees and intermediate terms of the quotients' sequences. As a side result, we also give an alternative derivation of the same counting formula in~\cite{mariot20} for $q=2$, which can be considered as an indirect proof of correctness of our enumeration algorithm.
\section{Problem Statement and Decomposition}
\label{sec:prob}
Let $q=2$, $\mathbb{F}_2 = \{0,1\}$ be the finite field with two elements and $\mathbb{F}_2[x]$ be the ring of polynomials with coefficients in $\mathbb{F}_2$. For all $n \in \mathbb{N}$, we define the set $S_n$ as:
\begin{equation}
\label{eq:sn}
S_n = \{f \in \mathbb{F}_2[x]: f = x^n + a_{n-1}x^{n-1} + \ldots + a_1x + a_0, \ a_0 = 1 \} \enspace ,
\end{equation}
that is, $S_n$ is the set of binary polynomials of degree $n$ with nonzero constant term. Clearly any non-trivial polynomial in $\mathbb{F}_2[x]$ is monic, since the only possible coefficients are 0 and 1. Hence, in the remainder of this paper we will omit to specify that the polynomials are monic.
Define now the sets $A_n$ and $B_n$ for $n \in \mathbb{N}$ respectively as:
\begin{equation}
A_n = \{(f,g) \in S_n^2: \gcd(f,g) = 1\} \enspace , \enspace B_n = \{(f,g) \in S_n^2: \gcd(f,g) \neq 1\} \enspace .
\end{equation}
Thus, $A_n$ and $B_n$ are the sets of pairs of polynomials of degree $n$ and nonzero constant terms that are respectively coprime and non-coprime. Clearly, we have $A_n \cup B_n = S_n^2$ and $A_n \cap B_n = \varnothing$. In what follows we will indicate the cardinality of each of the above sets by the corresponding lowercase letter, thus $s_n = |S_n|$, $a_n = |A_n|$ and $b_n = |B_n|$. We can now give the formal statement of the problem addressed in this paper:
\begin{problem*}
Given $n \in \mathbb{N}$:
\begin{compactitem}
\item[(i)] \emph{Enumeration}: Find an algorithm to exhaustively generate all elements of $A_n$.
\item[(ii)] \emph{Counting}: Find a formula for $a_n$.
\end{compactitem}
\end{problem*}
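Before turning to the bijection-based approach, the problem can be grounded with a brute-force check. The sketch below is an illustrative implementation of ours (not part of the cited works): a polynomial over $\mathbb{F}_2$ is stored as an integer bitmask, with bit $i$ holding the coefficient of $x^i$, so that polynomial addition becomes XOR.

```python
from itertools import product

def pgcd(a, b):
    # gcd of two GF(2) polynomials stored as bitmasks
    while b:
        # reduce a modulo b by repeatedly cancelling the leading term
        while a.bit_length() >= b.bit_length():
            a ^= b << (a.bit_length() - b.bit_length())
        a, b = b, a
    return a

def brute_a(n):
    # |A_n| by exhaustive search: masks of the form x^n + ... + 1,
    # with the n-1 middle coefficients free
    S = [(1 << n) | (m << 1) | 1 for m in range(1 << (n - 1))]
    return sum(1 for f, g in product(S, S) if pgcd(f, g) == 1)
```

Already for moderate $n$ the quadratic blow-up $|S_n|^2 = 4^{n-1}$ makes this approach infeasible, which motivates the enumeration developed in the rest of the paper.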
Notice that Problem (ii) has already been solved for the general case where there are no constraints on the constant terms, i.e. $f$ and $g$ are just two polynomials of degree $n$
(see~\cite{reifegerste00,benjamin07}). The idea behind the proof of~\cite{benjamin07} is that for each non-coprime pair $(f,g)$ one can construct a coprime pair $(f',g')$ of the same degree in the following way:
\begin{compactenum}
\item Apply Euclid's algorithm to the pair $(f,g)$. Since $(f,g)$ is a
non-coprime pair, at the end of the algorithm the last remainder will be 0.
\item Replace the last remainder with 1, and reverse Euclid's algorithm using
the same sequence of quotients computed for $(f,g)$.
\item By construction, the pair $(f',g')$ obtained at the end of the reverse
algorithm will be coprime.
\end{compactenum}
The reversal of Euclid's algorithm in step 2 is also called \emph{dilcuE's algorithm} by the authors of~\cite{benjamin07}. A symmetric reasoning can be applied to obtain a non-coprime pair from a coprime one by setting the last remainder equal to 0, and again reversing the algorithm with the same sequence of quotients. Thus, \emph{the family of all sequences of quotients defines a bijection between coprime and non-coprime pairs} of polynomials of degree $n$ over $\mathbb{F}_2$. The only difference lies in the final remainder from which dilcuE's algorithm starts.
As an illustrative example, consider the case where $n=3$, and the two polynomials are respectively $f(x) = x^3 + x^2 + x + 1$ and $g(x) = x^3 + 1$. Since $f(x) = (x+1)^3$ and $x^3 + 1 = (x+1)(x^2+x+1)$, we have that $\gcd(f,g) = x+1$, and thus $f$ and $g$ are not coprime. When applying Euclid's algorithm, at any generic step $i$ we evaluate the following Euclidean division:
\begin{equation}
\label{eq:euc-div}
r_i(x) = q_{i+1}(x)r_{i+1}(x) + r_{i+2}(x) \enspace ,
\end{equation}
where $r_i(x)$ and $r_{i+1}(x)$ represent respectively the dividend and the divisor polynomial, $q_{i+1}(x)$ is the quotient, and $r_{i+2}(x)$ is the remainder of the division between $r_i(x)$ and $r_{i+1}(x)$. At the beginning (step 1) we set $r_1(x) = f(x)$ and $r_2(x) = g(x)$. Then, the process is repeated by shifting the divisor to become the dividend, whereas the remainder becomes the divisor. Using the compact notation of~\cite{benjamin07}, we obtain the following execution trace of Euclid's algorithm:
\begin{displaymath}
(x^3 + x^2 + x + 1, x^3 + 1)\xrightarrow{q_1=1} (x^3 + 1, x^2 + x) \xrightarrow{q_2=x+1} (x^2 + x, x + 1) \xrightarrow{q_3 = x} (x + 1, 0) \enspace ,
\end{displaymath}
where each pair of adjacent remainders is connected to the next one by an arrow indicating the corresponding quotient. Suppose now that we reverse the process by changing the last remainder from $0$ to $1$. The trace obtained by dilcuE's algorithm using the same sequence of quotients $q_3,q_2,q_1$ in reverse order is:
\begin{displaymath}
(x+1, \mathbf{1})\xrightarrow{x} (x^2+x+1, x+1)\xrightarrow{x+1} (x^3+x, x^2+x+1) \xrightarrow{1}(x^3+x^2+1, x^3+x) \enspace .
\end{displaymath}
By construction, the recovered pair $(f',g') = (x^3+x^2+1, x^3+x)$ is coprime.
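The two traces above can be reproduced with a few lines of code. The following sketch is an illustrative implementation of ours (not taken from~\cite{benjamin07}): a binary polynomial is stored as an integer bitmask (bit $i$ holding the coefficient of $x^i$), so that polynomial addition is XOR.

```python
def pdivmod(a, b):
    # long division over GF(2); polynomials are bitmasks (bit i <-> x^i)
    q = 0
    while b and a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift          # add x^shift to the quotient
        a ^= b << shift          # subtract (= add) x^shift * b from the dividend
    return q, a

def pmul(a, b):
    # carry-less (GF(2)) polynomial multiplication
    r = 0
    while b:
        if b & 1:
            r ^= a
        a, b = a << 1, b >> 1
    return r

def euclid(f, g):
    # returns the sequence of quotients and the gcd
    quotients = []
    while g:
        q, r = pdivmod(f, g)
        quotients.append(q)
        f, g = g, r
    return quotients, f

def dilcue(quotients, pair):
    # reverse Euclid: rebuild the starting pair from the final remainders,
    # reusing the quotients in reverse order (r_i = q_{i+1} r_{i+1} + r_{i+2})
    a, b = pair
    for q in reversed(quotients):
        a, b = pmul(q, a) ^ b, a
    return a, b
```

Running `euclid` on the masks `0b1111` ($= x^3+x^2+x+1$) and `0b1001` ($= x^3+1$) returns the quotient masks `1`, `0b11`, `0b10` and the gcd `0b11` ($= x+1$); feeding these quotients to `dilcue` with the final pair $(x+1, 1)$ reconstructs a coprime pair of degree $3$.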
Observe that, if we apply this procedure starting from a non-coprime pair $(f,g) \in B_n$, the polynomials $f'$ and $g'$ in the new coprime pair will not necessarily have a nonzero constant term, although not both of them will have a null constant term (otherwise, they would have a factor $x$ in common). Indeed, in the example above we mapped $(f,g) \in B_n$ to $(f',g')$ where $g'$ has a null constant term. For our problem, we thus need to analyze Euclid's algorithm in more detail, in order to see how changing the last remainder affects the constant terms of the intermediate remainders and, consequently, the constant terms of $f'$ and $g'$.
In what follows, we will make use of these two basic remarks:
\begin{remark}
\label{rem:first-last}
Let $(f,g)$ be two polynomials of degree $n$. Then:
\begin{compactitem}
\item[(i)] The first quotient obtained from Euclid's algorithm is always $q_1 = 1$. Indeed, since $f$ and $g$ both have degree $n$, the long division stops immediately after dividing $x^n$ by $x^n$.
\item[(ii)] Suppose that $\gcd(f,g) = 1$. Then, when the last pair of remainders is $(r_k(x), 1)$, if we apply Euclid's algorithm for one further step we will always obtain the pair $(1,0)$ with quotient $r_k(x)$. In fact we can write the division of $r_k(x)$ and $1$ as $r_k(x) = r_k(x) \cdot 1 + 0$.
\end{compactitem}
\end{remark}
As recalled above, each pair of polynomials $(f,g)$ is identified by a unique sequence of quotients \emph{and} the final pair of remainders. Therefore, to enumerate and count all elements in $A_n$, we have to characterize all sequences of quotients such that, when applied in reverse order from the last pair $(1,0)$ through dilcuE's algorithm, they yield a coprime pair of degree $n$ where both polynomials have a nonzero constant term. Suppose that $q_1, q_2,\cdots, q_k$ is such a sequence of quotients, represented as:
\begin{align*}
q_1 &\to \overbrace{x^{d_1}}^{\textrm{degrees}} \, + \overbrace{q_{1,d_1-1}x^{d_1-1} + \cdots + q_{1,1}x}^{\textrm{intermediate terms}} + \overbrace{s_1}^{\textrm{constant terms}} \\
q_2 &\to \ \ x^{d_2} \ \: + q_{2,d_2-1}x^{d_2-1} + \cdots + q_{2,1}x + \ \ \ \ \ \ s_2 \\
\vdots &\to \ \ \ \vdots \ \ \ \, \, \, + \ \ \ \ \vdots \ \ \ \ \ \ \ \ \ \ \, \; + \cdots + \ \ \vdots \ \ \ \, + \ \ \ \ \ \ \ \vdots \\
q_k &\to \ \ x^{d_k} \ \: + q_{k,d_k-1}x^{d_k-1} + \cdots + q_{k,1}x + \ \ \ \ \ \ s_k \\
\end{align*}
where $d_1,\cdots,d_k \in \mathbb{N}$ are the \emph{degrees} of the quotients, $q_{i,j} \in \mathbb{F}_2$ are the coefficients of the \emph{intermediate terms}, while $s_i \in \mathbb{F}_2$ are the \emph{constant terms}.
Remark that each sequence of quotients is defined by an \emph{independent choice} of these three elements: one can get any combination by juxtaposing a sequence of degrees with any other sequence of intermediate and constant terms. Clearly, since we are interested in obtaining coprime polynomial pairs of degree $n$ where both polynomials have a nonzero constant term, we have the following two constraints:
\begin{compactitem}
\item The degrees $d_i$ must sum to $n$. Indeed, since by Remark~\ref{rem:first-last}(i) the first quotient is always equal to $1$ (i.e. it has degree 0), this ensures that both polynomials at the end of dilcuE's algorithm have degree $n$.
\item The sequence of constant terms is such that the constant terms of the two last remainders are respectively $1$ and $0$ (due to Remark~\ref{rem:first-last}(ii)), while the first two remainders (i.e., the reconstructed pair) must have constant term $1$.
\end{compactitem}
On the other hand, there are no constraints on the intermediate terms. This means that they can be freely chosen, which allows us to immediately settle their enumeration and counting question. In particular, once the degree $n$ and the length of the quotients' sequence $k$ are fixed, \emph{enumerating the sequences of intermediate terms is equivalent to enumerating the set of binary strings of length} $n-k$, since the leading coefficients (which always equal $1$) and the constant terms are excluded. Therefore, their number is:
\begin{equation}
\label{eq:seq-int}
I_{n,k} = 2^{n-k} \enspace .
\end{equation}
In the next sections, we focus on the enumeration and counting of the remaining two parts, namely the sequences of constant terms and degrees.
\section{The Regular Language of Constant Terms}
\label{sec:const}
Recall from Section~\ref{sec:prob} that if $r_i(x)$ and $r_{i+1}(x)$ are two intermediate remainders at step $i$ of Euclid's algorithm, then the quotient $q_{i+1}(x)$ and the next remainder $r_{i+2}(x)$ are determined through Equation~\eqref{eq:euc-div}. Notice that, if both $r_i$ and $r_{i+1}$ have a null constant term, then also $r_{i+2}$ will have a null constant term, independently of the quotient $q_{i+1}$. Since Euclid's algorithm consists in applying Equation~\eqref{eq:euc-div} iteratively at each step, it follows that if we
start from a pair $(f,g)$ where both $f$ and $g$ have a null constant term, then all intermediate remainders in the algorithm will also have null constant terms. Conversely, if we start from a pair $(f,g)$ where at least one of the two polynomials has a nonzero constant term, then in all subsequent steps of Euclid's algorithm the two adjacent remainders $r_i,r_{i+1}$ will never both have null constant terms.
More formally, for all steps $i$ in Euclid's algorithm we can consider the presence/absence of the constant terms in $r_i, r_{i+1}$ as the \emph{state} of a discrete dynamical system, described by a pair $(c_i, c_{i+1})$ where $c_i, c_{i+1} \in \mathbb{F}_2$ respectively denote the constant terms of $r_i$ and $r_{i+1}$. Since we are interested in pairs of polynomials which both have a nonzero constant term, from the discussion above we can rule out the possibility that $(c_i,c_{i+1}) = (0,0)$. Hence, for all steps $i$ we have that $(c_i,c_{i+1}) \in (\mathbb{F}_2^2)^* = \{(1,1), (1,0), (0,1)\}$.
Denoting by $s_{i+1} \in \mathbb{F}_2$ the value of the constant term in the quotient $q_{i+1}$, we can derive the transition function $\delta: (\mathbb{F}_2^2)^* \times \mathbb{F}_2 \rightarrow (\mathbb{F}_2^2)^*$ which maps a pair $(c_i,c_{i+1})$ to the next $(c_{i+1}, c_{i+2})$ using Equation~\eqref{eq:euc-div}, to assess the presence/absence of the constant term in $c_{i+2}$. Figure~\ref{fig:delta} reports the transition function $\delta$ for all possible $(2^2-1)\cdot 2 = 6$ inputs in $(\mathbb{F}_2^2)^*\times \mathbb{F}_2$, both in tabular form and as a transition graph.
\begin{figure}[t]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\begin{tabular}{cc|c}
\hline
$(c_i,c_{i+1})$ & $s_{i+1}$ & $\delta((c_i,c_{i+1}),s_{i+1})$ \\
\hline
$(1,1)$ & $0$ & $(1,1)$ \\
$(1,1)$ & $1$ & $(1,0)$ \\
$(1,0)$ & $0$ & $(0,1)$ \\
$(1,0)$ & $1$ & $(0,1)$ \\
$(0,1)$ & $0$ & $(1,0)$ \\
$(0,1)$ & $1$ & $(1,1)$ \\
\hline
\end{tabular}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\resizebox{!}{4cm}{
\begin{tikzpicture}
[->,auto,node distance=1.5cm, every loop/.style={min distance=12mm},
empt node/.style={font=\sffamily,inner sep=0pt,outer sep=0pt},
circ node/.style={circle,thick,draw,font=\sffamily\bfseries,minimum
width=0.8cm, inner sep=0pt, outer sep=0pt}]
\node [circ node] (n11) {$11$};
\node [empt node] (e1) [below = 2.25cm of n11] {};
\node [circ node] (n10) [right = 1.5cm of e1] {$10$};
\node [circ node] (n01) [left = 1.5cm of e1] {$01$};
\draw [->, thick, shorten >=0pt,shorten <=0pt,>=stealth] (n11)
edge[bend left=20] node (f5) [above right]{$1$} (n10);
\draw[->, thick, shorten >=0pt,shorten <=0pt,>=stealth] (n11) edge[loop
above] node (f3) [above]{$0$} ();
\draw [->, thick, shorten >=0pt,shorten <=0pt,>=stealth] (n10)
edge[bend left=20] node (f5) [below]{$0/1$} (n01);
\draw [->, thick, shorten >=0pt,shorten <=0pt,>=stealth] (n01)
edge[bend left=20] node (f5) [above]{$0$} (n10);
\draw [->, thick, shorten >=0pt,shorten <=0pt,>=stealth] (n01)
edge[bend left=20] node (f5) [above left]{$1$} (n11);
\end{tikzpicture}
}
\end{subfigure}
\caption{Transition table and graph realizing $\delta$.}
\label{fig:delta}
\end{figure}
An interesting remark is that the graph corresponding to $\delta$ is actually the \emph{de Bruijn graph} over $(\mathbb{F}_2^2)^*$. Indeed, this stems from the fact that at each step of Euclid's algorithm we shift the divisor to become the dividend, and the remainder becomes the new divisor. Thus, a path over the \emph{vertices} of this graph gives us a sequence of constant terms for the remainders generated by Euclid's algorithm, once each adjacent pair of vertices is overlapped respectively on the rightmost and leftmost coordinate. On the other hand, a path over the \emph{edges} yields a sequence $s_1,\cdots,s_k$ of constant terms for the quotients.
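The transition function $\delta$ admits a compact closed form. From Equation~\eqref{eq:euc-div}, the constant term of $r_{i+2}$ equals $c_i + s_{i+1}c_{i+1} \bmod 2$, and the state pair then shifts by one position. The following one-liner is an illustrative transcription of ours (the function name is not from the paper):

```python
def delta(state, s):
    # one Euclid step on constant terms: r_{i+2} = r_i + q_{i+1} r_{i+1},
    # hence c_{i+2} = c_i XOR (s AND c_{i+1}); the pair then shifts left
    ci, cj = state
    return (cj, ci ^ (s & cj))
```

Evaluating `delta` on the six admissible inputs reproduces the transition table of Figure~\ref{fig:delta}, and one can also check mechanically that the automaton is permutative, a property used below.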
To characterize the sequences of constant terms of the quotients for our enumeration problem, we can thus consider the whole dynamical system as a finite state automaton (FSA): the idea is to define the relevant sequences as words of the language recognized by the FSA. This first requires the definition of an initial and accepting state for the automaton.
Consider a pair of polynomials $(f,g) \in S_n^2$. The sequence of quotients $q_1,q_2,\cdots$ yielded by Euclid's algorithm induces a path on the FSA graph starting from state $(1,1)$, which is labelled by the constant terms of the quotients. The final state at the end of the path will be either $(1,1)$, $(1,0)$ or $(0,1)$.
What happens if we change the final state to one of the remaining two states, and invert the process with the same sequence of constant terms, reading them in reverse order? Observe from the table of $\delta$ that the FSA is \emph{permutative}, meaning that if we take two distinct states and read the same quotient constant term $s_{i+1}$, then the two output states after applying $\delta$ will be distinct as well. Formally, for all $(c_i, c_{i+1}) \neq (c_i', c_{i+1}')$ and $s_{i+1}$, it holds that
\begin{displaymath}
\delta((c_i,c_{i+1}),s_{i+1}) \neq \delta((c_i',c_{i+1}'),s_{i+1}) \enspace .
\end{displaymath}
A simple induction argument shows that this property also holds for sequences of constant terms. Thus, if we start from two different initial states and apply the same sequence of constant terms, the final states will be different. This allows us to define the \emph{inverse} automaton of $\delta$ by simply reversing the arrows in the transition graph of Figure~\ref{fig:delta}. Remark that the inverse FSA corresponds to the application of dilcuE's algorithm. Therefore, to characterize the sequences of constant terms for the quotients in our problem, we can define the initial and accepting states as follows:
\begin{compactitem}
\item On account of Remark~\ref{rem:first-last}, the initial state is $(1,0)$.
\item The accepting state should be $(1,1)$, since we want both our final polynomials to have a nonzero constant term. However, by Remark~\ref{rem:first-last}(i) the first quotient in Euclid's algorithm (therefore, the last one in dilcuE's) is always $1$. Hence, we can shorten our sequence by one element, and append $1$ to it. Since the only way to reach $(1,1)$ in the inverse FSA by reading a $1$ is from $(1,0)$, it follows that $(1,0)$ can also be considered as the only accepting state.
\end{compactitem}
Figure~\ref{fig:inv-fsa} depicts the transition graph of the inverse automaton, with the indication of the initial and final state $(1,0)$.
\begin{figure}[b]
\centering
\resizebox{!}{5cm}{
\begin{tikzpicture}
[->,auto,node distance=1.5cm, every loop/.style={min distance=12mm},
empt node/.style={font=\sffamily,inner sep=0pt,outer sep=0pt},
circ node/.style={circle,thick,draw,font=\sffamily\bfseries,minimum
width=0.8cm, inner sep=0pt, outer sep=0pt}]
\node [circ node] (n11) {$11$};
\node [empt node] (e1) [below = 2.25cm of n11] {};
\node [state,accepting] (n10) [right = 1.5cm of e1] {$10$};
\node [empt node] (e2) [below=0cm of n10] {$\Uparrow$};
\node [circ node] (n01) [left = 1.5cm of e1] {$01$};
\draw [->, thick, shorten >=0pt,shorten <=0pt,>=stealth] (n10)
edge[bend right=20] node (f5) [above right]{$1$} (n11);
\draw[->, thick, shorten >=0pt,shorten <=0pt,>=stealth] (n11) edge[loop
above] node (f3) [above]{$0$} ();
\draw [->, thick, shorten >=0pt,shorten <=0pt,>=stealth] (n01)
edge[bend right=20] node (f5) [below]{$0/1$} (n10);
\draw [->, thick, shorten >=0pt,shorten <=0pt,>=stealth] (n10)
edge[bend right=20] node (f5) [above]{$0$} (n01);
\draw [->, thick, shorten >=0pt,shorten <=0pt,>=stealth] (n11)
edge[bend right=20] node (f5) [above left]{$1$} (n01);
\end{tikzpicture}
}
\caption{Transition graph for the inverse FSA associated to dilcuE's algorithm.}
\label{fig:inv-fsa}
\end{figure}
Using the classic state elimination method~\cite{hopcroft09}, we can obtain the regular expression that generates the language recognized by the inverse FSA, which is:
\begin{equation}
\label{eq:rl}
L_r = (0(0+1)+(10^*1(0+1)))^* \enspace .
\end{equation}
We have thus obtained the following result:
\begin{lemma}
\label{lm:reg-lang}
The sequences of constant terms for the quotients visited by dilcuE's algorithm when generating a coprime pair $(f,g) \in A_n$ form a regular language $L_r$, whose regular expression is defined by Equation~\eqref{eq:rl}.
\end{lemma}
Returning to our main problem, enumerating the sequences $s_1,\cdots,s_k$ of constant term basically amounts to generate all words in $L_r$ of length $k$. This can be accomplished, for instance, by M\"{a}kinen's enumeration algorithm in~\cite{makinen97}, which generates in lexicographic order all words of fixed length in a regular language. The algorithm is quite efficient, since the time required to generate the next word of length $k$ from the previous one is $\mathcal{O}(k)$.
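For illustration, a plain depth-first search over the inverse FSA already enumerates the words of $L_r$ of a fixed length in lexicographic order (this is a simple stand-in for M\"{a}kinen's algorithm, not an implementation of it; the transition map below is transcribed from Figure~\ref{fig:inv-fsa}, with states written as strings):

```python
def words(k):
    # inverse FSA: state (1,0) is both initial and accepting
    step = {('10', '0'): '01', ('10', '1'): '11',
            ('11', '0'): '11', ('11', '1'): '01',
            ('01', '0'): '10', ('01', '1'): '10'}

    def go(state, prefix):
        if len(prefix) == k:
            if state == '10':          # accept only in state (1,0)
                yield prefix
            return
        for s in '01':                 # '0' before '1': lexicographic order
            yield from go(step[(state, s)], prefix + s)

    yield from go('10', '')
```

For instance, the accepted words of length $2$ are exactly `00` and `01`, and the number of accepted words of length $k$ matches the closed form derived next.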
Concerning the counting part, we are interested in determining the number of words of length $k \in \mathbb{N}$ that belong to the language $L_r$. To this end, we employ the \emph{Chomsky-Sch\"{u}tzenberger enumeration theorem}~\cite{chomsky59}, which associates to each regular\footnote{The theorem in its general form actually applies to the larger class of context-free languages, but here we are interested only in regular ones.} language a \emph{formal power series} (FPS) of the type:
\begin{equation}
\label{eq:fps}
\mathcal{F}_L = \sum_{k=0}^{\infty} \ell_kX^k \enspace ,
\end{equation}
where the coefficient $\ell_k$ gives the number of words of length $k$ in the language $L$. The theorem is applied by first recovering the \emph{generating function} of the FPS from the regular expression of the language, using the rules that $0$ and $1$ are mapped to the unknown $X$, alternative choice ($+$) and concatenation are mapped respectively to polynomial sum $+$ and multiplication $\cdot$, while the Kleene closure operator $^*$ is transformed into the function $\frac{1}{1-X}$. Using these rules, the generating function of our regular language $L_r$ is:
\begin{equation}
\label{eq:gf}
G(X) = \frac{1}{1 - \left( X (X+X) + X \cdot \frac{1}{1-X} \cdot X(X+X) \right)} = \frac{1-X}{1-X-2X^2} \enspace .
\end{equation}
Then, the closed form for the generic coefficient $\ell_k$ can be obtained by rewriting the generating function as a sum of geometric series. This gives the following result:
\begin{lemma}
\label{lm:cf}
The number of words of length $k \in \mathbb{N}$ in the regular language $L_r$ for the sequences of constant terms is:
\begin{equation}
\label{eq:cf}
\ell_k = \frac{2^k + 2\cdot(-1)^k}{3} \enspace .
\end{equation}
\begin{proof}
We first rewrite the generating function as follows:
\begin{equation}
\label{eq:cf-1}
G(X) = \frac{1-X}{1-X-2X^2} = \frac{X-1}{2X^2 + X - 1} = \frac{X-1}{(X+1)(2X-1)} \enspace .
\end{equation}
Then, we break the fraction into two partial fractions:
\begin{equation}
\label{eq:cf-2}
\frac{X-1}{(X+1)(2X-1)} = \frac{A(2X-1) + B(X+1)}{(X+1)(2X-1)} = \frac{A}{X+1} + \frac{B}{2X-1} \enspace .
\end{equation}
Solving the associated system of equations gives us $A=\frac{2}{3}$ and $B=-\frac{1}{3}$. Hence we can rewrite Equation~\eqref{eq:cf-2} as:
\begin{equation}
\label{eq:cf-3}
\frac{X-1}{(X+1)(2X-1)} = \frac{2}{3} \cdot \frac{1}{X+1} - \frac{1}{3} \cdot \frac{1}{2X-1} \enspace .
\end{equation}
Recall the geometric series $\sum_{k=0}^\infty a r^k X^k = \frac{a}{1-rX}$, which we apply with $r=-1$ and $r=2$. Thus, we have:
\begin{equation}
\label{eq:cf-4}
\frac{2}{3} \cdot \frac{1}{X+1} - \frac{1}{3} \cdot \frac{1}{2X-1} = \sum_{k=0}^\infty \left ( \frac{1}{3} \cdot (2(-1)^k + 2^k) \right ) X^k = \sum_{k=0}^\infty \ell_k X^k \enspace ,
\end{equation}
from which the result follows.
\end{proof}
\end{lemma}
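The closed form can also be cross-checked against the denominator of $G(X)$: multiplying out $G(X)(1-X-2X^2) = 1-X$ and comparing coefficients yields $\ell_0 = 1$, $\ell_1 = 0$ and the recurrence $\ell_k = \ell_{k-1} + 2\ell_{k-2}$ for $k \ge 2$. A short sketch of ours (the function names are not from the paper):

```python
def l_closed(k):
    # closed form of Equation (eq:cf); the numerator is always divisible by 3
    return (2**k + 2 * (-1)**k) // 3

def l_rec(kmax):
    # coefficient recurrence read off the denominator 1 - X - 2X^2 of G(X):
    # (1 - X - 2X^2) * sum_k l_k X^k = 1 - X  =>  l_k = l_{k-1} + 2 l_{k-2}
    ls = [1, 0]
    for _ in range(2, kmax + 1):
        ls.append(ls[-1] + 2 * ls[-2])
    return ls[:kmax + 1]
```

Both computations agree on every index, giving the sequence $1, 0, 2, 2, 6, 10, 22, \ldots$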
\section{Quotients' Degrees Sequences and Compositions}
\label{sec:comp}
The second part of our problem consists in characterizing the sequences of \emph{degrees} of the quotients visited by dilcuE's algorithm. As observed in Section~\ref{sec:prob}, the only constraint that we put on such a sequence $d_1,\cdots, d_k$ is that its sum must be equal to the final degree $n$, i.e. $\sum_{i=1}^k d_i = n$. Remark that the \emph{order} of the summands matters here: indeed, changing the order of the degrees yields a different quotients' sequence, and thus a different polynomial pair. Hence, we are interested in enumerating and counting the number of ways in which we can obtain $n$ as an ordered sum of $k$ natural numbers. Such sums are known as \emph{compositions} in combinatorics~\cite{riordan12}. A simple way to represent a composition of $n \in \mathbb{N}$ is through $n-1$ \emph{boxes} interleaved between $n$ occurrences of $1$:
\[
1 \overbrace{\square 1 \square \ldots \square 1 \square}^{n-1} 1 \enspace ,
\]
where each box can be filled with a comma (,) or a plus (+). The semantics are as follows: a comma separates two different parts in a composition, while a plus adds two adjacent 1s together. Thus, for example, for $n=4$ one could have the assignment $(, \ + \ ,)$ in the boxes which gives $1, 1+1, 1 \to 1 + 2 + 1$. Since we have two choices for each box, it follows that there are $2^{n-1}$ compositions of $n \in \mathbb{N}$. Notice however that, in our case, we cannot take the extremal composition where all boxes are set to $+$. This is due to the fact that a sequence composed of just one quotient cannot occur in dilcuE's algorithm. Indeed, starting from the initial state $(1,0)$ in the inverse FSA of Section~\ref{sec:const}, one can see that both possible paths of length 1 terminate in a non-accepting state. Therefore, we have to consider all compositions of $n$ of length $k\ge 2$.
Once the length $k$ of the sequence of quotients has been fixed, generating the corresponding degrees is equivalent to the enumeration of all binary strings of length $n-1$ with $k-1$ ones in them. This can be done for example by using one of the various algorithms devised by Knuth~\cite{knuth11} for this task. On the other hand, the number of all compositions of $n$ of length $k$ is given by the binomial coefficient $\binom{n-1}{k-1}$. Therefore, we obtain the following result:
\begin{lemma}
\label{lm:comp}
The number of sequences of degrees $d_1,\cdots,d_k$ summing to the final degree $n \in \mathbb{N}$ for the quotients visited by dilcuE's algorithm is:
\begin{equation}
\label{eq:comp}
D_{n,k} = \binom{n-1}{k-1} \enspace .
\end{equation}
\end{lemma}
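The boxes encoding above translates directly into code: choosing which $k-1$ of the $n-1$ boxes receive a comma determines the composition, and the parts are the gaps between consecutive cut positions. An illustrative sketch (our function name):

```python
from itertools import combinations

def compositions(n, k):
    # ordered sums d_1 + ... + d_k = n with d_i >= 1, via the boxes encoding:
    # a comma in box j (1 <= j <= n-1) cuts the row of n ones at position j
    for commas in combinations(range(1, n), k - 1):
        cuts = (0,) + commas + (n,)
        yield tuple(cuts[i + 1] - cuts[i] for i in range(k))
```

For example, `compositions(4, 2)` yields $(1,3)$, $(2,2)$, $(3,1)$, and the number of generated tuples equals $\binom{n-1}{k-1}$ for every $k$.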
\section{Conclusion}
\label{sec:outro}
As an application of the results above, we solve Problem (i) by giving the pseudocode to generate all pairs $f,g$ of coprime polynomials of degree $n$ with nonzero constant term in Algorithm~\ref{alg:enum}.
\begin{algorithm}
\caption{Enumerate all pairs $(f,g) \in A_n$}
For each quotients' sequence length $2 \le k \le n$ do:
\begin{itemize}
\item For each composition $comp$ of $n$ of length $k$ do:
\begin{itemize}
\item[(1)] Generate the degrees' sequence $deg$ corresponding to $comp$
\item[(2)] For each intermediate terms sequence $seq$ do:
\begin{itemize}
\item[(2a)] Adjoin $seq$ to $deg$ to get a quotients' sequence $quot$
\item[(2b)] For each constant term sequence $const$ of length $k$ do:
\begin{itemize}
\item Adjoin $const$ to $quot$
\item Apply dilcuE's algorithm from $(1,0)$ by using the sequence $quot$
\end{itemize}
\end{itemize}
\end{itemize}
\end{itemize}
\label{alg:enum}
\end{algorithm}
The idea of the algorithm is quite straightforward: since the three parts in which we decomposed the problem in Section~\ref{sec:prob} are independent, the structure consists of four nested loops that go respectively over each possible quotients' sequence length $k$, composition of $n$ in $k$ parts, intermediate terms sequence, and constant terms sequence. In each iteration of the innermost loop, a complete sequence of quotients is created, from which dilcuE's algorithm can be applied to get a coprime polynomial pair in $A_n$.
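The pseudocode admits a compact transcription. The sketch below is an illustrative implementation of ours: polynomials are stored as integer bitmasks, the constant-term words are produced by a plain depth-first search over the inverse FSA (in place of M\"{a}kinen's algorithm), and compositions are obtained from comma positions (in place of Knuth's bitstring algorithms).

```python
from itertools import combinations, product

def pdivmod(a, b):
    # long division over GF(2); polynomials as bitmasks (bit i <-> x^i)
    q = 0
    while b and a.bit_length() >= b.bit_length():
        s = a.bit_length() - b.bit_length()
        q ^= 1 << s
        a ^= b << s
    return q, a

def pmul(a, b):
    # carry-less (GF(2)) polynomial multiplication
    r = 0
    while b:
        if b & 1:
            r ^= a
        a, b = a << 1, b >> 1
    return r

def pgcd(a, b):
    while b:
        a, b = b, pdivmod(a, b)[1]
    return a

# inverse FSA of the constant-term language; state (1,0) is initial/accepting
STEP = {('10', '0'): '01', ('10', '1'): '11', ('11', '0'): '11',
        ('11', '1'): '01', ('01', '0'): '10', ('01', '1'): '10'}

def const_words(k, state='10', prefix=''):
    # length-k words of L_r, read off the inverse FSA by DFS
    if len(prefix) == k:
        if state == '10':
            yield prefix
        return
    for s in '01':
        yield from const_words(k, STEP[(state, s)], prefix + s)

def enumerate_An(n):
    # all ordered coprime pairs (f, g) of degree n with nonzero constant term
    pairs = set()
    for k in range(2, n + 1):
        for commas in combinations(range(1, n), k - 1):  # composition of n
            cuts = (0,) + commas + (n,)
            degs = [cuts[i + 1] - cuts[i] for i in range(k)]
            free = [range(1 << (d - 1)) for d in degs]   # intermediate terms
            for mids in product(*free):
                for word in const_words(k):
                    # quotient j: leading bit x^d_j, middle bits, constant s_j
                    quots = [(1 << d) | (m << 1) | int(s)
                             for d, m, s in zip(degs, mids, word)]
                    a, b = 1, 0                          # final pair (1, 0)
                    for q in quots + [1]:                # last quotient is 1
                        a, b = pmul(q, a) ^ b, a
                    pairs.add((a, b))
    return pairs
```

For example, `enumerate_An(3)` yields the $10$ ordered pairs predicted by Equation~\eqref{eq:count}, each coprime and with both constant terms equal to $1$.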
As a side result, we also give a derivation for the counting formula of $a_n$, thereby providing an alternative proof for the number of coprime polynomial pairs of degree $n$ over $\mathbb{F}_2$ with nonzero constant term. By exploiting again the fact that the three parts of our problem are independent, once the length $k$ is fixed the corresponding number of quotients' sequences amounts to the multiplication of the number of sequences of intermediate terms $I_{n,k}$ (Equation~\eqref{eq:seq-int}), degrees (Equation~\eqref{eq:comp}) and constant terms (Equation~\eqref{eq:cf}). Since we want to count all possible coprime pairs in $A_n$, we also have to sum over all possible lengths $2 \le k \le n$, giving us the following result:
\begin{lemma}
\label{lm:count}
The number of pairs of coprime polynomials of degree $n$ with nonzero constant term is equal to:
\begin{equation}
\label{eq:count}
a_n = \sum_{k=2}^n 2^{n-k} \cdot \binom{n-1}{k-1} \cdot \frac{2^k + 2\cdot(-1)^k}{3} = 2 \cdot \frac{4^{n-1}-1}{3} \enspace .
\end{equation}
\begin{proof}
The general term in the sum is given by the independent choice among all possible sequences of intermediate terms, degrees and constant terms. Hence, we have to multiply $I_{n,k}$, $D_{n,k}$ and $\ell_k$ together. The sum starts from $k=2$ since, as we observed earlier, there cannot be valid paths of length 1 in dilcuE's algorithm (let alone of length 0). Hence, we have:
\begin{align}
\label{eq:cont-1}
\nonumber
a_n &= \sum_{k=2}^n 2^{n-k} \cdot \binom{n-1}{k-1} \cdot \frac{2^k + 2\cdot(-1)^k}{3} = \\ &= \frac{2^n}{3}\sum_{k=2}^{n} \binom{n-1}{k-1} + \frac{2^{n+1}}{3} \sum_{k=2}^{n} \binom{n-1}{k-1} \cdot \left ( - \frac{1}{2}\right )^k \enspace .
\end{align}
Setting $j = k-1$ we obtain:
\begin{equation}
\label{eq:cont-2}
a_n = \frac{2^n}{3}\sum_{j=1}^{n-1} \binom{n-1}{j} + \frac{2^{n+1}}{3} \sum_{j=1}^{n-1} \binom{n-1}{j} \cdot \left ( - \frac{1}{2}\right )^{j+1} \enspace .
\end{equation}
Remark that the first sum evaluates to $2^{n-1}-1$, since we can rewrite it as:
\begin{equation}
\sum_{j=1}^{n-1} \binom{n-1}{j} 1^j \cdot 1^{n-1-j} = 2^{n-1} - 1 \enspace ,
\end{equation}
where we applied Newton's binomial formula, and the $-1$ stems from the fact that $j$ starts from $1$ instead of $0$. By bringing the $+1$ in the exponent of $\left(-\frac{1}{2}\right)^{j+1}$ out of the second sum, we can rewrite Equation~\eqref{eq:cont-2} as:
\begin{equation}
\label{eq:cont-3}
a_n = \frac{2^n}{3} \cdot (2^{n-1} - 1) - \frac{2^n}{3} \cdot \sum_{j=1}^{n-1} \binom{n-1}{j} \cdot \left( -\frac{1}{2} \right)^j \enspace .
\end{equation}
The second sum gives $\left(\frac{1}{2}\right)^{n-1} - 1$, since by applying again Newton's binomial formula it holds that:
\begin{equation}
\sum_{j=1}^{n-1} \binom{n-1}{j} \cdot \left( -\frac{1}{2}\right)^j \cdot 1^{n-1-j} = \left( 1 - \frac{1}{2}\right)^{n-1} - 1 = \left(\frac{1}{2}\right)^{n-1} - 1 \enspace .
\end{equation}
In conclusion, we obtain:
\begin{equation}
\label{eq:cont-4}
a_n = \frac{2^{2n-1} - 2^n}{3} - \frac{2-2^n}{3} = \frac{2^{2n-1} - 2}{3} = \frac{4^n - 4}{6} = 2 \cdot \frac{4^{n-1}-1}{3} \enspace .
\end{equation}
\end{proof}
\end{lemma}
Notice that the formula in Equation~\eqref{eq:count} is exactly \emph{twice} the formula that we proved in~\cite{mariot20}. This difference is however easy to explain, since the approach that we followed in this paper does not consider the \emph{order} of the polynomials in the pair. Hence, two pairs of the form $(f,g), (g,f) \in A_n$ are counted as distinct. On the other hand, the recurrence proved in~\cite{mariot20} focuses on counting the number of unordered pairs of coprime polynomials, thereby giving rise to the formula $\frac{4^{n-1}-1}{3}$, which corresponds to OEIS sequence A002450~\cite{a002450}. The fact that we reobtain the same formula only scaled by a constant factor serves as an independent confirmation that Algorithm~\ref{alg:enum} is correct.
There are a few interesting directions to explore for further research on this subject. The first natural extension of the problem is to consider coprime pairs over a generic finite field $\mathbb{F}_q$. While the counting question has already been settled in our previous work~\cite{mariot20}, enumerating all such pairs is still an open problem. Here we considered the case $q=2$ both because it is the simplest one and also the most useful for practical applications in cryptography and coding theory. However, a generalization of Algorithm~\ref{alg:enum} to any finite field $\mathbb{F}_q$ is certainly in order. Moreover, it would be interesting to extend this investigation to the case of $m$-tuples of pairwise coprime polynomials. We believe in particular that the part related to the sequences of constant terms could be handled through the \emph{product} of several automata. Finally, it would be interesting to compare the enumeration method described here with other more generic ones introduced in~\cite{mariot17} and~\cite{mariot18} for orthogonal Latin squares based on nonlinear CA.
\bibliographystyle{splncs04}
\section{Introduction}
While Einstein's general relativity (GR) has stood all direct
experimental tests~\cite{Will:2005va}, there are also reasons to
try to extend GR. For example
the mysterious nature of dark energy and dark matter might become
resolved within a modified theory of gravity.
Another reason to try to extend GR is the notion of generality.
Within the framework of GR torsion is not included in a natural,
geometric way. Indeed any calculation of the connection (either by
requiring metric compatibility, or by using the first order
formalism) leads to the (symmetric) Levi-Civit\`a connection. One
is then free to add torsion, but torsion does not follow
naturally from the theory. An interesting generalization of GR
would generate torsion in a purely geometric way, analogous to the
way the Levi-Civit\`a connection is generated in GR.
The Nonsymmetric Gravitational Theory (NGT)~\cite{Moffat:1994hv}
is an extension of GR that drops the standard axiom of GR that the
metric is a symmetric tensor. Thus we decompose the general,
nonsymmetric metric $g_{\mu\nu}$ into its symmetric and
antisymmetric parts
\begin{equation} \label{naive}
g_{\mu\nu}=G_{\mu\nu}+B_{\mu\nu},
\end{equation}
where $G_{\mu\nu}=g_{(\mu\nu)}$, $B_{\mu\nu}=g_{[\mu\nu]}$ and
$(\cdot)$ and $[\cdot]$ indicate normalized symmetrization and
anti-symmetrization, respectively. There is no physical
principle that tells us that the metric should be symmetric, and
therefore such a generalization is very interesting to study.
Moreover, the extra structure of NGT produces
interesting results on the issues of dark energy and dark
matter~\cite{Moffat:2004bm}~\cite{Moffat:2004nw}~\cite{Prokopec:2006kr}
and it will also be clear that such a theory produces torsion in a
very natural way. Unfortunately the nonsymmetric theory of
gravitation suffers from all kinds of problems. The first main
problem is the non-uniqueness of the theory, as described
in~\cite{Janssen:2006nn}. Since torsion is available and since the
linearization procedure is not unambiguous, the final linearized
lagrangian is (degenerately) determined by 11 free parameters. The
second problem, as described in~\cite{Damour:1992bt}, is the
possibility of propagating ghost modes. Fortunately this problem
can be relatively easily solved by the introduction of a mass term
for the $B$-field~\cite{Moffat:1994hv}~\cite{Clayton:1995yi}.
In this talk we consider NGT linearized around a GR
configuration. By explicitly constructing two different
backgrounds (FLRW-universe and Schwarzschild) we show that the
evolution of the $B$-field is unstable. By considering the most
general form of the linearized lagrangian, we can explicitly point
out which terms cause these instabilities.
In~\cite{Janssen:2006nn} it is both shown that these terms cannot
be removed and that these terms are not a relic of the
linearization.
Based on this analysis we are able to
write down a consistent, stable linearized lagrangian for the
B-field. We next canonically quantize the B-field in inflation and
follow its dynamics in radiation and matter era. This analysis
shows that the B-field is an excellent dark matter candidate,
provided the mass is of the order of the neutrino masses.
\section{The linearized Lagrangian} \label{slin}
Since GR is very successful, it is natural to assume that any
modification of the theory should be relatively small. Therefore
we consider NGT in the limit of a small $B$, but an arbitrary $G$.
The linearization of the full, general lagrangian is done in
Appendix A of~\cite{Janssen:2006nn}. The result is
\begin{eqnarray} \label{lagrangian}
\mathcal{L}=\sqrt{-G}\bigg[&R+2\Lambda-\frac{1}{12}H^2+(\frac{1}{4}m^2+\beta R)B^2\\
\nonumber&-\alpha R_{\mu\nu}B^{\mu\alpha}B_\alpha{}^\nu-\gamma R_{\mu\alpha\nu\beta}B^{\mu\nu}B^{\alpha\beta}\bigg]+\mathcal{O}(B^3).
\end{eqnarray}
Here the curvature terms $R_{\mu\alpha\nu\beta}$, $R_{\mu\nu}$ and
$R$ all refer to the background, GR, curvature. $H_{\mu\nu\rho}$
is the field strength associated with $B_{\mu\nu}$. The
coefficients $\alpha,\beta$ and $\gamma$ are determined by the
parameters of the 'full' lagrangian and the unambiguous
decomposition of the metric in its symmetric and anti-symmetric
parts. It is important to note that one cannot consistently choose
the parameters of the full theory in such a way that $\gamma=0$
(see appendix A of~\cite{Janssen:2006nn}). The parameters $\alpha$
and $\beta$ can in principle be set to zero, however a priori
there is no reason to do this. A mass is naturally generated in
the presence of a nonzero cosmological constant and in fact one
has
\begin{equation} \label{assumption}
\frac{1}{4}m^2=\Lambda\Big(\frac{1}{2}-\rho+4\sigma\Big)\sim
10^{-84}~{\rm GeV}^2,
\end{equation}
where we assume that the parameters $\rho$ and $\sigma$ are order
unity. Note that this estimate does not necessarily hold at all
times, since the cosmological term may change during the evolution
of the Universe (for example during phase transitions). The field
equations derived from the lagrangian (\ref{lagrangian}) are
\begin{eqnarray} \label{fieldeq}
&(\sqrt{-G})^{-1}\frac{1}{2}\partial_\rho
(\sqrt{-G}H^{\rho\mu\nu})
+ (\frac{1}{2}m^2+2\beta R)B^{\mu\nu}\\
\nonumber&\qquad\qquad -\alpha(B^{\nu\alpha}R^\mu{}_\alpha
+B^{\alpha\mu}R^\nu{}_\alpha)-2\gamma
B^{\alpha\beta}R^\mu{}_\alpha{}^\nu{}_\beta +\mathcal{O}(B^2)=0\\
&R_{\mu\nu}-\frac12 R G_{\mu\nu}-\Lambda G_{\mu\nu}+\mathcal{O}(B^2)=0
\,.
\end{eqnarray}
We see that to this order the field equations decouple and it
makes sense to consider the symmetric background to be just a GR
background. The theory then reduces to an antisymmetric tensor
field coupled to GR.
\section{Instabilities in NGT}
We first focus on the dynamics of the B-field in an expanding universe\footnote{This section is based
on Ref.~\cite{Janssen:2006nn}}~\cite{Prokopec:2005fb}. Our
background metric is given by the (conformal)
Friedmann-Lema\^{\i}tre-Robertson-Walker metric (FLRW):
\begin{equation} \label{metric}
G_{\mu\nu}=a(\eta)^2\eta_{\mu\nu},
\end{equation}
where $\eta_{\mu\nu}=\rm{diag}(1,-1,-1,-1)$, $\eta$ is conformal
time and $a(\eta)$ is the conformal scale factor. The conformal
time is related to the standard cosmological time by $a\,d\eta=dt$.
The scale factor during the different cosmological eras is given
in table \ref{tabelletje}, where $H_I \sim 10^{13}~{\rm GeV}$ is
the Hubble parameter during inflation and $\eta_{eq}$ is the
conformal time at matter-radiation equality.
\begin{table}
\caption{The scale factor and conformal time in different eras}
\label{tabelletje}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
era & $a$ & $\eta$ \\
\hline
de Sitter inflation & $a=-\frac{1}{H_I\eta}$ &$\eta\leq-\frac{1}{H_I}$\\
Radiation & $a=H_I\eta$ & $\frac{1}{H_I}\leq\eta\leq\eta_{eq}$\\
Matter & $a=\frac{H_I}{4\eta_{eq}}(\eta+\eta_{eq})^2$ &$\eta \geq \eta_{eq}$\\
\hline
\end{tabular}
\end{center}
\end{table}
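As a consistency check on Table~\ref{tabelletje}, requiring both $a$ and $a'$ to be continuous at matter-radiation equality fixes the matter-era prefactor to $H_I/(4\eta_{eq})$ (a single power of $\eta_{eq}$, as dimensional consistency also demands), giving $a(\eta_{eq})=H_I\eta_{eq}$ and $a'(\eta_{eq})=H_I$. A quick numerical sketch (Python, with illustrative values for $H_I$ and $\eta_{eq}$ in arbitrary units):

```python
H_I, eta_eq = 10.0, 2.0   # illustrative values in arbitrary units

def a_rad(eta):
    """Radiation-era scale factor, a = H_I * eta."""
    return H_I * eta

def a_mat(eta):
    """Matter-era scale factor, normalised to match radiation at eta_eq."""
    return H_I / (4 * eta_eq) * (eta + eta_eq) ** 2

def a_mat_prime(eta):
    """Conformal-time derivative of the matter-era scale factor."""
    return H_I / (2 * eta_eq) * (eta + eta_eq)

# continuity of a and a' at matter-radiation equality
assert abs(a_mat(eta_eq) - a_rad(eta_eq)) < 1e-9
assert abs(a_mat_prime(eta_eq) - H_I) < 1e-9   # radiation era: a' = H_I
```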
For the following discussion we focus on the 'electric' mode of
the B-field: $E_i\equiv B_{0i}$ (the 'magnetic' mode turns out
not to be very interesting for our present purpose). If we
evaluate the lagrangian (\ref{lagrangian}) and the field equations
(\ref{fieldeq}) in the FLRW background we find the following
equation of motion
\begin{equation} \label{finaleom}
\bigg[\partial_0\partial_0-\frac{\mathcal{Y}}{\mathcal{X}}\delta^{ij}\partial_i\partial_j+M^2_{eff}\bigg]\tilde{E}=0,
\end{equation}
where
\begin{equation} \label{transformation}
{E}=\frac{\sqrt{\mathcal{Y}}}{\mathcal{X}}\tilde E,
\end{equation}
and the effective mass term is given by
\begin{equation}\label{effmass}
M^2_{eff}=-2\mathcal{Y}a^2+\frac{\mathcal{Y}''}{2\mathcal{Y}}-\frac{3(\mathcal{Y}')^2}{4\mathcal{Y}^2}
\,.
\end{equation}
Furthermore we have defined
\begin{eqnarray}
&\mathcal{X}=a^{-2}\bigg((12\beta+2\alpha)\mathcal{H}^2+(12\beta+4\alpha-2\gamma)\mathcal{H}'-\frac{1}{2}m^2a^2\bigg) \label{X}\\
&\mathcal{Y}=a^{-2}\bigg((12\beta+4\alpha-2\gamma)\mathcal{H}^2+(12\beta+2\alpha)\mathcal{H}'-\frac{1}{2}m^2a^2\bigg)
\label{Y}
\end{eqnarray}
and
\begin{equation}
\mathcal{H}=\frac{a'}{a},
\end{equation}
where a prime indicates a derivative with respect to conformal
time. We see from (\ref{finaleom}) that $\tilde{E}$ behaves just
as a massive vector field, \emph{as long as,
${\mathcal{Y}}/{\mathcal{X}}>0$}. On the other hand, if
${\mathcal{Y}}/{\mathcal{X}}<0$ we see that the spatial
derivatives appear with the 'wrong' sign. Since in Fourier space
these derivatives generate a term proportional to minus the
momentum squared, we see that a wrong sign will lead to an
exponential growth of the field. Large momenta are no longer
suppressed and thus the field will grow without bounds. One could
worry about the cases when $M^2_{eff}<0$. However, on dimensional
grounds, the effective mass squared scales in the worst case as
${1}/{\eta^2}$. Such a scaling results in a standard power-law
enhancement on super-Hubble scales~\cite{Prokopec:2005fb} and
presents no problem.
\subsection{Instabilities during Radiation era}
In de Sitter inflation ${\mathcal{Y}}/{\mathcal{X}}=1$, and thus
the field dynamics are completely regular. However during
radiation era we obtain
\begin{equation}
\bigg[\partial_0\partial_0
- \frac{H_I^2m^2\eta^4+4(\gamma-\alpha)}
{H_I^2m^2\eta^4-4(\gamma-\alpha)}\delta^{ij}\partial_i\partial_j
+ M^2_{r}
\bigg]\tilde{E}_r=0.
\label{EOM:radiation era}
\end{equation}
Here $M_{r}$ is the effective mass during radiation, whose precise
form is not important for us. We see, however, that we might have
problems with the sign of the coefficient in front of the spatial
derivatives. For example if we look at the beginning of radiation
era ($\eta={1}/{H_I}$) we see that if we want
${\mathcal{Y}}/{\mathcal{X}}$ to be positive, we need that
${m^2}/{H_I^2}$ is \emph{at least} $\mathcal{O}(\alpha-\gamma)$.
In other words we approximately need:
\begin{equation} \label{error}
m\geq |\alpha-\gamma|H_I\sim |\alpha-\gamma|\times 10^{13}~{\rm GeV}
\,,
\end{equation}
which, unless $|\alpha-\gamma|$ is very small, contradicts
Eq.~(\ref{assumption}). Therefore if we require
${\mathcal{Y}}/{\mathcal{X}}$ to be positive, we could drop the
purely geometric origin of the lagrangian and add by hand a large
($10^{13}~{\rm GeV}$) mass for the $B$-field, we could fine-tune
$\alpha$ or $\gamma$ such that $\alpha-\gamma$ is sufficiently
small to satisfy the bound~(\ref{error}), or we could use the more
natural requirement that $\alpha=\gamma$. On theoretical grounds
only the last of these solutions is satisfactory. A big problem
with the first solution is that, while we can always find a mass
where the evolution of the mode is stable, we can then also think
of more extreme situations where the mode once again becomes
unstable. Therefore we conclude that a natural theory should have
$\alpha=\gamma$.
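The coefficient in Eq.~(\ref{EOM:radiation era}) can be recovered directly from Eqs.~(\ref{X}) and (\ref{Y}): in radiation era $a=H_I\eta$, so $\mathcal{H}=1/\eta$, $\mathcal{H}'=-1/\eta^2$, and the $\beta$-terms cancel. A numerical sketch (Python, with arbitrary illustrative parameter values) checks this, and also that $\alpha=\gamma$ gives $\mathcal{Y}/\mathcal{X}=1$:

```python
def ratio_YX(alpha, beta, gamma, m, H_I, eta):
    """Y/X from Eqs. (X)-(Y) in radiation era, where H = 1/eta, H' = -1/eta^2."""
    a = H_I * eta
    H, Hp = 1.0 / eta, -1.0 / eta**2
    X = ((12*beta + 2*alpha) * H**2 + (12*beta + 4*alpha - 2*gamma) * Hp
         - 0.5 * m**2 * a**2) / a**2
    Y = ((12*beta + 4*alpha - 2*gamma) * H**2 + (12*beta + 2*alpha) * Hp
         - 0.5 * m**2 * a**2) / a**2
    return Y / X

alpha, beta, gamma, m, H_I, eta = 0.3, -0.7, 1.1, 2.0, 5.0, 0.4

# closed form quoted in the radiation-era equation of motion
coeff = (H_I**2 * m**2 * eta**4 + 4 * (gamma - alpha)) / \
        (H_I**2 * m**2 * eta**4 - 4 * (gamma - alpha))

assert abs(ratio_YX(alpha, beta, gamma, m, H_I, eta) - coeff) < 1e-6
# alpha = gamma removes the instability: Y/X = 1 for any beta, m, eta
assert abs(ratio_YX(0.3, -0.7, 0.3, 2.0, 5.0, 0.4) - 1.0) < 1e-9
```

Note that the $\beta$-dependence drops out of the ratio, so the stability criterion in radiation era only involves $\alpha-\gamma$.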
We have also investigated matter era and power-law inflation and
we find that similar instabilities are present. However also in
these cases $\alpha=\gamma$ stabilizes the system.
\subsection{Instabilities around a Schwarzschild mass}
We have done a similar analysis in a Schwarzschild background. We
won't give any details here (see section 4
of~\cite{Janssen:2006nn}), but will only mention that similar
instabilities are present; however now the requirements for a
stable system are either
\begin{equation}
\gamma=0
\end{equation}
or
\begin{equation} \label{massobey}
m^2>\frac{4\gamma G_N\hbar^2}{c^4}
\frac{M_0}{r_0^3}\qquad\qquad [kg^2],
\end{equation}
where we explicitly plugged back factors of $c$, $\hbar$ and $G_N$.
$M_0$ is the mass of the object we are considering and $r_0$ is
the distance where we require stability. For $\gamma$ of order 1 this
requires, e.g., for the exterior of a neutron star ($M_0 \sim
M_{\rm{sun}}$ and $r_0\sim 20~\rm{km}$):
\begin{equation}
m\gtrsim\sqrt{|\gamma|}\times 10^{-19}~{\rm GeV}
\end{equation}
However, on theoretical grounds, it is more appealing to require
that the $B$-field is stable for all values of $M_0$ and $r_0$.
This can only be achieved if we choose $\gamma=0$. However, as
noted in section 2, this choice is not possible within our
linearization of NGT.
\section{Antisymmetric metric field as Dark matter}
Based on the previous section we know that the only consistent
linearized lagrangian for the $B$-field is
\begin{equation} \label{lagrangiangood}
\mathcal{L}=\sqrt{-G}\bigg[R+2\Lambda-\frac{1}{12}H^2+(\frac{1}{4}m^2+\beta
R)B^2\bigg].
\end{equation}
While this lagrangian is not obtainable in NGT, we would like to stress
that our linearization procedure of NGT lacks any guiding
principle (which is reflected in the non-uniqueness of the
theory). The analysis above shows that if we want to make sense of
nonsymmetric gravity we need to find a guiding principle that,
upon linearization, leads to (\ref{lagrangiangood}). For now we do
not know this principle, but we can still study
(\ref{lagrangiangood}).
In this section\footnote{Based on
Ref.~\cite{Valkenburg}} we consider the generation and evolution
throughout the cosmological history of quantum fluctuations of the
$B$-field. In particular we only consider the longitudinal degrees
of freedom of the 'magnetic' component
~\cite{Prokopec:2006kr}~\cite{Prokopec:2005fb}~\cite{Valkenburg}
\begin{equation}
B_{ij}\equiv-\epsilon_{ijk}B_k,
\end{equation}
since this mode gives the dominant contribution to the energy
density in the limit $m\rightarrow 0$. For simplicity we take
$\beta=0$, but keep the mass arbitrary. When compared to
(\ref{assumption}) this means we allow the presence of a small
bare mass for the $B$-field. In order to quantize the field we
perform a Fourier transformation
\begin{equation}
B^L(x)=\int\frac{d^3k}{(2\pi)^{3/2}}\Bigg[e^{i\vec{k}\cdot\vec{x}}B^L(\eta,\vec{k})b_{\vec{k}}+e^{-i\vec{k}\cdot\vec{x}}B^{L\star}(\eta,\vec{k})b^\dagger_{\vec{k}}\Bigg],
\end{equation}
where $\eta$ is once again conformal time as given in table
\ref{tabelletje}, with canonical commutation relations
\begin{equation}
[b_{\vec{k}},b^\dagger_{\vec{k}'}]=(2\pi)^3\delta^3(\vec{k}-\vec{k}')
\end{equation}
During de Sitter inflation we find that the mode
functions approach the conformal vacuum
\begin{equation}
B^L_{\rm{inf}}\propto
\frac{1}{\sqrt{2k}}e^{-ik\eta}+\mathcal{O}\bigg(\frac{m^2}{H_I^2}\bigg).
\end{equation}
During radiation era the field equations are solved by
\begin{equation}
B^L_{\rm{rad}}=\frac{1}{\sqrt{2k}}\Bigg[\alpha_{\vec{k}}\Bigg(1-\frac{i}{k\eta}\Bigg)e^{-ik\eta}+\beta_{\vec{k}}\Bigg(1+\frac{i}{k\eta}\Bigg)e^{ik\eta}\Bigg]+\mathcal{O}\bigg(\frac{m^2}{H_I^2}\bigg)
\end{equation}
with the Wronskian condition that
\begin{equation}
|\alpha_{\vec{k}}|^2-|\beta_{\vec{k}}|^2=1
\end{equation}
and we choose $\alpha_{\vec{k}}$ and $\beta_{\vec{k}}$ such that the solutions match
at the inflation-radiation transition. Unfortunately we cannot
analytically solve the equations of motion in matter era, so there
we need to use numerical analysis.
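As a cross-check, the radiation-era mode functions quoted above satisfy, in the massless limit, the equation $B''+(k^2-2/\eta^2)B=0$ (this is the ordinary differential equation whose independent solutions are precisely the $(1\mp i/(k\eta))e^{\mp ik\eta}$ modes). A finite-difference sketch in Python, with illustrative values for $k$, $\alpha_{\vec{k}}$ and $\beta_{\vec{k}}$:

```python
import cmath

k = 3.0
alpha_k, beta_k = 1.25, 0.75          # illustrative real Bogoliubov coefficients
assert abs(alpha_k**2 - beta_k**2 - 1.0) < 1e-12   # Wronskian condition

def B_rad(eta):
    """Radiation-era mode function in the massless limit."""
    return (alpha_k * (1 - 1j / (k * eta)) * cmath.exp(-1j * k * eta)
            + beta_k * (1 + 1j / (k * eta)) * cmath.exp(1j * k * eta)) / (2 * k)**0.5

# finite-difference check of B'' + (k^2 - 2/eta^2) B = 0
eta, h = 1.7, 1e-4
d2B = (B_rad(eta + h) - 2 * B_rad(eta) + B_rad(eta - h)) / h**2
residual = d2B + (k**2 - 2 / eta**2) * B_rad(eta)
assert abs(residual) < 1e-5
```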
We are interested in the power spectrum, which is given by~\cite{Prokopec:2006kr}
\begin{equation}
P_B(\vec{k},\eta)=\frac{H_I^4}{4\pi^2 a^4}\Bigg[|\partial_\eta
B^L_{\vec{k}}(\eta)+\frac{a'}{a}B^L_{\vec{k}}(\eta)|^2+(k^2+a^2m^2)|B^L_{\vec{k}}|^2\Bigg].
\end{equation}
A snapshot of this power spectrum, during matter era, for
different redshifts is given in figure \ref{powerfigure}.
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{powerspectrum.eps}
\caption{Snapshot of the power spectrum for $m H_I
\eta_{eq}^2=10^{-2}$} \label{powerfigure}
\end{center}
\end{figure}
We find that at late times the power spectrum becomes dominated by
a characteristic peak. This peak is caused by modes that are
superhorizon ($k\eta\lesssim 1$) at equality ($z=3230$), but start
to scale as nonrelativistic matter ($\propto a^{-3}$) in matter era
and enter the horizon. Modes on small enough scales
($k\eta>a/a_{eq}$) are effectively massless and scale as
relativistic matter $\propto a^{-4}$. The position of the peak is
determined by the mass of the $B$-field. In fact we have
\begin{equation}
k_{\rm{peak}}=\sqrt{H_I m}
\end{equation}
Now that we know the power spectrum, we can calculate the
energy density of the $B$-field, defined by
\begin{equation}
\rho_B=\int\frac{dk}{k}
P_B.
\end{equation}
A good dark matter candidate should have an energy density
\begin{equation}
\frac{\rho_B}{\rho_{\rm{rad}}}=1\qquad\qquad\rm{at}\qquad\qquad
\eta=\eta_{eq},
\end{equation}
where $\rho_{\rm{rad}}$ is the energy density of the cosmic
radiation. The calculation is done in~\cite{Valkenburg} and it is
found that
\begin{equation}
m =2.8\times
10^{-2}\Bigg(\frac{10^{13}\rm{GeV}}{H_I}\Bigg)^4\rm{eV}
\end{equation}
gives the right energy density.
\section{Discussion and conclusion}
We have shown that, while the nonsymmetric theory of gravitation
is an extremely interesting extension of general relativity to
study, the modes of the antisymmetric metric field are unstable.
This instability manifests itself through a wrong sign in front of
spatial derivatives in the equations of motion. Such a wrong sign
means that large momenta are no longer suppressed, and therefore
the field grows without bounds. We showed that the troublesome
terms in the lagrangian (\ref{lagrangian}) are the coupling to the
Riemann tensor and the Ricci tensor. Furthermore
in~\cite{Janssen:2006nn} it was shown
that the first of these terms cannot be
removed in NGT and that the instabilities are not a relic of the
linearization.
However, our linearization procedure was rather naive, and it
lacks a good guiding principle. Our analysis shows that \emph{if}
one could find a good principle from which to construct a
nonsymmetric theory of gravitation (e.g. by considering complex
manifolds as in ~\cite{Chamseddine:2005at} ~\cite{Mann:1982}), the
linearized lagrangian \emph{must} have the form of
(\ref{lagrangiangood}).
Based on this knowledge, we've studied the evolution of quantum fluctuations, generated at
inflation,
throughout the cosmological history. We find that the $B$-field
has the right energy density to fully take account for the dark
matter energy density if the mass of the field is given by
$m=2.8\times
10^{-2}\Bigg(\frac{10^{13}\rm{GeV}}{H_I}\Bigg)^4\rm{eV}$.
Furthermore the power spectrum develops a characteristic peak,
that for this mass and $z=10$ (start of structure formation) has a
length scale coincidentally corresponding to the Earth-Sun
distance.
Although the mass of the $B$-field is small (equivalent to the mass of the $\tau$-neutrino), it still
is \emph{cold} dark matter. Indeed, since the field does not
couple to matter fields, it cannot thermalize and therefore the
spectrum stays primordial and highly non-thermal. Because of this,
it does not suffer from the problems that neutrino dark matter
has.
As a final remark we note that our dark matter candidate means
that gravity may get modified at scales $m^{-1}\sim 0.1\mu
\rm{m}\Bigg(\frac{H_I}{10^{13}\rm{GeV}}\Bigg)^4$. This is still
about two orders of magnitude below the current experimental
bound~\cite{Will:2005va}.
\ack
We would like to thank Wessel Valkenburg and Willem Westra
for many interesting discussions and insights on the issue of NGT.
Finally we thank John Moffat for his correspondence concerning
previous work on NGT.
\\
\bibliographystyle{utcaps}
\section{Introduction}
\label{sec:intro}
Past observational studies have revealed that disc galaxies often exhibit an $m=1$ distortion or lopsidedness in the outskirts of the disc. Lopsidedness is common in the spatial distribution of the neutral hydrogen (H~{\sc i}) which extends further out than the stellar disc \citep[e.g., see][]{Baldwinetal1980,RichterandSancisi1994,Haynesetal1998,Matthewsetal1998,Angirasetal2006,vanEymeren2011} as well as in the spatial distribution of stars \citep[e.g., see][]{Blocketal1994,RixandZaritsky1995,Bournaudetal2005,Reichardetal2008,Zaritskyetal2013}. Previous work by \citet{KalberlaandDedes2008} showed the presence of a lopsidedness in the H~{\sc i} distribution of the Milky Way whereas a recent work by \citet{Romeroetal2019} suggested a lopsided (warped) stellar disc for the Milky Way. Simultaneous occurrence of an $m=1$ lopsided distortion and the $m=2$ bar and spiral arms are also common \citep[e.g., see][]{RixandZaritsky1995, Bournaudetal2005, Zaritskyetal2013}. The magnitude of the $m=1$ lopsidedness is shown to correlate with the strength of the spiral arms, but is not correlated with the occurrence of the bar \citep[e.g., see][but also see \citealt{Bournaudetal2005}]{Zaritskyetal2013}. Signature of lopsidedness has been reported in the H~{\sc i} velocity fields of galaxies as well \citep[e.g. ][]{Swatersetal1999,Schoenmakersetal1997,vanEymerenetal2011}. A lopsided pattern in the density distribution can often give rise to a kinematic lopsided feature \citep[e.g., ][]{Jog1997,Jog2002}. Indeed, such a co-existence of morphological and kinematic lopsidedness has been shown observationally in a sample of galaxies from the WHISP (Westerbork H~{\sc i} Survey of Spiral and Irregular Galaxies) survey \citep[see][]{vanEymerenetal2011,vanEymeren2011}.
\par
A variety of physical mechanisms ranging from disc response to halo lopsidedness arising due to tidal interactions \citep{Jog1997} or due to merging of a satellite galaxy \citep{ZaritskyandRix1997} or due to a tidal encounter \citep{Bournaudetal2005,Mapellietal2008}, to asymmetric gas accretion \citep{Bournaudetal2005} has been identified which can excite an $m=1$ lopsided pattern in a disc galaxy \citep[for a detailed exposition of this field, see][]{JogandCombes2009}. Also, an off-set disc in a spherical dark matter (hereafter DM) halo can excite a lopsidedness feature \citep{Noordermeer2001,PrasadandJog2017}. A recent study by \citet{SahaandJog2014} showed that a leading $m=1$ lopsidedness can take part in the outward angular momentum transport, thus facilitating the inflow of gas from the outer regions of galaxy. However, little is known about the pattern speed of the $m=1$ lopsidedness. Observationally, the pattern speed of lopsidedness has not been measured till date. Earlier theoretical works \citep[e.g., see][]{RixandZaritsky1995,Jog1997} have assumed a null pattern speed, for simplicity. Further theoretical explorations revealed that the \textit{slowly} varying global $m=1$ modes can survive for longer times in the near-Keplerian central regions of M~31 \citep{Tremaine2001} as well as in the pure exponential discs in spiral galaxies \citep{Sahaetal2007}. Previous works by \citet{JunqueiraCombes1996,Baconetal2001} also measured the pattern speed of an $m=1$ lopsidedness in the central regions using numerical simulations of galaxies. Measuring the pattern speed of the lopsided asymmetry is extremely crucial as it can potentially shed light about the dynamical role of the lopsidedness in the secular evolution and the angular momentum transport. Furthermore, it can provide important clues about the generating mechanisms of the lopsidedness \citep[e.g., see discussion in][]{Jog2011}.
\par
Also, a few studies of mass modelling from the rotation curve have furnished evidence/indications that there could be an off-set (ranging between $\sim 1-2.5 \kpc$) between the baryonic disc and the DM halo, for example, in NGC~5055 \citep{Battagliaetal2006}, in one galaxy residing in the galaxy cluster Abell 3827 \citep{Masseyetal2015}, and also in M~99 \citep{Cheminetal2016}. Furthermore, a recent theoretical study by \citet{Kuhlenetal2013} reported an off-set of $300-400 \>{\rm pc}$ between the density peaks of the baryonic disc and the DM halo in a Milky Way-like galaxy from the high-resolution cosmological hydrodynamics simulation {\sc{Eris}}; the off-set is seen to be long-lived. An off-centred nucleus can result in an unsettled central region \citep[e.g.,][]{MillerandSmith1992}. Indeed, such a sloshing pattern in the central regions has been reported in a sample of remnants of advanced mergers of galaxies \citep{JogandMaybhate2006}. However, a full consensus is yet to emerge on which physical mechanism(s) can create such an off-set between the baryonic and the DM distributions and, more importantly, on how long such an off-set can survive in a real galaxy.
\par
On the other hand, minor merger of galaxies is common in the hierarchical formation scenario of galaxies \citep{Frenketal1988,CarlbergandCouchman1989,LaceyandCole1993,Jogeeetal2009,Kavirajetal2009,FakhouriandMa2008}. This mechanism has a number of dynamical impacts on the kinematics as well as on the secular evolution of galaxies, such as disc heating and the vertical thickening of discs \citep{Quinnetal1993,Walkeretal1996,VelazquezandWhite1999,Fontetal2001,Kazantzidisetal2008,Quetal2011a}, slowing down the stellar discs of the post-merger remnants \citep{Quetal2010,Quetal2011b}, enhancing star formation \citep[e.g., see][]{Kaviraj2014}, transferring angular momentum to the dark matter halo via action of stellar bars \citep{Debattistaetal2006,SellwoodandDebattista2006}, and weakening of the stellar bars in the post-merger remnants \citep{Ghoshetal2020}. Furthermore, recent numerical study by \citet{Pardyetal2016} has shown that a dwarf-dwarf merger can produce an off-set bar and a highly asymmetric stellar disc that survives for $\sim 2 \mbox{$\>{\rm Gyr}$}$. This serves as a plausible explanation for the off-set bar \citep{Kruketal2017} found in many Magellanic-type galaxies. While a minor merger can excite lopsidedness in disc galaxies \citep[e.g.,][]{Bournaudetal2005,Mapellietal2008}, the exact role of different orbital parameters, Hubble type of the companion, remain unexplored in the context of excitation of an $m=1$ lopsidedness during a minor merger event. Furthermore, whether a minor merger can produce an off-set between the baryonic disc and the DM halo is not yet known in details.
\par
In this paper, we systematically investigate the generation of an $m=1$ lopsidedness in the density and the velocity fields of the host galaxy in a minor merger scenario while varying different orbital parameters, nature of the host galaxies. Also, we study in details whether a minor merger of galaxies can produce an off-set between the baryonic and the DM halo density distribution. For this, we make use of the publicly available GalMer library \citep{Chillingarianetal2010} which offers the study of the physical effects of minor merger of galaxies, encompassing a wide range of cosmologically motivated initial conditions; thus fulfilling the goal of this paper. The rest of the paper is organised as follows:\\
Section~\ref{sec:simu_setup} provides a brief description of the GalMer database and the minor merger models used here. Section~\ref{sec:diskhalooffset} provides the details of the disc-DM halo off-set configuration arising in minor merger models. Section~\ref{sec:Lopsidedness} quantifies the $m=1$ lopsided distortions present in the stellar disc of the host galaxy whereas section~\ref{sec:pattern_speed} provides pattern speed measurement and the location of resonance points associated with the $m=1$ lopsidedness. Section~\ref{sec:flapping_mode} provides the details of the kinematic lopsidedness present in the models. Section~\ref{sec:comparison_previousWork} compares the properties of the $m=1$ lopsidedness, as presented here, with the past literature. Sections~\ref{sec:discussion} and \ref{sec:conclusion} contain discussion and the main findings of this work, respectively.
\section{Minor merger models -- GalMer database}
\label{sec:simu_setup}
The publicly available GalMer\footnote{available on \href{http://galmer.obspm.fr}{http://galmer.obspm.fr}} library offers a suite of $N$-body+smooth particle hydrodynamics (SPH) simulations of galaxy mergers that can be used to probe the details of galaxy formation through the hierarchical merger process. It offers three different galaxy interaction/merger scenarios with varying mass ratio -- the 1:1 mass ratio mergers ({\it giant-giant} major merger), 1:2 mass ratio mergers ({\it giant-intermediate} merger), and 1:10 mass ratio mergers ({\it giant-dwarf} minor merger). An individual galaxy model is comprised of a non-rotating spherical dark matter halo, a stellar and a gaseous disc (optional), and a central non-rotating bulge (optional). The bulge (if present) and the dark matter halo are modelled using a Plummer sphere \citep{Plummer1911} and the baryonic discs (stellar, gaseous) are represented by the Miyamoto-Nagai density profiles \citep{Miyamoto-Nagai1975}. The mass of the stellar disc varies from gS0, gSa-type ($9.2 \times 10^{10} \>{\rm M_{\odot}}$) to late-type gSd models ($5.8 \times 10^{10} \>{\rm M_{\odot}}$). Similarly, the bulge mass also decreases from gS0, gSa-type ($2.3 \times 10^{10} \>{\rm M_{\odot}}$) to late-type gSd models ($0$, no bulge). For details of the other structural parameters, the reader is referred to \citet[see Table~1 there]{Chillingarianetal2010}. The total number of particles ($N_{tot}$) varies from a giant-dwarf interaction ($N_{tot}$ = 480,000) to a giant-giant interaction ($N_{tot}$ = 120,000). The orientation of an individual galaxy in the orbital plane is described completely by the spherical coordinates, $i_1$, $i_2$, $\Phi_1$, and $\Phi_2$ \citep[for details, see fig.~3 of ][]{Chillingarianetal2010}.
\par
Following \citet{MihosandHernquist1994}, a `hybrid particle' scheme is implemented to represent the gas particles in these simulations. In this prescription, they are characterised by two masses, namely, the gravitational mass ($M_i$) which is kept fixed during the entire simulation run, and the gas mass ($M_{i, gas}$), changing with time, denoting the gas content of the particles \citep[for details see][]{Dimatteoetal2007,Chillingarianetal2010}. Gravitational forces are always calculated using the gravitational mass, $M_i$ while the hydrodynamical quantities make use of the time-varying gas mass, $M_{i, gas}$. The gas fraction in a galaxy model increases monotonically from the early Sa-type galaxies (10 per cent of the stellar mass) to the late Sd-type galaxy (30 per cent of the stellar mass). A suitable empirical relation is adopted in the simulations to follow the star formation process which reproduces the observed Kennicutt-Schmidt law for the interacting galaxies. The simulation models also include recipes for the gas phase metallicity evolution as well the supernova feedback.
\par
The simulations are run using a TreeSPH code by \citet{SemelinandCombes2002}. For calculating the gravitational force, a hierarchical tree method \citep{BarnesandHut1986} with a tolerance parameter $\theta = 0.7$ is employed which includes terms up to the quadrupole order in the multipole expansion. The gas evolution is achieved by means of smoothed particle hydrodynamics \citep[e.g.][]{Lucy1977}. A Plummer potential is used to soften gravitational forces. The adopted softening length ($\epsilon$) varies with different merger scenarios, with $\epsilon = 200 \>{\rm pc}$ for the giant-intermediate and giant-dwarf merger models and $\epsilon = 280 \>{\rm pc}$ for the giant-giant merger models. The equations of motion are integrated using a leapfrog algorithm with a fixed time step of $\Delta t = 5\times 10^5$ yr \citep{Dimatteoetal2007}. The galaxy models are first evolved in {\it isolation} for $1 \mbox{$\>{\rm Gyr}$}$ before the start of merger simulation \citep{Chillingarianetal2010}.
\par
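The force computation and time integration described above can be illustrated with a minimal sketch (Python/NumPy; direct summation stands in for the tree code, and the numerical values are illustrative, not the GalMer production settings):

```python
import numpy as np

G, eps, dt = 1.0, 0.2, 5e-4   # code units; softening and time step are illustrative

def accel(pos, mass):
    """Plummer-softened accelerations by direct summation
    (the tree code approximates this sum with opening angle theta = 0.7)."""
    dx = pos[None, :, :] - pos[:, None, :]      # dx[i, j] = pos[j] - pos[i]
    r2 = (dx**2).sum(axis=-1) + eps**2          # softened squared distances
    np.fill_diagonal(r2, np.inf)                # exclude self-interaction
    return G * (mass[None, :, None] * dx / r2[:, :, None]**1.5).sum(axis=1)

def leapfrog_step(pos, vel, mass):
    """One kick-drift-kick leapfrog step with fixed dt."""
    vel = vel + 0.5 * dt * accel(pos, mass)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accel(pos, mass)
    return pos, vel

# two equal masses on a mutual orbit, as a sanity run
pos = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
vel = np.array([[0.0, 0.3, 0.0], [0.0, -0.3, 0.0]])
mass = np.array([1.0, 1.0])
for _ in range(1000):
    pos, vel = leapfrog_step(pos, vel, mass)
```

The kick-drift-kick form is time-reversible and symplectic, which is why a fixed $\Delta t$ is adequate for long merger integrations.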
For this work, we consider a set of giant-dwarf minor merger models with varying morphology for the host galaxy (see Table~\ref{table:key_param}). The GalMer library provides only one orbital configuration for the giant-dwarf interaction, characterised by $i_1 =33^{\circ}$ and $i_2 =130^{\circ}$ \citep{Chillingarianetal2010}. For consistency, any merger model is referred to by a unique string `{\sc [host galaxy][satellite galaxy][orbit ID][orbital spin]33}'. {\sc [host galaxy]} and {\sc [satellite galaxy]} denote the corresponding morphology types whereas {\sc [orbit ID]} denotes the orbit number as assigned in the GalMer library, and {\sc [orbital spin]} denotes the orbital spin vector (`dir' for direct and `ret' for retrograde orbits). The number `33' refers to $i_1 =33^{\circ}$ which is constant for all minor mergers considered here. The same nomenclature is used throughout the paper, unless stated otherwise.
\par
We define the epoch of merger, $T_{\rm mer}$, as the time when the separation between the centres of mass of the two galaxies becomes close to zero. Table~\ref{table:key_param} lists the epochs of the first and the second pericentre passages ($T_{1,\rm peri}$, $T_{2,\rm peri}$) as well as the epoch of merger, $T_{\rm mer}$, for all minor merger models used in this work.
\begin{table}
\centering
\caption{Key parameters for the selected minor merger models from GalMer library.}
\begin{tabular}{cccccc}
\hline
model$^{(1)}$ & $R_{\rm peri}$$^{(2)}$ & $T_{1,\rm peri}$$^{(3)}$ & $T_{2, \rm peri}$$^{(4)}$ & $T_{\rm mer}$$^{(5)}$ & $T_{\rm end}$$^{(6)}$\\
& (kpc) & (Gyr) & (Gyr) & (Gyr) & (Gyr)\\
\hline
gSadE001dir33 & 8. & 0.5 & 1.1 & 1.55 & 3.8\\
gSadE001ret33 & 8. & 0.5 & 1.3 & 1.95 & 3.8 \\
gSadE002dir33 & 8. & 0.45 & 1.2 & 1.55 & 3. \\
gSadE002ret33 & 8. & 0.45 & 1.4 & 2. & 3. \\
gSadE003dir33 & 8. & 0.45 & 1.25 & 1.95 & 3. \\
gSadE003ret33 & 8. & 0.45 & 1.5 & 2.25 & 3. \\
gSadE004dir33 & 8. & 0.5 & 1.2 & 1.7 & 3. \\
gSadE004ret33 & 8. & 0.5 & 1.75 & 2.85 & 3. \\
gSadE005dir33 & 16. & 0.5 & 1.35 & 1.85 & 3. \\
gSadE005ret33 & 16. & 0.5 & 1.85 & 2.7 & 3. \\
gSadE006dir33 & 16. & 0.45 & 1.45 & 2. & 3. \\
gSadE006ret33 & 16. & 0.45 & 1.95 & 2.85 & 3. \\
gS0dE001dir33 & 8. & 0.5 & 1.2 & 1.55 & 3.8 \\
gS0dE001ret33 & 8. & 0.55 & 1.7 & 2.2 & 3.8 \\
gSbdE001dir33 & 8. & 0.5 & 1.05 & 1.35 & 3 \\
gSbdE001ret33 & 8. & 0.5 & 1.35 & 1.85 & 3 \\
\hline
\end{tabular}
\centering
{ (1) GalMer minor merger model; (2) pericentre distance; (3) epoch of first pericentre passage; (4) epoch of second pericentre passage; (5) epoch of merger; (6) total simulation run time.}
\label{table:key_param}
\end{table}
\section{Disc-DM halo off-set in minor mergers}
\label{sec:diskhalooffset}
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{collage_denXYmap_gSadE001dir33_final.pdf}
\medskip
\includegraphics[width=0.95\linewidth]{collage_DiskHalosepatn_alt.pdf}
\caption{{\bf Disc-halo off-set in minor mergers} : \textit{top panels} : face-on stellar density distribution of the host plus satellite (gSa+dE0) system, shown for the model \textbf{gSadE001dir33} at different epochs before and after pericentre passages of the satellite galaxy. Black lines denote the contours of constant surface density. The symbols `$+$' (in cyan) and `$\times$' (in yellow) denote the \textit{density-weighted} centres of the stellar disc and the DM halo distributions of the host galaxy, respectively. \textit{Bottom panels} : separation between the disc-bulge (black dashed line) and the disc-DM halo (blue dashed line) centres of the host galaxy, shown as a function of time for different minor merger models. The barycentre of the host plus satellite galaxy system is used as the centre of reference. The vertical arrows (in magenta) denote the epochs of pericentre passages of the satellite galaxy. The horizontal dashed line (in maroon) denotes the softening length ($\epsilon = 200 \>{\rm pc}$) adopted for the minor merger simulations. Here, $R_{\rm d} = 3 \kpc$.}
\label{fig:density_illustration_collage}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1.\linewidth]{discDMseparatn_XYproj.pdf}
\caption{Separation between the disc and DM halo centres of the host galaxy in the ($x-y$)-plane, shown for different minor merger models with varying orbital parameters. The barycentre of the host plus satellite galaxy system is used as the centre of reference. The colour bar denotes the epochs of the minor merger models.}
\label{fig:sloshing_of_centres}
\end{figure*}
First, we investigate whether a minor merger scenario can create a disc-DM halo off-set configuration in the models considered here. To achieve that, we first choose the minor merger model \textbf{gSadE001dir33} from the GalMer library. Fig.~\ref{fig:density_illustration_collage} (top panels) shows the corresponding face-on density distribution of the stellar particles from the host and the satellite galaxies (gSa+dE0) at different epochs, before and after pericentre passages. In this model, a giant Sa-type (gSa) galaxy experiences an interaction with a dwarf elliptical (dE0) galaxy. The satellite loses a part of its orbital angular momentum after each pericentre passage due to dynamical friction and the tidal torque, thereby falling deeper into the gravitational potential of the host galaxy and ultimately merging with it. After each pericentre passage of the satellite, the host galaxy displays a prominent distortion/asymmetry in the stellar density distribution (e.g., see $t = 0.75 \mbox{$\>{\rm Gyr}$}$ in the top panel of Fig.~\ref{fig:density_illustration_collage}).
\par
Next, we compute the density centres of the stellar disc and the DM halo of the host galaxy, and check whether they are concentric during and after the merger. We mention that, during the interaction, the host as well as the satellite galaxy form a tail-like feature due to the gravitational pull exerted on them. Therefore, the \textit{mass-weighted} centre (centre-of-mass) can be misleading for locating the \textit{actual} centre of the mass distribution. Consequently, one has to compute the \textit{density-weighted} centres of the underlying mass distribution \citep[for a detailed discussion, see][]{CasertanoandHut1985}. Fig.~\ref{fig:density_illustration_collage} (top panels) also shows the \textit{density-weighted} centres of the stellar disc and the dark matter halo (indicated by $+$ and $\times$, respectively) of the host galaxy at different times. Interestingly, the density-weighted centres of the stellar disc and the DM halo are seen to be separated by a finite amount after each pericentre passage of the satellite galaxy, thereby indicating the presence of an off-set between the stellar disc and the DM halo. However, by the end of the simulation run ($t = 3.8 \mbox{$\>{\rm Gyr}$}$), these two centres are seen to coincide again.
\par
To investigate further, we calculate the density-weighted centres of the stellar disc and the DM halo of the host galaxy as a function of time, for different minor merger models with varying orbital configuration and orbital energy (for details see section~\ref{sec:simu_setup}). Here, we use the barycentre of the host plus satellite galaxy system as the centre of reference.
Fig.~\ref{fig:density_illustration_collage} (bottom panels) shows the corresponding temporal evolution of the separation between the centres of the stellar disc and the DM halo of the host galaxy in different merger models. The separation/off-set ($\Delta r_{\rm CM} (t)$) at time $t$ is calculated as
\begin{equation}
\Delta r_{\rm CM} (t) =\sqrt{\Delta x_{\rm CM}^2 (t)+\Delta y_{\rm CM}^2 (t)+\Delta z_{\rm CM}^2 (t)}\,,
\end{equation}
\noindent where $\Delta x_{\rm CM} (t) = x_d(t) -x_{dm} (t)$, $\Delta y_{\rm CM} (t) = y_d(t) -y_{dm} (t)$, and $\Delta z_{\rm CM} (t) = z_d(t) -z_{dm} (t)$. Here ($x_d (t), y_d (t), z_d (t)$) and ($x_{dm} (t), y_{dm} (t), z_{dm} (t)$) denote the density-weighted centres of the stellar disc and the DM halo of the host galaxy at time $t$, respectively. {\footnote {After the merger happens, the stellar and the DM halo particles from the satellite get redistributed in the host galaxy. We check that, after the merger, the values of $\Delta r_{\rm CM} (t)$ calculated using particles from both the host and the satellite remain the same as the $\Delta r_{\rm CM} (t)$ values calculated by taking only the particles from the host galaxy.}} Fig.~\ref{fig:density_illustration_collage} (bottom panels) clearly demonstrates the presence of an off-set between the stellar disc and the DM halo of the host galaxy each time the host galaxy experiences a pericentre passage of the satellite galaxy. This off-set can be $\sim 400-600 \ \>{\rm pc}$ (2-3 times the softening length of the simulation), and is most prominent immediately after the pericentre passage. If there is sufficient time to adjust between two successive pericentre passages, the off-set decreases and goes below the softening length (and hence is not reliable). Once the satellite merges, and the post-merger remnant gets some time ($\sim 250-400$ Myr) to readjust itself, this off-set disappears. In other words, the off-set between the stellar disc and the DM halo in these minor merger models is a \textit{transient} phenomenon.
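To make the procedure concrete, the two steps above (a density-weighted centre in the spirit of \citet{CasertanoandHut1985}, followed by the separation $\Delta r_{\rm CM}$) can be sketched as follows. This is a minimal, illustrative implementation and not the actual analysis pipeline: it assumes equal-mass particles, uses a brute-force $k$-nearest-neighbour density estimate, and all function names are ours.

```python
import numpy as np

def knn_distance(pos, k=32):
    # Brute-force distance to the k-th nearest neighbour (fine for ~1e3-1e4
    # particles; a kd-tree would be used for larger particle numbers).
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(axis=-1)
    d2.sort(axis=1)
    return np.sqrt(d2[:, k])  # column 0 is the particle itself

def density_weighted_centre(pos, k=32):
    # Each particle is weighted by a local density estimate rho_i ~ k / d_k^3,
    # which is far less sensitive to tidal tails than the plain centre of mass.
    rho = k / knn_distance(pos, k) ** 3
    w = rho / rho.sum()
    return (pos * w[:, None]).sum(axis=0)

def disc_halo_offset(pos_disc, pos_halo):
    # Euclidean separation of the two density-weighted centres, i.e. Delta r_CM.
    return np.linalg.norm(density_weighted_centre(pos_disc)
                          - density_weighted_centre(pos_halo))
```

With a kd-tree in place of the brute-force distances, the same estimator scales to the full particle counts of the simulations.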
\par
Next, we compare how the generation of an off-set between the stellar disc and the DM halo of the host galaxy varies in models with different orbital configurations. We find that, while each pericentre passage of the satellite generically triggers an off-set between the stellar disc and the dark matter halo of the host galaxy in these merger models, the actual value of such an off-set depends (weakly) on the distance of closest approach. For example, the model \textbf{gSadE006dir33} has a pericentre distance of $16 \kpc$ as opposed to the model \textbf{gSadE001dir33}, which has a pericentre distance of $8 \kpc$ \citep[for details see][]{Chillingarianetal2010}. The lower values of the off-set seen in the model \textbf{gSadE006dir33} as compared to those of the model \textbf{gSadE001dir33} suggest a dependence on the distance of closest approach. Furthermore, the merging of the satellite with the host galaxy happens at a very late epoch for the model \textbf{gSadE006ret33}, thus it mimics a fly-by encounter scenario. Although the actual value of the off-set between the stellar disc and the DM halo in this model is smaller, the off-set persists till the very end of the simulation run due to the continued pericentre passages of the satellite galaxy.
\par
Lastly, we probe the temporal evolution of the disc-DM halo off-set in the plane of the disc ($x-y$ plane) and in the vertical direction ($x-z$ plane) for different minor merger models. We find that the in-plane separation/off-set varies in the range $\sim 300 -500 \>{\rm pc}$ (see Fig.~\ref{fig:sloshing_of_centres}). However, the separation in the direction perpendicular to the disc mid-plane is less than the softening length ($200 \>{\rm pc}$) of the simulation, and hence is not shown here. This trend remains true for all minor merger models considered here.
\par
In appendix~\ref{sec:isolatedEvolution}, we show the temporal evolution of the disc-DM halo separation for the isolated gSa galaxy model (hereafter \textbf{isogSa}). The galaxy model, when evolved in isolation, does not produce an off-set between the centres of the disc and the DM halo. This accentuates the fact that a pericentre passage of a satellite galaxy can drive a transient off-centred disc-DM halo configuration for the wide variety of orbital configurations considered here. A similar scenario of generating a transient off-set between the disc and the DM halo via dwarf-dwarf mergers has been shown in \citet{Pardyetal2016}. For the sake of completeness, we also examine whether a similar off-set is generated between the stellar disc and the bulge of the host galaxy. This is also shown in Fig.~\ref{fig:density_illustration_collage} (bottom panels). As seen clearly, the disc and the bulge of the host galaxy always remain \textit{concentric}, and this holds true for the different orbital configurations considered here.
\section{Quantifying lopsided asymmetry in the density distribution}
\label{sec:Lopsidedness}
\begin{figure*}
\centering
\textbf{gSadE001dir33}
\includegraphics[width=0.9\linewidth]{Bar_lop_1rotation_gSadE001dir33.pdf}
\medskip
\vspace{0.5 cm}
\textbf{gSadE006ret33}
\includegraphics[width=0.9\linewidth]{retro-den_t10to17.pdf}
\caption{Face-on stellar density distribution of the host plus satellite (gSa+dE0) system, shown for the models \textbf{gSadE001dir33} (top panels) and \textbf{gSadE006ret33} (bottom panels), during one full rotation of the bar after the first pericentre passage. The position angles of the bar and the $m=1$ lopsidedness are denoted by arrows (of different colours, as shown at the top of each panel). The time between successive snapshots is $50 \mbox{$\>{\rm Myr}$}$. Solid lines denote the contours of constant surface density.}
\label{fig:densmap_lopsided}
\end{figure*}
Fig.~\ref{fig:density_illustration_collage} already indicated the existence of an $m=1$ asymmetry/distortion in the stellar density distribution of the host galaxy; this $m=1$ distortion is most prominent after each pericentre passage of the satellite. In this section, we study in detail the generation of the $m=1$ distortion in the stellar density distribution, identify the nature of the $m=1$ distortion as lopsidedness, and characterise its strength and longevity.
Fig.~\ref{fig:densmap_lopsided} (top panels) shows the face-on density distribution of the stellar particles of the host plus satellite system (gSa+dE0) for the minor merger model \textbf{gSadE001dir33}, at different time-steps after the first pericentre passage of the satellite galaxy. The existence of an $m=1$ asymmetry is clearly seen in the density distributions of stellar particles. To quantify further, we calculate the radial variation of the $m=1$ Fourier harmonics of the stellar density distribution of the host galaxy at different times. This is shown in Fig.~\ref{fig:lopsided_radial} (top right panel).
\begin{figure*}
\centering
\includegraphics[width=0.95\linewidth]{radial_lopsided_totalExplained.pdf}
\caption{\textit{Left panels} show an example of identifying the one-arm spiral and the $m=1$ lopsidedness from the radial profiles of $A_1/A_0$ and the corresponding phase-angle ($\phi_1$). For the one-arm spiral, the phase-angle ($\phi_1$) varies with radius, while for an $m=1$ lopsidedness, the phase-angle remains almost constant. Similarly, the determination of bar extent from the amplitude and the phase-angle of the $m=2$ Fourier mode is also shown. The cyan, magenta, and the maroon circles denote the radial extents of $1.5 R_{\rm d}$, $4 R_{\rm d}$, and $7 R_{\rm d}$, respectively. \textit{Right panels} show the radial profiles of $A_1/A_0$, calculated at different times, for two minor merger models \textbf{gSadE001dir33} (top right panel) and \textbf{gSadE006ret33} (bottom right panel). The colour bar denotes the epochs of the minor merger models.}
\label{fig:lopsided_radial}
\end{figure*}
\subsection {Presence of an \textit{m=1} spiral arm and a tidal tail}
\label{sec:onearmSpiral}
We mention that a large, non-zero value of the $m=1$ Fourier coefficient ($A_1/A_0$) does not always guarantee the existence of a coherent lopsided pattern \citep[for details see discussion in][]{Lietal2011,Zaritskyetal2013}. We find that in our chosen model \textbf{gSadE001dir33}, a one-arm spiral forms after the first pericentre passage of the satellite (e.g., see $t = 0.6-0.8 \mbox{$\>{\rm Gyr}$}$ in Fig.~\ref{fig:densmap_lopsided}). The same one-arm spiral reappears after the second pericentre passage of the satellite as well. Eventually it fades away after $200-300 \mbox{$\>{\rm Myr}$}$. We check that the presence of such a one-arm spiral produces a hump-like feature in the radial profile of the $m=1$ Fourier coefficient (see the example shown in the left panels of Fig.~\ref{fig:lopsided_radial}). The extent of this one-arm spiral varies from $2-4 R_{\rm d}$ in the model \textbf{gSadE001dir33}. Within this radial extent, the corresponding phase-angle ($\phi_1$) varies with radius (see the left panels of Fig.~\ref{fig:lopsided_radial}). The excitation of the $m=1$ spiral arm in the models considered here is generic, and hence this radial extent should be avoided while studying the $m=1$ lopsidedness. At the other extreme, a tail-like feature is produced due to the tidal pull during and (shortly) after a pericentre passage (e.g., see the top panels of Fig.~\ref{fig:densmap_lopsided}). This tidal tail, in turn, yields a large non-zero value of the $m=1$ Fourier coefficient in the outer radial extent ($R > 7 R_{\rm d}$). Hence, this radial extent should also be avoided while characterising the $m=1$ lopsidedness in the stellar density distribution.
\subsection {\textit{m=1} lopsided distortions in the stars}
\label{sec:lopsided_distortions}
Fig.~\ref{fig:lopsided_radial} (left panels) reveals that, within the radial extent $4-7 \ R_{\rm d}$, the Fourier coefficient ($A_1/A_0$) of the $m=1$ Fourier harmonics increases with radius. Moreover, the corresponding phase-angle ($\phi_1$) remains almost constant throughout this radial extent. Following \citet{Zaritskyetal2013}, we define the signatures of a true $m=1$ lopsided distortion as a non-zero value of the $m=1$ Fourier coefficient ($A_1/A_0$), together with a (nearly) constant value of the phase-angle of the $m=1$ Fourier mode ($\phi_1$) over a finite radial extent \citep[for details see discussion in][]{Lietal2011,Zaritskyetal2013}. We use this same convention throughout the paper when we refer to an $m=1$ distortion as a lopsided pattern. We checked that the $m=1$ lopsided distortion exists predominantly within $\sim 4-7 R_{\rm d}$ at different times in the model \textbf{gSadE001dir33}. Also, when a true $m=1$ lopsidedness is present in our model, the corresponding strength of the $m=1$ lopsidedness increases with radius (as denoted by the increasingly higher values of $A_1/A_0$ with radius, see e.g., Fig.~\ref{fig:lopsided_radial}). This trend is in agreement with the observational studies of the $m=1$ lopsidedness in galaxies \citep[e.g., see][]{RixandZaritsky1995,RudnickandRix1998,Conseliceetal2000,Angirasetal2006,Reichardetal2008,vanEymerenetal2011,Zaritskyetal2013}. We also checked that, after the merger of the satellite, the stellar particles from the satellite do not contribute appreciably to a coherent $m=1$ lopsidedness in the density distribution; hence, they are discarded in the subsequent analyses. Next, we study the temporal evolution of the $m=1$ lopsidedness in the stellar density distribution of the host galaxy for the model \textbf{gSadE001dir33}.
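The diagnostic just described, a non-zero $A_1/A_0$ together with a nearly constant $\phi_1$ over a radial range, amounts to evaluating $A_1 e^{i\phi_1} = \sum_j m_j e^{i\theta_j}$ in radial annuli. The following sketch illustrates this; the function name and binning choices are ours, not those of the actual analysis code.

```python
import numpy as np

def fourier_m1_profile(x, y, mass, r_edges):
    # Radial profiles of the m=1 Fourier amplitude A_1/A_0 and phase phi_1
    # of the face-on surface density, evaluated in annuli [r_in, r_out).
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    a1_a0, phi1 = [], []
    for r_in, r_out in zip(r_edges[:-1], r_edges[1:]):
        sel = (r >= r_in) & (r < r_out)
        a0 = mass[sel].sum()                              # m=0 term
        c1 = (mass[sel] * np.exp(1j * theta[sel])).sum()  # m=1 term
        a1_a0.append(np.abs(c1) / a0 if a0 > 0 else 0.0)
        # phi_1 nearly constant with radius <=> coherent lopsidedness
        phi1.append(np.angle(c1))
    return np.array(a1_a0), np.array(phi1)
```

For a surface density perturbed as $\Sigma \propto 1 + \epsilon\cos\theta$, this returns $A_1/A_0 = \epsilon/2$ with $\phi_1$ pointing along the overdensity.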
\par
For the model \textbf{gSadE001dir33}, initially ($t=0$) there was no discernible lopsidedness in the outer part, as inferred from the $A_1/A_0$ value being close to zero in the outer disc regions. A prominent, coherent lopsidedness appears only after the first pericentre passage of the satellite; the average $A_1/A_0$ value reaches close to $0.4$, followed by a decrease in the $A_1/A_0$ value as the satellite moves farther away after the pericentre passage. The strong lopsidedness reappears after the second pericentre passage of the satellite galaxy, marked by an increase in the average $A_1/A_0$ value in the outer disc region ($\sim 4.5 -6 R_{\rm d}$). However, after the satellite merges with the host galaxy, the lopsidedness subsequently weakens in the post-merger remnant. By the end of the simulation run, at $t = 3.8 \mbox{$\>{\rm Gyr}$}$, the $A_1/A_0$ value becomes less than 0.1 in the outer parts ($\sim 4-7 R_{\rm d}$) of the stellar disc, thereby denoting the absence of a strong, coherent lopsidedness. We checked that this broad trend of generation of a lopsided distortion (with varying strength) in minor merger events holds true for the different orbital configurations considered here; for the sake of brevity they are not shown here. The total (net) duration for which the minor merger models show the presence of an $m=1$ lopsidedness varies from $\sim 1.5 \mbox{$\>{\rm Gyr}$}$ to $\sim 1.85 \mbox{$\>{\rm Gyr}$}$ after the first pericentre passage of the satellite.
%
\par
%
The question remains: what happens to the sustainability of the $m=1$ lopsided feature when the merger happens at a very late epoch and the host galaxy experiences continued pericentre passages of the satellite? To investigate that, we select the minor merger model \textbf{gSadE006ret33}, where the merger happens at a later epoch (see Table~\ref{table:key_param}). Fig.~\ref{fig:densmap_lopsided} (bottom panels) shows the face-on density distribution of the stellar particles of the host plus satellite system (gSa+dE0) for the model \textbf{gSadE006ret33}, at different time-steps after the first pericentre passage of the satellite galaxy. Also, the corresponding radial variations of the $m=1$ Fourier harmonics of the density distribution are shown in Fig.~\ref{fig:lopsided_radial} (bottom right panel). As the host galaxy experiences continued pericentre passages, the lopsided pattern persists till the end of the simulation run ($t = 3 \mbox{$\>{\rm Gyr}$}$); the average values of $A_1/A_0$ in the outer disc region ($\sim 4 -7 R_{\rm d}$) are non-zero, and are higher than the values calculated at $t=0$. This demonstrates that continued pericentre passages can generate a rather long-lived lopsided pattern in the host galaxy.
%
\par
%
In appendix~\ref{sec:isolatedEvolution}, we show that when the host galaxy model is evolved in isolation, no prominent $m=1$ lopsided distortion gets excited. The average values of $A_1/A_0$ throughout the entire disc region remain below 0.1, indicating the absence of a strong $m=1$ lopsidedness in the stellar density distribution. This reinforces the fact that a minor merger event is liable to trigger the lopsided disturbance in the host galaxy.
%
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{comparison_lopsided_offset.pdf}
\caption{Maximum values of the $m=1$ Fourier coefficient ($A_1/A_0$) and of the separation between the centres of the stellar disc and the DM halo ($\Delta r_{\rm CM}$) are shown for different minor mergers (of gSa-dE0 type) with direct (red circles) and retrograde (blue squares) configurations. Dashed lines denote the corresponding best-fit straight lines to the points. The increasing size of the points denotes higher orbit numbers; for details see section~\ref{sec:simu_setup}. Here, $R_{\rm d} = 3 \kpc$.}
\label{fig:lopsided_comparison_offset}
\end{figure}
Also, we notice a correlation between the formation epoch of the stellar disc-DM halo off-set and the excitation of the lopsided pattern in the host galaxy (compare the bottom panels of Fig.~\ref{fig:density_illustration_collage} and Fig.~\ref{fig:densmap_lopsided}). Here, we compare the maximum values of the $m=1$ Fourier coefficient and of the separation between the centres of the stellar disc and the DM halo ($\Delta r_{\rm CM}$) for different minor merger models with direct and retrograde configurations. This is shown in Fig.~\ref{fig:lopsided_comparison_offset}. As seen clearly, for both orbital spin configurations, these two maximum values follow a (nearly) linear relation. This is not surprising, since both physical phenomena are driven by the tidal forces exerted on the host galaxy by the satellite during the pericentre passages.
\section {Pattern speed measurement and resonances}
\label{sec:pattern_speed}
\begin{figure*}
\centering
\includegraphics[width=0.85\linewidth]{pattern_speed_combined.pdf}
\caption{Pattern speed measurements of the $m=1$ lopsided distortion (top panels) and the $m=2$ bar mode (bottom panels), shown at two epochs after the first and second pericentre passages for the model \textbf{gSadE001dir33}. Black dashed lines denote the best-fit straight line to the temporal variation of the phase-angle (for the bar) as well as to the temporal variation of the orientation of the density isophotal contours (for the lopsidedness). Red points and the red dashed line in the top right panel denote the measurement of the pattern speed of the $m=1$ lopsidedness via fitting a straight line to the temporal variation of the phase-angle ($\phi_1$). Measured pattern speed values are indicated in each sub-panel.}
\label{fig:lopsided_patternspeed}
\end{figure*}
A visual inspection of Fig.~\ref{fig:densmap_lopsided} already provided the indication that the $m=1$ lopsided pattern rotates in the disc, in a fashion similar to the rotation of the $m=2$ bar mode. Here, we measure \textit{simultaneously} the pattern speeds of the $m=1$ lopsided distortion as well as the central $m=2$ bar mode in the model \textbf{gSadE001dir33}. For that, we choose two time-intervals of $\sim 0.3 \mbox{$\>{\rm Gyr}$}$, after the first and the second pericentre passages of the satellite, when both the $m=1$ lopsided distortion and the $m=2$ bar mode co-exist. These simultaneous measurements ease the determination of the direction of the pattern speed of the $m=1$ lopsidedness with respect to the $m=2$ bar pattern speed. The bar pattern speed ($\Omega_{\rm bar}$) is measured by fitting a straight line to the temporal variation of the phase-angle ($\phi_2$) of the $m=2$ Fourier mode. This assumes that the bar rotates rigidly with a single pattern speed in that time-interval. The resulting measurements of the $m=2$ bar pattern speed ($\Omega_{\rm bar}$) are shown in Fig.~\ref{fig:lopsided_patternspeed} (bottom panels). The pattern speed of the $m=1$ lopsided distortion ($\Omega_{\rm lop}$) is measured by fitting a straight line to the time variation of the orientation of the density isophotal contours. This is shown in Fig.~\ref{fig:lopsided_patternspeed} (top panels). We note that, when the $m=1$ lopsided distortion rotates with a well-defined, single pattern speed (similar to the $m=2$ bar mode), it is also possible to measure its pattern speed by fitting a straight line to the temporal variation of the phase-angle ($\phi_1$) of the $m=1$ lopsidedness. Indeed, we find that these two measurements of the pattern speed of the lopsidedness match pretty well within their error limits (see the top right panel of Fig.~\ref{fig:lopsided_patternspeed}).
The simultaneous measurements of the bar and the lopsided pattern speeds reveal two important aspects. First, the bar rotates faster than the $m=1$ lopsided distortion. For example, around $t = 1.4 \mbox{$\>{\rm Gyr}$}$, the bar pattern speed ($\Omega_{\rm bar}$) is $29.3 \pm 0.4 \mbox{$\>{\rm km\, s^{-1}}$}$ kpc$^{-1}$, whereas around the same epoch, the $m=1$ lopsided distortion rotates with a pattern speed ($\Omega_{\rm lop}$) of $-5.2 \pm 0.5 \mbox{$\>{\rm km\, s^{-1}}$}$ kpc$^{-1}$. This trend also holds for the chosen time-interval after the first pericentre passage of the satellite (compare the left panels of Fig.~\ref{fig:lopsided_patternspeed}). Secondly, the $m=1$ lopsided distortion is in retrograde motion with respect to the bar rotation as well as the underlying disc rotation.
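The straight-line fit to the phase-angle used above can be illustrated with a short sketch (ours, not the exact fitting code). The only subtlety is that the phase of an $m$-fold pattern is defined modulo $2\pi/m$ and must be unwrapped before fitting; the slope then gives $\Omega_{\rm p}$ in rad per unit time (1 rad Myr$^{-1}$ corresponds to $\simeq 977.8 \mbox{$\>{\rm km\, s^{-1}}$}$ kpc$^{-1}$).

```python
import numpy as np

def pattern_speed(t, phi, m=1):
    # The phase of an m-fold pattern is only defined modulo 2*pi/m:
    # multiply by m, unwrap the 2*pi jumps, divide back, then fit a line.
    phi = np.unwrap(m * np.asarray(phi)) / m
    slope, _intercept = np.polyfit(t, phi, 1)
    return slope  # Omega_p = d(phi)/dt, in rad per unit time
```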
Lastly, we investigate whether the direction of the orbital spin vector plays any role in deciding the sense of rotation of the $m=1$ lopsidedness with respect to the $m=2$ bar mode. To achieve that, we choose the model \textbf{gSadE006ret33}. In Fig.~\ref{fig:densmap_lopsided} (bottom panels), the density distributions of the stellar particles after the first pericentre passage already indicated a pattern rotation of the $m=1$ lopsidedness, similar to what is seen in the model \textbf{gSadE001dir33}. Using the same methodology as mentioned above, we simultaneously measure the pattern speeds of the $m=2$ bar mode and the $m=1$ lopsided pattern after the first pericentre passage of the satellite. This is shown in Fig.~\ref{fig:lopsided_patternspeed_ret}. For the retrograde orbital configuration also, the lopsided pattern rotates much slower ($\Omega_{\rm lop} =-4.3 \pm 0.2 \mbox{$\>{\rm km\, s^{-1}}$}$ kpc$^{-1}$ at $t \sim 0.75 \mbox{$\>{\rm Gyr}$}$) than the bar ($\Omega_{\rm bar} =22.2 \pm 0.5 \mbox{$\>{\rm km\, s^{-1}}$}$ kpc$^{-1}$ at $t \sim 0.75 \mbox{$\>{\rm Gyr}$}$), and the sense of rotation is retrograde with respect to the bar. The physical implications of the retrograde pattern speed (with respect to the bar) of the lopsidedness in our minor merger models are discussed below.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{combined_patternSpeed_ret.pdf}
\caption{Pattern speed measurements of the $m=1$ lopsided distortion (top panel) and the $m=2$ bar mode (bottom panel), shown after the first pericentre passage for the model \textbf{gSadE006ret33}. Measured pattern speed values are indicated in each sub-panel.}
\label{fig:lopsided_patternspeed_ret}
\end{figure}
\par
Following \citet{BiineyTremaine2008}, the dispersion relation for a tightly wound $m=1$ pattern in a disc can be written as
\begin{equation}
(\omega - \Omega)^2 = \kappa^2(R) -2\pi G \Sigma_{0}(R) |k| + \sigma^2 k^2,
\end{equation}
\noindent where $\Sigma_{0}$ is the surface density of the disc, $\Omega$ and $\kappa$ are the angular and radial epicyclic frequencies, $\sigma$ refers to the velocity dispersion, and $k$ is the wavenumber. In the absence of self-gravity, the relevant free precession frequency corresponding to the $m=1$ mode in a cold disc is $\omega = \Omega - \kappa$ at a given radius $R$. In a realistic galaxy model with a stellar disc and a dark matter halo, $\kappa > \Omega$, and hence $\omega = \Omega - \kappa < 0$ at all radii \citep[see][for various mass models]{SahaandJog2014}. In Fig.~\ref{fig:frequencies}, the values of $\Omega - \kappa$ remain less than 0 for almost all radii considered here. It is worth mentioning that in previous studies, \citet{JunqueiraCombes1996,Baconetal2001} have shown the excitation of an $m=1$ lopsidedness in the central region of galaxies, e.g., the M~31 nucleus, and the corresponding pattern speeds are positive and generally high, since the dynamical time is also shorter there. These central $m=1$ modes mimic well the pressure mode ($p$-mode) as described in the context of a near-Keplerian disc by \citet{Tremaine2001}. However, the $m=1$ lopsidedness that we measure in our current simulation set-up is dominated by self-gravity (more like the $g$-modes), and hence the pattern speed is expected to follow the $\Omega-\kappa$ curve in the chosen galaxy model. In the outer part ($\sim 4 - 7 R_{\rm d}$) of our galaxy models, where we measure the lopsidedness, the absolute value of $\Omega -\kappa$ is smaller, and we also obtain a comparatively smaller pattern speed. Interestingly, in the case of a pure exponential stellar disc, the value of $\Omega-\kappa \sim 0$ beyond about $5R_{\rm d}$, and it switches sign from negative to positive at $4.6R_{\rm d}$ \citep{SahaandJog2014}. In pure-disc simulations, the outer lopsidedness was likewise shown to have a pattern speed close to zero \citep{Sahaetal2007}.
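As a check on this argument, $\Omega$, $\kappa$ and the free precession frequency $\Omega-\kappa$ can be evaluated numerically from any tabulated rotation curve via $\kappa^2 = R\,{\rm d}\Omega^2/{\rm d}R + 4\Omega^2$. The sketch below (our own helper, with finite differences standing in for the actual mass model) reproduces, e.g., the flat-rotation-curve result $\kappa = \sqrt{2}\,\Omega$, for which $\Omega-\kappa = (1-\sqrt{2})\,\Omega < 0$ at all radii.

```python
import numpy as np

def free_precession(R, Vc):
    # Omega, kappa and the m=1 free precession frequency Omega - kappa
    # from a tabulated rotation curve Vc(R), using
    # kappa^2 = R d(Omega^2)/dR + 4 Omega^2 with finite differences.
    Omega = Vc / R
    kappa = np.sqrt(R * np.gradient(Omega**2, R) + 4.0 * Omega**2)
    return Omega, kappa, Omega - kappa
```

For a retrograde ($\Omega_{\rm lop}<0$) $m=1$ mode, the ILR then sits where the returned $\Omega-\kappa$ curve crosses $\Omega_{\rm lop}$; since $|\Omega-\kappa|$ decreases outwards, this resonance lies in the outer disc.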
\par
In our current measurements (see Fig.~\ref{fig:lopsided_patternspeed}), we show that the lopsidedness in the outer parts of our galaxy model has a negative pattern speed, with $\Omega_{\rm lop} = -5.7 \mbox{$\>{\rm km\, s^{-1}}$}$ kpc$^{-1}$ after the second pericentre passage in the model \textbf{gSadE001dir33}. The pattern speed is negative after the first pericentre passage as well. For the retrograde model \textbf{gSadE006ret33} (Fig.~\ref{fig:lopsided_patternspeed_ret}), we measure $\Omega_{\rm lop} = -4.3 \mbox{$\>{\rm km\, s^{-1}}$}$ kpc$^{-1}$ after the first pericentre passage. Since the pattern speed is negative, our galaxy models do not have a corotation resonance for the lopsidedness, but only the inner Lindblad resonance (ILR). For the model \textbf{gSadE001dir33}, with $\Omega_{\rm lop} \simeq -5 \mbox{$\>{\rm km\, s^{-1}}$}$ kpc$^{-1}$, the ILR for the lopsidedness is pushed to $4.3 R_{\rm d}$, towards the outer region of the stellar disc (see Fig.~\ref{fig:frequencies}). In the same galaxy model, the central bar has a positive pattern speed, and the corotation for the bar in the model \textbf{gSadE001dir33} is well inside the disc, at $3.5 R_{\rm d}$. It is tantalising to infer that the central bar and the outer lopsidedness in our galaxy models are two dynamically decoupled patterns. After all, the lopsidedness has been generated during the minor merger process. However, the ILR of the $m=1$ lopsidedness ($\sim 4.3 R_{\rm d}$) falls in between the CR ($\sim 4.2 R_{\rm d}$) and the OLR ($\sim 5.2 R_{\rm d}$) of the $m=2$ bar, i.e., in the outer part of the stellar disc these resonance points almost overlap. Past studies have addressed the important role of resonance overlap due to bar-spiral or spiral-spiral scenarios \citep[e.g., see][]{SellwoodandBinney2002,MinchevandFamaey2010,Minchevetal2011} in the context of radial migration and disc dynamics.
It will be insightful to investigate the stellar and gas kinematics at these (nearly overlapping) resonance locations associated with the $m=1$ lopsidedness and the $m=2$ bar.
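The resonance radii quoted above follow from the standard conditions $\Omega(R) = \Omega_p$ (CR), $\Omega - \kappa/m = \Omega_p$ (ILR) and $\Omega + \kappa/m = \Omega_p$ (OLR). A sketch of how such radii can be located numerically is given below; the rotation curve and the bar pattern speed are illustrative assumptions, with only $\Omega_{\rm lop} \simeq -5$ km s$^{-1}$ kpc$^{-1}$ taken from our measurement:

```python
import numpy as np

# Illustrative rotation curve (an assumed form, not the GalMer model)
v0, a = 220.0, 3.0                          # km/s, kpc

def Omega(R):                               # angular frequency v_c/R
    return v0 / np.sqrt(R**2 + a**2)

def kappa(R):                               # epicyclic frequency (analytic for this curve)
    return Omega(R) * np.sqrt(4.0 - 2.0 * R**2 / (R**2 + a**2))

R = np.linspace(0.5, 40.0, 40_000)          # kpc, fine grid for root location
Om_lop = -5.0                               # measured m=1 pattern speed, km/s/kpc
Om_bar = 30.0                               # m=2 bar pattern speed (assumed value)

# m=1 ILR: Omega - kappa = Omega_lop (no corotation exists for a negative pattern speed)
R_ILR1 = R[np.argmin(np.abs(Omega(R) - kappa(R) - Om_lop))]

# m=2 bar: corotation at Omega = Omega_bar, OLR at Omega + kappa/2 = Omega_bar
R_CR2  = R[np.argmin(np.abs(Omega(R) - Om_bar))]
R_OLR2 = R[np.argmin(np.abs(Omega(R) + 0.5 * kappa(R) - Om_bar))]
```

The actual resonance locations quoted in the text are, of course, read off the measured frequency curves of Fig.~\ref{fig:frequencies}, not this toy rotation curve.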
\begin{figure}
\centering
\includegraphics[width=1.\linewidth]{circfreq_t18_gSadE001dir33.pdf}
\caption{ The circular and the epicyclic frequencies, together with the locations of the resonant points of the $m=2$ bar and the $m=1$ lopsidedness, are shown at $t = 0.8 \mbox{$\>{\rm Gyr}$}$ for the model \textbf{gSadE001dir33}. The magenta dashed horizontal line denotes the pattern speed ($\Omega_{\rm bar}$) of the bar, while the cyan dashed horizontal line denotes the pattern speed ($\Omega_{\rm lop}$) of the $m=1$ lopsidedness.}
\label{fig:frequencies}
\end{figure}
\section{Excitation of kinematic lopsidedness}
\label{sec:flapping_mode}
Past studies, both theoretical \citep[e.g., see][]{Jog1997,Jog2002} and observational \citep[e.g., see][]{vanEymerenetal2011,vanEymeren2011}, have shown that an $m=1$ lopsided distortion in the density distribution often couples with a large-scale asymmetry in the velocity field as well. An off-set between the rotation curves of a galaxy, calculated for the receding and approaching sides separately, is considered a signature of kinematic lopsidedness. In the light of these past findings, one might wonder whether the minor merger models considered here also display a kinematic lopsidedness.
To study the presence of a kinematic lopsidedness in detail, we first choose the model \textbf{gSadE001dir33}. This model displays a prominent large-scale $m=1$ lopsided distortion in the density distribution of the stars in the host galaxy (see previous sections). Using the six-dimensional position-velocity information of the stellar particles, we first construct the line-of-sight stellar velocity fields, at a $90 ^{\circ}$ angle of inclination ($i$), for the model \textbf{gSadE001dir33}. One such stellar velocity field is shown in Fig.~\ref{fig:lopsided_kinematic} (see top left panel). To quantify the kinematic asymmetry, we then extract the radial profiles of the line-of-sight stellar velocity ($V_{\rm los}$) along the kinematic major axis for both the receding and the approaching parts of the galaxy. One such inclination-corrected stellar radial velocity profile, at $t = 0.9 \mbox{$\>{\rm Gyr}$}$, is shown in Fig.~\ref{fig:lopsided_kinematic} (see bottom left panel). We point out that one has to perform a rigorous asymmetric drift correction to recover the rotation curve from the projected kinematics \citep[e.g., see][]{BiineyTremaine2008}. However, the \textit{inclination-corrected} velocity profiles ($V_{\rm los}/\sin(i)$), extracted along the kinematic major axis, can be used as a reasonable proxy for the rotation curve to a first-order approximation. This serves our purpose of detecting any kinematic lopsidedness in the stellar velocity field.
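The construction of the receding/approaching velocity profiles can be sketched as follows, using mock edge-on particle data with a built-in 10 per cent velocity asymmetry; all numbers here are illustrative assumptions, not the simulation output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Mock edge-on (i = 90 deg) stellar disc: positions x along the kinematic
# major axis (kpc) and line-of-sight velocities v_los (km/s).
N = 100_000
x = rng.normal(scale=5.0, size=N)
v0 = 200.0
# Mild built-in asymmetry (10% faster on x > 0) to illustrate the offset
v_amp = v0 * (1.0 + 0.1 * (x > 0))
v_los = v_amp * np.tanh(x / 3.0) + rng.normal(scale=20.0, size=N)

inc  = np.deg2rad(90.0)
bins = np.arange(0.0, 15.0, 1.0)          # radial bins along the major axis, kpc

def side_profile(sign):
    """Median inclination-corrected v_los profile for one side of the disc."""
    sel = sign * x > 0
    r = np.abs(x[sel])
    v = sign * v_los[sel] / np.sin(inc)   # fold both sides to positive velocity
    idx = np.digitize(r, bins)
    return np.array([np.median(v[idx == k]) for k in range(1, len(bins))])

v_receding    = side_profile(+1)
v_approaching = side_profile(-1)
offset = v_receding - v_approaching       # non-zero offset => kinematic lopsidedness
```

In the actual analysis the same folding and binning is applied to the simulated stellar particles at each output time.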
\begin{figure*}
\centering
\includegraphics[width=0.85\linewidth]{collage_vc_lopsided.pdf}
\caption{ \textit{Top panels:} line-of-sight stellar velocity fields for the host plus satellite (gSa+dE0) system (at a $90 ^{\circ}$ angle of inclination) are shown at $t = 0.9 \mbox{$\>{\rm Gyr}$}$ for models \textbf{gSadE001dir33} (left column) and \textbf{gSadE006ret33} (right column). The green roundish region in the upper panels is due to the satellite galaxy. Red arrows denote the corresponding kinematic major axis. \textit{Bottom panels} show the corresponding radial profiles of the inclination-corrected, line-of-sight rotational velocities ($V_{\rm los}/\sin(i)$), calculated separately for the receding and the approaching sides. Black solid lines in the lower panels denote the corresponding circular velocity, calculated from the intrinsic particle distribution; for details see Appendix~\ref{sec:circvel}. Here, $R_{\rm d} = 3 \kpc$.}
\label{fig:lopsided_kinematic}
\end{figure*}
A prominent, global distortion of the stellar velocity is seen to be present whenever the inclination-corrected line-of-sight rotational velocities ($V_{\rm los}/\sin(i)$) of the receding and approaching sides show a large-scale off-set (as at $t = 0.9 \mbox{$\>{\rm Gyr}$}$). In other words, the model \textbf{gSadE001dir33} displays a kinematic lopsidedness as well after each pericentre passage of the satellite galaxy. Also, there exists a one-to-one correspondence between the epochs of prominent lopsidedness in the stellar density and in the kinematics, thereby showing that they are indeed coupled, in concordance with the findings of previous studies. However, at the end of the simulation run, at $t = 3.8 \mbox{$\>{\rm Gyr}$}$, when the satellite galaxy has merged with the host galaxy and the post-merger remnant has had time to readjust itself, the global distortion of the stellar velocity fades away. In other words, at $t = 3.8 \mbox{$\>{\rm Gyr}$}$, the values of $V_{\rm los}/\sin(i)$ on the receding and the approaching sides coincide again. We check that the other minor merger models considered here also show a similar trend in the temporal evolution of the kinematic lopsidedness in the host galaxy. For the sake of brevity, they are not shown here.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{gasVelocity_gSadE001dir33.pdf}
\medskip
\includegraphics[width=\linewidth]{gasHisto_t18dir.pdf}
\caption{\textit{Top panels} show the line-of-sight gas velocity fields at $90 ^{\circ}$ angle of inclination, calculated at two different epochs for the model \textbf{gSadE001dir33}. \textit{Bottom panel} shows the corresponding distribution of the line-of-sight velocity, calculated separately for the receding and the approaching sides. At $t = 0.9 \mbox{$\>{\rm Gyr}$}$, the gas velocity distributions in the receding and the approaching sides are indeed asymmetric.}
\label{fig:lopsided_kinematic_gas}
\end{figure}
At this point, one might wonder whether the gas velocity field also displays a similar large-scale kinematic asymmetry in the minor merger models considered here. To investigate that, we first construct the line-of-sight gas velocity fields, at a $90 ^{\circ}$ angle of inclination ($i$), for the model \textbf{gSadE001dir33}. Two such gas velocity distributions, calculated at $t = 0.9 \mbox{$\>{\rm Gyr}$}$ and at the end of the simulation run ($t = 3.8 \mbox{$\>{\rm Gyr}$}$), are shown in Fig.~\ref{fig:lopsided_kinematic_gas} (top panels). Next, we calculate the gas velocity distributions separately for the receding and the approaching sides of the host galaxy. The resulting distributions, at $t = 0.9 \mbox{$\>{\rm Gyr}$}$ and at $t = 3.8 \mbox{$\>{\rm Gyr}$}$, are shown in Fig.~\ref{fig:lopsided_kinematic_gas} (bottom panels). At $t = 0.9 \mbox{$\>{\rm Gyr}$}$, the gas velocity distribution shows a high degree of asymmetry when the receding and the approaching sides of the host galaxy are considered separately, thereby indicating the presence of a kinematic lopsidedness in the gas velocity field as well. We find that this kinematic asymmetry becomes larger after each pericentre passage of the satellite galaxy, similar to what is seen for the stellar velocity fields. However, at the end of the simulation run ($t = 3.8 \mbox{$\>{\rm Gyr}$}$), the gas velocity distributions on the receding and the approaching sides of the host galaxy become close to symmetric, thereby indicating the absence of a strong kinematic lopsidedness in the gas velocity field.
Lastly, we probe the persistence of the kinematic lopsidedness in the model \textbf{gSadE006ret33}, where the merger happens at a very late epoch. Following the same methodology, we first construct the line-of-sight stellar velocity field at a $90 ^{\circ}$ angle of inclination. One such stellar velocity field is shown in Fig.~\ref{fig:lopsided_kinematic} (see top right panel). We then extract the line-of-sight rotational velocity profile along the kinematic major axis for the receding and the approaching parts separately. One such radial variation of the inclination-corrected line-of-sight rotational velocity ($V_{\rm los}/\sin(i)$), at $t =0.9 \mbox{$\>{\rm Gyr}$}$, is shown in Fig.~\ref{fig:lopsided_kinematic} (see bottom right panel) for the model \textbf{gSadE006ret33}. A prominent large-scale off-set in the line-of-sight rotational velocities of the receding and the approaching sides exists, similar to the other minor merger models considered here. The main difference is that the kinematic lopsidedness persists till the end of the simulation run ($t = 3 \mbox{$\>{\rm Gyr}$}$) because of the prolonged interaction phase. Thus, a prolonged interaction with a satellite can drive a large-scale kinematic lopsidedness which is sustained over a longer time-scale.
\section{Comparison with previous works}
\label{sec:comparison_previousWork}
Here, we briefly compare the properties of the $m=1$ lopsidedness (e.g., strength, extent) in the stellar density distribution of the host galaxy in our chosen minor merger models with the past literature on the excitation of lopsidedness via minor mergers or tidal encounters \citep[e.g., ][]{ZaritskyandRix1997,Bournaudetal2005,Mapellietal2008}. We also compare the strength of the $m=1$ lopsided distortion in our models with that of the observed lopsided galaxies, as revealed by past observational studies.
%
\par
%
In \citet{ZaritskyandRix1997}, the measured average value of the $m=1$ lopsidedness ($\left<A_1/A_0 \right>$) is $\sim 0.2$ at $R > 1.5 R_{\rm d}$, where the cause of the lopsidedness was conjectured to be tidal interactions. In \citet{Mapellietal2008}, the average value of the $m=1$ lopsidedness ($\left<A_1/A_0 \right>$) is $\sim 0.1$ at $R \sim 2.5 R_{\rm d}$ when the lopsidedness in the stars is excited via a fly-by encounter. Similarly, a model presented in \citet{Bournaudetal2005} showed $\left<A_1/A_0 \right> \sim 0.2$ at $t \sim 2 \mbox{$\>{\rm Gyr}$}$, where the lopsidedness is induced by a galaxy interaction and merger. In comparison, the minor merger models considered here show a stronger $m=1$ lopsidedness ($\left<A_1/A_0 \right> \sim 0.4$ for some models) in the outer disc region ($\sim 4-7 R_{\rm d}$) of the stellar density distribution. In other words, the galaxy interactions involved in these merger models can generate strong outer disc lopsidedness.
%
\par
The location of the occurrence of the prominent $m=1$ lopsidedness also merits some discussion. Past observational studies measured the $m=1$ lopsided distortion up to 3.5 $R_{\rm d}$, where $R_{\rm d}$ is the disc scale-length \citep[e.g., see][]{RixandZaritsky1995,Bournaudetal2005,Zaritskyetal2013}, as the near-IR measurements were not available at larger radii due to signal-to-noise constraints. However, the usage of H~{\sc i} as a tracer allows one to detect the $m=1$ distortion further out in a galaxy. The resulting amplitude of lopsidedness increases with radius up to the optical radius ($\sim 4-5 R_{\rm d}$) and then saturates at larger radii, as measured for the WHISP sample data \citep[e.g., see][]{vanEymeren2011}. While this radial increase of the amplitude of the $m=1$ lopsided distortion is also seen in our chosen minor merger models, the distortion appears predominantly at $4-7 R_{\rm d}$ in our case.
\par
%
Lastly, we compare the strength of the $m=1$ lopsided distortion in our models with that from the observed galaxies with lopsidedness. We note that the model \textbf{gSadE001dir33} also hosts a central stellar bar which gets amplified during a pericentre passage of the satellite and subsequently weakens in the post-merger remnant \citep[for details see][]{Ghoshetal2020}. First, we choose a few time-steps where both the bar and the lopsided pattern co-exist. Then, we measure the average value of the $m=2$ Fourier coefficient, $\left< A_2/A_0 \right>$, in the central region encompassing the bar ($\sim 0.5 - 2 \ R_{\rm d}$), as well as the average value of the $m=1$ Fourier coefficient, $\left< A_1/A_0 \right>$, in the outer region encompassing the lopsided pattern ($\sim 4 - 7 \ R_{\rm d}$). This is shown in Fig.~\ref{fig:lopsided_comparison_observation}. Next, to compare with observations, we make use of the measurements of the average values of the $m=1$ and the $m=2$ Fourier coefficients from \citet{Zaritskyetal2013}, which provides these measurements for 163 galaxies selected from the S$^4$G sample \citep{Shethetal2010}. \citet{Zaritskyetal2013} provides the average values of the $m=1$ Fourier coefficient, $\left< A_1/A_0 \right>$, measured in two radial extents -- $\left< A_1\right>_i$, measured in the inner region ($1.5-2.5 R_{\rm d}$), and $\left< A_1\right>_o$, measured in the outer region ($2.5-3.5 R_{\rm d}$). The same is true for the $m=2$ mode as well \citep[for details see description in][]{Zaritskyetal2013}. Therefore, for a uniform comparison with the model \textbf{gSadE001dir33}, we have taken the values of $\left< A_1\right>_o$ and $\left< A_2\right>_i$. Furthermore, we select only those galaxies which host a bar. As seen clearly in Fig.~\ref{fig:lopsided_comparison_observation}, the lopsidedness excited in our selected minor merger model is broadly consistent with the observed lopsidedness in galaxies from the S$^4$G sample.
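The Fourier coefficients used throughout this comparison are computed from the particle azimuths in a radial annulus, $A_m/A_0 = \left|\sum_j e^{im\phi_j}\right|/N$. A self-contained sketch with a mock lopsided particle distribution is given below; the input amplitude and sampling choices are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(7)

# Mock annulus of stellar particles with an imposed m=1 overdensity:
# surface density ~ 1 + eps*cos(phi), with eps = 0.4 (the assumed input).
eps, N = 0.4, 200_000
phi = []
while len(phi) < N:                       # rejection-sample the azimuths
    p = rng.uniform(0.0, 2.0 * np.pi, N)
    keep = rng.uniform(0.0, 1.0 + eps, N) < 1.0 + eps * np.cos(p)
    phi.extend(p[keep])
phi = np.asarray(phi[:N])

def fourier_amp(phi, m):
    """A_m/A_0 = |sum_j exp(i m phi_j)| / N over particles in the annulus."""
    return np.abs(np.exp(1j * m * phi).sum()) / len(phi)

A1 = fourier_amp(phi, 1)     # ~ eps/2 = 0.2 for this density law
A2 = fourier_amp(phi, 2)     # ~ 0: no m=2 mode was imposed
```

For a density law $1 + \epsilon\cos\phi$ the recovered amplitude is $A_1/A_0 = \epsilon/2$, which is a useful sanity check of the estimator before applying it to the simulation particles.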
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{nonaxisymmetry_comparison_s4gsample.pdf}
\caption{Distribution of the average values of the $m=1$ and $m=2$ Fourier coefficients, $\left<A_1/A_0 \right>$ and $\left<A_2/A_0 \right>$, shown at different time-steps for the model \textbf{gSadE001dir33} when these two modes co-exist. These are compared with the measurements of \citet{Zaritskyetal2013} for a sample of galaxies from the S$^4$G catalogue. Here, only galaxies with a bar are selected; for details see text. The simulation run time (in Gyr) is colour-coded.}
\label{fig:lopsided_comparison_observation}
\end{figure}
\section{Discussion}
\label{sec:discussion}
Here, we discuss a few points regarding this work.\\
We show that a minor merger event can trigger a strong, coherent $m=1$ lopsided pattern in both the density and the velocity distributions of the host galaxy. Thus, a minor merger is a plausible avenue to excite lopsidedness in the stellar component of host galaxies that reside in dense environments (e.g., in groups and in clusters), in concordance with the findings of previous works \citep[e.g.,][]{Bournaudetal2005,Mapellietal2008}. Indeed, galaxies residing in groups and clusters are observationally known to display strong lopsided distortions \citep[e.g., see][]{Haynesetal2007}. We show that both direct and retrograde orbital configurations can generate strong, coherent $m=1$ lopsidedness in the host galaxies, as opposed to earlier findings that only retrograde orbital configurations are favourable for generating lopsided asymmetry \citep{Bournaudetal2005}. However, the longevity of the $m=1$ lopsided pattern does depend on the orbital configuration. In our chosen models, for a retrograde orbit, the time of interaction is larger than for a direct orbit with the same orbital energy (see Table~\ref{table:key_param}). For example, the lopsidedness is shown to persist for $\sim 2.3 \mbox{$\>{\rm Gyr}$}$ in the model \textbf{gSadE006ret33}, where the merger happens at a very late epoch. This implies that galaxies experiencing continuous fly-by encounters will show sustained, coherent, strong $m=1$ lopsidedness which can be detected in observations. Thus, a minor merger has a significant effect in stirring up the internal dynamics of the host galaxy before the merger is complete. Minor merger events are more probable in the early Universe; therefore the secular evolution driven by an $m=1$ lopsided distortion is likely to have a strong influence on the early evolution of galaxies.
\par
Also, we mention that the satellite merges with the host galaxy typically around $2 \mbox{$\>{\rm Gyr}$}$ after the start of the simulation run. As the lopsidedness fades away after the merger happens, our findings are in apparent tension with the observationally known vast abundance of lopsided galaxies in the local Universe, since minor mergers are common there \citep[e.g., see][]{Frenketal1988,CarlbergandCouchman1989,LaceyandCole1993,FakhouriandMa2008,Kavirajetal2009}. Furthermore, a galaxy might experience multiple minor merger events during its entire lifetime \citep[e.g., see][]{Hopkinsetal2009}. However, we note that, in reality, a galaxy can accrete cold gas \citep[e.g.,][]{BirnboimDekel2003,Keresetal2005,DekelBirnboim2006,Ocvriketal2008} either during the merger phase or at a later stage. Such an asymmetric cold gas accretion can rejuvenate the lopsidedness in the galaxy, as shown in previous studies \citep[e.g.,][]{Bournaudetal2005,Mapellietal2008}. Thus, while a minor merger event remains one plausible mechanism for exciting an $m=1$ lopsidedness in galaxies, as also shown in this work, other mechanisms (e.g., asymmetric gas accretion) must also be at play to account for the vast abundance of lopsided galaxies inferred by past observational studies.
\par
Lastly, we mention that a disc-DM halo off-set configuration was previously shown to lead to a strong lopsidedness in the central few kpc region \citep[e.g., see][]{PrasadandJog2017}, as the disc mass dominates over the halo in the central region, so that the off-set halo acts as a perturbation on the disc. Although such a disc-DM halo off-set exists (for a short time-scale) in our minor merger models, we do not find any strong lopsidedness in the central disc regions of the host galaxy.
\section{Conclusion}
\label{sec:conclusion}
In summary, we investigated the dynamical impact of minor mergers of galaxies (mass ratio 1:10) on the excitation of an $m=1$ lopsided distortion in the stellar density and velocity fields. We also studied the generation of a stellar disc-DM halo off-set configuration in the host galaxy during a minor merger event. We selected a set of minor merger models, with varying orbital energy, orientation of the orbital spin vector, and morphology of the host galaxy, from the GalMer library of galaxy merger simulations. Our main findings are:\\
\begin{itemize}
\item{A minor merger event can trigger a prominent $m=1$ lopsided distortion in the stellar density distribution of the host galaxy. The strength of the lopsided distortion undergoes a transient amplification phase after each pericentre passage of the satellite. However, the lopsidedness fades away after the merger happens and the post-merger remnant gets time ($500-850 \mbox{$\>{\rm Myr}$}$) to readjust itself. This broad trend holds true for a wide range of orbital configurations considered here. In addition, a \textit{delayed} minor merger can drive a prolonged ($\sim 2-2.5 \mbox{$\>{\rm Gyr}$}$) lopsidedness due to continued pericentre passages of the satellite.}
\item{The $m=1$ lopsided pattern is shown to rotate in the disc with a well-defined pattern speed. The pattern speed of the $m=1$ lopsidedness is smaller than the pattern speed of the $m=2$ bar, when measured simultaneously at a same epoch. Moreover, the $m=1$ lopsided distortion rotates in a retrograde sense with respect to the $m=2$ bar mode. This gives rise to a dynamical scenario of a bar-lopsidedness resonance overlap.}
\item{The stellar and the gas velocity fields of the host galaxy also display a large-scale kinematic lopsidedness after each pericentre passage of the satellite galaxy. However, this kinematic lopsided pattern fades away after the merger happens. }
\item{An interaction with a satellite galaxy also excites an off-set between the stellar disc and the DM halo of the host galaxy. The resulting off-set is 2-3 times the softening length of the simulation. This off-set is rather short-lived, and is most prominent after each pericentre passage of the satellite. This holds true for the wide range of orbital configurations considered here.}
\end{itemize}
\section*{Acknowledgement}
The authors acknowledge support from an Indo-French CEFIPRA project (Project No.: 5804-1).
CJJ thanks the DST, Government of India for support via a J.C. Bose fellowship (SB/S2/JCB-31/2014).
This work makes use of the publicly available GalMer library of galaxy merger simulations which is a part of {\sc HORIZON} project (\href{http://www.projet-horizon.fr/rubrique3.html} {http://www.projet-horizon.fr/rubrique3.html}).
\section*{Data availability}
The simulation data of the minor merger models used here are publicly available at \href{http://galmer.obspm.fr}{http://galmer.obspm.fr}. The measurements of the average values of the $m=1$ lopsided distortion and the $m=2$ bar mode for the sample of S$^4$G galaxies are publicly available from \href{https://doi.org/10.26093/cds/vizier.17720135}{this URL}.
\section{Introduction}
\label{sec:intro}
Recent years have seen a revival of the idea, originally put forward by S. Hawking \cite{1971MNRAS.152...75H}, that Primordial Black Holes (PBH) could make up the so far elusive Dark Matter. LIGO's first detection of gravitational waves from merging binary black holes of approximately equal masses in the range 10--30 M$_{\odot}$ \citep{2016PhRvX...6d1015A,2019ApJ...882L..24A} led to the suggestion that these could be a signature of stellar-mass dark matter PBH \citep{2016PhRvL.116t1301B,2016ApJ...823L..25K,2017PDU....15..142C} in a mass window not yet excluded by other astrophysical constraints. A recent review of the rich literature constraining the possible contributions of PBH to the dark matter is given, e.g., in \cite{2019PhRvD.100d3540M}.
\vskip 0.1 truecm
In a recently published theoretical prediction \cite{2019rsta.377..2161G,2019arXiv190411482G}, PBH are created in the QCD phase transitions (around 100 MeV) of different particle families freezing out of the primordial quark-gluon plasma within the first two seconds after the inflationary phase. When W$^{+/-}$ and Z bosons, baryons, and pions are created, and e$^+$e$^-$ pairs annihilate, they leave an imprint in the form of a significant reduction of the sound speed at the corresponding phase transitions, allowing regions of high curvature to collapse and form PBH \citep[see also][]{2018JCAP...08..041B}. The typical mass scale of these PBH is set by the size of the horizon at the time of the corresponding phase transition. In this model four distinct populations of PBH in a wide mass range are formed: planetary-mass black holes at the W$^{+/-}$, Z transition; PBH of around the Chandrasekhar mass when the baryons (protons and neutrons) are formed from three quarks; PBH with masses of order 30 M$_{\odot}$ (corresponding to the LIGO black holes), when pions are formed from quark-antiquark pairs; and finally supermassive black holes (SMBH) at the e$^+$e$^-$ annihilation \citep[see also][]{2017JPhCS.840a2032G}. Another remarkable aspect of this theory is that the gravitational energy released at the PBH collapse locally reheats regions (hot spots) around the black holes to the electroweak transition scale (around 100 GeV), where chiral sphaleron selection effects can introduce the matter/antimatter asymmetry. The PBH in this picture would therefore also be responsible for the baryogenesis and would fix the ratio of dark matter to baryons. Clustering of the PBH in a very wide mass distribution could alleviate some of the more stringent observational constraints on the allowed contribution of PBH to the dark matter \citep{2017PDU....15..142C,2019EPJC...79..246B}. 
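The quoted mass scales can be recovered from the horizon mass at the time of each transition, $M_H \sim c^3 t/G$; a rough order-of-magnitude sketch is given below, where the radiation-era time-temperature relation and the effective degrees of freedom $g_*$ are approximate assumed values:

```python
import math

G, c  = 6.674e-11, 2.998e8     # SI units
M_sun = 1.989e30               # kg

def t_radiation(T_MeV, g_star):
    """Age of the radiation-dominated universe at temperature T:
    t ~ (2.4/sqrt(g_*)) (MeV/T)^2 seconds (order-of-magnitude relation)."""
    return 2.4 / math.sqrt(g_star) / T_MeV**2

def horizon_mass_Msun(t):
    """Mass within the Hubble horizon at time t, M_H ~ c^3 t / G."""
    return c**3 * t / G / M_sun

# QCD transition (~100 MeV, g_* ~ 60): horizon mass of order a few solar masses
M_qcd = horizon_mass_Msun(t_radiation(100.0, 60.0))

# e+e- annihilation (~0.5 MeV, g_* ~ 10): horizon mass ~ 1e6 Msun, the SMBH scale
M_ee = horizon_mass_Msun(t_radiation(0.5, 10.0))
```

These order-of-magnitude estimates reproduce the hierarchy of the four PBH populations; the precise peak masses depend on the detailed transition physics.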
The interpretation of cold dark matter as the sum of contributions of different-mass PBH families could explain a number of so far unsolved mysteries, e.g., the massive seed black holes required to create the supermassive black holes in the earliest QSOs \citep{2007ApJ...665..187L}, the ubiquitous massive LIGO/VIRGO binary black holes \citep[e.g.][]{2016ApJ...823L..25K}, or even the putative "Planet X" PBH in our own Solar System \citep{2019arXiv190911090S}.
\vskip 0.1 truecm
The most abundant family of PBH should be around the Chandrasekhar mass (1.4 M$_{\odot}$). This prediction may already have been vindicated by the recent OGLE/GAIA discovery of a sizeable population of putative black holes in the mass range 1--10 M$_{\odot}$ \citep{2019arXiv190407789W}. The microlensing survey OGLE has detected $\sim$60 long-duration microlensing events. About 20 of these have GAIA DR2 parallax distances of a few kpc, which break the microlensing mass–distance degeneracy and allow the determination of masses in the few solar mass range, implying that these objects are most likely black holes, since stars at those distances would be directly visible by OGLE.
\vskip 0.1 truecm
Important fingerprints of a population of PBH may be hidden in the Cosmic infrared and X--ray background radiation (see \cite{2018RvMP...90b5006K} for a comprehensive review). Indeed, \cite{2016ApJ...823L..25K} argues that the near-infrared Cosmic background (CIB) anisotropies detected in deep {\em Spitzer} \cite{2005Natur.438...45K,2007ApJ...654L...5K,2010ApJS..186...10A,2012ApJ...753...63K} and {\em Akari} \cite{2011ApJ...742..124M} images, which cannot be accounted for by known galaxy populations \cite{2012ApJ...752..113H}, could be connected to PBH. Similar fluctuations were discovered in the Cosmic X--ray background (CXB) observed in a deep {\em{Chandra}} survey, which are correlated with the CIB anisotropies in the same field \cite{2013ApJ...769...68C}. Later studies of wider/deeper fields covered by both {\em Chandra} and {\em Spitzer} \citep{2017ApJ...847L..11C,2018ApJ...864..141L,2019ApJ...883...64L} have substantially improved the detection significance of the observed signal. The X--ray fluctuations contribute about 20\% to the CIB signal, indicating that black hole accretion should be responsible for such highly efficient X--ray emission. Similar studies of deep fields observed with the {\em{Hubble}} Space Telescope in the optical range do not show such a cross-correlation signal down to m$_{AB}$$\sim$28 \citep[see][]{2018RvMP...90b5006K}. The angular scales of the fluctuation power spectra of the CIB and CXB reach values >1000", much larger than expected for the known galaxy populations \citep{2014ApJ...785...38H}. All of these findings can be understood if the fluctuation signal comes from a high-redshift (z$\gtrsim$12) population of black holes. The spectral shape of the CXB fluctuations determined from a combination of the deepest/widest fields \cite{2019ApJ...883...64L} can be fit either with a very high redshift population of obscured black holes, or with completely unobscured black hole accretion. 
Original models \citep{2013MNRAS.433.1556Y} invoked highly obscured Direct Collapse Black Holes formed in metal-free halos at z>12 to explain the observed CIB and CXB signal. However, accreting massive black holes have recently been firmly ruled out as the source of these fluctuations \cite{2019MNRAS.489.1006R}, because they would require an unfeasible amount of black hole accretion at z>6, locking up a larger amount of mass in massive black holes at high redshift than is contained in the known black hole mass function at z=0. These authors also ruled out local diffuse emission as the source of the X--ray fluctuations. The CXB has been largely resolved into discrete sources in deep X--ray images, either directly \citep[see][]{2005ARA&A..43..827B,2006ApJ...645...95H}, or by cross-correlating with the deepest {\em Hubble} galaxy catalogues \citep{2007ApJ...661L.117H,2012ApJ...748...50C}. However, \cite{2007ApJ...661L.117H} show that some marginally significant diffuse CXB still remains after accounting for all discrete contributions. This is consistent with the independent determination of \cite{2017ApJ...837...19C}. The residual unresolved flux is about 3 times larger than the X-ray flux associated with the above CXB/CIB fluctuations.
\vskip 0.1 truecm
Given the difficulties in explaining the CIB/CXB correlation with known classes of sources, and motivated by the notion that the dark matter could be dominated by an extended mass distribution of PBH, I constructed a toy model to explore the potential contribution to the cosmic backgrounds by the accretion of baryons throughout cosmic history onto such a population of early black holes. Assuming a combination of Bondi-Hoyle-Lyttleton quasi-spherical capture at large distances from the PBH, and advection-dominated disk accretion flows (ADAF) in the vicinity of the central object, I can explain the observed residual CXB flux and the CXB/CIB crosscorrelation with minimal tuning of the input parameters, and find a maximum contribution to the extragalactic background light in the redshift range 15<z<30. I further estimate that this accretion onto PBH can produce enough flux to significantly contribute to the pre-ionization of the intergalactic medium with UV photons by a redshift z$\gtrsim$15 and to the pre-heating of the baryons with X--ray photons, observed as an "entropy floor" in the X--ray emission of galaxy groups.
\vskip 0.1 truecm
In section 2 the assumed PBH mass distribution is introduced and contrasted with recent observational limits on the PBH contribution to the dark matter. The basic ingredients of the toy model for the accretion onto PBH are presented in section 3. The assumed radiation mechanism and efficiency is discussed in section 4. The contribution of the PBH emission to the different bands is compared with the observational constraints in section 5. Other potential diagnostics of this putative dark matter black hole population are discussed in section 6, and conclusions are presented in section 7. Throughout this work a $\Lambda$CDM cosmology with $\Omega_M$=0.315, $\Omega_\Lambda$=0.685, and H$_0$=67.4 km s$^{-1}$ Mpc$^{-1}$ \citep{2018arXiv180706209P} is used. These parameters define the baryon density $\Omega_{bar}$=0.049, the dark matter density $\Omega_{DM}$=0.264, and the critical mass density of the universe $\rho_{crit}$=1.26$\times10^{20} M_\odot~ {\rm Gpc}^{-3}$. All logarithms in this paper are taken to the base 10.
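The quoted value of $\rho_{crit}$ follows directly from the Friedmann relation $\rho_{crit} = 3H_0^2/(8\pi G)$; a quick numerical check:

```python
import math

G     = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30           # kg
Mpc   = 3.0857e22          # m

H0 = 67.4e3 / Mpc                                  # 67.4 km/s/Mpc in s^-1
rho_crit_SI = 3.0 * H0**2 / (8.0 * math.pi * G)    # kg m^-3

# Convert to Msun Gpc^-3 (1 Gpc = 1000 Mpc)
rho_crit = rho_crit_SI * (1000.0 * Mpc)**3 / M_sun
# rho_crit comes out at ~1.26e20 Msun/Gpc^3, as quoted in the text
```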
\section{The assumed PBH mass distribution}
\label{sec:PBH}
The theoretical predictions in \cite{2019rsta.377..2161G,2019arXiv190411482G,2017JPhCS.840a2032G,2019arXiv190608217C} yield a broad distribution of PBH masses with a number of peaks corresponding to the particle families freezing out from the Big Bang. Depending on the spectral index $n_s$ of the primordial curvature fluctuation power spectrum, the PBH mass distribution has a different overall slope. \cite{2019arXiv190608217C} find consistency of these predictions with a number of recent observational limits on the PBH contribution to the dark matter, but there is a tension of their models with the Cosmic Microwave Background (CMB) constraints from accretion at large PBH masses \citep{2017PhRvD..96h3524P,2017PhRvD..95d3534A}. Recent limits from gravitational lensing of type Ia supernovae on a maximum contribution of stellar-mass compact objects to the dark matter of around 35\% \citep{2018PhRvL.121n1101Z}, and from the LIGO OI gravitational wave merger rate of black holes in the mass range 10--300 M$_{\odot}$ \cite{2017PhRvD..96l3523A} are also in tension with these models. An additional important constraint comes from a comparison of the predicted PBH fraction with the measured local mass function of supermassive black holes (SMBH) in the centers of nearby galaxies. Integrating the local SMBH mass function of \cite{2013CQGra..30x4001S} (see figure \ref{fig:fPBH}) in the range $10^{6}$--$10^{10}$ M$_{\odot}$ yields a local SMBH mass density of $\rho_{SMBH}$=6.3$\times$10$^5$ M$_{\odot}$ Mpc$^{-3}$, corresponding to a dark matter fraction of f$_{SMBH}$=1.89$\times$10$^{-5}$, which is about a factor of 10--100 lower than the f$_{PBH}$ predictions in \cite{2019rsta.377..2161G,2019arXiv190608217C}.
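The quoted dark matter fraction is simply the ratio of the integrated SMBH mass density to the dark matter density $\rho_{crit}\,\Omega_{DM}$; a one-line check:

```python
# All densities in Msun Mpc^-3; rho_crit = 1.26e20 Msun/Gpc^3 = 1.26e11 Msun/Mpc^3
rho_crit_Mpc3 = 1.26e11
Omega_DM      = 0.264
rho_SMBH      = 6.3e5      # integral of the local SMBH mass function (see text)

f_SMBH = rho_SMBH / (rho_crit_Mpc3 * Omega_DM)   # -> ~1.89e-5, as quoted above
```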
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=0.6\textwidth]{figures/f_PBH.jpg}
\end{center}
\caption{The PBH mass spectrum (thick red line) assumed for this work (Garc{\'\i}a-Bellido, 2020, priv. comm.), compared to a number of observational constraints. Microlensing limits from SNe \citep{2018PhRvL.121n1101Z}, EROS \citep{2007A&A...469..387T}, and the Subaru M31 survey \citep{2019NatAs...3..524N} are shown as solid, dashed and dotted green lines, respectively. LIGO limits from gravitational merger event rates are shown as blue solid line for subsolar masses \cite{2019PhRvL.123p1102A}, and as blue dashed line for 10-300 M$_{\odot}$ \cite{2017PhRvD..96l3523A}. The CMB accretion limits from \cite{2017PhRvD..96h3524P} are shown as orange dashed line. Multiwavelength limits from the Galactic Center \cite{2019JCAP...06..026M} are shown in magenta for X-ray (solid) and radio (dashed) observations. Finally, the local SMBH mass function \citep{2013CQGra..30x4001S} is shown as black line at 10$^{6-10}$ M$_{\odot}$.}
\label{fig:fPBH}
\end{figure*}
\vskip 0.1 truecm
For these reasons, Garc{\'\i}a-Bellido et al. (2020 in prep.) are revising their model parameters in order to predict a steeper PBH mass function at large M$_{PBH}$, and have shared one of their new models, shown as the red curve in figure \ref{fig:fPBH}. Here a value of n$_s$=0.987 is assumed for the spectral index of the primordial fluctuation power spectrum, together with a running of dn$_s$=$-$0.0006. The integral of this PBH distribution over the whole mass range yields f$_{PBH}$=1. On the other hand, the distribution yields only $\sim$40\% of the dark matter in the peak mass range [0.1,10] M$_{\odot}$, and is thus fully consistent with the microlensing constraints in figure \ref{fig:fPBH}. In the mass range of the LIGO black hole binaries it predicts just the right amount of dark matter to explain the gravitational wave merger rates, and in the SMBH range it is consistent with the local black hole mass function (taking into account that accretion onto supermassive PBH over cosmic time produces the bulk of the X-ray background \citep{2015A&A...574L..10C}). Apart from small sections, the new PBH mass function is thus fully consistent with the most recent observational constraints.
\section{Baryon accretion onto the PBH}
\label{sec:toy}
In the following I use the PBH mass spectrum presented in section \ref{sec:PBH} to calculate the accretion of baryons onto PBH over cosmic time, and to predict the electromagnetic emission from this process. As we will see, for most of the cosmic history these black holes move at supersonic speeds among the baryons and will therefore undergo Bondi-Hoyle-Lyttleton quasi-spherical capture \citep{1939PCPS...35..405H,1944MNRAS.104..273B,1952MNRAS.112..195B,2004NewAR..48..843E}. In the Bondi-Hoyle picture of a black hole moving supersonically through a homogeneous gas, the capture happens in the wake of the moving object. Behind the object, material moves in from a wide cone and needs to lose angular momentum before it can fall towards the black hole. The gas is in principle collisionless, so that only the magnetic field trapped in the plasma allows particles to lose angular momentum and start to behave like a fluid. This gas forms the accretion flow, in which it is adiabatically heated. The accreting gas is ionized and embedded in the magnetic field, and any plasma drawn in by the gravitational field will carry the magnetic field along. Shvartsman \cite{1971SvA....15..377S} argues that in the black hole tail, where the matter flow stops, the gravitational and magnetic energy densities become nearly equal. This equipartition is preserved in the infalling flow and thus the magnetic field grows towards the black hole. Just as the heat ultimately has to be radiated away, the magnetic field needs a way to dissipate energy on its way inward. \cite{2005A&A...440..223B} argue that the most likely dissipation mechanism for the magnetic field is reconnection of field lines in narrow current sheets, similar to the processes we observe in solar flares and active galactic nuclei. Magnetic reconnection causes the acceleration and non-thermal heating of a small fraction of the infalling electrons.
In parallel, decoupled magnetic field lines can carry some of the amplified magnetic field outward and eject plasma \citep{2005A&A...440..223B}.
\vskip 0.1 truecm
An important question is whether the accretion flow remains spherically symmetric close to the black hole, or whether an accretion disk is formed. Originally most researchers assumed spherical accretion for PBH \citep[e.g.][]{2007ApJ...662...53R,2008ApJ...680..829R,2017PhRvD..95d3534A}. However, \cite{2017PhRvD..96h3524P} argue that the accreted angular momentum is large enough that an accretion disk is formed, at least close to the black hole. According to these authors, the granularity of the PBH distribution and the formation of PBH binaries at the scale of the Bondi radius will imprint density and velocity gradients into the relative distribution of baryons and PBH, such that ultimately an accretion disk and an advection-dominated accretion flow (ADAF) will form \citep{2014ARA&A..52..529Y}. The formation of an ADAF disk significantly reduces the accretion rate and the radiative efficiency \citep{2012MNRAS.427.1580X}, compared to spherical accretion.
To first order, however, the Bondi-Hoyle-Lyttleton mechanism can be used to estimate the accretion rate $\dot M$ onto the PBH \citep{2017PhRvD..96h3524P,2019PhRvD.100d3540M}.
\vskip 0.1 truecm
Bondi \cite{1952MNRAS.112..195B} discusses two different approximations to the spherical gas accretion problem, (i) the velocity-limited case, where the motion of the accreting object through the gas is dominant and an accretion column is formed in the wake of the moving object, and (ii) the temperature-limited case, where the sound speed of the gas is dominant and a spherical accretion flow forms. In the velocity-limited case (i) the mass accretion rate is given as
\begin{equation}
\dot M = 2.5 \pi \rho (G M)^2 v_{rel}^{-3},
\end{equation}
where $\rho$ is the gas density, $M$ is the PBH mass, and $v_{rel}$ is the relative velocity between object and gas. In the temperature-limited case (ii) with negligible relative velocity, the thermal velocity of the gas particles is dominant and the corresponding accretion rate is given by
\begin{equation}
\dot M = 2.5 \pi \rho (G M)^2 c_s^{-3},
\end{equation}
where $c_s$ is the sound speed. For intermediate cases, \cite{1952MNRAS.112..195B} introduces an effective velocity
\begin{equation}
v_{eff} = \sqrt{v_{rel}^2+c_s^2}
\end{equation}
and the corresponding mass accretion rate becomes
\begin{equation}
\dot M = 2\lambda \pi \rho (G M)^2 v_{eff}^{-3},
\end{equation}
where the so-called accretion eigenvalue $\lambda$ is a fudge factor of order unity that depends on non-gravitational aspects of the problem, such as the gas equation of state or outflows from feedback effects. Different authors have discussed this parameter for the particular application of gas accretion onto PBH in the early universe. \cite{2007ApJ...662...53R} find values of $1.12>\lambda>10^{-3}$, depending e.g. on the PBH mass. For masses of order 1 M$_\odot$ they find $\lambda=1.12$. \cite{2016PhRvL.116t1301B} discriminate between isothermal and adiabatic gas with accretion eigenvalues of $\lambda$=1.12 and 0.12, respectively. In this paper I assume an eigenvalue $\lambda$=0.05. The motivation for this choice is discussed in section 4, while sections 5 and 6 show that this choice fits the observational constraints quite well.
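As a minimal numerical sketch (not part of the original analysis), eq. (3.4) can be written in cgs units, with the default $\lambda$=0.05 adopted in this paper:

```python
import math

G = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33   # solar mass [g]

def bondi_rate(rho, M, v_eff, lam=0.05):
    """Bondi-Hoyle-Lyttleton accretion rate of eq. (3.4) in g/s.

    rho   -- gas mass density [g/cm^3]
    M     -- black hole mass [g]
    v_eff -- effective velocity [cm/s]
    lam   -- accretion eigenvalue (0.05 assumed in this paper)
    """
    return 2.0 * lam * math.pi * rho * (G * M)**2 / v_eff**3
```

The quadratic mass dependence and the steep $v_{eff}^{-3}$ suppression are the two scalings that drive everything else in this section.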
\begin{figure*}[htbp]
\includegraphics[width=0.49\textwidth]{figures/Temperature.jpg}
\includegraphics[width=0.49\textwidth]{figures/Velocity.jpg}
\caption{Left: Baryon temperature as a function of redshift. Right: Mean relative velocity $\langle v_{rel} \rangle $ between dark matter and baryons, sound speed $c_s$ and the effective velocity $v_{eff}$ (eq. 3.8) as a function of redshift.}
\label{fig:TempVel}
\end{figure*}
\vskip 0.1 truecm
Let us first look at the thermal history, and thus the sound speed, of the gas over cosmic time. A nice summary is given in figure 15 of \cite{2013ASSL..396...45Z}. Despite having decoupled from the CMB at z$\approx$1089, the gas temperature continues to follow the temperature evolution T$\propto$(1+z) of the background photons due to Compton scattering off residual ionized electrons from the recombination era. Below redshifts z$\approx$200 the residual ionization in the gas is low enough that it decouples from the background radiation and cools adiabatically following the relation T$\propto$(1+z)$^2$. When the first objects form and reionization starts around z$\lesssim$20, the gas is heated up to temperatures $\sim$10$^4$ K. The details of re-ionization are still uncertain and will be discussed below; I have deliberately chosen a redshift of z$\approx$20 for re-ionization to become dominant, with full ionization occurring around z$\approx$7. Finally, at z<3, when the bulk of the cosmic baryons fall into increasingly larger dark matter halos and become virialized, they are further heated to form the warm/hot intergalactic medium at temperatures $10^{5-7}$ K \citep{1999ApJ...514....1C}. Using figure 2b in that paper I estimate average IGM temperatures of 5$\times10^4$, 1.5$\times10^5$, and 8$\times10^5$ K at z=2, 1, 0, respectively. The baryon temperature as a function of redshift assumed in this work is shown in figure \ref{fig:TempVel} (left). The sound speed of the gas is given by
\begin{equation}
c_s=\sqrt{\frac {\gamma k T} {\mu m_H}},
\end{equation} where $\gamma$=5/3 for an ideal monoatomic gas, $\mu$=1.22 is the mean molecular weight including a helium mass fraction of 0.24, $m_H$ is the mass of the hydrogen atom, and $T$ is the temperature of the baryons as a function of cosmic history discussed above \citep{2010PhRvD..82h3520T}. The sound speed as a function of redshift is the dotted curve shown in figure \ref{fig:TempVel} (right).
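A short sketch of eq. (3.5) with the stated parameters; reassuringly, it reproduces the $\sim$6 km/s sound speed quoted above for the $\sim$3000 K gas at recombination:

```python
import math

K_B = 1.380649e-16  # Boltzmann constant [erg/K]
M_H = 1.6726e-24    # hydrogen atom mass [g]

def sound_speed(T, gamma=5.0/3.0, mu=1.22):
    """Gas sound speed [cm/s] for temperature T [K], eq. (3.5)."""
    return math.sqrt(gamma * K_B * T / (mu * M_H))
```

With the T(z) history described above, this function directly yields the dotted curve of figure \ref{fig:TempVel} (right).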
\vskip 0.1 truecm
I now discuss the relative velocity $v_{rel}$ between the dark matter PBH and the baryons throughout cosmic history.
In the radiation-dominated phase of the universe at z>1089, the dark matter is already hierarchically clustering under the influence of its own gravity. The sound speed of the photon-baryon fluid is very high, of order one third of the velocity of light, and thus the normal matter undergoes baryonic acoustic oscillations \citep{1970Ap&SS...7....3S,1970ApJ...162..815P}. This leads to a spatial separation between baryons and dark matter and thus to a Gaussian distribution of relative velocities with an average around $\langle v_{rel} \rangle$$\approx$30 km/s \citep[see][]{2010PhRvD..82h3520T,2014IJMPD..2330017F}. At z$\approx$1089, when electrons and protons combine and the universe becomes transparent, the sound speed of the gas dramatically drops to $\sim$6 km/s. The dark matter PBH kinematically decouple from the baryons and their relative velocities become highly supersonic. In the linear growth phase of the universe, at scales larger than the gas Jeans-length, the dark matter and the baryons fall in the same gravitational potentials of the cosmic web and thus their relative velocity decreases with the cosmic expansion:
\begin{equation}
\langle v_{rel} \rangle_{linear} \approx 30~{\frac {1+z} {1000}}~{\rm km~s}^{-1}.
\end{equation}
\vskip 0.1 truecm
This relation is shown as the right branch of the dashed line in figure \ref{fig:TempVel} (right), above redshifts $z\gtrsim20$.
From this figure it becomes apparent that between recombination and re-ionization the PBH move with highly supersonic, but decreasing, velocities through the gas, due to the decaying sound waves. As we will see below, the contribution of PBH to the cosmic backgrounds has its maximum in this branch of the velocity curve. At lower redshifts, at scales smaller than the gas Jeans-length, the hierarchical clustering becomes non-linear and baryons falling into growing dark matter halos are virialized. As a consequence, the velocity dispersion between dark matter and gas increases again towards lower redshifts, scaling as $M_{Halo}^{1/3}$, where $M_{Halo}$ is the mass of the dark matter halo becoming non-linear. I used two different methods to estimate the average virial velocity for redshifts z$\lesssim$20. First, the Millennium Simulation run described in \cite{2005Natur.435..629S} gives the mass function of dark matter halos with halo masses $M_{Halo}$>$10^{10} M_\odot$ for five different redshifts between z=10 and z=0. I extrapolated these simulated mass functions to lower masses ($M_{Halo}$>10$^3 M_\odot$) using the empirical universal halo mass function shape found through simulations by \cite{2013MNRAS.433.1230W}. For every mass bin I determined the virial velocity according to the calibration of the velocity dispersion as a function of halo mass described in \cite{2013MNRAS.430.2638M}, and then calculated the average for each epoch. These virial velocities are shown as crosses in figure \ref{fig:TempVel} (right). The extrapolation to halo masses as small as $M_{Halo}>10^3 M _\odot$ is rather uncertain, both for the mass function and the velocity dispersion, because the cosmological simulations do not have a fine enough mass resolution at this scale. Also, the velocity dispersion relevant for Bondi capture onto PBH is determined by the smallest mass scales becoming non-linear at any redshift.
A second possibility to calculate the relative velocities in the non-linear phase is therefore to determine the velocity dispersion directly from the dark matter power spectrum and integrate this over the smallest non-linear scales. This calculation has been performed by M. Bartelmann (2020, priv. comm.), adopting the normalization of the primordial power spectrum of $\sigma_8$=0.8. The relative velocity in the non-linear regime can be approximated by
\begin{equation}
\langle v_{rel} \rangle _{nonlinear} \approx 620~(1+z)^{-2.3}~{\rm km~s}^{-1},
\end{equation}
and is shown as the left branch ($z\lesssim20$) of the dashed line in figure \ref{fig:TempVel} (right). At z=2 the cluster velocity dispersion agrees with this estimate, but it systematically overestimates the small-scale velocity dispersion at larger redshifts.
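Equations (3.6) and (3.7) can be combined into a simple piecewise estimate of the mean relative velocity. This is an illustrative sketch with the branch switch placed at z=20; a consistency check shows that the two branches indeed cross close to that redshift:

```python
def v_rel(z):
    """Mean DM-baryon relative velocity [km/s], eqs. (3.6) and (3.7)."""
    if z >= 20.0:
        return 30.0 * (1.0 + z) / 1000.0     # linear regime, eq. (3.6)
    return 620.0 * (1.0 + z)**(-2.3)         # non-linear regime, eq. (3.7)
```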
\vskip 0.1 truecm
Since we are interested in the total contribution of PBH to the electromagnetic radiation of the universe, we have to average over the whole Gaussian distribution of relative velocities. The Bondi accretion rate is proportional to $v_{rel}^{-3}$ (see above), and therefore smaller velocities dominate. For this particular case \cite{2017PhRvD..95d3534A} propose to replace the quadratic average of relative velocity and sound speed in Bondi's formula (3.3) above with their harmonic mean:
\begin{equation}
v_{eff} = \sqrt{ \langle v_{rel} \rangle~c_s}.
\end{equation}
This is the assumption I adopt here, and the resulting effective velocity $v_{eff}$ is shown as solid red curve in figure \ref{fig:TempVel} (right). With equation (3.8) the accretion rate becomes
\begin{equation}
\dot M = 2 \lambda \pi \rho (G M)^2 ~(\langle v_{rel} \rangle~c_s)^{-3/2}.
\end{equation}
\vskip 0.1 truecm
It is interesting to note that in the range 20<z<200 both the relative velocity and the sound speed scale linearly with (1+z). Therefore the mass accretion rate is expected to be constant in this era. It is important to understand that the redshift at which both the sound speed and the relative velocity of the gas turn around due to re-ionization and virialization, respectively, and rapidly increase towards lower redshift, is crucial for our analysis. The redshift where the minimum velocity occurs ultimately determines the maximum flux contribution of PBH accretion to the cosmic backgrounds.
\vskip 0.1 truecm
The calculation of the Bondi accretion rate in equation (3.9) requires the density $\rho$ as a function of redshift. With $\Omega_{bar}$=0.049 and $\rho = n \cdot m_H$, where $n$ is the number density of particles, I find
\begin{equation}
n = 250~\left( \frac {1+z} {1000} \right)^3~{\rm cm}^{-3}.
\end{equation}
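The coefficient in eq. (3.10) follows directly from $\Omega_{bar}$ and the critical density; a sketch assuming $\rho_{crit}\approx8.7\times10^{-30}$ g cm$^{-3}$ (i.e. H$_0\approx$68 km/s/Mpc, an assumption not stated in the text):

```python
RHO_CRIT = 8.7e-30  # critical density today [g/cm^3], assumed H0 ~ 68 km/s/Mpc
M_H = 1.6726e-24    # hydrogen atom mass [g]

def n_baryon(z, omega_bar=0.049):
    """Baryon particle number density [cm^-3], eq. (3.10)."""
    return omega_bar * RHO_CRIT / M_H * (1.0 + z)**3
```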
I define $\dot m$ as the normalized mass accretion rate $\dot m = \dot M/\dot M_{Edd} $, with the Eddington accretion rate $\dot M_{Edd}$=1.44$\times10^{17} M/M_\odot$ g s$^{-1}$. Then I can rewrite equation (3.9) into normalized quantities
\begin{equation}
\dot m = \lambda \cdot 0.321 \left(\frac {1+z} {1000} \right)^3~\left(\frac {M} {M_\odot} \right) \left( \frac {v_{eff}} {1~{\rm km~s}^{-1}} \right)^{-3}
\end{equation}
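A consistency check (my own sketch, not from the paper) that the numerical coefficient 0.321 in eq. (3.11) follows from eqs. (3.9) and (3.10) together with $\dot M_{Edd}$=1.44$\times$10$^{17}$ M/M$_\odot$ g s$^{-1}$:

```python
import math

G = 6.674e-8        # [cm^3 g^-1 s^-2]
M_SUN = 1.989e33    # [g]
M_H = 1.6726e-24    # [g]
MDOT_EDD = 1.44e17  # Eddington accretion rate per solar mass [g/s]

def mdot_norm(z, m_msun, v_eff_kms, lam=0.05):
    """Normalized accretion rate mdot = Mdot/Mdot_Edd, eq. (3.11)."""
    rho = 250.0 * ((1.0 + z) / 1000.0)**3 * M_H   # eq. (3.10) as mass density
    mdot = 2.0 * lam * math.pi * rho * (G * m_msun * M_SUN)**2 \
        / (v_eff_kms * 1e5)**3                    # eq. (3.9)
    return mdot / (MDOT_EDD * m_msun)
```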
\vskip 0.1 truecm
With a very broad PBH mass spectrum, including intermediate-mass and supermassive black holes (M$_{PBH}$>1000 M$_\odot$), it is important to include the effective viscosity due to the Hubble expansion in the early universe \citep{2007ApJ...662...53R}. The Bondi radius determines the amount of mass captured by the PBH:
\begin{equation}
r_B={\frac {G~M} {v_{eff}^2}} \approx 1.34\cdot10^{16} \left(\frac {M} {M_\odot} \right) \left( \frac {v_{eff}} {1~{\rm km~s}^{-1}} \right)^{-2}~{\rm cm}.
\end{equation}
This is shown for two different PBH masses in figure \ref{fig:RadIon} (left). The characteristic time scale for accretion is the Bondi crossing time $t_{cr}=r_B/v_{eff}$, which can be compared to the Hubble time $t_H$ at the corresponding redshift. If $t_{cr}<t_H$ there will be stationary accretion, while for Bondi crossing times larger than the Hubble time the accretion is suppressed. For every redshift we can calculate a critical PBH mass M$_{cr}$, below which the steady-state Bondi assumption can be applied. For redshifts z=1000, 200, 20 this critical mass corresponds to $log (M_{cr}/M_\odot)$=5.3, 4.8, 3.4, respectively. At redshifts below z=20, M$_{cr}$ rapidly increases to values above 10$^6$ M$_\odot$. For PBH masses close to and above M$_{cr}$ the Bondi accretion rate can be scaled by the Hubble viscosity loss shown as the dashed curve in figure 3 (left) of \cite{2007ApJ...662...53R}.
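The prefactor of eq. (3.12) and the crossing-time criterion can be sketched as follows (the Hubble time $t_H$(z) must be supplied separately to apply the $t_{cr}<t_H$ test):

```python
G = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33  # solar mass [g]

def bondi_radius(m_msun, v_eff_kms):
    """Bondi radius [cm], eq. (3.12)."""
    return G * m_msun * M_SUN / (v_eff_kms * 1e5)**2

def crossing_time(m_msun, v_eff_kms):
    """Bondi crossing time t_cr = r_B / v_eff [s]."""
    return bondi_radius(m_msun, v_eff_kms) / (v_eff_kms * 1e5)
```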
\vskip 0.1 truecm
Inserting $v_{eff}$ from equation (3.8) and figure \ref{fig:TempVel} (right) into equation (3.11), assuming an accretion eigenvalue $\lambda$=0.05 and applying the above Hubble viscosity correction, I can finally calculate the normalized accretion rate as a function of redshift and PBH mass. The results are shown in figure \ref{fig:mdoteff} (left). For PBH masses smaller than $\sim$1000 M$_\odot$ the normalized accretion rate is roughly constant in the redshift range 20<z<200, because the density and velocity dependence on redshift in equation (3.9) roughly cancel out (see also the lower panel of figure 4 in \cite{2017PhRvD..95d3534A}). At z<20, $\dot m$ drops dramatically because of the increase of the effective velocity. PBH masses larger than $\sim10^4$ M$_\odot$ reach accretion rates close to the Eddington limit at z$\gtrsim$100, but are significantly affected by the Hubble viscosity at z$\gtrsim$20. For all PBH masses the accretion rate is small enough that the growth of the PBH population can be neglected over cosmic time (PBH with masses in the range 10$^{5-7}$ M$_{\odot}$ accrete about 0.5--2\% of their mass until z>20).
\begin{figure*}[htbp]
\includegraphics[width=.49\textwidth]{figures/mdot.jpg}
\includegraphics[width=.49\textwidth]{figures/Efficiency.jpg}
\caption{Left: Normalized accretion rate onto PBH with masses in the range 0.1--10$^7$ M$_\odot$ as a function of redshift. Right: Radiative efficiencies derived from the accretion rates, assuming the hot accretion flow model of \cite{2012MNRAS.427.1580X} with a viscous heating parameter $\delta$=0.5.}
\label{fig:mdoteff}
\end{figure*}
\begin{figure*}[htbp]
\includegraphics[width=.49\textwidth]{figures/Spectra_mdot.jpg}
\includegraphics[width=.49\textwidth]{figures/Spectra_M.jpg}
\caption{Spectra of the hot disk accretion flow (ADAF) from \citep{2014ARA&A..52..529Y} with a viscous heating parameter $\delta$=0.5, divided by the normalized accretion rate. Left: accretion onto a 10 M$_\odot$ black hole for different accretion rates, as indicated. Right: same for an accretion rate of $log(\dot m)$=--1.6 but different black hole masses (as indicated).}
\label{fig:Spectra}
\end{figure*}
\section{Accretion spectrum and radiative efficiency}
For the accretion flow and the electromagnetic emission mechanism I follow \cite{2017PhRvD..96h3524P,2019PhRvD.100d3540M} and assume the formation of an accretion disk. Accretion rates close to the Eddington limit will lead to the formation of a standard Shakura-Sunyaev disk \citep{1973A&A....24..337S}, which has a canonical radiative efficiency $\eta\approx0.1$. For much lower accretion rates $\dot m \ll 1$ an advection-dominated hot accretion flow \citep{2014ARA&A..52..529Y} is expected, with a significantly lower radiative efficiency \citep{2012MNRAS.427.1580X}, roughly scaling as $\eta\propto\dot m$. Figure \ref{fig:Spectra} shows hot accretion flow spectra for black holes from \cite{2014ARA&A..52..529Y} with a viscous heating parameter $\delta$=0.5, normalized by the Eddington luminosity and the mass accretion rate. The left graph shows the radiation from a 10 M$_\odot$ black hole at different mass accretion rates. The right graph shows the spectra of black holes with different masses in the range 10--10$^9$ M$_\odot$ at a mass accretion rate $log(\dot m)$=--1.6.
\vskip 0.1 truecm
It is important to understand that for advection-dominated accretion flows not all of the matter entering the Bondi radius will actually reach the black hole. This is due to feedback mechanisms acting on the accreted gas, e.g. producing outflows or jets. The advection-dominated flow models of \cite{2012MNRAS.427.1580X,2014ARA&A..52..529Y} therefore find a radial dependence of the mass accretion rate $\dot M\propto R^\alpha$, typically with $\alpha\sim0.4$. Within a radius of about 10 R$_S$, where $R_S=2GM/c^2$ is the Schwarzschild radius, the accretion flow more closely follows the standard Shakura-Sunyaev description of a constant accretion rate with radius down to the last stable orbit ($\sim3R_S$). In terms of the classical Bondi description of quasi-spherical capture, the loss of accreted matter can be associated with the accretion eigenvalue:
\begin{equation}
\lambda\approx \left( \frac {10 R_S} {R_D} \right)^\alpha,
\end{equation}
where $R_D$ is the outer radius of the accretion disk formed. For $\alpha$=0.4, the value of $\lambda$=0.05 chosen for the analysis in this paper therefore corresponds to an outer disk radius of $R_D\sim$2$\times$10$^4$ $R_S$, about 8 orders of magnitude smaller than the Bondi radius. In this picture the accretion flow on large scales follows the Bondi quasi-spherical flow for most of the radial distance, until the advection-dominated accretion disk is formed.
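Inverting eq. (4.1) for the outer disk radius is a one-liner and reproduces the quoted $R_D\sim$2$\times$10$^4$ $R_S$ for $\lambda$=0.05 and $\alpha$=0.4 (a simple sketch):

```python
def outer_disk_radius_rs(lam, alpha=0.4):
    """Outer disk radius R_D in units of the Schwarzschild radius R_S,
    obtained by inverting eq. (4.1): R_D = 10 / lam**(1/alpha)."""
    return 10.0 / lam**(1.0 / alpha)
```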
\vskip 0.1 truecm
The radiative efficiency for the ADAF spectra in figure \ref{fig:Spectra} is the integral over these curves and has been calculated through numerical simulations by \cite{2012MNRAS.427.1580X}. For this work I use a digitized version of the highest efficiency curve in their figure 1, with a viscous heating parameter $\delta$=0.5\footnote{Note that the definition of $\dot m$ used by these authors differs by a factor of 10 from the one adopted in this analysis.}. A maximum radiative efficiency of $\eta$$\sim$0.08 is achieved for log($\dot m$)>--1.6. We can now calculate the radiative efficiency for every mass and redshift bin from the normalized accretion rate in figure \ref{fig:mdoteff} (left). The result is shown in figure \ref{fig:mdoteff} (right). It turns out that above redshifts z$\gtrsim$20 and for black hole masses M$_{PBH}$>100 M$_\odot$, which dominate the contribution to the extragalactic background light, the radiative efficiencies are relatively large (typically >3\%).
\begin{figure*}[htbp]
\includegraphics[width=.49\textwidth]{figures/L_Weight_M.jpg}
\includegraphics[width=.49\textwidth]{figures/L_Weight_z.jpg}
\caption{Density-weighted bolometric luminosity of single PBH as a function of mass for different redshifts indicated (left), and as a function of redshift for different mass bins indicated (right).}
\label{fig:Lbol}
\end{figure*}
\begin{figure*}[htbp]
\includegraphics[width=.49\textwidth]{figures/F_Weight_M.jpg}
\includegraphics[width=.49\textwidth]{figures/F_Weight_z.jpg}
\caption{Density-weighted bolometric flux of single PBH as a function of mass for different redshifts indicated (left), and as a function of redshift for different mass bins indicated (right).}
\label{fig:Fbol}
\end{figure*}
\vskip 0.1 truecm
We now have the ingredients to calculate the bolometric luminosity and flux expected from the baryon accretion onto the assumed PBH mass spectrum over cosmic time. For every black hole of mass M$_{PBH}$ I calculate the expected bolometric luminosity L$_{bol}=\dot m~\eta~L_{Edd}$, where L$_{Edd}$=1.26$\times$10$^{38}~M_{PBH}/M_{\odot}$ erg/s is the Eddington luminosity, and the normalized mass accretion rate $\dot m$ as well as the radiative efficiency $\eta$ are taken from the data in figure \ref{fig:mdoteff}. In every mass bin, the number density of PBH relative to that of 1 M$_{\odot}$ PBH is n$_{PBH}$=f$_{PBH}$/M$_{PBH}$, where f$_{PBH}$ is the PBH mass function from figure \ref{fig:fPBH}. For every mass and redshift bin I thus multiply the bolometric luminosity by this relative number density in order to obtain the density-weighted luminosity $\langle L_{bol} \rangle^*$ for an equivalent PBH of 1 M$_\odot$. This quantity is shown in figure \ref{fig:Lbol} as a function of PBH mass (left) and redshift (right). It shows that the largest contribution to the PBH luminosity over cosmic time comes from PBH in the mass range M$_{PBH}$=10$^{3-7}$ M$_\odot$ at redshifts z>100. The Chandrasekhar PBH mass peak is subdominant in this representation. The total PBH luminosity deposited in the baryonic gas at high redshifts is important for the pre-ionization and X--ray heating of the intergalactic medium discussed in section 6.
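The density weighting described above can be sketched in a few lines. Note that the explicit mass factor cancels between L$_{Edd}\propto$M and the 1/M$_{PBH}$ weight, so $\langle L_{bol}\rangle^*$ depends on mass only through $\dot m$(M) and $\eta$(M); the input values below are hypothetical placeholders:

```python
L_EDD_PER_MSUN = 1.26e38  # Eddington luminosity per solar mass [erg/s]

def weighted_l_bol(mdot, eta, f_pbh, m_msun):
    """Density-weighted bolometric luminosity <L_bol>* [erg/s] for an
    equivalent 1 M_sun PBH: L_bol times n_PBH = f_PBH / M_PBH."""
    l_bol = mdot * eta * L_EDD_PER_MSUN * m_msun  # L_bol = mdot * eta * L_Edd
    return l_bol * f_pbh / m_msun                 # weight by f_PBH / M_PBH
```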
\vskip 0.1 truecm
To calculate the contribution of PBH accretion to the extragalactic background light we need to convert the density-weighted luminosities in figure \ref{fig:Lbol} to bolometric fluxes using the luminosity distance $D_L$ at the corresponding redshift: $\langle F_{bol} \rangle^*$=$\langle L_{bol} \rangle^*$/(4$\pi~D_L^2$). This quantity is shown in figure \ref{fig:Fbol} as a function of PBH mass (left) and redshift (right). It shows that the largest contribution to the extragalactic surface brightness is produced at a redshift z$\approx$20 by PBH in the mass range M$_{PBH}$=10$^{2-5}$ M$_\odot$, with a similar contribution from the Chandrasekhar mass peak. SMBH at M$_{PBH}\sim10^{6.5}$ M$_\odot$ make a notable contribution around z$\sim$10.
\section{The contribution of PBH to the extragalactic background light}
\vskip 0.1 truecm
To calculate the surface brightness per redshift shell in a particular observed frequency band [$\nu_1$;$\nu_2$] of the electromagnetic spectrum, I take into account the spectral shape and the fraction of the radiation falling into the rest frame frequency band [$\nu_1$/(1+z);$\nu_2$/(1+z)]. The exact spectral shape is not so important for this derivation; it is mainly used to calculate the bolometric corrections, i.e. the fraction of the total luminosity falling into the various frequency bands as a function of redshift. The ADAF spectra in figure \ref{fig:Spectra}, in particular those at high $\dot m$ values, can be approximated by power laws with an exponential cutoff at $\sim$200 keV. Following \cite{2017PhRvD..96h3524P} and \cite{2019PhRvD.100d3540M}, I assume a power law slope of --1 (corresponding to a flat line in figure \ref{fig:Spectra}). Below a critical frequency $\nu_c$ the power law spectrum is cut off by synchrotron self-absorption into a power law with a steep slope of approximately +1.86. As can be seen in figure \ref{fig:Spectra} (right), $\nu_c$ depends on M$_{PBH}$ and can be approximated by log($\nu_c$)$\approx$14.82--0.4log(M$_{PBH}$/M$_\odot$). The bolometric corrections are then obtained by integrating the analytic normalized spectra over the observed frequency bands. For the 2--5$\mu$m band we have to consider in addition the Lyman-$\alpha$ break, which produces a sharp cutoff at z$\gtrsim$30 \citep[see e.g.][]{2013MNRAS.433.1556Y,2002MNRAS.336.1082S}. These bolometric corrections are shown in figure \ref{fig:kcorrSB} (left) for the 2--5$\mu$m NIR band, the 0.5--2 keV and the 2--10 keV X--ray bands, respectively.
To predict the surface brightness of all PBH across cosmic time in these observed frequency bands, the total flux per PBH in figure \ref{fig:Fbol} (right) has to be multiplied by the bolometric correction and the PBH surface density in a particular redshift shell. Using the critical mass density of the universe $\rho_{crit}$=1.26$\times10^{20} M_\odot~{\rm Gpc}^{-3}$ and the dark matter density $\Omega_{DM}$=0.264, as well as the reference mass of 1 M$_\odot$, a PBH space density of $n_{PBH} = 3.32\times10^{19} (1+z)^3~{\rm Gpc}^{-3}$ is obtained. For every redshift shell [z+$\Delta$z] the PBH space density is multiplied by the volume of the shell [V(z+$\Delta$z)--V(z)] and divided by the full-sky solid angle of 4$\pi$ (in deg$^2$) to obtain the number of PBH per deg$^2$. Figure \ref{fig:kcorrSB} (right) shows the derived surface brightness as a function of redshift (per $\Delta$z=1 interval) for the three spectral bands considered here. The emission in all three bands peaks around z$\approx$20 with a FWHM of $\Delta$z$\approx$[-3;+6].
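The per-shell bookkeeping can be sketched with a flat $\Lambda$CDM distance integral. Here H$_0$=70 km/s/Mpc, $\Omega_m$=0.3 and $\Omega_\Lambda$=0.7 are my assumed values (not taken from the text), and the comoving density 3.32$\times$10$^{19}$ Gpc$^{-3}$ is combined with comoving shell volumes, which is equivalent to using the (1+z)$^3$-scaled proper density with proper volumes:

```python
import math

C_KMS = 299792.458   # speed of light [km/s]
H0 = 70.0            # assumed Hubble constant [km/s/Mpc]
OMEGA_M, OMEGA_L = 0.3, 0.7   # assumed flat LCDM parameters

def comoving_distance_mpc(z, steps=2000):
    """Line-of-sight comoving distance [Mpc], trapezoidal integration."""
    dz = z / steps
    total = 0.0
    for i in range(steps + 1):
        zi = i * dz
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight / math.sqrt(OMEGA_M * (1.0 + zi)**3 + OMEGA_L)
    return C_KMS / H0 * total * dz

def pbh_per_deg2(z, dz=1.0, n0_gpc3=3.32e19):
    """Number of (1 M_sun equivalent) PBH per deg^2 in the shell [z, z+dz]."""
    v_in = 4.0 / 3.0 * math.pi * (comoving_distance_mpc(z) / 1000.0)**3
    v_out = 4.0 / 3.0 * math.pi * (comoving_distance_mpc(z + dz) / 1000.0)**3
    return n0_gpc3 * (v_out - v_in) / 41253.0   # full sky ~ 41253 deg^2
```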
\begin{figure*}[htbp]
\includegraphics[width=.49\textwidth]{figures/BolometricCorrection.jpg}
\includegraphics[width=.49\textwidth]{figures/SurfaceBrightness.jpg}
\caption{Left: The bolometric correction, i.e. the fraction of the total luminosity falling into the respective observed frequency band as a function of redshift, for the 2--5$\mu$m NIR band, as well as the 0.5--2 and 2--10 keV X--ray bands. Right: Predicted surface brightness of the PBH in the same observed bands as a function of redshift (per $\Delta$z=1). }
\label{fig:kcorrSB}
\end{figure*}
\vskip 0.1 truecm
The curves in figure \ref{fig:kcorrSB} (right) can now be integrated to predict the total PBH contribution to
the extragalactic background light as SB$_{2-5\mu}$$\approx$10$^{-13}$,
SB$_{0.5-2keV}$$\approx$1.9$\times$10$^{-13}$, and SB$_{2-10keV}$$\approx$1.3$\times$10$^{-13}$ erg cm$^{-2}$
s$^{-1}$ deg$^{-2}$, respectively.
The minimum amount of X--ray surface brightness necessary to explain the CXB/CIB cross-correlation signal observed by \cite{2013ApJ...769...68C} in the 0.5--2 keV band has been discussed by \cite{2019MNRAS.489.1006R}. This is $9 \times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$, corresponding to roughly 1\% of the total CXB signal in this band. The 0.5--2 keV PBH contribution predicted for an accretion eigenvalue of $\lambda$=0.05 in equation (3.11) is thus about a factor of 2 larger than the observed CXB fluctuation signal, which could well be consistent, given the coherence between the CXB and CIB signals. As discussed above, there is a marginally significant diffuse CXB remaining after accounting for all discrete source contributions \citep{2006ApJ...645...95H,2017ApJ...837...19C}. Extrapolating into the X--ray bands considered here, this residual flux corresponds to $\approx$(7$\pm$3) and $\approx$(9$\pm$20)$\times10^{-13}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$ in the 0.5--2 keV and 2--10 keV band, respectively. Assuming the $\lambda$=0.05 value, the predicted PBH contribution is therefore well below the upper limit (15--25\%) of any unresolved component left in the CXB. {\bf The main result of this paper is therefore, that the assumed PBH population for the dark matter can indeed explain the X--ray fluctuation signal, with a Bondi accretion eigenvalue of $\lambda$=0.05.}
\vskip 0.1 truecm
The flux measured in the 2--5$\mu$m CIB fluctuations at angular scales >100" is about 1 nW m$^{-2}$ sr$^{-1}$ \citep{2007ApJ...654L...1K}, or 3$\times10^{-10}$ erg cm$^{-2}$ s$^{-1}$. The cross-correlated CIB/CXB fluctuations contribute about 10\% to the total CIB fluctuations \citep{2013ApJ...769...68C}, i.e. 3$\times10^{-11}$ erg cm$^{-2}$ s$^{-1}$. Therefore the predicted PBH contribution to these CIB fluctuations is only about 0.5\% for $\lambda$=0.05. It is argued in \cite{2016ApJ...823L..25K} that PBH in the early universe could amplify the cosmic power spectrum at small spatial scales (see below). Together with the pre-ionization of the intergalactic medium discussed below, the PBH can therefore significantly increase the associated star formation. The NIR emission in this picture would then be dominated by early star formation associated with PBH instead of direct PBH emission.
\section{Discussion}
\subsection{Linear versus post-linear growth}
In this simplified treatment I only consider the linear evolution of the power spectrum above the virialization redshift around z$\approx$20 (see figure \ref{fig:TempVel} right). On sufficiently large scales the initial power spectrum has been very precisely determined as nearly scale invariant with overdensities of 10$^{-4}$ \citep{2018arXiv180706209P}, and the PBH density field is expected to follow the standard adiabatic perturbations. On small scales the power spectrum is only poorly constrained and could be significantly amplified by the discrete nature of the PBH population itself \citep{2016ApJ...823L..25K,2018MNRAS.478.3756C,2019PhRvD.100h3528I}. Poisson variations in the density of PBH will introduce non-linear growth of density fluctuations and the corresponding velocity dispersion already well before the virialization redshift z$\sim$20 discussed above. However, from numerical simulations \cite{2019PhRvD.100h3528I} conclude that the nonlinear velocity perturbations introduced by >20 M$_\odot$ PBH are too small to dominate the relative velocities between baryons and PBH at z$\gtrsim$100 \citep[see also][]{2019PhRvD.100h3016H}. Non-linear effects nevertheless become more important at lower redshifts (see above) and could effectively reduce the Bondi capture rate.
\subsection{Magnetic fields in the early universe}
The accretion mechanism assumed in the Bondi capture model only works if there is a rough equipartition between the kinetic energy and magnetic fields in the accreted gas, as is the case in the turbulent interstellar medium of our Galaxy. It is therefore justified to ask whether this mechanism can also work at high redshifts, where the existence and magnitude of magnetic fields are still unclear. Magnetic fields are present at almost every scale of the low redshift universe, from stars and planets to galaxies and clusters of galaxies and possibly even in the intergalactic medium in voids of the cosmic web, as well as in high-redshift galaxies. \cite{2001PhR...348..163G} and \cite{2016RPPh...79g6901S} review the observations and possible origin of magnetic fields. There is a surprising similarity between the relatively strong magnetic fields measured in our own Galaxy (0.3--0.4 nT) and other nearby galaxies ($\sim$1 nT), the magnetic fields measured in clusters of galaxies (0.1--1 nT), and those measured in high redshift galaxies ($\sim$1 nT), when the universe was only about 1/3 of its current age. There are even indications of magnetic fields of order $\gtrsim$10$^{-20}$ T in cosmic voids derived from the gamma ray emission of blazars \citep{2013ApJ...771L..42T}.
\vskip 0.1 truecm
One can come to the conclusion that the origin of cosmic magnetism on the largest scales of galaxies, galaxy clusters and the general intergalactic medium is still an open problem \citep{2018arXiv180903543S}. It is usually assumed that primordial or cosmic seed fields are amplified over time through the galactic dynamo effect to produce the rather strong fields observed in local galaxies. In this picture it is, however, unclear how similar fields can be created in such different settings (e.g. clusters) and at different cosmic times (high-redshift galaxies). An interesting possibility is therefore that cosmic magnetic fields could be remnants from the early universe, or created in a process without galactic dynamos.
Assuming equipartition, the energy density in the CMB photons would correspond to a magnetic field of about 0.3 nT. Magnetic fields of $10^{-20}$ T, as observed in cosmic voids today, therefore correspond to only a minute fraction, of order $10^{-10}$, of this equipartition field strength in the early universe.
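The 0.3 nT equipartition estimate is easy to verify numerically. The minimal Python sketch below (standard SI constants; not code from the paper) computes the field whose energy density matches that of the present-day CMB photons:

```python
import math

# SI constants (CODATA values)
a_rad = 7.5657e-16    # radiation constant [J m^-3 K^-4]
mu0 = 4e-7 * math.pi  # vacuum permeability [T m A^-1]
T_cmb = 2.725         # present-day CMB temperature [K]

def equipartition_field(u):
    """Field strength whose energy density B^2/(2 mu0) equals u [J m^-3]."""
    return math.sqrt(2.0 * mu0 * u)

u_cmb = a_rad * T_cmb**4           # CMB photon energy density, ~4.2e-14 J m^-3
B_eq = equipartition_field(u_cmb)  # ~3.2e-10 T, i.e. ~0.3 nT as quoted
```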
\begin{figure*}[htbp]
\includegraphics[width=.49\textwidth]{figures/Radii.jpg}
\includegraphics[width=.49\textwidth]{figures/Ionization.jpg}
\caption{Left: The Bondi radius for a 10$^4$ M$_\odot$ (thin blue) and 1 M$_\odot$ (thick blue) PBH compared to the proton (red) and electron (green) Larmor radius, assuming a magnetic field of B=$10^{-20}$ T, as observed in local galaxy voids. Right: Baryon ionization/heating fraction $\chi_e$ as a function of redshift. The thin dash-dotted line shows the residual ionization left over from the radiation dominated era \cite{2002MNRAS.330L..43B}. The red curve shows the ionization fraction from UV photons produced by accreting PBH. The blue curve shows the corresponding heating fraction by >1 keV X--ray photons. The thick dashed black line shows one of the models consistent with the {\em Planck} satellite data \citep{2018arXiv180706209P} (see text). The green hatched area shows the range of high--redshift ionization fractions considered in \cite{2018RvMP...90b5006K}.}
\label{fig:RadIon}
\end{figure*}
\vskip 0.1 truecm
Here I argue that PBH could play a role in amplifying early magnetic seed fields and sustaining them until the epoch of galaxy formation. I compare the Bondi radius in eq. (3.12) and figure \ref{fig:RadIon} (left) with the Larmor radius
\begin{equation}
r_L={\frac {m~v_\bot} {|q|~B}},
\end{equation}
which determines the gyro motion of particles moving through a magnetic field. Here $m$ is the mass of the particle (either proton or electron), $v_\bot$ is the velocity component of the particle perpendicular to the magnetic field, $|q|$ is the absolute electric charge of the particle, and $B$ is the magnetic field. Assuming a seed field of $B$=10$^{-20}$ T and approximating the velocity with the sound speed $v_\bot$$\approx$$c_s$ yields the gyro radius for both protons and electrons. The proton gyro radius is about a factor of 2000 larger than the electron gyro radius.
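These gyro radii can be sketched with a few lines of Python. The sound speed below is only an illustrative placeholder; the ratio of the two radii is independent of it and equals $m_p/m_e \approx 1836$, the ``factor of 2000'' quoted above:

```python
# SI constants
m_p = 1.6726e-27   # proton mass [kg]
m_e = 9.1094e-31   # electron mass [kg]
q_e = 1.6022e-19   # elementary charge [C]

def larmor_radius(m, v_perp, B):
    """Gyro radius r_L = m v_perp / (|q| B) for a singly charged particle."""
    return m * v_perp / (q_e * B)

B_seed = 1e-20   # assumed seed field [T], as in the text
v_s = 6e3        # illustrative sound speed of 6 km/s (placeholder value)

r_p = larmor_radius(m_p, v_s, B_seed)  # proton gyro radius [m]
r_e = larmor_radius(m_e, v_s, B_seed)  # electron gyro radius [m]
ratio = r_p / r_e                      # = m_p/m_e, about 1836
```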
\vskip 0.1 truecm
Figure \ref{fig:RadIon} (left) shows the Bondi radius as well as the proton and electron Larmor radii as a function of redshift. If the gyro radius is smaller than the Bondi radius, the respective particle is easily accreted by the PBH. If, however, the gyro radius is larger than the Bondi radius, the particle will at first not be accreted, but rather spiral around the PBH. From figure \ref{fig:RadIon} we see that, for the assumed magnetic field strength, at redshifts z$\gtrsim70$ and PBH masses in the range M$_{PBH}$$\approx$0.3--500 M$_\odot$ the proton Larmor radius is larger than the Bondi radius, while the electron Larmor radius is smaller than the Bondi radius. There is still a substantial fraction of residual electrons and protons/helium ions left over from the era before recombination (see the dash-dotted curve in figure \ref{fig:RadIon} right, from \cite{2002MNRAS.330L..43B}). The electrons therefore have no problem being accreted, while for certain PBH masses the protons resist accretion. This creates a net electric current, which in turn increases the average magnetic field strength until the proton gyro radius becomes smaller than the Bondi radius. In this way the PBH can amplify the average magnetic field. The supersonic motion between the baryon gas and the PBH discussed above is expected to be coherent over large scales (of the order of Mpc) and can therefore induce large-scale ordered magnetic fields. A further magnetic field amplification occurs, as discussed above, in the accretion funnel, when magnetic fields are dissipated through reconnection and ejected with the plasma. In a sense, the ubiquitous PBH can possibly create their own magnetic field and distribute it throughout the universe. It is, however, plausible to assume that magnetic fields in the early universe were smaller than today, and that the fraction of ionized baryons was smaller as well. This could also explain the rather small Bondi accretion eigenvalue required to match the observations.
\subsection{Re-Ionization}
\label{subsec:Reionization}
Next I turn to the contribution of PBH accretion to the re-ionization and re-heating history of the universe. At z$\approx$1089, when the photons decoupled from the baryons and formed the CMB radiation, the universe became predominantly neutral. Afterwards the universe entered a long period of “darkness”, in which the residual ionization left over from the primordial photon-baryon fluid diminished (see figure \ref{fig:RadIon} right), the background photons cooled down, and any higher-frequency emission was quickly absorbed in the atomic gas. In the model described here the first sources to illuminate the “dark ages” would be the PBH accreting from the surrounding gas. Their ultraviolet emission, above the Hydrogen ionization energy of 13.6 eV, would start to re-ionize small regions around each PBH. However, in the beginning the density of the surrounding gas is still so high that the ionized matter quickly recombines. As long as the re-combination time is much shorter than the Hubble time at the corresponding epoch, UV photons from the PBH cannot penetrate the surrounding medium, but instead produce an ionized bubble growing with time. In this type of ionization equilibrium the number of ionizing photons $N_{ion}$ required to overcome recombination is given as the ratio between the Hubble time $t_H(z)$ and the recombination time $t_{rec}(z)$ at the particular epoch, and can be derived from equations (2) and (3) in \cite{2004MNRAS.350..539R} as
\begin{equation}
N_{ion} = t_H/t_{rec} = \max\left[1,~0.59~\left( \frac {1+z} {7} \right)^{1.5}\right].
\end{equation}
At a redshift z=1000, $N_{ion}$ is about 1000, and reaches a level of unity at z$\lesssim$10 for the assumed set of parameters. For this calculation I ignore clumping of the ionized gas. In reality the effective clumping factor is relatively large for reionization at high redshift because the ionizing sources are more numerous in the filaments of the cosmic web, but must also ionize a correspondingly larger fraction of dense gas in the filaments, and thus ionization is slowed down. At lower redshift, when molecular gas and stars have already formed, not all UV photons will escape the dense regions. The effective escape fraction is one of the largest uncertainties in our current understanding of re-ionization. For simplicity, I assume an escape fraction $f_{esc}$=0.1 for UV photons, and $f_{esc}$=1 for X--ray photons, independent of redshift.
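Equation (6.3) is straightforward to evaluate; a minimal Python sketch reproducing the two limits quoted above:

```python
def n_ion(z):
    """Ionizing photons per baryon needed to balance recombination,
    N_ion = max[1, 0.59 ((1+z)/7)^1.5], eq. (6.3)."""
    return max(1.0, 0.59 * ((1.0 + z) / 7.0) ** 1.5)

# n_ion(1000) is about 1000; below z of roughly 10 the max() clips it to 1
```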
\vskip 0.1 truecm
To calculate the history of pre-ionization by PBH I integrate the above normalized ADAF model for frequencies log($\nu$)>15.52 Hz, corresponding to the hydrogen ionization energy of 13.6 eV. To calculate the number of ionizing photons per PBH of reference mass 1 M$_\odot$ I take the density-weighted luminosity $\langle L_{bol} \rangle^*$ from figure \ref{fig:Lbol} (right). To determine the average space density of ionizing photons I multiply with the average comoving space density of PBH (assuming the reference mass 1 M$_\odot$):
\begin{equation}
n_{PBH} = 1.06\times10^{-54}~\left( \frac {1+z} {1000} \right)^3~{\rm cm}^{-3},
\end{equation}
and with the escape fraction $f_{esc}$ and finally divide by $N_{ion}$ from eq. (6.3) and the average density of baryons given in equation (2.10) to determine the ionization rate of baryons over cosmic time.
\vskip 0.1 truecm
The red curve in figure \ref{fig:RadIon} (right) shows the cumulative ionization fraction $\chi_e$ as a function of redshift for the accretion eigenvalue $\lambda$=0.05. A maximum cumulative ionization fraction of $\sim$2.8\% is reached at a redshift z$\approx$10. This can be compared to one of the recent models determined from the {\em Planck} satellite data \citep{2018arXiv180706209P}. Here the 1$\sigma$ upper percentile of the {\em FlexKnot} model in their figure 45, which is consistent with the ionization optical depth determined from the most recent {\em Planck} data, is shown as a dashed curve. A high-redshift contribution to the ionization history of the universe has also been discussed by \cite{2018PhRvD..98f3514H} and \cite{2018RvMP...90b5006K}. The range of $\chi_e$ values assumed in the latter work is shown as the green hatched region in figure \ref{fig:RadIon} (right). For the choice of $\lambda$=0.05, the UV emission from the PBH population assumed in the toy model is therefore fully consistent with the observational constraints from {\em Planck}.
\subsection{X--ray heating}
The role of X--ray heating in shaping the early universe has been discussed by \cite{2013MNRAS.431..621M}. Compared to UV photons, X--ray photons have a much larger mean free path and can therefore ionize regions far away from the source. In addition, most of the X--ray energy gets deposited into heating up the gas. In order to estimate the amount of X--ray heating of the gas I apply the same mechanism as for the UV photons above, but integrate the above ADAF model for frequencies log($\nu$)>17.68 Hz, corresponding to 2 keV. I assume an escape fraction of $f_{esc}$=1 and $N_{ion}$=1. The blue curve in figure \ref{fig:RadIon} (right) shows the cumulative 2 keV heating fraction per baryon as a function of redshift for the assumed accretion eigenvalue of $\lambda$=0.05. The maximum cumulative heating fraction is $\sim$1.6\%. X--rays from PBH therefore have only a small contribution to the pre-ionization of the universe as a whole, but can be responsible for a part of the pre-heating of gas observed in the ``entropy floor'' of groups of galaxies. \cite{2002A&A...383..450D} reviewed the energetics of groups and clusters of galaxies, which cannot be reproduced by simple models, where the gas density is proportional to dark matter density. \cite{1991ApJ...383..104K} and \cite{1991ApJ...383...95E} argued that the gas must have been pre-heated before falling into the cluster potential. X--ray observations of groups of galaxies with {\em ROSAT} by \cite{1999Natur.397..135P} confirmed the need for a non-gravitational entropy injection in the group gas. These authors coined the term ``entropy floor'', which amounts to an energy of about 2 keV per baryon injected into the group gas. The pre-heating of the gas by PBH, albeit only contributing to a small fraction of the total baryon content of the universe, could have played an important role in heating the high-density regions, which first formed groups and clusters.
\subsection{Cosmological 21-cm signal}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=.49\textwidth]{figures/LR_Weight_z.jpg}
\caption{Density-weighted 1.4 GHz (observed) luminosity of a single PBH as a function of mass for different redshifts indicated.}
\label{fig:LRadio}
\end{center}
\end{figure*}
The red-shifted 21-cm line can provide important new constraints on the physical processes in the early universe \citep[see e.g.][]{2016ApJ...821...59F,2019PhRvD.100d3540M}. The Experiment to Detect the Global EoR Signature ({\em EDGES}) has measured a strong, sky-averaged 21-cm absorption line profile after subtracting the Galactic synchrotron emission \citep{2018Natur.555...67B}. The signal is centered at a frequency around 78 MHz and covers a broad range in redshift z=15--20. The signal may be due to ultraviolet light from first objects in the universe altering the emission of the 21-cm line by lowering the spin temperature of neutral hydrogen relative to the CMB. However, the signal is about three times larger than that expected from the standard $\Lambda$CDM cosmology, which led some authors to suggest new dark matter physics \citep[e.g.][]{2018PhRvL.121a1101F}. Instead of new physics, an increased 21-cm radio background contribution above the CMB at the epoch around z=15--20 could also explain the {\em EDGES} signal. Indeed, \cite{2018ApJ...868...63E} estimate the additional 21-cm radio background from accretion onto putative radio-loud intermediate-mass black holes (IMBH) forming in first molecular cooling halos at redshifts z=16--30. This could be sufficient to explain the {\em EDGES} feature; however, it requires extreme assumptions about the radio loudness of the IMBH population. Instead of assuming an interpretation in terms of mini-QSOs from IMBH grown through accretion, I estimate here whether PBH accretion could have a significant contribution to the {\em EDGES} signal.
A full treatment of this effect for the PBH toy model is beyond the scope of this paper, but similar to the treatment of the PBH contribution to the CXB and CIB derived in section 5, I can estimate the PBH contribution to the observed low-frequency cosmic radio background, and thus to the EDGES signal.
\vskip 0.1 truecm
The balloon-borne double-nulled Absolute Radiometer for Cosmology, Astrophysics and Diffuse Emission ({\em ARCADE2}) instrument has measured the absolute temperature of the sky at frequencies 3, 8, 10, 30, and 90 GHz, and detected a significant excess over the CMB blackbody spectrum at a temperature of 2.731 K \citep{2011ApJ...734....5F}. Combining the {\em ARCADE2} measurements with lower frequency data from the literature, the excess brightness temperature can be characterized as a power law T$_R$=1.19 ($\nu$/1 GHz)$^{-2.62}$ K, which translates into a radio spectrum with a slope of -0.62 and a normalization of 3$\times$10$^{-22}$ W m$^{-2}$ Hz$^{-1}$ sr$^{-1}$ at 1.4 GHz. This cosmic radio synchrotron background is substantially larger than that expected from an extrapolation of the observed radio counts \citep{2018PASP..130c6001S}, and thus presents a major new challenge in astrophysics. \cite{2018ApJ...858L..17F} found that the global 21cm signal can be significantly amplified by an excess background radiation compared to the standard $\Lambda$CDM models, especially in absorption. Assuming that only 10\% of the radio synchrotron background originates at large redshifts, they predict a 21cm feature almost an order of magnitude stronger than that expected purely from the CMB. Interpolating between their calculations for 0.1\% and 10\% excess background I find that an excess high-redshift radiation field of about 5\% of the observed radio synchrotron background is sufficient to explain the {\em EDGES} findings.
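The quoted 1.4 GHz normalization follows from the excess-temperature power law via the Rayleigh-Jeans relation $I_\nu = 2 k_B T \nu^2/c^2$; a minimal numerical check (SI units):

```python
k_B = 1.380649e-23   # Boltzmann constant [J/K]
c = 2.99792458e8     # speed of light [m/s]

def excess_temperature(nu_ghz):
    """ARCADE2 excess brightness temperature, T_R = 1.19 (nu/GHz)^-2.62 K."""
    return 1.19 * nu_ghz ** -2.62

def rj_intensity(nu_hz, T):
    """Rayleigh-Jeans specific intensity [W m^-2 Hz^-1 sr^-1]."""
    return 2.0 * k_B * T * nu_hz**2 / c**2

I_14 = rj_intensity(1.4e9, excess_temperature(1.4))  # ~3e-22 in SI units
```

Since $I_\nu \propto T_R\,\nu^2 \propto \nu^{-0.62}$, this also reproduces the quoted spectral slope of $-0.62$.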
In order to calculate the expected PBH contribution to the radio background I assume that each black hole has a radio emission following the fundamental plane relation between X-ray luminosity, radio luminosity and black hole mass found by \cite{2003MNRAS.345.1057M}. I use the parameterization for radio-quiet AGN from \cite{2006ApJ...645..890W}: log(L$_R$)=0.85 log(L$_X$)+0.12 log(M$_{PBH}$), where L$_R$ is the 1.4 GHz radio luminosity in units of 10$^{40}$ erg/s, L$_X$ is the 0.1--2.4 keV X--ray luminosity in units of 10$^{44}$ erg/s, and M$_{PBH}$ is the PBH mass in units of 10$^8$ M$_\odot$.
The X--ray luminosity is calculated from the bolometric luminosity shown in figure \ref{fig:Lbol} (right). Assuming the ADAF radiation spectrum above, the fraction of the bolometric luminosity radiated in the 0.1-2.4 keV band is 0.23. For the radio spectrum I assume a power law with spectral index -0.62. This means that the bolometric correction is $\propto$(1+z)$^{0.38}$. The radio luminosities derived this way as a function of PBH mass and redshift are shown in figure \ref{fig:LRadio}. Multiplying these luminosities with the PBH density over cosmic time, converting into observed fluxes and integrating over mass and redshift I obtain a contribution of radio-quiet PBH to the observed radio background of $\sim$3$\times$10$^{-25}$ W m$^{-2}$ Hz$^{-1}$ sr$^{-1}$ at 1.4 GHz, i.e. a fraction of 0.1\% of the observed synchrotron radio background. Most of this additional radiation field is accumulated at redshifts z$\gtrsim$20. Following \cite{2018ApJ...858L..17F}, this excess radio flux would increase the depth of the 21cm absorption line only by about 30\%. If, however, some fraction of the PBH were radio-loud (e.g. 5\% with 1000 times higher fluxes), as observed in the AGN population, the 5\% excess high-redshift radio background flux necessary to explain the {\em EDGES} feature could easily be achieved by PBH.
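With the unit conventions quoted above, the fundamental-plane parameterization can be written as a small helper function (a hypothetical sketch, not code from the paper):

```python
import math

def radio_luminosity(L_X, M_pbh):
    """1.4 GHz luminosity [erg/s] of a radio-quiet source from the fundamental
    plane: log(L_R/1e40) = 0.85 log(L_X/1e44) + 0.12 log(M/1e8), where L_X is
    the 0.1--2.4 keV luminosity [erg/s] and M_pbh is in solar masses."""
    log_lr = 0.85 * math.log10(L_X / 1e44) + 0.12 * math.log10(M_pbh / 1e8)
    return 10.0 ** log_lr * 1e40
```

By construction, a source at the fiducial values $L_X = 10^{44}$ erg/s and $M = 10^8$ M$_\odot$ returns $L_R = 10^{40}$ erg/s, and the radio luminosity scales as $L_X^{0.85}$.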
\subsection{Primordial Black Holes in the Galactic Center}
Next I discuss some observational effects of the putative PBH population residing in the Galactic Center region. First, assuming a Milky Way dark matter halo of $\sim$10$^{12} M_\odot$, the PBH mass spectrum from section 2 (figure \ref{fig:fPBH}) indeed predicts about one supermassive PBH with a mass $\gtrsim10^{6.5} M_\odot$, consistent with the Sgr A$^*$ SMBH in the center of our Galaxy \citep{2010RvMP...82.3121G}. To estimate the density of dark matter and baryons in the Galactic bulge region itself, I refer to dynamical models of the Milky Way's center, using the density of red clump giant stars measured in infrared photometric surveys, as well as kinematic radial velocity measurements of M-giant stars in the Galactic bulge/bar constructed in \cite{2015MNRAS.448..713P}. From N--body simulations of stellar populations for barred spiral discs in different dark matter halos these authors were able to determine with high precision the mass in a volume of ($\pm2.2\times\pm1.4\times\pm1.2$ kpc$^3$) centered on the Galactic Bulge/Bar. The total mass is (1.84$\pm$0.07)$\times$10$^{10}$ M$_\odot$. Depending on the assumed model, about 9--30\% consists of dark matter, i.e. 1.7--5.5$\times$10$^{9}$ M$_\odot$. Applying the above PBH mass spectrum, we thus expect 5--10 intermediate-mass PBH with M$_{PBH}$>10$^4$ M$_\odot$ in the Galactic bulge region, but zero with M$_{PBH}$>10$^5$ M$_\odot$.
\vskip 0.1 truecm
Recent high-resolution observations of high-velocity compact clouds (HVCC) in the central molecular zone of our Milky Way with the Atacama Large Millimeter/submillimeter Array (ALMA) have indeed identified five promising IMBH candidates, wandering through the patchy ISM in the Galactic Center \citep[see][]{2020ApJ...890..167T}. The most compelling case is HCN–0.044–0.009, which shows two dense molecular gas streams in Keplerian orbits around a dark object with a mass M$_{IMBH}$=(3.2$\pm$0.6)$\times$10$^4$ M$_\odot$ \citep{2019ApJ...871L...1T}. The semimajor axes of these Keplerian streams are about 2 and 5$\times$10$^{17}$ cm. Another interesting case is the infrared and radio object IRS13E, a star cluster close to the Galactic Center potentially hosting an IMBH \cite{2005ApJ...625L.111S}. ALMA observations identified a rotating, ionized gas ring around IRS13E \citep{2019PASJ...71..105T}, with an orbit radius of 6$\times$10$^{15}$ cm and a rotation velocity of $\sim$230 km/s. This is thus another promising IMBH candidate with a mass of M$_{IMBH}$=2.4$\times$10$^4$ M$_\odot$.
Two of the five IMBH candidate sources in \cite{2020ApJ...890..167T} are possibly associated with X--ray sources detected in the deep {\em Chandra} images of the Galactic Center \cite{2009ApJS..181..110M}. IRS13E has the X--ray counterpart CXOGC 174539.7-290029 with an X--ray luminosity L$_{2-10 keV}$$\approx$3$\times$10$^{30}$ erg/s, and CO--0.31+0.11 has the possible X--ray counterpart CXOGC 174426.3-290816 with an X--ray luminosity L$_{2-10keV}$$\approx$4$\times$10$^{29}$ erg/s. The other three sources have X--ray upper limits in the range of several 10$^{30}$ erg/s. Assuming a bolometric correction factor of 1/30 for the 2--10 keV range, the combination of the mass accretion eigenvalue $\lambda$ and the radiative efficiency $\eta$ therefore has to be extremely small, on the order of 3$\times$10$^{-11}$. This is about a factor of 100 lower than the 2$\times$10$^{-9}$ L$_{Edd}$ derived for the Galactic Center black hole Sgr A$^*$ \cite{2014ARA&A..52..529Y}. Even assuming a very low efficiency ADAF model, a steady-state accretion solution is unlikely for these objects. The solution of this puzzle may come from the fact that the velocity and density gradients of the gas in the Galactic Center region are so large that the angular momentum forces any accreted matter into Keplerian motion well outside the classical Bondi radius \citep[see][]{2017PhRvD..96h3524P}. Indeed, the orbital periods and lifetimes of the Keplerian streams around HVCCs are in the range 10$^{4-5}$ years, and thus accretion is expected to be highly variable on very long time scales. Another possibility to halt accretion for a considerable time is the feedback created by outflows during efficient accretion events. Numerical simulations of the gas dynamics in the center of the Galaxy \cite{2015MNRAS.450..277C} show that the outflows significantly perturb the gas dynamics near the Bondi radius and thus substantially reduce the capture rate.
The net result of both these effects would be a highly variable, low duty cycle bursty accretion onto the IMBH and SMBH in the Galactic Center, consistent with the extremely low accretion efficiencies observed. The accretion limits for black holes in the mass range M$_{PBH}$=20--100 M$_\odot$ derived from deep {\em Chandra} and radio observations of the Galactic Center \cite{2019JCAP...06..026M}, are already shown in figure \ref{fig:fPBH} to be consistent with the assumed PBH mass spectrum. Recent {\em NuSTAR} observations of the Galactic Center, including the effects of gas turbulence and the uncertainties related to the dark matter density profile even further weaken these constraints \citep{2018A&A...618A.139H}. At any rate, the assumed PBH mass distribution of section 2 is fully consistent with the observational constraints for all PBH masses >20 M$_\odot$ in the Galactic Center.
\vskip 0.1 truecm
Finally, I check the PBH predictions for lower masses against the Galactic ridge X--ray emission (GRXE), an unresolved X--ray glow at energies above a few keV discovered almost 40 years ago and found to be coincident with the Galactic disk. The GRXE in the 2--10 keV band has a background-corrected surface brightness of (7.1$\pm$0.5)$\times10^{-11}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$ which was largely resolved into discrete sources \citep{2009Natur.458.1142R}, with the brightest source having an X--ray luminosity of about 10$^{32}$ erg s$^{-1}$, and the minimum detectable luminosity around 10$^{30}$ erg s$^{-1}$. The integrated emission has a strong iron line from hot plasma at 6.7 keV, and the authors interpret the X--ray emission as coming from a large population of cataclysmic variables and coronally active stars. Using the mass determination in the Galactic bulge/bar above I find that the average baryon density in this region is in the range 17--22 cm$^{-3}$. However, most of these baryons are locked up in stars. In order to estimate the physical conditions of the gas in the Galactic Bulge/Bar region I follow \cite{2007A&A...467..611F}. According to these authors, there are four phases of the interstellar medium in the Galactic center region: (1) a cold molecular phase in Giant Molecular Clouds with temperatures around 50 K and gas densities n=10$^{3.5-4}$ cm$^{-3}$ covering a volume fraction around 1\%; (2) a warm molecular phase with temperatures around 150 K and gas density n=10$^{2.5}$ cm$^{-3}$, covering a volume fraction of $\sim$10\%; (3) an atomic phase with temperatures around 500-1000 K and density $\sim$10 cm$^{-3}$, covering a volume fraction around 70\%, and (4) ionized gas with temperatures 10$^{4-8}$ K and an unknown density. Depending on the temperature of the interstellar medium, the sound speeds are in the range $c_s$=1--100 km/s. 
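The quoted mean density of 17--22 cm$^{-3}$ follows directly from the bulge/bar mass and volume given above; the Python sketch below (CGS units) assumes for simplicity that all baryonic mass is in hydrogen:

```python
# CGS constants
M_sun = 1.989e33   # solar mass [g]
m_H = 1.6726e-24   # hydrogen (proton) mass [g]
kpc = 3.0857e21    # kiloparsec [cm]

M_tot = 1.84e10 * M_sun                             # total bulge/bar mass [g]
V_box = (2 * 2.2) * (2 * 1.4) * (2 * 1.2) * kpc**3  # box volume [cm^3]

def baryon_density(f_dm):
    """Mean hydrogen number density [cm^-3] for a dark matter fraction f_dm."""
    return M_tot * (1.0 - f_dm) / (m_H * V_box)

n_lo = baryon_density(0.30)  # ~17 cm^-3 (30% dark matter)
n_hi = baryon_density(0.09)  # ~22-23 cm^-3 (9% dark matter)
```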
The stellar velocity dispersion in the central region of our Galaxy is in the range 100--140 km/s \citep{2018A&A...616A..83V}, while the dark matter velocity dispersion is about 110 km/s \citep{2002ASPC..276..453K}. In the spirit of the discussion leading up to equation 3.9 and figure \ref{fig:TempVel} (right) above, I assume an effective velocity for Bondi accretion $v_{eff}\approx$50 km/s and $\lambda$=0.1. As shown in figures \ref{fig:Lbol} and \ref{fig:Fbol}, the PBH emissivity for the assumed mass spectrum is typically dominated by objects with M$_{PBH}$>100 M$_\odot$, which were already discussed above. Indeed, calculating the Bondi accretion rates and radiative efficiencies for objects with M$_{PBH}$<100 M$_\odot$ for the four ISM phases in the Galactic Center I obtain negligible PBH contributions to the total GRXE brightness. Some individual M$_{PBH}$$\sim$100 M$_\odot$ objects in high density regions could in principle have X--ray luminosities up to $L_{2-10 keV}$=10$^{33}$ erg/s, more luminous than the brightest X--ray source detected in the Galactic Ridge survey \cite{2009Natur.458.1142R}, but taking into account the strong variability and small duty cycle expected for this class of objects, their absence in the surveys is understandable. Some of the fainter unidentified sources in the current deep X--ray surveys could indeed be accreting PBH, and future large X--ray observatories like {\em ATHENA} \citep{2013arXiv1306.2307N} or {\em LYNX} \citep{2019SPIE11118E..0KS} should be able to identify more. See also \cite{2019MNRAS.489.2038I} for future searches in the IR and sub-mm region.
\section{Conclusions and Outlook}
The interpretation of cold dark matter as the sum of contributions of different mass PBH families \cite{2019rsta.377..2161G} could explain a number of so far unsolved mysteries, like e.g. the massive seed black holes required to create the supermassive black holes in the earliest QSOs \citep{2007ApJ...665..187L}, the ubiquitous massive LIGO/VIRGO binary black holes \citep[e.g.][]{2016ApJ...823L..25K}, or even the putative "Planet X" PBH in our own Solar System \citep{2019arXiv190911090S}. The most abundant family of PBH should be around the Chandrasekhar mass (1.4 M$_{\odot}$). This prediction may already have been vindicated by the recent OGLE/GAIA discovery of a sizeable population of putative black holes in the mass range 1-10 M$_{\odot}$ \citep{2019arXiv190407789W}. Here I estimate the contribution of baryon accretion onto the overall PBH population to various cosmic background radiations, concentrating first on the cross-correlation signal between the CXB and the CIB fluctuations discovered in deep {\em Chandra} and {\em Spitzer} surveys \cite{2013ApJ...769...68C}. Assuming Bondi capture and advection dominated disk accretion with reasonable parameters like baryon density and the effective relative velocity between baryons and PBH over cosmic time, as well as appropriate accretion and radiation efficiencies, I indeed predict a contribution of PBH consistent with the residual X--ray fluctuation signal. This signal peaks at redshifts z$\approx$17--30. The PBH contribution to the 2--5 $\mu$m CIB fluctuations, however, is only about 1\%, so that these would have to come from star formation processes associated with the PBH.
\vskip 0.1 truecm
I discuss a number of other phenomena, which could be significantly affected by the PBH accretion. Magnetic
fields are an essential ingredient in the Bondi accretion process, and I argue that the PBH can play an
important role in amplifying magnetic seed fields in the early universe and maintaining them until the
galactic dynamo processes set in. Next I study the contribution of the assumed PBH population to the
re-ionization history of the universe and find that they do not conflict with the stringent ionization
limits set by the most recent {\em Planck} measurements \citep{2018arXiv180706209P}. X--ray heating from the
PBH population can provide a contribution to the entropy floor observed in groups of galaxies
\cite{1999Natur.397..135P}. The tantalizing redshifted 21-cm absorption line feature observed by
{\em EDGES} \citep{2018Natur.555...67B} could well be connected to the radio emission contributed by PBH
to the cosmic background radiation.
Finally, the constraints from the number of IMBH and from the diffuse X--ray emission in the Galactic Center region are not violated by the PBH dark matter;
on the contrary, some of the discrete sources in the resolved GRXE could be accreting PBH.
\vskip 0.1 truecm
It is obvious that the simple PBH toy model for the dark matter presented here requires significantly more work to turn it into quantitative predictions. Real magnetohydrodynamic simulations of the whole PBH mass spectrum including their own hierarchical clustering would be required to obtain the full history of their contribution to the cosmic backgrounds. The exciting {\em EDGES} discovery definitely requires a full-blown analysis of the radio contribution of PBH to the cosmic background. Future X--ray observations with {\em eROSITA} and {\em ATHENA}, infrared wide field surveys with {\em Euclid} and {\em WFIRST}, and microlensing observations with {\em WFIRST} will provide important additional diagnostics in this exciting and dramatically developing PBH field (see \cite{2019BAAS...51c..51K,2019ApJ...871L...6K}).
\vskip 0.2 truecm
\acknowledgments
I am thankful to Juan Garc{\'\i}a-Bellido for sharing a digital copy of the new running spectral index PBH mass distribution model in figure \ref{fig:fPBH} in advance of publication, as well as many very useful discussions about PBH. I am indebted to Matthias Bartelmann for computing the small-scale non-linear relative velocity dispersion (figure \ref{fig:TempVel} right) and providing very valuable comments and corrections to the manuscript. I would like to thank Sergey Karpov for very helpful discussions and inputs about their spherical accretion model. I would also like to thank my colleagues Nico Cappelluti, Sasha Kashlinsky and Alexander Knebe for very helpful discussions and contributions. Finally, I thank an anonymous referee for pointing out a substantial flaw in the first version of the paper, which has been corrected here and led to significant improvements. Throughout this work I made use of Ned Wright's cosmology calculator \citep{2006PASP..118.1711W} and the NASA Astrophysics Data System (ADS), operated by the Smithsonian Astrophysical Observatory under NASA Cooperative Agreement NNX16AC86A.
\bibliographystyle{JHEP}
\section{Introduction}
The rapid and violent form of combustion called detonation \cite{Fickett1979} propagates through a detonation wave, which is a shock wave coupled with chemical reaction. Given their wide range of applications in science and engineering, shock and detonation have always been of great concern in science and technology, for example in research on the mechanism of coal and gas outbursts \cite{Cheng2004,Li2010}. Detonation phenomena are widely used in the acceleration of projectiles, mining technologies, deposition of coatings onto surfaces, cleaning of equipment, etc. As early as 1899 and 1905, Chapman \cite{Chapman1899} and Jouguet \cite{Jouguet1905} presented the CJ theory. This theory assumes that the detonation front is a strong discontinuous plane with a chemical reaction that completes instantaneously as the detonation wave passes. In the 1940s, Zeldovich \cite{Zeldovich1940}, von Neumann \cite{Neumann1942} and D\"{o}ring \cite{Doering1943} presented the well-known ZND model. This model leads to an important conclusion: there is a von Neumann peak at the detonation wave front. The reactant is first pre-compressed by the shock wave, and a continuous reaction zone follows behind it. Physical quantities (density, temperature, pressure and velocity) reach their maximum values within the reaction zone.
Although detonation has been studied for more than a century \cite{Bjerketvedt1997}, it remains an active area of research in both theoretical studies and numerical simulations \cite{XunKun2000} due to its practical importance \cite{Wang2012}. So far, all chemical reaction models are empirical or semi-empirical formulas \cite{Sun1995}, such as Arrhenius kinetics, forest-fire burn, the two-step model, Cochran's rate function \cite{Cochran1979}, the Lee-Tarver model \cite{LeeTarver1980}, etc. Selecting appropriate chemical reaction kinetics is very important for describing the detonation phenomena under consideration. In this paper, we adopt Cochran's rate function for the chemical reaction, which is one of the most physically justified models and is consistent with simulation and experimental results \cite{Cao1986,Zhao1989}.
In recent decades the Lattice Boltzmann (LB) method has achieved great success in various fields of fluid dynamics \cite{Succi-Book,Succi_RMP,Succi_Sci,Succi_PRL2005,Succi_PRL2006A,Succi_PRL2006B,Succi_PRL200,Yeomans_PRL1997,Yeomans_PRL2001,Yeomans_PRL2002,Yeomans_PRL2004A,Yeomans_PRL2004B,Yeomans_PRL2004,Yeomans_PRL2007,Yeomans_PRL2013,ShanChen,SYChen,DXZhang,HPFang,Guozhaoli2013,ProgPhys2014,Dawson1993,Weimar1996,ZhangRenliang2014,ChenShiyi1996}. LB modeling of combustion phenomena \cite{Succi1997,Filippova1998,Filippova2000JCP,Filippova2000CPC,Yu2002,Yamamoto2002,Yamamoto2003,Yamamoto2005,Lee2006,Chiavazzo201,ChenSheng2007,ChenSheng2008,ChenSheng2009,ChenSheng2010I,ChenSheng2010II,ChenSheng2010III,ChenSheng2011,ChenSheng2012} has been an interesting topic from the early days.
In 1997, Succi et al. \cite{Succi1997} proposed the pioneering LB model for combustion systems under the assumptions of fast chemistry and cold flames with weak heat release.
In 1998 and 2000, Filippova and H\"{a}nel \cite{Filippova1998,Filippova2000JCP,Filippova2000CPC} proposed a hybrid scheme for low-Mach-number reactive flows, in which the flow field is solved by a modified lattice-BGK model and the transport equations for energy and species are solved by a finite-difference scheme. In 2002, Yu et al. \cite{Yu2002} simulated scalar mixing in a multi-component flow and a chemically reacting flow using the LB method. In the same year, Yamamoto et al. \cite{Yamamoto2002} presented a LB model for the simulation of combustion, which includes reaction, diffusion and convection. In 2006, Lee et al. \cite{Lee2006} presented a new two-distribution LB equation algorithm to solve laminar diffusion flames within the context of the Burke-Schumann flame-sheet model. In 2007, Chen et al. \cite{ChenSheng2007} developed a novel coupled lattice Boltzmann model for two- and three-dimensional low-Mach-number combustion simulations, in which the fluid density can undergo sharp changes. In the following year, another LB model for simulating combustion in two-dimensional systems was proposed by Chen and his coworkers \cite{ChenSheng2008}; within this model the time step and the fluid particle speed can be adjusted dynamically. Later, based on their improved models, they presented a number of works \cite{ChenSheng2009,ChenSheng2010I,ChenSheng2010II,ChenSheng2010III,ChenSheng2011,ChenSheng2012}.
However, previous LB studies mainly focused on isothermal and incompressible fluid systems. Those models generally cannot recover the correct energy equation or adequately describe compressibility in the hydrodynamic limit, which makes it difficult to model systems with shocks and/or detonations. At the same time, most of those LB models assume that the exothermic reaction has no significant effect on the flow field, which also limits their practical application to most combustion cases. In recent years, the development of LB models for high-speed compressible flows \cite{Alexander1992,Alexander1993,Chen1994,McNamara1997,XuPan2007,XuGan,XuChen,Review2012} has made it possible to simulate systems with shock and detonation. Very recently, Yan, Xu, Zhang, et al. proposed a Lattice Boltzmann Kinetic Model (LBKM) for detonation phenomena in Cartesian coordinates \cite{XuYan2013}. For simulating explosion and implosion behaviors, a polar-coordinate LB model is obviously more convenient, and there are many nice papers on LB formulations for axisymmetric flows in polar coordinates \cite{Halliday2001,Niu2003,Premnath2005,Reis2007,Guo2009}. In 2011 Watari \cite{Watari2011} proposed a finite-difference LB method in a polar coordinate system. Recently we \cite{XuLin2014} improved the LBKM by using a hybrid scheme so that it also works for supersonic flows. Within the improved model, the temporal evolution is calculated analytically and the convection term is solved via a Modified Warming-Beam (MWB) scheme. In this work, a new polar-coordinate LBKM, which is similar to and simpler than the one in Ref.\cite{XuLin2014}, is used to study detonation phenomena.
In contrast to traditional methods based on the Navier-Stokes description, the LBKM has some intrinsic advantages in describing kinetic mechanisms in systems where equilibrium and non-equilibrium behaviors coexist \cite{ProgPhys2014,Review2012,XuYan2013,XuLin2014}. The mini-review \cite{Review2012} presented a methodology for investigating non-equilibrium behaviors of a system using the LB method. The non-equilibrium behaviors in various complex systems have attracted great attention \cite{XuYan2013,XuLin2014,XuGan2013,XuChen2013}. In the work \cite{XuYan2013} by Yan, Xu, Zhang, et al., some non-equilibrium behaviors around the von Neumann peak were obtained. In a recent work \cite{XuLin2014} we studied the non-equilibrium characteristics of the system around three kinds of interfaces, the shock wave, the rarefaction wave and the material interface, for two specific cases, and drew qualitative information on the actual distribution function. In this work, we further develop the LBKM with chemical reaction in polar coordinates to model implosion and explosion phenomena and investigate the macroscopic behaviors due to deviation from local thermodynamic equilibrium during the detonation procedure.
The rest of the paper is structured as follows. In section II the polar-coordinate LBKM for compressible fluids with chemical reaction is proposed for the first time. This model can recover the Navier-Stokes equations with chemical reaction. The treatment of the inner boundary around the disc center is presented, and the manifestations of non-equilibrium characteristics are introduced. In section III we give the chemical reaction model and numerical verification, simulate implosion and explosion phenomena, and study the non-equilibrium characteristics of each case. The actual distribution functions around the detonation wave are qualitatively illustrated. Section IV gives the conclusions and discussions.
\section{LBKM in polar coordinates}
\subsection{Modified Boltzmann equation with chemical reaction}
Here, for the purpose of simulating detonation, which includes chemical reaction, we propose a novel Boltzmann equation to whose right-hand side an artificial term $M$, called the chemical term, is added. The modified Boltzmann equation with the Bhatnagar-Gross-Krook approximation reads
\begin{equation}
\frac{\partial f}{\partial t}+\mathbf{v}\cdot \nabla f = -\frac{1} {\tau }( f-f^{eq})+M.
\label{Boltzmann_chemical}
\end{equation}
where $f$ ($f^{eq}$) is the (equilibrium) distribution function, $\tau$ the relaxation time, and $\mathbf{v}$ the particle velocity.
To obtain the specific form of $M$, the following assumptions are made:
1. The flow is described by a single distribution function $f$. The relaxation time $\tau$ is a constant (not a function of density or temperature), independent of time $t$ and space $\mathbf{r}$.
2. The flow is symmetric, and there are no external forces. The radiative heat loss is neglected.
3. The local particle density $\rho$ and hydrodynamic velocity $\mathbf{u}$ remain unchanged in the progress of the local chemical reaction, i.e.,
\begin{equation}
\frac{d \rho }{d t} |_{R(\lambda)}=0 , \label{M_chemical_rho}
\end{equation}
\begin{equation}
\frac{d \mathbf{u}}{d t} |_{R(\lambda)}=0 . \label{M_chemical_u}
\end{equation}
The temperature increases due to the chemical energy released.
4. The reaction is irreversible, and its process is described by an empirical or semi-empirical formula, i.e.,
\begin{equation}
\frac{d\lambda}{dt}=R(\lambda),
\label{lambda_formula}
\end{equation}
where $\lambda$ denotes the progress rate of reaction.
In fact, the chemical term $M$ in Eq.\ref{Boltzmann_chemical} represents the change of the distribution function $f$ due to the local chemical reaction. Specifically,
\begin{equation}
M=\frac{df}{dt}|_{R(\lambda)}. \label{M_chemical_1}
\end{equation}
If the physical system departs only slightly from the equilibrium state, $f\approx f^{eq}$, and Eq.\ref{M_chemical_1} gives
\begin{equation}
M=\frac{df^{eq}}{dt}|_{R(\lambda)}. \label{M_chemical_2}
\end{equation}
With $f^{eq}=f^{eq}(\rho,\mathbf{u},T)$ and $\frac{d}{dt}=\frac{\partial }{\partial t}+\mathbf{u}\cdot \nabla$, Eq.\ref{M_chemical_2} gives
\begin{equation}
M=\frac{\partial f^{eq}}{\partial \rho} \frac{d \rho }{d t} |_{R(\lambda)}
+\frac{\partial f^{eq}}{\partial \mathbf{u}} \frac{d \mathbf{u}}{d t} |_{R(\lambda)}
+\frac{\partial f^{eq}}{\partial T } \frac{d T }{d t} |_{R(\lambda)}.
\label{M_chemical_3}
\end{equation}
Substituting Eqs.\ref{M_chemical_rho} and \ref{M_chemical_u} into \ref{M_chemical_3} gives
\begin{equation}
M=\frac{\partial f^{eq}}{\partial T}\frac{d T}{d t}|_{R(\lambda)}.
\label{M_chemical_4}
\end{equation}
With the equilibrium distribution function
\begin{equation}
f^{eq}=\rho \left(\frac{1}{2 \pi T}\right)^{D/2} \exp[-\frac{(\mathbf{v}-\mathbf{u})^2}{2T}],
\end{equation}
for a $D$-dimensional physical system, where the particle mass and the Boltzmann constant are set to $m=1$ and $k=1$, we get
\begin{equation}
\frac{\partial f^{eq}}{\partial T}=\frac{-D T+(\mathbf{v}-\mathbf{u})^2}{2T^2} f^{eq}. \label{M_chemical_feq}
\end{equation}
With the relation $T=\frac{2E}{D \rho}$ between temperature $T$ and internal energy per volume $E$, we get
\begin{equation}
\frac{d T}{d t}|_{R(\lambda)}=\frac{2}{D \rho}\frac{d E}{d t}|_{R(\lambda)}
=\frac{2Q}{D}\frac{d \lambda}{d t}, \label{M_chemical_T1}
\end{equation}
where $Q$ is the amount of heat released per unit mass of the chemical reactant. Equations \ref{lambda_formula} and \ref{M_chemical_T1} give
\begin{equation}
\frac{d T}{d t}=\frac{2Q}{D} R(\lambda). \label{M_chemical_T2}
\end{equation}
Substituting Eqs.\ref{M_chemical_feq}, \ref{M_chemical_T2} into \ref{M_chemical_4}, we get
\begin{equation}
M=f^{eq} \frac{-D T+(\mathbf{v}-\mathbf{u})^2}{2T^2} \frac{2Q}{D} R(\lambda).
\label{M_continuous}
\end{equation}
It is clear that Eq.\ref{M_continuous} satisfies the following relations
\begin{eqnarray}
\int Md\mathbf{v} &=&\frac{d\rho }{dt}|_{R(\lambda )}=0 \\
\int M\mathbf{v}d\mathbf{v} &=&\frac{d\rho \mathbf{u}}{dt}|_{R(\lambda )}=0
\\
\frac{1}{2}\int M\mathbf{v}^{2}d\mathbf{v} &=&\frac{dE}{dt}|_{R(\lambda
)}=\rho QR(\lambda )
\end{eqnarray}
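As a quick numerical sanity check (not part of the model itself), the three moment relations above can be verified by quadrature of Eq.\ref{M_continuous} on a velocity grid. All parameter values below are illustrative assumptions, not values used in the paper:

```python
import numpy as np

# Illustrative (assumed) values, not taken from the paper
rho, T, Q, R, D = 1.2, 1.5, 2.0, 0.3, 2
ux, uy = 0.4, -0.2

# Wide velocity grid so the Gaussian tails are fully covered
v = np.linspace(-12.0, 12.0, 601)
vx, vy = np.meshgrid(v, v, indexing='ij')
dv = (v[1] - v[0])**2

c2 = (vx - ux)**2 + (vy - uy)**2                 # peculiar speed squared
feq = rho / (2*np.pi*T) * np.exp(-c2 / (2*T))    # Maxwellian (m = k = 1)
M = feq * (-D*T + c2) / (2*T**2) * (2*Q/D) * R   # chemical term of the text

print(np.sum(M) * dv)                            # ~ 0   : density unchanged
print(np.sum(M * vx) * dv)                       # ~ 0   : momentum unchanged
print(0.5 * np.sum(M * (vx**2 + vy**2)) * dv)    # ~ rho*Q*R : heat release
```

The first two integrals vanish because $M$ is proportional to $(-DT+|\mathbf{v}-\mathbf{u}|^2)f^{eq}$, whose zeroth and first central moments cancel exactly; only the energy moment survives and equals $\rho Q R(\lambda)$.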
\subsection{Discrete velocity model}
In a polar coordinate system, the LB equation corresponding to Eq.\ref{Boltzmann_chemical} reads,
\begin{equation}
\frac{\partial f_{ki}}{\partial t}+v_{kir}\frac{\partial f_{ki}}{\partial r}+ \frac{1}{r}v_{ki\theta }\frac{\partial f_{ki}}{\partial \theta }
=-\frac{1} {\tau }( f_{ki}-f_{ki}^{eq})+M_{ki} ,
\label{PC_BGK}
\end{equation}
\begin{equation}
M_{ki}=f_{ki}^{eq} \frac{-D T+(\mathbf{v_{ki}}-\mathbf{u})^2}{2T^2} \frac{2Q}{D} R(\lambda),
\end{equation}
where $r$ ($\theta$) is the radial (azimuthal) coordinate; $f_{ki}$ ($f_{ki}^{eq}$) is the discrete (equilibrium) distribution function; $v_{kir}$ ($v_{ki\theta }$) is the radial (azimuthal) component of the discrete velocity $\mathbf{v}_{ki}$ as below \cite{Watari2011,Watari2003},
\begin{equation}
\begin{array}{c}
\mathbf{v}_{ki}=v_{kir}\mathbf{e}_{r}+v_{ki\theta }\mathbf{e}_{\theta },
\ v_{kir}=v_{k}\cos(i \pi/4-\theta),
\ v_{ki\theta }=v_{k}\sin(i \pi/4-\theta),
\end{array}
\end{equation}
with unit vectors $\mathbf{e}_{r}$ and $\mathbf{e}_{\theta }$. The subscript $k$ ($=0,1,2,3,4$) indicates the $k$-th group of particle velocities with speed $v_{k}$. One speed is $v_{0}=0$, and each of the other groups has $8$ directions, i.e., $i=0,1,\cdots,7$. In this work we choose $v_{1}=1.5$, $v_{2}=3.5$, $v_{3}=7.5$, $v_{4}=12.5$.
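As an illustration, the full set of $4\times 8+1=33$ discrete velocities at a node with azimuthal coordinate $\theta$ can be generated directly from the definitions above (a sketch; the value of $\theta$ is an arbitrary assumption):

```python
import numpy as np

speeds = [1.5, 3.5, 7.5, 12.5]     # v_1 ... v_4 as chosen in the text
theta = 0.3                        # azimuthal coordinate of the node (arbitrary)

# one rest particle (k = 0) plus four groups of eight directions each
v_r, v_t = [0.0], [0.0]
for vk in speeds:
    for i in range(8):
        v_r.append(vk * np.cos(i*np.pi/4 - theta))  # radial component
        v_t.append(vk * np.sin(i*np.pi/4 - theta))  # azimuthal component
v_r, v_t = np.array(v_r), np.array(v_t)

print(len(v_r))              # 33 discrete velocities
print(v_r.sum(), v_t.sum())  # each 8-fold group sums to zero: no spurious net momentum
```

Because each group consists of eight equally spaced directions, all odd angular sums vanish identically, which is what makes the low-order moments of the set isotropic.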
In terms of local particle density $\rho$ ($=\sum_{ki}f_{ki}$), hydrodynamic velocity $\mathbf{u}$ ($=\sum_{ki}f_{ki}\mathbf{v}_{ki}/\rho$) and temperature $T$ ($=\sum_{ki}\frac{1}{2}f_{ki}(\mathbf{v}_{ki}-\mathbf{u})\cdot(\mathbf{v}_{ki}-\mathbf{u})/\rho$), we get
\begin{eqnarray}
f_{ki}^{eq} &=&\rho F_{k}[(1-\frac{u^{2}}{2T}+\frac{u^{4}}{8T^{2}})+\frac{v_{ki\varepsilon }u_{\varepsilon }}{T}(1-\frac{u^{2}}{2T})+\frac{v_{ki\varepsilon }v_{ki\pi }u_{\varepsilon }u_{\pi }}{2T^{2}}(1-\frac{u^{2}}{2T}) \nonumber\\
&&+\frac{v_{ki\varepsilon }v_{ki\pi }v_{ki\vartheta }u_{\varepsilon }u_{\pi}u_{\vartheta}}{6T^{3}}+\frac{v_{ki\varepsilon }v_{ki\pi }v_{ki\vartheta}v_{ki\xi }u_{\varepsilon }u_{\pi}u_{\vartheta }u_{\xi }}{24T^{4}}]
\label{feq}
\end{eqnarray}
with weighting coefficients
\begin{eqnarray}
F_{k} &=&\frac{1}{v_{k}^{2}(v_{k}^{2}-v_{k+1}^{2})(v_{k}^{2}-v_{k+2}^{2})(v_{k}^{2}-v_{k+3}^{2})}
[48T^{4}-6(v_{k+1}^{2}+v_{k+2}^{2}+v_{k+3}^{2})T^{3} \nonumber\\
&&+(v_{k+1}^{2}v_{k+2}^{2}+v_{k+2}^{2}v_{k+3}^{2}+v_{k+3}^{2}v_{k+1}^{2})T^{2}-\frac{1}{4}v_{k+1}^{2}v_{k+2}^{2}v_{k+3}^{2}T], \nonumber\\
F_{0} &=&1-8(F_{1}+F_{2}+F_{3}+F_{4}) \nonumber,
\end{eqnarray}
where the subscript $\{k+l\}$ is replaced by $(k+l-4)$ if $(k+l)>4$.
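The weighting coefficients are fixed by requiring that the discrete set reproduce the even velocity moments of the Maxwellian; two of these conditions can be checked numerically with a few lines (the temperature value is an illustrative assumption; note the minus sign on the last numerator term, which the moment conditions require):

```python
import numpy as np

v = np.array([1.5, 3.5, 7.5, 12.5])   # v_1 ... v_4 from the text
x = v**2
T = 1.5                                # an illustrative temperature

F = np.empty(4)
for k in range(4):
    o = np.delete(x, k)                # squared speeds of the other three groups
    num = (48*T**4 - 6*o.sum()*T**3
           + (o[0]*o[1] + o[1]*o[2] + o[2]*o[0])*T**2
           - 0.25*o.prod()*T)          # minus sign required by the moment conditions
    F[k] = num / (x[k] * np.prod(x[k] - o))
F0 = 1.0 - 8.0*F.sum()                 # rest-particle weight

# per-component Gaussian moments reproduced by the weights:
print(4*np.dot(F, x))       # ~ T       (second moment, <v_x^2> = T)
print(3*np.dot(F, x**2))    # ~ 3*T**2  (fourth moment, <v_x^4> = 3*T^2)
print(2.5*np.dot(F, x**3))  # ~ 15*T**3 (sixth moment, <v_x^6> = 15*T^3)
```

The factors $4$, $3$ and $5/2$ are the angular sums $\sum_{i=0}^{7}\cos^{2n}(i\pi/4-\theta)$ for $n=1,2,3$, which are independent of $\theta$.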
Via the Chapman-Enskog expansion, it is easy to find that Eq.\ref{PC_BGK} recovers the following Navier-Stokes equations
\begin{equation}
\frac{\partial \rho }{\partial t}+\nabla \cdot (\rho \mathbf{u})=0\text{,}
\label{NS_1}
\end{equation}
\begin{equation}
\frac{\partial (\rho \mathbf{u})}{\partial t}+\nabla \cdot (P\mathbf{I}
+\rho \mathbf{uu})+\nabla \cdot \lbrack \mu (\nabla \cdot \mathbf{u})\mathbf{I}
-\mu (\nabla \mathbf{u})^{T}-\mu \nabla \mathbf{u}]=0 \text{,}
\label{NS_2}
\end{equation}
\begin{gather}
\frac{\partial }{\partial t}(\rho E+\frac{1}{2}\rho u^{2})+\nabla \cdot
\lbrack \rho \mathbf{u}(E+\frac{1}{2}u^{2}+\frac{P}{\rho })] \notag \\
-\nabla \cdot \lbrack \kappa ^{^{\prime }}\nabla E+\mu \mathbf{u}\cdot
(\nabla \mathbf{u})-\mu \mathbf{u}(\nabla \cdot \mathbf{u})+\frac{1}{2}\mu
\nabla u^{2}]=\rho QR(\lambda ) \text{,}
\label{NS_3}
\end{gather}
in the hydrodynamic limit, where $\mu$(=$P\tau$) and $\kappa ^{^{\prime }}$($=2P\tau$) are viscosity and heat conductivity, respectively.
\subsection{LB evolution equation}
The evolution equation used for Eq.\ref{PC_BGK}, with first-order accuracy in time, reads
\begin{equation}
f_{ki}^{t+\Delta t}=f_{ki}^{eq}+(f_{ki}^{t}-f_{ki}^{eq})\exp (-\Delta t/\tau)
+f_{ki,r}^{*}+f_{ki,\theta}^{*}+M_{ki} \Delta t,
\label{LB_evolution_equation}
\end{equation}
and
\begin{equation}
f_{ki,r}^{*} =\left\{
\begin{array}{ccc}
-C_{r}[f_{ki}(i_r,i_\theta )-f_{ki}(i_r-1,i_\theta )] & for & C_{r}\geq 0, \\
-C_{r}[f_{ki}(i_r+1,i_\theta )-f_{ki}(i_r,i_\theta )] & for & C_{r}<0,
\end{array}
\right.
\end{equation}
\begin{equation}
f_{ki,\theta }^{*} =\left\{
\begin{array}{ccc}
-C_{\theta }[f_{ki}(i_r,i_\theta )-f_{ki}(i_r,i_\theta -1)] & for & C_{\theta }\geq 0, \\
-C_{\theta }[f_{ki}(i_r,i_\theta +1)-f_{ki}(i_r,i_\theta )] & for & C_{\theta }<0.
\end{array}
\right.
\end{equation}
with Courant numbers $C_{r}$ ($=v_{kir}\frac{\Delta t}{\Delta r}$) and $C_{\theta}$ ($=\frac{1}{r}v_{ki\theta}\frac{\Delta t}{\Delta \theta}$).
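A one-dimensional sketch of this update (radial direction only, with periodic wrap-around for simplicity; the $\theta$ term is handled identically, and all parameter values are illustrative) may look as follows:

```python
import numpy as np

def lb_step(f, feq, M, v_r, dt, dr, tau):
    """One explicit update of a single f_ki along r:
    analytic relaxation toward feq plus first-order upwind convection."""
    relaxed = feq + (f - feq) * np.exp(-dt / tau)
    C = v_r * dt / dr                       # Courant number C_r
    if C >= 0:
        conv = C * (f - np.roll(f, 1))      # backward (upwind) difference
    else:
        conv = C * (np.roll(f, -1) - f)     # forward (upwind) difference
    return relaxed - conv + M * dt

# usage: pure advection of a pulse (feq = f, no reaction) conserves mass
r = np.linspace(0.0, 1.0, 200, endpoint=False)
f = np.exp(-((r - 0.5) / 0.05)**2)
f_new = lb_step(f, f, np.zeros_like(f), v_r=1.0, dt=1e-3, dr=r[1]-r[0], tau=1e-4)
print(f_new.sum() - f.sum())                # ~ 0 with periodic wrap-around
```

With periodic indexing the convection term is a telescoping sum, so the total of $f$ over the ring is conserved exactly, which is a convenient check of the upwind sign convention.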
\subsection{Boundary conditions}
\begin{figure}
\begin{center}
\includegraphics[bbllx=25pt,bblly=0pt,bburx=585pt,bbury=316pt,width=0.75\textwidth]{./Fig01.eps}
\end{center}
\caption{Rotation of the distribution functions from the first to the fifth sector of physical domain in a disc divided into $8$ sections.}
\label{Fig01}
\end{figure}
The physical domain under consideration is a sector covering $1/8$ of an annular or circular area. The azimuthal boundaries are treated with periodic boundary conditions \cite{XuLin2014}. For an annular area with radii $0<R_{1}<R_{2}$, inflow/outflow conditions are imposed at the radial boundaries \cite{XuLin2014}. For a circular area with radius $R$, the outer radial boundary is treated in the same way. Around the center, the inner boundary is treated as
\begin{equation}
f(i_r,i_\theta ,k,i)=f(1-i_r,i_\theta,k,mod(i+4,8)) ,
\end{equation}
with $i_r=-1, 0$ for the ghost nodes added to the computational domain, where the function $mod(a,b)$ denotes the remainder of $a$ divided by $b$. Figure \ref{Fig01} shows the relation, obtained by rotation, between the distribution functions in the first and the fifth sectors of the physical domain in a periodic circular area.
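In code, this mapping amounts to copying the first physical layers into the ghost layers with the particle directions reversed. A minimal sketch (the array layout, with the radial index shifted by one so that the two ghost layers sit at the front, is an assumption of this illustration):

```python
import numpy as np

def fill_center_ghosts(f):
    """Fill the two ghost layers across the disc centre.
    f[j, i_theta, k, i] with j = i_r + 1: j = 0, 1 are the ghost nodes
    i_r = -1, 0 and j = 2, 3 the first physical nodes i_r = 1, 2.
    A particle crossing the centre reappears with direction i -> (i+4) mod 8."""
    rev = (np.arange(8) + 4) % 8
    f[1] = f[2][:, :, rev]      # i_r =  0  <-  i_r = 1, directions reversed
    f[0] = f[3][:, :, rev]      # i_r = -1  <-  i_r = 2, directions reversed
    return f

# usage on a small random array: each ghost value matches its rotated counterpart
f = np.random.rand(6, 4, 5, 8)
f = fill_center_ghosts(f)
print(np.allclose(f[0, :, :, 0], f[3, :, :, 4]))
```

The rest particle ($k=0$) is unaffected by the direction reversal, since all eight "directions" of a zero velocity coincide.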
\subsection{Non-equilibrium characteristics}
The LB model naturally inherits the ability of the Boltzmann equation to describe non-equilibrium systems. The departure of the system from the local thermodynamic equilibrium state can be measured by the high-order moments of $f_{ki}$. As given in \cite{XuYan2013,XuLin2014}, the central moments $\mathbf{M}_{m}^*$ are defined as:
\begin{eqnarray}
&&\left\{
\begin{array}{l}
\mathbf{M}_{2}^*(f_{ki})=\sum_{ki}f_{ki}\mathbf{v}_{ki}^*\mathbf{v}_{ki}^* \\
\mathbf{M}_{3}^*(f_{ki})=\sum_{ki}f_{ki}\mathbf{v}_{ki}^*\mathbf{v}_{ki}^*\mathbf{v}_{ki}^* \\
\mathbf{M}_{3,1}^*(f_{ki})=\sum_{ki}\frac{1}{2}f_{ki}\mathbf{v}_{ki}^*\cdot\mathbf{v}_{ki}^*\mathbf{v}_{ki}^* \\
\mathbf{M}_{4,2}^*(f_{ki})=\sum_{ki}\frac{1}{2}f_{ki}\mathbf{v}_{ki}^*\cdot\mathbf{v}_{ki}^*\mathbf{v}_{ki}^*\mathbf{v}_{ki}^*
\end{array}
\right.
\label{M_absolute}
\end{eqnarray}
where $\mathbf{v}_{ki}^*=\mathbf{v}_{ki}-\mathbf{u}$. The manifestations of non-equilibrium are defined as:
\begin{eqnarray}
\mathbf{\Delta }_{m}^{* } &=&\mathbf{M}_{m}^{* }(f_{ki})-\mathbf{M}_{m}^{* }(f_{ki}^{eq}) .
\label{D_absolute}
\end{eqnarray}
In theory, $\mathbf{M}_{3}^{* }(f_{ki}^{eq})=0$, $\mathbf{M}_{3}^{* }(f_{ki})=\mathbf{\Delta }_{3}^{* }$, $\mathbf{M}_{3,1}^{*}(f_{ki}^{eq})=0$, $\mathbf{M}_{3,1}^{* }(f_{ki})=\mathbf{\Delta }_{3,1}^{* }$, $\mathbf{M}_{3,1,r}^*(f_{ki})=\frac{1}{2} (\mathbf{M}^*_{3,rrr}(f_{ki})+\mathbf{M}^*_{3,r\theta\theta}(f_{ki}))$, $\mathbf{M}_{3,1,\theta}^*(f_{ki})=\frac{1}{2} (\mathbf{M}^*_{3,rr\theta}(f_{ki})+\mathbf{M}^*_{3,\theta\theta\theta}(f_{ki}))$, $\mathbf{\Delta}_{3,1,r}^*(f_{ki})=\frac{1}{2} (\mathbf{\Delta}^*_{3,rrr}(f_{ki})+\mathbf{\Delta}^*_{3,r\theta\theta}(f_{ki}))$, $\mathbf{\Delta}_{3,1,\theta}^*(f_{ki})=\frac{1}{2} (\mathbf{\Delta}^*_{3,rr\theta}(f_{ki})+\mathbf{\Delta}^*_{3,\theta\theta\theta}(f_{ki}))$.
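For a single node, these non-equilibrium measures reduce to weighted sums over the 33 discrete velocities; for example, $\mathbf{\Delta}_{2}^{*}$ can be computed as in the sketch below (the arrays f, feq, v_r, v_t are assumed to be given by the model):

```python
import numpy as np

def delta2_star(f, feq, v_r, v_t, u_r, u_t):
    """Delta_2^* = M_2^*(f) - M_2^*(feq): second central moment of the
    deviation from equilibrium. Returns the (rr, r-theta, theta-theta)
    components at one node; f, feq, v_r, v_t are length-33 arrays."""
    cr, ct = v_r - u_r, v_t - u_t        # peculiar velocities v* = v - u
    d = f - feq
    return np.dot(d, cr*cr), np.dot(d, cr*ct), np.dot(d, ct*ct)

# at equilibrium (f = feq) all components vanish identically
f = np.random.rand(33)
vr = np.random.rand(33)
vt = np.random.rand(33)
print(delta2_star(f, f, vr, vt, 0.1, 0.0))   # all components are zero
```

The higher-order measures $\mathbf{\Delta}_{3}^{*}$, $\mathbf{\Delta}_{3,1}^{*}$ and $\mathbf{\Delta}_{4,2}^{*}$ follow the same pattern with additional factors of the peculiar velocity.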
\section{Detonation}
Detonation is a complex process with mutual influence between fluid dynamics and chemical reaction kinetics. The detonation front propagates into the unburnt gas at a velocity higher than the local speed of sound ahead of the wave \cite{Bjerketvedt1997}. Physical quantities on the two sides of the detonation front satisfy the Hugoniot relations \cite{Fickett1979}.
\subsection{Chemical reaction}
To describe the chemical process of detonation, we choose Cochran's rate function presented by Cochran and Chan \cite{Cochran1979},
\begin{equation}
R(\lambda)=\omega_{1}P^{m}(1-\lambda )+\omega_{2}P^{n}\lambda(1-\lambda ) ,
\label{Cochran}
\end{equation}
where $\lambda$ ($=\rho _{p}/\rho$) is the mass fraction of the reacted reactant, and $\rho _{p}$ is the density of the reacted reactant. The right-hand side of Eq.\ref{Cochran} is composed of a hot-spot formation term and a growth term. $P^{m}$ and $P^{n}$ describe the dependence on the local pressure, and $\omega_{1}$, $\omega_{2}$, $m$ and $n$ are adjustable parameters. Furthermore, $T>T_{th}$ is a necessary condition for the chemical reaction, where $T_{th}$ is the ignition temperature. In this work, we choose $m=n=1$, $T_{th}=1.1$.
Introducing the notation $a=\omega _{1}P^{m}$, $b=\omega _{2}P^{n}$, $\lambda=\lambda_{i_r}=\lambda_{i_\theta}=\lambda(i_r,i_\theta,t)$, the evolution of Eq.\ref{Cochran} with first-order accuracy reads
\begin{equation}
\lambda ^{t+\Delta t}=\frac{(a+b\lambda )e^{(a+b)\Delta t}-a(1-\lambda )}
{(a+b\lambda )e^{(a+b)\Delta t}+b(1-\lambda )}+\lambda_{i_r}^{*}+\lambda _{i_\theta }^{*} ,
\label{Eqlambda}
\end{equation}
and
\begin{equation}
\lambda _{i_r}^{* }=\left\{
\begin{array}{ccc}
-\frac{u_{r}(\lambda _{i_r}-\lambda _{i_r-1})}{\Delta r}\Delta t & for & u_{r}\geq 0, \\
-\frac{u_{r}(\lambda _{i_r+1}-\lambda _{i_r})}{\Delta r}\Delta t & for & u_{r}<0,
\end{array}
\right.
\end{equation}
\begin{equation}
\lambda _{i_\theta }^{* }=\left\{
\begin{array}{ccc}
-\frac{u_{\theta }(\lambda _{i_\theta}-\lambda _{i_\theta -1})}{r\Delta \theta}\Delta t & for & u_{\theta }\geq 0, \\
-\frac{u_{\theta }(\lambda _{i_\theta +1}-\lambda _{i_\theta})}{r\Delta \theta}\Delta t & for & u_{\theta }<0.
\end{array}
\right.
\end{equation}
In the evolution of $\lambda$, the velocity $\mathbf{u}$ and the pressure $P$ are calculated from $f_{ki}$. In this way, the chemical reaction is coupled naturally with the flow behaviors.
It is worth pointing out that Cochran's rate function used in this work is similar to, but different from, the Lee-Tarver model used in the work \cite{XuYan2013}. The parameters $a$ and $b$ in Eq.\ref{Eqlambda} depend on the pressure in this work, while they are given fixed values in Ref.\cite{XuYan2013}, where the pressure plays no role in the chemical process. Consequently, the extinction phenomenon can be investigated in this work but cannot be simulated with the latter model.
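The local (reaction-only) part of the update above is the exact one-step solution of $d\lambda/dt=(1-\lambda)(a+b\lambda)$ with $a$ and $b$ frozen over the step; it can be cross-checked against brute-force integration of the rate equation (parameter values below are illustrative):

```python
import numpy as np

def lam_update(lam, a, b, dt):
    """Closed-form one-step solution of d(lambda)/dt = (1-lam)*(a + b*lam),
    i.e. Cochran's rate with a = w1*P^m, b = w2*P^n held fixed over dt."""
    E = np.exp((a + b) * dt)
    return ((a + b*lam)*E - a*(1 - lam)) / ((a + b*lam)*E + b*(1 - lam))

# cross-check against many small forward-Euler steps of the rate equation
a, b, dt = 0.7, 3.0, 0.2
lam_num, n = 0.0, 20000
for _ in range(n):
    lam_num += (1 - lam_num) * (a + b*lam_num) * dt / n
print(lam_update(0.0, a, b, dt), lam_num)    # the two values agree closely
```

Note also that $\lambda=1$ is a fixed point of the closed-form update, as it must be for an irreversible reaction that has run to completion.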
\subsection{Simulation of steady detonation}
\subsubsection{Validation and verification}
In this section, a steady detonation is simulated to demonstrate the validity of the new model. The initial physical quantities are:
\begin{equation}
\left\{
\begin{array}{l}
(\rho ,T,u_{r},u_{\theta },\lambda )_{i}=(1.35826,2.59709,0.81650,0,1)\\
(\rho ,T,u_{r},u_{\theta },\lambda )_{o}=(1,1,0,0,0)
\end{array}
\right.
\label{V_and_V}
\end{equation}
which satisfy the Hugoniot relations for a detonation wave. Here the suffixes $i$ and $o$ index the two parts, $1000 \le r \le 1000.01$ and $1000.01<r\le 1000.1$, of an annular area, respectively. The inner radius is chosen large enough that the curvature becomes negligible and the polar coordinates revert locally to Cartesian coordinates. Under this condition, the simulation results can be compared with the analytic solutions of the one-dimensional steady detonation wave. Other parameters are $\tau=2\times 10^{-4}$, $\Delta t=2.5\times 10^{-7}$, $N_{r}\times N_{\theta }=20000\times 1$, $\omega _{1}=10$, $\omega_{2}=1000$.
\begin{figure}
\begin{center}
\includegraphics[bbllx=0pt,bblly=303pt,bburx=595pt,bbury=611pt,width=1.0\textwidth]{./Fig02.eps}
\end{center}
\caption{Physical quantities of the steady detonation wave at time $t=0.025$, within the radial range $1000.07\le r\le 1000.1$: (a) $\rho$, (b) $T$, (c) $P$, (d) $u_r$, (e) $\lambda$.}
\label{Fig02}
\end{figure}
Figure \ref{Fig02} gives the LB simulation results, the CJ results \cite{Fickett1979,Chapman1899,Jouguet1905} and the ZND results \cite{Fickett1979,Zeldovich1940,Neumann1942,Doering1943} for the physical quantities ($\rho$, $T$, $P$, $u_r$, $\lambda$) at time $t=0.025$, within the radial range $1000.07\le r\le 1000.1$. The solid lines with squares are the LB simulation results, the dashed lines are the analytic solutions of the CJ theory, and the solid lines are the analytic solutions of the ZND theory. The simulated physical quantities behind the detonation wave are $(\rho,T,u_{r},u_{\theta},\lambda)=(1.36163,2.59366,0.819670,0,1)$. Comparing them with the CJ results gives relative differences of $0.2\%$, $0.1\%$, $0.3\%$, $0\%$ and $0\%$, respectively. Panels (a)-(e) show that the LB simulation results are in satisfying agreement with the ZND results in the area behind the von Neumann peak; only small differences exist between them. Physically, the analytic solutions of the ZND theory here ignore viscosity and heat conduction, and the von Neumann peak is simply treated as a strong discontinuity. Furthermore, the relative difference between the simulated detonation velocity $D=3.152$ and the analytic solution $D=3.09557$ is $1.8\%$. In sum, the current LB model works well for detonation phenomena.
\subsubsection{Non-equilibrium behaviors in the steady detonation wave}
\begin{figure}
\begin{center}
\includegraphics[bbllx=22pt,bblly=234pt,bburx=563pt,bbury=474pt,width=0.75\textwidth]{./Fig03.eps}
\end{center}
\caption{The profile of the steady detonation wave in an annular area with radii $R_{1}=1000$ and $R_{2}=1000.1$ at time $t=0.025$: (a) physical quantities, (b) gradients. From left to right, three vertical lines are shown to guide the eye: the rarefaction area, the maximum value of pressure, and the pre-shocked area, respectively.}
\label{Fig03}
\end{figure}
Figure \ref{Fig03} gives the physical quantities and their gradients versus radius at time $t=0.025$, within the radial range $1000.08\le r\le 1000.0925$. Three vertical lines are shown, from left to right, to guide the eye: the rarefaction area, the von Neumann peak and the pre-shocked area, respectively. Panel (a) shows that the maximum values of density, temperature, pressure and velocity do not coincide. The radial positions of their maximum values are $R_{\rho }=1000.08740$, $R_{T}=1000.08654$, $R_{P}=1000.08712$, $R_{u}=1000.08739$. Panel (b) shows that the largest absolute values of $\nabla \rho$, $\nabla T$, $\nabla P$, $\nabla u$ are located in the pre-shocked area, their second largest values are in the rarefaction area, and their values are close to zero at the von Neumann peak.
\begin{figure}
\begin{center}
\includegraphics[bbllx=6pt,bblly=278pt,bburx=573pt,bbury=641pt,width=0.99\textwidth]{./Fig04.eps}
\end{center}
\caption{The simulation results of $\mathbf{M}_{m}^*(f_{ki})$, $\mathbf{M}_{m}^*(f_{ki}^{eq})$ and $\mathbf{\Delta }_{m}^{*}$ in the same case as Fig.\ref{Fig03}.}
\label{Fig04}
\end{figure}
Figure \ref{Fig04} shows the central moments and their non-equilibrium manifestations for the case corresponding to Fig.\ref{Fig03}. The simulation results for $\mathbf{M}_{2}^{*}(f_{ki})$, $\mathbf{M}_{3}^{*}(f_{ki})$, $\mathbf{M}_{3,1}^{*}(f_{ki})$, $\mathbf{M}_{4,2}^{*}(f_{ki})$, $\mathbf{M}_{2}^{*}(f_{ki}^{eq})$, $\mathbf{M}_{3}^{*}(f_{ki}^{eq})$, $\mathbf{M}_{3,1}^{*}(f_{ki}^{eq})$, $\mathbf{M}_{4,2}^{*}(f_{ki}^{eq})$, $\mathbf{\Delta }_{2}^{*}$, $\mathbf{\Delta }_{3}^{*}$, $\mathbf{\Delta}_{3,1}^{*}$, $\mathbf{\Delta }_{4,2}^{*}$ are shown in Figs.\ref{Fig04} (a)-(l), respectively. The vertical lines in Figs.\ref{Fig04} (i)-(l) coincide with those in Fig.\ref{Fig03}. It is clear from Fig.\ref{Fig04} that the system deviates from equilibrium mainly around the von Neumann peak. The departure of the system from equilibrium around the rightmost line is opposite to that around the leftmost line: physically, the former is under the shock effect, whereas the latter is under the rarefaction effect. Furthermore, both $\Delta _{2,rr}^{*}$ and $\Delta_{2,\theta \theta }^{*}$ are close to zero at the von Neumann peak, i.e., the internal kinetic energies in the different degrees of freedom are equal to each other there. Comparing Fig.\ref{Fig03} (b) with Fig.\ref{Fig04} (i) shows that the internal kinetic energies in the different degrees of freedom exhibit their maximum difference at the inflexion point where the pressure has the largest spatial derivative. Figure \ref{Fig04} (a) shows that the internal energy in the degree of freedom of $r$ and that in the degree of freedom of $\theta$ do not coincide, due to the fluid viscosity; the former travels faster than the latter around the detonation front.
\begin{figure}
\begin{center}
\includegraphics[bbllx=0pt,bblly=0pt,bburx=369pt,bbury=152pt,width=0.6\textwidth]{./Fig05.eps}
\end{center}
\caption{Sketches of the Maxwellian and actual distribution functions versus the velocities $v_{r}$ and $v_{\protect\theta }$: (a) the distribution functions at the leftmost line, (b) the distribution functions at the rightmost line. The long-dashed line is the distribution function $f(v_{r})$, the short-dashed one is the distribution function $f(v_{\theta})$, and the solid line is $f^{eq}$.}
\label{Fig05}
\end{figure}
Around the leftmost line in Fig.\ref{Fig04} (i), $\mathbf{\Delta}_{2,rr}^{*}$ shows a negative peak and $\mathbf{\Delta}_{2,\theta \theta }^{*}$ shows a positive peak with the same amplitude, which implies that the distribution function $f(v_{r})$ is \textquotedblleft thinner\textquotedblright\ and \textquotedblleft higher\textquotedblright\ than the Maxwellian $f^{eq}$, while $f(v_{\theta })$ is \textquotedblleft fatter\textquotedblright\ and \textquotedblleft lower\textquotedblright . The simulation results for $\mathbf{\Delta }_{3}^{* }$ in Fig.\ref{Fig04}(j) and $\mathbf{\Delta }_{3,1}^{* }$ in Fig.\ref{Fig04}(k) indicate that $f(v_{\theta })$ is symmetric, while $f(v_{r})$ is asymmetric: the portion of $f(v_{r})$ for $v_{r}>0$ is \textquotedblleft thinner\textquotedblright\ than that for $v_{r}<0$. Figure \ref{Fig05} (a) sketches the actual distribution functions $f(v_{r})$, $f(v_{\theta })$ and the Maxwellian $f^{eq}$. Similarly, for the rightmost line in Fig.\ref{Fig04}, Fig.\ref{Fig05}(b) sketches the actual distribution functions $f(v_{r})$, $f(v_{\theta })$ and the Maxwellian $f^{eq}$. There, $f(v_{r})$ is \textquotedblleft fatter\textquotedblright\ and \textquotedblleft lower\textquotedblright\ than the Maxwellian $f^{eq}$, while $f(v_{\theta })$ is \textquotedblleft thinner\textquotedblright\ and \textquotedblleft higher\textquotedblright . Again $f(v_{\theta })$ is symmetric, while $f(v_{r})$ is asymmetric: the portion of $f(v_{r})$ for $v_{r}>0$ is \textquotedblleft fatter\textquotedblright\ and the portion for $v_{r}<0$ is \textquotedblleft thinner\textquotedblright .
Moreover, the simulation result $\mathbf{\Delta }_{2,r\theta}^{*}=0$ in Fig.\ref{Fig04}(i) indicates that the contours of the actual distribution function in velocity space ($v_{r}$,$v_{\theta }$) are symmetric about the $v_{r}$-axis and/or the $v_{\theta }$-axis. The above analysis suggests that the $v_{r}$-axis is the symmetry axis. Combining Fig.\ref{Fig05} with the sketch of the contours of the actual distribution function gives the sketch of the actual distribution function in velocity space ($v_{r}$,$v_{\theta }$) \cite{XuLin2014}.
\subsection{Simulation of implosion}
For the case of implosion, the initial physical quantities are:
\begin{equation}
\left\{
\begin{array}{l}
(\rho ,T,u_{r},u_{\theta },\lambda )_{i}=(1,1,0,0,0) \\
(\rho ,T,u_{r},u_{\theta },\lambda )_{o}=(1.5,1.55556,-0.666667,0,1) ,
\end{array}
\right.
\end{equation}
where the suffixes $i$ and $o$ index areas $0\le r\le 0.098$ and $0.098<r\le 0.1$, respectively. Other parameters are $\tau =2\times 10^{-4}$, $\Delta t=2.5 \times 10^{-6}$, $N_{r}\times N_{\theta}=2000\times 1$, $\omega_{1}=1$, $\omega_{2}=50$.
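In a code, this piecewise initial condition can be set directly on the radial grid. A minimal sketch (array names and the cell-centred grid convention are illustrative assumptions, not taken from the authors' implementation):

```python
import numpy as np

# Illustrative setup; names and grid convention are assumptions.
N_r = 2000
R = 0.1                                  # outer radius of the disc domain
r = (np.arange(N_r) + 0.5) * R / N_r     # cell-centred radial coordinates

inner = r <= 0.098                       # unreacted interior region
rho  = np.where(inner, 1.0, 1.5)
T    = np.where(inner, 1.0, 1.55556)
u_r  = np.where(inner, 0.0, -0.666667)   # outer shell moves inwards
u_th = np.zeros(N_r)
lam  = np.where(inner, 0.0, 1.0)         # reaction progress variable
```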
\begin{figure}
\begin{center}
\includegraphics[bbllx=0pt,bblly=0pt,bburx=575pt,bbury=382pt,width=0.99\textwidth]{./Fig06.eps}
\end{center}
\caption{Physical quantities versus radius in implosion process at times, $t=0.0000$, $0.0375$, $0.0425$ and $0.0500$, respectively: (a) $\rho$; (b) $T$; (c) $P$; (d) $u_{r}$; (e) $\lambda$; (f) $\Delta_{2,rr}^{*}$. }
\label{Fig06}
\end{figure}
Figure \ref{Fig06} shows physical quantities versus radius in the implosion process: (a) density; (b) temperature; (c) pressure; (d) radial velocity; (e) the parameter for the chemical reaction process; (f) $\Delta_{2,rr}^{*}$. There are two stages in the implosion process. In the first stage, the detonation travels inwards, the material behind the detonation front moves inwards, and the density, temperature and pressure behind the detonation wave increase continuously due to the disc geometric effect. When the detonation wave reaches the center, the density, temperature and pressure reach their maximum values. Meanwhile, the velocity reduces gradually to zero and then points outwards. In the second stage, the detonation wave travels outwards. As the chemical reaction completes, the detonation wave becomes a pure shock wave. The hydrodynamic velocity in front of the shock wave points inwards and that behind the wave points outwards. Consequently, the density, temperature and pressure outside the shock wave still increase continuously, while those inside decrease.
In addition, Fig.\ref{Fig06}(f) shows that the departure of the system from equilibrium increases (decreases) when the detonation or shock wave becomes stronger (weaker). Specifically, the value of $\Delta_{2,rr}^{*}$ shows a crest and a trough from $t=0.0000$ to $t=0.0425$. The crest results from the compression effect ahead of the detonation front, while the trough results from the rarefaction effect behind it. From $t=0.0425$ to $0.0500$, $\Delta_{2,rr}^{*}$ is likewise positive at the crest and negative behind it. In fact, $\Delta_{2,rr}^{*}$ is always positive at a shock wave and negative at a rarefaction wave, which can serve as a criterion to distinguish the two waves. Furthermore, the system at the disc center always remains in thermodynamic equilibrium.
\subsection{Simulation of explosion}
For the case of explosion, the initial physical quantities are:
\begin{equation}
\left\{
\begin{array}{l}
(\rho ,T,u_{r},u_{\theta },\lambda )_{i}=(1.5,1.55556,0.666667,0,1) \\
(\rho ,T,u_{r},u_{\theta },\lambda )_{o}=(1,1,0,0,0)
\end{array}
\right.
\end{equation}
where the suffixes $i$ and $o$ index areas $0\le r\le R_{1}$ and $R_{1}<r\le R$, respectively. Here $\tau =2\times 10^{-4}$, $\Delta t=2.5\times 10^{-6}$, $\omega _{1}=1$, $\omega _{2}=50$. Figures \ref{Fig07}-\ref{Fig09} show the evolution of physical quantities ($\rho$, $T$, $P$, $u_{r}$, $\lambda$, $\Delta_{2,rr}^{*}$) versus radius. Figure \ref{Fig07} corresponds to parameters $R_{1}=0.015$, $R=0.3$, $N_{r}\times N_{\theta }=6000\times 1$; Fig.\ref{Fig08} corresponds to parameters $R_{1}=0.023$, $R=0.75$, $N_{r}\times N_{\theta }=15000\times 1$; Fig.\ref{Fig09} corresponds to parameters $R_{1}=0.050$, $R=1.2$, $N_{r}\times N_{\theta }=24000\times 1$.
\begin{figure}
\begin{center}
\includegraphics[bbllx=0pt,bblly=0pt,bburx=578pt,bbury=381pt,width=0.99\textwidth]{./Fig07.eps}
\end{center}
\caption{For the case $R_{1}=0.015$, physical quantities versus radius in explosion process at times, $t=0.000$, $0.005$, $0.050$ and $0.150$, respectively: (a) $\rho$; (b) $T$; (c) $P$; (d) $u_{r}$; (e) $\lambda$; (f) $\Delta_{2,rr}^{*}$. }
\label{Fig07}
\end{figure}
Figure \ref{Fig07} shows the extinction phenomenon. The initially reacted area is very small, i.e., the initial energy is not enough to trigger detonation. The energy propagates outwards in the form of a disturbance wave whose amplitude decreases gradually; the wave dissipates and finally vanishes. Figure \ref{Fig07}(e) shows that $\lambda$ changes only slightly at the beginning. The reason is that the initial temperature of the reacted area is higher than the temperature threshold $T_{th}$, so a little chemical reaction occurs at the start. The rate of chemical energy release is smaller than the rate of heat dissipation under the disc geometric effect. Consequently, the energy of the disturbance wave decreases gradually, which leads to extinction. Panel (f) shows that the system has only a small departure from equilibrium at the start; the departure decreases gradually and finally vanishes.
\begin{figure}
\begin{center}
\includegraphics[bbllx=0pt,bblly=0pt,bburx=580pt,bbury=383pt,width=0.99\textwidth]{./Fig08.eps}
\end{center}
\caption{For the case $R_{1}=0.023$, physical quantities versus radius in explosion process at times, $t=0.0000$, $0.1850$, $0.3000$, $0.3475$ and $0.4000$, respectively: (a) $\rho$; (b) $T$; (c) $P$; (d) $u_{r}$; (e) $\lambda$; (f) $\Delta_{2,rr}^{*}$. }
\label{Fig08}
\end{figure}
Figure \ref{Fig08} shows the following. (i) A disturbance wave travels outwards from $t=0.0000$ to $0.1850$. Part of the reactant reacts around the disturbance wave. The released chemical energy is added to the disturbance wave, while the thermal energy within the wave disperses to its adjacent area. The disturbance wave becomes wider in both the radial and azimuthal directions under the disc geometric effect, and the maximum values of density, temperature, pressure and velocity decrease. The value of $\Delta_{2,rr}^{*}$ shows a crest at the disturbance wave and a trough behind it. (ii) The disturbance wave is transformed into a detonation wave from $t=0.1850$ to $0.3000$. The heat release rate of the chemical reactant increases sharply, and the physical quantities ($\rho$, $T$, $P$, $u_{r}$) increase suddenly. Meanwhile the number of crests (troughs) of $\Delta_{2,rr}^{*}$ increases from one to two. (iii) The implosion and explosion waves coexist from $t=0.3000$ to $0.3475$. With the amplitude defined as the distance from crest to trough of $\Delta_{2,rr}^{*}$, the amplitudes at the implosion and explosion waves increase dramatically; in particular, the former increases from $2.78\times10^{-2}$ to $1.39$. For a clearer view, the plot of $\Delta_{2,rr}^{*}$ at $t=0.3475$ is shown separately within panel (f). (iv) From $t=0.3475$ to $0.4000$, the implosion wave passes the center of the disc, then travels outwards and changes into a shock wave as the chemical reaction completes. It is then transformed into a perturbation wave, dissipates and finally vanishes. Meanwhile, the explosion wave propagates outwards and its peak rises further. The amplitude of $\Delta_{2,rr}^{*}$ at the inner outgoing wave decreases, while that at the explosion wave increases.
\begin{figure}
\begin{center}
\includegraphics[bbllx=0pt,bblly=0pt,bburx=577pt,bbury=386pt,width=0.99\textwidth]{./Fig09.eps}
\end{center}
\caption{For the case $R_{1}=0.050$, physical quantities versus radius in explosion process at times, $t=0.000$, $0.025$, $0.250$ and $0.500$, respectively: (a) $\rho$; (b) $T$; (c) $P$; (d) $u_{r}$; (e) $\lambda$; (f) $\Delta_{2,rr}^{*}$. }
\label{Fig09}
\end{figure}
Figure \ref{Fig09} shows a direct explosion phenomenon, with ignition energy large enough. Panel (a) shows that the density behind the detonation front is lower than that outside due to the disc geometric effect. Panels (b) and (c) show higher temperature and pressure inside. Panel (d) shows that the velocity inside is close to zero. Panel (e) shows that the chemical reaction proceeds steadily. Panel (f) shows that the amplitude of $\Delta_{2,rr}^{*}$ increases gradually. The detonation wave becomes wider and wider in both the radial and azimuthal directions.
In fact, there is competition between the chemical reaction, macroscopic transport, thermal diffusion and geometric convergence or divergence in the detonation phenomenon. The chemical reaction increases the temperature, while thermal diffusion decreases the temperature around the detonation wave. If enough thermal energy is transformed from chemical energy, the detonation proceeds; otherwise, it extinguishes. In particular, in the explosion case, if the geometric divergence effect dominates, extinction occurs; as the combustion front propagates outwards, geometric divergence has less effect, and the existing partial combustion may develop into complete combustion, or even detonation.
\section{Conclusions and discussions}
A polar coordinate Lattice Boltzmann Kinetic Model (LBKM) for detonation phenomena is presented. Within this novel model, the change of the discrete distribution function due to the local chemical reaction is given directly in the modified lattice Boltzmann equation, which recovers the Navier-Stokes equations including chemical reaction via the Chapman-Enskog expansion. The chemical reaction is described by Cochran's rate function. A combined scheme is used to treat the LB equation and Cochran's rate equation: the temporal evolution of the collision effects in the LB equation and the temporal evolution of the chemical reaction in Cochran's rate equation are both calculated analytically, while the convection terms in both equations are treated with the first-order upwind scheme. From the numerical point of view, compared with the LBKM in Ref. \cite{XuLin2014}, the present model has the same accuracy but is simpler. From the physical or chemical point of view, compared with the Lee-Tarver model used in previous work \cite{XuYan2013}, the pressure effects on the reaction rate are taken into account in Cochran's rate function.
Compared with the previous work in Ref. \cite{XuLin2014}, the inner boundary condition for the disc computational domain is treated more naturally. In Ref. \cite{XuLin2014} the disc computational domain is approximated by an annular domain whose inner radius approaches zero; consequently, ghost nodes must be constructed for the inner boundary condition. In this work the center of the disc computational domain is treated as an inner point of the system. For a periodic system, no ghost node is needed for the inner boundary condition. Other boundaries are treated with the same method as in \cite{XuLin2014}.
The simulation results for physical quantities in the steady detonation process show satisfactory agreement with analytical solutions. Typical implosion and explosion phenomena are simulated. By changing the initial ignition energy, we investigate three cases of explosion, including one exhibiting extinction. It is interesting to find that the geometric convergence or divergence effect makes the detonation procedure much more complex. The competition between the chemical reaction, the macroscopic transport, the thermal diffusion and the geometric convergence or divergence determines the ignition process. If enough thermal energy is transformed from chemical energy, the detonation proceeds; otherwise, it extinguishes. In particular, in the explosion case, if the geometric divergence effect dominates, extinction occurs; as the combustion front propagates outwards, geometric divergence has less effect, and the existing partial combustion may develop into complete combustion, or even detonation.
Moreover, the non-equilibrium behaviors in the detonation phenomenon are investigated via the velocity moments of the discrete distribution functions. The system at the disc center always remains in thermodynamic equilibrium. The internal kinetic energies in different degrees of freedom around the detonation front do not coincide, due to the fluid viscosity; they show the maximum difference at the inflexion point where the pressure has the largest spatial derivative. The influence of shock strength on the reaction rate, and the influences of both the shock strength and the reaction rate on the amplitude of the system's departure from local thermodynamic equilibrium, are probed. The departure from equilibrium in front of the von Neumann peak results from the shock effect, while that behind the peak results from the rarefaction effect; the departure increases when the shock or rarefaction effect strengthens. Specifically, the value of $\Delta_{2,rr}^{*}$ is positive at a shock wave and negative at a rarefaction wave, which can be seen as a criterion to distinguish the two waves. In addition, the main behaviors of the actual distribution functions around the detonation wave are recovered from the numerical results for the high-order moments of the discrete distribution function.
Finally, further discussions include the following points.
(i) For combustion systems where both the reactant and the product have more than one component, a multi-distribution-function model is preferable. Such a work is in progress \cite{XuLin_multi_distribution}.
(ii) The transport properties depend on the relaxation time $\tau$, which should itself depend on physical quantities such as density and temperature, and hence be a function of space and time. This point should be investigated further in the future.
(iii) In numerical simulations the spatial and temporal steps should be small enough so that the spurious transportation is negligible compared with the physical one.
\section*{Acknowledgements}
The authors thank Prof. Cheng Wang for many helpful discussions. AX and GZ acknowledge support of the Science Foundations of National Key Laboratory of Computational Physics, National Natural Science Foundation of China and the opening project of State Key Laboratory of Explosion Science and Technology (Beijing Institute of Technology) [under Grant No. KFJJ14-1M]. YL and CL acknowledge support of National Natural Science Foundation of China [under Grant No. 11074300], National Basic Research Program of China (under Grant No. 2013CBA01504) and National Science and Technology Major Project of the Ministry of Science and Technology of China (under Grant No.2011ZX05038-001).
\section{Introduction}
\label{sec:intro}
A gravitating mass that traverses a sea of other particles builds up an
overdense wake behind it. This wake tugs back on the mass, providing an
effective drag. The background sea may itself consist of non-interacting point
masses. For this collisionless case, \citet{c43} first derived the dynamical
friction force. His celebrated result has since found application in a great
many astrophysical problems, ranging from mass segregation in dense star
clusters \citep{pm02} to planet migration through interaction with
planetesimals \citep{dye03}.
The background environment may also be an extended gas cloud. This type of
dynamical friction has also been invoked in a variety of contexts, such as
black hole mergers in galactic nuclei \citep{d06}, the heating of the
intracluster medium by infalling galaxies \citep{e04}, and the migration of giant planets within a protoplanetary disk \citep{odi10}. When the ambient medium
is a gas, pressure gradients influence the formation of the wake behind the
gravitating object. Surprisingly, the general determination of
gaseous dynamical friction, for arbitrary Mach number of the perturbing mass,
has not yet been achieved. There is substantial agreement when the mass is
traveling hypersonically with respect to the gas. In this limit, the force
varies as $V^{-2}$, where $V$ is the speed of the perturber
\citep{d64,rs71,rs80,o99}.
In all studies thus far, the authors first calculated the properties of the
wake by treating it as a linear perturbation of the background gas. The density perturbation is symmetric upstream and downstream when the mass is moving subsonically \citep{d64}, leading \citet{rs80} to conclude that the friction force is zero in this case. \citet{o99} obtained the force through direct
integration over surrounding fluid elements, using their respective density
enhancements. She added the constraint that the projectile's gravitational
field only be switched on for a finite time interval $\Delta t$. With this
device, she first found a nonzero result even in the subsonic regime. The force increases with $V$, and logarithmically diverges at a Mach number of unity.
Interestingly, the quantity $\Delta t$ does not appear in Ostriker's final expression for the subsonic force. This fact indicates that the artifice of a finite time interval
was unnecessary and that a steady-state analysis is applicable. Indeed, the
force attains a steady-state value in the numerical simulations of
\citet{sb99}. The divergence at a Mach number of unity in the
analytical expression further suggests that physical understanding of the
problem is incomplete.
In this paper, we revisit the subject of dynamical friction, concentrating
entirely on the less studied subsonic case. We take the perturbing body to be
a point mass $M$ traveling through an initially uniform gas. The previous
studies cited also ostensibly dealt with point masses, in the sense that the
physical size of the body was ignored. However, it was assumed, either
tacitly or explicitly, that the object's radius $R$ far exceeds the accretion
radius $r_{\rm acc}$, conventionally defined as
\hbox{$r_{\rm acc}\,\equiv\,2\,G\,M/V^2$}. It is true that when
\hbox{$R \,\gg\,r_{\rm acc}$}, the gravitational force from the object is so
weak that mass accretion by infall is negligible. Under these circumstances, however,
the primary drag on the body is not from dynamical friction, but from direct
impact by the gas, a fact sometimes overlooked.\footnote{\citet{rs71} recognized that both drag forces act on galaxies moving supersonically through intracluster gas (see their eq. 5). They extended the linear analysis of the flow into the nonlinear regime, utilizing a similarity solution. However, their focus was the X-ray emission from the wake and bowshock, rather than the actual motion of the galaxies.}
The conventionally assumed inequality marginally holds in one situation
commonly envisioned, galaxies within intracluster gas
(\hbox{$R\,\sim\,r_{\rm acc}\,\sim\,10^{24}\,\,{\rm cm}$}). However, it fails badly in other contexts, e.g., supermassive black holes within galaxies
(\hbox{$R\,\sim\,10^{11}\,\,{\rm cm}$}, \hbox{$r_{\rm acc}\,\sim\,10^{19}\,\,{\rm cm}$})
or gas giant planets inside circumstellar disks (\hbox{$R\,\sim\,10^{10}\,\,{\rm cm}$},
\hbox{$r_{\rm acc}\,\sim\,10^{13}\,\,{\rm cm}$}). When \hbox{$R\,\ll\,r_{\rm acc}$},
as we assume here, dynamical friction is indeed the main drag force. The
relative density enhancement in the wake is not small, as needed for linear
theory \citep[see, e.g.,][]{kk09}, and mass accretion cannot be neglected.
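The orders of magnitude quoted above are easy to reproduce from \hbox{$r_{\rm acc}=2\,G\,M/V^2$}. A quick check in cgs units (the particular masses and speeds below are illustrative choices, not values stated in the text):

```python
G = 6.674e-8            # cm^3 g^-1 s^-2
M_sun = 1.989e33        # g

def r_acc(M, V):
    """Accretion radius 2 G M / V^2, cgs units."""
    return 2.0 * G * M / V**2

# A 1e8 M_sun black hole moving at 200 km/s through galactic gas:
print(r_acc(1e8 * M_sun, 2.0e7))    # ~6.6e19 cm
# A Jupiter-mass planet drifting at ~1 km/s relative to disk gas:
print(r_acc(1.9e30, 1.0e5))         # ~2.5e13 cm
```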
Our analysis indeed pivots on the fact that the transfer of linear momentum
from the background gas to the object, which underlies the friction force, is
closely related to the transfer of mass. The problem of gas accretion onto a
moving body was addressed in a classic series of papers by \citet{hl39},
\citet{bh44}, and \citet{b52}. The final result for the accretion rate,
applicable for all Mach numbers, is the interpolation formula offered by
\citet{b52}. While not derived rigorously, the formula matches known results
in the hypersonic and stationary limits, and is broadly consistent with numerical simulations \citep[see][and references therein]{r96}.
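For orientation, the interpolation formula is usually quoted in the form
\begin{displaymath}
\dot M \,\sim\, \frac{4\pi\,G^2 M^2\,\rho_0}{\left(c_s^2\,+\,V^2\right)^{3/2}} \,\,,
\end{displaymath}
up to a prefactor of order unity that varies between authors; it reduces to the Hoyle-Lyttleton rate for \hbox{$V\,\gg\,c_s$} and to the stationary Bondi rate for \hbox{$V\,\ll\,c_s$}.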
The strategy in our paper is to determine, using perturbation theory, the
density and velocity of the gas. However, we focus not on the wake, as in previous studies, but on
a region {\it far} from the object, where its gravity is relatively weak.
Extending the perturbation analysis into the nonlinear regime, we calculate the net momentum flux onto the accreting object and derive
analytically that the force from dynamical friction is $\dot M\,V$, where
$\dot M$ is the mass accretion rate onto the object. Adopting an analytic form for this rate, the drag force follows. This force first rises with $V$ and then falls, remaining finite at all Mach numbers. Moreover, there is
a contribution from the direct accretion of fluid momentum onto the body. This
contribution is absent in the stellar dynamical problem, but is here comparable
to the gravitational tug from the wake.
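Given the relation between the friction force and the accretion rate stated above, \hbox{$F = \dot M\,V$}, the qualitative behavior of the force is easy to preview. Assuming, purely for illustration, an unmodified interpolation rate \hbox{$\dot M\propto (c_s^2+V^2)^{-3/2}$} (rather than the modified version adopted later in the paper), the nondimensional force rises, peaks below Mach 1, and remains finite everywhere:

```python
import numpy as np

beta = np.linspace(0.01, 2.0, 400)        # Mach number V/c_s
F = beta / (1.0 + beta**2)**1.5           # F ~ Mdot * V, nondimensional

print(beta[np.argmax(F)])                 # peak near beta = 1/sqrt(2) ~ 0.71
print(F[np.abs(beta - 1.0).argmin()])     # finite at Mach 1, no divergence
```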
In Section~\ref{sec:method} below, we introduce a perturbative series expansion to analyze the
far-field density and velocity. In Section~\ref{sec:outer}, we use this expansion to derive
a hierarchy of dynamical equations, of which we need only solve the first two
sets. Section~\ref{sec:mass} shows how the mass accretion rate is connected to solutions of
our second-order equations. In Section~\ref{sec:friction}, we similarly relate the friction
force to these solutions, and derive the central connection between this force
and the accretion rate. Using a modified version of the Bondi interpolation formula for the latter, we find
explicitly the deceleration of an isolated mass in Section~\ref{sec:velocity}. Finally, Section~\ref{sec:summary}
compares our result with existing numerical simulations and suggests future
avenues of inquiry.
\section{Outer Flow: Method of Solution}
\label{sec:method}
\subsection{Physical Assumptions}
Let the gravitating mass $M$ travel in a straight line with speed $V$ through the
extended gas cloud. Following previous analytic studies of dynamical friction, we assume the gas to be isothermal, with an associated sound speed
$c_s$. Very far from the mass, the density is spatially uniform and has
the value $\rho_0$. We choose a reference frame whose origin is anchored on the
perturbing mass. In this frame, it is the gas that has speed $V$ far from
the mass. We let the gas velocity be directed along the $z$-axis, and employ
spherical coordinates $r$ and $\theta$ (see Fig.~\ref{fig:coord}). We now seek a steady-state, axisymmetric solution for the flow, which is
taken to be inviscid. We neglect the self-gravity of the gas, and assume that
each fluid element feels only a pressure gradient and the gravitational pull of the point mass.
\begin{figure}
\plotone{figure1}
\caption{Spherical coordinate system centered on a gravitating body of mass $M$. The gas is isothermal, and its velocity far upstream is $\beta\,c_{\rm s}\ (\beta<1)$. The upstream direction corresponds to $\theta=\pi$, and downstream to $\theta=0$. While the figure shows the mass to have a finite physical size, we assume it to be a point particle in our analysis.}
\label{fig:coord}
\end{figure}
Strictly speaking, there is no steady flow, as this mass decelerates and $V$
continually changes \citep[e.g.,][]{f10}. What, then, is the meaning of the force we are calculating? Imagine the object being dragged by a massless string through the gas at the
fixed speed $V$. After a long time, a steady-state flow is indeed established
throughout the surrounding gas, and the tension in the string approaches a
constant value. This limiting tension is the dynamical friction force being
calculated here.
Return now to the actual case, in which there is no string and the mass
decelerates. As stated previously, there is no global, steady-state flow. The
flow is quasi-steady, however, within some distance over which the altered
motion of the mass is communicated by sound waves. We shall quantify this
distance later, after we have established the flow assuming steady-state
conditions.
One important property of the flow is that it is irrotational. Euler's
equation in steady state may be written
\begin{equation}
\boldsymbol{u} \times \boldsymbol{\omega} \,=\, \boldsymbol{\nabla} B \,\,,
\end{equation}
where $\boldsymbol{u}$ is the fluid velocity, \hbox{$\boldsymbol{\omega} \,\equiv\boldsymbol{\nabla}\times\boldsymbol{u}$} is the vorticity, and the
Bernoulli function $B$ is
\begin{equation}
B \,\equiv\,{1\over 2}\,u^2 \,+\,
c_s^2\,\,{\rm ln}\left(\,{\rho\over{\rho_0}}\right) \,-\,\frac{G\,M}{r}
\,\,.
\end{equation}
Both the fluid speed and density approach constant values far from the mass. Hence, $B$ is a spatial constant throughout the flow, and
\begin{equation}
\boldsymbol{u} \times \boldsymbol{\omega} \,=\, 0 \,\,.
\end{equation}
Since $\boldsymbol{u}$ is a poloidal vector, the vorticity $\boldsymbol{\omega}$ is toroidal. The last
equation then implies that \hbox{$\boldsymbol{\omega}\,=\,0$}, as claimed. We will not need
to invoke the irrotational character of the flow until Section~\ref{sec:friction}, when we explicitly
evaluate the dynamical friction force.
Throughout our analysis, it will be more convenient to employ, not the vector
fluid velocity $\boldsymbol{u} (r,\theta)$, but the scalar stream function
$\psi (r,\theta)$. We may recover the individual velocity components from the
stream function through the standard relations
\begin{eqnarray}
\label{eqn:ur}
u_r \,&=&\, {1\over{\rho\,r^2\,{\rm sin}\,\theta}}\,\,
{{\partial\psi}\over{\partial\theta}}\,\,, \\
\label{eqn:ut}
u_\theta\,&=&\,{-1\over{\rho\,r\,{\rm sin}\,\theta}}\,\,
{{\partial\psi}\over{\partial r}} \,\,,
\end{eqnarray}
where \hbox{$\rho\,=\,\rho (r,\theta)$} is the mass density. The velocity, as
given by equations~(\ref{eqn:ur}) and (\ref{eqn:ut}), automatically obeys mass continuity:
\begin{eqnarray}
0\,&=&\,\boldsymbol{\nabla} \boldsymbol{\cdot} (\rho\,\boldsymbol{u})\,\,, \nonumber \\
0\,&=&\,{1\over r^2}\,{\partial{\phantom r}\over{\partial r}}
\left(\rho\,r^2\,u_r\right) \,+\, {1\over{r\,{\rm sin}\,\theta}}\,
{\partial{\phantom\theta}\over{\partial\theta}}
\left(\rho\,{\rm sin}\,\theta\,u_\theta\right) \,\,.
\end{eqnarray}
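That the representation~(\ref{eqn:ur})--(\ref{eqn:ut}) satisfies continuity identically, for arbitrary $\psi$ and $\rho$, is quickly confirmed with a computer algebra system (the sympy sketch below is illustrative):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
psi = sp.Function('psi')(r, th)      # stream function
rho = sp.Function('rho')(r, th)      # mass density

u_r  =  sp.diff(psi, th) / (rho * r**2 * sp.sin(th))
u_th = -sp.diff(psi, r)  / (rho * r * sp.sin(th))

div = (sp.diff(rho * r**2 * u_r, r) / r**2
       + sp.diff(rho * sp.sin(th) * u_th, th) / (r * sp.sin(th)))
print(sp.simplify(div))              # 0 for arbitrary psi, rho
```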
\subsection{Perturbation Expansion}
Far from the mass, as the density approaches $\rho_0$, the velocity has only a
$z$-component, which is $V$. Equivalently, we have in this limit
\hbox{$u_r\,\approx\,V\,{\rm cos}\,\theta$} and
\hbox{$u_\theta\,\approx\,-V\,{\rm sin}\,\theta$}. It follows that the
far-field limit of the stream function is
\begin{equation}\label{eqn:sfunc1}
\psi \,\approx \,{{\rho_0\,V\,r^2\,{\rm sin}^2\,\theta}\over 2} \,\,.
\end{equation}
For a more complete analysis of the flow in this region, we take equation~(\ref{eqn:sfunc1})
to represent the leading term of a perturbation expansion. Introducing the
sonic radius \hbox{$r_s\,\equiv\,G\,M/c_s^2$}, we first rewrite equation~(\ref{eqn:sfunc1})
as
\begin{equation}
\psi \,\approx\, \rho_0\,c_s\,r_s^2\,\beta\,
\left({r\over r_s}\right)^2\,{{{\rm sin}^2\,\theta}\over 2} \,\,,
\end{equation}
where $\beta$ is the Mach number of the projectile mass:
\begin{equation}
\beta \,\equiv\, {V\over c_s} \,\,.
\end{equation}
Our perturbation expansion of $\psi$ is then given by
\begin{equation}
\psi\,=\,\rho_0\,c_s\,r_s^2\,\left[
f_2\left(r\over r_s\right)^2 \,+\,
f_1\left(r\over r_s\right) \,+\,f_0\,+\,
f_{-1}\left(r\over r_s\right)^{-1} \,+\, ...\,\right] \,\,,
\end{equation}
where
\begin{equation}\label{eqn:f2}
f_2 \,\equiv\,{{\beta\,\,{\rm sin}^2\,\theta}\over 2} \,\,,
\end{equation}
and where $f_1$, $f_0$, $f_{-1}$, etc. are still unknown, nondimensional
functions of $\beta$ and $\theta$. Similarly, we expand the density as
\begin{equation}
\rho \,=\, \rho_0\,\left[ 1\,+\,
g_{-1}\left({r\over r_s}\right)^{-1} \,+\,
g_{-2}\left({r\over r_s}\right)^{-2} \,+\,
g_{-3}\left({r\over r_s}\right)^{-3} \,+\,...\,\right] \,\,.
\end{equation}
Here, $g_{-1}$, $g_{-2}$, $g_{-3}$, etc. are also nondimensional functions of
$\beta$ and $\theta$, all yet to be found. Both expansions are only valid for
\hbox{$r\,\gg\,r_s$}, the inequality that defines our outer region. We further
assume that the physical radius of the object obeys \hbox{$R\,\ll\,r_s$}. Since
the motion is subsonic (\hbox{$V\,<\,c_s$}), it follows that
\hbox{$R\,\ll\,r_{\rm acc}$}, so that mass accretion is significant.
At this point, it is convenient to cast all our variables into nondimensional
form. We let the fiducial radius, density, and speed be $r_s$,
$\rho_0$, and $c_{\rm s}$, respectively. Similarly, the stream function is normalized to $\rho_0\,c_s\,r_s^2$. Then equations~(\ref{eqn:ur}) and
(\ref{eqn:ut}) relating the velocity to the stream function remain the same
nondimensionally. We shall not employ a new notation for nondimensional
variables, but make it clear whenever we switch back to dimensional relations.
With this convention, the nondimensional expansions for the stream function and
density simplify to
\begin{eqnarray}\label{eqn:psind}
\psi \,&=&\, f_2\ r^2\,\,+\,\,f_1\ r \,\,+\,\, f_0 \,\,+\,\,
f_{-1}\ r^{-1}\,\,+\,... \,\,, \\
\label{eqn:rhond}
\rho \,&=&\, 1 \,\,+\,\,g_{-1}\ r^{-1} \,\,+\,\, g_{-2}\ r^{-2}\,\,+\,\,
g_{-3}\ r^{-3}\,\,+\,\,... \,\,.
\end{eqnarray}
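As a consistency check on the normalization, retaining only the leading term \hbox{$\psi = f_2\,r^2$} with \hbox{$\rho = 1$} must reproduce the uniform far-field flow \hbox{$u_r = \beta\,{\rm cos}\,\theta$}, \hbox{$u_\theta = -\beta\,{\rm sin}\,\theta$}. Symbolically (illustrative sympy sketch):

```python
import sympy as sp

r, th, beta = sp.symbols('r theta beta', positive=True)
psi = beta * sp.sin(th)**2 / 2 * r**2    # f_2 * r^2, with rho = 1

u_r  =  sp.diff(psi, th) / (r**2 * sp.sin(th))
u_th = -sp.diff(psi, r)  / (r * sp.sin(th))

print(sp.simplify(u_r))     # beta*cos(theta)
print(sp.simplify(u_th))    # -beta*sin(theta)
```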
By adopting these perturbation expansions, we have effectively limited our
analysis to the subsonic regime. For \hbox{$\beta\,>\,1$}, we expect the
fluid variables or their derivatives to be discontinuous across the Mach
cone, whose opening angle is given by
\hbox{${\rm sin}\,\theta\,=\,\beta^{-1}$} \citep[see, e.g.,][]{rs71}. It would
thus be necessary to adopt two separate expansions for $\psi$ and $\rho$,
one inside and one outside the Mach cone. To avoid this complication, and since
we are primarily interested in the subsonic regime in any event, we assume that
\hbox{$\beta\,<\,1$} and retain the single expansions.
\subsection{Boundary Conditions}
By symmetry, the upstream axis of the flow, defined by \hbox{$\theta\,=\,\pi$},
is a streamline for any $\beta$-value. That is, $\psi(r,\pi)$ is independent
of $r$. The actual value of $\psi (r,\pi)$ is immaterial, reflecting the fact
that the full function $\psi (r,\theta)$ can have an arbitrary additive
constant without affecting the velocities. For convenience, we set
\hbox{$\psi (r,\pi)\,=\,0$}, and note from equation~(\ref{eqn:f2}) that $f_2 (\pi)$
already vanishes. From equation~(\ref{eqn:psind}) for the general expansion, we require that
\hbox{$f_i (\pi) \,=\,0$}, for \hbox{$i\,=\,1,\,0\,,-1,\,-2$}, etc.
A second set of boundary conditions pertains to the behavior of the velocity
$\boldsymbol{u}$. Let us focus again on the upstream axis. The righthand
sides of both equations~(\ref{eqn:ur}) and (\ref{eqn:ut}) contain \hbox{${\rm sin}\,\theta$} in the
denominator. Since the density $\rho$ is finite at \hbox{$\theta\,=\,\pi$},
where \hbox{${\rm sin}\,\theta$} vanishes, both
\hbox{$\partial\psi/\partial\theta$} and \hbox{$\partial\psi/\partial r$} must
tend to zero as $\theta$ approaches $\pi$, at least as fast as
\hbox{${\rm sin}\,\theta$}.
Considering first \hbox{$\partial\psi/\partial\theta$}, we see that
\hbox{$\partial f_2/\partial\theta\,=
\,\beta\,{\rm sin}\,\theta\,{\rm cos}\,\theta$}, which properly vanishes. We
must further demand that \hbox{$f_i^\prime\,(\pi)\,=\,0$}, for
\hbox{$i\,=\,1,\,0\,,-1,\,-2$}, etc. Turning to
\hbox{$\partial\psi/\partial r$}, the term involving $f_2$ still goes to zero,
while \hbox{$f_0 (\theta)$} itself vanishes when taking the $r$-derivative of
$\psi$. We are already requiring that \hbox{$f_i\,(\pi)\,=\,0$} for all other
$i$. Thus, the stipulation that \hbox{$\psi\,(r, \pi)\,=\,0$} implies that
\hbox{$\partial\psi/\partial r$} also vanishes at \hbox{$\theta\,=\,\pi$}, so
that $u_\theta$ does not diverge.
Approaching the downstream axis, \hbox{${\rm sin}\,\theta$} again vanishes as
$\theta$ goes to zero. By analogous reasoning, we require that
\hbox{$f_i^\prime\,(0)\,=\,0$} for \hbox{$i\,=\,1,\,0\,,-1,\,-2$}, etc. To
ensure the regularity of $u_\theta$, we further need \hbox{$f_i\,(0)\,=\,0$},
for \hbox{$i\,=\,1,\,-1,\,-2$}, etc. Again, the term $f_0$ disappears when
taking the $r$-derivative of $\psi$, and there is no a priori restriction on
\hbox{$f_0\,(0)$}. Indeed, this quantity sets the mass accretion rate onto the
moving body, as we later demonstrate. In summary, our boundary conditions are:
\hbox{$f_i\,(\pi)\,=\,f_i^\prime\,(\pi)\,=\,f_i^\prime\,(0)\,=\,0$}, for
\hbox{$i\,=\,1,\,0\,,-1,\,-2$}, etc., and \hbox{$f_i\,(0)\,=\,0$} for
\hbox{$i\,=\,1,\,-1,\,-2$}, etc.
\section{Outer Flow: Results}
\label{sec:outer}
\subsection{First-Order Equations}
Our inviscid flow obeys Euler's equation, which we write in spherical
coordinates. The $r$- and $\theta$-components of this vector equation are
\begin{eqnarray}\label{eqn:eulerr}
u_r\,{{\partial u_r}\over{\partial r}}\,+\,{u_\theta\over r}\,
{{\partial u_r}\over{\partial\theta}}\,-\,{{u_\theta^2}\over r}\,&=&\,
-{1\over\rho}\,{{\partial\rho}\over{\partial r}}\,-\,{1\over r^2}\,\,, \\
\label{eqn:eulert}
u_r\,{{\partial u_\theta}\over{\partial r}}\,+\,{u_\theta\over r}\,
{{\partial u_\theta}\over{\partial\theta}}\,+\,{{u_r\,u_\theta}\over r}\,&=&\,
-{1\over{\rho\,r}}\,{{\partial\rho}\over{\partial\theta}}\,\,,
\end{eqnarray}
where $u_r$ and $u_\theta$ are given in terms of $\psi$ by equations~(\ref{eqn:ur}) and
(\ref{eqn:ut}). Our strategy is to substitute the perturbation expansions~(\ref{eqn:psind}) and (\ref{eqn:rhond})
into Euler's equations. For each power of $r$, we demand that the
coefficients on the two sides of each equation match. In this way, we obtain
a hierarchy of coupled equations for the functions $f$ and $g$.
Before proceeding, we first note that $u_r$ and $u_\theta$ are both
proportional to $\rho^{-1}$. To avoid expanding inverse powers of the density,
we multiply equations~(\ref{eqn:eulerr}) and (\ref{eqn:eulert}) through by $\rho^3$. Replacing the
velocity components by derivatives of $\psi$ results in complex expressions
that we shall not write out in full. We simply note, as an example, that the
first lefthand term in the $r$-component of Euler's equation is
\begin{equation}
\rho^3\,u_r\,{{\partial u_r}\over{\partial r}} \,=\,
-{{2\,\rho}\over{r^5\,{\rm sin}^2\,\theta}}
\left({{\partial\psi}\over{\partial\theta}}\right)^2 \,-\,
{1\over{r^4\,{\rm sin}^2\,\theta}}\,
{{\partial\rho}\over{\partial r}}
\left({{\partial\psi}\over{\partial\theta}}\right)^2 \,+\,
{1\over{r^4\,{\rm sin}^2\,\theta}}\,
{{\partial\psi}\over{\partial\theta}}\,
{{\partial^2 \psi}\over{\partial r \partial\theta}} \,\,.
\end{equation}
After substituting the series expansions for $\psi$ and $\rho$, we find that
the highest power of $r$ is $r^{-1}$. At this order, the coefficients on
both sides of Euler's equations vanish identically.
Matching the coefficients of $r^{-2}$, we obtain the {\it first-order}
equations. From the $r$-component of Euler's equation, we find
\begin{equation}\label{eqn:firstr}
-\beta\,f_1^{\prime\prime} \,-\, \beta\,f_1 \,+\,
\beta^2\,{\rm sin}\,\theta\,{\rm cos}\,\theta\,\,g_{-1}^\prime \,+\,
\left(\beta^2\,{\rm cos}^2\,\theta\,\,-\,1\right)\,g_{-1} \,+\,1 \,=\,0 \,\,,
\end{equation}
while the $\theta$-component yields
\begin{equation}\label{eqn:firstt}
\left(1\,-\,\beta^2\,{\rm sin}^2\,\theta\right)\,g_{-1}^\prime \,-\,
\beta^2\,{\rm sin}\,\theta\,{\rm cos}\,\theta\,\,g_{-1}\,=\,0 \,\,.
\end{equation}
In both of these equations and those that follow, a prime denotes a $\theta$-derivative.
These equations govern the first non-trivial terms in the expansions for $\psi$
and $\rho$. Their solution, therefore, must be equivalent to that obtained
through the more traditional, linear analysis. Integrating equation~(\ref{eqn:firstt}), we
find
\begin{equation}
g_{-1} \,=\, {C\over{\left(1\,-\,\beta^2\,{\rm sin}^2\,\theta\right)^{1/2}}}
\,\,,
\end{equation}
where $C$ is a constant, as yet undetermined. In the subsonic regime, the
denominator on the righthand side does not vanish for any $\theta$, and
$g_{-1}$ remains finite.
Substituting this expression for $g_{-1}$ into (\ref{eqn:firstr}) gives the equation obeyed
by $f_1$:
\begin{equation}
f_1^{\prime\prime} \,+\, f_1 \,=\, {1\over\beta} \,-\,
{{C\,\left(1\,-\,\beta^2\right)}\over
{\beta \left(1\,-\,\beta^2\,{\rm sin}^2\,\theta\right)^{3/2}}} \,\,.
\end{equation}
A particular solution of this equation may be found through the method of
variation of parameters. Adding the two homogeneous solutions yields
\begin{equation}
f_1 \,=\,{1\over\beta} \,-\,
{{C\,\left(1\,-\,\beta^2\,{\rm sin}^2\,\theta\right)^{1/2}}\over\beta} \,+\,
D\,{\rm cos}\,\theta \,+\, E\,{\rm sin}\,\theta \,\,,
\end{equation}
where $D$ and $E$ are also constants.
We proceed to evaluate the constants through application of the boundary
conditions. The requirement that \hbox{$f_1 (\pi)\,=\,0$} gives
\begin{equation}
C\,=\,1\,-\,\beta\,D \,\,.
\end{equation}
Similarly, we have \hbox{$f_1 (0)\,=\,0$}, yielding
\begin{equation}
C\,=\,1\,+\,\beta\,D \,\,.
\end{equation}
It follows, from these last two relations, that \hbox{$C\,=\,1$} and
\hbox{$D\,=\,0$}. Finally, we have \hbox{$f_1^\prime (\pi)\,=\,0$}, from which
we infer that \hbox{$E\,=\,0$}. It may be verified that the boundary
condition \hbox{$f_1^\prime (0) \,=\, 0$} is then also satisfied.
In summary, the first-order density and stream function perturbations are
\begin{eqnarray}\label{eqn:gm1}
g_{-1} \,&=&\,
{1\over{\left(1\,-\,\beta^2\,{\rm sin}^2\,\theta\right)^{1/2}}}\,\,, \\
\label{eqn:f1}
f_1 \,&=&\, {{1\,-\,\left(1\,-\,\beta^2\,{\rm sin}^2\,\theta\right)^{1/2}}\over
\beta} \,\,.
\end{eqnarray}
Notice that \hbox{$g_{-1}^\prime\,=\,0$} at both \hbox{$\theta\,=\,0$} and
$\pi$, implying that the density profile is flat (i.e., does not have a cusp)
on either the upstream or downstream axis. Our expression for $g_{-1}$ is
consistent with the linear density perturbation obtained by \citet{d64},
\citet{rs71}, and \citet{o99}. Notice that $g_{-1}(\pi/2)$ diverges as $\beta$ approaches unity, signifying the birth of the Mach cone. Our $f_1$, in combination with $g_{-1}$, yields the linear velocity components given in equations~(25) and (26) of \citet{d64}.
Figure~\ref{fig:1storder} displays the streamlines ({\it solid curves}) and isodensity contours
({\it dashed curves}) for the outer flow, including only the first-order
perturbations. The light, dotted circle marks the sonic radius,
\hbox{$r\,=\,1$}; the solution is only accurate well outside this sphere.
Notice how all curves and contours are symmetric about the
\hbox{$\theta\,=\,\pi/2$} plane. The streamlines, in particular, show the
fluid veering toward the mass, but then turning away again. We cannot detect
true accretion of mass or linear momentum until we include the next higher-order perturbations.
\begin{figure}
\plotone{figure2}
\caption{Streamlines ({\it solid}) and density contours ({\it dashed}) for the $\beta=0.5$ flow, including only first-order perturbations. All quantities shown are nondimensional. The density contours correspond to $\rho = 1.2, 1.4,$ and 1.6, while adjacent streamlines enclose equal mass fluxes. The inner dotted circle marks the sonic radius. Since the streamlines and density contours are symmetric upstream and downstream, we cannot determine the true accretion rates of mass and momentum without higher-order perturbations. }
\label{fig:1storder}
\end{figure}
\subsection{Second-Order Equations}
We next equate coefficients of $r^{-3}$ in both components of Euler's equation.
From the $r$-component, equation~(\ref{eqn:eulerr}), we obtain one relation between $f_0$ and
$g_{-2}$:
\begin{equation}\label{eqn:secondr}
-\beta\,f_0^{\prime\prime} \,-\, \beta\,{\rm cot}\,\theta\,f_0^\prime \,+\,
\beta^2\,{\rm sin}\,\theta\,{\rm cos}\,\theta\,\,g_{-2}^\prime \,+\,
\left(2\,\beta^2\,{\rm cos}^2\,\theta\,-\,2\right)\,g_{-2}\,=\,
{\cal A}_1 \,+\, {\cal A}_2 \,+\, {\cal A}_3 \,\,.
\end{equation}
Here, ${\cal A}_1$, ${\cal A}_2$, and ${\cal A}_3$ are expressions involving
$f_1$ and $g_{-1}$:
\begin{eqnarray}
{\cal A}_1 \,&\equiv&\, {f_1^2\over{{\rm sin}^2\,\theta}} \,-\,
{{f_1\,f_1^\prime\,{\rm cos}\,\theta}\over{{\rm sin}^3\,\theta}} \,+\,
{{\left(f_1^\prime\right)^2}\over{{\rm sin}^2\,\theta}} \,+\,
{{f_1\,f_1^{\prime\prime}}\over{{\rm sin}^2\,\theta}}\,\,,
\\
{\cal A}_2 \,&\equiv&\, \beta\,f_1\,\,g_{-1} \,-\,
2\,\beta\,f_1^\prime\,\,g_{-1}\,{\rm cot}\,\theta \,-\,
\beta\,f_1\,\,g_{-1}^\prime\,{\rm cot}\,\theta \,-\,
\beta\,f_1^\prime\,\,g_{-1}^\prime \,+\,
\beta\,f_1^{\prime\prime}\,\,g_{-1} \,\,,
\\
{\cal A}_3 \,&\equiv&\, 2\,g_{-1}^2 \,-\, 3\,g_{-1}
\,\,.
\end{eqnarray}
From the $\theta$-component, equation~(\ref{eqn:eulert}), we obtain a second relation between
$f_0$ and $g_{-2}$:
\begin{equation}\label{eqn:secondt}
-\beta\,f_0^\prime \,+\, {\cal D}\,g_{-2}^\prime \,-\,
2\,\beta^2\,{\rm sin}\,\theta\,\,{\rm cos}\,\theta\,\,g_{-2} \,=\,
{\cal B}_1 \,+\, {\cal B}_2 \,+\, {\cal B}_3
\,\,,
\end{equation}
where we have defined
\hbox{${\cal D}\,\equiv\,1\,-\,\beta^2\,{\rm sin}^2\,\theta$}, and where the
three terms on the righthand side are again combinations of $f_1$ and $g_{-1}$:
\begin{eqnarray}
{\cal B}_1 \,&\equiv&\, f_1^2\,\frac{{\rm cot}\,\theta}{\sin^2\theta} \,-\,
{{f_1\,f_1^\prime}\over{{\rm sin}^2\,\theta}} \,\,,
\\
{\cal B}_2 \,&\equiv&\, \beta\,f_1\,\,g_{-1}\,{\rm cot}\,\theta \,\,+\,\,
\beta\,f_1^\prime\,\,g_{-1} \,\,+\,\,2\,\beta\,f_1\,g_{-1}^\prime \,\,,
\\
{\cal B}_3 \,&\equiv&\, -2\,g_{-1}\,\,g_{-1}^\prime
\,\,.
\end{eqnarray}
We have already found $f_1$ and $g_{-1}$ in the subsonic case of interest.
After substituting these expressions, equations~(\ref{eqn:gm1}) and (\ref{eqn:f1}), into the
righthand sides of equations~(\ref{eqn:secondr}) and (\ref{eqn:secondt}), the coupled equations for $f_0$
and $g_{-2}$ become:
\begin{equation} \label{eqn:secondr2}
-\beta\,f_0^{\prime\prime} \,-\, \beta\,{\rm cot}\,\theta\,f_0^\prime \,+\,
\beta^2\,{\rm sin}\,\theta\,{\rm cos}\,\theta\,\,g_{-2}^\prime \,+\,
\left(2\,\beta^2\,{\rm cos}^2\,\theta\,-\,2\right)\,g_{-2}\,=\,
{1\over{\cal D}}\,-\,{3\over\sqrt{\cal D}} \,+\,
{2\over{1+\sqrt{\cal D}}}\,\,,
\end{equation}
and
\begin{equation}\label{eqn:secondt2}
-\beta\,f_0^\prime \,+\, {\cal D}\,g_{-2}^\prime \,-\,
2\,\beta^2\,{\rm sin}\,\theta\,\,{\rm cos}\,\theta\,\,g_{-2} \,=\,
\beta^2\,{\rm sin}\,\theta\,\,{\rm cos}\,\theta
\left[-{2\over{{\cal D}^2}} \,+\, {2\over{{\cal D}^{3/2}}}\,-\,
{1\over{\cal D}} \,+\, {1\over{\left(1+\sqrt{\cal D}\right)^2}}
\right]\,\,.
\end{equation}
These last two relations constitute our {\it second-order} equations. For any
value of $\beta$, we may integrate them numerically from the upstream axis,
\hbox{$\theta\,=\,\pi$}, to the downstream axis at \hbox{$\theta\,=\,0$}. Three
initial conditions are required, of which we have already identified two:
\hbox{$f_0 (\pi) \,=\, f_0^\prime (\pi) \,=\, 0$}. As a third initial
condition, we use $g_{-2} (\pi)$, whose value at this point is arbitrary. For
each chosen value of $g_{-2} (\pi)$, we may find $f_0 (\theta)$ and
$g_{-2} (\theta)$. We thus have a one-parameter family of outer flow solutions.
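One possible Python realization of this integration (a classical fourth-order Runge-Kutta march; the step count $n$ and the small offset $\epsilon$, which keeps the ${\rm cot}\,\theta$ singularity at the poles out of the march, are arbitrary numerical choices) is:

```python
import math

def rhs(th, f0, fp, g, beta):
    # Solve eq. (secondt2) for g_{-2}' and eq. (secondr2) for f0''.
    s, c = math.sin(th), math.cos(th)
    D = 1.0 - beta**2 * s * s
    rD = math.sqrt(D)
    Rr = 1.0/D - 3.0/rD + 2.0/(1.0 + rD)
    Rt = beta**2 * s * c * (-2.0/D**2 + 2.0/D**1.5 - 1.0/D
                            + 1.0/(1.0 + rD)**2)
    gp = (beta*fp + 2.0*beta**2*s*c*g + Rt) / D
    fpp = -(c/s)*fp + beta*s*c*gp + ((2.0*beta**2*c*c - 2.0)*g - Rr) / beta
    return fp, fpp, gp

def integrate(beta, g_pi, n=10000, eps=1e-3):
    # March from just inside theta = pi to just outside theta = 0,
    # with f0(pi) = f0'(pi) = 0 and the chosen g_{-2}(pi).
    th = math.pi - eps
    h = (eps - th) / n                      # negative step
    f0, fp, g = 0.0, 0.0, g_pi
    for _ in range(n):                      # classical RK4
        a1, b1, c1 = rhs(th, f0, fp, g, beta)
        a2, b2, c2 = rhs(th + 0.5*h, f0 + 0.5*h*a1, fp + 0.5*h*b1, g + 0.5*h*c1, beta)
        a3, b3, c3 = rhs(th + 0.5*h, f0 + 0.5*h*a2, fp + 0.5*h*b2, g + 0.5*h*c2, beta)
        a4, b4, c4 = rhs(th + h, f0 + h*a3, fp + h*b3, g + h*c3, beta)
        f0 += h * (a1 + 2*a2 + 2*a3 + a4) / 6.0
        fp += h * (b1 + 2*b2 + 2*b3 + b4) / 6.0
        g  += h * (c1 + 2*c2 + 2*c3 + c4) / 6.0
        th += h
    return f0, fp, g                        # values near theta = 0

f0_0, fp_0, g_0 = integrate(0.5, 0.12)
print(f0_0)   # f0 near theta = 0 for this member of the family
```

Each choice of $g_{-2}(\pi)$ yields one member of the one-parameter family.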
In the upper panel of Figure~\ref{fig:2ndfandg} we display, for the representative value
\hbox{$\beta\,=\,0.5$}, three solutions of $g_{-2} (\theta)$. We obtained each
solution by assuming different values of $g_{-2} (\pi)$. Notice that $g_{-2}^\prime$ vanishes on the upstream and downstream axes, implying again that the density profile is flat in both regions. Notice also that all curves attain the same value at \hbox{$\theta\,=\,\pi/2$}. That is, $g_{-2} (\pi/2)$ depends only on $\beta$,
and not on the prescribed initial condition $g_{-2} (\pi)$.
The lower panel of Figure~\ref{fig:2ndfandg} shows the corresponding plots of $f_0 (\theta)$. We see that
\hbox{$f_0^\prime (0) \,=\,0$} in every case, ensuring regularity of $u_r$ on
the downstream axis. This condition was not imposed a priori, but resulted
automatically from integration of the governing equations. Specifically, the coefficient of $f_0^\prime$ in equation~(\ref{eqn:secondr2}) includes
${\rm cot}\,\theta$, which diverges at \hbox{$\theta\,=\,0$}. Since all the
other terms in this equation remain finite on the axis, $f_0^\prime (0)$ is
forced to zero.
\begin{figure}
\plotone{figure3}
\caption{Sample solutions to the second-order perturbation equations (\ref{eqn:secondr2}) and (\ref{eqn:secondt2}), for $g_{-2}(\theta)$ ({\it upper panel}) and $f_0(\theta)$ ({\it lower panel}). The initial values of $g_{-2}(\pi)$ are +1.12 ({\it dotted}), +0.12 ({\it solid}), and -0.88 ({\it dashed}). }
\label{fig:2ndfandg}
\end{figure}
Which of these solutions is the true outer flow for gas that is accreting
steadily onto the gravitating mass? In principle, one could answer this
question by continuing each solution inward, to see if the flow smoothly
crosses the sonic surface, where \hbox{$u\,=\,1$}. We shall not attempt such
a calculation here. Instead, we will proceed by determining generically
the mass accretion rate that is associated with each outer solution. Then,
given the Bondi prescription for this rate, we will indeed be able to select
the physical solution for each $\beta$.
\section{Mass Accretion Rate}
\label{sec:mass}
\subsection{Relation to Stream Function}
One could, in principle, equate coefficients of $r^{-4}$, $r^{-5}$, etc., and
thereby obtain the coupled equations linking higher-order $f$- and
$g$-variables. We will now demonstrate, however, that the first- and
second-order equations just presented are sufficient to establish the total
accretion rate onto the mass. We will then relate, in Section~\ref{sec:friction} below, this
infall rate to the desired friction force.
Refer again to Figure~\ref{fig:coord} and imagine a sphere of radius $r$ surrounding the mass. Reverting temporarily to
dimensional variables, the mass accretion rate is
\begin{eqnarray}
{\dot M} \,&=&\, -2\,\pi\int_0^\pi\!\rho\,u_r\,r^2\,{\rm sin}\,
\theta\,d\theta \\
\,&=&\, -2\,\pi\int_0^\pi\!
{{\partial\psi}\over{\partial\theta}}\,d\theta
\\
\,&=&\, 2\,\pi\,\left[\psi (r,0)\,-\,\psi (r,\pi)\right]
\,\,,
\end{eqnarray}
where we have utilized equation~(\ref{eqn:ur}) connecting $u_r$ and $\psi$. Recall that
$\psi (r,\pi)$ is actually a constant, independent of $r$, and that we have
set that constant to zero. We thus have
\begin{equation}
{\dot M} \,=\, 2\,\pi\,\psi(r,0) \,\,.
\end{equation}
To nondimensionalize this result, we first set the fiducial mass accretion rate
to \hbox{$2\,\pi\,\rho_0\,c_s\,r_s^2$}. After using the expansion of $\psi$
from equation~(\ref{eqn:psind}), we obtain the nondimensional equation
\begin{equation}
{\dot M} \,=\, f_2 (0)\,r^2 \,+\, f_1 (0)\,r \,+\, f_0 (0) \,+\,
f_{-1}(0)\,r^{-1} \,+\, f_{-2} (0)\,r^{-2} \,+\,\ldots \,\,.
\end{equation}
One of our boundary conditions, ensuring regularity of $u_\theta$ on the
downstream axis, is that \hbox{$f_i (0)\,=\,0$} for
\hbox{$i\,=\,1,\,-1,\,-2,\,$} etc. Since \hbox{$f_2 (0)\,=\,0$}, we find
the simple relation
\begin{equation}\label{eqn:mdotf0}
{\dot M} \,=\, f_0 (0) \,\,.
\end{equation}
Both sides of this equation are functions of $\beta$, although we have not
indicated the dependence explicitly. In any case, the relation confirms our
expectation that the mass accretion rate is independent of the sphere's radius
$r$ in steady-state motion.\footnote{Note, however, that the original series
expansion for $\psi$ becomes inaccurate when $r$ is not much greater than
unity.} We also now see that the higher-order variables $f_{-1}$,
$f_{-2}$, etc. play no part in determining this rate.
\subsection{Relation to Density Perturbation}
Now that we have tied the mass accretion rate to $f_0 (0)$, we can immediately
rule out a subset of outer flow solutions as being unphysical. Figure~\ref{fig:2ndfandg} shows
that, for \hbox{$g_{-2} (\pi)\,=\,1.12$}, $f_0 (0)$ is negative,
corresponding to a net mass efflux. That such a situation is even possible
emphasizes once more the need to extend the flow solution inward across the
sonic surface.
For this same choice of $g_{-2} (\pi)$, the dotted curve in the lower panel of
Figure~\ref{fig:2ndfandg} shows that
\hbox{$g_{-2}(0)\,<\,g_{-2}(\pi)$}. Indeed, we have just found one example
of a general result: the difference
\hbox{$g_{-2} (0) \,-\, g_{-2} (\pi)$} agrees in sign with $f_0 (0)$. We now
show that the two quantities are in fact equal, apart from a multiplicative
factor.
Our proof starts with the fact that the lefthand side of the second-order
equation~(\ref{eqn:secondt2}) is a perfect derivative. Specifically,
\begin{equation}
-\beta\,f_0^\prime \,+\, {\cal D}\,g_{-2}^\prime \,-\,
2\,\beta^2\,{\rm sin}\,\theta\,\,{\rm cos}\,\theta\,\,g_{-2} \,\,=\,\,
{{d{\phantom\theta}}\over{d\,\theta}}
\left(-\beta\,f_0 \,+\,{\cal D}\,g_{-2}\right) \,\,.
\end{equation}
Turning to the righthand side of the same equation, we note first that
${\rm sin}\,\theta$ is an even function of $\theta-\pi/2$, while
${\rm cos}\,\theta$ is an odd function. Since $\cal D$ depends only on
${\rm sin}\,\theta$, it has even symmetry. Inspection shows that the
righthand side of equation~(\ref{eqn:secondt2}) has odd symmetry.
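This odd symmetry may be verified directly. In the short Python sketch below (an illustrative check; the probe angles are arbitrary), {\tt Rt} denotes the righthand side of equation~(\ref{eqn:secondt2}):

```python
import math

beta = 0.5   # any subsonic value works

def Rt(th):
    # righthand side of the theta-component second-order equation
    s, c = math.sin(th), math.cos(th)
    D = 1.0 - beta**2 * s * s
    return beta**2 * s * c * (-2.0/D**2 + 2.0/D**1.5 - 1.0/D
                              + 1.0/(1.0 + math.sqrt(D))**2)

for th in (0.3, 0.8, 1.2):
    print(Rt(math.pi - th) + Rt(th))   # ~ 0: odd about theta = pi/2
```

Since ${\cal D}$ involves only ${\rm sin}\,\theta$, the antisymmetry enters solely through the single factor of ${\rm cos}\,\theta$.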
If we now integrate equation~(\ref{eqn:secondt2}) from \hbox{$\theta\,=\,\pi$} to 0, the
righthand side vanishes because of the odd symmetry of the integrand. We find
that
\begin{equation}
\left(-\beta\,f_0 \,+\,{\cal D}\,g_{-2}\right)_{\theta\,=\,\pi} \,=\,
\left(-\beta\,f_0 \,+\,{\cal D}\,g_{-2}\right)_{\theta\,=\,0} \,\,.
\end{equation}
Since \hbox{$f_0 (\pi) \,=\, 0$} and
\hbox{${\cal D} (\pi) \,=\, {\cal D} (0) \,=\, 1$}, we have
\begin{equation}
g_{-2} (\pi) \,=\, -\beta\,f_0 (0) \,+\, g_{-2} (0) \,\,,
\end{equation}
which we recast as
\begin{equation}
f_0 (0) \,=\,
{{g_{-2} (0) \,-\, g_{-2} (\pi) }\over\beta} \,\,.
\end{equation}
Recalling equation~(\ref{eqn:mdotf0}) that identifies $f_0 (0)$ as the mass accretion rate,
we now see that this rate is proportional to the difference, upstream and
downstream, of the second-order density perturbation. As $\beta$ approaches
zero, these two perturbations become equal. Indeed, the function
$g_{-2} (\theta)$ is a constant (equal to 1/2) in the limit, consistent
with a spherically symmetric flow.
\subsection{Modified Bondi Prescription}
To establish the physically relevant flow solutions, we need to specify the
accretion rate as a function of velocity. \citet{b52} fully solved the
\hbox{$\beta \,=\, 0$} problem. That is, he determined the complete
distribution of density and velocity surrounding a mass at rest within a
background gas. Dimensionally, he found for the mass accretion rate
\begin{equation}
{\dot M} \,=\, {{4\,\pi\,\lambda\,\rho_0\,G^2\,M^2}\over {c_s^3}} \,\,,
\end{equation}
where \hbox{$\lambda\,=\,{\rm e}^{3/2}/4 \,=\, 1.12$} for the isothermal case
of interest here. Recasting the rate into nondimensional form (recall Section 4.1), we have
\begin{eqnarray}
\lim_{\beta\,\rightarrow\,0} \,\,
{\dot M} \,&=&\, 2\,\lambda \nonumber \\
\label{eqn:bondimdot}
\,&=&\, {{{\rm e}^{3/2}}\over 2} \,\,,
\end{eqnarray}
as one constraint on the general form of $\dot M (\beta)$.
Prior to Bondi's work, \citet{hl39} studied accretion onto a mass traveling
through a zero-temperature gas. Their dimensional result was
\begin{equation}
{\dot M} \,=\, {{4\,\pi\,\rho_0\,G^2\,M^2}\over{V^3}} \,\,.
\end{equation}
Noting that the Mach number $\beta$ is effectively infinite in this case, the
equivalent, nondimensional finding is
\begin{equation}
\lim_{\beta\,\rightarrow\,\infty} \,\,
{\dot M} \,=\,{2\over{\beta^3}} \,\,.
\end{equation}
\citet{bh44} later showed that this relation provides an upper bound to the
accretion rate in the zero-temperature case. Through more careful analysis of
the wake, which here degenerates into an infinite-density spindle, they set
the lower limit a factor of two smaller.
The widely used interpolation formula of \citet{b52} connects these limits, at
least approximately. Nondimensionally, the Bondi prescription is
\begin{equation}
{\dot M} (\beta) \,=\,{1\over{\left(1\,+\,\beta^2\right)^{3/2}}} \,\,.
\end{equation}
In the low-$\beta$ limit, $\dot M$ falls short of the isothermal result, but
matches that for a \hbox{$\gamma\,=\,3/2$} polytrope. The high-$\beta$ limit
reproduces the lower bound established by \citet{bh44}.
Since we are focusing on the subsonic regime within an isothermal gas, we want
our low-$\beta$ limit to agree with the exact result. Following \citet{mt09},
we adopt a modified form of the classic interpolation formula:
\begin{equation}\label{eqn:intermdot}
{\dot M} (\beta) \,=\, {{2\,\left(\lambda^2\,+\,\beta^2\right)^{1/2}}\over
{\left(1\,+\,\beta^2\right)^2}} \,\,,
\end{equation}
where we use the isothermal value of $\lambda$ previously given. For
\hbox{$\beta\,\ll\,1$}, $\dot M$ approaches the result of \citet{b52} given in
equation~(\ref{eqn:bondimdot}). For \hbox{$\beta\,\gg\,1$}, we recover the upper limit of
\citet{bh44}. In the simulation of \citet{mt09} for an isothermal gas with $\beta=10$, this
modified interpolation formula matches the calculated accretion rate to within
20~percent. Judging from their own polytropic simulations, both \citet{h71} and
\citet{s85} had earlier suggested that the original Bondi $\dot M (\beta)$ be
augmented by about a factor of two.
In summary, equation~(\ref{eqn:intermdot}) should be
sufficiently accurate for our purposes.
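As a quick check on the two limits just quoted, equation~(\ref{eqn:intermdot}) may be evaluated directly. The following Python sketch is purely illustrative; the probe Mach numbers are arbitrary:

```python
import math

lam = math.exp(1.5) / 4.0          # isothermal lambda = 1.12

def mdot(beta):
    # nondimensional accretion rate, modified Bondi prescription (intermdot)
    return 2.0 * math.sqrt(lam**2 + beta**2) / (1.0 + beta**2)**2

print(mdot(0.0))                   # e^{3/2}/2 ~ 2.24, the Bondi (1952) isothermal rate
print(mdot(100.0) * 100.0**3)      # ~ 2, the Hoyle-Lyttleton limit 2/beta^3
```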
The combination of equations~(\ref{eqn:mdotf0}) and (\ref{eqn:intermdot}) gives us the proper value of
$f_0 (0)$ at each $\beta$, and thus also establishes the physically relevant
outer flow solutions. Figure~\ref{fig:2ndg} shows the physical $f_0 (0)$ and $g_{-2}(\pi)$ as
functions of $\beta$. Note that the latter diverges as $\beta$ approaches unity. Thus, our perturbation series fails to describe the flow along the upstream axis in this limit. As we will show in the next section, however, the dynamical friction force remains finite for all $\beta$.
The three panels of Figure~\ref{fig:2ndstream} display streamlines and
isodensity contours for the indicated $\beta$-values. These curves were
constructed from equations~(\ref{eqn:psind}) and (\ref{eqn:rhond}) for $\psi$ and $\rho$, respectively,
using the three known terms in each series. The circle in each panel represents
the sonic surface. As always, our results are only accurate well beyond this
radius.
\begin{figure}
\plotone{figure4}
\caption{The upstream density perturbation $g_{-2}(\pi)$ for the physical accretion flow, shown as a function of Mach number $\beta$. This initial condition gives the correct $\dot{M} = f_0(0)$, also shown in the figure.}
\label{fig:2ndg}
\end{figure}
\begin{figure}
\vspace{-0.45in}
\begin{center}
\includegraphics[scale=0.4]{figure5c}\vspace{-0.63in}
\includegraphics[scale=0.4]{figure5b}\vspace{-0.63in}
\includegraphics[scale=0.4]{figure5a}
\end{center}
\caption{Streamlines ({\it solid}) and density contours ({\it dashed}) for three different Mach numbers $\beta$. The density contours correspond to $\rho=1.2,1.4,$ and 1.6. The innermost streamlines enclose the full mass accretion rate $\dot{M}$. Successive streamlines enclose 3, 5, and 7 times this rate. As in Figure~\ref{fig:1storder}, the inner circle represents the sonic surface. }
\label{fig:2ndstream}
\end{figure}
\section{Friction Force}
\label{sec:friction}
\subsection{Integral Expression}
The dynamical friction force $F$ is the total rate at which $z$-momentum is
transferred from the background gas to the gravitating mass. Within our
steady-state flow, the total momentum transfer rate into a surface surrounding
the mass is independent of the size and shape of that surface, provided it
lies outside the wake, where the physical interaction between the projectile
and gas occurs. The net momentum flow calculated through such a surface
integration all goes into the gravitating mass, causing its deceleration.
Imagine the gravitating mass to be surrounded by a large sphere of
radius $r$. In part, the $z$-momentum transfer arises from the advection of
this quantity in the flowing gas across the spherical surface. Since the inward
flux of $z$-momentum is $-\rho\,u_r\,u_z$, the {\it kinetic} portion of $F$ is,
dimensionally,
\begin{equation}
F_{\rm kin} \,=\, -2\,\pi \,\int_0^\pi\!
\rho\,u_r\,u_z\,r^2\,{\rm sin}\,\theta\,d\theta \,\,.
\end{equation}
Another contribution to $F$ is from the thermal pressure of the surrounding
gas. This {\it static} portion of the force is
\begin{equation}
F_{\rm static} \,=\, -2\,\pi\,\int_0^\pi\!
\rho\,c_s^2\,r^2\,{\rm cos}\,\theta\,{\rm sin}\,\theta\,
d\theta \,\,.
\end{equation}
Adding these two pieces, we have, after nondimensionalization,
\begin{equation}
F \,=\, -\int_0^\pi\!\rho\,u_r\,u_z\,r^2\,{\rm sin}\,\theta\,d\theta \,-\,
\int_0^\pi\!\rho\,r^2\,{\rm cos}\,\theta\,{\rm sin}\,\theta\,
d\theta \,\,,
\end{equation}
where we have set the unit of force equal to
\hbox{$2\,\pi\,\rho_0\,c_s^2\,r_s^2$}.
The integrand within the first, righthand term must be recast in terms of the
stream function:
\begin{equation}
\rho\,u_r\,u_z\,r^2\,{\rm sin}\,\theta \,=\,
{{{\rm cot}\,\theta}\over{\rho\,r^2}}
\left({{\partial\psi}\over{\partial\theta}}\right)^2 \,+\,
{1\over{\rho\,r}}
{{\partial\psi}\over{\partial\theta}}
{{\partial\psi}\over{\partial r}} \,\,.
\end{equation}
We may now evaluate $F$ using the series expansions for $\psi$ and $\rho$. The
full expression is a series of terms proportional to $r^2$, $r^1$, $r^0$, etc.
All terms in $F$ containing positive powers of $r$ vanish upon integration.
Those proportional to $r^1$ involve $f_1$ and $g_{-1}$, both of
which are known explicitly. Terms associated with negative powers of $r$
contain $f$- and $g$-variables which we have not yet calculated
(e.g., $f_{-1}$, $g_{-3}$). However, as we consider ever larger radii $r$,
where the series expansions themselves become increasingly accurate, these
terms also go to zero. Only those independent of $r$ survive.
After restricting ourselves to $r$-independent terms, we find
\begin{equation}\label{eqn:friction}
F \,=\, -\int_0^\pi\! \left[
\left(1-\beta^2\right)\,{\rm sin}\,\theta\,
{\rm cos}\,\theta\,\,g_{-2}
\,+\,\beta \left(1\,+\,{\rm cos}^2\,\theta\right)
f_0^\prime
\right]\! d\theta \,\,.
\end{equation}
Here, we have omitted a number of terms in the integrand containing $f_1$,
$g_{-1}$, and their derivatives. All of these terms are antisymmetric
with respect to \hbox{$\theta\,-\,\pi/2$} (i.e., they are odd functions), and
therefore vanish upon integration.
\subsection{Relation to Mass Accretion Rate}
By dimensional considerations, the friction force should be
\hbox{$F\,=\,C{\dot M}\,V$}, where $C$ is dimensionless. In the hypersonic limit, this multiplicative factor contains a Coulomb logarithm
\citep[e.g.][]{rs71}. We now demonstrate the surprising fact that, in the
subsonic case of interest here, the factor is exactly unity. In fully nondimensional
language, we shall prove that
\begin{equation}\label{eqn:mdotv}
F\,=\,{\dot M}\,\beta \,\,.
\end{equation}
We begin by splitting the integral on the righthand side of equation~(\ref{eqn:friction}) into
two parts:
\begin{mathletters}
\begin{eqnarray}
F\,&=&\, -\int_0^\pi\! \beta\,f_0^\prime\,d\theta \,-\,
\int_0^\pi\! \left[
\left(1-\beta^2\right)\,{\rm sin}\,\theta\,
{\rm cos}\,\theta\,\,g_{-2}
\,+\,\beta\,{\rm cos}^2\,\theta\,
f_0^\prime
\right]\! d\theta \\
{\phantom F} \,&=&\, \beta\,f_0 (0) \,-\, {\cal I} \,\, \\
{\phantom F} \,&=&\, \beta\,\dot{M} \,-\, {\cal I} \,\,.
\end{eqnarray}
\end{mathletters}
In these equations, we have used the
fact that \hbox{$f_0 (\pi)\,=\,0$} and $f_0(0)=\dot{M}$ (eq. \ref{eqn:mdotf0}). We have further defined
\begin{equation}
{\cal I} \,\equiv\,
\int_0^\pi\! d\theta\,\left[
\left(1-\beta^2\right)\,{\rm sin}\,\theta\,
{\rm cos}\,\theta\,\,g_{-2}
\,+\,\beta\,{\rm cos}^2\,\theta\,
f_0^\prime
\right] \,\,.
\end{equation}
We next show that $\cal I$ vanishes.
First recall that our flow is irrotational. Specifically, the $\phi$-component of the vorticity vanishes, so that
\begin{equation}
\frac{\partial u_r}{\partial\theta} \,-\,
\frac{\partial\left(r\,u_\theta\right)}{\partial r} \,=\,0 \,\,.
\end{equation}
Expressing both velocity components in terms of the stream function through
equations~(\ref{eqn:ur}) and (\ref{eqn:ut}), we have
\begin{equation}
\frac{\partial\rho}{\partial\theta}\,\frac{\partial\psi}{\partial\theta}\,+\,
\rho\,{\rm cot}\,\theta\,\frac{\partial\psi}{\partial\theta} \,-\,
\rho\,\frac{\partial^2\psi}{\partial \theta^2}\,+\,
r^2\,\frac{\partial\rho}{\partial r}\,\frac{\partial\psi}{\partial r}\,-\,
\rho\,r^2\,\frac{\partial^2\psi}{\partial r^2} \,=\, 0 \,\,.
\end{equation}
We substitute the series expansions for $\psi$ and $\rho$ into this last
equation and set the coefficients of all powers of $r$ to zero. Following this
procedure for $r^2$ and $r^1$, and using the known expressions for $f_2$,
$f_1$, and $g_{-1}$, yields identities. However, setting the $r$-independent
terms to zero leads to a nontrivial result:
\begin{equation}
\beta\,f_0^{\prime\prime} \,-\,\beta\,{\rm cot}\,\theta\,f_0^\prime \,-\,
\beta^2\,{\rm sin}\,\theta\,{\rm cos}\,\theta\,g_{-2}^\prime\,+\,
2\,\beta^2\,{\rm sin}^2\,\theta\,g_{-2} \,=\,
\frac{1\,-\,\sqrt{\cal D}}{\cal D} \,\,.
\end{equation}
We add this last equation to the second-order equation~(\ref{eqn:secondr2}), obtaining
\begin{equation}
2\,\left(\beta^2\,-\,1\right) g_{-2} \,-\,
2\,\beta\,{\rm cot}\,\theta\,f_0^\prime
\,=\,\frac{2}{\cal D} \,-\,
\frac{4}{\sqrt{\cal D}} \,+\,
\frac{2}{1\,+\,\sqrt{\cal D}} \,\,.
\end{equation}
Multiplying through by \hbox{$-(1/2)\,{\rm sin}\,\theta\,{\rm cos}\,\theta$}
gives
\begin{equation}
\left(1\,-\,\beta^2\right) {\rm sin}\,\theta \,{\rm cos}\,\theta \,g_{-2} \,+\,
\beta\,{\rm cos}^2\,\theta\,f_0^\prime \,=\,
-{\rm sin}\,\theta\,{\rm cos}\,\theta
\left(\frac{1}{\cal D} \,-\,
\frac{2}{\sqrt{\cal D}} \,+\,
\frac{1}{1\,+\,\sqrt{\cal D}} \right)
\,\,.
\end{equation}
Integrating over $\theta$, we recognize the lefthand side of the resulting
equation as $\cal I$. The righthand side vanishes, since the integrand is an
odd function. We see therefore that equation (\ref{eqn:mdotv}) holds.
If we now employ the modified Bondi prescription, equation~(\ref{eqn:intermdot}) for $\dot M$,
we have an explicit expression for the force:
\begin{equation}\label{eqn:bondifric}
F \,=\, {{2\,\beta\,\left(\lambda^2\,+\,\beta^2\right)^{1/2}}\over
{\left(1\,+\,\beta^2\right)^2}} \,\,.
\end{equation}
Figure~\ref{fig:friction} displays the function $F (\beta)$. Also shown, as the dashed curve, is
the result from \citet{o99} in which the force diverges as $\beta$ approaches unity. In the limit of low $\beta$, both forces rise linearly from zero at $\beta=0$, but our initial slope is larger by a factor of $3\lambda=3.36$. Indeed, over most $\beta$-values, our force exceeds that derived by \citet{o99}, presumably because we have included both the gravitational tug from the wake and the direct accretion of momentum from the flow. Our force does not rise monotonically but instead peaks around $\beta=0.68$ and then begins to decline; we expect this decline to continue into the supersonic regime. We should bear in mind that, while equation~(\ref{eqn:mdotv}) is exact, equation~(\ref{eqn:bondifric}) for $F$ is only as accurate as the underlying interpolation formula.
\begin{figure}
\plotone{figure6}
\caption{The dimensionless friction force $F$ as a function of Mach number $\beta$. The dashed curve shows the force derived by \citet{o99}, which diverges as $\beta$ approaches unity.
}
\label{fig:friction}
\end{figure}
\section{Velocity and Mass Evolution}
\label{sec:velocity}
Our simple result for the dynamical friction force means that the
deceleration of the gravitating mass is also simply described, as long
as there are no other forces at play. As we have stressed, the force is the
rate at which gas transfers linear momentum to the object. But the object's
momentum is $M V$, where we now revert to dimensional variables. In the
reference frame where the background gas is stationary, we have
\begin{equation}
\frac{d(MV)}{dt} \,=\, -{\dot M}\,V \,\,,
\end{equation}
which implies that
\begin{equation}
\frac{1}{V}\,\frac{dV}{dt} \,=\,-\frac{2}{M}\,\frac{dM}{dt} \,\,.
\end{equation}
If $V_0$ and $M_0$ are the object's initial speed and mass, respectively, then
\begin{equation}
\frac{V}{V_0} \,=\,\left( \frac{M}{M_0}\right)^{-2} \,\,.
\end{equation}
To track the speed as a function of time, we rewrite equation~(\ref{eqn:intermdot}) for the
mass accretion rate as
\begin{equation}
\frac{dM}{dt} \,=\, {{4\,\pi\,\rho_0\,c_s\,r_s^2\,
\left(\lambda^2\,+\,\beta^2\right)^{1/2}}\over
{\left(1\,+\,\beta^2\right)^2}}
\left({M\over M_0}\right)^2 \,\,.
\end{equation}
Here, \hbox{$\beta\,\equiv\,V/c_{\rm s}$} as before, while $r_s$ is now defined in
terms of the {\it initial} mass: \hbox{$r_s\equiv\,2\,G\,M_0/c_s^2$}. The
fully nondimensional evolutionary equation for the speed is then
\begin{equation}\label{eqn:velocity}
\left(\frac{1}{\beta}\right)\,
\frac{d\beta}{d\tau} \,=\, -{{4\,\left(\lambda^2\,+\,\beta^2\right)^{1/2}}
\over{\left( 1\,+\,\beta^2\right)^2}}
\left(\frac{\beta}{\beta_0}\right)^{-1/2} \,\,.
\end{equation}
In this last equation, we have introduced the initial, nondimensional speed
$\beta_0$, as well as a nondimensional time, \hbox{$\tau\,\equiv\,t/t_0$},
where
\begin{eqnarray}
t_0 \,&\equiv&\,\frac{c_s^3}{2\,\pi\,\rho_0\,G^2\,M_0} \\
&=\label{eqn:veltime}&\, \frac{M_0}{2\,\pi\,\rho_0\,c_s\,r_s^2} \,\,.
\end{eqnarray}
The denominator in equation (\ref{eqn:veltime}) is the fiducial mass accretion
rate defined in Section 4.1. Thus, $t_0$ is of order the accretion time onto the initial mass.
The upper panel of Figure~\ref{fig:massvel} plots $\beta (\tau)$ for \hbox{$\beta_0\,=\,0.2,\,0.5$} and 0.8,
obtained by numerical integration of equation~(\ref{eqn:velocity}). Also shown, in the lower
panel, is the growth of the nondimensional quantity $M$, the mass of the
gravitating body relative to its initial value. As expected, the body slows
down appreciably within an accretion time.
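A minimal sketch of this integration follows, assuming the isothermal eigenvalue $\lambda=e^{3/2}/4\approx 1.12$. It advances equation~(\ref{eqn:velocity}) with a fourth-order Runge-Kutta step and stops once the particle has slowed to $\beta_0/9$, at which point $M/M_0=(\beta/\beta_0)^{-1/2}=3$, i.e., the mass has tripled.

```python
import math

LAM = math.exp(1.5) / 4.0   # isothermal Bondi eigenvalue (assumed value)

def rhs(beta, beta0):
    """Right-hand side of the nondimensional velocity equation."""
    return (-4.0 * beta * math.sqrt(LAM**2 + beta**2)
            / (1.0 + beta**2) ** 2 * (beta / beta0) ** -0.5)

def evolve(beta0, h=1e-4):
    """RK4 integration of beta(tau) until the speed falls to beta0/9,
    where the mass has tripled: M/M0 = (beta/beta0)^(-1/2) = 3."""
    beta, tau = beta0, 0.0
    while beta > beta0 / 9.0:
        k1 = rhs(beta, beta0)
        k2 = rhs(beta + 0.5 * h * k1, beta0)
        k3 = rhs(beta + 0.5 * h * k2, beta0)
        k4 = rhs(beta + h * k3, beta0)
        beta += h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        tau += h
    return tau, (beta / beta0) ** -0.5

for b0 in (0.2, 0.5, 0.8):
    tau_stop, mass_ratio = evolve(b0)
    print(f"beta0 = {b0}: tau = {tau_stop:.2f}, M/M0 = {mass_ratio:.2f}")
```

In all three cases the stopping time is a fraction of the accretion time $t_0$, in line with the evolution shown in Figure~\ref{fig:massvel}.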
\begin{figure}
\plotone{figure7}
\caption{Evolution of a particle's speed and mass as a function of nondimensional time $\tau$. The different curves represent initial speeds $\beta_0=0.8, 0.5,$ and $0.2$. A particle both triples its mass and slows to $\sim0.1$ times its initial speed in a fraction of its mass accretion time.
}
\label{fig:massvel}
\end{figure}
\section{Summary and Discussion}
\label{sec:summary}
This study has pivoted on the close relationship between the dynamical friction
force, i.e., the transfer of linear momentum from gas to a gravitating object,
and the transfer of mass to that same object. This relationship is embodied in
our central result, equation~(\ref{eqn:mdotv}). From this equation, in turn, we derived an
analytic expression for the force itself, equation~(\ref{eqn:bondifric}).
We are now in a position to address a basic question raised in Section~2.1. How
are we justified in assuming steady-state flow, when the gravitating body is
continually decelerating? The answer is that quasi-steady flow is established
within a radius $r_{\rm crit}$ over which the sound crossing time ($r_{\rm crit}/c_{\rm s}$) equals
the time for the object's momentum to decrease appreciably ($MV/F$). Recalling that $F$ is normalized to $2\pi\,\rho_0\,c^2_{\rm s}\,r^2_{\rm s}$ and using equation (\ref{eqn:mdotv}), we have, nondimensionally,
\begin{equation}
r_{\rm crit} \,=\,\alpha\,\frac{M}{\dot M} \,\,,
\end{equation}
where \hbox{$\alpha\,\equiv\, M_0/(2\,\pi\,\rho_0\,r_s^3)$}. The latter
quantity was implicitly assumed to be large from the start, when we neglected
the self-gravity of the gas. The nondimensional mass accretion rate
\hbox{${\dot M}\,=\,f_0 (0)$} hovers near unity for the entire evolution
(recall~Fig.~\ref{fig:2ndg}), while $M$ itself starts at unity and climbs. Hence, the
critical radius is much larger than $r_s$, and our analysis is
self-consistent.
We note that dynamical friction still operates in circumstances where mass
accretion is frustrated. For example, a wind-emitting star moving through a gas cloud experiences mass loss rather than mass gain. Cloud gas impacting the wind upstream is arrested or refracted in a bowshock, as analytically calculated by \citet{w96}. Downstream, the wind forms a supersonic jet. As long as the upstream standoff radius of the shock lies within $r_s$ and the downstream jet is relatively narrow, the far-field perturbations are close to what we have obtained, and equation~(\ref{eqn:bondifric}) for $F$ still applies.
When the object is actually able to accept gas freely, dynamical friction
arises in two physically distinct ways. First, there is the gravitational tug
from the wake. Second, momentum is transferred directly to the object by gas
falling onto it. Our finding that these two forces sum to ${\dot M}\,V$ is at least roughly consistent with simulations. In a numerical study directed primarily at the
mass accretion issue, \citet{r96} explicitly determined both force
contributions on accretors of various sizes in a \hbox{$\gamma\,=\,1.01$} gas. For \hbox{$R/r_{\rm acc}\,=\,0.1$} and \hbox{$\beta\,=\,0.6$}, the simulation ended before the flow reached a steady state (see his Fig. 2). After initial transients died out, the gravitational drag was steady until $t\approx 13\ t_{\rm BH}$, where the Bondi-Hoyle time $t_{\rm BH}$ is $r_{\rm acc}/c_{\rm s}$. Thereafter, this force component declined for the rest of the integration. At the end of the simulation ($t = 32\ t_{\rm BH}$), the sum of the gravitational drag and momentum accretion forces was $1.2\ {\dot M}\,V$. For \hbox{$R/r_{\rm acc}\,=\,0.02$} and the same Mach number, the two forces quickly leveled off, with a sum equal to $1.4\ {\dot M}\,V$. However, this simulation ran only until $t = 10\ t_{\rm BH}$, so it is not clear whether the gravitational drag would have later declined, as in the first case.
Following historical precedent, we have restricted our investigation to an isothermal gas. For an isentropic gas with $\gamma > 1$, it seems likely that the friction force will still be given by $\dot{M}\, V$, as long as the accretor is moving subsonically. Verifying this equality analytically would require a perturbation study analogous to the present one. We leave such a project for future investigators.
Again, the current body of numerical studies is in broad accord with our expectation. \citet{r94} determined the total friction force on an accretor moving through a $\gamma=5/3$ gas. For \hbox{$R/r_{\rm acc}\,=\,0.1$} and $\beta=0.6$, the friction force was $1.1\ \dot{M}\, V$ at $t=70\ t_{\rm BH}$. For \hbox{$R/r_{\rm acc}\,=\,0.02$} and the same Mach number, the flow had not achieved steady state by $t=19\ t_{\rm BH}$. The total force was $1.8\ \dot{M}\, V$ at this time, but was falling rapidly. A future project of interest would be to redo these simulations over a range of $\beta$- and $\gamma$-values, running them long enough that a true steady state is reached.
For the more general isentropic case, $\dot{M}$ can no longer be approximated by equation (\ref{eqn:intermdot}). Instead the value of $\dot{M}$ at a given $V$ decreases with higher $\gamma$-values, as shown analytically by \citet{b52} for $V=0$, and as seen in the simulations of \citet{r94,r95,r96} for accretors moving relative to the background gas. Isentropic flows are less compressible than isothermal ones, so the wake will be less dense. As a result, the friction force will also be lower, presumably by the same amount as the accretion rate $\dot{M}$.
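The $\gamma$-dependence of the rest-frame rate can be made concrete through the classical accretion eigenvalue of \citet{b52}, for which \hbox{$\dot{M}=4\pi\lambda_c\,\rho_0\,(GM)^2/c_s^3$} at $V=0$. The sketch below evaluates the standard closed form for $\lambda_c$ (with the $\gamma=1$ and $\gamma=5/3$ cases treated as limits) and illustrates the decline with increasing stiffness.

```python
import math

def bondi_lambda(gamma):
    """Dimensionless Bondi (1952) accretion eigenvalue lambda_c, so that
    Mdot = 4 pi lambda_c rho_0 (G M)^2 / c_s^3 for a body at rest.
    Standard closed form; gamma = 1 and gamma = 5/3 handled as limits."""
    if abs(gamma - 1.0) < 1e-12:
        return math.exp(1.5) / 4.0          # isothermal limit, ~1.12
    if abs(gamma - 5.0 / 3.0) < 1e-12:
        return 0.25                          # stiff limit
    return (0.5 ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
            * ((5.0 - 3.0 * gamma) / 4.0)
            ** (-(5.0 - 3.0 * gamma) / (2.0 * (gamma - 1.0))))

for g in (1.0, 4.0 / 3.0, 1.4, 5.0 / 3.0):
    print(f"gamma = {g:.3f}: lambda_c = {bondi_lambda(g):.4f}")
```

The eigenvalue falls from $e^{3/2}/4\approx 1.12$ at $\gamma=1$ to $1/4$ at $\gamma=5/3$, quantifying the suppression of $\dot M$ for less compressible flows.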
In the present investigation, we have been unable to tease apart analytically the two force
contributions. To do so would require study of the flow closer to the
gravitating mass, specifically across the sonic surface. In principle, a
perturbation series in this region could be linked to the outer one developed
here. Besides elucidating the momentum transfer through infall, such a study could also establish $\dot M$ analytically as a function of velocity, thus putting accretion theory as a whole on a firmer foundation.
\acknowledgements
We gratefully acknowledge useful conversations with a number of colleagues
during the course of the project. These include Jon Arons, Phil Chang, Chris McKee, and Prateek Sharma. We thank the referee Thierry Foglizzo for an insightful report that helped improve the clarity of our paper. ATL acknowledges support from
an NSF Graduate Fellowship, while SWS was partially funded by
NSF Grant~0908573.
\clearpage
\section{Introduction}\label{sec:Introduction}
The oscillation modes of neutron stars (NSs) provide a means to probe the
internal composition and state of dense matter. NSs have rich oscillation
spectra, with modes associated with different physical origins, such as
the internal ingredients, the elasticity of the crust, superfluid components,
and so on \citep{Andersson:2019}. For typical non-rotating fluid stars, the
oscillation modes include the fundamental ($f$), pressure ($p$), and gravity
($g$) modes, which provided the basic classification of modes according to the
physics dominating their behaviours \citep{Cowling:1941}. More realistic stellar
models and rotation introduce additional classes of oscillation modes.
In this work, we study the $g$-mode oscillations for non-rotating NSs in the
framework of pseudo-Newtonian gravity \citep{Marek:2005if, Mueller:2008it,
Yakunin:2015wra, Morozova:2018glm, OConnor:2018sti, OConnor:2015rwy,
Zha:2020gjw, Tang:2021woo}. \citet{Reisenegger:1992} investigated the $g$-mode
induced by composition (proton-to-neutron ratio) gradient in the cores of NSs.
Moreover, hot young NSs may excite $g$-modes supported by entropy gradients
\citep{McDermott:1983, McDermott:1988, Ferrari:2003nk, Kruger:2014pva}. It has
also been demonstrated that the onset of superfluidity has a key influence on
the buoyancy that supports the $g$-modes \citep{Lee:1995, Andersson:2001bz,
Passamonti:2015oia}. Density discontinuity produced by abrupt composition
transitions may play an important role in determining the $g$-mode properties
\citep{Finn:1987, McDermott:1990}. \citet{Sotani:2001bb} calculated $f$ and $g$
modes of NSs with density discontinuity at an extremely high density and
discussed the stability of the stellar models. A phase transition occurring in
the cores of NSs with a polytropic equation of state (EOS) has been studied by
\citet{Miniutti:2002bh}. The frequencies of $g$-modes from density
discontinuity are larger than those induced by the entropy gradient. Furthermore,
discontinuity $g$-modes may occur in perturbed quark-hadron hybrid stars
\citep{Tonetto:2020bie, Constantinou:2021hba}. Recently, \citet{Zhao:2022toc}
considered the $g$-mode of NSs containing quark matter and discussed the Cowling
approximation, which leads to a relative error of $\sim 10\%$ for higher-mass
hybrid stars. We here focus on the $f$ and $g$ modes of NSs in pseudo-Newtonian
gravity caused by the first-order phase transition in the cores of NSs.
The study of NS oscillations is timely in the gravitational-wave era
\citep{LIGOScientific:2017vwq, LIGOScientific:2018cki, Li:2022qql}. Tidal
interaction in a coalescing binary NS can resonantly excite the $g$-mode
oscillation of NSs when the frequency of the tidal driving force approaches the
$g$-mode frequencies \citep{Lai:1993di, Kuan:2021jmk}. Moreover, the mixture of
pure-inertial and inertial-gravity modes can become resonantly excited by
tidal fields for rotating NSs \citep{Lai:2006pr, Xu:2017hqo}. The $g$-mode can
also result in secular instability in rotating NSs \citep{Lai:1998yc}.
\citet{Gaertig:2009rr} considered the $g$-mode of fast-rotating stratified NSs
using the relativistic Cowling approximation. The typical scenarios pertain to
the $p$-$g$ mode instability and the saturation of unstable modes
\citep{Weinberg:2013pbi, LIGOScientific:2018ehx}. The universal relation of
$g$-mode asteroseismology has been discussed by \citet{Kuan:2022bhu} for
different classes of EOSs. In particular, the absence of very low-frequency
$g$-modes helps to explain the absence of tidal resonances
\citep{Andersson:2019mxp}. The cut-off in the high-order $g$-mode spectrum may
also be relevant for scenarios of nonlinear mode coupling. The properties of
$g$-modes for newly-born strange quark stars and NSs using Cowling approximation
in Newtonian gravity have been discussed by \citet{Fu:2008bu}.
Hydrodynamical simulations are necessary to study the properties of the proto-NS
in a core-collapse supernova (CCSN). The $g$-mode of such a scenario may impact
associated gravitational waves~\citep{Ott:2006qp}. However, the physics of
neutrino transport and EOS is very uncertain for the hydrodynamical simulations.
As multi-dimensional general-relativitic (GR) codes for numerical simulations
are scarce and have high demand of computational cost, most previous
investigations relied on the Newtonian approximation for the strong
gravitational field and fluid dynamics \citep{Marek:2005if, Mueller:2008it}.
Nevertheless, the ``Case A potential'' formalism (cf.\ Sec.~\ref{sec: Case A
potentia}) was found to be a good approximation to relativistic solutions in
simulating non-rotating or slowly rotating CCSNs. This potential allows for an
accurate approximation of GR effects in an otherwise Newtonian hydrodynamic
code, and it also works for cases of rapid rotation \citep{Mueller:2008it}.
This has motivated a sequence of CCSN simulations \citep{Yakunin:2015wra,
Morozova:2018glm, OConnor:2018sti, OConnor:2015rwy}. The effectiveness of using
the Case A potential formalism to approximate GR has been studied by
\citet{Pajkos:2019nef}, \citet{OConnor:2018sti}, and \citet{Zha:2020gjw}.
Besides, adding a lapse function in the CCSN simulation has been discussed by
\citet{Mueller:2008it} and \citet{Zha:2020gjw}.
Case A potential with lapse function formalism can predict very accurate
frequencies of oscillating NSs. Recently, \citet{Tang:2021woo} studied the
radial and non-radial oscillation modes of NSs in pseudo-Newtonian gravity,
including the Case A potential with and without the lapse function. Motivated
by \citet{Tang:2021woo}, we here study the $g$-mode of NS cores using Case A
potential formalism with and without the lapse function. Our findings suggest
that, with much less computational cost, pseudo-Newtonian gravity can be
utilized to accurately analyze oscillations of NSs constructed from an EOS with a
first-order phase transition, thus providing an excellent approximation
of GR effects in CCSN simulations.
The paper is organized as follows. In Sec.~\ref{sec: Key_ingredient}, we
introduce the key ingredients of the model, including different pseudo-Newtonian
schemes and the buoyancy nature associated with $g$-mode. The local dynamics of
NS cores, including composition gradient and density discontinuity, are
presented in Sec.~\ref{sec: NR}. Finally, we summarize our work in
Sec.~\ref{sec: conculsion}. Throughout the paper, we adopt geometric units with
$c=G=1$, where $c$ and $G$ are the speed of light and the gravitational
constant, respectively.
\section{Key ingredients of the model}\label{sec: Key_ingredient}
\subsection{Case A potential in pseudo-Newtonian gravity}\label{sec: Case A potentia}
Case A effective potential is defined by replacing the Newtonian gravitational
potential in a spherically symmetric Newtonian hydrodynamic simulation by
\citep{Marek:2005if, Tang:2021woo}
\begin{equation} \label{eq:phiTOV}
\Phi_{\rm TOV}(r)=-4\pi\int^\infty_r \frac{\mathrm d r^{\prime}}{r^{\prime 2}} \left(\frac{m_{\rm TOV}}{4\pi}+r^{\prime 3} P\right) \times \frac{1}{\Gamma^2} \left(\frac{\rho+\rho\varrho+P}{\rho}\right)\,,
\end{equation}
where $r$ is the radial coordinate, $\rho$ is the rest-mass density, $P$ is the
pressure, $\varrho$ is the specific internal energy, and the total energy
density is given by $\epsilon=\rho+\rho\varrho$. The function $m_\text{TOV}$ is
defined by
\begin{equation} \label{eq:mTOV}
m_{\rm TOV}(r) = 4\pi\int^r_0 \mathrm d r^{\prime} r^{\prime 2} \epsilon \Gamma \,,
\end{equation}
with
\begin{equation}
\Gamma=\sqrt{1-2\frac{m_{\rm TOV}}{r}} \,.
\label{eq: Gamma}
\end{equation}
From Eq.~(\ref{eq:phiTOV}) and Eq.~(\ref{eq:mTOV}), we have
\begin{align}
\frac{{\rm d} m_{\rm TOV}}{{\rm d} r} &= 4\pi r^2\epsilon\Gamma
\label{eq: def_1}\,, \\
\frac{{\rm d} \Phi_{\rm TOV}}{{\rm d} r} &= \frac{4\pi}{r^2}\left(\frac{m_{\rm TOV}}{4\pi}+r^3 P\right)
\frac{1}{\Gamma^2}\frac{(\epsilon+P)}{\rho}
\label{eq: def_2}\,.
\end{align}
We use the Case A and Case A+lapse schemes and the other four schemes to study
the $g$-mode originating from the composition gradient and density discontinuity
of NS cores in the framework of pseudo-Newtonian gravity. All background and
perturbation equations for each scheme are given in the next three subsections
and summarized in Table \ref{tab: schemes}.
\begin{table}
\centering
\caption{Different schemes to calculate the oscillation modes, along with
the corresponding background and the lapse function. Non-radial perturbation
equations are the same [Eqs.~(\ref{eq: bc_p1}--\ref{eq: bc_p4})] for all six
schemes, but some of them include a lapse-function $\alpha$ in the
hydrodynamic equations. Note that the lapse function only appears in the
perturbation equations but not in the background equations.}
\begin{tabular}{llc}
\hline
Scheme & Background equations & Lapse function $\alpha$ \\
\hline
N & Eqs. (\ref{eq: N_dmdr}) to (\ref{eq: N_dphidr}) & --\\
N+lapse & Eqs. (\ref{eq: N_dmdr}) to (\ref{eq: N_dphidr}) & Eq. (\ref{eq: lapse})\\
TOV & Eqs. (\ref{eq: GR_dmdr}) to (\ref{eq: GR_dphidr}) & --\\
TOV+lapse & Eqs. (\ref{eq: GR_dmdr}) to (\ref{eq: GR_dphidr}) & Eq. (\ref{eq: lapse})\\
Case A & Eqs. (\ref{eq: A_dmdr}) to (\ref{eq: A_dphidr}) & --\\
Case A+lapse & Eqs. (\ref{eq: A_dmdr}) to (\ref{eq: A_dphidr}) & Eq. (\ref{eq: lapse})\\
\hline
\end{tabular}
\label{tab: schemes}
\end{table}
\subsection{Equilibrium configurations}\label{sec: EC}
We consider the following three sets of equilibrium configurations.
\begin{enumerate}[(I)]
\item For the Newtonian (N) and Newtonian+lapse function (N+lapse) schemes,
the hydrostatic equilibrium equations are
\begin{align}
\frac{{\rm d} m}{{\rm d} r} &= 4\pi r^2\rho
\label{eq: N_dmdr} \,, \\
\frac{{\rm d} P}{{\rm d} r} &= -\frac{\rho m}{r^2}
\label{eq: N_dpdr} \,, \\
\frac{{\rm d} \Phi}{{\rm d} r} &= -\frac{1}{\rho}\frac{{\rm d} P}{{\rm d} r}
\label{eq: N_dphidr} \,.
\end{align}
where $\rho$ is the rest-mass density, and $\Phi$ is the Newtonian gravitational
potential.
\item Instead, if we consider spherical and static stars in GR, we have the
Tolman-Oppenheimer-Volkoff (TOV) equations
\begin{align}
\frac{{\rm d} m}{{\rm d} r} &= 4\pi r^2\epsilon
\label{eq: GR_dmdr} \,, \\
\frac{{\rm d} P}{{\rm d} r} &= -\frac{(\epsilon+P)(m+4\pi r^3 P)}{r(r-2m)}
\label{eq: GR_dpdr} \,, \\
\frac{{\rm d} \Phi}{{\rm d} r} &= -\frac{1}{\epsilon+P}\frac{{\rm d} P}{{\rm d} r}
\label{eq: GR_dphidr} \,.
\end{align}
\item Lastly, for the Case A and Case A+lapse schemes, the background
equations are obtained by replacing the Newtonian gravitational potential by
the Case A potential \citep{Marek:2005if, Tang:2021woo}, and we have
\begin{align}
\frac{{\rm d} m}{{\rm d} r} &= 4\pi r^2\epsilon\Gamma
\label{eq: A_dmdr} \,, \\
\frac{{\rm d} P}{{\rm d} r} &= -\frac{4\pi}{r^2}\left(\frac{m}{4\pi}+r^3 P\right)\frac{1}{\Gamma^2}(\epsilon+P)
\label{eq: A_dpdr} \,, \\
\frac{{\rm d} \Phi}{{\rm d} r} &= -\frac{1}{\rho}\frac{\mathrm d P}{\mathrm d r}
\label{eq: A_dphidr} \,.
\end{align}
\end{enumerate}
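To illustrate how the three backgrounds differ in practice, the sketch below integrates each set of equations for a $\gamma=2$ polytrope, $P=K\rho^2$, in geometrized units. The values $K=1$ and $\rho_c=0.1$ are illustrative, and the specific internal energy is assumed to be $\varrho=P/[\rho(\gamma-1)]$, so that $\epsilon=\rho+P$; the Newtonian scheme integrates the rest-mass density, whereas the TOV and Case A schemes use $\epsilon$, as in the equations above.

```python
import math

K, GAMMA = 1.0, 2.0          # polytropic EOS P = K rho^GAMMA (G = c = 1)
RHO_C = 0.1                  # central rest-mass density (illustrative)

def eos(p):
    """Rest-mass and total energy density from the pressure."""
    rho = (max(p, 0.0) / K) ** (1.0 / GAMMA)
    eps = rho + p / (GAMMA - 1.0)   # assumes varrho = P / [rho (GAMMA - 1)]
    return rho, eps

def rhs(scheme, r, m, p):
    rho, eps = eos(p)
    if scheme == "N":            # Eqs. (N_dmdr)-(N_dpdr)
        return 4.0 * math.pi * r**2 * rho, -rho * m / r**2
    if scheme == "TOV":          # Eqs. (GR_dmdr)-(GR_dpdr)
        return (4.0 * math.pi * r**2 * eps,
                -(eps + p) * (m + 4.0 * math.pi * r**3 * p) / (r * (r - 2.0 * m)))
    gam = math.sqrt(1.0 - 2.0 * m / r)   # Case A, Eqs. (A_dmdr)-(A_dpdr)
    return (4.0 * math.pi * r**2 * eps * gam,
            -(m + 4.0 * math.pi * r**3 * p) * (eps + p) / (r**2 * gam**2))

def integrate(scheme, h=1e-4):
    """RK4 outward integration until the pressure drops to ~zero."""
    r, m, p = 1e-6, 0.0, K * RHO_C**GAMMA
    while p > 1e-12:
        k1 = rhs(scheme, r, m, p)
        k2 = rhs(scheme, r + 0.5 * h, m + 0.5 * h * k1[0], p + 0.5 * h * k1[1])
        k3 = rhs(scheme, r + 0.5 * h, m + 0.5 * h * k2[0], p + 0.5 * h * k2[1])
        k4 = rhs(scheme, r + h, m + h * k3[0], p + h * k3[1])
        m += h / 6.0 * (k1[0] + 2.0 * k2[0] + 2.0 * k3[0] + k4[0])
        p += h / 6.0 * (k1[1] + 2.0 * k2[1] + 2.0 * k3[1] + k4[1])
        r += h
    return r, m

for s in ("N", "Case A", "TOV"):
    radius, mass = integrate(s)
    print(f"{s:7s}: R = {radius:.3f}, M = {mass:.4f}")
```

Note from Eqs.~(\ref{eq: GR_dpdr}) and (\ref{eq: A_dpdr}) that the Case A pressure gradient is identical in form to the TOV one; the two schemes differ only through the factor $\Gamma$ in ${\rm d}m/{\rm d}r$, which is why the Case A mass tracks the TOV value far more closely than the Newtonian one does.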
\citet{Tang:2021woo} studied the radial and non-radial oscillation of NSs using
different combinations of modified Newtonian hydrodynamic equations and
gravitational potentials. In particular, \citet{Tang:2021woo} adopted Case A
effective potential with a lapse function correction to the perturbation
equations. The lapse function is defined by
\begin{equation}\label{eq: lapse}
\alpha=\exp (\Phi) \,.
\end{equation}
\citet{Tang:2021woo} found that for the non-radial quadrupolar $f$-mode, the
Case A+lapse scheme performs much better and can approximate the $f$-mode
frequency to within about a few percent even for the maximum-mass configuration
in GR. We will use the same lapse function in our calculations.
\subsection{Buoyancy and the $g$-mode}\label{sec: g-mode}
As is well known, NSs always have real-frequency $f$- and $p$-modes. In
contrast, a $g$-mode may have a real, imaginary, or zero frequency,
corresponding to convective stability, instability, and marginal stability,
respectively. We consider the local dynamics of NS cores, focusing on the
buoyancy experienced by
fluid elements and the associated $g$-mode. The frequencies of $g$-modes are
closely related to the Brunt-V\"ais\"al\"a frequency $N$, defined via
\begin{equation}\label{eq: buoyancy}
N^2=g_{N}^2\left(\frac{1}{c_\mathrm e^2}-\frac{1}{c_\mathrm s^2}\right) \,,
\end{equation}
where $g_{N}$ is the positive Newtonian gravitational acceleration, $c_\mathrm s$
is the adiabatic sound speed, given by
\begin{equation}\label{eq: adiabatic sound speed}
c_\mathrm s^2= \left(\frac{\partial P}{\partial \rho}\right)_{\mathrm s} \,,
\end{equation}
and the quantity $c_\mathrm e$ is given by
\begin{equation}\label{eq: equilibrium sound speed}
c_\mathrm e^2= \frac{{\rm d} P}{{\rm d} \rho} \,.
\end{equation}
If $c_\mathrm s^2 = c_\mathrm e^2$, the star exhibits no convective phenomena (zero-buoyancy
case). In this work, we consider only the $g$-mode of NS cores, so we set
$c_\mathrm s^2 = c_\mathrm e^2$ for the crustal region. Note that $c_\mathrm s^2 > c_\mathrm e^2$ ($c_\mathrm s^2 <
c_\mathrm e^2$) corresponds to convective stability (instability). Combining Eqs.~(\ref{eq:
buoyancy}--\ref{eq: equilibrium sound speed}), we can write the
Brunt-V\"ais\"al\"a frequency as
\begin{equation}\label{eq: bf}
N^{2}= -A g_{N} \,,
\end{equation}
where $A$ is
\begin{equation}\label{eq: S_discriminant_1}
A= \frac{{\rm d} \ln\rho}{{\rm d} r}- \frac{1}{\Gamma_{1}} \frac{{\rm d} \ln P}{{\rm d} r} \,,
\end{equation}
which is called the Schwarzschild discriminant. If the star model obeys a
simple polytropic EOS, $P=K\rho^{\gamma}$, then $\gamma=\rm d\ln P/ \rm d\ln\rho$
is defined for the unperturbed background configuration. Hence, the
Schwarzschild discriminant becomes
\begin{equation}\label{eq: S_discriminant_2}
A= \left(\frac{1}{\gamma} - \frac{1}{\Gamma_{1}}\right) \frac{{\rm d} \ln P}{{\rm d} r} \,.
\end{equation}
Clearly, if the adiabatic index $\Gamma_{1}>\gamma$, then $c_\mathrm s^2 > c_\mathrm e^2$ and
the star is convectively stable. In Sec.~\ref{sec: Composition gradient}, we
will calculate the frequencies of $g$-modes induced by the composition
gradient, which are closely related to the discussion here.
\subsection{Non-radial perturbation equations}\label{sec: nonradial_osc}
In this section, we study non-radial oscillations of NSs in pseudo-Newtonian
gravity. \citet{Tang:2021woo} calculated the quadrupole ($\ell=2$) $f$ and $p$
modes. The perturbation of scalars is expanded in spherical harmonics and the
Lagrangian displacement is expanded in vector spherical harmonics
\citep{McDermott:1988, Tang:2021woo}. When considering an eigenmode, we have
\begin{align}
&\delta\rho = \delta\tilde{\rho}(r)Y_{\ell m}\,,\\
&\delta P = \delta\tilde{P}(r)Y_{\ell m} \,, \\
&\delta\Phi = \delta\tilde{\Phi}(r)Y_{\ell m} \,, \\
&\vec{\xi} = U(r) Y_{\ell m} {\hat r} +V(r) \nabla Y_{\ell m} \,,
\end{align}
where $Y_{\ell m}$ is the standard spherical harmonic function, and $\hat r$ is
the radial unit vector. Then one can obtain the following system of equations
for the fluid perturbations~\citep[see][for a detailed variational
derivation]{Tang:2021woo},
\begin{align}
\frac{{\rm d} U}{{\rm d} r}&=-\left(\frac{2}{r}+\frac{{\rm d}\Phi}{{\rm d}r}+\frac{1}{\gamma P}\frac{{\rm d}P}{{\rm d}r} - \frac{A}{\alpha}\right)U
+ \left[\frac{\alpha \ell(\ell+1)}{\rho r^2 \omega^2} - \frac{1}{\alpha\Gamma_1 P}\right]\delta\tilde{P} \nonumber\\
& \quad + \frac{\alpha \ell(\ell+1)}{r^2 \omega^2}\delta\tilde{\Phi} \,,
\label{eq: bc_p1} \\
\frac{{\rm d}\delta\tilde{P}}{{\rm d} r}&=\left(\frac{\rho\omega^2}{\alpha} - \frac{{\rm d} P}{{\rm d} r}A\right)U + \frac{1}{\Gamma_1 P}\frac{{\rm d} P}{{\rm d} r}\delta\tilde{P} - \rho\frac{{\rm d} \delta\tilde{\Phi}}{{\rm d}r} \,, \label{eq: bc_p2} \\
\frac{{\rm d} \delta\tilde{\Phi}}{{\rm d} r}&=\Psi \,,
\label{eq: bc_p3}\\
\frac{{\rm d} \Psi}{{\rm d} r}&=-\frac{2}{r}\Psi + \frac{\ell(\ell+1)}{r^2}\delta\tilde{\Phi} + 4\pi\frac{\rho}{\Gamma_1 P}\delta\tilde{P} - 4\pi\rho AU \,.
\label{eq: bc_p4}
\end{align}
To solve these equations, we require the boundary conditions at the center and
surface of the NS. At the center, the regularity conditions of the variables
yield the following relations \citep{Westernacher-Schneider:2020bkw,
Tang:2021woo}
\begin{align}
& U = r^{\ell-1}A_{0} \,, \\
& \delta\tilde{P} = r^{\ell} B_{0} \,, \\
& \delta\tilde{\Phi} = r^{\ell} C_{0} \,, \\
& \Psi = \ell r^{\ell-1} C_{0} \,, \\
& A_{0} = \frac{\alpha \ell}{\rho\omega^{2}} (B_{0} + \rho C_{0}) \,,
\end{align}
where $B_{0}$ and $C_{0}$ are constants. At the surface of the star, the
perturbed pressure must vanish, which provides
\begin{equation}
\frac{{\rm d} P}{{\rm d} r}U+\delta\tilde{P}=0 \label{eq: surface_BC1} \,.
\end{equation}
The $\delta\tilde{\Phi}$ and ${{\rm d}\delta\tilde{\Phi}}/{{\rm d}r}$ are
continuous, so we obtain
\begin{equation}
\Psi=-\frac{\ell+1}{r}\delta\tilde{\Phi} \label{eq: surface_BC2} \,.
\end{equation}
Note that in the N, Case A, and TOV schemes, the lapse function is equal to
unity ($\alpha=1$).
\begin{table*}
\centering
\caption{Comparison of the non-radial mode frequencies (unit: Hz) of a
polytropic star model where polytropic index $\gamma=2$, $K=1.4553 \times
10^{5} \ \rm g^{-1}\,cm^{5}\,s^{-2}$, and central density
$\rho_{c}=7.9\times10^{14}\ \rm g\,cm^{-3}$, to earlier results of \citet{Westernacher-Schneider:2020bkw} and \citet{Tang:2021woo}.}
\begin{tabular}{c c c c c c c }
\hline
Mode & \citet{Westernacher-Schneider:2020bkw} & \citet{Tang:2021woo} & $\Gamma_{1}=2.01$ &
$\Gamma_{1}=2.05$ &$\Gamma_{1}=2.1$ & $\Gamma_{1}=2.15 $ \\
\hline
$p_{2}$ & 7290 & 7932 & 7957 & 8049 & 8163 & 8276 \\
$p_{1}$ & 5122 & 5131 & 5151 & 5216 & 5297 & 5377 \\
$f$ & 2024 & 2021 & 2021 & 2025 & 2029 & 2032 \\
$g_{1}$ & -- & -- & 143 & 317 & 441 & 532 \\
$g_{2}$ & -- & -- & 99 & 219 & 306 & 369 \\
$g_{3}$ & -- & -- & 76 & 169 & 235 & 284\\
\hline
\end{tabular}
\label{tab: Tang_compare}
\end{table*}
To test our numerical code, we have redone calculations with the same
polytropic EOS as that in the Appendix A of \citet{Marek:2005if}, where the
polytropic index $\gamma$ and the adiabatic index $\Gamma_{1}> \gamma$ are
constant throughout the stellar interior. Detailed numerical results are shown
in Table \ref{tab: Tang_compare}. It is noted that our numerical results for
the polytropic model with $\Gamma_{1}= \gamma$ agree with Table 3 of
\citet{Tang:2021woo}. In Table \ref{tab: Tang_compare}, we compare the
frequencies of $p$, $f$, and $g$ modes computed with $\Gamma_{1}> \gamma$ and
$\Gamma_{1}= \gamma$ \citep{Westernacher-Schneider:2020bkw, Tang:2021woo}. The
frequencies of the $p$ and $f$ modes increase with the adiabatic index
$\Gamma_{1}$. In particular, the $g$-mode frequencies also increase with
$\Gamma_{1}$, which indicates a larger buoyancy.
\section{NUMERICAL RESULTS}\label{sec: NR}
\subsection{Composition gradient}\label{sec: Composition gradient}
Taking the matter composition into account, and assuming that the model accounts
for the presence of neutrons, protons, and electrons, we have a two-parameter
EOS, $P=P(n,x)$, which is a function of the baryon number density $n$ and the
proton fraction $x=n_\mathrm p/n$. Specifically, we use shorthand notations: ``$\mathrm n$'' for
neutrons, ``$\mathrm p$'' for protons, and ``$\mathrm e$'' for electrons. The energy per baryon of the
nuclear matter can be written as \citep{Lagaris:1981, Prakash:1988,
Wiringa:1988tp, Lai:1993di}
\begin{equation} \label{eq:Ennx}
E_n(n,x)=T_n(n,x)+V_{0}(n)+V_{2}(n)(1-2x)^2 \,,
\end{equation}
where
\begin{equation}
T_n(n,x)={\frac{3}{5}\frac{\hbar^2}{2m_\mathrm n} (3\pi^2n)^{2/3}[x^{5/3}}+(1-x)^{5/3}] \,,
\end{equation}
is the Fermi kinetic energy of the nucleons, and $m_n$ is the nucleon mass.
$V_{0}$ mainly specifies the bulk compressibility of the matter, and $V_{2}$ is
related to the symmetry energy of nuclear matter \citep{Lattimer:2014scr}.
To compare the results of $g$-modes in Newtonian gravity \citep{Lai:1993di}, we
adopt the same $V_{0}$ and $V_{2}$ for different EOS models, based on the
microscopic calculations in \citet{Wiringa:1988tp}. Detailed numerical results
of $V_{0}$ and $V_{2}$ have been tabulated in Table IV of
\citet{Wiringa:1988tp}. The approximate formulae of $V_{0}$ and $V_{2}$ are
presented in Sec.~4.3 of \citet{Lai:1993di}.
In this work, we consider the model ``AU'' \citep[the EOS based on nuclear
potential AV14+UVII in][]{Wiringa:1988tp} and the model ``UU'' \citep[the EOS
based on nuclear potential UV14+UVII in][]{Wiringa:1988tp}, respectively. For
the model AU, $V_0$ and $V_2$ (in the unit of MeV) are fitted as
\citep{Lai:1993di}
\begin{align}
V_{0}& = -43+330\,(n-0.34)^2 \label{eq: AU_1} \,, \\
V_{2}& = 21\,n^{0.25} \label{eq: AU_2} \,,
\end{align}
where $n$ is the baryon number density in $\rm fm^{-3}$. For the model UU, we
have
\begin{align}
V_{0}& = -40+400\,(n-0.3)^2 \label{eq: UU_1} \,, \\
V_{2}& = 42\,n^{0.55} \label{eq: UU_2} \,.
\end{align}
These fitting formulae are valid for $0.07 \, {\rm fm}^{-3} \leq n \leq 1\, \rm fm^{-3} $.
For densities $0.001 \, {\rm fm}^{-3} < n <0.07 \, \rm fm^{-3}$, we employ the EOS of \citet{BBP:1971},
while for $n \leq 0.001 \, \rm fm^{-3}$, we employ the EOS of \citet{BPS:1971}.
Once we have this relation, we can work out the mass-energy density, pressure,
and adiabatic sound speed. The equilibrium configuration must satisfy the beta equilibrium,
\begin{equation}
\mu_\mathrm n = \mu_\mathrm p+\mu_\mathrm e \,, \label{eq: one}
\end{equation}
and the charge neutrality
\begin{equation}
n_\mathrm p = n_\mathrm e \,, \label{eq: two}
\end{equation}
where $\mu_{i} $ are the chemical potentials of the three species of particles.
The equilibrium proton fraction $x(n) = x_{\mathrm e}(n)$ can be obtained by solving
Eqs.~(4.12--4.14) of \citet{Lai:1993di}. Hence, the mass-energy density and
pressure are determined as
\begin{align}
\epsilon(n,x)& = n\big[m_n+E(n,x)/c^2\big] \label{eq: energy-density} \,, \\
P(n,x) & = n^2{\frac{\partial E(n,x)}{\partial n}}
=\frac{2n}{3}T_n+\frac{n}{3}T_\mathrm e+n^2\left[V_{0}^{\prime}+V_{2}^{\prime} \,(1-2x)^2\right] \label{eq: pressure} \,,
\end{align}
where
\begin{equation}
T_\mathrm e(n,x_\mathrm e)=\frac{3}{4}\hbar c(3\pi^2n)^{1/3}x_\mathrm e^{4/3} \,, \label{eq: energy_e}
\end{equation}
is the energy per baryon of relativistic electrons. Here and in the following,
primes denote derivatives with respect to the baryon number density $n$ (for
example, $V_{0}^{\prime}={\rm d} V_{0}/{{\rm d} n}$). The adiabatic sound speed
$c_\mathrm s^2$ is
\begin{multline}c_\mathrm s^2 =\frac{\partial P}{\partial \epsilon}
=\frac{n}{\epsilon+P/c^2}\frac{\partial P}{\partial n} \\
\quad = \frac{n}{\epsilon+P/c^2}\left\{{\frac{10}{9}}T_n
+{\frac{4}{9}}T_\mathrm e+2n\left[V_{0}^{\prime}+V_{2}^{\prime}\,(1-2x)^2\right] \right\} \\
+\frac{n}{\epsilon+P/c^2}\left\{n^2\left[V_{0}^{\prime\prime}+V_{2}^{\prime\prime}\,(1-2x)^2\right] \right\} \,.
\label{eq: adiabatic_cs}
\end{multline}
The difference between $c_\mathrm s^2$ and $c_\mathrm e^2$ is given by
\begin{multline}
c_\mathrm s^2-c_\mathrm e^2 =\frac{n}{\epsilon+P/c^2}\left(\frac{\partial P}{\partial n}
-\frac{{\rm d} P}{{\rm d} n}\right)=-\frac{n}{\epsilon+P/c^2}
\left(\frac{\partial P}{\partial x}\right){\frac{{\rm d} x}{{\rm d} n}}\\
=-\frac{n^3}{\epsilon+P/c^2}\left[{\frac{\partial }{\partial n}}(\mu_\mathrm e+\mu_\mathrm p-\mu_\mathrm n)
\right]{\frac{{\rm d} x}{{\rm d} n}} \,.
\label{eq: delt_cs}
\end{multline}
From the beta equilibrium [i.e.\ Eq.~(\ref{eq: one})], we obtain
\begin{equation}
{\frac{{\rm d} x}{{\rm d} n}}= -\left[{\frac{\partial }{\partial n}}(\mu_\mathrm e+\mu_\mathrm p-\mu_\mathrm n) \right]\left[{\frac{\partial }{\partial x}}(\mu_\mathrm e+\mu_\mathrm p-\mu_\mathrm n) \right] ^{-1}\,.
\label{eq: dx/dn}
\end{equation}
Finally, the difference between $c_\mathrm s^2$ and $c_\mathrm e^2$ can be represented as
\begin{equation}
c_\mathrm s^2-c_\mathrm e^2 = \frac{n^3}{\epsilon+P/c^2}\left[{\frac{\partial }{\partial n}}(\mu_\mathrm e+\mu_\mathrm p-\mu_\mathrm n) \right]^{2}\left[{\frac{\partial }{\partial x}}(\mu_\mathrm e+\mu_\mathrm p-\mu_\mathrm n) \right] ^{-1}\,.
\label{eq: Delt_cs}
\end{equation}
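Equations~(\ref{eq: dx/dn}) and (\ref{eq: Delt_cs}) amount to the implicit-function theorem applied to the beta-equilibrium condition $\tilde\mu(n,x)\equiv\mu_\mathrm e+\mu_\mathrm p-\mu_\mathrm n=0$. A sketch of the consistency check for a toy $\tilde\mu$ (the functional form and coefficients below are assumed purely for illustration):

```python
# Verify Eq. (dx/dn): along the beta-equilibrium curve mu(n, x) = 0,
# dx/dn = -(d mu/d n) / (d mu/d x), with mu = mu_e + mu_p - mu_n.

def mu(n, x):
    # toy model (assumed): electron-like term minus a symmetry-energy-like term
    return 60.0 * n ** (1.0 / 3.0) * x ** (1.0 / 3.0) - 50.0 * n * (1.0 - 2.0 * x)

def x_eq(n, lo=1e-6, hi=0.5):
    # bisection for the beta-equilibrium proton fraction, mu(n, x) = 0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mu(n, lo) * mu(n, mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

n0, h = 0.16, 1e-6
# numerical dx/dn along the equilibrium curve
dxdn_num = (x_eq(n0 + h) - x_eq(n0 - h)) / (2.0 * h)
# implicit-function-theorem value, Eq. (dx/dn)
x0 = x_eq(n0)
dmu_dn = (mu(n0 + h, x0) - mu(n0 - h, x0)) / (2.0 * h)
dmu_dx = (mu(n0, x0 + h) - mu(n0, x0 - h)) / (2.0 * h)
print(abs(dxdn_num + dmu_dn / dmu_dx))  # small: finite-difference error only
```

The same finite-difference machinery, applied to a realistic $\tilde\mu(n,x)$, yields the composition-gradient contribution $c_\mathrm s^2-c_\mathrm e^2$ of Eq.~(\ref{eq: Delt_cs}).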
\begin{figure*}
\centering
\includegraphics[width=12cm]{Fig_composition_gradient_EOS.pdf}
\caption{The left panels show the pressure $P$ (upper) and the proton fraction $x=n_\mathrm p/n$ (lower) versus the mass-energy density $\epsilon$ for representative EOS models AU and UU. The right panels show the adiabatic sound speed $c_\mathrm s$ (upper) and the fractional difference between $c_\mathrm s^2$ and $c_\mathrm e^2$ (lower) versus the mass-energy density $\epsilon$. The purple dashed line marks the mass-energy density $\epsilon = 0.07 \,\rm fm^{-3}$.}
\label{fig: relations}
\end{figure*}
In the upper left panel of Fig.~\ref{fig: relations}, we show the EOS models AU
and UU, which include the region below neutron drip \citep{BBP:1971} and the
lower-density crustal region \citep{BPS:1971}. In the bottom left panel of
Fig.~\ref{fig: relations}, we show the relation between the proton fraction
$x=n_\mathrm p/n$ and the mass-energy density $\epsilon$. One notices that the value
of $x$ for model UU is larger than that for model AU. In the right panels of
Fig.~\ref{fig: relations}, we show the adiabatic sound speed $c_\mathrm s$ and the
fractional difference between $c_\mathrm s^2$ and $c_\mathrm e^2$ as functions of the
mass-energy density. Note that in our work we consider only $g$-modes of the NS
core, so we set $c_\mathrm s^2=c_\mathrm e^2$ in the lower-density region. As discussed in
Sec.~4 of \citet{Lai:1993di}, setting $c_\mathrm s^2=c_\mathrm e^2$ in the crustal region
effectively suppresses the crustal $g$-modes while concentrating on the core
$g$-modes.
\begin{figure}
\centering
\includegraphics[width=8cm]{Fig_composition_gradient_MR.pdf}
\caption{Mass-radius relations of models AU and UU with Newtonian, Case A, and GR schemes. The $1$-$\sigma$ regions of the mass measurements in
PSRs~J0348+0432~\citep{Antoniadis:2013pzd} and
J0740+6620~\citep{Fonseca:2021wxt} are illustrated.}
\label{fig: composition_gradient_MR}
\end{figure}
In Fig. \ref{fig: composition_gradient_MR}, we display the mass-radius relations
of models AU and UU with Newtonian, pseudo-Newtonian (Case A), and GR schemes.
Note that the rest-mass density $\rho$ appears in the background and
perturbation equations in the N and N+lapse schemes; the total energy density
$\epsilon$ and the rest-mass density $\rho$ both enter the background equations in Case
A and Case A+lapse schemes, while the rest-mass density $\rho$ appears in the
perturbation equations. To compare with the results of \citet{Lai:1993di}, we
use the energy density $\epsilon$ to obtain the mass-radius relation, as well as
to solve perturbation equations. The difference between Case A and GR is
apparent, though much smaller than the difference between Newtonian gravity and
GR. The Case A potential has captured some main effects from the full GR. As we
will see, the perturbation results will be even closer to that of GR than the
background results.
\citet{Lai:1993di} investigated the $f$- and $g$-mode frequencies of EOS models AU
and UU with a given mass $M=1.4 \, M_{\odot}$\footnote{\citet{Lai:1993di} also
calculated models UT and UU2. However, the maximum mass of model UT does
not accord with the new observational results \citep{Antoniadis:2013pzd,
Fonseca:2021wxt}, and model UU2 considers only free $\mathrm n, \mathrm p,\mathrm e$
matter ($V_{0}= V_{2}=0$). We do not include these two EOSs in our calculations.}.
They found that the $f$-mode properties are very similar, due to the fact that
the two EOSs have similar bulk properties ($V_{0}$) for nuclear matter.
However, the $g$-mode properties are very different between models AU and
UU. From the bottom right panel of Fig.~\ref{fig: relations}, we find that the
value of $(c_\mathrm s^2-c_\mathrm e^2)/c_\mathrm s^2$ differs between the two models as the energy
density increases. These differences reflect the sensitive dependence of the
$g$-mode on the nuclear matter's symmetry energy ($V_{2}$).
\begin{figure}
\centering
\includegraphics[width=8cm]{Fig_AU_frequency.pdf}
\caption{The $g$-mode frequencies for the EOS model AU, with a given mass
$M=1.98 \, M_{\odot}$. The upper panel shows the frequencies of the first
eight quadrupolar ($\ell=2$) $g$-modes obtained with the different schemes in Table
\ref{tab: schemes}. The lower panel shows the percentage difference
$\Delta_{\rm C}$ between our numerical results and the Case A+lapse scheme.
} \label{fig: AU_frequency}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8cm]{Fig_UU_frequency.pdf}
\caption{Same as Fig.~\ref{fig: AU_frequency}, but for the EOS model UU.}
\label{fig: UU_frequency}
\end{figure}
In our study, we extend the calculations in \citet{Lai:1993di} by computing the
$g$-modes. We use stars with a fixed mass $M=1.98 \, M_{\odot}$ as an
example. In the upper panel of Fig.~\ref{fig: AU_frequency}, we plot the
frequencies of the first eight quadrupolar $g$-modes for the EOS AU. The results
computed by all perturbation schemes are represented by lines of different
colors in Fig.~\ref{fig: AU_frequency}. The lower panel of
Fig.~\ref{fig: AU_frequency} shows the absolute percentage difference $\Delta_{\rm C}$ defined
by
\begin{equation}
\Delta_{\rm C} = \left\lvert\frac{f-f_\text{Case A+lapse}}{f_\text{Case A+lapse}}\right\rvert \times 100\ \% \,,
\label{eq: error}
\end{equation}
where $f$ is the frequency of the $g$-mode obtained by our perturbation schemes in
Table \ref{tab: schemes}. According to the numerical results for non-radial
oscillations ($f$-mode) in \citet{Tang:2021woo}, the Case A+lapse scheme agrees
with full GR to within a few per cent for a given mass $M=1.4 \, M_{\odot}$.
Hence, we use the results of Case A+lapse as the baseline in the case
of the composition gradient. In particular, we find that the TOV+lapse scheme
gives a good approximation to the $g$-mode frequencies, at the few-per-cent
level. Besides, the absolute percentage difference $\Delta_{\rm C}$ of the
TOV+lapse scheme decreases with an increasing number of nodes. We also plot the
$g$-mode frequencies and the absolute percentage difference $\Delta_{\rm C}$
for the EOS UU in Fig.~\ref{fig: UU_frequency}. We see similar $g$-mode
properties as for the EOS AU in Fig.~\ref{fig: AU_frequency}.
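The percentage difference of Eq.~(\ref{eq: error}) [and likewise $\Delta_{\rm D}$ of Eq.~(\ref{eq: error_D}) below] reduces to a one-line helper; the sample frequencies here are made-up numbers, not taken from our results:

```python
def percent_diff(f, f_ref):
    """Absolute percentage difference of Eq. (error): |f - f_ref| / f_ref * 100."""
    return abs(f - f_ref) / f_ref * 100.0

# illustrative only: a scheme giving 103 Hz against a 100 Hz baseline
print(percent_diff(103.0, 100.0))  # 3.0 (per cent)
```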
\subsection{Density discontinuity}\label{sec: density discontinuity}
In this subsection, we study the effect of discontinuities at high density on
the oscillation spectrum of a NS. We consider a simple polytropic EOS of the
form \citep{Finn:1987, McDermott:1990, Miniutti:2002bh}
\begin{equation}
P = \left\{
\begin{array}{rl}
K\epsilon^{\gamma} \,, \quad \quad \quad \quad & \epsilon > \epsilon_{\rm{d}} +\Delta\epsilon \,, \\
\displaystyle{ K\left( 1 + \frac{\Delta \epsilon}{\epsilon_{\rm{d}}}
\right)^{\gamma} \epsilon^{\gamma}} \,, & \epsilon \leq \epsilon_{\rm{d}} \,,
\end{array} \right.
\label{eq: eos}
\end{equation}
where the discontinuity of amplitude $\Delta\epsilon$ is located at a
mass-energy density $\epsilon_{\rm d}$. We study the properties of $g$-modes
with density discontinuity using the pseudo-Newtonian gravity schemes in
Table~\ref{tab: schemes}.
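A minimal sketch of Eq.~(\ref{eq: eos}), with $\gamma=2$ and, anticipating the parameter choice below, the normalization $K(1+\Delta\epsilon/\epsilon_{\rm d})^{2}=180$ (the value $\epsilon_{\rm d}=1$ is an arbitrary illustrative unit). A useful check is that the pressure is continuous across the jump: the low-density branch at $\epsilon=\epsilon_{\rm d}$ gives $K(1+\Delta\epsilon/\epsilon_{\rm d})^{\gamma}\epsilon_{\rm d}^{\gamma}=K(\epsilon_{\rm d}+\Delta\epsilon)^{\gamma}$, matching the high-density branch at $\epsilon=\epsilon_{\rm d}+\Delta\epsilon$:

```python
# Sketch of the piecewise EOS, Eq. (eos): a gamma = 2 polytrope with a
# density jump of amplitude DELTA at EPS_D.  K is fixed by
# K * (1 + DELTA/EPS_D)**GAMMA = 180; EPS_D = 1.0 is an illustrative unit.

GAMMA = 2.0
EPS_D = 1.0
DELTA = 0.2 * EPS_D                       # Delta_eps / eps_d = 0.2
K = 180.0 / (1.0 + DELTA / EPS_D) ** GAMMA

def pressure(eps):
    if eps > EPS_D + DELTA:               # high-density branch
        return K * eps ** GAMMA
    if eps <= EPS_D:                      # low-density branch
        return K * (1.0 + DELTA / EPS_D) ** GAMMA * eps ** GAMMA
    raise ValueError("eps lies inside the jump: eps_d < eps <= eps_d + Delta_eps")

# Pressure is continuous across the jump: the low-density branch at eps_d
# equals the high-density branch at eps_d + Delta_eps (density jumps at fixed P).
p_low = pressure(EPS_D)
p_high = K * (EPS_D + DELTA) ** GAMMA
print(abs(p_low - p_high))  # 0 up to round-off
```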
\begin{figure}
\centering
\includegraphics[width=8cm]{Fig_density_EOS.pdf}
\caption{EOSs with density discontinuity, for different values of $\Delta
\epsilon/ \epsilon_{\rm {d}}$. The density and pressure are normalized by
the standard nuclear density $\epsilon_{\rm nuc}=2.68\times 10^{14} \ \rm g
\ cm^{-3}$.}
\label{fig: density_EOS}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=12cm]{Fig_density_MR.pdf}
\caption{({\it Left}) The relation between the mass of NSs and central
density $\epsilon_{c}$ for different values of $\Delta \epsilon/
\epsilon_{\rm {d}}$. ({\it Right}) Mass and radius relation of NSs with the
same value of $\Delta \epsilon/ \epsilon_{\rm {d}}$. The horizontal blue line shows the mass
$M=1.4 \, M_{\odot}$. } \label{fig: density_MR}
\end{figure*}
Now we have five parameters for a NS: the central density
$\epsilon_{c}$, the amplitude of the discontinuity $\Delta\epsilon$, the critical
density $\epsilon_{\rm d}$, the polytropic index $\gamma$, and the polytropic
coefficient $K$. To compare with the results for non-radially oscillating
relativistic stars in the full theory \citep[i.e.\ without the relativistic Cowling
approximation,][]{Miniutti:2002bh}, we adopt the same parameters as
\citet{Miniutti:2002bh}: the polytropic index $\gamma=2$, $K=180\ \rm km^{2}$
for the NSs without discontinuity, and $K(1+\Delta \epsilon/\epsilon_{\rm
{d}})^{2}=180\ \rm km^{2}$ for the case with a discontinuity. Some examples of
this EOS are illustrated in Fig.~\ref{fig: density_EOS}.
In performing the calculation, boundary conditions must be specified at the
locations of the density discontinuities. \citet{Finn:1987} analyzed the jump
conditions of the perturbation variables with the Cowling approximation in
Newtonian gravity. Since the density is discontinuous, the perturbation
variables are discontinuous as well, and the differential equations (\ref{eq:
bc_p1}--\ref{eq: bc_p4}) require jump conditions at the density discontinuity;
denoting the jump in the density by $[\rho]$, they read
\begin{align}
& U = 0 \,, \\
& \delta\tilde{P} = g_{N} [\rho]U\,, \\
& \delta\tilde{\Phi} = -4\pi[\rho]U \,, \\
& \Psi =0 \,.
\end{align}
To compare with the results of \citet{Miniutti:2002bh}, we use the energy
density $\epsilon$ to solve perturbation equations.
In the left panel of Fig. {\ref{fig: density_MR}}, we show the mass $M$ versus
central density $\epsilon_{c}$ for each value of $\Delta \epsilon/ \epsilon_{\rm
{d}}$. As $\Delta \epsilon/ \epsilon_{\rm {d}}$ gets larger, the maximum mass
decreases, and the stable region ${\rm d} M/ {\rm d}\epsilon_{c}>0$ becomes narrower
and moves to a higher-density region. In this work, we study only stable NS
models with ${\rm d} M/ {\rm d}\epsilon_{c}>0$. In our analysis, we fix the mass of
a NS to $M=1.4 \, M_{\odot}$ as an example. In the right panel of Fig.
{\ref{fig: density_MR}}, we plot the mass-radius relation for NSs with and
without density discontinuity. In both cases, we set the polytropic index
$\gamma=2$. So that the two EOSs coincide for $\epsilon \leq \epsilon_{\rm {d}}$, we
adopt $K=180\ \rm km^{2}$ for the NS models without discontinuity, and
$K(1+\Delta \epsilon/ \epsilon_{\rm {d}})^{2}=180\ \rm km^{2}$ for the NS models
with discontinuity. We find that the maximum mass is lower for the models with a
discontinuity, because the discontinuity softens the EOS. NSs
with a discontinuity are more compact than those without a discontinuity for a
fixed mass.
\begin{figure*}
\centering
\includegraphics[width=12cm]{Fig_density_f_mode.pdf}
\caption{The top panels plot the frequency of the non-radial $f$-mode for
different values of $\Delta \epsilon/ \epsilon_{\rm {d}}$ versus the density
$\epsilon_{\rm d}$ for the four perturbation schemes. The GR curves
correspond to GR results obtained by \citet{Miniutti:2002bh}. The bottom
panels show the percentage difference $\Delta_{\rm D}$ between our numerical
results and the GR results. We here consider stars with a fixed mass $M=1.4
\, M_{\odot}$.} \label{fig: density_f_mode}
\end{figure*}
Now we will focus on the $\ell=2$ non-radial oscillation modes. In particular,
we consider the quadrupolar fundamental $f$-mode and gravity $g$-mode. The
frequency versus density $\epsilon_{\rm {d}}$ for the fixed mass $M=1.4 \,
M_{\odot}$ is shown in the top panel of Fig. {\ref{fig: density_f_mode}}. The
results computed by the four different perturbation schemes are represented by
lines of different colors in Fig.~\ref{fig: density_f_mode}. The GR curves in the
upper panel correspond to the results of full perturbation theory in GR
\citep{Miniutti:2002bh}. Additionally, the absolute percentage difference
$\Delta_{\rm D}$ defined by
\begin{equation}
\Delta_{\rm D} = \left\lvert\frac{f-f_\text{GR}}{f_\text{GR}}\right\rvert \times 100\ \% \,,
\label{eq: error_D}
\end{equation}
is shown in the bottom panel of Fig.~\ref{fig: density_f_mode}. The frequency
of the $f$-mode in the Case A+lapse scheme decreases with increasing density
$\epsilon_{\rm d}$, following the trend of the GR results. Again, the
Case A+lapse scheme is quite accurate for the frequency of the $f$-mode. For
$\Delta \epsilon/ \epsilon_{\rm {d}}=0.3$, the Case A+lapse scheme is not as
accurate as in the $\Delta \epsilon/ \epsilon_{\rm {d}}=0.1, 0.2$ cases, but it
is still the best among the four perturbation schemes. \citet{Tang:2021woo}
calculated $f$-mode using Newtonian, Newtonian+lapse, Case A, and Case A+lapse
schemes. They found that the Case A+lapse scheme performs much better and can
reasonably approximate the $f$-mode frequency.
\begin{figure*}
\centering
\includegraphics[width=12cm]{Fig_density_g_mode.pdf}
\caption{Same as Fig. {\ref{fig: density_f_mode}}, but for the $g$-mode
frequencies.} \label{fig: density_g_mode}
\end{figure*}
We show in the top panel of Fig.~\ref{fig: density_g_mode} the frequency of
the $g$-mode as a function of the density $\epsilon_{\rm d}$ for the four schemes
and the GR scheme. We also plot the results $\Delta_{\rm D}$ for the four
schemes at the bottom of Fig. {\ref{fig: density_g_mode}}. In particular, we
find that the Case A+lapse scheme can approximate the $g$-mode frequency of GR
reasonably well \citep{Miniutti:2002bh}. The percentage difference $\Delta_{\rm
D}$ of the $g$-mode of the Case A+lapse scheme decreases with increasing $\Delta
\epsilon/ \epsilon_{\rm {d}}$. The Case A+lapse scheme provides the best
approximation to the frequencies of the $f$ and $g$ modes. For the same central
density and discontinuity density, the radius of the density discontinuity
$R_{\rm{d}}$ would be larger than the radius $R$ of the Newtonian star. Hence, we
do not include the N and N+lapse schemes for the discontinuity $g$-mode in this work.
Numerical results of the different schemes are given in Tables \ref{tab: f_mode}
and \ref{tab: g_mode}. For a given density $\epsilon_{\rm d}$ and $\Delta
\epsilon/ \epsilon_{\rm d}$, we show our numerical results for the frequencies
of $f$ and $g$ modes with four schemes and the GR scheme, where the GR results
were calculated by \citet{Miniutti:2002bh}.
\begin{table*}
\centering
\caption{Comparison between the frequencies of the $f$-mode (unit: Hz) of
\citet{Miniutti:2002bh} and the different schemes in Table \ref{tab:
schemes}, with a given mass $M=1.4 \, M_{\odot}$ and $\gamma=2$, for
different discontinuity densities. The polytropic coefficient $K$ satisfies
$K(1+\Delta \epsilon/ \epsilon_{\rm d})^{2}=180\, \rm km^{2}$. }
\begin{tabular}{c c c c c c c }
\hline
$\epsilon_{\rm d}\ (\rm g\,cm^{-3})$ & $\Delta \epsilon/ \epsilon_{\rm d}$ & GR & Case A & Case A+lapse & TOV & TOV+lapse \\
\hline
-- & 0.0 & 1666 & 2144 & 1673 & 2423 & 1863\\
\hline
$3\times10^{14}$ & 0.1 & 1998 & 2629 & 1984 & 3058 & 2257\\
$4\times10^{14}$ & 0.1 & 1962 & 2562 & 1942 & 2987 & 2213\\
$5\times10^{14}$ & 0.1 & 1915 & 2482 & 1892 & 2890 & 2155\\
$6\times10^{14}$ & 0.1 & 1857 & 2404 & 1842 & 2782 & 2089\\
$7\times10^{14}$ & 0.1 & 1792 & 2302 & 1777 & 2644 & 2004\\
$8\times10^{14}$ & 0.1 & 1723 & 2215 & 1720 & 2526 & 1929\\
$9\times10^{14}$ & 0.1 & 1670 & 2152 & 1678 & 2431 & 1864\\
\hline
$4\times10^{14}$ & 0.2 & 2408 & 3269 & 2359 & 3968 & 2765\\
$5\times10^{14}$ & 0.2 & 2330 & 3117 & 2273 & 3764 & 2658\\
$6\times10^{14}$ & 0.2 & 2226 & 2901 & 2149 & 3536 & 2532\\
$7\times10^{14}$ & 0.2 & 2088 & 2665 & 2006 & 3238 & 2362\\
$8\times10^{14}$ & 0.2 & 1901 & 2451 & 1871 & 2860 & 2137\\
$9\times10^{14}$ & 0.2 & 1680 & 2171 & 1692 & 2451 & 1881\\
\hline
$5\times10^{14}$ & 0.3 & 3216 & 4213 & 2859 & 6350 & 3829\\
$6\times10^{14}$ & 0.3 & 3039 & 3909 & 2708 & 5718 & 3585\\
$7\times10^{14}$ & 0.3 & 2831 & 3605 & 2547 & 5066 & 3305\\
$8\times10^{14}$ & 0.3 & 2553 & 3044 & 2236 & 4298 & 2938\\
$9\times10^{14}$ & 0.3 & 2002 & 2311 & 1783 & 3053 & 2254\\
\hline
\end{tabular}
\label{tab: f_mode}
\end{table*}
\begin{table*}
\centering
\caption{Same as Table \ref{tab: f_mode}, but for the $g$-mode frequencies.}
\begin{tabular}{c c c c c c c }
\hline
$\epsilon_{\rm d}\ (\rm g\,cm^{-3})$ & $\Delta \epsilon/ \epsilon_{\rm d}$ & GR & Case A & Case A+lapse & TOV & TOV+lapse \\
\hline
-- & 0.0 & -- & -- & -- & -- & -- \\
\hline
$3\times10^{14}$ & 0.1 & 504 & 571 & 500 & 604 & 523\\
$4\times10^{14}$ & 0.1 & 567 & 660 & 570 & 695 & 596\\
$5\times10^{14}$ & 0.1 & 613 & 730 & 624 & 766 & 651\\
$6\times10^{14}$ & 0.1 & 644 & 786 & 665 & 820 & 690\\
$7\times10^{14}$ & 0.1 & 659 & 828 & 692 & 855 & 712\\
$8\times10^{14}$ & 0.1 & 658 & 858 & 708 & 874 & 720\\
$9\times10^{14}$ & 0.1 & 641 & 876 & 713 & 876 & 712\\
\hline
$4\times10^{14}$ & 0.2 & 840 & 987 & 834 & 1059 & 883\\
$5\times10^{14}$ & 0.2 & 912 & 1093 & 916 & 1168 & 969\\
$6\times10^{14}$ & 0.2 & 961 & 1173 & 976 & 1252 & 1032\\
$7\times10^{14}$ & 0.2 & 987 & 1229 & 1016 & 1305 & 1070\\
$8\times10^{14}$ & 0.2 & 979 & 1262 & 1034 & 1311 & 1071\\
$9\times10^{14}$ & 0.2 & 906 & 1240 & 1009 & 1240 & 1010\\
\hline
$5\times10^{14}$ & 0.3 & 1211 & 1445 & 1174 & 1647 & 1286\\
$6\times10^{14}$ & 0.3 & 1281 & 1556 & 1257 & 1758 & 1375\\
$7\times10^{14}$ & 0.3 & 1326 & 1642 & 1319 & 1835 & 1439\\
$8\times10^{14}$ & 0.3 & 1339 & 1667 & 1341 & 1862 & 1467\\
$9\times10^{14}$ & 0.3 & 1251 & 1572 & 1274 & 1726 & 1383\\
\hline
\end{tabular}
\label{tab: g_mode}
\end{table*}
\section{Conclusions}\label{sec: conculsion}
In light of new observations, oscillation modes of NSs have been of particular
interest to the physics and astrophysics communities in recent years. In this
work, we have investigated the properties of the gravity $g$-mode for NSs in the
framework of pseudo-Newtonian gravity. \citet{Tang:2021woo} investigated
barotropic oscillations ($\Gamma_{1}=\gamma$ and the Schwarzschild discriminant
$A = 0$). We extended that work and studied the $g$-modes of NSs with the
same polytropic EOS model. We find that the $g$-mode frequencies increase with
increasing adiabatic index, which indicates that the buoyancy becomes much
larger.
A deeper understanding of the oscillations of NSs, which could be associated with
emitted gravitational waves, requires an analysis of both the state and
composition of the NS matter. We considered the case of a composition
gradient, and extended the calculations in \citet{Lai:1993di} to compute the
$g$-modes. The value of $(c_\mathrm s^2-c_\mathrm e^2)/c_\mathrm s^2$ differs between EOS
models as the energy density increases. In particular, these differences reflect
the sensitive dependence of the $g$-mode on the nuclear matter's symmetry energy
[$V_{2}$ in Eq.~(\ref{eq:Ennx})]. Note that the tidal deformability of binary NSs
appears to be related to the dominant oscillation frequency of the post-merger
remnant \citep{Bernuzzi:2015rla}. The impact of thermal and rotational effects
can provide simple arguments that help explain this result
\citep{Chakravarti:2019sdc}. More recently, \citet{Andersson:2022cax} considered
the dynamical tides of NSs in the framework of post-Newtonian gravity. We
expect that pseudo-Newtonian gravity can be used to study the resonant
oscillations and tidal response in coalescing binary NSs in the future.
We considered a phase transition occurring in the inner core of NSs, which could
be associated with a density discontinuity. A phase transition would soften the
EOS, leading to more compact NSs. Using the different schemes, we have
calculated the frequencies of the $f$ and $g$ modes for the $\ell=2$ component.
Compared to the results of GR \citep{Miniutti:2002bh}, the Case A+lapse scheme
approximates the $f$-mode frequency very well. The absolute percentage
difference $\Delta_{\rm D}$ ranges from $0.01$ to $0.1$ per cent. In particular,
we find that the Case A+lapse scheme also approximates the $g$-mode frequency
of GR reasonably well \citep{Miniutti:2002bh}. The percentage difference
$\Delta_{\rm D}$ of the $g$-mode of the Case A+lapse scheme decreases with
increasing $\Delta \epsilon/ \epsilon_{\rm {d}}$ in our model.
The existence of a possible hadron-quark phase transition in the central regions
of NSs is associated with the appearance of $g$-modes, which are extremely
important as they could signal the presence of a pure quark matter core in the
center of NSs~\citep{Orsaria:2019ftf}. Our findings suggest that
pseudo-Newtonian gravity, with much less computational effort than full GR,
can accurately describe the oscillations of relativistic NSs constructed from an
EOS with a first-order phase transition. Observations of $g$-mode frequencies
associated with a density discontinuity may thus be interpreted as a possible
hint of a first-order phase transition in the core of NSs. Lastly, our work also
provides more confidence in using pseudo-Newtonian gravity in simulations of
CCSNs, thus reducing the computational cost significantly.
\section*{Acknowledgements}
We thank Zexin Hu and Yacheng Kang for the helpful discussions. This work was
supported by the National SKA Program of China (2020SKA0120300, 2020SKA0120100),
the National Natural Science Foundation of China (11975027, 11991053), the
National Key R\&D Program of China (2017YFA0402602), the Max Planck Partner
Group Program funded by the Max Planck Society, and the High-Performance
Computing Platform of Peking University.
\section*{Data Availability}
The data underlying this paper will be shared on reasonable request to the
corresponding authors.
\bibliographystyle{mnras}
\@ifstar{\@ssection}{\@section}{\@ifstar{\@ssection}{\@section}}
\def\@section#1
\if@nobreak
\everypar{}%
\ifnum\LastMac=\Hae \addvspace{\half}\fi
\else
\addpen{\gds@cbrk}%
\addvspace{\two}%
\fi
\bgroup
\ninepoint\bf
\Raggedright
\ifAutoNumber
\global\advance\Sec \@ne
\noindent\@nohdbrk\number\Sec\hskip 1pc \uppercase{#1}\@par}
\global\SecSec=\z@
\else
\noindent\@nohdbrk\uppercase{#1}\@par}
\fi
\egroup
\nobreak
\vskip\half
\nobreak
\@noafterindent
\LastMac=\Hae\relax
}
\def\@ssection#1
\if@nobreak
\everypar{}%
\ifnum\LastMac=\Hae \addvspace{\half}\fi
\else
\addpen{\gds@cbrk}%
\addvspace{\two}%
\fi
\bgroup
\ninepoint\bf
\Raggedright
\noindent\@nohdbrk\uppercase{#1}\@par}
\egroup
\nobreak
\vskip\half
\nobreak
\@noafterindent
\LastMac=\Hae\relax
}
\def\subsection#1
\if@nobreak
\everypar{}%
\ifnum\LastMac=\Hae \addvspace{1pt plus 1pt minus .5pt}\fi
\else
\addpen{\gds@cbrk}%
\addvspace{\onehalf}%
\fi
\bgroup
\ninepoint\bf
\Raggedright
\ifAutoNumber
\global\advance\SecSec \@ne
\noindent\@nohdbrk\number\Sec.\number\SecSec \hskip 1pc\relax #1\@par}
\global\SecSecSec=\z@
\else
\noindent\@nohdbrk #1\@par}
\fi
\egroup
\nobreak
\vskip\half
\nobreak
\@noafterindent
\LastMac=\Hbe\relax
}
\def\subsubsection#1
\if@nobreak
\everypar{}%
\ifnum\LastMac=\Hbe \addvspace{1pt plus 1pt minus .5pt}\fi
\else
\addpen{\gds@cbrk}%
\addvspace{\onehalf}%
\fi
\bgroup
\ninepoint\it
\Raggedright
\ifAutoNumber
\global\advance\SecSecSec \@ne
\noindent\@nohdbrk\number\Sec.\number\SecSec.\number\SecSecSec
\hskip 1pc\relax #1\@par}
\else
\noindent\@nohdbrk #1\@par}
\fi
\egroup
\nobreak
\vskip\half
\nobreak
\@noafterindent
\LastMac=\Hce\relax
}
\def\paragraph#1
\if@nobreak
\everypar{}%
\else
\addpen{\gds@cbrk}%
\addvspace{\one}%
\fi%
\bgroup%
\ninepoint\it
\noindent #1\ \nobreak%
\egroup
\LastMac=\Hde\relax
\ignorespaces
}
\let\tx=\relax
\def\beginlist{%
\@par}\if@nobreak \else\addvspace{\half}\fi%
\bgroup%
\ninepoint
\let\item=\list@item%
}
\def\list@item{%
\@par}\noindent\hskip 1em\relax%
\ignorespaces%
}
\def\par\egroup\addvspace{\half}\@doendpe{\@par}\egroup\addvspace{\half}\@doendpe}
\def\beginrefs{%
\@par}
\bgroup
\eightpoint
\Raggedright
\let\bibitem=\bib@item
}
\def\bib@item{%
\@par}\parindent=1.5em\Hang{1.5em}{1}%
\everypar={\Hang{1.5em}{1}\ignorespaces}%
\noindent\ignorespaces
}
\def\par\egroup\@doendpe{\@par}\egroup\@doendpe}
\newtoks\CatchLine
\def\@journal{Mon.\ Not.\ R.\ Astron.\ Soc.\ }
\def\@pubyear{1994}
\def\@pagerange{000--000}
\def\@volume{000}
\def\@microfiche{} %
\def\pubyear#1{\gdef\@pubyear{#1}\@makecatchline}
\def\pagerange#1{\gdef\@pagerange{#1}\@makecatchline}
\def\volume#1{\gdef\@volume{#1}\@makecatchline}
\def\microfiche#1{\gdef\@microfiche{and Microfiche\ #1}\@makecatchline}
\def\@makecatchline{%
\global\CatchLine{%
{\rm \@journal {\bf \@volume},\ \@pagerange\ (\@pubyear)\ \@microfiche}}%
}
\@makecatchline
\newtoks\LeftHeader
\def\shortauthor#1
\global\LeftHeader{#1}%
}
\newtoks\RightHeader
\def\shorttitle#1
\global\RightHeader{#1}%
}
\def\PageHead
\begingroup
\ifsp@page
\csname ps@\sp@type\endcsname
\global\sp@pagefalse
\fi
\ifodd\pageno
\let\the@head=\@oddhead
\else
\let\the@head=\@evenhead
\fi
\vbox to \z@{\vskip-22.5\p@%
\hbox to \PageWidth{\vbox to8.5\p@{}%
\the@head
}%
\vss}%
\endgroup
\nointerlineskip
}
\def\today{%
\number\day\space
\ifcase\month\or January\or February\or March\or April\or May\or June\or
July\or August\or September\or October\or November\or December\fi
\space\number\year%
}
\def\PageFoot{}
\def\authorcomment#1{%
\gdef\PageFoot{%
\nointerlineskip%
\vbox to 22pt{\vfil%
\hbox to \PageWidth{\elevenpoint\noindent \hfil #1 \hfil}}%
}%
}
\newif\ifplate@page
\newbox\plt@box
\def\beginplatepage{%
\let\plate=\plate@head
\let\caption=\fig@caption
\global\setbox\plt@box=\vbox\bgroup
\TEMPDIMEN=\PageWidth
\hsize=\PageWidth\relax
}
\def\par\egroup\global\plate@pagetrue{\@par}\egroup\global\plate@pagetrue}
\def\plate@head#1{\gdef\plt@cap{#1}}
\def\letters{%
\gdef\folio{\ifnum\pageno<\z@ L\romannumeral-\pageno
\else L\number\pageno \fi}%
}
\everydisplay{\displaysetup}
\newif\ifeqno
\newif\ifleqno
\def\displaysetup#1$${%
\displaytest#1\eqno\eqno\displaytest
}
\def\displaytest#1\eqno#2\eqno#3\displaytest{%
\if!#3!\ldisplaytest#1\leqno\leqno\ldisplaytest
\else\eqnotrue\leqnofalse\def#2}\fi{#2}\def\eq{#1}\fi
\generaldisplay$$}
\def\ldisplaytest#1\leqno#2\leqno#3\ldisplaytest{%
\def\eq{#1}%
\if!#3!\eqnofalse\else\eqnotrue\leqnotrue
\def#2}\fi{#2}\fi}
\def\generaldisplay{%
\ifeqno \ifleqno
\hbox to \hsize{\noindent
$\displaystyle\eq$\hfil$\displaystyle#2}\fi$}
\else
\hbox to \hsize{\noindent
$\displaystyle\eq$\hfil$\displaystyle#2}\fi$}
\fi
\else
\hbox to \hsize{\vbox{\noindent
$\displaystyle\eq$\hfil}}
\fi
}
\def\@notice{%
\@par}\addvspace{\two}%
\noindent{\b@ls{11pt}\ninerm This paper has been produced using the
Blackwell Scientific Publications \TeX\ macros.\@par}}%
}
\outer\def\@notice\par\vfill\supereject\end{\@notice\@par}\vfill\supereject\end}
\def\start@mess{%
Monthly notices of the RAS journal style (\@typeface)\space
v\@version,\space \@verdate.%
}
\everyjob{\Warn{\start@mess}}
\newif\if@debug \@debugfalse
\def\Print#1{\if@debug\immediate\write16{#1}\else \fi}
\def\Warn#1{\immediate\write16{#1}}
\def\immediate\write\m@ne#1{}
\newcount\Iteration
\def\Single{0} \def\Double{1}
\def\Figure{0} \def\Table{1}
\def\InStack{0}
\def1{1}
\def2{2}
\def3{3}
\newcount\TEMPCOUNT
\newdimen\TEMPDIMEN
\newbox\TEMPBOX
\newbox\VOIDBOX
\newcount\LengthOfStack
\newcount\MaxItems
\newcount\StackPointer
\newcount\Point
\newcount\NextFigure
\newcount\NextTable
\newcount\NextItem
\newcount\StatusStack
\newcount\NumStack
\newcount\TypeStack
\newcount\SpanStack
\newcount\BoxStack
\newcount\ItemSTATUS
\newcount\ItemNUMBER
\newcount\ItemTYPE
\newcount\ItemSPAN
\newbox\ItemBOX
\newdimen\ItemSIZE
\newdimen\PageHeight
\newdimen\TextLeading
\newdimen\Feathering
\newcount\LinesPerPage
\newdimen\ColumnWidth
\newdimen\ColumnGap
\newdimen\PageWidth
\newdimen\BodgeHeight
\newcount\Leading
\newdimen\ZoneBSize
\newdimen\TextSize
\newbox\ZoneABOX
\newbox\ZoneBBOX
\newbox\ZoneCBOX
\newif\ifFirstSingleItem
\newif\ifFirstZoneA
\newif\ifMakePageInComplete
\newif\ifMoreFigures \MoreFiguresfalse
\newif\ifMoreTables \MoreTablesfalse
\newif\ifFigInZoneB
\newif\ifFigInZoneC
\newif\ifTabInZoneB
\newif\ifTabInZoneC
\newif\ifZoneAFullPage
\newbox\MidBOX
\newbox\LeftBOX
\newbox\RightBOX
\newbox\PageBOX
\newif\ifLeftCOL
\LeftCOLtrue
\newdimen\ZoneBAdjust
\newcount\ItemFits
\def1{1}
\def2{2}
\def\LineAdjust#1{\global\ZoneBAdjust=#1\TextLeading\relax}
\MaxItems=15
\NextFigure=\z@
\NextTable=\@ne
\BodgeHeight=6pt
\TextLeading=11pt
\Leading=11
\Feathering=\z@
\LinesPerPage=61
\topskip=\TextLeading
\ColumnWidth=20pc
\ColumnGap=2pc
\newskip\ItemSepamount
\ItemSepamount=\TextLeading plus \TextLeading minus 4pt
\parskip=\z@ plus .1pt
\parindent=18pt
\widowpenalty=\z@
\clubpenalty=10000
\tolerance=1500
\hbadness=1500
\abovedisplayskip=6pt plus 2pt minus 2pt
\belowdisplayskip=6pt plus 2pt minus 2pt
\abovedisplayshortskip=6pt plus 2pt minus 2pt
\belowdisplayshortskip=6pt plus 2pt minus 2pt
\ninepoint
\PageHeight=682pt
\PageWidth=2\ColumnWidth
\advance\PageWidth by \ColumnGap
\pagestyle{headings}
\newcount\DUMMY \StatusStack=\allocationnumber
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \NumStack=\allocationnumber
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \TypeStack=\allocationnumber
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \SpanStack=\allocationnumber
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newcount\DUMMY \newcount\DUMMY \newcount\DUMMY
\newbox\DUMMY \BoxStack=\allocationnumber
\newbox\DUMMY \newbox\DUMMY \newbox\DUMMY
\newbox\DUMMY \newbox\DUMMY \newbox\DUMMY
\newbox\DUMMY \newbox\DUMMY \newbox\DUMMY
\newbox\DUMMY \newbox\DUMMY \newbox\DUMMY
\newbox\DUMMY \newbox\DUMMY \newbox\DUMMY
\def\immediate\write\m@ne{\immediate\write\m@ne}
\def\GetItemAll#1{%
\GetItemSTATUS{#1}
\GetItemNUMBER{#1}
\GetItemTYPE{#1}
\GetItemSPAN{#1}
\GetItemBOX{#1}
}
\def\GetItemSTATUS#1{%
\Point=\StatusStack
\advance\Point by #1
\global\ItemSTATUS=\count\Point
}
\def\GetItemNUMBER#1{%
\Point=\NumStack
\advance\Point by #1
\global\ItemNUMBER=\count\Point
}
\def\GetItemTYPE#1{%
\Point=\TypeStack
\advance\Point by #1
\global\ItemTYPE=\count\Point
}
\def\GetItemSPAN#1{%
\Point\SpanStack
\advance\Point by #1
\global\ItemSPAN=\count\Point
}
\def\GetItemBOX#1{%
\Point=\BoxStack
\advance\Point by #1
\global\setbox\ItemBOX=\vbox{\copy\Point}
\global\ItemSIZE=\ht\ItemBOX
\global\advance\ItemSIZE by \dp\ItemBOX
\TEMPCOUNT=\ItemSIZE
\divide\TEMPCOUNT by \Leading
\divide\TEMPCOUNT by 65536
\advance\TEMPCOUNT \@ne
\ItemSIZE=\TEMPCOUNT pt
\global\multiply\ItemSIZE by \Leading
}
\def\JoinStack{%
\ifnum\LengthOfStack=\MaxItems
\Warn{WARNING: Stack is full...some items will be lost!}
\else
\Point=\StatusStack
\advance\Point by \LengthOfStack
\global\count\Point=\ItemSTATUS
\Point=\NumStack
\advance\Point by \LengthOfStack
\global\count\Point=\ItemNUMBER
\Point=\TypeStack
\advance\Point by \LengthOfStack
\global\count\Point=\ItemTYPE
\Point\SpanStack
\advance\Point by \LengthOfStack
\global\count\Point=\ItemSPAN
\Point=\BoxStack
\advance\Point by \LengthOfStack
\global\setbox\Point=\vbox{\copy\ItemBOX}
\global\advance\LengthOfStack \@ne
\ifnum\ItemTYPE=\Figure
\global\MoreFigurestrue
\else
\global\MoreTablestrue
\fi
\fi
}
\def\LeaveStack#1{%
{\Iteration=#1
\loop
\ifnum\Iteration<\LengthOfStack
\advance\Iteration \@ne
\GetItemSTATUS{\Iteration}
\advance\Point by \m@ne
\global\count\Point=\ItemSTATUS
\GetItemNUMBER{\Iteration}
\advance\Point by \m@ne
\global\count\Point=\ItemNUMBER
\GetItemTYPE{\Iteration}
\advance\Point by \m@ne
\global\count\Point=\ItemTYPE
\GetItemSPAN{\Iteration}
\advance\Point by \m@ne
\global\count\Point=\ItemSPAN
\GetItemBOX{\Iteration}
\advance\Point by \m@ne
\global\setbox\Point=\vbox{\copy\ItemBOX}
\repeat}
\global\advance\LengthOfStack by \m@ne
}
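The stack maintained by \JoinStack, \LeaveStack and \FindItem is a set of parallel count/box register arrays addressed from fixed base offsets; \LeaveStack deletes an entry by shifting all later entries down one slot. The same behaviour can be sketched in Python (hypothetical names; a list of dicts stands in for the registers):

```python
IN_STACK = 1          # the \InStack status code
MAX_ITEMS = 15        # stand-in for \MaxItems

class FloatStack:
    def __init__(self):
        self.items = []

    def join(self, status, number, typ, span, box):
        # \JoinStack: append at \LengthOfStack unless the stack is full
        if len(self.items) == MAX_ITEMS:
            print("WARNING: Stack is full...some items will be lost!")
            return
        self.items.append(dict(status=status, number=number,
                               type=typ, span=span, box=box))

    def leave(self, i):
        # \LeaveStack{i}: shift later entries down one slot
        del self.items[i]

    def find(self, typ, number):
        # \FindItem: index of the first in-stack entry of this type/number, else -1
        for i, it in enumerate(self.items):
            if (it["status"] == IN_STACK and it["type"] == typ
                    and it["number"] == number):
                return i
        return -1
```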
\newif\ifStackNotClean
\def\CleanStack{%
\StackNotCleantrue
{\Iteration=\z@
\loop
\ifStackNotClean
\GetItemSTATUS{\Iteration}
\ifnum\ItemSTATUS=\InStack
\advance\Iteration \@ne
\else
\LeaveStack{\Iteration}
\fi
\ifnum\LengthOfStack<\Iteration
\StackNotCleanfalse
\fi
\repeat}
}
\def\FindItem#1#2{%
\global\StackPointer=\m@ne
{\Iteration=\z@
\loop
\ifnum\Iteration<\LengthOfStack
\GetItemSTATUS{\Iteration}
\ifnum\ItemSTATUS=\InStack
\GetItemTYPE{\Iteration}
\ifnum\ItemTYPE=#1
\GetItemNUMBER{\Iteration}
\ifnum\ItemNUMBER=#2
\global\StackPointer=\Iteration
\Iteration=\LengthOfStack
\fi
\fi
\fi
\advance\Iteration \@ne
\repeat}
}
\def\FindNext{%
\global\StackPointer=\m@ne
{\Iteration=\z@
\loop
\ifnum\Iteration<\LengthOfStack
\GetItemSTATUS{\Iteration}
\ifnum\ItemSTATUS=\InStack
\GetItemTYPE{\Iteration}
\ifnum\ItemTYPE=\Figure
\ifMoreFigures
\global\NextItem=\Figure
\global\StackPointer=\Iteration
\Iteration=\LengthOfStack
\fi
\fi
\ifnum\ItemTYPE=\Table
\ifMoreTables
\global\NextItem=\Table
\global\StackPointer=\Iteration
\Iteration=\LengthOfStack
\fi
\fi
\fi
\advance\Iteration \@ne
\repeat}
}
\def\ChangeStatus#1#2{%
\Point=\StatusStack
\advance\Point by #1
\global\count\Point=#2
}
\def\InZoneB{1}
\ZoneBAdjust=\z@
\def\MakePage{%
\global\ZoneBSize=\PageHeight
\global\TextSize=\ZoneBSize
\global\ZoneAFullPagefalse
\global\topskip=\TextLeading
\MakePageInCompletetrue
\MoreFigurestrue
\MoreTablestrue
\FigInZoneBfalse
\FigInZoneCfalse
\TabInZoneBfalse
\TabInZoneCfalse
\global\FirstSingleItemtrue
\global\FirstZoneAtrue
\global\setbox\ZoneABOX=\box\VOIDBOX
\global\setbox\ZoneBBOX=\box\VOIDBOX
\global\setbox\ZoneCBOX=\box\VOIDBOX
\loop
\ifMakePageInComplete
\FindNext
\ifnum\StackPointer=\m@ne
\NextItem=\m@ne
\MoreFiguresfalse
\MoreTablesfalse
\fi
\ifnum\NextItem=\Figure
\FindItem{\Figure}{\NextFigure}
\ifnum\StackPointer=\m@ne \global\MoreFiguresfalse
\else
\GetItemSPAN{\StackPointer}
\ifnum\ItemSPAN=\Single \def\InZoneB{2}\relax
\ifFigInZoneC \global\MoreFiguresfalse\fi
\else
\def\InZoneB{1}
\ifFigInZoneB \def\InZoneB{3}\fi
\fi
\fi
\ifMoreFigures\Print{}\FigureItems\fi
\fi
\ifnum\NextItem=\Table
\FindItem{\Table}{\NextTable}
\ifnum\StackPointer=\m@ne \global\MoreTablesfalse
\else
\GetItemSPAN{\StackPointer}
\ifnum\ItemSPAN=\Single\relax
\ifTabInZoneC \global\MoreTablesfalse\fi
\else
\def\InZoneB{1}
\ifTabInZoneB \def\InZoneB{3}\fi
\fi
\fi
\ifMoreTables\Print{}\TableItems\fi
\fi
\MakePageInCompletefalse
\ifMoreFigures\MakePageInCompletetrue\fi
\ifMoreTables\MakePageInCompletetrue\fi
\repeat
\ifZoneAFullPage
\global\TextSize=\z@
\global\ZoneBSize=\z@
\global\vsize=\z@\relax
\global\topskip=\z@\relax
\vbox to \z@{\vss}
\eject
\else
\global\advance\ZoneBSize by -\ZoneBAdjust
\global\vsize=\ZoneBSize
\global\hsize=\ColumnWidth
\global\ZoneBAdjust=\z@
\ifdim\TextSize<23pt
\Warn{}
\Warn{* Making column fall short: TextSize=\the\TextSize *}
\vskip-\lastskip\eject\fi
\fi
}
\def\MakeRightCol{%
\global\TextSize=\ZoneBSize
\MakePageInCompletetrue
\MoreFigurestrue
\MoreTablestrue
\global\FirstSingleItemtrue
\global\setbox\ZoneBBOX=\box\VOIDBOX
\def\InZoneB{2}
\loop
\ifMakePageInComplete
\FindNext
\ifnum\StackPointer=\m@ne
\NextItem=\m@ne
\MoreFiguresfalse
\MoreTablesfalse
\fi
\ifnum\NextItem=\Figure
\FindItem{\Figure}{\NextFigure}
\ifnum\StackPointer=\m@ne \MoreFiguresfalse
\else
\GetItemSPAN{\StackPointer}
\ifnum\ItemSPAN=\Double\relax
\MoreFiguresfalse\fi
\fi
\ifMoreFigures\Print{}\FigureItems\fi
\fi
\ifnum\NextItem=\Table
\FindItem{\Table}{\NextTable}
\ifnum\StackPointer=\m@ne \MoreTablesfalse
\else
\GetItemSPAN{\StackPointer}
\ifnum\ItemSPAN=\Double\relax
\MoreTablesfalse\fi
\fi
\ifMoreTables\Print{}\TableItems\fi
\fi
\MakePageInCompletefalse
\ifMoreFigures\MakePageInCompletetrue\fi
\ifMoreTables\MakePageInCompletetrue\fi
\repeat
\ifZoneAFullPage
\global\TextSize=\z@
\global\ZoneBSize=\z@
\global\vsize=\z@\relax
\global\topskip=\z@\relax
\vbox to \z@{\vss}
\eject
\else
\global\vsize=\ZoneBSize
\global\hsize=\ColumnWidth
\ifdim\TextSize<23pt
\Warn{}
\Warn{* Making column fall short: TextSize=\the\TextSize *}
\vskip-\lastskip\eject\fi
\fi
}
\def\FigureItems{%
\Print{Considering...}
\ShowItem{\StackPointer}
\GetItemBOX{\StackPointer}
\GetItemSPAN{\StackPointer}
\CheckFitInZone
\ifnum\ItemFits=1
\ifnum\ItemSPAN=\Single
\ChangeStatus{\StackPointer}{2}
\global\FigInZoneBtrue
\ifFirstSingleItem
\hbox{}\vskip-\BodgeHeight
\global\advance\ItemSIZE by \TextLeading
\fi
\unvbox\ItemBOX\vskip\ItemSepamount\relax
\global\FirstSingleItemfalse
\global\advance\TextSize by -\ItemSIZE
\global\advance\TextSize by -\TextLeading
\else
\ifFirstZoneA
\global\advance\ItemSIZE by \TextLeading
\global\FirstZoneAfalse\fi
\global\advance\TextSize by -\ItemSIZE
\global\advance\TextSize by -\TextLeading
\global\advance\ZoneBSize by -\ItemSIZE
\global\advance\ZoneBSize by -\TextLeading
\ifFigInZoneB\relax
\else
\ifdim\TextSize<3\TextLeading
\global\ZoneAFullPagetrue
\fi
\fi
\ChangeStatus{\StackPointer}{\InZoneB}
\ifnum\InZoneB=3 \global\FigInZoneCtrue\fi
\fi
\Print{TextSize=\the\TextSize}
\Print{ZoneBSize=\the\ZoneBSize}
\global\advance\NextFigure \@ne
\Print{This figure has been placed.}
\else
\Print{No space available for this figure...holding over.}
\Print{}
\global\MoreFiguresfalse
\fi
}
\def\TableItems{%
\Print{Considering...}
\ShowItem{\StackPointer}
\GetItemBOX{\StackPointer}
\GetItemSPAN{\StackPointer}
\CheckFitInZone
\ifnum\ItemFits=1
\ifnum\ItemSPAN=\Single
\ChangeStatus{\StackPointer}{2}
\global\TabInZoneBtrue
\ifFirstSingleItem
\hbox{}\vskip-\BodgeHeight
\global\advance\ItemSIZE by \TextLeading
\fi
\unvbox\ItemBOX\vskip\ItemSepamount\relax
\global\FirstSingleItemfalse
\global\advance\TextSize by -\ItemSIZE
\global\advance\TextSize by -\TextLeading
\else
\ifFirstZoneA
\global\advance\ItemSIZE by \TextLeading
\global\FirstZoneAfalse\fi
\global\advance\TextSize by -\ItemSIZE
\global\advance\TextSize by -\TextLeading
\global\advance\ZoneBSize by -\ItemSIZE
\global\advance\ZoneBSize by -\TextLeading
\ifFigInZoneB\relax
\else
\ifdim\TextSize<3\TextLeading
\global\ZoneAFullPagetrue
\fi
\fi
\ChangeStatus{\StackPointer}{\InZoneB}
\ifnum\InZoneB=3 \global\TabInZoneCtrue\fi
\fi
\global\advance\NextTable \@ne
\Print{This table has been placed.}
\else
\Print{No space available for this table...holding over.}
\Print{}
\global\MoreTablesfalse
\fi
}
\def\CheckFitInZone{%
{\advance\TextSize by -\ItemSIZE
\advance\TextSize by -\TextLeading
\ifFirstSingleItem
\advance\TextSize by \TextLeading
\fi
\ifnum\InZoneB=1\relax
\else \advance\TextSize by -\ZoneBAdjust
\fi
\ifdim\TextSize<3\TextLeading \global\ItemFits=2
\else \global\ItemFits=1\fi}
}
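\CheckFitInZone charges the item's rounded size plus one line of leading against the remaining text size (refunding the leading for the first single-column item, and subtracting \ZoneBAdjust outside zone B), and declares a fit only if at least three lines of text survive. A sketch of that test (illustrative Python, not part of the macros):

```python
def item_fits(text_size, item_size, leading,
              first_single=False, in_zone_b=1, zone_b_adjust=0):
    # all sizes in the same units (e.g. points)
    remaining = text_size - item_size - leading
    if first_single:
        remaining += leading          # \ifFirstSingleItem: refund one leading
    if in_zone_b != 1:
        remaining -= zone_b_adjust    # \ZoneBAdjust is charged outside zone B
    # \ifdim\TextSize<3\TextLeading sets \ItemFits=2 (does not fit)
    return remaining >= 3 * leading
```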
\def\BeginOpening{%
\thispagestyle{titlepage}%
\global\setbox\ItemBOX=\vbox\bgroup%
\hsize=\PageWidth%
\hrule height \z@
\ifsinglecol\vskip 6pt\fi
}
\let\begintopmatter=\BeginOpening
\def\EndOpening{%
\On
\egroup
\ifsinglecol
\box\ItemBOX%
\vskip\TextLeading plus 2\TextLeading
\@noafterindent
\else
\ItemNUMBER=\z@%
\ItemTYPE=\Figure
\ItemSPAN=\Double
\ItemSTATUS=\InStack
\JoinStack
\fi
}
\newif\if@here \@herefalse
\def\no@float{\global\@heretrue}
\let\nofloat=\relax
\def\beginfigure{%
\@ifstar{\global\@dfloattrue \@bfigure}{\global\@dfloatfalse \@bfigure}%
}
\def\@bfigure#1{%
\par
\if@dfloat
\ItemSPAN=\Double
\TEMPDIMEN=\PageWidth
\else
\ItemSPAN=\Single
\TEMPDIMEN=\ColumnWidth
\fi
\ifsinglecol
\TEMPDIMEN=\PageWidth
\else
\ItemSTATUS=\InStack
\ItemNUMBER=#1%
\ItemTYPE=\Figure
\fi
\bgroup
\hsize=\TEMPDIMEN
\global\setbox\ItemBOX=\vbox\bgroup
\eightpoint\nostb@ls{10pt}%
\let\caption=\fig@caption
\ifsinglecol \let\nofloat=\no@float\fi
}
\def\fig@caption#1{%
\vskip 5.5pt plus 6pt%
\bgroup
\eightpoint\nostb@ls{10pt}%
\setbox\TEMPBOX=\hbox{#1}%
\ifdim\wd\TEMPBOX>\TEMPDIMEN
\noindent \unhbox\TEMPBOX\par
\else
\hbox to \hsize{\hfil\unhbox\TEMPBOX\hfil}%
\fi
\egroup
}
\def\endfigure{%
\par\egroup
\egroup
\ifsinglecol
\if@here \midinsert\global\@herefalse\else \topinsert\fi
\unvbox\ItemBOX
\endinsert
\else
\JoinStack
\Print{Processing source for figure \the\ItemNUMBER}%
\fi
}
\newbox\tab@cap@box
\def\tab@caption#1{\global\setbox\tab@cap@box=\hbox{#1\par}}
\newtoks\tab@txt@toks
\long\def\tab@txt#1{\global\tab@txt@toks={#1}\global\table@txttrue}
\newif\iftable@txt \table@txtfalse
\newif\if@dfloat \@dfloatfalse
\def\begintable{%
\@ifstar{\global\@dfloattrue \@btable}{\global\@dfloatfalse \@btable}%
}
\def\@btable#1{%
\par
\if@dfloat
\ItemSPAN=\Double
\TEMPDIMEN=\PageWidth
\else
\ItemSPAN=\Single
\TEMPDIMEN=\ColumnWidth
\fi
\ifsinglecol
\TEMPDIMEN=\PageWidth
\else
\ItemSTATUS=\InStack
\ItemNUMBER=#1%
\ItemTYPE=\Table
\fi
\bgroup
\eightpoint\nostb@ls{10pt}%
\global\setbox\ItemBOX=\vbox\bgroup
\let\caption=\tab@caption
\let\tabletext=\tab@txt
\ifsinglecol \let\nofloat=\no@float\fi
}
\def\endtable{%
\par\egroup
\egroup
\setbox\TEMPBOX=\hbox to \TEMPDIMEN{%
\hss
\vbox{%
\hsize=\wd\ItemBOX
\ifvoid\tab@cap@box
\else
\noindent\unhbox\tab@cap@box
\vskip 5.5pt plus 6pt%
\fi
\box\ItemBOX
\iftable@txt
\vskip 10pt%
\eightpoint\nostb@ls{10pt}%
\noindent\the\tab@txt@toks
\global\table@txtfalse
\fi
}%
\hss
}%
\ifsinglecol
\if@here \midinsert\global\@herefalse\else \topinsert\fi
\box\TEMPBOX
\endinsert
\else
\global\setbox\ItemBOX=\box\TEMPBOX
\JoinStack
\Print{Processing source for table \the\ItemNUMBER}%
\fi
}
\def\UnloadZoneA{%
\FirstZoneAtrue
\Iteration=\z@
\loop
\ifnum\Iteration<\LengthOfStack
\GetItemSTATUS{\Iteration}
\ifnum\ItemSTATUS=1
\GetItemBOX{\Iteration}
\ifFirstZoneA \vbox to \BodgeHeight{\vfil}%
\FirstZoneAfalse\fi
\unvbox\ItemBOX\vskip\ItemSepamount\relax
\LeaveStack{\Iteration}
\else
\advance\Iteration \@ne
\fi
\repeat
}
\def\UnloadZoneC{%
\Iteration=\z@
\loop
\ifnum\Iteration<\LengthOfStack
\GetItemSTATUS{\Iteration}
\ifnum\ItemSTATUS=3
\GetItemBOX{\Iteration}
\vskip\ItemSepamount\relax\unvbox\ItemBOX
\LeaveStack{\Iteration}
\else
\advance\Iteration \@ne
\fi
\repeat
}
\def\ShowItem#1{%
{\GetItemAll{#1}
\Print{\the#1:
{TYPE=\ifnum\ItemTYPE=\Figure Figure\else Table\fi}
{NUMBER=\the\ItemNUMBER}
{SPAN=\ifnum\ItemSPAN=\Single Single\else Double\fi}
{SIZE=\the\ItemSIZE}}}
}
\def\ShowStack{%
\Print{}
\Print{LengthOfStack = \the\LengthOfStack}
\ifnum\LengthOfStack=\z@ \Print{Stack is empty}\fi
\Iteration=\z@
\loop
\ifnum\Iteration<\LengthOfStack
\ShowItem{\Iteration}
\advance\Iteration \@ne
\repeat
}
\def\B#1#2{%
\hbox{\vrule\kern-0.4pt\vbox to #2{%
\hrule width #1\vfill\hrule}\kern-0.4pt\vrule}
}
\newif\ifsinglecol \singlecolfalse
\def\onecolumn{%
\global\output={\singlecoloutput}%
\global\hsize=\PageWidth
\global\vsize=\PageHeight
\global\ColumnWidth=\hsize
\global\TextLeading=12pt
\global\Leading=12
\global\singlecoltrue
\global\let\onecolumn=\relax
\global\let\footnote=\sing@footnote
\global\let\vfootnote=\sing@vfootnote
\ninepoint
\message{(Single column)}%
}
\def\singlecoloutput{%
\shipout\vbox{\PageHead\pagebody\PageFoot}%
\advancepageno
\ifplate@page
\shipout\vbox{%
\sp@pagetrue
\def\sp@type{plate}%
\global\plate@pagefalse
\PageHead\vbox to \PageHeight{\unvbox\plt@box\vfil}\PageFoot%
}%
\message{[plate]}%
\advancepageno
\fi
\ifnum\outputpenalty>-\@MM \else\dosupereject\fi%
}
\def\ItemSep{\vskip\ItemSepamount\relax}
\def\ItemSepbreak{\par\ifdim\lastskip<\ItemSepamount
\removelastskip\penalty-200\vskip\ItemSepamount\relax\fi%
}
\let\@@endinsert=\endinsert
\def\endinsert{\egroup
\if@mid \dimen@\ht\z@ \advance\dimen@\dp\z@ \advance\dimen@12\p@
\advance\dimen@\pagetotal \advance\dimen@-\pageshrink
\ifdim\dimen@>\pagegoal\@midfalse\p@gefalse\fi\fi
\if@mid \vskip\ItemSepamount\relax\box\z@\ItemSepbreak
\else\insert\topins{\penalty100
\splittopskip\z@skip
\splitmaxdepth\maxdimen \floatingpenalty\z@
\ifp@ge \dimen@\dp\z@
\vbox to\vsize{\unvbox\z@\kern-\dimen@}%
\else \box\z@\nobreak\vskip\ItemSepamount\relax\fi}\fi\endgroup%
}
\def\gobbleone#1{}
\def\gobbletwo#1#2{}
\let\footnote=\gobbletwo
\let\vfootnote=\gobbleone
\def\sing@footnote#1{\let\@sf\empty
\ifhmode\edef\@sf{\spacefactor\the\spacefactor}\/\fi
\hbox{$^{\hbox{\eightpoint #1}}$}\@sf\sing@vfootnote{#1}%
}
\def\sing@vfootnote#1{\insert\footins\bgroup\eightpoint\b@ls{9pt}%
\interlinepenalty\interfootnotelinepenalty
\splittopskip\ht\strutbox
\splitmaxdepth\dp\strutbox \floatingpenalty\@MM
\leftskip\z@skip \rightskip\z@skip \spaceskip\z@skip \xspaceskip\z@skip
\noindent $^{\scriptstyle\hbox{#1}}$\hskip 4pt%
\footstrut\futurelet\next\fo@t%
}
\def\footnoterule{\kern-3\p@ \hrule height \z@ \kern 3\p@}
\skip\footins=19.5pt plus 12pt minus 1pt
\count\footins=1000
\dimen\footins=\maxdimen
\def\landscape{%
\global\TEMPDIMEN=\PageWidth
\global\PageWidth=\PageHeight
\global\PageHeight=\TEMPDIMEN
\global\let\landscape=\relax
\onecolumn
\message{(landscape)}%
\raggedbottom
}
\output{%
\ifLeftCOL
\global\setbox\LeftBOX=\vbox to \ZoneBSize{\box255\unvbox\ZoneBBOX}%
\global\LeftCOLfalse
\MakeRightCol
\else
\setbox\RightBOX=\vbox to \ZoneBSize{\box255\unvbox\ZoneBBOX}%
\setbox\MidBOX=\hbox{\box\LeftBOX\hskip\ColumnGap\box\RightBOX}%
\setbox\PageBOX=\vbox to \PageHeight{%
\UnloadZoneA\box\MidBOX\UnloadZoneC}%
\shipout\vbox{\PageHead\box\PageBOX\PageFoot}%
\advancepageno
\ifplate@page
\shipout\vbox{%
\sp@pagetrue
\def\sp@type{plate}%
\global\plate@pagefalse
\PageHead\vbox to \PageHeight{\unvbox\plt@box\vfil}\PageFoot%
}%
\message{[plate]}%
\advancepageno
\fi
\global\LeftCOLtrue
\CleanStack
\MakePage
\fi
}
\Warn{\start@mess}
\def\mnmacrosloaded{}
\catcode `\@=12
\section{Introduction}
An impressively large amount of observations of faint field galaxies has been
collected over the last fifteen years.
The determination of galaxy number counts, colour and redshift $(z)$
distributions
of increasingly deep samples is now part of the observational routine of
several established groups.
Since the review paper by Koo \& Kron (1992), galaxy photometric surveys have
been extended to $b_j = 27.5$ (Metcalfe et al. 1995a) and
to $K < 24 $ (Gardner, Cowie, \& Wainscoat 1993; Soifer et al. 1994;
Djorgovski et al. 1995), and spectroscopic surveys
to $K < 20 $ (Songaila et al. 1994), $I<22$
(Crampton et al. 1995) and to $B<24$ (Glazebrook et al. 1995a).
These data provide useful constraints for both cosmological models and
galaxy evolutionary models.
As soon as deep optical counts of galaxies became available, it was realized
that no-evolution (nE) models,
i.e. models in which the absolute brightness and the spectra of galaxies
do not change in time, predict a surface density of galaxies
at faint magnitude significantly lower than the observations
(see the review by Koo \& Kron 1992).
The excess in the observed number counts with respect to the nE model
predictions is of the order of $\sim$ 4 to 5 at $b_j \sim$ 24 and $\sim$
5 to 10 at $b_j \sim$ 26 (Maddox et al. 1990;
Guiderdoni \& Rocca-Volmerange 1991, hereafter GRV91).
Simple pure luminosity evolution (PLE) models, which allow for brightness
and spectral evolution in the galaxy population, were shown to provide a
better fit to the faint galaxy number
counts (Tinsley 1980; Bruzual \& Kron 1980 (hereafter BK80); Koo 1981,1985).
However, these early PLE models were ruled out
on the basis of comparisons with the results of $z$ surveys of faint galaxies
(Broadhurst, Ellis, \& Shanks 1988; Colless et al. 1990; Koo \& Kron 1992 and
references therein), which failed to reveal the large number of high $z$
galaxies predicted by the models.
The $z$ distributions in these surveys appeared to be in agreement
with the predictions of nE models.
This apparent paradox (the counts require a PLE model, but the $z$ distributions
admit only the nE model) prompted the development of a number of less
straightforward models that attempt to explain all aspects of the data with a
single model.
These models relax some of the assumptions adopted in nE and PLE models,
such as, the constancy of the volume number density of galaxies,
and/or the standard cosmology.
The excess observed in the faint counts can be explained, for example,
by either non-conservation of the number of galaxies due to
merger events (GRV91;
Broadhurst, Ellis, \& Glazebrook 1992; Carlberg \& Charlot 1992) or
dwarf galaxies which have faded and/or disappeared in recent epochs
(Broadhurst et al. 1988; Cowie, Songaila \& Hu 1991; Babul \& Rees 1992).
Fukugita et al. (1990) have proposed to revise the cosmological model
by introducing a non-zero cosmological constant.
More recent and refined versions of PLE models have shown that
at least some of the discrepancies with existing data
can also be substantially reduced in the framework of these models
(Guiderdoni \& Rocca-Volmerange 1990 (hereafter GRV90);
Gronwall \& Koo 1995 (hereafter GK95);
Metcalfe et al. 1991,1995a).
The main goal of this paper is to explore to what extent the properties
of the faint galaxy samples can be understood in the framework of PLE models,
without invoking the more exotic scenarios proposed in the literature.
We examine a new set of PLE models and compare their predictions
with the most recent data, including counts in all available photometric
bands, and colours and $z$ distributions as a function of magnitude for various
samples, selected in different bands.
A global comparison of this kind is essential because each different
data set provides constraints on different PLE model parameters.
We find that standard PLE models built along the lines described by BK80,
with mild galaxy evolution similar to that {\it assumed} by Metcalfe
et al. (1991, 1995a) but {\it derived} here consistently from the
Bruzual \& Charlot (1993, hereafter BC93) spectral evolution models,
provide satisfactory fits to most of the observational data
in an $\Omega \sim 0$ universe.
The only data set which our models do not reproduce well is the $z$
distribution of the $K$-selected sample for $K > 17$ by Songaila et al. (1994).
Despite this failure, and before new samples either
confirm or modify the Songaila et al. results, we think that it is useful
to compare the faint galaxy surveys with the predictions of simple PLE models
in order to constrain the range of evolutionary and cosmological parameters
used in these models.
In \S 2 we describe the main ingredients of our PLE model,
i.e. the shape and normalization of the luminosity function of galaxies
of various morphological types and the adopted models for their
spectral evolution.
In \S 3 we compare the predictions of our models with the observational data.
The conclusions of our work are presented in \S 4.
\section{Standard PLE models}
Standard PLE models (BK80; Metcalfe et al. 1991) assume that galaxies
maintain at every $z$ the same proportion of the various morphological
types as observed locally ($z = 0$),
and that the spectral evolution of these galaxies is well described by models
which at low $z$ reproduce the colours and $k$-corrections determined from
observed galaxy spectra.
Assuming a geometry for the universe, and scaling the galaxy luminosity
function (LF) to match the number counts at a given magnitude in a specific
band, one can compute the expected galaxy number counts in various bands,
as well as the colour and $z$ distributions as a function of magnitude.
\subsection{Modelling faint galaxy counts and colour and $z$ distributions}
In a homogeneous and isotropic universe, the number of galaxies of each type
brighter than a given magnitude $m$ can be calculated from the integral
$$N_i(<m)=\int_0^{z_{max}}\int_{M_{min}}^{M_{max}(m,z)}\phi_i(M) dM
{dV(z,\Omega)\over{dz}} dz,\eqno(1)$$
where $\phi_i(M)$ and $dV(z,\Omega)$ are, respectively, the local LF
for the type and the comoving volume element.
In this paper we will consider only the standard ($\Lambda=0$) Friedmann
cosmology defined by $H_0$ and $\Omega$.
We use $H_0 = 50$ km s$^{-1}$ Mpc$^{-1}$ throughout this paper.
The total number of galaxies brighter than $m$, $N(<m)$, is obtained by
adding $N_i(<m)$ over all types considered. From (1) we obtain
the $z$-distribution
$N_i(<m;z,z+dz)$ by integrating over the specific $z$ range ($z,z+dz$).
In (1) $z_{max}=\min(z_f,z_L)$, where $z_f$ is the assumed $z$ of galaxy
formation and $z_L$ is the value of $z$ at which the Lyman continuum break
is shifted to the effective wavelength of the filter being considered, and
the galaxy presumably becomes dark (Madau 1995).
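Equation (1) lends itself to direct numerical evaluation. The sketch below is illustrative Python, not the authors' code; `phi`, `dV_dz` and `M_max` are placeholder callables for the type's local LF, the comoving volume element and the magnitude limit of equation (2), and the integration uses a simple midpoint rule:

```python
def number_counts(m, phi, dV_dz, M_max, M_min=-24.0, z_max=4.5, nz=200, nM=200):
    """N_i(<m) of equation (1), by the midpoint rule in both z and M."""
    total = 0.0
    dz = z_max / nz
    for i in range(nz):
        z = (i + 0.5) * dz                     # midpoint in redshift
        hi = M_max(m, z)                       # faintest absolute mag visible
        if hi <= M_min:
            continue
        dM = (hi - M_min) / nM
        inner = sum(phi(M_min + (j + 0.5) * dM) for j in range(nM)) * dM
        total += inner * dV_dz(z) * dz
    return total
```

Summing `number_counts` over the morphological types gives $N(<m)$; restricting the $z$ loop to a sub-range gives the $z$ distribution $N_i(<m;z,z+dz)$.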
We use the following filters:
$U$ (Koo 1981), $B$ and $V$ (Buser 1978), $b_j$ and $r_f$
(Couch \& Newell 1980), $I$ and $K$ (Wainscoat \& Cowie 1992).
For the $U,~b_j,~r_f,~I$, and $K$ bands, $z_L \simeq $ 3, 4, 6, 8, and 23,
respectively.
The integration over $dM$ in (1) extends up to
$$M_{max}(m,z)=m-5\log {d_L(z;H_0,\Omega)\over{10}}-corr(z;H_0,\Omega,z_f),
\eqno(2)$$
where $d_L(z;H_0,\Omega)$ is the luminosity distance measured in pc,
and $corr(z;H_0,\Omega,z_f)$ is the {\it correction} needed to obtain the
galaxy rest frame magnitude from its observer frame magnitude.
In the nE model, it is just the $k$-correction.
In the PLE model, it is given by the ($e+k$)-correction,
which also includes the effects due to the intrinsic galaxy luminosity
evolution.
We computed this correction up to $z=z_{max}$ from the synthetic spectral
energy distribution (SED) of the various galaxy types for each photometric
band.
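For the $\Lambda=0$ Friedmann models considered here, $d_L$ has the closed Mattig form, with the empty-universe limit $d_L=(c/H_0)\,z(1+z/2)$ as $\Omega\to0$. The formulae below are standard; the function names, and the assumption that the paper's code computes $M_{max}$ this way, are ours:

```python
import math

C_KM_S = 299792.458  # speed of light in km/s

def d_L_pc(z, H0=50.0, Omega=1.0):
    """Luminosity distance in pc for a Lambda=0 cosmology (Mattig relation)."""
    hubble_pc = C_KM_S / H0 * 1.0e6          # c/H0 in pc (H0 in km/s/Mpc)
    q0 = Omega / 2.0
    if q0 < 1e-6:                            # empty (Omega -> 0) limit
        return hubble_pc * z * (1.0 + z / 2.0)
    return (hubble_pc / q0**2
            * (q0 * z + (q0 - 1.0) * (math.sqrt(1.0 + 2.0 * q0 * z) - 1.0)))

def M_max(m, z, corr, H0=50.0, Omega=1.0):
    """Equation (2): faintest absolute magnitude visible at redshift z."""
    return m - 5.0 * math.log10(d_L_pc(z, H0, Omega) / 10.0) - corr
```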
Finally, the colour distribution $N_i(<m;c,c+dc)$ can be derived from the
$z$ distribution $N_i(<m;z,z+dz)$ for each type of galaxy by using the
cosmology dependent relation between colour and $z$,
$c_i(z;H_0,\Omega,z_f)$, given by the adopted spectral evolution model.
The colour distributions for the different types are then added up.
\subsection{Luminosity function}
There is increasing evidence that the galaxy LF varies with galaxy type.
Binggeli, Sandage \& Tammann (1988) have derived morphology dependent LFs
for galaxies in the Virgo cluster.
Efstathiou, Ellis \& Peterson (1988), Shanks (1990) and Loveday et al. (1992)
have shown that the LF in the field is well described by the analytical
expression of Schechter (1976), with values of $\alpha$ and $M^*$ which depend
on the galaxy morphological type.
In general, blue galaxies show a steeper slope than red galaxies.
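In magnitudes, the Schechter (1976) form reads $\phi(M)\,dM = 0.4\ln 10\;\phi^{*}\,10^{0.4(\alpha+1)(M^{*}-M)}\exp[-10^{0.4(M^{*}-M)}]\,dM$. A minimal evaluation (illustrative Python, not from the paper; the parameter values plugged in below are those of Table 1):

```python
import math

def schechter(M, alpha, M_star, phi_star):
    """Schechter LF per unit magnitude (phi_star in Mpc^-3)."""
    x = 10.0 ** (0.4 * (M_star - M))     # luminosity in units of L*
    return 0.4 * math.log(10.0) * phi_star * x ** (alpha + 1.0) * math.exp(-x)
```

With $\alpha=-1.24$ the faint end lies well above the $\alpha=-0.48$ curve for the same $M^{*}$, which is the "blue galaxies show a steeper slope" statement in quantitative form.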
\begintable*{1}
\nofloat
\caption{{\bf Table 1.} Local galaxy mix, LF parameters$^a$, and
local colors$^b$}
\halign{%
#\hfil & \hfil#\hfil & \hfil#\hfil & ~~\hfil#\hfil
& ~~\hfil#\hfil & \hfil#\hfil & \hfil#\hfil & \hfil#\hfil & \hfil#\hfil
& \hfil#\hfil \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr\noalign{\vskip 3pt}
Type~~~ & ~~Fraction$^c$~~ & ~~$\alpha$~~ & ~~$M^*_{b_j}$~~ & ~~$\Phi_i^*$~~
& ~$B-V$~ & ~$U-b_j$~ & ~$b_j-r_f$~ & ~$b_j-I$~ & ~$b_j-K$~ \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr\noalign{\vskip 3pt}
E/S0 & 0.28 & -0.48 & -20.87 & 0.95 & 0.95 & 0.75 & 1.55 & 2.32 & 4.13 \cr
Sab-Sbc & 0.47 & -1.24 & -21.14 & 1.15 & 0.68 & 0.32 & 1.21 & 1.83 & 3.47 \cr
Scd-Sdm & 0.22 & -1.24 & -21.14 & 0.54 & 0.43 & -0.05 & 0.88 & 1.39 & 2.96 \cr
very Blue (vB) & 0.03 & -1.24 & -21.14 & 0.12 & 0.07 & -0.49 & 0.32 & 0.63 & 2.06 \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr
}
\tabletext{\noindent
$^a$ $\alpha$, $M^*_{b_j}$ from Efstathiou et al. (1988);
$\Phi_i^*$ in units of $10^{-3}$ Mpc$^{-3}$ (\S 2.3).
$H_0=50$ km s$^{-1}$ Mpc$^{-1}$ throughout this paper;
\par\noindent $^b$ From BC93 ($t_g=16$ Gyr, $z_f = 4.5$, $\Omega = 0$);
\par\noindent $^c$ Mix from Ellis (1983).}
\endtable
We adopt the values of $\alpha$ and $M^*_{b_j}$ derived separately
for early and late type galaxies by Efstathiou et al. (1988) from
the Anglo-Australian Redshift Survey data, listed in Table 1.
We assume that these parameters are valid from
$M_{b_j}=-24$ to $M_{b_j}=-15.5$ (cf. Loveday et al. 1992). To obtain the
local LF in the $U,~r_f,~I$, and $K$ bands we shift $M^*_{b_j}$ according
to the colours at $z=0$ for each galaxy type listed in Table 1.
The $K$ band LF thus obtained is consistent with the
observed one (Mobasher et al. 1995; Glazebrook et al. 1995b).
We have not used the more recent determination of $\alpha$ and $M^*_{b_j}$
by Loveday et al. (1992), because of the bias mentioned by these authors
against identifying early-type galaxies in their sample.
As shown by Zucca, Pozzetti \& Zamorani (1994), the correction for this bias
moves the Loveday et al. values of ($\alpha$, $M^*$) towards those of
Efstathiou et al. (1988).
\subsection{Count normalization}
The uncertainties in the normalization of the local galaxy LF translate
into a major source of uncertainty in the models for faint galaxy counts.
Moreover, different selection effects in photographic and CCD data
may lead to difficulties in comparing bright and deep surveys (McGaugh 1994).
The small size of well calibrated samples at bright magnitudes and the local
fluctuations caused by galaxy clustering lead to large variations in the
bright counts.
In principle, this problem could be avoided by using the APM galaxy counts
in the range $15 < b_j < 20.5$ derived from 4300 deg$^2$ of the sky
(Maddox et al. 1990). However, while these counts are close to the
mean of previously published data at $b_j \sim 19-20$, they
show a steeper slope for $b_j<19$ than previous determinations.
Thus, the APM counts at $b_j < 17$ are below models normalized to
match the counts at $b_j = 19-20$.
Metcalfe, Fong \& Shanks (1995b) show evidence of a possible magnitude
scale error in the APM galaxy survey, whose size could be
large enough to cause the apparent disagreement between the APM galaxy counts
and the predictions from standard models in the range $17.5 < b_j < 20$.
Because of this uncertainty in the counts at bright magnitude,
we have chosen to scale our model predictions
to the observed number counts in the range $19<b_j<19.5$,
namely $\log N_0=1.98$ gal deg$^{-2}$ (0.5 mag)$^{-1}$ (see also
GRV90). This normalization is consistent with that obtained recently
for the $K$ band LF (Glazebrook et al. 1995b), as well as for the $b_j$
band (Ellis et al. 1995).
Finally, in order to compute $\Phi^*_i$, i.e. the normalization
of the LF for each morphological type,
we have adopted the local galaxy mix derived by Ellis (1983) for
$b_j<16.5$ from the DARS data (see column 2 in Table 1). This galaxy mix
is in good agreement with the percentage of elliptical and spiral
galaxies derived recently by Efstathiou et al. (1988).
The same values of $\Phi^*_i$ are used to model different photometric bands.
The $b_j$ LF summed over all types derived from our models is well described
by the parameters $\alpha= -1.2,~M^*_{b_j}=-21.1$,
$\Phi^*_{ALL}=2.5 \times 10^{-3}$ Mpc$^{-3}$.
\subsection {Spectral energy distributions}
Our galaxy spectral energy distributions (SEDs)
are based on the BC93 galaxy spectral evolution library.
The BC93 models are built from a library of stellar tracks
which includes all evolutionary stages for stars of solar metallicity.
Empirical near-UV to near-IR spectra of galactic stars, extended to the far-UV
by means of model atmospheres, are used in the synthesis.
The inclusion in these models of the Post-AGB stellar evolutionary phase,
which contributes significantly in the UV spectral range in old
stellar populations (Magris \& Bruzual 1993),
represents an improvement over previous spectral evolution codes.
We used the following procedure to select a subset of galaxy models from
the BC93 library.
First, we selected models on the basis of their ability to reproduce the
shape of the continuum and spectral features in the SEDs of local galaxies
of different morphological types, and verified that the $k$-corrections
computed from the models were in agreement with the empirical ones.
Then, among these models we selected by trial and error those which
better reproduced the empirical constraints on galaxy evolutionary
properties, namely, galaxy number counts and colour and $z$ distributions.
Thus, we have to compute the number count model as outlined in \S2.1
in order to find which galaxy spectral evolution models produce the
best fit to the counts and colour and $z$ distributions.
The feedback in this procedure is unavoidable due to the lack of independent
constraints on galaxy evolution (BK80, GK95).
\begintable{2}
\caption{{\bf Table 2.} Initial mass function$^a$}
\halign{%
#\hfil & \hfil# & \hfil# & \hfil# & # \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr\noalign{\vskip 3pt}
~IMF & $x_i$ & $m_1$ & $m_2$ & $~~c_i$ \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr\noalign{\vskip 3pt}
\noalign{\vskip 4pt}
Scalo (1986) & -2.60 & 0.10 & 0.18 & 100.530 \cr
& 0.01 & 0.18 & 0.42 & 1.14430 \cr
& 1.75 & 0.42 & 0.62 & 0.25293 \cr
& 1.08 & 0.62 & 1.18 & 0.34842 \cr
& 2.50 & 1.18 & 3.50 & 0.44073 \cr
& 1.63 & 3.50 & 125 & 0.14819 \cr
\noalign{\vskip 4pt}
Salpeter (1955) & 1.35 & 0.10 & 125 & 0.17038 \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr
}
\tabletext{\noindent $^a f_i(m) = c_i m^{-(1+x_i)}$ for $m_1 \le m \le m_2$,
$m_1$ and $m_2$ in M$_\odot$ units.}
\endtable
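The footnote of Table 2 defines each IMF segment-wise as $f_i(m)=c_i m^{-(1+x_i)}$ for $m_1\le m\le m_2$; the tabulated coefficients $c_i$ make each IMF integrate to unit total mass between 0.1 and 125 M$_\odot$, which can be checked numerically (illustrative Python; the normalization check is our addition):

```python
# (x_i, m_1, m_2, c_i) segments, masses in solar units, from Table 2
SALPETER = [(1.35, 0.10, 125.0, 0.17038)]
SCALO = [(-2.60, 0.10, 0.18, 100.530), (0.01, 0.18, 0.42, 1.14430),
         (1.75, 0.42, 0.62, 0.25293), (1.08, 0.62, 1.18, 0.34842),
         (2.50, 1.18, 3.50, 0.44073), (1.63, 3.50, 125.0, 0.14819)]

def imf(m, segments):
    """f_i(m) = c_i m^-(1+x_i) on the segment containing m, else 0."""
    for x, m1, m2, c in segments:
        if m1 <= m <= m2:
            return c * m ** -(1.0 + x)
    return 0.0

def total_mass(segments, n=20000):
    """Midpoint-rule check of the normalization: int m f(m) dm over the IMF range."""
    lo, hi = segments[0][1], segments[-1][2]
    dm = (hi - lo) / n
    return sum((lo + (i + 0.5) * dm) * imf(lo + (i + 0.5) * dm, segments) * dm
               for i in range(n))
```

The steeper high-mass slope of the Scalo IMF ($x=1.63$ vs. 1.35) is what makes it "less rich in massive stars" than the Salpeter IMF, as discussed in \S3.1.1.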
\subsubsection {Evolutionary constraints on the SEDs}
The following aspects of the data guided our choice of the SEDs.
Strong evolution for $z<1$ is ruled out for all galaxy types
by the observed $z$ distributions
of Broadhurst et al. (1988) and Colless et al. (1990, 1993), which are close
to the nE prediction up to $b_j=22.5$.
Colless et al. (1993) estimate an upper limit to
the amount of luminosity evolution of $\Delta M_B \sim -1.2 $ for
$z<1$, providing a strong constraint on the evolutionary corrections.
The observed number counts, particularly in the $b_j$ band, require
optically mildly-evolving SEDs, over the entire $z$ range,
in agreement with Koo, Gronwall, \& Bruzual (1993, hereafter KGB93).
Additional evidence in favor of mild evolution derives from recent work on the
$z$ evolution of the LF in $B$-, $I$-, and $K$-selected samples by
Colless (1995), Lilly et al. (1995b), and Glazebrook et al. (1995b),
respectively.
We introduce the spectral class of very Blue (vB) galaxies in order to
explain the bluest colours, $b_j-r_f \la 0.3$,
observed in the deepest $b_j$-selected surveys (Glazebrook et al. 1995a).
vB galaxies are meant to reproduce a population of starburst galaxies
present at each $z$, whose evolution does not follow the pure luminosity
prescription.
We assume that star formation in these galaxies keeps their SED constant
in time.
For vB galaxies we adopt the LF of spiral galaxies.
The observed fraction of galaxies bluer than $B-V=0.6$ at the faint magnitude
limit ($\sim10\%$) is reproduced assuming that vB galaxies represent
$\sim 3\%$ of the total local mix (Table 1).
Thus, at the few \% level, our PLE models include a fraction
of non-passively evolving galaxies.
\begintable{3}
\nofloat
\caption{{\bf Table 3.} Galaxy SED model parameters}
\halign{%
\hfil#\hfil & ~~#\hfil & ~~#\hfil & ~~#\hfil & ~~#\hfil \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr\noalign{\vskip 3pt}
$\Omega$ & Type & SFR & IMF & $t_g$ (Gyr) \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr\noalign{\vskip 3pt}
0 & E/S0 & $\tau_1,\tau_2$ & Scalo & 16 \cr
& Sab-Sbc & $\tau_{10}$ & Scalo & 16 \cr
& Scd-Sdm & cons & Salpeter & 16 \cr
& vB & cons & Salpeter & 0.1 \cr
\noalign{\vskip 3pt}\cr
1 & E/S0 & $B_1,\tau_1$ & Scalo & 12.7 \cr
& Sab-Sbc & $\tau_8$ & Scalo & 12.7 \cr
& Scd-Sdm & cons & Salpeter & 12.7 \cr
& vB & cons & Salpeter & 0.1 \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr
}
\endtable
\advance\fignumber by 1\beginfigure{\fignumber}
\putfigl{Fig1.ps}{6}{270}
\caption
{
{\bf Figure 1.}
$k$-correction in the $r_f$ band for galaxies of different morphological
types from BC93 models (dotted lines, {\it sc:} Scalo IMF; {\it sp:} Salpeter
IMF) and from the observed SEDs described in the text (solid lines).
}
\endfigure
\section{Results}
\subsection{Reference models}
A BC93 galaxy evolution model is specified by the star formation rate (SFR)
$\psi(t)$ and the initial mass function (IMF).
Once $H_0,~\Omega$, and $z_f$ are specified, the age $t_g$ of the model SEDs
and their observer frame properties are fixed.
We considered models with $t_g = 16$ and 12.7 Gyr, corresponding
to $z_f = 4.5$ and 10 for $\Omega = 0$ and 1, respectively.
We tested models computed for the Salpeter (1955) and the Scalo (1986) IMFs
(Table 2).
Table 3 defines the BC93 model SEDs selected as described in \S2.4 that
will be used to build our reference models for the counts as indicated in \S2.1.
A brief comment on these SEDs follows.
Models for E/S0 and Sab-Sbc galaxies are characterized by the SFR
$\psi(t) \propto \tau^{-1} ~ \exp(-t/\tau)$,
where $\tau$ is the e-folding time characterizing this form of $\psi(t)$.
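For concreteness, this exponential SFR can be sketched as follows (a minimal Python illustration with our own normalization and function names, not the BC93 code):

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (avoids NumPy version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def sfr_exponential(t, tau):
    """psi(t) = tau**-1 exp(-t/tau), normalized so that the total mass
    formed over 0 <= t < infinity is unity."""
    return np.exp(-t / tau) / tau

t = np.linspace(0.0, 16.0, 1601)        # age grid in Gyr
psi_tau1 = sfr_exponential(t, 1.0)      # tau_1 model (E/S0)
psi_tau10 = sfr_exponential(t, 10.0)    # tau_10 model (Sab-Sbc)

# fraction of the final stellar mass formed within the first Gyr
mask = t <= 1.0 + 1e-9
frac_tau1 = trapz(psi_tau1[mask], t[mask])   # ~0.63 for tau = 1 Gyr
```

The short e-folding time concentrates star formation at early ages, which is what makes the $\tau_1$ model elliptical-like.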
The spectral properties of nearby E/S0 galaxies
are reproduced well by both the $\tau=1$ Gyr model (hereafter $\tau_1$ model)
and a model in which star formation takes
place at a constant rate during the first Gyr in the life of the galaxy
(hereafter $B_1$ model) for either the Salpeter or
the Scalo IMF, and by the
$\tau=2$ Gyr model (or $\tau_2$ model) for the Scalo IMF.
The observed $b_j-r_f$ and $B-K$ colour distributions are reproduced more
closely by our $\Omega = 0$ reference PLE model if we represent the E/S0
galaxy SEDs by the $\tau_1$ and $\tau_2$ models, rather than by the $B_1$ model
(Table 3).
The local properties of Sab-Sbc galaxies are well described by the
$\tau=4$ Gyr (or $\tau_4$) Salpeter IMF model, or the $\tau=10$ Gyr
(or $\tau_{10}$) Scalo IMF model. The Scd-Sdm galaxies are described
satisfactorily by a model in which stars form at a constant rate following
the Salpeter IMF (hereafter $cons$ model).
The SED of the vB galaxies can be approximated by a model with constant
SFR seen at an age of 0.1 Gyr after the last major event of star formation.
This SED is even bluer than that of NGC 4449 (BC93).
\subsubsection{Scalo IMF vs. Salpeter IMF}
The constraints mentioned in \S2.4 led us to adopt Scalo IMF models for
early-type galaxies (Table 3).
The Scalo IMF is less rich in massive stars than the Salpeter IMF
because of the steeper slope of the former at the high mass end.
The high number of massive stars in the Salpeter IMF models produces
a large amount of UV flux at early times, making high $z$ E galaxies detectable
in current deep surveys and producing an excess in the faint $b_j$ counts
with respect to the observations.
An alternative way to reduce the UV flux is to assume a significant
amount of dust extinction inside the galaxies (GK95).
There is some ambiguity in the literature on the functional form of the
Scalo IMF. Following a suggestion by D.C. Koo, BC93 have divided the IMF given
by Scalo in graphical form into the 6 different segments listed in Table 2.
For the $i^{th}$-segment the IMF is given by $f_i(m)=c_i m^{-(1+x_i)}$.
The constants $c_i$ are computed from the requirement of continuity of the
IMF (cf. Guiderdoni \& Rocca-Volmerange 1987, hereafter GRV87).
\begintable{4}
\nofloat
\caption{{\bf Table 4.} Intrinsic luminosity evolution}
\halign{%
#\hfil & \hfil#\hfil & \hfil#\hfil & \hfil#\hfil & \hfil#\hfil \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr\noalign{\vskip 3pt}
$\lambda_0^a$ & 2300\AA & 1150\AA & 11000\AA & 5500\AA \cr
Type~~~ & ~$\Delta B_{z=1}$~ & ~$\Delta B_{z=3}$~ & ~$\Delta K_{z=1}$~ &
~$\Delta K_{z=3}$~ \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr\noalign{\vskip 3pt}
E/S0 & -1.6 & -6.0 & -0.6 & -2.3 \cr
Sab-Sbc & -1.0 & -1.6 & $\sim 0$ & $\sim 0$ \cr
Scd-Sdm & $\sim 0$ & $\sim 0$ & +0.4 & +0.6 \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr
}
\tabletext{\noindent $^a$ Rest frame wavelength}
\endtable
\subsection {$k$ and $(e+k)$-corrections}
Fig 1 shows the model and the empirical $k$-corrections in the $r_f$ band.
These models, listed under $\Omega = 0$ in Table 3, reproduce the flattening at
high $z$ shown by the empirical $k$-corrections of ellipticals and spirals
(cf. Cowie et al. 1994).
In contrast, the $k$-corrections of ellipticals modeled by
Rocca-Volmerange \& Guiderdoni (1988) continue to increase with $z$.
This difference is mainly due to the lower UV flux in the GRV87
model for E galaxies, resulting from the lack of
Post-AGB stars in their population synthesis.
The $k$-corrections derived from the synthetic spectra are in agreement with
the $k$-corrections in the $(U,b_j,r_f,I)$ bands computed from the observed
spectra of Pence (1976) up to $z=(0.6,0.9,2.2,2.5)$,
with the compilation in $b_j$ by King \& Ellis (1984) up to $z=1.5$, and with
the $K$ and $B$ band estimates by Cowie et al. (1994) for $z < 3$.
Our E/S0 models reproduce also the $k$-correction at higher $z$
computed from the average observed E galaxy SED of BC93.
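The $k$-correction itself follows from integrating the rest-frame and blueshifted SED through the filter response; a minimal sketch under one standard magnitude convention (our own toy implementation, not the code actually used for Fig 1) is:

```python
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def k_correction(z, wavelength, flux, response):
    """k-correction in magnitudes for an SED F_lambda (`flux`) and a
    filter `response`, both sampled on `wavelength`.  np.interp clamps
    outside the tabulated range, so the SED should extend bluewards of
    the filter by at least a factor (1+z)."""
    flux_blue = np.interp(wavelength / (1.0 + z), wavelength, flux)
    num = trapz(flux * response, wavelength)
    den = trapz(flux_blue * response, wavelength)
    return 2.5 * np.log10(1.0 + z) + 2.5 * np.log10(num / den)
```

A flat-spectrum source has $k(z) = 2.5\log(1+z)$ in this convention, while a red SED (falling towards the blue) gives the larger, positive $k$-corrections characteristic of ellipticals.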
\advance\fignumber by 1\beginfigure{\fignumber}
\putfigl{Fig2a.ps}{6}{270}
\putfigl{Fig2b.ps}{6}{270}
\caption
{
{\bf Figure 2.}
{\it (a)} $b_j$ band $(e+k)$-corrections (solid lines) and $k$-corrections
(dotted lines) for galaxies of different morphological types derived from the
BC93 models listed under $\Omega = 0$ in Table 3.
{\it (b)} Same as {\it (a)} but for the $K$ band.
}
\endfigure
\begintable{5}
\nofloat
\caption{{\bf Table 5.} Slope $\gamma$ at faint magnitudes}
\halign{%
#\hfil & \hfil#\hfil & \hfil#\hfil & \hfil#\hfil & \hfil#\hfil \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr\noalign{\vskip 3pt}
Band~~~ & ~~mag. range ~~ & ~~$\gamma_{obs}$~~ & ~~$\gamma_{\rm PLE}^a$~~
& ~~$\gamma_f^b$~~ \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr\noalign{\vskip 3pt}
$U$ & 20--25 & 0.49 & 0.49 & 0.22 \cr
$b_j$ & 20--25 & 0.45 & 0.46 & 0.30 \cr
$r_f$ & 20--25 & 0.37 & 0.36 & 0.21 \cr
$I$ & 19--23 & 0.34 & 0.35 & 0.17 \cr
$K$ & 18--23 & 0.26-0.30 & 0.29 & 0.10 \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr
}
\tabletext{\noindent $^a$ From the $\Omega=0$ reference PLE model
\@par}\noindent $^b$ Predicted slope at faint limits: $24 < m < 27$}
\endtable
Figures 2a and 2b show the $k$ and $(e+k)$ corrections in the $b_j$ and $K$
bands, respectively, for the SEDs selected in the $\Omega = 0$ case (Table 3).
We see here the flattening of the ($e+k$)-corrections
in $b_j$ and $r_f$ at high $z$, which was assumed entirely {\it ad hoc} by
Metcalfe et al. (1991, 1995a).
If we use Salpeter IMF models for the ellipticals, the ($e+k$)-correction
in $b_j$ continues to decrease with $z$, up to $\sim-3(-4)$ at $z=2(3)$.
The difference between the $(e+k)$ and $k$-corrections gives the intrinsic
galaxy luminosity evolution, or $\Delta M_z = e$-correction$(z)$.
This quantity represents the luminosity brightening of a galaxy at
$\lambda_0 = \lambda_{obs} \times (1+z)^{-1}$
with respect to its $z = 0$ luminosity at the same wavelength.
The $e$-corrections at $z=1$ and 3, as observed in the
$B$ and $K$ bands, are given in Table 4.
The amount of brightening up to $z=1$ is relatively small,
as required by the observed $z$ distributions.
For $z \sim 3$, $\Delta B_z \sim -6$ for E/S0 galaxies and
$\Delta B_z \sim -1.6$ for Sab-Sbc galaxies.
In the $K$ band the luminosity evolution is significantly lower than in the
other bands ($\Delta K_{z=1} \sim -0.6$ and $\Delta K_{z=3} \sim -2.3$
for E/S0 galaxies), showing how the concept of mild luminosity evolution
depends on the wavelength range being considered.
\subsection{Number counts}
Optical counts have reached very deep levels thanks to CCD cameras.
IR array detectors have made it possible to extend the faint
galaxy work to the $K$ band.
Observations in the $K$ band are particularly important because
light at $2 \mu m$ is relatively insensitive to dust extinction.
At high $z$ the $K$ band samples the well understood rest
frame optical range, in which spectral evolution is less significant than
in the UV region sampled at high $z$ by the optical bands (Table 4).
$K$ band counts are thus less sensitive than optical counts to the
evolution of the stellar population and to the details of the SFR and IMF.
$K$ counts can therefore be used, at least in principle, as a direct test of
cosmological models.
For these reasons, in this paper we pay special attention to fitting
at the same time all known properties of faint galaxy samples,
from the $U$ to the $K$ band.
\advance\fignumber by 1\beginfigure*{\fignumber}
\psfig{file=Fig3.ps,height=16cm,width=13cm,clip=,angle=0}
\caption
{
{\bf Figure 3.}
Differential galaxy number counts per square degree per half
magnitude interval as a function of apparent magnitude.
The sources of the observed data points are indicated in each panel.
The lines show the predicted counts for different models:
dotted line: nE model ($\Omega = 0,\ z_f = 4.5$);
solid line: PLE model ($\Omega = 0,\ z_f = 4.5$);
dashed line: PLE model ($\Omega = 1,\ z_f = 10$).
{\it (a)} $U$ band.
{\it (b)} $b_j$ band.
{\it (c)} $r_f$ band.
{\it (d)} $I$ band.
{\it (e)} $K$ band.
}
\endfigure
Figures 3a-e show the differential number counts in the $(U,b_j,r_f,I,K)$
bands. As in most of the observational papers from which the data have been
taken, $N(m)$ is plotted per half magnitude bin.
The sources of the data points are indicated in the figures.
Even though in a few cases the counts from different groups show a
relatively large scatter, the slope $\gamma$ of the $log~N - m$
relation is reasonably well defined in all bands.
At bright magnitude the slope is close to the Euclidean value $\gamma\sim 0.6$.
Table 5 shows that at fainter magnitude $\gamma$ decreases with increasing
filter effective wavelength (Jones et al. 1991; Gardner et al. 1993;
Djorgovski et al. 1995).
Despite the large difference in $\gamma$ between $b_j$ and $K$,
the two bands are sampling the same galaxy population.
We verified that the observed $b_j$ counts can be reproduced (in number
and $\gamma$) by simply shifting the observed $K$ band counts according to
the $(B-K)_{med}$ vs. $K$ relation of Gardner et al. (1993).
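The slope $\gamma$ is simply a least-squares fit to the $\log N - m$ relation; a minimal sketch (ours, using synthetic Euclidean counts rather than the real data) is:

```python
import numpy as np

def count_slope(mag, counts):
    """Least-squares slope gamma of the log N - m relation."""
    gamma = np.polyfit(mag, np.log10(counts), 1)[0]
    return float(gamma)

# Euclidean toy counts, N(m) proportional to 10**(0.6 m):
m = np.arange(18.0, 24.01, 0.5)
n_euclid = 10.0 ** (0.6 * m)
gamma_fit = count_slope(m, n_euclid)   # recovers 0.6
```

Applied to the observed counts over the magnitude ranges of Table 5, the same fit yields the $\gamma_{obs}$ values quoted there.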
The lines in these figures represent the predictions of the models that are
described in detail below.
\subsubsection{nE model}
The well known excess of faint galaxies above the nE model prediction
is confirmed (Fig 3).
In nE models the SEDs of distant galaxies are represented by SEDs that match
those of nearby galaxies (Table 3),
which correspond, on average, to old stellar populations.
The amount of UV flux produced by Post-AGB stars in E/S0 galaxies
is not as large as that attained at early ages.
The large $k$-correction moves these galaxies towards fainter magnitude
and, hence, they do not contribute to the predicted counts
even at the faint limit reached by current observations.
\advance\fignumber by 1\beginfigure{\fignumber}
\putfig{Fig4.ps}{8}{0}
\caption
{
{\bf Figure 4.}
Differential galaxy number counts as a function of $b_j$ apparent magnitude.
The lines show the predicted counts for two different models:
solid line: NLE model ($\Omega = 1,\ z_f = 10,\ \eta = 1$);
dashed line: PLE model ($\Omega = 1,\ z_f = 10$).
}
\endfigure
\advance\fignumber by 1\beginfigure*{\fignumber}
\nofloat
\putfig{Fig5.ps}{8.0}{0}
\caption
{
{\bf Figure 5.}
Contribution of each morphological type to the total
differential galaxy number counts as a function of apparent magnitude.
The lines show the predicted counts for the reference $\Omega = 0$ PLE model.
Total counts: solid line;
E/S0: dotted line;
Sab-Sbc: short-dashed line;
Scd-Sdm: long-dashed line;
vB: dot-dashed line.
{\it (a)} $b_j$ band.
{\it (b)} $K$ band.
}
\endfigure
The excess of galaxies with respect to our nE model (with $\Omega=0$)
is about a factor of
2 at $m\sim24$ in all bands, except for the $K$ band in which the nE model is
consistent with the data. Even at fainter magnitude the observed data are
never more than a factor of three higher than the nE model.
The excess reported by other authors (Broadhurst et al. 1988; Maddox et al.
1990; GRV90) is larger by a factor $\sim$ 2 than our result.
This difference is mainly due to our choice of normalizing the model counts
to the observed number in the range $19.0 < b_j < 19.5$ (\S 2.3),
and to our selection of realistic SEDs for early-type galaxies.
Our SEDs reproduce reasonably well the UV flux of local galaxies, which
translates into significant differences between our $k$-corrections at high
$z$ and those used in the quoted papers.
On the other hand, our results are in reasonable agreement with the nE model
of KGB93, supporting their conclusion that only a moderate amount of spectral
evolution is required by the data.
In an $\Omega=1$ universe the difference between data and model (not shown)
at faint magnitude is substantially higher, being a factor of $\sim 3(4)$ at
$b_j\sim 24(26)$.
\subsubsection{PLE models}
From Figures 3a-e we see that the $\Omega = 0$ reference PLE model reproduces
reasonably well the observed galaxy counts in the five bands over a wide
magnitude range.
The most significant discrepancies are with respect to the bright $b_j$ counts
of Maddox et al. (1990, \S2.3), and in $K$ with respect to the Mobasher et al.
(1986) counts and the Gardner et al. (1993) HWS counts.
At $K > 20$ the model agrees well with the data from Soifer et al. (1994) and
McLeod et al. (1995), but overestimates the Gardner et al. (1993) HDS data
and the Djorgovski et al. (1995) counts for $K < 23$.
Table 5 shows the excellent agreement between the observed and predicted values
of the slope $\gamma$ in the five bands.
At magnitudes fainter than the current limits the model predicts a significant
flattening of $N(m)$ in all bands (last column of Table 5).
The faintest $r_f$ and $I$ band counts of Tyson (1988) do not show this
flattening but show an excess (with large error bars) with respect to the
model predictions.
If faint counts keep increasing with a steep slope, then higher values of
$z_f$ are likely needed to fit the data.
For increasing $z_f$, the flattening predicted at faint magnitude shifts
towards even fainter magnitude.
Deeper data are necessary to test these predictions in detail.
\advance\fignumber by 1\beginfigure*{\fignumber}
\putfigl{Fig6.ps}{12.0}{270}
\caption
{
{\bf Figure 6.}
$b_j-r_f$ colour distribution for different $b_j$ bins.
The solid histograms show the observed distributions from
Metcalfe et al. (1991, 1995a) and, for the faintest bin, from
Koo \& Kron (1992; data kindly provided to us by C. Gronwall).
When needed, the data have been converted to the $b_j$ and $r_f$ passbands of
Couch \& Newell (1980).
The dotted histograms show the predictions of the $\Omega=0$ reference PLE
model. To aid the eye a dashed line is drawn at $b_j-r_f = 0.8$.
}
\endfigure
In Figures 3a-e we see that the $\Omega = 1$ model
significantly underestimates the faint counts in all bands.
Although to a lesser extent, this is true even in the $K$ band,
where the influence of evolution is less important.
The deficiency in the counts for the $\Omega = 1$ model is due to
the decreasing comoving volume available for increasing $\Omega$.
The differences between the predictions of this model and the data
are minimized assuming a high $z_f$.
Even for $z_f = 10$ the excess in the observed counts is a factor of $\sim 3$
at $b_j = 26$.
\subsubsection{Number luminosity evolution (NLE) model}
\def\sbl{\char91 }
\def\sbr{\char93 }
Models that obey the $\Omega = 1$ constraint of the inflation scenario
require physical effects not considered in simple PLE models.
Among several possibilities, strong galaxy merger events at early
ages have been suggested
(Rocca-Volmerange \& Guiderdoni 1990, hereafter RVG90; Broadhurst et al. 1992).
To test this possibility qualitatively, we have constructed an NLE model
in which the LF is allowed to evolve with $z$ as proposed by RVG90:
$$\phi(L,z) = (1+z)^{2\eta} \phi\sbl L(1+z)^{\eta},z=0\sbr.\eqno(3)$$
This function simulates the merging of faint high $z$ galaxies to form
bright local galaxies, while conserving the total comoving mass density.
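The conservation property of eq. (3) is easy to verify numerically. The sketch below uses a toy Schechter-like LF with faint-end slope $\alpha=-0.5$ (our choice, so that both integrals converge; these are not the LFs of Table 1):

```python
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def phi_local(L, phi_star=1.0, L_star=1.0, alpha=-0.5):
    """Toy local (z = 0) Schechter luminosity function."""
    x = L / L_star
    return phi_star * x ** alpha * np.exp(-x)

def phi_nle(L, z, eta=1.0):
    """Eq. (3): phi(L, z) = (1+z)**(2 eta) * phi[L (1+z)**eta, z=0]."""
    return (1.0 + z) ** (2.0 * eta) * phi_local(L * (1.0 + z) ** eta)

L = np.logspace(-6, np.log10(50.0), 4000)
lum0 = trapz(L * phi_nle(L, 0.0), L)   # comoving luminosity density, z = 0
lum1 = trapz(L * phi_nle(L, 1.0), L)   # same at z = 1: conserved
num0 = trapz(phi_nle(L, 0.0), L)       # comoving number density, z = 0
num1 = trapz(phi_nle(L, 1.0), L)       # grows roughly as (1+z)**eta
```

Substituting $L' = L(1+z)^{\eta}$ shows analytically that $\int L\,\phi(L,z)\,dL$ is independent of $z$, while the number density scales as $(1+z)^{\eta}$: fainter but more numerous galaxies at high $z$, as expected for merging.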
We have used the same galaxy evolution models as in the PLE models (Table 3).
This simple and crude NLE model fits the galaxy counts quite well in all
bands. A good fit is obtained for $\eta=1$ in (3).
Fig 4 shows the resulting $b_j$ counts.
More realistic models that simulate the change of the photometric
properties of the merging galaxies remain to be computed
(cf. Fritze-v.Alvensleben \& Gerhard 1994).
\subsubsection{Contribution of each galaxy type to the total counts}
\advance\fignumber by 1\beginfigure*{\fignumber}
\putfigl{Fig7.ps}{12.0}{270}
\caption
{
{\bf Figure 7.}
$b_j-r_f$ predicted colour distribution for the various morphological types
in different $b_j$ bins for the $\Omega=0$ reference PLE model.
All galaxies: solid line;
E/S0: dotted line;
Sab-Sbc: short-dashed line;
Scd-Sdm: long-dashed line;
vB: dot-dashed line.
}
\endfigure
Fig 5 shows the contribution to the total $b_j$ and $K$ counts of each galaxy
type in the $\Omega = 0$ PLE model.
In $b_j$ and $U$ (not shown), early spirals are the dominant
population over most of the magnitude range.
Evolved E/S0s contribute only $\sim 20\%$ to the counts at $20 \le b_j \le 22$.
Young E/S0 galaxies at their maximum SFR period contribute $\sim 50\%$
and produce the hump in the total counts in the range $23.5 \le b_j \le 25.5$.
This hump has been noticed by Metcalfe et al. (1995a) in their counts.
E/S0 galaxies are also dominant for $K \la 22$.
There is no $K$ hump because even at $z=z_f=4.5$ the $K$ filter
samples the blue spectral region, not reaching the UV rest frame
and missing the signature of the high star formation episode.
The hump shifts to brighter magnitude for higher $\Omega$ and to fainter
magnitude for higher $z_f$.
The onset of the $b_j$ hump may seem particularly steep in our models because we
use a coarse grid of galaxy types.
With a finer grid of galaxy spectra the bright galaxy phases appear more
gradually and the onset of the hump is less evident (GK95).
The hump in the data (Fig 3b) suggests some degree of discreteness in the
distribution of galaxy types, intermediate between ours and GK95's.
\subsection{Colour distributions}
\advance\fignumber by 1\beginfigure*{\fignumber}
\putfigl{Fig8.ps}{8.5}{270}
\caption
{
{\bf Figure 8.}
$B-K$ colour distribution for different $K$ bins.
The solid histograms show the observed distributions from
the Hawaii $K$ band survey (Songaila et al. 1994).
The dotted histograms show the predictions of the reference $\Omega=0$
PLE model.
}
\endfigure
The colour distributions derived from faint galaxy surveys show a gradual
trend towards bluer mean colours at fainter magnitude.
The result that $b_j-r_f$ (photographic) becomes significantly bluer
beyond $b_j\sim22$ (Kron 1980) has been confirmed by various groups
(Shanks et al. 1984; Tyson 1984; Infante, Pritchet \& Quintana 1986).
With CCD detectors colour distributions have been extended up to
$b_j\sim27$ (Koo \& Kron 1992 and references therein; Metcalfe et al. 1995a).
The median $B-K$ colour of $K$-selected field galaxies also becomes bluer
beyond $K\sim 17.5$ (Gardner et al. 1993).
To make the comparison of the data and models more realistic, we have
applied a Gaussian error function with $\sigma=0.15$ mag to the
colour vs. $z$ relations before deriving the colour distributions.
This procedure is expected to take into account, at least to first order, both
the observational errors in the colours and the intrinsic dispersion
in the colours of galaxies of the same morphological type.
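This smoothing step can be sketched as follows (a minimal illustration with made-up colours and weights; the $\sigma=0.15$ mag is the value adopted in the text):

```python
import numpy as np

def smoothed_colour_distribution(model_colours, weights, centres, sigma=0.15):
    """Each model galaxy contributes a Gaussian of width sigma (mag)
    rather than a delta function, mimicking photometric errors plus the
    intrinsic colour scatter within a morphological type."""
    dist = np.zeros_like(centres)
    norm = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    for c, w in zip(model_colours, weights):
        dist += w * norm * np.exp(-0.5 * ((centres - c) / sigma) ** 2)
    return dist

bins = np.linspace(-2.0, 4.0, 601)          # 0.01 mag sampling
centres = 0.5 * (bins[:-1] + bins[1:])
# two hypothetical types at b_j - r_f = 0.8 and 1.5, weights 1 and 2
dist = smoothed_colour_distribution([0.8, 1.5], [1.0, 2.0], centres)
```

The total number of galaxies is preserved by the smoothing; only the shape of the histogram is broadened.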
Fig 6 shows the $b_j-r_f$ colour distribution for different bins of $b_j$.
At the brightest and faintest bins, panels (a) and (d)-(f), the agreement
between the data and the model distributions is quite good.
At intermediate magnitude, panels (b) and (c), the model distribution
is redder than the data by $\sim \Delta (b_j-r_f) = 0.4$.
Despite this discrepancy, the model predicts that
galaxies with $b_j-r_f < 0.8$ appear at $b_j \sim 23$,
and are essentially absent at brighter magnitude.
\advance\fignumber by 1\beginfigure{\fignumber}
\putfigl{Fig9.ps}{6.0}{270}
\caption
{
{\bf Figure 9.}
Median colours as a function of apparent magnitude.
The bottom part of the figure shows $(b_j-r_f)_{med}$ vs. $b_j$.
The top part shows $(B-K)_{med}$ vs. $K$.
The data points are from the compilations by Metcalfe et al. (1995a) and
Gardner et al. (1993). The lines show the predictions for the
$\Omega=0$ reference PLE model (solid line),
$\Omega=1$ reference PLE model (dashed line),
$\Omega=0$ nE model (dotted line).
}
\endfigure
Fig 7 shows the $b_j-r_f$ fractional colour distribution predicted by the
model for the same $b_j$ bins
of Fig 6 discriminated by galaxy morphological type.
While at bright magnitude, panels (a) and (b), the various morphological types
are reasonably well separated in colour, at the faintest bins, panels (e) and
(f), galaxies of all types share the same colours.
In particular, galaxies bluer than $b_j-r_f = 0.8$ are not just vB galaxies or
late type spirals: $\sim 50\%$ of them in panel (c), and about 60\%
in panel (d), are high $z$ young E/S0 galaxies with $z_{med} \sim 1.5$.
These are the same galaxies responsible for the hump discussed in \S3.3.4.
The E/S0 galaxies are responsible for both the blue (at high $z$) and the red
(at low $z$) tails of the distribution in panels (c) and (d).
\advance\fignumber by 1\beginfigure*{\fignumber}
\putfigl{Fig10.ps}{11.0}{270}
\caption
{
{\bf Figure 10.}
$z$ distributions for $b_j$ selected samples.
The source for the observed distributions (solid histogram) is indicated in
each panel together with $N_{ID}$ = number of galaxies with measured $z$,
$N_{no-ID}$ = number of galaxies for which no $z$ could be measured, and
the completeness limit of the survey.
The area of the rectangle in each panel $= N_{no-ID}$.
Dotted line: nE model; solid line: PLE model; both for
$\Omega = 0,\ z_f = 4.5$.
The model predictions have been scaled to the total number of objects,
$N_{ID} + N_{no-ID}$.
}
\endfigure
Fig 8 compares the Hawaii $K$-band survey (Songaila et al. 1994)
$B-K$ colour distribution with our model.
This sample has been obtained as a combination of a number of different
$K$ magnitude limited surveys. The surveyed area was reduced with
increasing limiting $K$ to provide a roughly constant number of
galaxies in each magnitude bin from $K = 13$ to $K = 20$. The model has been
scaled to the number of galaxies observed in each bin. Also in this
case the agreement between model and data is rather good.
Fig 9 shows the observed $(b_j-r_f)_{med}$ and $(B-K)_{med}$ median colours
as a function of $b_j$ and $K$.
The $\Omega = 0$ PLE model reproduces rather well the data.
High-$z$ star forming galaxies are responsible for blueing the median model
colour at faint $K$.
This is one possible reason why galaxies with colour as red as local E
galaxies are not detected at faint $K$ (Gardner 1995).
The nE model fails to reproduce the data, especially in $(B-K)_{med}$ vs. $K$,
where the difference between the PLE and the nE models is large.
Even though in the $\Omega = 1$ PLE model we have used intrinsically redder
galaxy SEDs than in the $\Omega = 0$ model (Table 3), the median $B-K$
for the $\Omega = 1$ model is bluer than for the data.
The main reason for this shift towards bluer colours in the $\Omega = 1$ model
is the younger age of the galaxies in this cosmology, despite the higher $z_f$.
\subsection{Redshift distributions}
Multi-object spectrograph surveys of large samples of faint galaxies have
reached magnitudes faint enough for evolution to be
detectable.
At $b_j \ga 21$ large numbers of $z>1$ galaxies are predicted by PLE models
in which galaxies undergo {\it strong} luminosity evolution (Broadhurst et al.
1988, 1992).
The failure of the $z$ surveys to reveal these galaxies sets strong
upper limits on the amount of luminosity evolution that can be invoked to
explain the excess in the number counts and the blueing of the median
colour discussed in the previous sections.
Rather than being evidence against all PLE models
and in favour of the nE model, we argue below that the observed
$z$ distributions of blue-selected faint galaxies
are consistent with PLE models in which galaxies evolve mildly in
luminosity and rule out only the large amounts of luminosity evolution assumed
in early PLE models.
\subsubsection{$b_j$ band}
Fig 10 shows the $z$ distributions derived from four surveys.
Panels (a)-(c) correspond to $b_j$ limited samples, while the data in
panel (d) are a combination of seven fields with different limiting magnitude,
ranging from 23 to 24 in $b_j$, and completeness limit $> 70\%$ (Glazebrook et
al. 1995a). In the latter case, our simulation includes the different
$b_j$ limiting magnitude for each field.
The data in panel (b) correspond to the LDSS survey (Colless et al. 1990),
supplemented with additional $z$ measurements in a subsample of the original
LDSS survey areas by Colless et al. (1993).
The $z$ distribution of the $\Omega = 0$ reference PLE model shown in panels
(a) and (b) is consistent with the data, and does not show a significant high
$z$ tail. The mean redshifts for the PLE model distributions in these two panels are
$\langle z \rangle = (0.23,~0.32)$, in excellent agreement with the measured
values ($0.22,~0.31$).
Colless et al. (1993) reduced the $z$ incompleteness in their LDSS subsample
to $4.5\%$. From an analysis of these data they conclude that at most
$\sim4\%$ of galaxies with $b_j < 22.5$ have $z > 0.7$.
The PLE model predicts $\sim 3\%$ of galaxies with $z>0.7$ in this magnitude
range.
\begintable{6}
\nofloat
\caption{{\bf Table 6.} Glazebrook et al. (1995a) $z$ surveys }
\halign{%
~#\hfil~~~ & ~~~#\hfil~~~ & ~~~\hfil#\hfil~~~ & ~~~\hfil#\hfil~~~ \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr\noalign{\vskip 3pt}
$b_j$ range & source & $z_{med}$ & $f_{0.7}$ \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr\noalign{\vskip 3pt}
$22.5-23.5$ & data & 0.46 & 0.13 \cr
& UL$^a$ & $\le 0.48$ & $\le 0.24$ \cr
& PLE & 0.50 & 0.27 \cr
& G95$^b$ & 0.76-1.31 & 0.55-0.82 \cr
& & & \cr
$22.5-24$ & data & 0.46 & 0.12 \cr
& UL & $\le 0.56$ & $\le 0.38$ \cr
& PLE & 0.59 & 0.40 \cr
& G95 & 0.83-1.39 & 0.61-0.84 \cr
\noalign{\vskip 3pt}\noalign{\hrule}\cr
}
\tabletext{\@par}
\noindent $^a$ Upper limits if $z$-unidentified galaxies are at $z>0.7$\@par}
\noindent $^b$ PLE models by Glazebrook et al. (1995a) }
\endtable
At fainter magnitude, panels (c) and (d), the PLE model predicts a
high $z$ tail which is not seen in the data.
Following Glazebrook et al. (1995a), we compare in Table 6 the
values of $z_{med}$ (median $z$) and $f_{0.7}$ (fraction of galaxies
with $z>0.7$) for their survey and our model.
For each magnitude range in Table 6, the first line gives
$z_{med}$ and $f_{0.7}$ for the galaxies with measured $z$,
the second line gives upper limits computed by
assuming that all the $z$-unidentified galaxies are at $z > 0.7$, and the
third line lists the predictions of the $\Omega = 0$ reference PLE model.
The model can be reconciled with the data only if all,
or at least most, of the galaxies without measured $z$ in the
indicated magnitude range are at high $z$.
This assumption will be discussed in \S3.6.
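The quantities $z_{med}$ and $f_{0.7}$, and the upper limits of Table 6, can be computed as in this sketch (function and argument names are ours; the value 2.0 is an arbitrary stand-in for "some $z > 0.7$"):

```python
import numpy as np

def z_stats(z_measured, n_unidentified=0, assume_high=False):
    """Median redshift and f_0.7 (fraction with z > 0.7) for a survey.
    With assume_high=True the z-unidentified galaxies are all placed
    above z = 0.7, giving upper limits as in Table 6."""
    z = np.asarray(z_measured, dtype=float)
    if assume_high:
        z = np.concatenate([z, np.full(n_unidentified, 2.0)])
    return float(np.median(z)), float(np.mean(z > 0.7))

# hypothetical mini-sample: 4 measured redshifts, 4 unidentified objects
zmed, f07 = z_stats([0.2, 0.4, 0.6, 0.8])
zmed_ul, f07_ul = z_stats([0.2, 0.4, 0.6, 0.8],
                          n_unidentified=4, assume_high=True)
```

The "UL" rows of Table 6 correspond to the `assume_high=True` case; the "data" rows use only the measured redshifts.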
Our PLE model is in good agreement with the spectroscopic data of the two
$B$-selected SSA13 and SSA22 Hawaii Survey fields (Cowie, Hu \& Songaila 1995):
all 9 galaxies with $B \le 23$ have $z\le0.7$, and for $23<B\le24$, 8 galaxies
out of 21 (38\%) with measured $z$ have $z>0.7$.
The nE model distribution is in good agreement with the data in the
four panels of Fig 10.
The nE model requires the $z$ distribution of the $z$-unidentified
galaxies not to be very different from that of the galaxies with measured $z$.
However, it is important to recall here the failure of the nE model to reproduce
the counts and the colour distributions at faint magnitude (\S3.3.1).
Note that we have not included in our models the $(1+z)^4$ cosmological
dimming of galaxy surface brightness, which decreases the fraction of
expected high $z$ galaxies (Yoshii \& Peterson 1995).
The magnitude of this effect is a function not only of the intrinsic
parameters of the galaxies, but also of the details of the data
reduction procedure adopted in the construction of the photometric catalogs
from which the galaxies to be observed spectroscopically are selected.
Other effects, such as dust extinction inside galaxies (GK95), which have not
been considered in our models can decrease the predicted number of high $z$
galaxies.
To compare different PLE models, we list in Table 6 the range of $z_{med}$
and $f_{0.7}$ computed by Glazebrook et al. (1995a) from their PLE models.
The Glazebrook et al. values are significantly higher than ours.
The same is true for the GRV90 and Metcalfe et al. (1995a) PLE models.
The basic reasons for this difference are the milder galaxy evolution
assumed in our PLE models, and/or the cruder treatment of the luminosity and
spectral evolution of galaxies in the quoted papers.
For example, in the Glazebrook et al. (1995a) model, a single SFR history
is adopted for all morphological types.
On the other hand, the KGB93 and GK95 models produce values of $z_{med}$ and
$f_{0.7}$ slightly lower than ours and, therefore, in good agreement
with the data.
However, the KGB93 model also predicts a significant population of low
luminosity galaxies at $z < 0.2$, which is not seen in the Glazebrook et al.
data.
\advance\fignumber by 1\beginfigure{\fignumber}
\putfig{Fig11.ps}{11}{0}
\caption
{
{\bf Figure 11.}
Redshift distributions for $I$ and $K$ selected samples.
The source for the observed distributions (solid histogram) are indicated in
each panel.
The Crampton et al. (1995) data were kindly provided to us by L. Tresse.
The two values for completeness of the Songaila et al. sample refer
to $K<18$ and $K>18$, respectively.
Dotted line: nE model; solid line: PLE model; both for
$\Omega = 0,\ z_f = 4.5$.
}
\endfigure
\advance\fignumber by 1\beginfigure{\fignumber}
\putfigl{Fig12a.ps}{6.0}{270}
\putfigl{Fig12b.ps}{6.0}{270}
\caption
{
{\bf Figure 12.}
{\it (a)}
$B-K$ colour as a function of $z$ for the Songaila et al. sample.
The solid dots correspond to the $z$-unidentified galaxies in the survey and
are plotted arbitrarily at $z=3.5$.
The lines represent the colour predicted for each morphological type by the
BC93 models listed in Table 3 for the $\Omega = 0$ reference model.
E/S0: solid lines;
Sab-Sbc: short-dashed line;
Scd-Sdm: long-dashed line;
vB: dot-dashed line.
{\it (b)}
$B-I$ vs. $I-K$ for the same sample. The lines have the same meaning as in
{\it (a)}. The $z = 1$ locus is indicated. See text for details.
}
\endfigure
\subsubsection{$I$ and $K$ bands}
In Fig 11 we compare the $z$ distribution derived from the large and deep $z$
surveys of Crampton et al. (1995, $I$-selected) and Songaila et al. (1994,
$K$-selected) with the models.
The $z$ distribution in panel (a) is characterized by
$\langle z \rangle = 0.56$, $z_{med} = 0.57$, and $f_{0.7} = 0.34$.
The $\Omega = 0$ reference PLE model predicts
$\langle z \rangle = 0.58$, $z_{med} = 0.54$, and $f_{0.7} = 0.30$,
in excellent agreement with the data.
The high $z$ E/S0 galaxies predicted at $b_j \ga 23.5$ have a mean
$\langle b_j-I \rangle <1$, corresponding to $\langle I \rangle \ga 22.5$,
and are, therefore, not expected in significant numbers in this survey
limited at $I = 22.0$.
The nE model $z$ distribution,
$\langle z \rangle = 0.48$, $z_{med} = 0.47$, and $f_{0.7} = 0.18$,
does not reproduce well the tail of high $z$ galaxies observed
in this survey.
The $I$ band LF derived from the Crampton et al. (1995) data set (see
Fig 6 of Lilly et al. 1995b), as well as the E galaxy LF constructed from the
HST Medium Deep Survey (estimating $z$ photometrically, Im et al. 1995),
show clear evidence of evolution for $z\le 1$.
The amount of evolution is consistent with the prediction of our
PLE model: $\Delta I_z \sim -1.2$ up to $z = 1$.
To compare the models with the Songaila et al. (1994) $z$ distribution,
we have taken into account the decrease of the surveyed area with
increasing limiting $K$, as discussed in \S3.4.
The completeness of this spectroscopic survey is $\sim$ 100\% for
$K < 18$ and $\sim$ 70\% for $18 < K < 20$.
At variance with our $z$ distribution for $b_j$-selected samples (Table 6),
the $f_{0.7}$ upper limits derived from the $K$-selected data
are significantly lower than the model values.
For the observed $z$ distribution in Fig 11, panel (b), $f_{0.7} = 0.08$.
The PLE model predicts $f_{0.7} = 0.23$, well above the
upper limit $f_{0.7} < 0.13$ derived assuming that all galaxies
with no measured $z$ have $z > 0.7$.
This result indicates that there may be a problem in one or more
of the ingredients or assumptions that go into our PLE models.
The spectroscopic incompleteness of the sample is important at the faint
level, but it cannot be the only cause of the problem since the discrepancy
is already present in the bin $17 < K < 18$, where the spectroscopic survey
is complete. In this bin, the upper limit from the observed distribution is
$f_{0.7} < 0.16$, and the model predicts $f_{0.7} = 0.37$.
We think that the evolutionary rate in the $K$ band has to be revised in
order to explain the observations. We are currently exploring galaxy
evolution models based on different sets of evolutionary tracks and stellar
spectral libraries (Bruzual 1995) to evaluate their effect on PLE models.
\subsection{On the nature of $z$-unidentified galaxies}
Since most $z$ surveys are based on spectra which cover the range from
$\sim$3700 to $\sim$7500 \AA, the [OII] $\lambda3727$ line (the most prominent
feature in low S/N galaxy spectra) can be used as a $z$ indicator only up
to $z\sim 1$.
Higher $z$ galaxies are difficult to identify because of the lack of features
in the UV rest frame spectra of typical galaxies.
This fact, by itself, suggests the possibility that a large fraction
of the galaxies with no measured $z$ is at high $z$.
This assumption is also supported by an analysis of the colour distributions
of these galaxies.
We discuss here in detail the behavior of the $B-I$ and $I-K$ colours
in the Songaila et al. (1994) sample.
Fig 12a shows $B-K$ vs. $z$ for all galaxies in the sample, discriminated
by morphological type according to the closest proximity of the galaxy
colour to the BC93 model colour for the class at the same $z$.
Most of the $z$-unidentified galaxies (black dots in the plot)
are consistent with the model colours of E/S0s at $z > 0.2$.
In Fig 12b the same galaxies are plotted in the $B-I$ vs. $I-K$ plane.
The lines show the expected location of galaxies of different morphological
type at increasing $z$.
Two things are interesting to note: first, galaxies of all types fall
reasonably well in the appropriate region in this plane.
Second, a large fraction of the $z$-unidentified galaxies has colours
consistent with being early-type galaxies at $z > 1.0$.
We have verified {\it a posteriori} the reliability of the photometric
$z$ estimates
by examining the measured $z$ of galaxies with colours close to the expected
colour at $z = 1$: 7 out of 8 such galaxies have $0.64 < z < 1.16$.
Thus, it seems reasonable to assume that most of the $z$-unidentified galaxies
in this sample are actually at high $z$.
This conclusion is consistent with the Keck telescope spectroscopic observations
of faint galaxies ($K<20, I<22.5, B<24.5$) by Cowie et al. (1995b).
Their data show strong evidence of a significant fraction of high-$z$ luminous
star forming galaxies.
Of 333 galaxies that have been observed, $z$ could be measured for 281 galaxies.
91 galaxies ($\sim 32\%$) have $z>0.7$ and 40 galaxies ($\sim 14\%$) have $z>1$.
Cowie et al. argue that inspection of the $B-I$ vs. $I-K$ colour-colour plane
suggests that most of the remaining unidentified objects are
luminous high $z$ star forming galaxies.
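As a quick arithmetic check (an illustrative script, not part of the original analysis), the quoted percentages are consistent with being fractions of the 281 galaxies with measured $z$, not of the 333 observed:

```python
# Fractions in the Cowie et al. (1995b) Keck sample: the quoted percentages
# are relative to the 281 galaxies with a measured z.
n_measured, n_above_07, n_above_1 = 281, 91, 40

pct_07 = 100.0 * n_above_07 / n_measured   # ~32%
pct_1 = 100.0 * n_above_1 / n_measured     # ~14%

assert round(pct_07) == 32 and round(pct_1) == 14
```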
The same type of analysis cannot be applied to the $b_j$-selected survey of
Glazebrook et al. (1995a) because only $B-R$ is available for these galaxies.
The photometric $z$ estimates using a single colour are less accurate.
The $B-R$ distribution suggests a higher percentage of late type galaxies in
this sample than in the $K$-selected sample, in agreement with our
model for the number counts (\S3.3.4).
\section{Conclusions}
\def\ititem#1{\item{\it (#1)}}
The detailed comparison of a large amount of faint galaxy survey data and the
predictions of new models has provided the following results.
\beginlist
\ititem{i} The $\Omega=0$ nE model fits reasonably well the observed
$(U,b_j,r_f,I)$ counts up to $\sim$ (21,22,22,23), respectively.
At fainter magnitude the counts show an excess with respect to the nE model
in all bands, except in $K$. The excess in the $b_j$ counts
amounts to $N_{obs}/N_{nE}\sim 3$ at $b_j\sim 26$, and is lower
than in previous studies by a factor of $\sim$ 2.
This difference is due to our choice of normalizing the model counts
to the observed number in the range $19.0 < b_j < 19.5$,
and to the higher UV flux in the BC93 models of local E/S0's than in the SEDs
used in previous nE models (GRV90, Maddox et al. 1990).
\ititem{ii}
A PLE model, in which galaxies are characterized by the LF and SEDs
listed in Table 1 and Table 3 for the ($\Omega = 0,\ z_f = 4.5,\ t_g = 16$ Gyr)
Friedmann cosmology, provides good fits to most of the existing data
from faint galaxy surveys. In particular, it reproduces well the
faint galaxy counts in the $U,~b_j,~r_f,~I$ and $K$ bands, as well as
the colour distributions of blue and $K$-selected samples up to the faintest
observational limits, and the $z$ distributions of $b_j$-, $I$- and
$K$-selected samples up to $b_j<23, I<22$, and $K<17$. In the magnitude
range $23 \le b_j \le 24$, the predictions from the model can still
be reconciled with the data if most of the galaxies without a $z$
determination are at relatively high $z$ ($z>0.7)$.
\ititem{iii}
The steep slope of the APM counts at bright $b_j$ requires
strong evolution even at relatively small $z$ (Maddox et al. 1990).
This is in conflict with the mild amount of evolution required to fit
the $z$ and colour distributions at fainter magnitude.
It has been suggested that the low counts at bright magnitude may be produced
by photometric errors (Metcalfe et al. 1995b) and/or by the fact that
photographic surveys may have missed a significant population of low surface
brightness galaxies because of the relatively high isophotal limits in these
surveys (McGaugh 1994; Ferguson \& McGaugh 1995).
\ititem{iv}
PLE models in a flat $\Omega = 1$ universe cannot reproduce several aspects
of the data. In particular, the faint counts are significantly below the
observed ones in all bands, including $K$.
A NLE model in an $\Omega = 1$ universe,
in which galaxy density increases as $(1+z)^2$ due to merger events
is found to reproduce satisfactorily the counts (see also RVG90).
However, we consider that a more realistic model, which takes into
account the change of the galaxy photometric properties when the merging
occurs, must be explored before this result can be taken at face value.
\ititem{v}
The Scalo IMF, being relatively poor in massive stars,
produces models for early-type galaxies which are considerably less luminous in
the UV at early epochs than the Salpeter IMF models.
This lower flux translates into a milder spectral evolution, as required
by the observed $z$ distributions of faint galaxies.
Scalo IMF models are thus preferred for early-type galaxies over Salpeter
IMF models.
\ititem{vi}
The observed $b_j-r_f$ and $B-K$ colour distributions are
reproduced more closely by our PLE models if we assume that star formation
in E/S0 galaxies extends over a longer period of time ($\tau_1$ and $\tau_2$
models) than in a short box-type burst model ($B_1$ model).
\ititem{vii}
At faint magnitude ($b_j \ga 23, K \ga 18$) our PLE model predicts a significant
fraction of high-$z$ galaxies, mostly blue young early-type E/S0 and Sab-Sbc
galaxies. The predictions of this model are in excellent agreement
with the $B-K$ colours of the Hawaii $K$-selected survey which failed
to reveal galaxies at high $z$ with colours as red as those of
local E/S0 galaxies (Gardner 1995). On the other hand, recent results on the
LF of red and blue galaxies in the CFRS sample suggest evolution of the blue
galaxies and not of the red galaxies (Lilly et al. 1995b).
This appears to be in contradiction with our model and to favor models in
which a significant amount of the luminosity evolution needed to fit the
faint counts is due to spiral rather than early-type galaxies.
A discussion of these models can be found in Campos \& Shanks (1995).
\ititem{viii}
The most significant remaining discrepancy between our PLE model and the
data is in the $z$ distribution of the Songaila et al. (1994) $K$-selected
sample at $K>17$.
The fraction of galaxies at high $z$ predicted by the model is significantly
higher, by a factor of $\sim 2-3$, than observed.
The discrepancy persists even if, as suggested by their colours, most of the
galaxies which have been observed spectroscopically and for which no $z$ could
be measured, are at high $z$.
Thus, on the one hand, the $z$ distribution of these galaxies seems to rule
out evolution.
On the other hand, the $K$ band LF (Glazebrook et al. 1995b) indicates
that galaxies at $z=1$ are $\sim 0.75$ magnitude brighter than local ones,
consistent with PLE models.
Only additional spectroscopy of faint $K$-selected galaxies can resolve this
apparent contradiction between different data sets.
\endlist
In summary, simple PLE models, together with an appropriate selection of galaxy
evolution models (SFR and IMF) which provide mild luminosity evolution
at least up to $z=1$ in an $\Omega = 0$ universe, can still be considered
as baseline models.
Additional improvements should be considered in this framework, for example:
{\it (a)} Introduction of different $z_f$ for galaxies of different
morphological type and/or different luminosity. This could help in
explaining the colour-luminosity relation, observed in cluster
galaxies (see Bower, Lucey \& Ellis 1992 for ellipticals and
Gavazzi 1993 for both early and late types).
{\it (b)} Spectrophotometric models which include chemical and dynamical
evolution (Bressan, Chiosi \& Fagotto 1994) predict that E/S0 galaxies
approach their local colour at younger ages than in chemically homogeneous
models. This faster rate of evolution might bring the
predictions of the $\Omega = 1$ PLE model in better agreement with the
observations and should be explored in detail.
{\it (c)} Introduction in the models of at least the most important
observational selection effects, e.g. the
surface brightness selection (McGaugh 1994, Yoshii \& Peterson 1995).
{\it (d)} A detailed analysis of the effects of extinction by dust
(GK95, Campos \& Shanks 1995),
taking into account the time evolution of the amount of dust inside galaxies,
which can be particularly important at high $z$.
The interested reader may request the $k-$ and $(e+k)-$corrections required
to reproduce these models via e-mail from G.B.A. or L.P.
\section*{Acknowledgments}
G.B.A. was supported by SFB 328 during his visit to the Landessternwarte
Heidelberg K\"onigstuhl.
L.P. acknowledges the hospitality of the Landessternwarte Heidelberg
K\"onigstuhl and the Centro de Investigaciones de Astronom{\'\i}a during
the realization of this project, as well as the support of
the EEC program No. CHRX-CT92-0033.
We thank an anonymous referee for his/her careful reading of the first
version of this paper, and for challenging us to make this paper
shorter and more interesting. We expect to have succeeded.
\section*{References}
\beginrefs
\bibitem Babul A., Rees M.J., 1992, MNRAS, 255, 346
\bibitem Binggeli B., Sandage A., Tammann G.A., 1988, ARA\&A, 26, 509
\bibitem Bower R.G., Lucey J.R. \& Ellis R.S., 1992, MNRAS, 254, 589
\bibitem Bressan A., Chiosi C., Fagotto F. 1994, ApJS, 94, 63
\bibitem Broadhurst T.J., Ellis R.S., Shanks T., 1988, MNRAS, 235, 827
\bibitem Broadhurst T.J., Ellis R.S., Glazebrook K., 1992, Nat, 355, 55
\bibitem Bruzual A., G., 1995, in From Stars to Galaxies, eds. C. Leitherer, U.
Fritze-v.Alvensleben and J. Huchra, PASP Conference Series, in press.
\bibitem Bruzual A., G., Charlot S., 1993, ApJ, 405, 538 (BC93)
\bibitem Bruzual A., G., Kron R.G., 1980, ApJ, 241, 25 (BK80)
\bibitem Buser R., 1978, A\&A, 62, 411
\bibitem Campos A., Shanks T., 1995, Durham astro-ph/9511110 23-Nov-95 preprint
\bibitem Carlberg R.G. \& Charlot S., 1992, ApJ, 397, 5
\bibitem Colless M., Ellis R.S., Taylor K., Hook R.N., 1990, MNRAS, 244, 408
\bibitem Colless M., Ellis R.S., Broadhurst T.J., Taylor K. \& Peterson B.A.,
1993, MNRAS, 261, 19
\bibitem Colless M., 1995, in Maddox S.J. \& Arag\'on-Salamanca A., eds,
Wide Field Spectroscopy and the Distant Universe, World Scientific, p. 263
\bibitem Couch W.J., Newell E.B., 1980, PASP, 92, 746
\bibitem Cowie L.L., Songaila A., Hu E.M., 1991, Nat, 354, 460
\bibitem Cowie L.L., Gardner J.P., Hu E.M., Songaila A., Hodapp K.-W.
and R.J. Wainscoat R.J., 1994, ApJ, 434, 114
\bibitem Cowie L.L., Hu E.M., \& Songaila A., 1995a, AJ, 110, 1576
\bibitem Cowie L.L., Hu E.M., \& Songaila A., 1995b, Nat, 377, 603
\bibitem Crampton D., Le F\`evre O., Lilly S.J., Hammer F., 1995,
ApJ, 455, 96 (CFRS V)
\bibitem Djorgovski S., Soifer B.T., Pahre M.A., Larkin J.E., Smith J.D.,
Neugebauer G., Smail I., Matthews K., Hogg D.W., Blandford R.D.,
Cohen J., Harrison W., Nelson J., 1995, ApJ, 438, L13
\bibitem Efstathiou G., Ellis R.S., Peterson B.A., 1988, MNRAS, 232, 431
\bibitem Ellis R.S., 1983, in Jones B.J.T., Jones J.E., eds, The Origin and
Evolution of Galaxies. Reidel, Dordrecht, p. 255
\bibitem Ellis R.S., Colless M., Broadhurst T., Heyl J., Glazebrook K.,
astro-ph/9512057 11-Dec-95 preprint
\bibitem Ferguson H.C. \& McGaugh S.S., 1995, ApJ, 440, 470
\bibitem Fritze-v.Alvensleben, U. \& Gerhard, O.E., 1994, A\&A, 285, 751
\bibitem Fukugita M., Takahara F., Yamashita K., Yoshii Y., 1990,
ApJ, 361, L1
\bibitem Gardner J.P., 1995, ApJ, 452, 538
\bibitem Gardner J.P., Cowie L.L., Wainscoat R.J., 1993, ApJ, 415, L9
\bibitem Gavazzi G., 1993, ApJ, 419, 469
\bibitem Glazebrook K., Peacock J.A., Collins C.A. \& Miller L., 1994,
MNRAS, 266, 65
\bibitem Glazebrook K., Ellis R., Colless M., Broadhurst T.,
Allington-Smith J. and Tanvir N., 1995a, MNRAS, 273, 157
\bibitem Glazebrook K., Peacock J.A., Miller L., Collins C.A., 1995b,
MNRAS, 275, 169
\bibitem Guiderdoni B., Rocca-Volmerange B., 1987, A\&A, 186, 1 (GRV87)
\bibitem Guiderdoni B., Rocca-Volmerange B., 1990, A\&A, 227, 362 (GRV90)
\bibitem Guiderdoni B., Rocca-Volmerange B., 1991, A\&A, 252, 435 (GRV91)
\bibitem Gronwall C., Koo D.C., 1995, ApJ, 440, L1 (GK95)
\bibitem Hall P., Mackay C.B., 1984, MNRAS, 210, 979
\bibitem Hammer F., Crampton D., Le F\`evre O., Lilly S.J., 1995, ApJ,
455, 88 (CFRS IV)
\bibitem Infante L., Pritchet C., Quintana H., 1986, AJ, 91, 217
\bibitem Im et al. 1995, in preparation
\bibitem Jones L.R., Fong R., Shanks T., Ellis R.S., Peterson B.A., 1991,
MNRAS, 249, 481
\bibitem King C.R. \& Ellis R.S., 1984, ApJ, 288, 456
\bibitem Koo D.C., 1981, Ph.D. thesis, University of California, Berkeley
\bibitem Koo D.C., 1985, AJ, 90, 418
\bibitem Koo D.C., 1986, ApJ, 311, 651
\bibitem Koo D.C., Gronwall C., Bruzual A., G., 1993, ApJ, 415, L21 (KGB93)
\bibitem Koo D.C., Kron R.G., 1992, ARA\&A, 30, 613
\bibitem Kron R.G., 1980, ApJS, 43, 305
\bibitem Le F\`evre O., Crampton D., Lilly S.J., Hammer F., Tresse L.,
1995, ApJ, 455, 60 (CFRS II)
\bibitem Lilly S.J., Cowie L.L. \& Gardner J.P., 1991, ApJ, 369, 79
\bibitem Lilly S.J., 1993, ApJ, 411, 501
\bibitem Lilly S.J., Hammer F., Le F\`evre O., Crampton D., 1995a, ApJ,
455, 75 (CFRS III)
\bibitem Lilly S.J., Tresse L., Hammer F., Crampton D., Le F\`evre O.,
1995b, ApJ, 455, 108 (CFRS VI)
\bibitem Loveday J., Peterson B.A., Efstathiou G., Maddox S.J., 1992,
ApJ, 390, 338
\bibitem Madau P., 1995, ApJ, 441, 18
\bibitem Maddox S.J., Sutherland W.J., Efstathiou G., Loveday J.,
and Peterson B.A., 1990, MNRAS, 247, Short Comm., 1p
\bibitem Magris G.C. \& Bruzual A., G., 1993, ApJ, 417, 102
\bibitem Majewski S.R., 1989, in Frenk C.S. et al., eds, The Epoch of
Galaxy Formation. Kluwer, Dordrecht, p. 85
\bibitem McGaugh S., 1994, Nat, 367, 538
\bibitem McLeod B.A., Bernstein G.M., Rieke M.J., Tollestrup E.V.,
Fazio G.G., 1995, ApJS, 96, 117
\bibitem Metcalfe N., Shanks T., Fong R., Jones L.R., 1991, MNRAS, 249, 498
\bibitem Metcalfe N., Shanks T., Fong R., Roche N., 1995a, MNRAS, 273, 257
\bibitem Metcalfe N., Fong R., Shanks T., 1995b, MNRAS, 274, 769
\bibitem Mobasher B., Ellis R.S., Sharples R.M., 1986, MNRAS, 223, 11
\bibitem Mobasher B., Sharples R.M., Ellis R.S., 1995, MNRAS, 263, 560
\bibitem Pence W., 1976, ApJ, 203, 39
\bibitem Fong R. \& ZenLong Z., 1986, MNRAS, 221, 233
\bibitem Picard A., 1991, AJ, 102, 445
\bibitem Rocca-Volmerange B., Guiderdoni B., 1988, A\&AS, 75, 93
\bibitem Rocca-Volmerange B., Guiderdoni B., 1990, MNRAS, 247, 166 (RVG90)
\bibitem Salpeter E.E., 1955, ApJ, 121, 161
\bibitem Scalo J.M., 1986, Fund Cosmic Phys, 11, 1
\bibitem Schechter P., 1976, ApJ, 203, 297
\bibitem Shanks T., 1990, in Bowyer S., Leinert Ch., eds, Galactic and
Extragalactic Background Radiation: Optical, Ultraviolet, and Infrared
Components, IAU Symp. 139. Kluwer, Dordrecht, p. 269
\bibitem Shanks T., Stevenson P.R.F., Fong R., MacGillivray H.T., 1984,
MNRAS, 206, 767
\bibitem Soifer B.T., Matthews K., Djorgovski S., Larkin J., Graham J.R.,
Harrison W., Jernigan G., Lin S., Nelson J., Neugebauer G., Smith G.,
Smith J.D., and Ziomkowski C., 1994, ApJ, 420, L1
\bibitem Songaila A., Cowie L.L., Hu E.M., Gardner J.P., 1994, ApJS, 94, 461
\bibitem Steidel C.G., Hamilton D., 1993, AJ, 105, 2017
\bibitem Stevenson P.R.F., Shanks T., Fong R., 1986, in Chiosi C. \&
Renzini A., eds, Spectral Evolution of Galaxies. Reidel, Dordrecht, p. 439
\bibitem Tinsley B.M., 1980, ApJ, 241, 41
\bibitem Tresse L., Hammer F., Le Fevre O., and Proust D., 1993, A\&A, 277, 53
\bibitem Tyson J.A., 1984, in Capaccioli M., eds, Astronomy with
Schmidt-Type Telescopes. Reidel, Dordrecht, p. 489
\bibitem Tyson J.A., 1988, AJ, 96, 1
\bibitem Yee H.K.C., Green R.F., 1987, ApJ, 319, 28
\bibitem Yoshii Y. \& Peterson B.A., 1995, ApJ, 444, 15
\bibitem Wainscoat R.J., Cowie L.L., 1992, AJ, 103, 332
\bibitem Weir N., 1994, Ph.D. thesis, California Institute of Technology
Peacock J.A., eds, The Epoch of Galaxy Formation. Kluwer, Dordrecht, p. 15
\bibitem Zucca E., Pozzetti L., Zamorani G., 1994, MNRAS, 269, 953
\bibitem Zwicky F., Herzog E., Wild P., Karpowicz M., Kowal C.T.,
1961-1968, Catalogue of Galaxies and of Clusters of Galaxies,
California Institute of Technology, Pasadena
\endrefs
\end
\section{Problem description}
\label{sec:1}
Reinforcement learning is aimed at the solution of Markov decision problems without exact knowledge of the underlying model. In this paper we address only the case of finite state-action Markov decision processes (MDP). Moreover, for concreteness we discuss only the discounted optimality criterion and the $Q$-learning algorithm. However, this is not essential, since we consider only the Robbins-Monro conditions for the learning rates, and not the convergence of the algorithms. So, the result is applicable to other reinforcement learning algorithms based on asynchronous stochastic approximation.
$Q$-learning can be regarded as an asynchronous version of the classical value iteration algorithm for the $Q$-function. Recall that the $Q$-function $Q(x,a)$ is the optimal gain for a fixed initial state $x$ and initial action $a$. The $Q$-learning algorithm updates the current approximation $Q_t$ to $Q$ along a trajectory $(x_t,a_t)$ of states $x_t$ and actions $a_t$, generated by the selected learning (or exploration) strategy.
A learning strategy is a sequence of probability distributions $\pi_t(a)$ on the action set $A$ (we assume that $A$ does not depend on $x$). As e.g. in \cite{singh2000}, we distinguish between persistent exploration and decaying exploration learning strategies. Persistent exploration (in contrast to the decaying one) means the existence of a uniform lower bound of the form $\pi_t(a)\ge c>0$.
Besides the learning strategy, a particular instance of the $Q$-learning algorithm is determined by a learning rate $\gamma_t(x,a)$ which controls the influence of the new information on the update rule. Usually the learning rate is of the form
\begin{equation} \label{1.0}
\gamma_t(x,a)=\alpha_t I_{\{x_t=x,a_t=a\}}.
\end{equation}
The sequence $(\alpha_t)$ will be also called a learning rate.
The standard results assert the pointwise convergence $Q_t\to Q$ with probability 1 under the Robbins-Monro conditions (see Theorem \ref{th:1}):
\begin{equation} \label{1.1}
\sum_{t=0}^\infty \gamma_t=\infty,\quad \sum_{t=0}^\infty\gamma_t^2<\infty.
\end{equation}
Clearly, it is required that each state-action pair $(x,a)$ is visited infinitely often. Assuming that this property is satisfied, it is easy to construct a sequence $(\alpha_t)$ depending on a ``local clock'' and verifying (\ref{1.1}). By a local clock we mean the number of visits of a particular point $(x,a)$ by the sequence $(x_t,a_t)$. Indeed, consider a function $\varphi:\mathbb Z_+\mapsto(0,\infty)$ satisfying the Robbins-Monro conditions, that is,
$$\sum_{t=1}^\infty \frac{1}{\varphi(t)}=\infty,\qquad \sum_{t=1}^\infty \frac{1}{\varphi^2(t)}<\infty.$$
Put $\alpha_t=1/\varphi(n_t(x,a))$, where
\begin{equation} \label{1.3}
n_t(x,a)=\sum_{k=0}^t I_{\{x_k=x,a_k=a\}}
\end{equation}
is the number of visits of $(x,a)$ by the sequence $(x_k,a_k)_{k=0}^t$, and denote by $t_j(x,a)$ the time of $j$-th visit, $j\ge 1$. Then $n_{t_j}(x,a)=j$ and
$$\sum_{t=0}^\infty \gamma_t=\sum_{j=1}^\infty\alpha_{t_j}=\sum_{j=1}^\infty\frac{1}{\varphi(n_{t_j}(x,a))}= \sum_{j=1}^\infty\frac{1}{\varphi(j)}=\infty.$$
Similarly,
$$\sum_{t=0}^\infty \gamma_t^2=\sum_{j=1}^\infty\alpha_{t_j}^2=\sum_{j=1}^\infty\frac{1}{\varphi^2(j)}<\infty.$$
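To illustrate (a hypothetical numerical sketch, not part of the original argument): whatever the visit times $t_j$ are, the nonzero values of $\gamma_t$ at a fixed pair $(x,a)$ are exactly $1/\varphi(1), 1/\varphi(2), \dots$, here with $\varphi(t)=t$:

```python
# Sketch with hypothetical visit times: for alpha_t = 1/phi(n_t(x,a)) the
# nonzero values of gamma_t at a fixed pair (x,a) are exactly
# 1/phi(1), 1/phi(2), ... regardless of when the visits occur.  phi(t) = t.
visit_times = {0, 3, 4, 10, 50, 51}   # times t with (x_t, a_t) = (x, a)

def phi(t):
    return float(t)

n = 0          # local clock n_t(x, a)
gammas = []    # nonzero values of gamma_t
for t in range(60):
    if t in visit_times:
        n += 1
        gammas.append(1.0 / phi(n))

# The nonzero gamma's form the harmonic sequence 1, 1/2, 1/3, ...
assert gammas == [1.0 / j for j in range(1, 7)]
```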
If the learning rate $\alpha_t$ explicitly depends on the ``global clock'', that is, on the iteration number $t$, then the situation becomes more difficult. Let $\alpha_t$ be a deterministic sequence. In his PhD thesis, Bradtke (\cite{bradtke1994}, see also \cite{bradtke1996}), in a somewhat different situation involving function approximation, conjectured that if $(\alpha_t)$ satisfies the Robbins-Monro conditions:
$$\sum_{t=0}^\infty \alpha_t=\infty,\quad \sum_{t=0}^\infty\alpha_t^2<\infty,$$
then the same is true for $\gamma_t$. In \cite{szepesvari1996} it was mentioned that this conjecture is true if the inter-arrival times $t_{j+1}-t_j$ have a common upper bound or, more specifically, are eventually exponentially distributed with common parameters. However, these conditions are difficult to verify and they depend on the learning strategy.
In this note we show that the Bradtke conjecture holds true for persistent exploration learning strategies. This assertion follows from the main result: Theorem \ref{th:2}.
\section{Markov decision processes and $Q$-learning}
\label{sec:2}
Let $X$ and $A$ be finite state and action spaces. Consider the canonical space $\Omega=(X\times A)^\infty$ with the $\sigma$-algebra $\mathscr F$ generated by projections
$$(x_0,a_0,x_1,a_1,\dots)\mapsto (x_t,a_t).$$
Denote by $\mathscr F_t=\sigma(x_0,a_0,\dots,x_t,a_t)$ the natural filtration of the coordinate process. The probabilistic structure of the process $(x_t,a_t)$ is determined by a fixed transition kernel $q(y|x,a)$:
$$ \sum_{y\in X} q(y|x,a)=1,\qquad q(y|x,a)\ge 0$$
and a control (or learning) strategy, which is a sequence $\pi=(\pi_t)$ of probability distributions on the action set $A$. These objects determine a unique probability measure $\mathsf P_{z,\pi}$ on $\Omega$ such that
\begin{align*}
&\mathsf P_{z,\pi}(x_{t+1}=y|\mathscr F_t,a_t)=q(y|x_t,a_t),\quad \mathsf P_{z,\pi}(a_t=a|\mathscr F_{t-1},x_t)=\pi_t(a),\\
&\mathsf P_{z,\pi}(x_0=z)=1
\end{align*}
(see, e.g., \cite{lerma1996}). Note that $\pi_t(a)$ is $\sigma(\mathscr F_{t-1},x_t)$-measurable.
Given a reward function $r(x,a,y)$ and a discounting factor $\beta\in [0,1)$, the total discounted gain is defined by the value function
$$ V(z)=\sup_{\pi}\mathsf E_{z,\pi}\sum_{t=0}^\infty \beta^t r(x_t,a_t,x_{t+1}),$$
where $\mathsf E_{z,\pi}$ is the expectation with respect to $\mathsf P_{z,\pi}$. As is well known, this function is a unique solution of the Bellman (or dynamic programming) equation:
$$ V(x)=\max_{a\in A}\sum_{y\in X} q(y|x,a)(r(x,a,y)+\beta V(y)).$$
The $Q$-function is the total discounted gain for fixed initial state and initial action:
$$ Q(x,a)=\sum_{y\in X} q(y|x,a)(r(x,a,y)+\beta V(y)).$$
This function is a unique solution of the equation
$$ Q(x,a)=\sum_{y\in X} q(y|x,a)(r(x,a,y)+\beta \max_{a\in A} Q(y,a)).$$
The $Q$-learning algorithm proposed in \cite{watkins1989} recursively defines the sequence $Q_t$:
\begin{align*}
Q_{t+1}(x,a)&=(1-\alpha_t I_{\{x_t=x,a_t=a\}}) Q_t(x,a)\\
&\quad+\alpha_t I_{\{x_t=x,a_t=a\}}(r(x_t,a_t,x_{t+1})+\beta\max_{a'\in A}Q_t(x_{t+1},a'))
for strictly positive $\mathscr F_t$-measurable random variables $\alpha_t$ and an arbitrary initial guess $Q_0(x,a)$.
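A minimal tabular sketch of this update rule (the MDP encoding, helper names, and the one-state sanity check below are hypothetical illustrations, not taken from the paper):

```python
import random

def q_learning(q, r, beta, steps, seed=0):
    """Tabular Q-learning sketch on a finite MDP.

    q[x][a] maps a next state y to the transition probability q(y|x,a);
    r(x, a, y) is the reward and beta the discount factor.  Exploration is
    uniform over actions and the learning rate is 1/n_t(x,a) (local clock).
    """
    rng = random.Random(seed)
    states = sorted(q)
    actions = sorted(next(iter(q.values())))
    Q = {(x, a): 0.0 for x in states for a in actions}   # initial guess Q_0
    n = {key: 0 for key in Q}                            # local clocks n_t(x,a)
    x = states[0]
    for _ in range(steps):
        a = rng.choice(actions)                          # persistent exploration
        ys, ps = zip(*q[x][a].items())
        y = rng.choices(ys, weights=ps)[0]               # sample x_{t+1} ~ q(.|x,a)
        n[(x, a)] += 1
        alpha = 1.0 / n[(x, a)]                          # Robbins-Monro rate
        target = r(x, a, y) + beta * max(Q[(y, b)] for b in actions)
        Q[(x, a)] += alpha * (target - Q[(x, a)])
        x = y
    return Q

# One-state sanity check: constant reward 1 and beta = 1/2 give Q = 1/(1-beta) = 2.
Q = q_learning({'s': {'a': {'s': 1.0}}}, lambda x, a, y: 1.0, 0.5, 5000)
assert abs(Q[('s', 'a')] - 2.0) < 0.1
```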
Let us recall a basic result on the convergence of $Q_t$ to $Q$ with probability $1$: see \cite{jaakkola1994,tsitsiklis1994}.
\begin{theorem} \label{th:1}
Assume that the learning rate $\alpha_t$ satisfies the Robbins-Monro conditions
\begin{equation} \label{2.1}
\sum_{t=0}^\infty \alpha_t I_{\{x_t=x,a_t=a\}}=\infty,\quad \sum_{t=0}^\infty \alpha_t^2 I_{\{x_t=x,a_t=a\}}<\infty\quad \mathsf P_{z,\pi}\textrm{-a.s.}
\end{equation}
for all $(x,a)\in X\times A$. Then
$$ \lim_{t\to\infty} Q_t(x,a)=Q(x,a)\quad\mathsf P_{z,\pi}\textrm{-a.s.}$$
\end{theorem}
In this paper we study only conditions (\ref{2.1}) and not the proof of Theorem \ref{th:1}. Under the assumption that each pair $(x,a)$ is visited infinitely often, one simple construction of the learning rate $\alpha_t$, depending on the local clock (\ref{1.3}) and satisfying (\ref{2.1}), was given in Section \ref{sec:1}. In the sequel we consider another version of the local clock, defined as the number of visits of a particular state $x$:
\begin{equation} \label{2.2}
N_t(x)=\sum_{k=0}^t I_{\{x_k=x\}}.
\end{equation}
If all states are visited infinitely often $\mathsf P_{z,\pi}$-a.s., the learning strategy satisfies the lower bound $\pi_t(a)\ge c(N_t)>0$, the learning rate is of the form $\alpha_t=1/\varphi(N_t)$, and
\begin{equation} \label{2.3}
\sum_{t=1}^\infty\frac{c(t)}{\varphi(t)}=\infty,\qquad \sum_{t=1}^\infty\frac{1}{\varphi^2(t)}<\infty,
\end{equation}
then the Robbins-Monro conditions (\ref{2.1}) are satisfied.
Indeed, by the conditional Borel-Cantelli lemma \cite{meyer1972} (Chapter 1, Theorem 21), the first condition (\ref{2.1}) is satisfied if and only if
\begin{align} \label{2.4}
&\sum_{t=0}^\infty \mathsf E_{z,\pi}(\alpha_t I_{\{x_t=x,a_t=a\}}|\mathscr F_{t-1},x_t)=
\sum_{t=0}^\infty \frac{1}{\varphi(N_t)} I_{\{x_t=x\}}\mathsf E_{z,\pi}(I_{\{a_t=a\}}|\mathscr F_{t-1},x_t)\nonumber\\
=&\sum_{t=0}^\infty \frac{1}{\varphi(N_t)} I_{\{x_t=x\}}\pi_t(a)\ge\sum_{t=0}^\infty\frac{1}{\varphi(N_t)} I_{\{x_t=x\}}c(N_t)=
\sum_{j=1}^\infty \frac{c(j)}{\varphi(j)}=\infty
\end{align}
$\mathsf P_{z,\pi}\textrm{-a.s.}$
For the second condition in (\ref{2.1}) the argument is even simpler:
$$\sum_{t=0}^\infty \alpha_t^2 I_{\{x_t=x,a_t=a\}}\le \sum_{t=0}^\infty \frac{1}{\varphi^2(N_t)} I_{\{x_t=x\}}=\sum_{j=1}^\infty \frac{1}{\varphi^2(j)}<\infty \quad \mathsf P_{z,\pi}\textrm{-a.s.}$$
Note that decaying exploration is allowed, but the learning strategy must ensure infinitely many visits of every state, and the lower bounds $c(t)$ must be consistent with the learning rate: see the first condition in (\ref{2.3}).
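For a concrete (hypothetical) decaying-exploration example, take $c(t)=1/\log t$ and $\varphi(t)=t$: then $\sum_t c(t)/\varphi(t)=\sum_t 1/(t\log t)$ diverges (like $\log\log t$) while $\sum_t 1/\varphi^2(t)$ converges, so (\ref{2.3}) holds. A numerical sketch of the partial sums:

```python
import math

# Partial sums for c(t) = 1/log(t), phi(t) = t -- a hypothetical choice
# satisfying (2.3): sum c(t)/phi(t) diverges like log log t, while
# sum 1/phi(t)^2 converges.
N = 10**5
s_first = sum(1.0 / (t * math.log(t)) for t in range(2, N))
s_second = sum(1.0 / t**2 for t in range(2, N))

# The first sum grows without bound (~ log log N); the second stays
# strictly below its limit pi^2/6 - 1 ~ 0.645.
assert s_first > 2.5
assert s_second < math.pi**2 / 6 - 1
```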
In the next section we allow an explicit dependence of $\alpha_t$ on the global clock $t$, but consider only persistent exploration learning strategies. Two main examples of persistent exploration learning strategies are
\begin{itemize}
\item the Boltzmann exploration:
$$ \pi_t(a)=\frac{\exp(Q_t(x_t,a)/\tau)}{\sum_{a'}\exp(Q_t(x_t,a')/\tau)},\quad \tau>0.$$
The required inequality $\pi_t(a)\ge c>0$ follows from the boundedness of the sequence $(Q_t)$: see \cite{gosavi2006} for a simple proof.
\item $\varepsilon$-greedy exploration, which takes a ``greedy'' action $a_t\in\arg\max_{a'\in A} Q_t(x_t,a')$ with probability $1-\varepsilon$ and a uniformly random action with probability $\varepsilon$, so that $\pi_t(a)\ge\varepsilon/|A|$.
\end{itemize}
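Both strategies admit short sketches (the helper functions and the numerical values of $Q_t$, $\tau$, $\varepsilon$ below are hypothetical; for the Boltzmann case the uniform lower bound is $\pi_t(a)\ge e^{-2M/\tau}/|A|$ whenever $|Q_t|\le M$):

```python
import math

def boltzmann(q_row, tau):
    """pi(a) proportional to exp(Q(x,a)/tau); bounded below by
    exp(-2M/tau)/|A| whenever |Q| <= M."""
    w = [math.exp(v / tau) for v in q_row]
    s = sum(w)
    return [wi / s for wi in w]

def epsilon_greedy(q_row, eps):
    """Greedy action with prob 1-eps, uniform otherwise: pi(a) >= eps/|A|."""
    n = len(q_row)
    pi = [eps / n] * n
    pi[max(range(n), key=lambda a: q_row[a])] += 1 - eps
    return pi

# Persistent exploration: both distributions have a uniform positive lower bound.
q_row = [0.3, -0.7, 1.2]          # hypothetical Q_t(x_t, .), so |Q_t| <= M = 1.2
tau, eps, M = 0.5, 0.1, 1.2
pb = boltzmann(q_row, tau)
pe = epsilon_greedy(q_row, eps)
assert abs(sum(pb) - 1) < 1e-12 and abs(sum(pe) - 1) < 1e-12
assert min(pb) >= math.exp(-2 * M / tau) / len(q_row)
assert min(pe) >= eps / len(q_row)
```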
\section{Robbins-Monro conditions for persistent exploration learning strategies}
\label{sec:3}
A distribution $a\mapsto g(a|x)$ on $A$, defined for all $x\in X$, is called a stationary randomized strategy. If $g(b(x)|x)=1$ for some function $b:X\mapsto A$, then the strategy is called deterministic. Such a strategy can be identified with the function $b$. A stationary randomized strategy is called completely mixed if $g(a|x)>0$ for all $x\in X$, $a\in A$. Any stationary randomized strategy $g$ induces a Markov chain with the transition matrix
$$ P(g)(x,y)=\sum_{a\in A} q(y|x,a) g(a|x).$$
An MDP is called communicating (see \cite{bather1973,filar1988,kallenberg2002}) if for any $x, y \in X$ there exists a stationary deterministic strategy $g$ such that $y$ is accessible from $x$ in the Markov chain $P(g)$. In other words, there exists $n\in\mathbb N$, depending on $x, y$, such that $P^n(g)(x,y)>0$. We will use the fact that an MDP is communicating if and only if $P(g)$ is irreducible for every completely mixed stationary randomized strategy: see \cite[Theorem 2.1]{filar1988}.
Define the completely mixed strategy $\overline g(a|x)=1/|A|$, where $|A|$ is the cardinality of $A$. Let us recall (see \cite[Lemma 7.3(i)]{behrends2000}) that a Markov chain $P(\overline g)$ is irreducible if and only if there exists $n\in\mathbb N$ such that the matrix $\sum_{j=1}^n P^j(\overline g)$ is strictly positive. Let $\delta>0$ be the minimal element of this matrix. Then
\begin{equation} \label{3.1}
\sum_{j=1}^n P^j(\overline g)(x,y)\ge\delta.
\end{equation}
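For a small hypothetical communicating MDP these quantities can be computed directly (the encoding and helper names are illustrative):

```python
def uniform_chain(q, states, actions):
    """Transition matrix of P(g-bar) for the uniform strategy g-bar(a|x)=1/|A|."""
    return [[sum(q[x][a].get(y, 0.0) for a in actions) / len(actions)
             for y in states] for x in states]

def matmul(P, R):
    m = len(P)
    return [[sum(P[i][k] * R[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

def n_and_delta(P):
    """Smallest n with sum_{j=1}^n P^j elementwise positive, and the minimal
    element delta of that sum, as in inequality (3.1)."""
    m = len(P)
    S = [[0.0] * m for _ in range(m)]
    Pj = [row[:] for row in P]
    for n in range(1, m + 1):      # n <= |X| suffices for an irreducible chain
        for i in range(m):
            for j in range(m):
                S[i][j] += Pj[i][j]
        if min(min(row) for row in S) > 0:
            return n, min(min(row) for row in S)
        Pj = matmul(Pj, P)
    raise ValueError("P(g-bar) is not irreducible: the MDP is not communicating")

# Hypothetical 2-state MDP: 'stay' keeps the state, 'move' switches it.
q = {0: {'stay': {0: 1.0}, 'move': {1: 1.0}},
     1: {'stay': {1: 1.0}, 'move': {0: 1.0}}}
P = uniform_chain(q, [0, 1], ['stay', 'move'])   # [[0.5, 0.5], [0.5, 0.5]]
assert n_and_delta(P) == (1, 0.5)
```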
\begin{lemma} \label{lem:1}
Assume that an MDP is communicating and the learning strategy $\pi$ ensures the persistent exploration: $\pi_t(a)\ge c>0$. Then for any function $f:X\mapsto [0,\infty)$ we have
\begin{equation} \label{3.2}
\sum_{j=1}^n \mathsf E_{z,\pi}(f(x_{t+j+1})|\mathscr F_t)\ge c^n |A|^n\delta\sum_{y\in X} f(y),
\end{equation}
where the constants $n$, $\delta$ satisfy (\ref{3.1}).
\end{lemma}
\begin{proof} Put
$$ P^n(g) f(x)=\sum_{y\in X} P^n(g)(x,y) f(y),\quad n\ge 1.$$
Let $k\ge 2$, $f\ge 0$. Then
\begin{align*}
&\mathsf E_{z,\pi}(f(x_{t+k})|\mathscr F_t)=\sum_{x} f(x)\mathsf P_{z,\pi}(x_{t+k}=x|\mathscr F_t)\\
&=\sum_{x} f(x)\mathsf E_{z,\pi}(\mathsf P_{z,\pi}(x_{t+k}=x|\mathscr F_{t+k-1})|\mathscr F_t)\\
&=\sum_{x} f(x)\mathsf E_{z,\pi}(q(x|x_{t+k-1},a_{t+k-1})|\mathscr F_t)\\
&=\sum_{x} f(x)\mathsf E_{z,\pi}(\mathsf E_{z,\pi}(q(x|x_{t+k-1},a_{t+k-1})|\mathscr F_{t+k-2},x_{t+k-1})|\mathscr F_t)\\
&=\sum_{x} f(x)\mathsf E_{z,\pi}\left(\sum_a q(x|x_{t+k-1},a)\pi_{t+k-1}(a)|\mathscr F_t\right)\\
&\ge c \sum_{x} f(x)\mathsf E_{z,\pi}\left(\sum_a q(x|x_{t+k-1},a)|\mathscr F_t\right)\\
&= c |A| \sum_{x} f(x)\mathsf E_{z,\pi}\left(P(\overline g)(x_{t+k-1},x)|\mathscr F_t\right)\\
&=c|A|\mathsf E_{z,\pi}\left(P(\overline g) f(x_{t+k-1})|\mathscr F_t\right)\ge c^{k-1}|A|^{k-1}\mathsf E_{z,\pi}\left(P^{k-1}(\overline g) f(x_{t+1})|\mathscr F_t\right).
\end{align*}
It follows that
\begin{align*}
&\sum_{j=1}^n\mathsf E_{z,\pi}(f(x_{t+j+1})|\mathscr F_t)\ge \sum_{j=1}^n c^j |A|^j\mathsf E_{z,\pi}\left(P^j(\overline g) f(x_{t+1})|\mathscr F_t\right)\\
&=\sum_{j=1}^n c^j |A|^j\sum_{z}P^j(\overline g) f(z)q(z|x_t,a_t)\ge c^n |A|^n\sum_{z}\sum_{j=1}^n P^j(\overline g) f(z)q(z|x_t,a_t)\\
&\ge c^n |A|^n \min_z \sum_{j=1}^n P^j(\overline g) f(z)
=c^n |A|^n\min_z \sum_y\sum_{j=1}^n P^j(\overline g)(z,y) f(y)\\
&\ge c^n |A|^n\delta\sum_{y} f(y),
\end{align*}
where we used the fact that $c\le 1/|A|$.
\end{proof}
Under the assumptions of Lemma \ref{lem:1} every state $x\in X$ is visited infinitely often. It is even possible to give a lower bound for the growth rate of the local clock $N_t$. Namely, we claim that
\begin{equation} \label{3.2A}
\liminf_{t\to\infty}\frac{N_t(x)}{t}\ge \frac{c^n|A|^n\delta}{n}\qquad \mathsf P_{z,\pi}\textrm{-a.s.}
\end{equation}
To prove (\ref{3.2A}) let us represent $N_{kn+1}$, $k\ge 1$ in the form
$$ N_{kn+1}(x)=I_{\{x_0=x\}}+I_{\{x_1=x\}}+\sum_{j=1}^k\xi_j,\quad \xi_j=\sum_{l=(j-1)n+2}^{jn+1} I_{\{x_l=x\}}.$$
Furthermore, consider the Doob decomposition
$$\sum_{j=1}^k\xi_j=A_k+M_k,\quad k\ge 1$$
with respect to the filtration $\overline{\mathscr F}_k=\mathscr F_{kn}$, $k\ge 0$. Here $(A_k)$ is a predictable process (compensator):
$$ A_k=\sum_{j=1}^k\mathsf E_{z,\pi}(\xi_j|\overline{\mathscr F}_{j-1})$$
and $(M_k)$ is a martingale. By Lemma \ref{lem:1} we have
\begin{align*}
\mathsf E_{z,\pi}(\xi_j|\overline{\mathscr F}_{j-1})= \sum_{l=(j-1)n+2}^{jn+1} \mathsf E_{z,\pi}\left(I_{\{x_l=x\}}|\mathscr F_{(j-1)n}\right)\\
=\sum_{r=1}^{n}\mathsf E_{z,\pi}\left(I_{\{x_{(j-1)n+r+1}=x\}}|\mathscr F_{(j-1)n}\right)\ge c^n|A|^n\delta.
\end{align*}
It follows that $A_k\ge c^n|A|^n\delta k$. Furthermore,
$$ \frac{M_k}{k}\to 0,\quad k\to\infty\qquad \mathsf P_{z,\pi}\textrm{-a.s.}$$
by the law of large numbers for martingales: \cite[Chapter 7, \S3, Corollary 2]{shiryaev1996}. Thus,
\begin{equation} \label{3.2B}
\liminf_{k\to\infty}\frac{N_{kn+1}(x)}{k}\ge c^n|A|^n\delta\qquad \mathsf P_{z,\pi}\textrm{-a.s.}
\end{equation}
For any $t\in\mathbb N$ there exists a unique $k\in\mathbb N$ such that $t\in[kn,(k+1)n)$. So, the inequality (\ref{3.2A}) easily follows from (\ref{3.2B}):
\begin{align*}
\liminf_{t\to\infty}\frac{N_t(x)}{t}&\ge\liminf_{k\to\infty}\frac{N_{kn}(x)}{(k+1)n}=\liminf_{k\to\infty}\frac{N_{(k+2)n}(x)}{(k+3)n}\\
&\ge \liminf_{k\to\infty}\frac{N_{(k+1)n+1}(x)}{k(1+3/k)n}\ge\frac{c^n|A|^n\delta}{n} \qquad \mathsf P_{z,\pi}\textrm{-a.s.}
\end{align*}
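To illustrate the bound (\ref{3.2A}), the following Python sketch (our own illustration, not part of the argument) simulates a toy MDP with two states, two actions, and the uniform transition kernel $q(\cdot|x,a)\equiv 1/2$, under the uniformly randomized strategy $\pi_t(a)=1/2$. Then $c=1/2$, $|A|=2$, and the constants used in the proof of Lemma \ref{lem:1} can be taken as $n=1$, $\delta=1/2$ (here $\min_z P(\overline g)(z,y)=1/2$), so the predicted asymptotic lower bound on $N_t(x)/t$ equals $c^n|A|^n\delta/n=1/2$; in this symmetric example the bound is attained in the limit.

```python
import random

def simulate_visit_rate(steps=20_000, seed=0):
    """Simulate a 2-state, 2-action MDP with the uniform kernel
    q(.|x,a) = 1/2 under the uniform strategy pi_t(a) = 1/2, and
    return the empirical visit frequency N_t(0)/t of state 0."""
    rng = random.Random(seed)
    x, visits = 0, 0
    for _t in range(steps):
        _a = rng.randrange(2)     # action is irrelevant for this kernel
        x = rng.randrange(2)      # next state ~ q(.|x,a) = uniform
        visits += (x == 0)
    return visits / steps

rate = simulate_visit_rate()
# Here c = 1/2, |A| = 2, n = 1, delta = 1/2, so the predicted bound
# c^n |A|^n delta / n = 1/2 is attained in the limit.
assert rate > 0.45
```

The toy kernel is chosen so that all constants are explicit; for a genuinely action-dependent kernel the empirical frequency would merely stay above the (smaller) corresponding bound.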
In Theorem \ref{th:2}, which is the main result of this note, the learning rate will be determined by a function $\varphi:\mathbb N\times \mathbb N\to (0,\infty)$. Assume that
\begin{itemize}
\item[(i)] the functions $t\mapsto\varphi(t,j)$, $j\mapsto\varphi(t,j)$ are non-decreasing;
\item[(ii)] the function $\varphi$ satisfies the Robbins-Monro conditions on the diagonal:
\begin{equation} \label{3.3A}
\sum_{t=1}^\infty\frac{1}{\varphi(t,t)}=\infty,\qquad \sum_{t=1}^\infty\frac{1}{\varphi^2(t,t)}<\infty.
\end{equation}
\end{itemize}
\begin{theorem}\label{th:2}
Assume that the MDP is communicating and $\varphi$ satisfies conditions (i), (ii) above. Then the learning rate $\alpha_t=1/\varphi(t,N_t)$ satisfies the Robbins-Monro conditions (\ref{2.1}) for any persistent exploration learning strategy: $\pi_t(a)\ge c>0$.
\end{theorem}
\begin{proof} (a) Let us check the first condition in (\ref{2.1}). We will use the notation (\ref{1.0}). By the conditional Borel-Cantelli lemma, the series
$$\gamma_0+\gamma_1+\sum_{j=1}^k\zeta_j,\qquad \zeta_j=\sum_{l=(j-1)n+2}^{jn+1} \gamma_l$$
diverges $\mathsf P_{z,\pi}$-a.s. if and only if
\begin{equation} \label{3.3}
\sum_{j=1}^\infty \mathsf E_{z,\pi}(\zeta_j|\overline{\mathscr F}_{j-1})=\infty\qquad \mathsf P_{z,\pi}\textrm{-a.s.},
\end{equation}
where $\overline{\mathscr F}_j=\mathscr F_{jn}$. Using the monotonicity properties of $\varphi$ and the inequality (\ref{3.2}), we get
\begin{align*}
\mathsf E_{z,\pi}(\zeta_j|\overline{\mathscr F}_{j-1})&=\sum_{l=(j-1)n+2}^{jn+1} \mathsf E_{z,\pi}\left(\gamma_l|\mathscr F_{(j-1)n}\right)\\
&=\sum_{l=(j-1)n+2}^{jn+1} \mathsf E_{z,\pi}\left(\frac{1}{\varphi(l,N_l)}I_{\{x_l=x\}}\mathsf E_{z,\pi}(I_{\{a_l=a\}}|\mathscr F_{l-1},x_l)|\mathscr F_{(j-1)n}\right)\\
&\ge c \sum_{l=(j-1)n+2}^{jn+1}\mathsf E_{z,\pi}\left(\frac{1}{\varphi(l,l)}I_{\{x_l=x\}}|\mathscr F_{(j-1)n}\right)\\
&\ge \frac{c}{\varphi(jn+1,jn+1)}\sum_{l=(j-1)n+2}^{jn+1}\mathsf E_{z,\pi}\left(I_{\{x_l=x\}}|\mathscr F_{(j-1)n}\right)\\
&\ge \frac{c}{\varphi(jn+1,jn+1)}\sum_{r=1}^{n} \mathsf E_{z,\pi}\left(I_{\{x_{(j-1)n+r+1}=x\}}|\mathscr F_{(j-1)n}\right)\\
&\ge \frac{c^{n+1}|A|^n\delta}{\varphi(jn+1,jn+1)}.
\end{align*}
So, to prove (\ref{3.3}), and hence the first relation in (\ref{2.1}), it is enough to show that
$$\sum_{j=1}^\infty \frac{1}{\varphi(jn+1,jn+1)}=\infty.$$
This is clear, since $\varphi(jn+1,jn+1)\le\varphi(jn+k,jn+k)$ for $k=1,\dots,n$, and
$$\infty=\sum_{t=1}^\infty\frac{1}{\varphi(t,t)}\le \sum_{j=0}^\infty\frac{n}{\varphi(jn+1,jn+1)}.$$
(b) Denote by $\tau_j(x)$ the time of the $j$-th visit ($j\ge 1$) of the sequence $(x_t)$ to the point $x$. Then $N_{\tau_j(x)}(x)=j$
and
\begin{align*}
\sum_{t=0}^\infty \frac{1}{\varphi^2(t,N_t)} I_{\{x_t=x,a_t=a\}}&\le \sum_{t=0}^\infty \frac{1}{\varphi^2(t,N_t)} I_{\{x_t=x\}}\\
&=\sum_{j=1}^\infty \frac{1}{\varphi^2(\tau_j(x),j)}\le \sum_{j=1}^\infty \frac{1}{\varphi^2(j,j)}
\end{align*}
since $\tau_j(x)\ge j$ and the function $t\mapsto\varphi(t,j)$ is non-decreasing. Thus, the second condition (\ref{2.1}) is implied by the second condition (\ref{3.3A}).
\end{proof}
For instance, the learning rates
$$ \frac{1}{\varphi(t,N_t)}=\frac{a_1}{(b_1+t)^\alpha} \frac{a_2}{(b_2+N_t)^\beta},\quad \alpha+\beta\in (1/2,1],\quad a_i,b_i,\alpha,\beta>0,$$
$$ \frac{1}{\varphi(t,N_t)}=\frac{a_1}{(b_1+\ln t)^\alpha} \frac{a_2}{(b_2+N_t)^\beta},\quad \alpha\in (1/2,1],\quad \beta\in [1/2,1],\quad a_i,b_i>0$$
satisfy the conditions of Theorem \ref{th:2}.
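As a numerical heuristic (ours, with the illustrative constants $a_i=b_i=1$ and $\alpha=\beta=1/2$, so $\alpha+\beta=1$; this is not a proof of (\ref{3.3A})), the diagonal partial sums for the first family behave as required: with $1/\varphi(t,t)=(1+t)^{-\alpha}(1+t)^{-\beta}$, the sum $\sum_t 1/\varphi(t,t)$ keeps growing like $\log T$, while $\sum_t 1/\varphi^2(t,t)$ stabilizes.

```python
def partial_sums(T, alpha=0.5, beta=0.5, a1=1.0, b1=1.0, a2=1.0, b2=1.0):
    """Partial sums of 1/phi(t,t) and 1/phi(t,t)^2 on the diagonal
    j = t, for 1/phi(t,j) = a1/(b1+t)^alpha * a2/(b2+j)^beta."""
    s1 = s2 = 0.0
    for t in range(1, T + 1):
        inv_phi = a1 / (b1 + t) ** alpha * a2 / (b2 + t) ** beta
        s1 += inv_phi
        s2 += inv_phi ** 2
    return s1, s2

s1_small, s2_small = partial_sums(10_000)
s1_big, s2_big = partial_sums(100_000)

# With alpha + beta = 1 the first sum grows like log T (divergence), ...
assert s1_big - s1_small > 2.0      # roughly log(10) ~ 2.3
# ... while the sum of squares has numerically converged already.
assert s2_big - s2_small < 1e-3
```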
For the learning rate depending only on the global clock:
$$\gamma_t=\frac{1}{\varphi(t)}I_{\{x_t=x,a_t=a\}} $$
Theorem \ref{th:2} partially confirms the aforementioned conjecture of Bradtke:
$$ \sum_{t=1}^\infty \frac{1}{\varphi(t)}=\infty\quad\Longrightarrow\quad \sum_{t=1}^\infty \frac{1}{\varphi(t)}I_{\{x_t=x,a_t=a\}}=\infty $$
for finite state-action communicating MDPs, persistent exploration learning strategies, and non-decreasing functions $\varphi$.
It would be interesting to investigate the case of decaying exploration learning strategies. It is clear that the Robbins-Monro conditions (\ref{2.1}) can be ensured only by joint conditions on the learning rate and the randomized learning strategy $(\pi_t)$. A simple illustration was given by (\ref{2.4}).
\subsubsection*{Acknowledgments.} The research is supported by the Russian Science Foundation, project 17-19-01038.
\section{Introduction}
This paper concerns a version of the fractional $\mathrm{\bf p}$-Laplace
operator, which has been introduced in \cite{BCF2}. More precisely, for
$\mathrm{\bf p}\geq 2$, $s\in (\frac{1}{2},1)$, and for a given bounded function
$u:\mathbb{R}^N\to\mathbb{R}$ that is of regularity $C^{1,1}(x)$ with $\nabla u(x)\neq 0$, one defines:
\begin{equation}\label{dps}
\Delta_\mathrm{\bf p}^s u(x) \doteq C_{N,\mathrm{\bf p},s} \int_{T_\mathrm{\bf p}^{0,\infty}(\frac{\nabla u(x)}{|\nabla
u(x)|})}\frac{u(x+z)+u(x-z) - 2u(x)}{|z|^{N+2s}}\;\mbox{d}z.
\end{equation}
Above, $C_{N,\mathrm{\bf p},s}$ is a specific constant depending on $N,\mathrm{\bf p},s$, whereas the integration occurs on the infinite cone
$T_\mathrm{\bf p}^{0,\infty}(\frac{\nabla u(x)}{|\nabla u(x)|})\subset\mathbb{R}^N$ whose
centerline is aligned with the vector $\frac{\nabla u(x)}{|\nabla u(x)|}$ and
whose aperture angle $\alpha$ depends on $N,\mathrm{\bf p}$. In particular, for $\mathrm{\bf p}=2$
we have $\alpha=\frac{\pi}{2}$ so that the said cone becomes the
half-space and (\ref{dps}) is consistent with the familiar formula: $-(-\Delta)^s u(x) = C_{N,s}\int_{\mathbb{R}^N}
\frac{u(z)- u(x)}{|x-z|^{N+2s}}\;\mbox{d}z$.
On the other hand, when $\mathrm{\bf p}\to\infty$ then $\alpha\to 0$ and the
cone reduces to a line, consistently with the parallel
definition for fractional infinity Laplacian $\Delta_\infty^su(x)$ in \cite{BCF}.
As pointed out in \cite{BCF2}, definition (\ref{dps}) arises naturally
when extending the game-theoretical interpretation to the non-local, non-divergence
version of the classical $\mathrm{\bf p}$-Laplace operator $\Delta_\mathrm{\bf p}$. The
interpretation for $\Delta_\mathrm{\bf p}$
has been originally put forward in \cite{PS} and it is based on the Tug-of-War game with random
noise, which in its turn can be seen as the interpolation between the pure Tug-of-War developed for the
$\infty$-Laplacian $\Delta_\infty$ in \cite{PSSW}, and the
random walk description of the linear harmonic operator $\Delta$, which is classical.
In order to emphasise the importance of the choice of the integration cone $T_\mathrm{\bf p}^{0,\infty}$ and
to distinguish the formula (\ref{dps}) from the divergence form of the
fractional $\mathrm{\bf p}$-Laplacian arising through the Euler-Lagrange equations
of an appropriate non-local energy \cite{CLM}, we call the operator
$\Delta_\mathrm{\bf p}^s$ above the ``geometric'' $\mathrm{\bf p}$-$s$-Laplacian.
\smallskip
The purpose of this paper is to rigorously define the non-local
version of the noisy Tug-of-War game and prove that its
values converge to viscosity solutions of the
Dirichlet problem for $\Delta_\mathrm{\bf p}^s$, posed on a sufficiently regular domain $\mathcal{D}\subset\mathbb{R}^N$:
\begin{equation}\label{ndci}
\Delta_\mathrm{\bf p}^su = 0 ~\mbox{ in }\; \mathcal{D},\qquad u=F ~\mbox{ in }\; \mathbb{R}^N\setminus\mathcal{D}.
\end{equation}
We remark that condition $\mathrm{\bf p}\geq 2$ which we assume throughout, can be relaxed to
cover the full range $\mathrm{\bf p}\in (1,\infty)$, by replacing the cone $T_\mathrm{\bf p}$
with the complement of its doubled version for $\mathrm{\bf p}\in (1,2)$. This
construction has been proposed in \cite[Remark 4.5]{BCF2}.
We now describe our main results. The said game will be modeled on
the dynamic programming principle that involves an appropriate
average, in whose asymptotic expansion the operator $\Delta_\mathrm{\bf p}^s$
arises as the highest-order term in the limit of the vanishing expansion
parameter $\epsilon$. Hence, our first set of results develops such asymptotic
expansions, reminiscent of the well known local and linear formula:
\begin{equation}\label{aed2}
\fint_{B_\epsilon(x)}u(y)\;\mbox{d}y = u(x) +
\frac{\epsilon^2}{2(N+2)}\Delta u(x) + o(\epsilon^2) \qquad \mbox{ as }\; \epsilon\to 0+.
\end{equation}
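Expansion (\ref{aed2}) is easy to test numerically. As an illustration of ours, take $u(x_1,x_2)=x_1^2+3x_2^2$ in $\mathbb{R}^2$, so that $\Delta u=8$ and, by an exact computation, $\fint_{B_\epsilon(0)}u\;\mbox{d}y=\epsilon^2=\frac{\epsilon^2}{2(N+2)}\Delta u(0)$ with no remainder; a Monte Carlo average over the disk reproduces this value:

```python
import math, random

def ball_average(u, eps, n_samples=200_000, seed=1):
    """Monte Carlo average of u over the disk B_eps(0) in R^2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        r = eps * math.sqrt(rng.random())   # radius ~ area-uniform
        theta = 2 * math.pi * rng.random()
        total += u(r * math.cos(theta), r * math.sin(theta))
    return total / n_samples

u = lambda x1, x2: x1 ** 2 + 3 * x2 ** 2    # Delta u = 8, u(0) = 0
eps = 0.5
avg = ball_average(u, eps)
prediction = eps ** 2 / (2 * (2 + 2)) * 8.0   # eps^2/(2(N+2)) * Delta u
assert abs(avg - prediction) < 5e-3
```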
\smallskip
\subsection{Asymptotic expansions.}
More precisely, we define the following non-local and non-linear averaging operator:
$$\mathcal{A}_\epsilon u(x) \doteq \frac{1}{2} \Big(\sup_{|y|=1}\fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)}
\frac{u(x+z)}{|z|^{N+2s}}\;\mbox{d}z + \inf_{|y|=1}\fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)}
\frac{u(x+z)}{|z|^{N+2s}} \;\mbox{d}z \Big),$$
where the integration takes place on the truncated infinite cones
$T_\mathrm{\bf p}^{\epsilon, \infty}(y)=T_\mathrm{\bf p}^{0, \infty}(y)\setminus
B_\epsilon(0)$, each oriented along its indicated unit direction vector $y$ and
having the aperture angle $\alpha$ as in (\ref{dps}). The integral averages $\fint$ are taken with respect to the
singular measure $|z|^{-N-2s}\,\mbox{d}z$.
Note that $\mathcal{A}_\epsilon u$ is well defined for any bounded, Borel
function $u$; in particular, it does not require the existence or the knowledge
of $\nabla u(x)$, which was essential in (\ref{dps}).
The form of $\mathcal{A}_\epsilon$ is justified by the following expansion,
which we prove to be valid for functions $u$ that are $C^2$ in the vicinity of
a given $x\in\mathbb{R}^N$ with $\nabla u(x)\neq 0$, and uniformly continuous away from $x$:
\begin{equation}\label{raz}
\begin{split}
\mathcal{A}_\epsilon u(x) = u(x) + \frac{s}{(2-2s)(N+\mathrm{\bf p}-2)} \epsilon^{2s}\cdot \Delta_\mathrm{\bf p}^s u (x)
+ o(\epsilon^{2s}) \qquad \mbox{ as }\; \epsilon\to 0+.
\end{split}
\end{equation}
We also propose another nonlinear average of a combined local - non-local nature:
\begin{equation*}
\begin{split}
\bar\mathcal{A}_\epsilon u(x)\doteq &\; \frac{(1-s)(N+\mathrm{\bf p}-2)}{N+\mathrm{\bf p}-2+2s}\cdot
\mathcal{A}_\epsilon u(x) \\ & + \frac{s(N+2)}{N+\mathrm{\bf p}-2+2s}\cdot \fint_{B_\epsilon (x)} u(y)\;\mathrm{d}y
+ \frac{s(\mathrm{\bf p}-2)}{N+\mathrm{\bf p}-2+2s}\cdot\frac{1}{2} \Big(\sup_{B_\epsilon (x)}u + \inf_{B_\epsilon (x)}u \Big).
\end{split}
\end{equation*}
Note that the three positive multiplication factors above add up to
$1$. We prove that the result as in (\ref{raz}) similarly holds for $\bar{\mathcal{A}}_\epsilon$:
\begin{equation}\label{dwa}
\bar\mathcal{A}_\epsilon u(x) = u(x) + \frac{s}{2(N+\mathrm{\bf p}-2+2s)}\epsilon^{2s}\cdot
\Delta_\mathrm{\bf p}^s u(x) + o(\epsilon^{2s})\qquad \mbox{ as }\; \epsilon\to 0+.
\end{equation}
Expansion (\ref{dwa}) is superior to (\ref{raz}), because
the error quantity $o(\epsilon^{2s})$ in (\ref{raz}), which we make precise in the paper,
blows up to $\infty$ as $s\to 1-$, whereas $o(\epsilon^{2s})$ in (\ref{dwa}) is
uniform in the whole considered range $s\in
(\frac{1}{2},1)$. When $s\to 1$, the expansion (\ref{dwa}) becomes:
\begin{equation}\label{aed3}
\begin{split}
\frac{N+2}{N+\mathrm{\bf p}}\cdot \fint_{B_\epsilon (x)} u(y)\;\mathrm{d}y & + \frac{\mathrm{\bf p}-2}{N+\mathrm{\bf p}}\cdot\frac{1}{2}
\big(\sup_{B_\epsilon (x)}u + \inf_{B_\epsilon (x)}u \big) \\ & = u(x) + \frac{\epsilon^2}{2(N+\mathrm{\bf p})}
|\nabla u(x)|^{2-\mathrm{\bf p}}\Delta_\mathrm{\bf p} u(x) + o(\epsilon^{2}),
\end{split}
\end{equation}
which in turn yields (\ref{aed2}) for $\mathrm{\bf p}=2$. We recall in passing that
(\ref{aed3}) is a convex combination of (\ref{aed2}) and the asymptotic
expansion for the infinity Laplacian in:
\begin{equation*}
\frac{1}{2} \big(\sup_{B_\epsilon (x)}u + \inf_{B_\epsilon (x)}u \big)
= u(x) + \frac{\epsilon^2}{2}\Delta_\infty u(x) + o(\epsilon^2) \qquad \mbox{ as }\; \epsilon\to 0+,
\end{equation*}
with the weights corresponding to the following identity for the classical
$\mathrm{\bf p}$-Laplacian in non-divergence form: $|\nabla u|^{2-\mathrm{\bf p}}\Delta_\mathrm{\bf p} u = \Delta u + (\mathrm{\bf p}-2)\Delta_\infty u$.
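As a quick numerical check of this identity (our sketch, with an arbitrary test function), one can compare the divergence form $\Delta_\mathrm{\bf p} u=\mathrm{div}\big(|\nabla u|^{\mathrm{\bf p}-2}\nabla u\big)$, evaluated by central finite differences, with $|\nabla u|^{\mathrm{\bf p}-2}\big(\Delta u+(\mathrm{\bf p}-2)\Delta_\infty u\big)$ computed from the exact derivatives:

```python
import math

p = 3.0
# test function u(x, y) = x^2 + 3y^3, with exact gradient and Hessian
def grad(x, y):
    return (2 * x, 9 * y ** 2)

def flux(x, y):
    gx, gy = grad(x, y)
    norm = math.hypot(gx, gy) ** (p - 2)   # |grad u|^(p-2)
    return norm * gx, norm * gy

def div_flux(x, y, h=1e-5):
    """Central finite-difference divergence of |grad u|^(p-2) grad u."""
    return ((flux(x + h, y)[0] - flux(x - h, y)[0]) / (2 * h)
            + (flux(x, y + h)[1] - flux(x, y - h)[1]) / (2 * h))

x0, y0 = 1.0, 1.0
gx, gy = grad(x0, y0)
norm2 = gx ** 2 + gy ** 2
lap = 2 + 18 * y0                                   # Delta u (u_xx + u_yy)
lap_inf = (gx * gx * 2 + gy * gy * 18 * y0) / norm2  # <D^2 u : n (x) n>
rhs = norm2 ** ((p - 2) / 2) * (lap + (p - 2) * lap_inf)
assert abs(div_flux(x0, y0) - rhs) / abs(rhs) < 1e-4
```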
Asymptotic expansions for gradient-dependent operators have been
recently discussed in \cite{BS1, BS}. However, the averages in there depended on
$\nabla u(x)$, which is a drawback in the context of our further
applications, based on solutions to the truncated expansions
\ref{dppe}. We seek these solutions among the natural class of Borel
functions. Indeed, they are at most continuous and gain higher
regularity only generically and in the limit as $\epsilon\to 0$,
so no pointwise notion of a gradient is available in the definition of an average.
The idea of the local correction in the average $\bar {\mathcal{A}}_\epsilon$, and
of the expansion (\ref{raz}) with no reference to the gradient, first
appeared in \cite{EDL} in the context of the fractional
$\infty$-Laplacian $\Delta^s_\infty$. We also observe that the case
$\mathrm{\bf p}=\infty$, where $\alpha=0$ in $T_\mathrm{\bf p}^{0,\infty}$, is independent and cannot be deduced from the present work.
Expansion (\ref{aed3}) when $\Delta_\mathrm{\bf p} u=0$, and the related
characterisation of $\mathrm{\bf p}$-harmonic functions in the viscosity sense,
have been studied in \cite{MPR}. This expansion informs a game-theoretical interpretation
of the $\mathrm{\bf p}$-Laplacian (alternative to the one originally
carried out in \cite{PS}) only for $\mathrm{\bf p}\geq 2$, when the weight coefficients
are nonnegative. Another expansion, yielding a family of Tug-of-War games in the
whole range $\mathrm{\bf p}\in (1,\infty)$, was proposed in \cite{L1}.
\smallskip
\subsection{Dynamic programming and Tug-of-War.}
The second set of results in this paper concerns the operator $\mathcal{A}_\epsilon$
and the truncated version of the expansion (\ref{raz}),
aiming at an approximation scheme for solutions to (\ref{ndci}). More
precisely, given an open bounded domain $\mathcal{D}\subset \mathbb{R}^N$ and a bounded
Borel data function $F:\mathbb{R}^N\setminus \mathcal{D}\to \mathbb{R}$, we consider the
following family of non-local averaging problems:
\begin{equation*}\tag*{${\mathrm{(DPP)}}_\epsilon$}
u_\epsilon(x) = \left\{\begin{array}{ll} \mathcal{A}_\epsilon u_\epsilon(x) &
~~\mbox{for } x\in\mathcal{D}\\ F(x) & ~~\mbox{for } x\in\mathbb{R}^N\setminus \mathcal{D}.
\end{array}\right.
\end{equation*}
We prove that for every $\epsilon>0$ there exists exactly one
$u_\epsilon$ satisfying the above, which is bounded Borel on $\mathbb{R}^N$ (and continuous in $\mathcal{D}$).
We then show, for $\mathcal{D}$ satisfying the exterior cone condition
and for uniformly continuous $F$, that any sequence
$\{u_\epsilon\}_{\epsilon\to 0}$ has a further subsequence converging
uniformly in $\mathbb{R}^N$ to a continuous limit $u$ that is a viscosity solution to (\ref{ndci}).
To this end, each $u_\epsilon(x)$ is shown to be the value of
the following zero-sum two-players game, which is a non-local version
of the Tug-of-War with noise introduced in \cite{PS}.
In this game, each Player chooses a unit direction vector
according to their own strategy, based on the knowledge of all prior moves and
random outcomes. With equal probabilities, the direction chosen by Player 1 or
by Player 2 is selected; the resulting direction is called $y$. The
current game position $x_n$ is then updated to a next position
$x_{n+1}$ within the shifted and truncated cone $x_n+T_\mathrm{\bf p}^{\epsilon,\infty}(y)$,
randomly according to the probability-normalisation of the measure
$|z|^{-N-2s}\mathrm{d}z$ on $T_\mathrm{\bf p}^{\epsilon,\infty}(y)$. Such a
process, started at a point $x_0\in\mathbb{R}^N$, is stopped the first time
that $x_n\not\in\mathcal{D}$, at which point Player 1 collects from their opponent the
payoff given by the value $F(x_n)$. We show that the expected value of the
payoff, under condition that both Players play optimally, has the
min-max property, yielding the solution $u_\epsilon$ to the
dynamic programming principle \ref{dppe}.
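The random dynamics just described can be sketched in $N=2$ as follows (an illustration of ours of the jump process only: the direction choices below are arbitrary stand-ins, not optimal strategies, and the aperture value is a placeholder rather than $\alpha_\mathrm{\bf p}$). In polar coordinates, the normalised measure on $T_\mathrm{\bf p}^{\epsilon,\infty}(y)$ has uniform angular part on the interval of half-width $\alpha_\mathrm{\bf p}$ around the direction of $y$, and radial density proportional to $r^{-1-2s}$ on $(\epsilon,\infty)$, sampled by inversion as $r=\epsilon\, U^{-1/(2s)}$ for uniform $U$:

```python
import math, random

def play_once(s=0.75, alpha=1.0, eps=0.05, seed=None, max_steps=10_000):
    """One run of the (non-optimal) jump process in R^2: from x_n, a
    direction is a coin flip between two fixed 'player' directions,
    then x_{n+1} is sampled from the normalised measure |z|^{-2-2s} dz
    on the truncated cone of half-aperture alpha; stop upon leaving
    the unit disk D and return (exit point, number of steps)."""
    rng = random.Random(seed)
    x = (0.0, 0.0)
    for step in range(1, max_steps + 1):
        # arbitrary stand-ins for the players' direction choices
        theta_y = 0.0 if rng.random() < 0.5 else math.pi
        theta = theta_y + alpha * (2 * rng.random() - 1)   # uniform angle
        r = eps * rng.random() ** (-1 / (2 * s))           # inverse CDF
        x = (x[0] + r * math.cos(theta), x[1] + r * math.sin(theta))
        if x[0] ** 2 + x[1] ** 2 >= 1.0:                   # left D
            return x, step
    raise RuntimeError("did not exit")

exit_point, steps = play_once(seed=7)
assert exit_point[0] ** 2 + exit_point[1] ** 2 >= 1.0
```

Since the radial law is heavy-tailed, the process leaves any bounded domain after a moderate number of jumps; the sketch illustrates the dynamics only, not the optimal value $u_\epsilon$.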
Convergence as $\epsilon\to 0$ is obtained by showing the approximate
equicontinuity of the family $\{u_\epsilon\}_{\epsilon\to 0}$, for
which the sufficient condition is expressed via ``game-regularity'' of
the boundary points. This general condition, implied in particular
by the exterior cone condition on $\partial\mathcal{D}$,
is similar in spirit to Doob's celebrated boundary regularity criterion for Brownian motion.
The order of arguments follows then a general program in the context
of Tug-of-War games (see a recent textbook \cite{L}), which has been put forward in
\cite{PSSW} and which has so far yielded results for
$\mathrm{\bf p}$-Laplacian, obstacle problems, subriemannian geometries and
time-dependent problems. The fact that this program can be carried out
in the present non-local setting is not obvious, and it constitutes another
main result of this work.
\smallskip
\subsection{Outline of the paper.}
We set the notation and introduce the main integral operators in
section \ref{sec2}. The non-local asymptotic expansions (\ref{raz}) and
(\ref{dwa}), together with precise bounds on their error terms,
are proved in sections \ref{sec3} and \ref{sec4}, respectively. The
dynamic programming principles \ref{dppe} are discussed in
section \ref{sec_dppe}. The fact that the uniform
limits of their solutions $\{u_\epsilon\}_{\epsilon\to 0}$ are automatically
viscosity solutions to (\ref{ndci}), is shown in section
\ref{sec_visc}. The non-local Tug-of-War game is defined and proved to
yield solutions $u_\epsilon$ in section
\ref{sec_game}. Proofs of the asymptotic equicontinuity and game-regularity are carried out in sections
\ref{sec_ft} and \ref{sec_convergence}, where we rely on further
analysis of a barrier function from \cite{BCF2}. Finally, in Appendix
\ref{appendix} we prove uniqueness of viscosity solutions to
(\ref{ndci}) under a more restrictive assumption which necessitates
extending $\Delta_\mathrm{\bf p}^s$ to include the case $\nabla u(x)=0$.
It is not clear if solutions to (\ref{ndci}) as posed originally, are unique.
\section{The fractional quotients and the fractional $\mathrm{\bf p}$-Laplacian}\label{sec2}
We consider the following measure on the Borel subsets of $\mathbb{R}^N$:
$$\mathrm{d}\mu_s^N(z) \doteq \frac{C(N,s)}{|z|^{N+2s}}\;\mbox{d}z \quad \mbox{ where }\;
C(N,s)=\frac{4^s
s\Gamma\big(\frac{N}{2}+s\big)}{\pi^{N/2}\Gamma\big(1-s\big)} =
\Big(\int_{\mathbb{R}^N}\frac{1-\cos \langle z, e_1\rangle}{|z|^{N+2s}}\;\mbox{d}z\Big)^{-1},$$
where the exponent $s$ in this paper is assumed to belong to the range:
$$s\in \big(\frac{1}{2},1\big).$$
One can show \cite{hitch} that $C(N,s) = s(1-s) c_{N,s}$ where $c_{N,s}$ is bounded and positive uniformly in $s$.
The role of the normalizing constant $C(N,s)$ is to ensure that the
operator $-(-\Delta)^s$, given by: $-(-\Delta)^su(x) \doteq
-\int_{\mathbb{R}^N} u(x+z) + u(x-z) -2u(x) \;\mathrm{d}\mu_s^N(z)$,
is a pseudo-differential operator with symbol $|\xi|^{2s}$.
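As a sanity check of the normalising constant (our numerical illustration, in dimension $N=1$ with $s=3/4$), the defining identity $C(1,s)^{-1}=\int_{\mathbb{R}}\frac{1-\cos z}{|z|^{1+2s}}\,\mathrm{d}z$ can be verified by quadrature, handling the integrable singularity at the origin via the Taylor approximation $1-\cos z\approx z^2/2$:

```python
import math

def C(N, s):
    """The normalising constant C(N,s) from the definition of mu_s^N."""
    return (4 ** s * s * math.gamma(N / 2 + s)
            / (math.pi ** (N / 2) * math.gamma(1 - s)))

def cosine_integral(s, delta=1e-2, R=400.0, h=2e-3):
    """Numerical value of int_0^inf (1-cos z)/z^(1+2s) dz, split at
    delta near the singularity and cut off at R in the tail."""
    # near 0: 1 - cos z ~ z^2/2, giving delta^(2-2s)/(2(2-2s)) exactly
    head = delta ** (2 - 2 * s) / (2 * (2 - 2 * s))
    # midpoint rule on [delta, R]
    n = int((R - delta) / h)
    body = sum((1 - math.cos(delta + (k + 0.5) * h))
               / (delta + (k + 0.5) * h) ** (1 + 2 * s)
               for k in range(n)) * h
    return head + body

s = 0.75
# the full integral over R is twice the one-sided one, by symmetry
assert abs(2 * cosine_integral(s) - 1 / C(1, s)) / (1 / C(1, s)) < 1e-2
```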
\begin{definition}\label{def_cone}
Fix $\mathrm{\bf p}\in [2,\infty)$. We define the infinite cone $T_\mathrm{\bf p}$ and the spherical cup $A_\mathrm{\bf p}$:
$$T_\mathrm{\bf p}\doteq \big\{z\in\mathbb{R}^N;~ \angle (e_1,z)<\alpha_\mathrm{\bf p} \big\}, \qquad A_\mathrm{\bf p} \doteq T_\mathrm{\bf p}\cap \{|z|=1\},$$
where $\alpha_\mathrm{\bf p}\in (0,\frac{\pi}{2}]$ is the angle such that:
\begin{equation}\label{Ap}
\mathrm{\bf p}-1 = \frac{\int_{A_\mathrm{\bf p}} \langle z,e_1\rangle^2
\;\mbox{d}\sigma(z)}{\int_{A_\mathrm{\bf p}} \langle z,e_2\rangle^2 \;\mbox{d}\sigma(z)}.
\end{equation}
For every $0\leq a \leq b\leq\infty$ and $|y|=1$, we also define the
truncated cones
$$T_\mathrm{\bf p}^{a,b}(y)\doteq \big\{z\in\mathbb{R}^N;~ \angle (y,z)<\alpha_\mathrm{\bf p}~~\mbox{
and }~~ a<|z|<b\big\}, \qquad T_\mathrm{\bf p}^{a,b}\doteq T_\mathrm{\bf p}^{a,b}(e_1).$$
Further, for two unit vectors $y\neq\tilde y$ we define the
rotation $R_{\tilde y, y}\in SO(N)$ as the unique orientation-preserving
rotation in the plane spanned by $y$ and $\tilde y$
such that $R_{\tilde y, y}y=\tilde y$. When $y=\tilde y$, we set: $R_{y,y}=Id_N$.
Note that: $T_\mathrm{\bf p}^{a,b}(y) = R_{y,e_1} T_\mathrm{\bf p}^{a,b}$ and $T_\mathrm{\bf p} = T_\mathrm{\bf p}^{0,\infty} =T_\mathrm{\bf p}^{0,\infty}(e_1)$.
\end{definition}
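In dimension $N=2$, condition (\ref{Ap}) becomes explicit: $Q(\alpha)=\frac{\int_{-\alpha}^{\alpha}\cos^2\theta\,\mathrm{d}\theta}{\int_{-\alpha}^{\alpha}\sin^2\theta\,\mathrm{d}\theta}=\frac{2\alpha+\sin 2\alpha}{2\alpha-\sin 2\alpha}$, and $\alpha_\mathrm{\bf p}$ can be computed by bisection, as in the following sketch of ours:

```python
import math

def Q(alpha):
    """The quotient (Ap) in dimension N = 2."""
    return (2 * alpha + math.sin(2 * alpha)) / (2 * alpha - math.sin(2 * alpha))

def alpha_p(p, tol=1e-12):
    """Bisection for the half-aperture alpha_p in (0, pi/2] solving
    Q(alpha) = p - 1; Q decreases from +inf to 1 on (0, pi/2]."""
    lo, hi = 1e-9, math.pi / 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Q(mid) > p - 1:
            lo = mid          # Q too large: aperture must widen
        else:
            hi = mid
    return 0.5 * (lo + hi)

assert abs(alpha_p(2) - math.pi / 2) < 1e-6   # p = 2: half-space
assert abs(Q(alpha_p(3)) - 2) < 1e-6          # defining equation (Ap)
assert alpha_p(10) < alpha_p(3)               # aperture shrinks in p
```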
The well-posedness of this definition and its rationale will be explained
in Lemma \ref{lem_Tp}.
\smallskip
\begin{definition}\label{def_operators}
Fix an exponent $\mathrm{\bf p}\in [2,\infty)$. Given a bounded, Borel function $u:\mathbb{R}^N\to\mathbb{R}$,
we define the following family of integral operators, parametrised by $\epsilon>0$:
\begin{equation*
\mathcal{L}_{s,\p}^\epsilon[u] (x) \doteq \sup_{|y|=1}\int_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u(x+z) - u(x) \;\mathrm{d}\mu_s^N(z) +
\inf_{| y|=1} \int_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u(x+z) - u(x) \;\mathrm{d}\mu_s^N(z).
\end{equation*}
When additionally $u\in C^{1,1}(x)$ and the corresponding gradient-like vector $p_x\neq 0$, we have:
$$\mathcal{L}_{s,\p}[u] (x) \doteq\int_{T_\mathrm{\bf p}^{0,\infty}(\frac{p_x}{|p_x|})} L_{u}(x,z,z)\;\mathrm{d}\mu_s^N(z),$$
where for each $x, z, \tilde z\in\mathbb{R}^N$ we set:
$$L_{u}(x, z, \tilde z) \doteq u(x+z) + u(x-\tilde z) - 2u(x).$$
\end{definition}
\smallskip
\begin{remark}
Recall that $u\in C^{1,1}(x)$ provided that there are $p_x\in \mathbb{R}^N$ and $C_x,r_x>0$ with:
\begin{equation}\label{C11}
\big|u(x+z) - u(x) - \langle p_x, z\rangle\big|\leq C_x |z|^2
\qquad\mbox{for all }\; |z|<r_x.
\end{equation}
One immediate consequence of (\ref{C11}) is that:
\begin{equation}\label{C2}
\big|L_{u} (x,z,\tilde z) - \langle p_x, z-\tilde z\rangle \big|\leq C_x
\big(|z|^2 + |\tilde z|^2\big) \qquad\mbox{for all }\; |z|, |\tilde z|<r_x.
\end{equation}
Also, when $u\in C^2(\bar B_{r_x})$, where
$B_{r_x}$ denotes the open ball centered at $x$ and with
radius $r_x$, then condition (\ref{C11}) holds automatically with
$p_x = \nabla u(x)$ and $C_x = \frac{1}{2}\|\nabla^2u\|_{L^\infty(B_{r_x})}$.
\end{remark}
\medskip
Since $\mu_s^N(T_\mathrm{\bf p}^{\epsilon, \infty})<\infty$, each integral
$\int_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)}u(x+z)\;\mathrm{d}\mu_s^N(z)$ and consequently
$\mathcal{L}_{s,\p}^\epsilon[u] (x)$ are well defined and finite for any bounded, Borel $u$. On the other hand,
$\mu_s^N(T_\mathrm{\bf p})=\infty$, so the corresponding formulation $\mathcal{L}_{s,\mathrm{\bf p}}^0$
is in general not well defined. We now observe:
\begin{proposition}\label{lem1}
Let $u:\mathbb{R}^N\to \mathbb{R}$ be a bounded, Borel function. Then:
$$|\mathcal{L}_{s,\p}^\epsilon[u] (x)|\leq 2 C(N,s) |A_\mathrm{\bf p}|\cdot \frac{\|u\|_{L^\infty}}{s \epsilon^{2s}} \quad\mbox{for all } x\in\mathbb{R}^N.$$
If moreover $u\in C^{1,1}(x)$, then $\mathcal{L}_{s,\p}^\epsilon[u] (x)$ are uniformly bounded in $\epsilon$:
\begin{equation*
|\mathcal{L}_{s,\p}^\epsilon[u] (x)| \leq C(N,s) |A_\mathrm{\bf p}|\cdot \Big(\frac{C_x r_x^{2-2s}}{1-s}
+ \frac{2\|u\|_{L^\infty}}{s r_x^{2s}}\Big).
\end{equation*}
When $p_x\neq 0$ then the same bound above holds for $\mathcal{L}_{s,\p}[u](x)$.
\end{proposition}
\begin{proof}
The first claim is self-evident, because: $\mu_s^N(T_\mathrm{\bf p}^{\epsilon,
\infty})= C(N,s) \int_\epsilon^\infty \frac{t^{N-1}
|A_\mathrm{\bf p}|}{t^{N+2s}}\;\mbox{d}t = C(N,s)\frac{|A_\mathrm{\bf p}|}{2s\epsilon^{2s}}$.
For the second claim, by changing variables we deduce that:
$$\mathcal{L}_{s,\p}^\epsilon[u] (x) = \sup_{|y|=1} \inf_{|\tilde y|=1} \int_{T_\mathrm{\bf p}^{\epsilon,
\infty}(y)} u(x+z) + u(x-R_{\tilde y, y}z) - 2u(x) \;\mathrm{d}\mu_s^N(z).$$
Then, by (\ref{C2}) we get, for any $|y|=|\tilde y|=1$:
\begin{equation*}
\begin{split}
\Big|\int_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u(x+z) + & \;u(x-R_{\tilde y, y}z) - 2u(x) \;\mathrm{d}\mu_s^N(z) -
\int_{T_\mathrm{\bf p}^{\epsilon, r_x}(y)} \langle p_x, z-R_{\tilde y, y}z\rangle
\;\mathrm{d}\mu_s^N(z)\Big| \\ & \leq \int_{T_\mathrm{\bf p}^{\epsilon, r_x}(y)}
2C_x |z|^2\;\mathrm{d}\mu_s^N(z) + \int_{T_\mathrm{\bf p}^{r_x, \infty}(y)} 4\|u\|_{L^\infty}\;\mathrm{d}\mu_s^N(z)
\\ & \leq C(N,s) |A_\mathrm{\bf p}|\cdot \Big(\frac{C_x r_x^{2-2s}}{1-s} + \frac{2\|u\|_{L^\infty}}{s r_x^{2s}}\Big).
\end{split}
\end{equation*}
On the other hand:
\begin{equation*}
\begin{split}
\sup_{|y|=1}\inf_{|\tilde y| =1} & \int_{T_\mathrm{\bf p}^{\epsilon, r_x}(y)} \langle p_x, z-R_{\tilde y,
y}z\rangle \;\mathrm{d}\mu_s^N(z) \\ & = \sup_{|y|=1}\inf_{|\tilde y| =1} \Big\langle p_x,
\int_{T_\mathrm{\bf p}^{\epsilon, r_x}(y)}z \;\mathrm{d}\mu_s^N(z) - \int_{T_\mathrm{\bf p}^{\epsilon, r_x}(\tilde y)}z \;\mathrm{d}\mu_s^N(z)\Big\rangle = 0.
\end{split}
\end{equation*}
This results in:
\begin{equation*}
\begin{split}
& |\mathcal{L}_{s,\p}^\epsilon[u] (x) | = \Big|\mathcal{L}_{s,\p}^\epsilon[u] (x) - \sup_{|y|=1}\inf_{|\tilde y| =1} \int_{T_\mathrm{\bf p}^{\epsilon, r_x}(y)} \langle p_x, z-R_{\tilde y,
y}z\rangle \;\mathrm{d}\mu_s^N(z) \Big | \\ & \leq
\sup_{|y|=|\tilde y|=1} \Big|\int_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u(x+z) + u(x-R_{\tilde y, y}z) - 2u(x) \;\mathrm{d}\mu_s^N(z) -
\int_{T_\mathrm{\bf p}^{\epsilon, r_x}(y)} \langle p_x, z-R_{\tilde y, y}z\rangle \;\mathrm{d}\mu_s^N(z)\Big|,
\end{split}
\end{equation*}
ending the proof of the bound for $|\mathcal{L}_{s,\p}^\epsilon[u] (x) |$. The statement for $|\mathcal{L}_{s,\p}[u] (x)|$ follows similarly.
\end{proof}
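The mass computation opening this proof is easy to verify numerically (our illustration): in polar coordinates, $\mu_s^N(T_\mathrm{\bf p}^{\epsilon,\infty})=C(N,s)\,|A_\mathrm{\bf p}|\int_\epsilon^\infty t^{-1-2s}\,\mathrm{d}t$, and a midpoint-rule evaluation of the radial integral (with the substitution $t=e^u$) reproduces the closed form $\frac{1}{2s\epsilon^{2s}}$:

```python
import math

def radial_mass(eps, s, R=1e4, n=200_000):
    """Midpoint rule for int_eps^R t^(-1-2s) dt on a log-spaced grid."""
    total = 0.0
    log_lo, log_hi = math.log(eps), math.log(R)
    h = (log_hi - log_lo) / n
    for k in range(n):
        t = math.exp(log_lo + (k + 0.5) * h)
        total += t ** (-1 - 2 * s) * t * h   # extra t from dt = t du
    return total

eps, s = 0.1, 0.75
exact = 1 / (2 * s * eps ** (2 * s))         # closed form from the proof
approx = radial_mass(eps, s)
assert abs(approx - exact) / exact < 1e-3
```

The cutoff $R$ only introduces an error of order $R^{-2s}$, which is negligible here.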
We close this section by noting some useful identities:
\begin{lemma}\label{lem_Tp}
For every $\mathrm{\bf p}\in [2, \infty)$ there exists $\alpha_\mathrm{\bf p}\in (0, \frac{\pi}{2}]$ such that (\ref{Ap}) holds.
Moreover:
\begin{enumerate}[leftmargin=7mm]
\item[(i)] $\displaystyle \int_{A_\mathrm{\bf p}}\langle z,
e_2\rangle^2\;\mathrm{d}\sigma(z)=\frac{|A_\mathrm{\bf p}|}{N+\mathrm{\bf p}-2}, ~$
and $~\displaystyle \int_{A_\mathrm{\bf p}}\langle z,
e_1\rangle^2\;\mathrm{d}\sigma(z)=\frac{\mathrm{\bf p}-1}{N+\mathrm{\bf p}-2} |A_\mathrm{\bf p}|.$
We also have: $~\displaystyle \int_{T_\mathrm{\bf p}^{0,\epsilon}}\langle z,
e_2\rangle^2\;\mathrm{d}\mu_s^N(z) = \frac{C(N,s) |A_\mathrm{\bf p}|}{(N+\mathrm{\bf p}-2) (2-2s)}\epsilon^{2-2s}$.
\item[(ii)] When $\nabla^2u(x)$ and $p_x\doteq \nabla u(x)\neq 0$ are well defined, then:
$$\int_{T_\mathrm{\bf p}^{0,\epsilon}(\frac{p_x}{|p_x|})} \big\langle \nabla^2u(x) :
z\otimes z\big\rangle\;\mathrm{d}\mu_s^N(z) = \frac{C(N,s) |A_\mathrm{\bf p}|}{(N+\mathrm{\bf p}-2)(2-2s)}
\epsilon^{2-2s}\cdot |p_x|^{2-\mathrm{\bf p}} \Delta_\mathrm{\bf p} u(x).$$
\end{enumerate}
\end{lemma}
\begin{proof}
We consider the following function, which is continuous on $(0,\pi)$:
$$\alpha\mapsto Q(\alpha) \doteq \frac{\int_{A(\alpha)} \langle z,e_1\rangle^2
\;\mbox{d}\sigma(z)}{\int_{A(\alpha)}\langle z,e_2\rangle^2
\;\mbox{d}\sigma(z)}, \quad \mbox{ where }\; A(\alpha) = \{|z|=1;~\angle (e_1,z)<\alpha\}.$$
Since $Q(\frac{\pi}{2}) = {\frac{1}{2}\int_{\{|z|=1\} } \langle z,e_1\rangle^2
\;\mbox{d}\sigma(z)}/\big({\frac{1}{2}\int_{\{|z|=1\} }\langle z,e_2\rangle^2 \;\mbox{d}\sigma(z)}\big)=1$,
while $\lim_{\alpha\to 0} Q(\alpha) = \infty$, it follows that for each $\mathrm{\bf p}-1
\in [1,\infty)$ there indeed exists $\alpha_\mathrm{\bf p}\in (0,\frac{\pi}{2}]$ satisfying $Q(\alpha_\mathrm{\bf p}) = \mathrm{\bf p}-1$.
To prove (i), we compute, putting $A_\mathrm{\bf p} = A({\alpha_\mathrm{\bf p}})$:
$$Q(\alpha_\mathrm{\bf p}) = \frac{\int_{A_\mathrm{\bf p}} 1\;\mbox{d}\sigma(z) - (N-1)\int_{A_\mathrm{\bf p}}\langle z,
e_2\rangle^2\;\mbox{d}\sigma(z)}{\int_{A_\mathrm{\bf p}}\langle z,
e_2\rangle^2\;\mbox{d}\sigma(z)} = \frac{1}{\fint_{A_\mathrm{\bf p}}\langle z,
e_2\rangle^2\;\mbox{d}\sigma(z)} - (N-1),$$
which implies that: $\fint_{A_\mathrm{\bf p}}\langle z,
e_2\rangle^2\;\mbox{d}\sigma(z)=\frac{1}{N+\mathrm{\bf p}-2}$, and consequently: $\fint_{A_\mathrm{\bf p}}\langle z,
e_1\rangle^2\;\mbox{d}\sigma(z)= 1- (N-1) \fint_{A_\mathrm{\bf p}}\langle z,
e_2\rangle^2\;\mbox{d}\sigma(z)=\frac{\mathrm{\bf p}-1}{N+\mathrm{\bf p}-2}$.
On the other hand:
\begin{equation*}
\begin{split}
\int_{T_\mathrm{\bf p}^{0,\epsilon}} \langle z, e_2\rangle^2\;\mathrm{d}\mu_s^N(z) & =
C(N,s)\int_0^\epsilon \frac{t^2 t^{N-1}}{t^{N+2s}} \int_{A_\mathrm{\bf p}}\langle z,
e_2\rangle^2\;\mbox{d}\sigma(z)\; \mbox{d}t \\ & = C(N,s) \int_{A_\mathrm{\bf p}}\langle z,
e_2\rangle^2\;\mbox{d}\sigma(z)\cdot \frac{\epsilon^{2-2s}}{2-2s}.
\end{split}
\end{equation*}
To prove (ii), observe that:
\begin{equation*}
\begin{split}
& \int_{T_\mathrm{\bf p}^{0,\epsilon}} z\otimes z\;\mathrm{d}\mu_s^N(z) = diag
\Big(\int_{T_\mathrm{\bf p}^{0,\epsilon}} \langle z, e_1\rangle^2\;\mathrm{d}\mu_s^N(z),
\int_{T_\mathrm{\bf p}^{0,\epsilon}} \langle z, e_2\rangle^2\;\mathrm{d}\mu_s^N(z), \ldots\Big) \\ & =
\int_{T_\mathrm{\bf p}^{0,\epsilon}} \langle z, e_2\rangle^2\;\mathrm{d}\mu_s^N(z)\cdot Id_N +
\Big( \int_{T_\mathrm{\bf p}^{0,\epsilon}} \langle z, e_1\rangle^2\;\mathrm{d}\mu_s^N(z) -
\int_{T_\mathrm{\bf p}^{0,\epsilon}} \langle z, e_2\rangle^2\;\mathrm{d}\mu_s^N(z)\Big)
e_1\otimes e_1 \\ & = \int_{T_\mathrm{\bf p}^{0,\epsilon}} \langle z,
e_2\rangle^2\;\mathrm{d}\mu_s^N(z)\cdot \Big( Id_N +(\mathrm{\bf p}-2) \,e_1\otimes e_1\Big)
\\ & = \frac{C(N,s) |A_\mathrm{\bf p}|}{(N+\mathrm{\bf p}-2) (2-2s)}\epsilon^{2-2s}\cdot \Big( Id_N +(\mathrm{\bf p}-2) \,e_1\otimes e_1\Big),
\end{split}
\end{equation*}
where we used:
$$\frac{ \int_{T_\mathrm{\bf p}^{0,\epsilon}} \langle z, e_1\rangle^2\;\mathrm{d}\mu_s^N(z)}{
\int_{T_\mathrm{\bf p}^{0,\epsilon}} \langle z, e_2\rangle^2\;\mathrm{d}\mu_s^N(z)}= Q(\alpha_\mathrm{\bf p})=\mathrm{\bf p}-1.$$
It thus follows that:
\begin{equation*}
\begin{split}
& \Big\langle \nabla^2u(x) : \int_{T_\mathrm{\bf p}^{0,\epsilon}(\frac{p_x}{|p_x|})}
z\otimes z \;\mathrm{d}\mu_s^N(z) \Big\rangle \\ & = \frac{C(N,s) |A_\mathrm{\bf p}|}{(N+\mathrm{\bf p}-2) (2-2s)}\epsilon^{2-2s}
\Big\langle \nabla^2u(x) : Id_N +(\mathrm{\bf p}-2) \frac{p_x}{|p_x|}\otimes \frac{p_x}{|p_x|}\Big\rangle,
\end{split}
\end{equation*}
which completes the argument, upon recalling the formula: $\Delta_\mathrm{\bf p} u =
|\nabla u|^{\mathrm{\bf p}-2}\big(\Delta u + (\mathrm{\bf p}-2)\Delta_\infty u)$ and the
definition of the $\infty$-Laplacian:
$\Delta_\infty u = \langle \nabla^2 u : \frac{\nabla u}{|\nabla u|}\otimes \frac{\nabla u}{|\nabla u|}\rangle$.
\end{proof}
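Identities (i) can be confirmed numerically in $N=2$ (our sketch): there $A_\mathrm{\bf p}$ is the arc $\{(\cos\theta,\sin\theta):\,|\theta|<\alpha_\mathrm{\bf p}\}$, so the averages reduce to one-dimensional integrals, evaluated below independently of the algebra in the proof:

```python
import math

def alpha_p(p, tol=1e-12):
    """Half-aperture in N = 2: bisection on (2a+sin2a)/(2a-sin2a) = p-1."""
    Q = lambda a: (2 * a + math.sin(2 * a)) / (2 * a - math.sin(2 * a))
    lo, hi = 1e-6, math.pi / 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Q(mid) > p - 1 else (lo, mid)
    return 0.5 * (lo + hi)

def arc_average(f, alpha, n=200_000):
    """Midpoint-rule average of f over the arc {|theta| < alpha}."""
    h = 2 * alpha / n
    return sum(f(-alpha + (k + 0.5) * h) for k in range(n)) * h / (2 * alpha)

N, p = 2, 3.5
a = alpha_p(p)
# fint <z,e2>^2 = 1/(N+p-2)  and  fint <z,e1>^2 = (p-1)/(N+p-2)
assert abs(arc_average(lambda t: math.sin(t) ** 2, a) - 1 / (N + p - 2)) < 1e-8
assert abs(arc_average(lambda t: math.cos(t) ** 2, a) - (p - 1) / (N + p - 2)) < 1e-8
```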
\begin{remark}\label{rem2.5}
In \cite[Section 4.1.2]{BCF2}, the fractional $\mathrm{\bf p}$-Laplacian
$\Delta_\mathrm{\bf p}^su$ has been introduced by means of a scaled version
of the operator $\mathcal{L}_{s,\p}[u]$. In particular, when $p_x\neq 0$ it follows that:
$$\Delta_\mathrm{\bf p}^s u(x)= \frac{2-2s}{C(N,s)} \cdot\frac{N+\mathrm{\bf p}-2}{|A_\mathrm{\bf p}|}\mathcal{L}_{s,\p}[u] (x).$$
\end{remark}
\section{A non-local asymptotic expansion}\label{sec3}
In this section we prove the formula (\ref{raz}) with a precise form
of the error term. In what follows, we denote $B_r=B_r(x)$
for a fixed referential point $x\in\mathbb{R}^N$.
Given a function $u:\mathbb{R}^N\to\mathbb{R}$ that is uniformly continuous on
$\mathbb{R}^N\setminus \bar B_{r_x}$, we denote its modulus of continuity by:
$$\omega_u (a)=\sup\big\{|u(z)-u(\bar z)|;~ z, \bar z \in \mathbb{R}^N\setminus
\bar B_{r_x},~ |z-\bar z|\leq a\big\}.$$
\begin{theorem}\label{thm4.7}
Let $u\in C^2(\bar B_{r_x})$ satisfy $p_x\doteq \nabla
u(x)\neq 0$, and denote $C_x\doteq \frac{1}{2}\|\nabla^2u\|_{L^\infty(B_{r_x})}$.
Assume that $u$ is uniformly continuous on
$\mathbb{R}^N\setminus \bar B_{r_x}$ with modulus of continuity $\omega_u$. Recall that:
$$\mathcal{A}_\epsilon u(x) \doteq
\frac{1}{2}\Big(\sup_{|y|=1}\fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u(x+z)
\;\mathrm{d}\mu_s^N(z) + \inf_{|y|=1}\fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u(x+z)\;\mathrm{d}\mu_s^N(z) \Big).$$
Then there holds:
\begin{equation}\label{sec3_m}
\begin{split}
\Big|\mathcal{A}_\epsilon&u(x) - u(x) -
\frac{s}{C(N,s) |A_\mathrm{\bf p}|}\epsilon^{2s}\mathcal{L}_{s,\p}[u] (x) \Big| \\ & \leq \frac{s}{1-s}\cdot
C_x \epsilon^2 + \epsilon^{2s}\Big(4
sC_x\frac{r_x^{2-2s}-\epsilon^{2-2s}}{1-s}\cdot m_\epsilon
+ \big(r_x^{-2s} + \frac{2s}{2s-1}r_x^{1-2s}\big)\cdot \omega_u(m_\epsilon)\Big),
\end{split}
\end{equation}
where we define:
\begin{equation}\label{meke}
\begin{split}
& m_\epsilon = \max\Big\{
\frac{16 (N+\mathrm{\bf p}-2)}{\mathrm{\bf p}-1} \cdot \frac{C_x}{|p_x|} \cdot\frac{2s-1}{1-s}\cdot
\frac{r_x^{2-2s}-\epsilon^{2-2s}}{\epsilon^{1-2s} -{r_x}^{1-2s}}, ~ ~\kappa_\epsilon\Big\}, \\ &
\kappa_\epsilon = \sup\Big\{ m;~ m\in [0,2] \; \mbox{ and } \; m^2
\leq \frac{N+\mathrm{\bf p}-2}{\mathrm{\bf p}-1}\cdot\frac{8\omega_u(m)}{|p_x|} \cdot \frac{\frac{2s-1}{2s}
r_x^{-2s} +r_x^{1-2s}}{\epsilon^{1-2s} - r_x^{1-2s}}\Big\}.
\end{split}
\end{equation}
\end{theorem}
\medskip
Before giving the proof, a few observations are in order:
\begin{remark}\label{rem4}
\begin{enumerate}[leftmargin=7mm]
\item [(i)] Using the crude bound $\omega_u\leq 2\|u\|_{L^\infty}$, it
follows that for all $\epsilon<\frac{r_x}{2}$ we have:
\begin{equation*}
\begin{split}
\kappa_\epsilon & \leq
\Big(\frac{16\|u\|_{L^\infty}}{|p_x|}\cdot \frac{N+\mathrm{\bf p}-2}{\mathrm{\bf p}-1} \cdot
\frac{\frac{2s-1}{2s} r_x^{-2s} + r_x^{1-2s}}{\epsilon^{1-2s} - r_x^{1-2s}}\Big)^{1/2}
\\ & \leq 8 \Big(\frac{\|u\|_{L^\infty}}{|p_x|}\cdot \frac{N+\mathrm{\bf p}-2}{\mathrm{\bf p}-1}\cdot
\frac{r_x^{-2s} + r_x^{1-2s}}{2s-1}\Big)^{1/2}\epsilon^{s-1/2}.
\end{split}
\end{equation*}
In the second inequality above
we used that $\epsilon<\frac{r_x}{2}$ implies: $\epsilon^{1-2s} -
r_x^{1-2s}> \epsilon^{1-2s} (1-2^{1-2s})$, and that $1-2^{1-2s}\geq (2s-1)\ln\sqrt{2}
>\frac{2s-1}{4}$ in the range $s\in (\frac{1}{2},1)$.
Since the first quantity in $m_\epsilon$ is of order $\epsilon^{2s-1}$, the
coefficient of the $\epsilon^{2s}$ term in the bound (\ref{sec3_m}) becomes:
$$ C_{N,\mathrm{\bf p}, s} C(r_x)\cdot C\big(\frac{\|u\|_{L^\infty}}{|p_x|}\big)\cdot
\big( C_x \epsilon^{s-1/2} + \omega_u(\epsilon^{s-1/2})\big), $$
where $C(\alpha)$ depends only on the indicated quantity $\alpha$.
\item[(ii)] When $u\in C^{0,\alpha}(\mathbb{R}^N\setminus \bar B_{r_x})$ with
$\alpha\in (0,1)$, then $\omega_u(m)=[u]_\alpha m^{\alpha}$ and
we similarly obtain:
$$\kappa_\epsilon \leq
\Big(\frac{32\;[u]_{\alpha}}{|p_x|}\cdot \frac{N+\mathrm{\bf p}-2}{\mathrm{\bf p}-1} \cdot
\frac{r_x^{-2s} +r_x^{1-2s}}{2s-1}\Big)^{\frac{1}{2-\alpha}}\epsilon^{\frac{2s-1}{2-\alpha}},$$
resulting in the following bounding constant:
$$ C_{N,\mathrm{\bf p}, s} C(r_x)\cdot C\big(\frac{[u]_{\alpha}}{|p_x|}\big)\epsilon^{\alpha\cdot\frac{2s-1}{2-\alpha}}.$$
\item[(iii)] When $u$ is Lipschitz on $\mathbb{R}^N\setminus \bar
B_{r_x}$ with the Lipschitz constant $\mbox{Lip}_u$, we get:
$$\kappa_\epsilon \leq \frac{32 \; \mbox{Lip}_u}{|p_x|}\cdot \frac{N+\mathrm{\bf p}-2}{\mathrm{\bf p}-1} \cdot
\frac{r_x^{-2s} + r_x^{1-2s}}{2s-1}\epsilon^{2s-1},$$
so both quantities in $m_\epsilon$ are of the same order $\epsilon^{2s-1}$ and:
$$ m_\epsilon\leq \frac{32}{|p_x|}\cdot \frac{N+\mathrm{\bf p}-2}{\mathrm{\bf p}-1} \cdot\max\Big\{\frac{2C_x r_x^{2-2s}}{1-s},
\frac{\mbox{Lip}_u\cdot (r_x^{-2s}+r_x^{1-2s})}{2s-1}\Big\}\epsilon^{2s-1}.$$
Consequently, the discussed bounding expression becomes:
$$ C_{N,\mathrm{\bf p},s} C(r_x)\cdot C\big(C_x,\frac{1}{|p_x|}, \mbox{Lip}_u\big)\epsilon^{2s-1}. $$
\end{enumerate}
\end{remark}
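The elementary inequality $1-2^{1-2s}\geq(2s-1)\ln\sqrt{2}>\frac{2s-1}{4}$, invoked in item (i) above, is easy to confirm numerically; the following sketch (an illustration only) samples the whole range $s\in(\frac{1}{2},1)$:

```python
import math

def lhs(s):
    # the quantity 1 - 2^{1-2s} appearing in Remark (i)
    return 1.0 - 2.0 ** (1.0 - 2.0 * s)

def check_range(n=1000):
    # verify 1 - 2^{1-2s} >= (2s-1) ln sqrt(2) > (2s-1)/4 on a grid of s in (1/2, 1)
    for k in range(1, n):
        s = 0.5 + 0.5 * k / n
        t = 2.0 * s - 1.0
        if not (lhs(s) >= t * math.log(math.sqrt(2.0)) > t / 4.0):
            return False
    return True
```

The first inequality follows from the convexity of $t\mapsto 2^{-t}$ on $[0,1]$, and the second from $\ln\sqrt{2}\approx 0.3466>\frac{1}{4}$; the grid check merely corroborates this.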
\medskip
In order to estimate the difference of $\mathcal{L}_{s,\p}^\epsilon[u]$ and $\mathcal{L}_{s,\p}[u]$ when $p_x\neq 0$, we will
analyze the behaviour of approximations to the extremizers $y, \tilde y$ in Definition \ref{def_operators}.
The proof below follows the outline of the proof of \cite[Theorem
1]{EDL} in the fractional $\infty$-Laplacian setting.
\begin{lemma}\label{prop3}
Under the same assumptions and notation as in Theorem \ref{thm4.7},
for every $\epsilon< r_x$ there holds, with the quantity $m_\epsilon$ as in (\ref{meke}):
\begin{equation}\label{tre}
\begin{split}
\Big|\mathcal{L}_{s,\p}^\epsilon[u] (x)- & \int_{T_\mathrm{\bf p}^{\epsilon,
\infty}(\frac{p_x}{|p_x|})}L_{u}(x, z, z)\;\mathrm{d}\mu_s^N(z)\Big|\\ & \leq C(N,s)|A_\mathrm{\bf p}|\cdot\Big(
4m_\epsilon C_x\frac{r_x^{2-2s}-\epsilon^{2-2s}}{1-s} +
2\omega_u(m_\epsilon)\Big( \frac{r_x^{-2s}}{2s} + \frac{r_x^{1-2s}}{2s-1}\Big)\Big).
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
{\bf 1.} For every $\epsilon<r_x$ and every $\delta>0$ satisfying:
\begin{equation}\label{tre.5}
\delta\leq \frac{\big(16 C_x \int_{T_\mathrm{\bf p}^{\epsilon,
r_x}}|z|^2\;\mathrm{d}\mu_s^N(z)\big)^2}{|p_x|\int_{T_\mathrm{\bf p}^{\epsilon, r_x}}\langle z,e_1\rangle\;\mathrm{d}\mu_s^N(z)},
\end{equation}
let $|y_\delta^\epsilon|=1$ be such that
$\displaystyle \sup_{|y|=1}\int_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)}u(x+z)\;\mathrm{d}\mu_s^N(z)
\leq \int_{T_\mathrm{\bf p}^{\epsilon, \infty}(y_\delta^\epsilon)}u(x+z)\;\mathrm{d}\mu_s^N(z)
+\delta$. Then:
\begin{equation*}
\begin{split}
\delta &\geq \int_{T_\mathrm{\bf p}^{\epsilon,
\infty}(\frac{p_x}{|p_x|})}u(x+z)\;\mathrm{d}\mu_s^N(z) - \int_{T_\mathrm{\bf p}^{\epsilon, \infty}(y_\delta^\epsilon)}u(x+z)\;\mathrm{d}\mu_s^N(z)
\\ & \geq \int_{T_\mathrm{\bf p}^{\epsilon, r_x}(\frac{p_x}{|p_x|})}u(x+z)-
u(x+R_{y_\delta^\epsilon, \frac{p_x}{|p_x|}}z)\;\mathrm{d}\mu_s^N(z) -
\omega_u \big (|Id_N - R_{y_\delta^\epsilon, \frac{p_x}{|p_x|}}|\big) \cdot \int_{T_\mathrm{\bf p}^{r_x, \infty}}(1+|z|)\;\mathrm{d}\mu_s^N(z),
\end{split}
\end{equation*}
because:
\begin{equation*}
\begin{split}
&\int_{T_\mathrm{\bf p}^{r_x, \infty}(\frac{p_x}{|p_x|})} \big|u(x+z) - u(x+
R_{y_\delta^\epsilon, \frac{p_x}{|p_x|}}z)\big| \;\mathrm{d}\mu_s^N(z) \\ & \quad \leq
\int_{T_\mathrm{\bf p}^{r_x, \infty}(\frac{p_x}{|p_x|})} (1+|z|) \cdot
\sup\Big\{|u(y_1) - u(y_2)|; ~ y_1, y_2\not\in\bar B_{r_x}, ~
|y_1-y_2|\leq |Id_N - R_{y_\delta^\epsilon, \frac{p_x}{|p_x|}}|\Big\} \;\mathrm{d}\mu_s^N(z)
\\ & \quad \leq \omega_u(|Id_N - R_{y_\delta^\epsilon, \frac{p_x}{|p_x|}}|)\cdot
\int_{T_\mathrm{\bf p}^{r_x, \infty}} (1+|z|) \;\mathrm{d}\mu_s^N(z).
\end{split}
\end{equation*}
Call $m\doteq \big|\frac{p_x}{|p_x|}-y_\delta^\epsilon\big|$ and note that
$|Id_N - R_{y_\delta^\epsilon, \frac{p_x}{|p_x|}}|=m$. Recalling (\ref{C11}) further leads to:
\begin{equation}\label{tre.6}
\begin{split}
\delta &\geq
\int_{T_\mathrm{\bf p}^{\epsilon, r_x}(\frac{p_x}{|p_x|})}\Big\langle\nabla u(x+z),
(Id_N - R_{y_\delta^\epsilon, \frac{p_x}{|p_x|}})z\Big\rangle\;\mathrm{d}\mu_s^N(z) \\ &\quad
- C_x \int_{T_\mathrm{\bf p}^{\epsilon, r_x}} m^2 |z|^2 \;\mathrm{d}\mu_s^N(z) -
\omega_u(m) \int_{T_\mathrm{\bf p}^{r_x, \infty}} (1+|z|) \;\mathrm{d}\mu_s^N(z)
\\ & \geq \Big\langle p_x, (Id_N - R_{y_\delta^\epsilon, \frac{p_x}{|p_x|}})
\int_{T_\mathrm{\bf p}^{\epsilon, r_x}(\frac{p_x}{|p_x|})} z \;\mathrm{d}\mu_s^N(z)\Big\rangle \\ & \quad
- C_x m (2+m) \cdot \int_{T_\mathrm{\bf p}^{\epsilon, r_x}} |z|^2 \;\mathrm{d}\mu_s^N(z) -
\omega_u(m)\cdot \int_{T_\mathrm{\bf p}^{r_x, \infty}} (1+|z|) \;\mathrm{d}\mu_s^N(z),
\end{split}
\end{equation}
since $\big|\nabla u(x+z) - \nabla u(x)\big|\leq 2C_x|z|$ for all $z\in B_{r_x}$.
We now observe that:
$ \int_{T_\mathrm{\bf p}^{\epsilon, r_x}(\frac{p_x}{|p_x|})} z \;\mathrm{d}\mu_s^N(z)=
\int_{T_\mathrm{\bf p}^{\epsilon, r_x}} \langle z, e_1\rangle \;\mathrm{d}\mu_s^N(z)\cdot
\frac{p_x}{|p_x|}$, which yields that the first term in the right
hand side of (\ref{tre.6}) equals:
$$\int_{T_\mathrm{\bf p}^{\epsilon, r_x}} \langle z, e_1\rangle \;\mathrm{d}\mu_s^N(z)\cdot
\Big\langle p_x, (Id_N - R_{y_\delta^\epsilon, \frac{p_x}{|p_x|}})
\frac{p_x}{|p_x|} \Big\rangle = \int_{T_\mathrm{\bf p}^{\epsilon, r_x}} \langle z, e_1\rangle \;\mathrm{d}\mu_s^N(z)\cdot
\big\langle p_x, \frac{p_x}{|p_x|} - y_\delta^\epsilon\big\rangle.$$
Finally, noting the identity:
\begin{equation*}
\begin{split}
m^2=\Big|\frac{p_x}{|p_x|} - y_\delta^\epsilon\Big|^2 = 2 - 2 \Big\langle
\frac{p_x}{|p_x|}, y_\delta^\epsilon\Big\rangle = \frac{2}{|p_x|}
\Big\langle p_x,\frac{p_x}{|p_x|} - y_\delta^\epsilon\Big\rangle,
\end{split}
\end{equation*}
the discussed term becomes:
$$\int_{T_\mathrm{\bf p}^{\epsilon, r_x}} \langle z, e_1\rangle \;\mathrm{d}\mu_s^N(z) \cdot \frac{m^2|p_x|}{2}.$$
Consequently, it follows by (\ref{tre.6}) that:
\begin{equation}\label{quattro}
\begin{split}
m^2 \leq 2\cdot \frac{\delta+ 4C_xm
\int_{T_\mathrm{\bf p}^{\epsilon, r_x}} |z|^2 \;\mathrm{d}\mu_s^N(z) +\omega_u(m)
\int_{T_\mathrm{\bf p}^{r_x, \infty}} (1+ |z|) \;\mathrm{d}\mu_s^N(z) }{|p_x|\int_{T_\mathrm{\bf p}^{\epsilon, r_x}} \langle z, e_1\rangle \;\mathrm{d}\mu_s^N(z)}.
\end{split}
\end{equation}
\smallskip
{\bf 2.} We now analyze the bound (\ref{quattro}) in the following distinct cases. In
the first case:
\begin{equation}\label{cinque}
m^2\leq \frac{4\delta}{|p_x|\int_{T_\mathrm{\bf p}^{\epsilon, r_x}} \langle z, e_1\rangle \;\mathrm{d}\mu_s^N(z)}\leq
\Big(\frac{32 C_x \int_{T_\mathrm{\bf p}^{\epsilon, r_x}} |z|^2 \;\mathrm{d}\mu_s^N(z)}{|p_x|\int_{T_\mathrm{\bf p}^{\epsilon, r_x}} \langle z, e_1\rangle \;\mathrm{d}\mu_s^N(z)}\Big)^2,
\end{equation}
where we used (\ref{tre.5}) in the second inequality. In the reverse
case, we
get:
\begin{equation*}
m^2 \leq \frac{16C_xm
\int_{T_\mathrm{\bf p}^{\epsilon, r_x}}|z|^2\;\mathrm{d}\mu_s^N(z) +4\omega_u(m) \int_{T_\mathrm{\bf p}^{r_x, \infty}}
(1+|z|)\;\mathrm{d}\mu_s^N(z)}{|p_x| \int_{T_\mathrm{\bf p}^{\epsilon, r_x}}\langle z, e_1\rangle \;\mathrm{d}\mu_s^N(z)}\doteq I_1 + I_2.
\end{equation*}
When $I_2\leq I_1$, then the above yields the same bound as in (\ref{cinque}), namely:
\begin{equation*}
m\leq \frac{32 C_x \int_{T_\mathrm{\bf p}^{\epsilon, r_x}} |z|^2
\;\mathrm{d}\mu_s^N(z)}{|p_x|\int_{T_\mathrm{\bf p}^{\epsilon, r_x}} \langle z, e_1\rangle \;\mathrm{d}\mu_s^N(z)}
= \frac{16 C_x}{|p_x|\fint_{A_\mathrm{\bf p}}\langle z, e_1\rangle \;\mbox{d}\sigma(z)}\cdot\frac{2s-1}{1-s}\cdot
\frac{r_x^{2-2s}-\epsilon^{2-2s}}{\epsilon^{1-2s} -{r_x}^{1-2s}}.
\end{equation*}
When $I_1<I_2$, then:
\begin{equation*}
m^2 \leq \frac{8\omega_u(m) \int_{T_\mathrm{\bf p}^{r_x, \infty}}
(1+|z|)\;\mathrm{d}\mu_s^N(z)}{|p_x| \int_{T_\mathrm{\bf p}^{\epsilon, r_x}}\langle z, e_1\rangle \;\mathrm{d}\mu_s^N(z)}
= \frac{8\omega_u(m)}{|p_x|\fint_{A_\mathrm{\bf p}}\langle z, e_1\rangle\;\mbox{d}\sigma(z)}\cdot \frac{\frac{2s-1}{2s}
r_x^{-2s} +r_x^{1-2s}}{\epsilon^{1-2s} - r_x^{1-2s}},
\end{equation*}
so that $ m\leq \kappa_\epsilon$ as $\fint_{A_\mathrm{\bf p}}\langle z, e_1\rangle\;\mbox{d}\sigma(z)\geq
\fint_{A_\mathrm{\bf p}}\langle z, e_1\rangle^2\;\mbox{d}\sigma(z)=\frac{\mathrm{\bf p}-1}{N+\mathrm{\bf p}-2}.$
Hence $m\leq m_\epsilon$ in both cases.
\smallskip
{\bf 3.} By the same analysis as in step 1, we see that the unit
vector $\tilde y_\delta^\epsilon$ satisfying:
$$\inf_{|y|=1}\int_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u(x+z) - u(x)\;\mathrm{d}\mu_s^N(z) \geq
\int_{T_\mathrm{\bf p}^{\epsilon, \infty}(-\tilde y_\delta^\epsilon)} u(x+z) - u(x) \;\mathrm{d}\mu_s^N(z) -\delta,$$
differs from the unit vector $\frac{p_x}{|p_x|}$ at most by $m_\epsilon$. Note that:
\begin{equation*}
\begin{split}
&\int_{T_\mathrm{\bf p}^{\epsilon, \infty}(\tilde y_\delta^\epsilon)} L_u(x,z,z) \;\mathrm{d}\mu_s^N(z) -\delta
\\ & = \int_{T_\mathrm{\bf p}^{\epsilon, \infty}(\tilde y_\delta^\epsilon)} u(x+z) -
u(x) \;\mathrm{d}\mu_s^N(z) + \int_{T_\mathrm{\bf p}^{\epsilon, \infty}(-\tilde y_\delta^\epsilon)} u(x+z) - u(x) \;\mathrm{d}\mu_s^N(z)
-\delta \leq \mathcal{L}_{s,\p}^\epsilon[u] (x) \\ & \leq \int_{T_\mathrm{\bf p}^{\epsilon, \infty}( y_\delta^\epsilon)} u(x+z) - u(x) \;\mathrm{d}\mu_s^N(z) + \int_{T_\mathrm{\bf p}^{\epsilon,
\infty}(-y_\delta^\epsilon)} u(x+z) - u(x) \;\mathrm{d}\mu_s^N(z) +\delta \\ & =
\int_{T_\mathrm{\bf p}^{\epsilon, \infty}( y_\delta^\epsilon)} L_u(x,z,z) \;\mathrm{d}\mu_s^N(z) +\delta,
\end{split}
\end{equation*}
which yields:
\begin{eqnarray}\label{sette}
\begin{split}
& \Big|\mathcal{L}_{s,\p}^\epsilon[u] (x) - \int_{T_\mathrm{\bf p}^{\epsilon, \infty}(\frac{p_x}{|p_x|})} L_u(x,z,z) \;\mathrm{d}\mu_s^N(z)
\Big|\leq \delta + \max \big\{ |J(y_\delta^\epsilon)|, |J(\tilde y_\delta^\epsilon)|\big\},\\ &
\mbox{where: } J(y)\doteq \int_{T_\mathrm{\bf p}^{\epsilon,
\infty}(\frac{p_x}{|p_x|})} u(x+z) - u(x+R_{y,\frac{p_x}{|p_x|}}z) + u(x-z) - u(x - R_{y,\frac{p_x}{|p_x|}}z) \;\mathrm{d}\mu_s^N(z).
\end{split}
\end{eqnarray}
We now estimate the two terms $ J(y_\delta^\epsilon)$, $ J(\tilde
y_\delta^\epsilon)$ and show that they are bounded independently of
$\delta$. This will allow us to pass $\delta\to 0$ in (\ref{sette}) and directly
conclude the claimed estimate (\ref{tre}).
We start by splitting the integral in $J(y_\delta^\epsilon)$ into two
terms: $|J(y_\delta^\epsilon)|\leq J_1+ J_2$, where:
\begin{equation*}
\begin{split}
J_1 & = \Big|\int_{T_\mathrm{\bf p}^{\epsilon, r_x}(\frac{p_x}{|p_x|})} u(x+z) -
u(x+R_{y_\delta^\epsilon,\frac{p_x}{|p_x|}}z) + u(x-z) - u(x -
R_{y_\delta^\epsilon,\frac{p_x}{|p_x|}}z) \;\mathrm{d}\mu_s^N(z)\Big|
\\ & \leq \int_{T_\mathrm{\bf p}^{\epsilon, r_x}(\frac{p_x}{|p_x|})} \Big|\big\langle
\nabla u(x+z), z-R_{y_\delta^\epsilon, \frac{p_x}{|p_x|}}z\big\rangle
- \big\langle \nabla u(x-z), z-R_{y_\delta^\epsilon,
\frac{p_x}{|p_x|}}z\big\rangle\Big|\;\mathrm{d}\mu_s^N(z) \\ & \quad + \int_{T_\mathrm{\bf p}^{\epsilon,
r_x}} 2C_x m^2|z|^2\;\mathrm{d}\mu_s^N(z) \\ & \leq (4m+2m^2)\int_{T_\mathrm{\bf p}^{\epsilon, r_x}}
C_x |z|^2\;\mathrm{d}\mu_s^N(z)\leq 8mC_x \int_{T_\mathrm{\bf p}^{\epsilon, r_x}} |z|^2\;\mathrm{d}\mu_s^N(z).
\end{split}
\end{equation*}
The remaining estimate is:
\begin{equation*}
\begin{split}
J_2 & = \Big|\int_{T_\mathrm{\bf p}^{r_x, \infty}(\frac{p_x}{|p_x|})} u(x+z) -
u(x+R_{y_\delta^\epsilon,\frac{p_x}{|p_x|}}z) + u(x-z) - u(x - R_{y_\delta^\epsilon,\frac{p_x}{|p_x|}}z) \;\mathrm{d}\mu_s^N(z)\Big|
\\ & \leq 2\omega_u(m) \int_{T_\mathrm{\bf p}^{ r_x, \infty}}(1+|z|)\;\mathrm{d}\mu_s^N(z).
\end{split}
\end{equation*}
In conclusion, we obtain:
$$ |J(y_\delta^\epsilon)| \leq 8m_\epsilon C_x \int_{T_\mathrm{\bf p}^{\epsilon, r_x}}
|z|^2\;\mathrm{d}\mu_s^N(z) + 2\omega_u(m_\epsilon) \int_{T_\mathrm{\bf p}^{ r_x, \infty}}(1+|z|)\;\mathrm{d}\mu_s^N(z).$$
Clearly, $|J(\tilde y_\delta^\epsilon)|$ enjoys the same
bound. The claim of the Lemma now follows by (\ref{sette}).
\end{proof}
\medskip
By Remark \ref{rem4} (iii) we see that when $u$ is
Lipschitz on $\mathbb{R}^N\setminus \bar B_{r_x}$, the order of the error
bounding quantity in Theorem \ref{thm4.7} is
$C(s)\cdot(\epsilon^{4s-1}+\epsilon^2)$ as $\epsilon\to 0+$, where
$C(s)$ blows up as $s\to 1-$. This drawback will be remedied by means
of another asymptotic expansion in Theorem \ref{thm6}, proved in
section \ref{sec4}. We are now ready to give:
\noindent {\bf Proof of Theorem \ref{thm4.7}.}
In view of (\ref{C2}) we get:
\begin{equation*}
\begin{split}
\Big|\mathcal{L}_{s,\p}[u] (x) - \int_{T_\mathrm{\bf p}^{\epsilon, \infty}(\frac{p_x}{|p_x|})} L_{u}(x,z,z)\;\mathrm{d}\mu_s^N(z)\Big|
& \leq \int_{T_\mathrm{\bf p}^{0,\epsilon}(\frac{p_x}{|p_x|})} |L_{u}(x,z,z)|\;\mathrm{d}\mu_s^N(z) \\
& \leq C(N,s) |A_\mathrm{\bf p}| \cdot C_x\frac{\epsilon^{2-2s}}{1-s}.
\end{split}
\end{equation*}
Consequently, Lemma \ref{prop3} yields:
\begin{equation*}
\begin{split}
\Big|\mathcal{L}_{s,\p}^\epsilon[u] (x&) - \mathcal{L}_{s,\p}[u] (x)\Big| \leq \frac{4 C(N,s) |A_\mathrm{\bf p}|}{1-s} \cdot C_x \big(r_x^{2-2s} -
\epsilon^{2-2s}\big) \cdot m_\epsilon \\ & + \frac{C(N,s) |A_\mathrm{\bf p}|}{s}\cdot \Big(r_x^{-2s} +
\frac{2s}{2s-1}r_x^{1-2s}\Big)\cdot \omega_u(m_\epsilon) +
\frac{C(N,s) |A_\mathrm{\bf p}|}{1-s} \cdot C_x\epsilon^{2-2s}.
\end{split}
\end{equation*}
The result follows by collecting terms and scaling by the factor: $\frac{s}{C(N,s)|A_\mathrm{\bf p}|}\epsilon^{2s}$.
\hspace*{\fill}\mbox{\ \rule{.1in}{.1in}}\medskip
\section{A local - non-local asymptotic expansion}\label{sec4}
In this section we present a refined version of the argument in
Theorem \ref{thm4.7}. We need one more estimate before giving the
proof of expansion (\ref{dwa}) in Theorem \ref{thm6}.
\begin{proposition}\label{prop5}
Let $u\in C^2(\bar B_{r_x})$ satisfy: $p_x\doteq \nabla
u(x)\neq 0$. For every $\epsilon< r_x$ such that $\epsilon
|\nabla^2u(x)|\leq |p_x|$, denote $B_\epsilon = B_\epsilon(x)$. Then there holds:
\begin{equation*}
\begin{split}
\Big| &\frac{C(N,s) |A_\mathrm{\bf p}|}{(N+\mathrm{\bf p}-2)(1-s)}\cdot \epsilon^{-2s}\Big(\frac{\mathrm{\bf p}-2}{2}
\big(\sup_{B_\epsilon} u + \inf_{B_\epsilon} u \big) +
(N+2)\fint_{B_\epsilon} u(y)\;\mathrm{d}y - (N+\mathrm{\bf p}) u(x)\Big)
\\ & \qquad - \int_{T_\mathrm{\bf p}^{0,\epsilon}(\frac{p_x}{|p_x|})} L_u(x,z,z)\;\mathrm{d}\mu_s^N(z)\Big| \leq
\frac{C(N,s)|A_\mathrm{\bf p}|}{1-s}\cdot\epsilon^{2-2s}
\sup_{y\in B_\epsilon}|\nabla^2u(y) - \nabla^2u(x)| \\ &
\qquad\qquad\qquad \qquad\qquad\qquad \qquad\qquad \quad + \frac{2C(N,s) |A_\mathrm{\bf p}|}{1-s}
\cdot\frac{\mathrm{\bf p}-2}{N+\mathrm{\bf p}-2} \cdot \epsilon^{3-2s} \frac{|\nabla^2u(x)|^2}{|p_x|}.
\end{split}
\end{equation*}
\end{proposition}
\begin{proof}
An application of Taylor's expansion and the identity in Lemma \ref{lem_Tp} (iii)
results in the following estimate:
\begin{equation*}
\begin{split}
\Big| & \int_{T_\mathrm{\bf p}^{0,\epsilon}(\frac{p_x}{|p_x|})} L_u(x,z,z)\;\mathrm{d}\mu_s^N(z)
- \frac{C(N,s) |A_\mathrm{\bf p}|}{(N+\mathrm{\bf p}-2) (2-2s)}\cdot \epsilon^{2-2s}|p_x|^{2-\mathrm{\bf p}}\Delta_\mathrm{\bf p} u(x)\Big| \\ & \leq
\int_{T_\mathrm{\bf p}^{0,\epsilon}(\frac{p_x}{|p_x|})} |L_u(x,z,z) - \langle
\nabla^2 u(x) : z\otimes z\rangle|\;\mathrm{d}\mu_s^N(z) \\ & \leq \int_{T_\mathrm{\bf p}^{0,\epsilon}}
|z|^2\;\mathrm{d}\mu_s^N(z)\cdot \sup_{y\in B_\epsilon}|\nabla^2u(y) - \nabla^2u(x)|
= \frac{C(N,s)|A_\mathrm{\bf p}|}{2-2s}\cdot \epsilon^{2-2s}\sup_{y\in B_\epsilon}|\nabla^2u(y) - \nabla^2u(x)|.
\end{split}
\end{equation*}
We now invoke the following folklore claim (whose proof we recall below):
\begin{equation}\label{otto}
\begin{split}
\Big|&(\mathrm{\bf p}-2)\big( \sup_{B_\epsilon} u + \inf_{B_\epsilon} u\big) +2(N+2)\fint_{B_\epsilon}u(y)\;\mathrm{d}y
-2(N+\mathrm{\bf p})u(x) - \epsilon^2|p_x|^{2-\mathrm{\bf p}}\Delta_\mathrm{\bf p} u(x)\Big| \\ & \leq
4\epsilon^3(\mathrm{\bf p}-2) \frac{|\nabla^2u(x)|^2}{|p_x|}
+ \epsilon^2 (N+\mathrm{\bf p}-2)\cdot \sup_{y\in B_\epsilon}|\nabla^2u(y) - \nabla^2u(x)|.
\end{split}
\end{equation}
Summing up the scaled versions of the last two displayed formulas completes the argument.
\end{proof}
\medskip
\noindent {\bf Proof of claim (\ref{otto}).}
{\bf 1.} We first deduce the bound in the particular case when
$\epsilon=1$, $x=0$ and $u$ is a quadratic polynomial
with gradient given by a unit vector $p\in\mathbb{R}^N$ and Hessian given by a
symmetric matrix $B\in \mathbb{R}^{N\times N}$ satisfying $|B|\leq 1$:
$$u(z)=\langle p, z\rangle + \frac{1}{2}\langle B: z\otimes z\rangle.$$
It is straightforward that:
\begin{equation}\label{one}
\fint_{B_1} u(z)\;\mbox{d}z=\frac{1}{2}\Big\langle
B:\fint_{B_1}z\otimes z\;\mbox{d}z \Big\rangle = \frac{1}{2(N+2)}\big\langle Id_N:B\big\rangle
= \frac{1}{2(N+2)} \Delta u(0).
\end{equation}
In order to address the nonlinear averaging quantity $\sup_{B_1} u +
\inf_{B_1} u$, consider $z_{max}\in \bar B_1$ that is a maximizer of $u$. Note that
$|z_{max}|=1$, because $|\nabla u(z)|= |p+Bz|\geq 1 -|B||z|> 0$
for all $|z|< 1$. Further, since $u(z_{max})\geq u(p)$, there holds:
$$ \langle p, z_{max}\rangle\geq 1+ \frac{1}{2}\langle B: p\otimes p -
z_{max}\otimes z_{max}\rangle \geq 1 - |z_{max}-p|\cdot |B|.$$
Consequently: $|z_{max}-p|^2 = 2- 2\langle z_{max}, p\rangle \leq 2 |z_{max}-p|\cdot |B|$, so that:
$$|z_{max}-p|\leq 2|B|.$$
Noting that $\langle p, z_{max}-p\rangle\leq 0$, we hence arrive at:
$$0\leq u(z_{max})-u(p) = \langle p, z_{max}-p\rangle +\frac{1}{2}\langle B:
z_{max}\otimes z_{max} - p\otimes p \rangle\leq |z_{max}-p|\cdot
|B|\leq 2 |B|^2.$$
An entirely similar argument yields the bound for a minimizer
$z_{min}$ of $u$ on $\bar B_1$:
$$0\geq u(z_{min})-u(-p) \geq -2 |B|^2,$$
which results in the bound:
\begin{equation}\label{two}
\begin{split}
\big| \sup_{B_1} u + \inf_{B_1} u - \Delta_\infty u(0)\big| & =
\big|u(z_{max})+u(z_{min}) - \langle B: p\otimes p\rangle\big|
\\ & = \big|u(z_{max}) - u(p) + u(z_{min}) - u(-p)\big| \leq 4|B|^2.
\end{split}
\end{equation}
The estimate (\ref{otto}) follows in the present case by summing
(\ref{one}) scaled by $2(N+2)$ and (\ref{two}) scaled by $\mathrm{\bf p}-2$.
\smallskip
{\bf 2.} For the general case, we may still assume $x=0$ and
$u(0)=0$. Define $\bar u(y)= \langle p_x, y\rangle +
\frac{1}{2}\langle \nabla^2 u(0):y\otimes y\rangle$ and note that:
\begin{equation}\label{nove}
\begin{split}
\frac{2(N+2)}{N}\cdot\Big |\fint_{B_\epsilon} u(y)\;\mbox{d}y - \fint_{B_\epsilon} & \bar
u(y)\;\mbox{d}y\Big|,\quad
\Big|\big(\sup_{B_\epsilon} u + \inf_{B_\epsilon} u\big) -
\big(\sup_{B_\epsilon}\bar u + \inf_{B_\epsilon}\bar u\big) \Big|
\\ & \leq \epsilon^2\sup_{y\in B_\epsilon}|\nabla^2 u(y) -
\nabla^2 u(0)|,
\end{split}
\end{equation}
by means of Taylor's expansion. We now apply the conclusion of step 1 to the rescaled function:
$$\frac{1}{\epsilon |p_x|} \bar u(\epsilon z)=
\big\langle\frac{p_x}{|p_x|}, z\big\rangle + \frac{1}{2}\Big\langle \epsilon
\frac{\nabla^2 u(0)}{|p_x|}:z\otimes z\Big\rangle\quad \mbox{ for
}\; z\in \bar B_1.$$
It follows that for all $\epsilon$ satisfying $\epsilon |\nabla^2 u(0)|\leq |p_x|$, we get:
\begin{equation*}
\begin{split}
\Big|& \frac{\mathrm{\bf p}-2}{\epsilon |p_x|}\big(\sup_{B_\epsilon}\bar u +
\inf_{B_\epsilon}\bar u\big) + \frac{2(N+2)}{\epsilon |p_x|}\fint_{B_\epsilon}
\bar u(y)\;\mbox{d}y
- \Big( \epsilon \frac{\Delta u(0)}{|p_x|} + (\mathrm{\bf p}-2)\Big\langle
\epsilon\frac{\nabla^2 u(0)}{|p_x|}:\frac{p_x}{|p_x|}
\otimes \frac{p_x}{|p_x|}\Big\rangle \Big)\Big| \\ & \leq
4\epsilon^2\cdot (\mathrm{\bf p}-2) \frac{|\nabla^2 u(0)|^2}{|p_x|^2},
\end{split}
\end{equation*}
which yields:
$$\big|(\mathrm{\bf p}-2)\big(\sup_{B_\epsilon}\bar u +
\inf_{B_\epsilon}\bar u\big) +2(N+2) \fint_{B_\epsilon}\bar
u(y)\;\mbox{d}y - \epsilon^2 |p_x|^{2-\mathrm{\bf p}}\Delta_\mathrm{\bf p} u(0) \big| \leq
4\epsilon^3 (\mathrm{\bf p}-2)\frac{|\nabla^2u(0)|^2}{|p_x|}.$$
Combined with (\ref{nove}), the above estimate ends the proof of (\ref{otto}).
\hspace*{\fill}\mbox{\ \rule{.1in}{.1in}}\medskip
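As an illustration (not needed for the proof), the key bound of step 1, namely $\big|\sup_{B_1} u + \inf_{B_1} u - \langle B: p\otimes p\rangle\big|\leq 4|B|^2$ for the quadratic polynomial $u(z)=\langle p,z\rangle+\frac{1}{2}\langle B:z\otimes z\rangle$, can be tested numerically in dimension $N=2$; the particular $p$ and $B$ below are arbitrary choices with $|p|=1$ and $|B|<1$:

```python
import math

# quadratic u(z) = <p, z> + (1/2) <B : z x z> with |p| = 1 and |B| < 1
p = (1.0, 0.0)
B = ((0.3, 0.15), (0.15, -0.3))

def u(z1, z2):
    quad = B[0][0]*z1*z1 + 2.0*B[0][1]*z1*z2 + B[1][1]*z2*z2
    return p[0]*z1 + p[1]*z2 + 0.5*quad

# operator norm of the symmetric 2x2 matrix B, via its eigenvalues
tr = B[0][0] + B[1][1]
det = B[0][0]*B[1][1] - B[0][1]*B[1][0]
root = math.sqrt(tr*tr/4.0 - det)
norm_B = max(abs(tr/2.0 + root), abs(tr/2.0 - root))

# since |p + Bz| >= 1 - |B||z| > 0 in the open unit ball, the extrema of u
# over the closed ball are attained on the unit circle: sample it densely
n = 200000
vals = [u(math.cos(2*math.pi*k/n), math.sin(2*math.pi*k/n)) for k in range(n)]
# <B : p x p> = B[0][0] for the choice p = e_1 made above
defect = abs(max(vals) + min(vals) - B[0][0])
```

The quantity `defect` should not exceed $4|B|^2$, in accordance with (\ref{two}).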
\medskip
\begin{theorem}\label{thm6}
Under the same assumptions and notation as in Theorem \ref{thm4.7},
for every $\epsilon< r_x$ such that $\epsilon
|\nabla^2 u(x)|\leq |p_x|$, denote $B_\epsilon=B_\epsilon(x)$ and
consider the average:
\begin{equation*}
\begin{split}
\bar{\mathcal{A}}_\epsilon u(x) = &\; \frac{(1-s)(N+\mathrm{\bf p}-2)}{N+\mathrm{\bf p}-2+2s}\cdot\frac{1}{2}\Big(\sup_{|y|=1}
\fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u(x+z)\;\mathrm{d}\mu_s^N(z) + \inf_{|y|=1}\fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)}
u(x+z)\;\mathrm{d}\mu_s^N(z) \Big) \\ & + \frac{s(\mathrm{\bf p}-2)}{N+\mathrm{\bf p}-2+2s}\cdot\frac{1}{2}
\big(\sup_{B_\epsilon}u + \inf_{B_\epsilon}u \big) + \frac{s(N+2)}{N+\mathrm{\bf p}-2+2s}\fint_{B_\epsilon}
u(y)\;\mathrm{d}y.
\end{split}
\end{equation*}
Then there holds:
\begin{equation*}
\begin{split}
\Big|& \bar{\mathcal{A}}_\epsilon u(x) - u(x) - \frac{(1-s)s}{C(N,s)
|A_\mathrm{\bf p}|}\cdot\frac{N+\mathrm{\bf p}-2}{N+\mathrm{\bf p}-2+2s}\cdot \epsilon^{2s} \mathcal{L}_{s,\p}[u] (x) \Big|\\ & \leq
\frac{4s (N+\mathrm{\bf p}-2)}{N+\mathrm{\bf p}-2+2s} C_x (r_x^{2-2s}-\epsilon^{2-2s})\cdot \epsilon^{2s}m_\epsilon
\\ & \quad + \frac{(1-s)(N+\mathrm{\bf p}-2)}{N+\mathrm{\bf p}-2+2s}\big(r_x^{-2s}+\frac{2s}{2s-1}r_x^{1-2s}\big)
\cdot \epsilon^{2s}\omega_u(m_\epsilon) \\ & \quad
+ \frac{s(N+\mathrm{\bf p}-2)}{N+\mathrm{\bf p}-2+2s}\cdot \epsilon^2 \sup_{y\in B_\epsilon}\big|\nabla^2u(y)-\nabla^2u(x)\big|
+\frac{2s(\mathrm{\bf p}-2)}{N+\mathrm{\bf p}-2+2s}\cdot\epsilon^3\frac{|\nabla^2 u(x)|^2}{|p_x|},
\end{split}
\end{equation*}
where the quantity $m_\epsilon$ is as in the statement of Lemma \ref{prop3}.
\end{theorem}
\begin{proof}
We sum up formulas in Lemma \ref{prop3} and Proposition \ref{prop5}, and multiply the
result by the factor:
$\frac{(1-s)s}{C(N,s)|A_\mathrm{\bf p}|}\cdot\frac{N+\mathrm{\bf p}-2}{N+\mathrm{\bf p}-2+2s}\cdot \epsilon^{2s}$.
\end{proof}
\begin{remark}\label{rem7}
\begin{enumerate}[leftmargin=7mm]
\item[(i)] Analysis similar to Remark \ref{rem4} allows for computing
the order of the error bound in Theorem \ref{thm6} when $u\in
C^{0,\alpha}\big(\mathbb{R}^N\setminus \bar B_{r_x}\big)$. In
particular, when $\alpha=1$ then the bounding quantity becomes:
\begin{equation*}
\begin{split}
&C_{N,\mathrm{\bf p},s} \cdot \Big(C\big(C_x, \frac{1}{|p_x|}, \mbox{Lip}_u\big)\cdot C(r_x)
\epsilon^{4s-1} + C\big(|\nabla^2 u(x)|,
\frac{1}{|p_x|}\big)\epsilon^3 + o(\epsilon^2)\Big).
\end{split}
\end{equation*}
When additionally $u\in C^{2,1}(B_{r_x})$,
the above quantity has order $\epsilon^{4s-1}+\epsilon^3$, which
further reduces to $\epsilon^3$ in the limit $s\to 1-$.
\item[(ii)] For a more precise analysis of the asymptotic expansion when $s\to 1-$, note that:
\begin{equation*}
\begin{split}
& \kappa_\epsilon\leq \sup\Big\{ m;~ m\in [0,2] \; \mbox{ and } \; m^2\leq
\frac{32\;\omega_u(m)}{|p_x|}\cdot\frac{N+\mathrm{\bf p}-2}{\mathrm{\bf p}-1}\cdot\frac{
r_x^{-2s} + r_x^{1-2s}}{2s-1}\epsilon^{2s-1}\Big\}, \\ &
\frac{8 \|\nabla^2 u\|_{L^\infty(B_{r_x})}}{|p_x|}\cdot\frac{2s-1}{1-s}\cdot
\frac{r_x^{2-2s}-\epsilon^{2-2s}}{\epsilon^{1-2s} -{r_x}^{1-2s}}
\leq \frac{8 \|\nabla^2u\|_{L^\infty(B_{r_x})}}{|p_x|}\cdot
16 r_x^{2-2s}|\ln\epsilon|\epsilon^{2s-1}.
\end{split}
\end{equation*}
The first bound above is valid when $\epsilon<\frac{r_x}{2}$ (see
Remark \ref{rem4} (i)), while for the second bound we used that:
$r_x^{2-2s}-\epsilon^{2-2s}\leq (2-2s) (\ln r_x - \ln\epsilon)
r_x^{2-2s}\leq 4(1-s)|\ln\epsilon| r_x^{2-2s}$, valid for all $\epsilon<e^{-|\ln r_x|}$.
It is thus clear that $m_\epsilon = o(1)$ as $\epsilon\to 0+$, uniformly in
$s\in (\frac{1}{2},1)$ bounded away from $\frac{1}{2}$.
In particular, for each fixed $\epsilon$, the bounding quantity in
Theorem \ref{thm6} converges to:
$$2\frac{\mathrm{\bf p}-2}{N+\mathrm{\bf p}}\cdot \epsilon^3\frac{|\nabla^2 u(x)|^2}{|p_x|}
+ \frac{N+\mathrm{\bf p}-2}{N+\mathrm{\bf p}} \cdot \epsilon^2\sup_{y\in B_\epsilon}\big|\nabla^2
u(y)-\nabla^2 u(x)\big|$$
as $s\to 1-$, which is consistent with (\ref{otto}).
\end{enumerate}
\end{remark}
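The logarithmic bound $r_x^{2-2s}-\epsilon^{2-2s}\leq 4(1-s)|\ln\epsilon|\, r_x^{2-2s}$ used in item (ii) above, valid for $\epsilon<e^{-|\ln r_x|}$, can likewise be corroborated numerically; the sketch below (an illustration only) sweeps a few sample values of $r_x$, $\epsilon$ and $s$:

```python
import math

def log_bound_holds(r, eps, s):
    # check: r^{2-2s} - eps^{2-2s} <= 4 (1-s) |ln eps| r^{2-2s}
    lhs = r ** (2.0 - 2.0*s) - eps ** (2.0 - 2.0*s)
    rhs = 4.0 * (1.0 - s) * abs(math.log(eps)) * r ** (2.0 - 2.0*s)
    return lhs <= rhs

def sweep():
    ok = True
    for r in (0.5, 1.0, 2.0, 5.0):
        cutoff = math.exp(-abs(math.log(r)))   # the threshold e^{-|ln r|}
        for j in range(1, 50):
            eps = cutoff * (j / 50.0) * 0.99   # sample eps < e^{-|ln r|}
            for s in (0.55, 0.7, 0.85, 0.99):
                ok = ok and log_bound_holds(r, eps, s)
    return ok
```

The analytic reason is the elementary estimate $a^x-b^x\leq x(\ln a-\ln b)a^x$ for $0<b<a$, $x>0$, combined with $\ln r_x-\ln\epsilon\leq 2|\ln\epsilon|$ in the stated range of $\epsilon$.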
\section{The averaging operator $\mathcal{A}_\epsilon$ and its dynamic
programming principle}\label{sec_dppe}
Let $\mathcal{D}\subset\mathbb{R}^N$ be open and bounded, and let $F:\mathbb{R}^N\to \mathbb{R}$ be
bounded and Borel. In this section we discuss the following non-local Dirichlet-type problem:
\begin{equation}\label{dppe}\tag*{${\mathrm{(DPP)}}_\epsilon$}
u(x) = \left\{\begin{array}{ll} \mathcal{A}_\epsilon u(x) &
~~\mbox{for } x\in\mathcal{D}\\ F(x) & ~~\mbox{for } x\in\mathbb{R}^N\setminus \mathcal{D}.
\end{array}\right.
\end{equation}
Equivalently, the above equation can be written as $u=S_\epsilon u$, where
the operator $S_\epsilon$ applied on a bounded Borel function
$v:\mathbb{R}^N\to\mathbb{R}$ returns the bounded Borel function:
\begin{equation}\label{Se}
S_\epsilon v = \mathds{1}_{\mathcal{D}} \cdot \mathcal{A}_\epsilon v + \mathds{1}_{\mathbb{R}^N\setminus \mathcal{D}} \cdot F.
\end{equation}
\smallskip
The main result of this section is the following observation:
\begin{theorem}\label{th_exists}
For any bounded Borel data $F:\mathbb{R}^N\to \mathbb{R}$, the problem \ref{dppe}
has a unique bounded Borel solution $u_\epsilon^F:\mathbb{R}^N\to\mathbb{R}$ and there
holds: $\|u_\epsilon^F\|_{L^\infty}\leq \|F\|_{L^\infty}$. Moreover, the
solution operator to \ref{dppe} is monotone, that is $F\leq \bar F$
implies $u_\epsilon^F\leq u_\epsilon^{\bar F}$.
\end{theorem}
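The monotone approximation scheme used in the proof below ($v_0\equiv\inf F$, $v_{n+1}=S_\epsilon v_n$) can be illustrated on a discrete one-dimensional toy analogue, in which the averaging operator is replaced by the two-point mean; this caricature (not the operator $\mathcal{A}_\epsilon$ itself, and with arbitrary boundary data) exhibits the same monotonicity and $L^\infty$ bound:

```python
# one-dimensional discrete caricature of the problem (DPP)_epsilon:
# u(i) = (u(i-1) + u(i+1)) / 2 on the "domain" i = 1, ..., M-1,
# u(0) = F0 and u(M) = F1 on the "boundary"
def iterate_dpp(F0, F1, M=10, n_iter=4000):
    v = [min(F0, F1)] * (M + 1)          # v_0 = inf F
    v[0], v[M] = F0, F1
    sup_norm = max(abs(F0), abs(F1))
    for _ in range(n_iter):
        w = v[:]
        for i in range(1, M):
            w[i] = 0.5 * (v[i - 1] + v[i + 1])
        # monotone iterates and the L^infty bound, as in Theorem th_exists
        assert all(wi >= vi - 1e-12 for wi, vi in zip(w, v))
        assert max(abs(x) for x in w) <= sup_norm + 1e-12
        v = w
    return v
```

With $F0=-1$ and $F1=2$ the iterates increase towards the discrete-harmonic (here: linear) profile interpolating the boundary data.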
\smallskip
Before the proof, we derive another useful property:
\begin{lemma}\label{conti}
Let $u:\mathbb{R}^N\to \mathbb{R}$ be bounded Borel. Then the following functions of $x$:
$$\inf_{|y|=1} \fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u(x+z)\;\mathrm{d}\mu_s^N(z),\qquad
\sup_{|y|=1} \fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u(x+z)\;\mathrm{d}\mu_s^N(z), \qquad \mathcal{A}_\epsilon u,$$
are uniformly continuous on $\mathbb{R}^N$.
\end{lemma}
\begin{proof}
Denote any of the three listed functions by $f$ and observe that, for
fixed $x,\bar x\in\mathbb{R}^N$ satisfying $|x-\bar x| <1$, there holds:
\begin{equation}\label{dieci}
|f(x) - f(\bar x)| \leq \sup_{|y|=1}
\Big|\fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u(x+z)\;\mathrm{d}\mu_s^N(z) -
\fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u(\bar x+z)\;\mathrm{d}\mu_s^N(z)\Big|.
\end{equation}
For any $|y|=1$, we may write:
\begin{equation*}
\begin{split}
& \Big|\fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u(x+z)\;\mathrm{d}\mu_s^N(z) - \fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u(\bar x+z)\;\mathrm{d}\mu_s^N(z)\Big|
\\ &\qquad =\frac{C(N,s)}{\mu_s^N(T_\mathrm{\bf p}^{\epsilon, \infty})}\cdot
\Big|\int_{x+T_\mathrm{\bf p}^{\epsilon, \infty}(y)} \frac{u(z)}{|x-z|^{N+2s}}\;\mathrm{d}z -
\int_{\bar x+T_\mathrm{\bf p}^{\epsilon, \infty}(y)} \frac{u(z)}{|\bar
x-z|^{N+2s}}\;\mathrm{d}z \Big|\\ & \qquad \leq \frac{C(N,s) \|u\|_{L^\infty}}{\mu_s^N(T_\mathrm{\bf p}^{\epsilon, \infty})}\cdot
\Big( \int_{(x+T_\mathrm{\bf p}^{\epsilon, \infty}(y))\cap (\bar x+T_\mathrm{\bf p}^{\epsilon,
\infty}(y)) } \Big|\frac{1}{|x-z|^{N+2s}} - \frac{1}{|\bar x-z|^{N+2s}} \Big|\;\mbox{d}z
\\ & \qquad\quad + \int_{(x+T_\mathrm{\bf p}^{\epsilon, \infty}(y))\setminus (\bar x+T_\mathrm{\bf p}^{\epsilon,
\infty}(y)) } \frac{1}{|x-z|^{N+2s}} \;\mbox{d}z
+ \int_{(\bar x+T_\mathrm{\bf p}^{\epsilon, \infty}(y))\setminus (x+T_\mathrm{\bf p}^{\epsilon,
\infty}(y)) } \frac{1}{|\bar x-z|^{N+2s}} \;\mbox{d}z\Big).
\end{split}
\end{equation*}
We now estimate the three last integral terms above. Since $|\bar
x-z|>\frac{1}{2}|x-z|$ whenever $|x-z|>2$, it follows that for any
$r\geq 2$ there holds:
\begin{equation}\label{undici}
\begin{split}
&\int_{(x+T_\mathrm{\bf p}^{\epsilon, \infty}(y))\cap (\bar x+T_\mathrm{\bf p}^{\epsilon,
\infty}(y)) } \Big|\frac{1}{|x-z|^{N+2s}} - \frac{1}{|\bar x-z|^{N+2s}} \Big|\;\mbox{d}z \\ & \qquad \qquad
\leq \int_{x+T_\mathrm{\bf p}^{r, \infty} }\frac{1+2^{N+2s}}{|x-z|^{N+2s}} \;\mbox{d}z
+ \int_{x+T_\mathrm{\bf p}^{\epsilon, r} }\frac{N+2s}{\epsilon^{N+1+2s}} |x-\bar x| \;\mbox{d}z
\\ & \qquad \qquad \leq \frac{1+2^{N+2s}}{2s}|A_\mathrm{\bf p}|\cdot r^{-2s} + \frac{(N+2s)
r^N}{N\epsilon^{N+1+2s}}|A_\mathrm{\bf p}|\cdot |x-\bar x|,
\end{split}
\end{equation}
where we also used that: $\big||x-z|^{-N-2s} - |\bar
x-z|^{-N-2s} \big|\leq \frac{N+2s}{\epsilon^{N+1+2s}} |x-\bar x|$ for
$z\in (x+T_\mathrm{\bf p}^{\epsilon, \infty}(y))\cap (\bar x+T_\mathrm{\bf p}^{\epsilon, \infty}(y))$.
On the other hand, we have:
\begin{equation}\label{dodici}
\begin{split}
&\int_{(x+T_\mathrm{\bf p}^{\epsilon, \infty}(y))\setminus (\bar x+T_\mathrm{\bf p}^{\epsilon,
\infty}(y)) } \frac{1}{|x-z|^{N+2s}} \;\mbox{d}z
\\ & \qquad \qquad \leq \frac{|A_\mathrm{\bf p}|}{2s}r^{-2s} + \big|T_\mathrm{\bf p}^{\epsilon, r}(y)\setminus
((\bar x - x)+T_\mathrm{\bf p}^{\epsilon, \infty}(y))\big|\cdot\frac{1}{\epsilon^{N+2s}}.
\end{split}
\end{equation}
Further, it easily follows that:
$$\sup_{|y|=1} \big|T_\mathrm{\bf p}^{\epsilon, r}(y)\setminus ((\bar x - x)+T_\mathrm{\bf p}^{\epsilon, \infty}(y))\big|
\leq \sup_{|z|<|x-\bar x|} \big|T_\mathrm{\bf p}^{\epsilon, r}\setminus
(z+T_\mathrm{\bf p}^{\epsilon, \infty})\big| \leq \big|(\partial T_\mathrm{\bf p}^{\epsilon,
r}) + B_{|x-\bar x|}\big|.$$
In conclusion (\ref{dieci}) becomes, in view of (\ref{undici}) and (\ref{dodici}):
\begin{equation*}
\begin{split}
\big|f(x) - f(\bar x)\big| \leq \frac{C(N,s) \|u\|_{L^\infty}}{\mu_s^N(T_\mathrm{\bf p}^{\epsilon, \infty})}\cdot
\Big(& \frac{3+2^{N+2s}}{2s}|A_\mathrm{\bf p}|\cdot r^{-2s} + \frac{(N+2s)
r^N}{N\epsilon^{N+1+2s}}|A_\mathrm{\bf p}|\cdot |x-\bar x| \\ & + \big|(\partial T_\mathrm{\bf p}^{\epsilon,
r}) + B_{|x-\bar x|}\big|\cdot\frac{2}{\epsilon^{N+2s}} \Big).
\end{split}
\end{equation*}
It is clear that by taking $r$ large and then $|x-\bar x|$
appropriately small, the right hand side above can be made smaller
than any given $\delta>0$. This proves the claimed uniform continuity.
\end{proof}
\medskip
\noindent {\bf Proof of Theorem \ref{th_exists}}
Define $v_0\equiv \inf F$ and set $v_n\doteq (S_\epsilon)^n
v_0$, where the operator $S_\epsilon$ is as in (\ref{Se}).
Each function $v_n:\mathbb{R}^N\to \mathbb{R}$ is continuous in $\mathcal{D}$ by Lemma
\ref{conti} and hence Borel in $\mathbb{R}^N$. The sequence
$\{v_n\}_{n=1}^\infty$ is uniformly bounded: $\|v_n\|_{L^\infty }\leq
\|F\|_{L^\infty}$, and nondecreasing, because $v_0\leq v_1$ and
$S_\epsilon$ is order-preserving. Thus, $\{v_n\}_{n=1}^\infty$ has a
pointwise limit $v:\mathbb{R}^N\to \mathbb{R}$, which is bounded, Borel, and obeys the
same bound: $\|v\|_{L^\infty }\leq \|F\|_{L^\infty}$.
We now show that one can take $u_\epsilon^F=v$. Indeed, for every $x\in\mathcal{D}$ there holds:
\begin{equation*}
\begin{split}
|v_{n+1}(x) - S_\epsilon v(x) | & = |S_\epsilon v_n(x) - S_\epsilon
v(x)|\leq \sup_{|y|=1} \fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)}|(v_n-v)(x+z)|\;\mathrm{d}\mu_s^N(z)
\\ & \leq \frac{1}{\mu_s^N(T_\mathrm{\bf p}^{\epsilon, \infty})}\int_{\mathbb{R}^N\setminus B_\epsilon}|(v_n-v)(x+z)|\;\mathrm{d}\mu_s^N(z)
~~ \to 0 \quad\mbox{ as } n\to\infty,
\end{split}
\end{equation*}
by the monotone convergence theorem. We thus obtain: $v=S_\epsilon v$
on $\mathcal{D}$, as claimed.
To prove uniqueness, assume that $v, \bar v$ are two bounded, Borel
solutions of \ref{dppe}. Clearly $v=\bar v$ on $\mathbb{R}^N\setminus \mathcal{D}$;
denote $M\doteq \sup_\mathcal{D} |v-\bar v|$. For every $x\in \mathcal{D}$ we
observe that:
$$|(v-\bar v)(x)| = |\mathcal{A}_\epsilon v(x) - \mathcal{A}_\epsilon\bar v(x)|\leq
\sup_{|y|=1}\fint_{x+T_\mathrm{\bf p}^{\epsilon, \infty}(y)} |(v - \bar v)(z)|\;\mathrm{d}\mu_s^N(z)
\leq M\cdot \frac{\mu_s^N(T_\mathrm{\bf p}^{\epsilon, \mathrm{diam}\mathcal{D}})}{\mu_s^N(T_\mathrm{\bf p}^{\epsilon, \infty})}.$$
Hence: $M\leq M \cdot \frac{\mu_s^N(T_\mathrm{\bf p}^{\epsilon,
\mathrm{diam}\mathcal{D}})}{\mu_s^N(T_\mathrm{\bf p}^{\epsilon, \infty})}$. Since the
ratio above is strictly smaller than $1$, it must be that $M=0$ and thus $v=\bar v$ in $\mathcal{D}$.
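Let us also note that the above ratio is explicit; by the radial
computation $\int_{T_\mathrm{\bf p}^{a,b}}|z|^{-N-2s}\;\mbox{d}z =
\frac{|A_\mathrm{\bf p}|}{2s}\big(a^{-2s} - b^{-2s}\big)$, as in (\ref{undici}), there holds:
$$\frac{\mu_s^N(T_\mathrm{\bf p}^{\epsilon, \mathrm{diam}\mathcal{D}})}{\mu_s^N(T_\mathrm{\bf p}^{\epsilon, \infty})}
= 1 - \Big(\frac{\epsilon}{\mathrm{diam}\mathcal{D}}\Big)^{2s}<1 \qquad\mbox{for all }\;\epsilon<\mathrm{diam}\mathcal{D}.$$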
Finally, the claimed monotonicity of the solution operator to
\ref{dppe} follows from the monotonicity of $S_\epsilon$. The proof is done.
\hspace*{\fill}\mbox{\ \rule{.1in}{.1in}}\medskip
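Though not part of the argument, the monotone iteration above is easy to visualise numerically. The following is a hypothetical 1-D discrete sketch of the scheme $v_{n+1}=S_\epsilon v_n$: the grid, the window size $K$ and the weights $k^{-1-2s}$ are illustrative assumptions standing in for the cone averages over $T_\mathrm{\bf p}^{\epsilon, \infty}(y)$; in one dimension, the value $\frac{1}{2}(\sup_{y}+\inf_{y})$ over the two directions reduces to the arithmetic mean of the two one-sided averages.

```python
def S(u, F, K, s):
    # One sweep of the toy solution operator: boundary data F outside the
    # "domain", power-law weighted (weight k^(-1-2s)) one-sided averages inside.
    M = len(u)
    w = [k ** (-1.0 - 2.0 * s) for k in range(1, K + 1)]
    W = sum(w)
    v = list(F)
    for i in range(K, M - K):
        right = sum(wk * u[i + k] for k, wk in zip(range(1, K + 1), w)) / W
        left = sum(wk * u[i - k] for k, wk in zip(range(1, K + 1), w)) / W
        v[i] = 0.5 * (left + right)   # 1-D analogue of (sup + inf)/2
    return v

def iterate(F, K, s, n):
    # v_0 = inf F and v_n = S^n v_0: a nondecreasing, uniformly bounded sequence.
    v = [min(F)] * len(F)
    for _ in range(n):
        v = S(v, F, K, s)
    return v
```

Starting from $v_0\equiv\inf F$, the iterates increase monotonically and stabilise at a fixed point of $S$, mirroring the existence part of the proof.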
\section{Convergence to viscosity solutions of $\Delta_\mathrm{\bf p}^s u = 0$.}\label{sec_visc}
In this section we will identify the limits of solutions to the
non-local dynamic programming principle \ref{dppe}, as the removed
singularity radius vanishes: $\epsilon\to 0$, to be viscosity solutions to the
homogeneous Dirichlet problem for $\Delta_\mathrm{\bf p}^s$.
\begin{theorem}\label{th_viscosity}
Let $F:\mathbb{R}^N\to\mathbb{R}$ be uniformly continuous and bounded, and let $\mathcal{D}$ be
an open, bounded subset of $\mathbb{R}^N$. Assume that $\{u_\epsilon\}_{\epsilon\in J}$ is a sequence of solutions
to \ref{dppe} which converges as $\epsilon\to 0$, $\epsilon \in J$
uniformly, to some continuous limit function $u\in C(\mathbb{R}^N)$. Then for every
$x\in \mathcal{D}$, $r>0$ and every $\phi\in C^2(\mathbb{R}^N)$ such that $\phi(x) = u(x)$ and
$\nabla \phi(x)\neq 0$, we have:
\begin{enumerate}[leftmargin=7mm]
\item[(i)] if $\phi > u$ on $\bar B_r(x)\setminus \{x\}$, then
$\mathcal{L}_{s,\mathrm{\bf p}}[\tilde \phi](x)\geq 0$,
\item[(ii)] if $\phi < u$ on $\bar B_r(x)\setminus \{x\}$, then
$\mathcal{L}_{s,\mathrm{\bf p}}[\tilde \phi](x)\leq 0$,
\end{enumerate}
where we denoted: $\tilde\phi = \mathds{1}_{\bar B_r(x)}\cdot\phi +\mathds{1}_{\mathbb{R}^N\setminus \bar B_r(x)}\cdot u$.
\end{theorem}
\begin{remark}\label{remi_visc}
Recall by Remark \ref{rem2.5} that $\mathcal{L}_{s,\mathrm{\bf p}}$
differs from $\Delta_\mathrm{\bf p}^s$ only by a multiplicative constant,
depending on $N, s, \mathrm{\bf p}$. It is also clear that $u$ as in Theorem \ref{th_viscosity} satisfies $u=F$
in $\mathbb{R}^N\setminus \mathcal{D}$. A function $u$ satisfying only
the one-sided comparison with test functions (i) (rather than both
conditions (i) and (ii)) is called a viscosity subsolution to the non-local Dirichlet problem:
\begin{equation}\label{nD}
\Delta_\mathrm{\bf p}^s u = 0 \quad \mbox{in } \mathcal{D}, \qquad u=F \quad \mbox{in } \mathbb{R}^N\setminus \mathcal{D}.
\end{equation}
When (i) is replaced by (ii), then $u$ is called a viscosity
supersolution. Satisfaction of both conditions is referred to as $u$
being a viscosity solution of (\ref{nD}). See also \cite[Definition 4.4]{BCF2}.
\end{remark}
\medskip
\noindent {\bf Proof of Theorem \ref{th_viscosity}}
{\bf 1.} Let $\phi, r$ and $x\in\mathcal{D}$ be as indicated. We will show that
(ii) holds, while the property (i) can be deduced by a symmetric argument.
For all $j\in\mathbb{N}$ such that $\bar B_{1/j}(x)\subset\mathcal{D}$, define
$\epsilon_j>0$ by requesting that:
$$\|u_\epsilon - u\|_{L^\infty}\leq \frac{1}{2} \min_{\bar
B_r(x)\setminus B_{1/j}(x)} (u-\phi) \qquad\mbox{for all }\;
\epsilon\leq \epsilon_j,~ \epsilon\in J.$$
Without loss of generality, the sequence $\{\epsilon_j\}_{j=1}^\infty$
is decreasing to $0$. Let $\{x_\epsilon\in\mathcal{D}\}_{\epsilon\in J}$ be a
sequence with the property that:
$$(u_\epsilon - \phi)(x_\epsilon) = \min_{\bar B_{1/j}(x)} (u_\epsilon-\phi)
\qquad \mbox{and} \qquad x_\epsilon\in\bar B_{1/j}(x) ~~ \mbox{ for
all }\; \epsilon\in (\epsilon_{j+1}, \epsilon_j]\cap J.$$
Then, for all $\bar x\in \bar B_r(x)\setminus B_{1/j}(x)$ we have:
\begin{equation*}
\begin{split}
(u_\epsilon - \phi)(\bar x) & \geq (u-\phi)(\bar x) - \frac{1}{2}\min_{\bar
B_r(x)\setminus B_{1/j}(x)} (u-\phi)\geq \frac{1}{2}\min_{\bar
B_r(x)\setminus B_{1/j}(x)} (u-\phi) \\ & \geq (u_\epsilon - u)(x) =
(u_\epsilon-\phi)(x) \geq (u_\epsilon-\phi)(x_\epsilon).
\end{split}
\end{equation*}
This implies:
\begin{equation}\label{tredici}
(u_\epsilon-\phi)(x_\epsilon) = \min_{\bar B_r(x)} (u_\epsilon-\phi) ~~
\mbox{ for all }\; \epsilon\in J \qquad\mbox{and} \qquad x_\epsilon\to x
~~ \mbox{ as }\; \epsilon\to 0 ,~\epsilon\in J.
\end{equation}
\smallskip
{\bf 2.} Since $u_\epsilon$ satisfies \ref{dppe}, it follows that:
\begin{equation*}
\begin{split}
\mathcal{A}_\epsilon\tilde\phi (x_\epsilon) - \tilde\phi(x_\epsilon) & =
\big(\mathcal{A}_\epsilon\tilde\phi (x_\epsilon) - \tilde\phi(x_\epsilon) \big)
- \big(\mathcal{A}_\epsilon u_\epsilon (x_\epsilon) - u_\epsilon (x_\epsilon)\big)
\\ & \leq \sup_{|y|=1} \fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)}\tilde
\phi(x_\epsilon +z) - u_\epsilon(x_\epsilon+z) - \phi(x_\epsilon) + u_\epsilon(x_\epsilon)\;\mathrm{d}\mu_s^N(z).
\end{split}
\end{equation*}
We fix $|y|=1$ and estimate the integral above. By (\ref{tredici}) it follows that $\tilde
\phi(x_\epsilon+z)-u_\epsilon(x_\epsilon+z) \leq
\phi(x_\epsilon)-u_\epsilon(x_\epsilon)$ whenever $x_\epsilon+z\in\bar
B_r(x)$, so the said integral is bounded by:
\begin{equation*}
\begin{split}
\frac{1}{\mu_s^N(T_\mathrm{\bf p}^{\epsilon, \infty})}&\int_{T_\mathrm{\bf p}^{\epsilon,
\infty}(y)\setminus \bar B_r(x-x_\epsilon)} u(x_\epsilon+z) -
u_\epsilon(x_\epsilon+z) - \phi(x_\epsilon) +
u_\epsilon(x_\epsilon)\;\mathrm{d}\mu_s^N(z) \\ & \leq \frac{\mu_s^N(T_\mathrm{\bf p}^{\epsilon,
\infty}(y)\setminus \bar B_r(x-x_\epsilon))}{\mu_s^N(T_\mathrm{\bf p}^{\epsilon, \infty})}
\cdot \big(2\|u-u_\epsilon\|_{L^\infty} + |u(x_\epsilon) - \phi(x_\epsilon)|\big).
\end{split}
\end{equation*}
Hence we get:
\begin{equation}\label{quattordici}
\begin{split}
\mathcal{A}_\epsilon\tilde\phi (x_\epsilon) - \tilde\phi(x_\epsilon) & \leq
\frac{\mu_s^N(T_\mathrm{\bf p}^{r/2, \infty})}{\mu_s^N(T_\mathrm{\bf p}^{\epsilon, \infty})}
\cdot \big(2\|u-u_\epsilon\|_{L^\infty} + |u(x_\epsilon) - u(x)|+
|\phi(x_\epsilon) -\phi(x)|\big) \\ & = o(\epsilon^{2s})\quad \mbox{ as }\; \epsilon\to 0.
\end{split}
\end{equation}
\smallskip
{\bf 3.} In this step, we show that:
\begin{equation}\label{diciasette}
\big|\mathcal{L}_{s,\mathrm{\bf p}}^\epsilon[\tilde \phi](x_\epsilon) -
\mathcal{L}_{s,\mathrm{\bf p}}[\tilde \phi](x_\epsilon)\big| = o(1)\quad \mbox{ as }\; \epsilon\to 0.
\end{equation}
The argument follows by inspecting the proof of Lemma \ref{prop3}. With
the parallel notation $m=\big|\frac{p_x^\epsilon}{|p_x^\epsilon|} - y_\delta^\epsilon\big|$,
where $p_x^\epsilon=\nabla \phi(x_\epsilon)\neq 0$, and where the unit
vector $y_\delta^\epsilon$ is an almost
maximizer of the function $y\mapsto \int_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)}\tilde\phi(x_\epsilon+z)\;\mathrm{d}\mu_s^N(z)$, we get the following
replacement of (\ref{tre.6}):
\begin{equation*}
\begin{split}
\delta & \geq \int_{T_\mathrm{\bf p}^{\epsilon,r-|x_\epsilon
-x|}(\frac{p_x^\epsilon}{|p_x^\epsilon|})}\phi(x_\epsilon +z) -
\phi\big (x_\epsilon+R_{y_\delta^\epsilon,
\frac{p_x^\epsilon}{|p_x^\epsilon|}}z\big)\;\mathrm{d}\mu_s^N(z) \\ & \quad -
\omega_u(m)\cdot \int_{T_\mathrm{\bf p}^{r+|x_\epsilon-x|, \infty}}(1+|z|)\;\mathrm{d}\mu_s^N(z) -
2\|\tilde \phi\|_{L^\infty}\cdot \mu_s^N\big(T_\mathrm{\bf p}^{r-|x_\epsilon-x|, r+|x_\epsilon-x|}\big)
\\ & \geq \Big\langle p_x^\epsilon, \big(Id_N - R_{y_\delta^\epsilon,
\frac{p_x^\epsilon}{|p_x^\epsilon|}}\big) \int_{T_\mathrm{\bf p}^{\epsilon,r-|x_\epsilon
-x|}(\frac{p_x^\epsilon}{|p_x^\epsilon|})} z \;\mathrm{d}\mu_s^N(z) \Big\rangle -
O(1) m - O(1) \omega_u(m) - O(1) |x_\epsilon - x| \\ & \geq
\frac{m^2|p_x^\epsilon|}{2}\int_{T_\mathrm{\bf p}^{\epsilon, r/2}}\langle z,e_1\rangle \;\mathrm{d}\mu_s^N(z) - O(1),
\end{split}
\end{equation*}
where $O(1)$ depends on $N,s,\mathrm{\bf p},r$ and $\|\tilde \phi\|_{L^\infty}$,
$\|\nabla^2\phi\|_{L^\infty(\bar B_r(x))}$. Consequently:
\begin{equation}\label{ds1}
m^2\leq \frac{O(1)}{|p_x^\epsilon|\int_{T_\mathrm{\bf p}^{\epsilon, r/2}}\langle
z,e_1\rangle \;\mathrm{d}\mu_s^N(z)}\leq \frac{O(1)}{|p_x^\epsilon|}\epsilon^{2s-1}.
\end{equation}
Further, as in (\ref{sette}) we get:
$$\Big|\mathcal{L}_{s,\mathrm{\bf p}}^\epsilon[\tilde\phi](x_\epsilon) -
\int_{T_\mathrm{\bf p}^{\epsilon, \infty}(\frac{p_x^\epsilon}{|p_x^\epsilon|})}L_{\tilde\phi}(x_\epsilon,
z,z)\;\mathrm{d}\mu_s^N(z)\Big|\leq \delta +J_1 +J_2+J_3,$$
where:
\begin{equation*}
\begin{split}
J_1 & \leq \Big|\int_{T_\mathrm{\bf p}^{\epsilon, r-|x_\epsilon-x|}(\frac{p_x^\epsilon}{|p_x^\epsilon|})}
\phi(x_\epsilon+z) - \phi \big(x_\epsilon + R_{y_\delta^\epsilon, \frac{p_x^\epsilon}{|p_x^\epsilon|}}z\big)
+\phi(x_\epsilon -z) - \phi \big(x_\epsilon - R_{y_\delta^\epsilon, \frac{p_x^\epsilon}{|p_x^\epsilon|}}z\big)
\;\mathrm{d}\mu_s^N(z)\Big| \\ & \leq \int_{T_\mathrm{\bf p}^{\epsilon, r-|x_\epsilon-x|}(\frac{p_x^\epsilon}{|p_x^\epsilon|})}
\Big|\big\langle \nabla\phi (x_\epsilon+z) - \nabla\phi(x_\epsilon
-z), \big(Id_N - R_{y_\delta^\epsilon, \frac{p_x^\epsilon}{|p_x^\epsilon|}}\big)z\big\rangle\Big|\;\mathrm{d}\mu_s^N(z)
+ O(1) m^2 \\ & \leq O(1) m,
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
J_2 & \leq \Big|\int_{T_\mathrm{\bf p}^{\epsilon, r+|x_\epsilon-x|}(\frac{p_x^\epsilon}{|p_x^\epsilon|})}
u (x_\epsilon+z) - u \big(x_\epsilon + R_{y_\delta^\epsilon, \frac{p_x^\epsilon}{|p_x^\epsilon|}}z\big)
+u(x_\epsilon -z) - u \big(x_\epsilon - R_{y_\delta^\epsilon, \frac{p_x^\epsilon}{|p_x^\epsilon|}}z\big)
\;\mathrm{d}\mu_s^N(z)\Big| \\ & \leq O(1) \omega_u(m), \\
J_3 & \leq \int_{T_\mathrm{\bf p}^{ r-|x_\epsilon-x|, r+|x_\epsilon-x|}}
4\|\tilde\phi\|_{L^\infty}\;\mathrm{d}\mu_s^N(z) \leq O(1) |x_\epsilon-x|.
\end{split}
\end{equation*}
After passing $\delta\to 0$ and recalling (\ref{ds1}), we conclude:
\begin{equation*}
\Big|\mathcal{L}_{s,\mathrm{\bf p}}^\epsilon[\tilde \phi](x_\epsilon)
-\int_{T_\mathrm{\bf p}^{\epsilon, \infty}(\frac{p_x^\epsilon}{|p_x^\epsilon|})} L_{\tilde\phi}
(x_\epsilon, z,z)\;\mathrm{d}\mu_s^N(z) \Big| \leq O(1)\big( m + \omega_u(m) +
|x_\epsilon-x|\big) = o(1) \quad \mbox{ as }\; \epsilon\to 0.
\end{equation*}
The above implies (\ref{diciasette}), in view of: $\big|
\int_{T_\mathrm{\bf p}^{0, \epsilon}(\frac{p_x^\epsilon}{|p_x^\epsilon|})} L_{\tilde\phi}
(x_\epsilon, z,z)\;\mathrm{d}\mu_s^N(z) \big| = O(1)\epsilon^{2-2s}$.
\smallskip
{\bf 4.} Recall that by Theorem \ref{thm4.7} we also directly have:
$$\Big|\mathcal{L}_{s,\mathrm{\bf p}}^\epsilon[\tilde \phi](x)
- \mathcal{L}_{s,\mathrm{\bf p}}[\tilde\phi](x)\Big|\leq o(1) \quad \mbox{ as }\; \epsilon\to 0,$$
because $\tilde\phi = \phi$ on $\bar B_r(x)$ has regularity $C^2$,
while $\tilde\phi = u$ on $\mathbb{R}^N\setminus \bar B_r(x)$ is uniformly
continuous. Together with (\ref{diciasette}), this yields:
\begin{equation}\label{quindici}
\begin{split}
\Big|\big(\mathcal{A}_\epsilon\tilde\phi &(x_\epsilon) - \tilde\phi(x_\epsilon)\big)
- \big(\mathcal{A}_\epsilon \tilde\phi(x) - \tilde\phi (x)\big)\Big| =
\frac{1}{\mu_s^N(T_\mathrm{\bf p}^{\epsilon, \infty})} \Big|\mathcal{L}_{s,\mathrm{\bf p}}^\epsilon[\tilde \phi](x_\epsilon)
- \mathcal{L}_{s,\mathrm{\bf p}}^\epsilon[\tilde\phi](x)\Big| \\ &\leq
\frac{1}{\mu_s^N(T_\mathrm{\bf p}^{\epsilon, \infty})} \Big(\Big|\mathcal{L}_{s,\mathrm{\bf p}}[\tilde \phi](x_\epsilon)
- \mathcal{L}_{s,\mathrm{\bf p}}[\tilde\phi](x)\Big| + o(1)\Big) \quad \mbox{ as }\; \epsilon\to 0.
\end{split}
\end{equation}
We now denote $R\doteq R_{\frac{p_x^\epsilon}{|p_x^\epsilon|},\frac{p_x}{|p_x|}}$ and estimate:
\begin{equation*}
\begin{split}
& \Big|\mathcal{L}_{s,\mathrm{\bf p}}[\tilde \phi](x_\epsilon) -
\mathcal{L}_{s,\mathrm{\bf p}}[\tilde\phi](x)\Big| \\ & \leq
\int_{T_\mathrm{\bf p}^{0,\infty}(\frac{p_x}{|p_x|})}\Big|\big(\tilde\phi(x_\epsilon + R z)
+ \tilde\phi (x_\epsilon - R z) - 2\tilde\phi(x_\epsilon)\big) - \big(\tilde \phi(x+z)
+\tilde\phi(x-z)-2\tilde\phi(x)\big)\Big|\;\mathrm{d}\mu_s^N(z) \\ & \leq \frac{1}{2}\int_{T_\mathrm{\bf p}^{0,r}}|z|^2\;\mathrm{d}\mu_s^N(z)
\cdot \sup_{z\in B_r(0)}\Big|\big( \nabla\phi(x_\epsilon + Rz) + \nabla\phi(x_\epsilon-Rz)\big)
- \big( \nabla\phi(x + Rz) + \nabla\phi(x-Rz)\big)\Big|\\
&\quad + \int_{T_\mathrm{\bf p}^{r+|x_\epsilon - x|,\infty}}
2\omega_u(|x_\epsilon - x|) + 2\omega_u(|Id_N-R|)\cdot (1+|z|)\;\mathrm{d}\mu_s^N(z)
\\ &\quad + \int_{T_\mathrm{\bf p}^{r-|x_\epsilon - x|, r+|x_\epsilon - x|}} 6\|\tilde\phi\|_{L^\infty}\;\mathrm{d}\mu_s^N(z)
\\ &\leq O(1) \cdot\Big(|x_\epsilon - x| + \omega_u(|x_\epsilon-x|) +
\omega_u\big(\big|\frac{p_x^\epsilon}{|p_x^\epsilon|} -
\frac{p_x}{|p_x|}\big|\big)\Big) \leq o(1) \quad \mbox{ as }\; \epsilon\to 0.
\end{split}
\end{equation*}
At this point, combining (\ref{quindici}) with (\ref{quattordici}) results in:
$$\mathcal{A}_\epsilon\tilde \phi(x) - \tilde\phi(x) \leq o(\epsilon^{2s}) \quad \mbox{ as }\; \epsilon\to 0.$$
On the other hand, Theorem \ref{thm4.7} implies that:
$$\mathcal{A}_\epsilon\tilde \phi(x) - \tilde\phi(x) =
\frac{s}{C(N,s) |A_\mathrm{\bf p}|} \epsilon^{2s}\cdot
\mathcal{L}_{s,\mathrm{\bf p}}[\tilde\phi](x) + o(\epsilon^{2s}).$$
The above two asymptotic statements directly yield
$\mathcal{L}_{s,\mathrm{\bf p}}[\tilde\phi](x)\leq 0$, as claimed.
\hspace*{\fill}\mbox{\ \rule{.1in}{.1in}}\medskip
\section{The non-local Tug-of-War game with noise}\label{sec_game}
In this section, we develop the basic probability setting related to the equation \ref{dppe}.
\medskip
{\bf 1.} Consider the probability space $(T_\mathrm{\bf p}^{1,\infty}, \mathcal{B},
\frac{1}{\mu_s^N(T_\mathrm{\bf p}^{1,\infty})}\mu_s^N)$ equipped with the standard Borel
$\sigma$-algebra and the normalised $\mu_s^N$ measure, and define
$(\Omega_1, \mathcal{F}_1, \mathbb{P}_1)$ as its product with
the uniform probability measure on the discrete set $\{1,2\}$. In particular,
for every $B\in\mathcal{B}$, we have:
$$\mathbb{P}_1\big(B\times \{1,2\}\big) = \frac{2s}{|A_\mathrm{\bf p}|}\int_{B}\frac{1}{|z|^{N+2s}}\;\mbox{d}z.$$
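As a sanity check, taking $B=T_\mathrm{\bf p}^{1,\infty}$ recovers the
normalisation of the probability measure $\mathbb{P}_1$; indeed, by the radial computation
$\int_{T_\mathrm{\bf p}^{1,\infty}}|z|^{-N-2s}\;\mbox{d}z = \frac{|A_\mathrm{\bf p}|}{2s}$, as in (\ref{undici}), we get:
$$\mathbb{P}_1\big(T_\mathrm{\bf p}^{1,\infty}\times \{1,2\}\big) = \frac{2s}{|A_\mathrm{\bf p}|}\cdot\frac{|A_\mathrm{\bf p}|}{2s} = 1.$$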
Further, the countable product of $(\Omega_1, \mathcal{F}_1,
\mathbb{P}_1)$ is denoted by $(\Omega, \mathcal{F}, \mathbb{P})$, where:
\begin{equation*}
\begin{split}
\Omega= (\Omega_1)^{\mathbb{N}} = \big\{\omega=\{(z_i,
s_i)\}_{i=1}^\infty; ~ z_i\in T_\mathrm{\bf p}^{1,\infty},~ s_i\in \{1,2\} ~~\mbox{ for all } i\in\mathbb{N}\big\}.
\end{split}
\end{equation*}
For each $n\in\mathbb{N}$, the probability space $(\Omega_n, \mathcal{F}_n, \mathbb{P}_n)$ is the
product of $n$ copies of $(\Omega_1, \mathcal{F}_1,
\mathbb{P}_1)$ and the $\sigma$-algebra $\mathcal{F}_n$ is
identified with the sub-$\sigma$-algebra of $\mathcal{F}$,
consisting of sets $A\times \prod_{i=n+1}^\infty\Omega_1$
for all $A\in\mathcal{F}_n$. Then $\{\mathcal{F}_n\}_{n=0}^\infty$ where
$\mathcal{F}_0= \{\emptyset, \Omega\}$, is a filtration of $\mathcal{F}$.
\medskip
{\bf 2.} Given are two families of functions $\sigma_I=\{\sigma_I^n\}_{n=0}^\infty$ and
$\sigma_{II}=\{\sigma_{II}^n\}_{n=0}^\infty$, defined on the
corresponding spaces of ``finite histories'' $H_n=\mathbb{R}^N\times (\mathbb{R}^N\times\Omega_1)^n$:
$$\sigma_I^n, \sigma_{II}^n:H_n\to \{y\in\mathbb{R}^N;~ |y|=1\},$$
assumed to be measurable with respect to the (target) Borel
$\sigma$-algebra and the (domain) product $\sigma$-algebra on $H_n$.
For every $x\in\mathbb{R}^N$ and $\epsilon\in (0,1)$ we recursively define:
$$\big\{X_n^{\epsilon, x, \sigma_I, \sigma_{II}}:\Omega\to\mathbb{R}^N\big\}_{n=0}^\infty.$$
For simplicity of notation, we often suppress some of the superscripts $\epsilon, x, \sigma_I, \sigma_{II}$
and write $X_n$
instead of $ X_n^{\epsilon, x, \sigma_I, \sigma_{II}}$, if no ambiguity arises.
Recall that $R_{\tilde y, y}\in SO(N)$ is as in Definition \ref{def_cone}. We put:
\begin{equation}\label{processMp}
\begin{split}
& \, X_0\equiv x, \\ & \, X_n\big((z_1,s_1), \ldots,
(z_n,s_n)\big) \doteq X_{n-1} + \left\{\begin{array}{ll}
\epsilon R_{\sigma_I^{n-1}, e_1}z_n & \mbox{for } s_n=1 \vspace{1mm} \\
\epsilon R_{\sigma_{II}^{n-1}, e_1}z_n & \mbox{for } s_n=2. \end{array} \right.\\
\end{split}
\end{equation}
In this ``game'', each of the two players chooses (deterministically) a
direction $y$, according to their ``strategy'' $\sigma_I$ and
$\sigma_{II}$. These choices are activated by the value of the equally probable outcomes:
$s_n=1$ activates $\sigma_I$ and $s_n=2$ activates $\sigma_{II}$.
The position $X_{n-1}$ is then advanced by a shift
$\epsilon R z\in T_\mathrm{\bf p}^{\epsilon, \infty}(y)$, randomly in $z$ according to the
normalised measure $\mu_s^N$ on $T_\mathrm{\bf p}^{1,\infty}$.
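Although not needed in the sequel, the process (\ref{processMp}) is straightforward to simulate. The sketch below is a hypothetical 1-D toy version, in which directions reduce to signs, and the rotations $R_{y,e_1}$ and the cone geometry are not modelled; the radial law is sampled by the inverse CDF, using that the normalised measure on $T_\mathrm{\bf p}^{1,\infty}$ assigns $\mathbb{P}(|z|>r)=r^{-2s}$:

```python
import random

def sample_radius(s, rng):
    # Inverse-CDF sampling of the normalised radial law on [1, oo):
    # P(|z| > r) = r^(-2s), i.e. the density 2s * t^(-1-2s) dt.
    return rng.random() ** (-1.0 / (2.0 * s))

def play_game(x0, eps, s, F, D, sigma_I, sigma_II, rng, max_rounds=10**6):
    # Run the toy Tug-of-War: a fair coin (s_n in {1,2}) decides whose
    # strategy picks the direction; the position moves by eps*direction*|z|.
    # The game stops at the first exit time tau from D, with payoff F(X_tau).
    x = x0
    for _ in range(max_rounds):
        direction = sigma_I(x) if rng.random() < 0.5 else sigma_II(x)
        x = x + eps * direction * sample_radius(s, rng)
        if not D(x):
            return x, F(x)
    raise RuntimeError("no exit from D within max_rounds")
```

For instance, on $\mathcal{D}=(0,1)$ with $F=\mathds{1}_{[1,\infty)}$, one player pulling towards $+1$ and the other towards $-1$, every run stops after finitely many rounds and returns the payoff $0$ or $1$.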
\medskip
{\bf 3.} Given an open, bounded domain $\mathcal{D}\subset\mathbb{R}^N$,
define further the $\mathcal{F}$-measurable random variable:
$\tau^{\epsilon, x, \sigma_I, \sigma_{II}}:\Omega\to \mathbb{N}\cup\{+\infty\}$ by:
$$\tau(\omega) \doteq \min\big\{n\geq 1;~ X_n\not\in \mathcal{D}\big\}.$$
We observe that $\tau$ is a stopping time relative to the filtration
$\{\mathcal{F}_n\}_{n=0}^\infty$, and that it is finite $\mathbb{P}$-a.e.
Indeed, since $\mathbb{P}_1(T_\mathrm{\bf p}^{\mathrm{diam}\mathcal{D}/\epsilon,
\infty}\times \{1,2\})>0$ it follows that $\mathbb{P}\big(\omega;~ \exists
i ~~ z_i\in T_\mathrm{\bf p}^{\mathrm{diam}\mathcal{D}/\epsilon, \infty}\big)=1$, and on this event $\tau<\infty$.
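The above observation can be quantified. Denoting by $q\in (0,1]$ the
probability that a single round produces a shift of length at least
$\mathrm{diam}\mathcal{D}$, the independence of the coordinates of $\omega$ yields:
$$\mathbb{P}(\tau>n)\leq (1-q)^n \qquad\mbox{for all }\; n\in\mathbb{N},$$
so that $\tau$ has, in fact, finite expectation.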
Let $F:\mathbb{R}^N\to\mathbb{R}$ be a given bounded, uniformly continuous function. In our
``game'', the first ``player'' collects from his
opponent the payoff, given by the data $F$ at the stopping position. The incentive
of the collecting ``player'' to maximize the outcome and of the
disbursing ``player'' to minimize it, leads to the definition of the two game values in:
\begin{equation}\label{15}
\begin{split}
& u_I^\epsilon(x) =
\sup_{\sigma_I}\inf_{\sigma_{II}}\mathbb{E}\Big[F\circ \big(X^{\epsilon, x,
\sigma_I, \sigma_{II}}\big)_{\tau^{\epsilon, x, \sigma_I, \sigma_{II}}}\Big], \\
& u_{II}^\epsilon(x) =
\inf_{\sigma_{II}}\sup_{\sigma_{I}}\mathbb{E}\Big[F\circ \big(X^{\epsilon, x,
\sigma_I, \sigma_{II}}\big)_{\tau^{\epsilon, x, \sigma_I, \sigma_{II}}}\Big].
\end{split}
\end{equation}
It is clear that $u_{I}^\epsilon$, $u_{II}^\epsilon$ depend only on the values of
$F$ on $\mathbb{R}^N\setminus\mathcal{D}$. We now show that both game values coincide with the unique solution to the
dynamic programming principle \ref{dppe} modeled on the non-local
asymptotic expansion in Theorem \ref{thm4.7}.
\medskip
\begin{lemma}\label{lem_ue}
For each $\epsilon\ll 1$ we have $u_I^\epsilon =
u_{II}^\epsilon=u_\epsilon$, where $u_\epsilon$ is the unique bounded, Borel solution to \ref{dppe}.
\end{lemma}
\begin{proof}
{\bf 1.} We will show that $u_{II}^\epsilon\leq u_\epsilon$, while the
inequality $u_\epsilon\leq u_{I}^\epsilon$ can be proved by a
symmetric argument and $u_I^\epsilon \leq u_{II}^\epsilon$ is always valid.
Fix $x\in\mathbb{R}^N$ and $\epsilon, \delta>0$. We choose a strategy
$\sigma_{II,0}=\{\sigma_{II,0}^n(X_n)\}_{n=0}^\infty$ satisfying:
\begin{equation}\label{markov}
\inf_{|y|=1} \fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u_\epsilon(x+z)\;\mathrm{d}\mu_s^N(z) + \frac{\delta}{2^{n+1}}
\geq \fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(\sigma_{II,0}^n(x))} u_\epsilon(x+z)\;\mathrm{d}\mu_s^N(z)
\qquad\mbox{for all }\; x\in\mathbb{R}^N.
\end{equation}
The fact that such Borel-regular strategy exists follows from Lemma \ref{conti}.
Indeed, let $\{B(x_i, \xi)\}_{i=1}^\infty$ be a locally finite
covering of $\mathbb{R}^N$, where:
$$\Big|\inf_{|y|=1}\fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u_\epsilon(x+z)\;\mathrm{d}\mu_s^N(z)
- \inf_{|y|=1}\fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u_\epsilon(\bar
x+z)\;\mathrm{d}\mu_s^N(z)\Big|\leq \frac{\delta}{2^{n+2}} \qquad \mbox{for all }\; |x-\bar x|<\xi. $$
For each $i\in\mathbb{N}$ there exists then $|y_i|=1$ with the property:
$\big|\inf_{|y|=1}\fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y)} u_\epsilon(x_i+z)\;\mathrm{d}\mu_s^N(z) - \fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(y_i)}
u_\epsilon(x_i+z)\;\mathrm{d}\mu_s^N(z)\big|<\frac{\delta}{2^{n+2}}$. We hence define:
$$\sigma_{II,0}^n(x) = y_i\qquad\mbox{for all }\; x\in B(x_i,
\xi)\setminus \bigcup_{j=1}^{i-1}B(x_j,\xi).$$
\smallskip
{\bf 2.} Fix $x\in\mathcal{D}$ and a strategy $\sigma_I$. Consider the sequence of random variables:
$$M_n\doteq u_\epsilon\circ X_{n\wedge \tau}^{\epsilon, x, \sigma_I,\sigma_{II,0}} + \frac{\delta}{2^n}.$$
We now check that $\{M_n\}_{n=1}^\infty$ is a supermartingale with
respect to the filtration $\{\mathcal{F}_n\}_{n=0}^\infty$. On the
event $n>\tau$ we have $X_{n\wedge\tau}=X_{(n-1)\wedge\tau}$, so that
$\mathbb{E}(M_n\mid \mathcal{F}_{n-1})
=M_{n-1}-\frac{\delta}{2^n}\leq M_{n-1}$. On the other hand, on the event $n\leq\tau$ we have
$X_{n-1}\in\mathcal{D}$, so the property \ref{dppe} and (\ref{markov}) imply that:
\begin{equation*}
\begin{split}
& \mathbb{E}(M_n\mid \mathcal{F}_{n-1}) - M_{n-1} \\ & =\frac{1}{2}\Big(
\fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(\sigma_{I}^{n-1})} u_\epsilon(X_{n-1}+z)\;\mathrm{d}\mu_s^N(z) +
\fint_{T_\mathrm{\bf p}^{\epsilon, \infty}(\sigma_{II,0}^{n-1}(X_{n-1}))} u_\epsilon(X_{n-1}+z)\;\mathrm{d}\mu_s^N(z) \Big)
-\frac{\delta}{2^{n}} -u_\epsilon(X_{n-1}) \\ & \leq
\mathcal{A}_\epsilon u_\epsilon(X_{n-1}) + \frac{\delta}{2^{n+1}} - \frac{\delta}{2^{n}} -u_\epsilon(X_{n-1}) \leq 0.
\end{split}
\end{equation*}
Using Doob's optional stopping theorem, we arrive at:
$$u_\epsilon(x)+\delta =\mathbb{E}[M_0]\geq \mathbb{E}[M_\tau] = \mathbb{E}\Big[\frac{\delta}{2^{\tau}}
+F\circ X_\tau^{\epsilon, x, \sigma_I,
\sigma_{II,0}}\Big]\geq \mathbb{E}\big[F\circ X_\tau^{\epsilon, x, \sigma_I, \sigma_{II,0}}\big].$$
Since $\sigma_I$ was arbitrary, taking first the supremum over
$\sigma_I$ and then the infimum over $\sigma_{II}$ yields:
$u_\epsilon(x)+\delta\geq u^\epsilon_{II}(x)$,
concluding the proof in view of $\delta>0$ being arbitrary.
\end{proof}
\section{Auxiliary estimates for the barrier function}\label{sec_ft}
The purpose of this section is to show the first boundary regularity
estimate for the game process $\{X_n\}_{n=0}^\infty$, towards establishing
our main asymptotic equicontinuity result.
\begin{theorem}\label{cone_thengood}
Let $\mathcal{D}\subset\mathbb{R}^N$ be an open, bounded domain satisfying the external cone condition.
Namely, assume that there exists a finite cone $C$ such that for
each $x\in\partial \mathcal{D}$ there holds: $x+ S_xC\subset\mathbb{R}^N\setminus \mathcal{D}$, for
some rotation $S_x\in SO(N)$. Then:
\begin{equation}\label{gr1}
\begin{split}
\forall \delta>0\quad\exists \hat\delta<\delta,~
\hat\epsilon>0\quad & \forall \epsilon<\hat\epsilon,~ x\in\partial\mathcal{D},~ x_0\in B_{\hat\delta}(x)\cap\mathcal{D} \quad
\\ & \exists \sigma_{I,0}\quad\forall\sigma_{II}\quad
\mathbb{P}\big(\exists n<\tau \quad X_n^{\epsilon, x_0,\sigma_{I,0}, \sigma_{II}}\not\in B_\delta(x)\big)\leq\bar\theta,
\end{split}
\end{equation}
with a constant $\bar\theta<1$ depending only on $N,\mathrm{\bf p},s$ and the cone $C$.
\end{theorem}
The proof will rely on suitable barrier functions, introduced in \cite{BCF2}. Namely, consider the
uniformly continuous and bounded $f_t:\mathbb{R}^N\to\mathbb{R}$, where for each $t>0$ we define:
$$f_t(x) = \min\big\{2^t, |x|^{-t}\big\}.$$
We start by observing a refinement of \cite[Lemma 3.10]{BCF2}:
\begin{proposition}\label{Lspft}
There exists $t_0\gg 1$ depending on $N,\mathrm{\bf p}, s$, such that for all
$t\geq t_0$ and all $|x|\geq 1$ there holds:
$$\mathcal{L}_{s,\mathrm{\bf p}}[f_t](x)\geq C |x|^{-2s-t},$$
with a constant $C$ depending on $N,\mathrm{\bf p}, s$ but not on $t$ or $x$.
\end{proposition}
\begin{proof}
Observe that $\mathcal{L}_{s,\mathrm{\bf p}}[f_t](x)=\mathcal{L}_{s,\mathrm{\bf p}}[f_t](|x| e_1)$ by
rotational invariance. It hence suffices to estimate, after changing variables:
\begin{equation}\label{muno}
\begin{split}
\int_{T_\mathrm{\bf p}^{0,\infty}}\frac{L_{f_t}(|x| e_1, z, z)}{|z|^{N+2s}}\;\mbox{d}z & =
|x|^{-2s}\int_{T_\mathrm{\bf p}^{0,\infty}}\frac{f_t(|x| (e_1+z)) + f_t(|x|(
e_1-z))-2f_t(|x| e_1)}{|z|^{N+2s}}\;\mbox{d}z \\ & \geq
|x|^{-2s-t}\int_{T_\mathrm{\bf p}^{0,\infty}}\frac{f_t(e_1+z) + f_t(e_1-z)-2f_t(e_1)}{|z|^{N+2s}}\;\mbox{d}z,
\end{split}
\end{equation}
where in the last step above we used that $f_t(|x| z)\geq
|x|^{-t}f_t(z)$; indeed, for $|x|\geq 1$ there holds: $f_t(|x|z) =
\min\big\{2^t, |x|^{-t}|z|^{-t}\big\}\geq |x|^{-t}\min\big\{2^t,
|z|^{-t}\big\} = |x|^{-t}f_t(z)$. Further, $L_{f_t}(e_1, z,
z)\geq -2$ for all $z$, while for $|z|\leq \frac{1}{2}$ we have:
\begin{equation*}
\begin{split}
L_{f_t}(e_1, z, z) & = |1+|z|^2 +
2\langle e_1, z\rangle|^{-t/2} + |1+|z|^2 - 2\langle e_1, z\rangle|^{-t/2} -2
\\ & \geq 2(1+|z|^2)^{-t/2} + \frac{t}{2}\big(\frac{t}{2}+1\big)(2\langle
e_1, z\rangle)^2 \big(1+|z|^2\big)^{-t/2-2} - 2 \\ & \geq 2\big(1-\frac{t}{2}|z|^2\big)
+ \frac{t}{2} (t+2) \langle e_1, z\rangle^2 \big(1 - \big(\frac{t}{2}+2\big)|z|^2\big) -2\\ &
= \frac{t}{2}(t+2) \langle e_1, z\rangle^2 - t|z|^2 - \frac{t}{4}(t+2)(t+4) \langle e_1, z\rangle^2|z|^2,
\end{split}
\end{equation*}
by Taylor's expansion and since $(1+|z|^2)^{-\alpha}\geq 1- \alpha|z|^2$ whenever $\alpha>0$.
Thus (\ref{muno}) becomes:
\begin{equation*}
\begin{split}
& \int_{T_\mathrm{\bf p}^{0,\infty}} \frac{L_{f_t}(|x| e_1, z, z)}{|z|^{N+2s}}\;\mbox{d}z \geq
|x|^{-2s-t}\Big(\int_{T_\mathrm{\bf p}^{0,r}} \frac{L_{f_t}(e_1, z, z)
}{|z|^{N+2s}}\;\mbox{d}z - \int_{T_\mathrm{\bf p}^{r, \infty}} \frac{2}{|z|^{N+2s}}\;\mbox{d}z \Big)\\ &
\quad = |x|^{-2s-t} |A_\mathrm{\bf p}|\cdot\Big(
\frac{t}{2}(t+2)\frac{\mathrm{\bf p}-1}{N+\mathrm{\bf p}-2}\cdot\frac{r^{2-2s}}{2-2s} - t \frac{r^{2-2s}}{2-2s} \\ &
\quad \qquad\quad\qquad\qquad \quad -
\frac{t}{4}(t+2)(t+4) \frac{\mathrm{\bf p}-1}{N+\mathrm{\bf p}-2}\cdot\frac{r^{4-2s}}{4-2s} - \frac{r^{-2s}}{s} \Big)
\doteq |x|^{-2s-t} |A_\mathrm{\bf p}|\cdot I_{N,\mathrm{\bf p}, s,t,r}
\end{split}
\end{equation*}
by recalling Lemma \ref{lem_Tp} (i), where we fixed some
appropriate $r<\frac{1}{2}$. We now estimate the quantity $I_{N,\mathrm{\bf p}, s,t,r}$.
When $\frac{t+2}{2}\cdot\frac{\mathrm{\bf p}-1}{N+\mathrm{\bf p}-2}\geq 2$, then the first two
terms in $I_{N,\mathrm{\bf p}, s,t,r}$ are bounded from below by:
$\frac{t}{4}(t+2)\frac{\mathrm{\bf p}-1}{N+\mathrm{\bf p}-2}\cdot
\frac{r^{2-2s}}{2-2s}$. Further, when
$(t+4)\frac{r^2}{4-2s}\leq\frac{1}{2(2-2s)}$, then the first three
terms in $I_{N,\mathrm{\bf p}, s,t,r}$ are bounded from below by:
$\frac{t}{8}(t+2)\frac{\mathrm{\bf p}-1}{N+\mathrm{\bf p}-2}\cdot
\frac{r^{2-2s}}{2-2s}$. Finally, when
$\frac{t}{8}(t+2)\frac{\mathrm{\bf p}-1}{N+\mathrm{\bf p}-2}\cdot\frac{r^2}{2-2s}\geq
\frac{2}{s}$, then we have:
$$I_{N,\mathrm{\bf p}, s,t,r}\geq \frac{r^{-2s}}{s}\geq \frac{1}{s}\Big(\frac{2-2s}{2-s} (t+4)\Big)^s.$$
It is clear that the above listed conditions, namely:
$$\frac{t+2}{2}\cdot\frac{\mathrm{\bf p}-1}{N+\mathrm{\bf p}-2}\geq 2\quad\mbox{ and }\quad
\exists r< \frac{1}{2}:\quad \frac{16
(2-2s)(N+\mathrm{\bf p}-2)}{s(\mathrm{\bf p}-1)}\cdot\frac{1}{t(t+2)}\leq r^2\leq \frac{2-s}{2-2s}\cdot\frac{1}{t+4}$$
are compatible for sufficiently large $t\geq t_0(N,\mathrm{\bf p},s)$. The proof is done.
\end{proof}
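The pointwise lower bound for $L_{f_t}(e_1,z,z)$ derived above (valid for $|z|\leq\frac{1}{2}$) admits a direct numerical sanity check; the sketch below, an illustration rather than part of the argument, evaluates both sides on a planar slice $z=(z_1,z_2,0,\ldots,0)$, which suffices by the rotational symmetry in the directions orthogonal to $e_1$:

```python
def second_difference(t, z1, z2):
    # L_{f_t}(e_1, z, z) with f_t(x) = |x|^{-t}; the cap 2^t in f_t is
    # inactive here, since |e_1 +/- z| >= 1/2 whenever |z| <= 1/2.
    plus = (1.0 + z1) ** 2 + z2 ** 2      # |e_1 + z|^2
    minus = (1.0 - z1) ** 2 + z2 ** 2     # |e_1 - z|^2
    return plus ** (-t / 2.0) + minus ** (-t / 2.0) - 2.0

def taylor_lower_bound(t, z1, z2):
    # (t/2)(t+2)<e_1,z>^2 - t|z|^2 - (t/4)(t+2)(t+4)<e_1,z>^2 |z|^2
    z_sq = z1 ** 2 + z2 ** 2
    return (t / 2.0) * (t + 2.0) * z1 ** 2 - t * z_sq \
        - (t / 4.0) * (t + 2.0) * (t + 4.0) * z1 ** 2 * z_sq

def check(t, step=0.05):
    # Verify the bound on a grid of the quarter-disc {|z| <= 1/2, z1, z2 >= 0};
    # the remaining sign choices follow by symmetry of both sides.
    pts = [i * step for i in range(int(0.5 / step) + 1)]
    return all(second_difference(t, a, b) >= taylor_lower_bound(t, a, b) - 1e-9
               for a in pts for b in pts if a * a + b * b <= 0.25)
```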
\begin{corollary}\label{ft_ring}
Let $t_0\gg 1$ be as in Proposition \ref{Lspft}. For every $t\geq t_0$
and $R>1$ there exists $\epsilon_0>0$, depending on $N,\mathrm{\bf p},s,t,R$, such that:
$$\mathcal{A}_{\epsilon}f_t(x)\geq f_t(x) + C \epsilon^{2s} R^{-2s-t}
\qquad\mbox{for all }\; |x|\in [1, R], ~~\epsilon<\epsilon_0,$$
with a constant $C$ depending only on $N,\mathrm{\bf p}, s$.
\end{corollary}
\begin{proof}
We apply Theorem \ref{thm4.7} with $r_x\in \big(\frac{1}{4},
\frac{1}{2}\big)$ and note that $C_x\leq C(N,t)$ and $|p_x|\geq t
R^{-t-1}$ whenever $|x|\in [1,R]$. In view of Proposition \ref{Lspft} it follows that:
$$\mathcal{A}_\epsilon f_t(x) \geq f_t(x) + C_{N,\mathrm{\bf p},s}\epsilon^{2s}
R^{-2s-t} - C_{N,s,t}\epsilon^2 - C_{N,s,t}\epsilon^{2s}\big(m_\epsilon +\omega_{f_t}(m_\epsilon)\big).$$
Recalling Remark \ref{rem4} (i) we see that: $m_\epsilon\leq
C_{N,\mathrm{\bf p},s,t}\epsilon^{s-\frac{1}{2}}R^{t+1}$ and $\omega_{f_t}(m_\epsilon)\leq C_tm_\epsilon$.
Hence:
$$\mathcal{A}_\epsilon f_t(x) \geq f_t(x) + C_{N,\mathrm{\bf p},s}\epsilon^{2s}
\big(R^{-2s-t} - C_{N,s,t}\epsilon^{2-2s} - C_{N,\mathrm{\bf p},s,t} \epsilon^{s-\frac{1}{2}}R^{t+1} \big),$$
and so the result follows for $C_{N,s,t}\epsilon^{2-2s} + C_{N,\mathrm{\bf p},s,t} \epsilon^{s-\frac{1}{2}}R^{t+1} \leq \frac{1}{2}
R^{-2s-t}$.
\end{proof}
\medskip
Towards the proof of Theorem \ref{cone_thengood}, we note that:
\begin{proposition}\label{fe_prob}
Given $R>1$ and $t\geq t_0$, $\epsilon<\epsilon_0$ as in Proposition \ref{Lspft} and Corollary \ref{ft_ring}, let
$v_\epsilon:\mathbb{R}^N\to\mathbb{R}$ be the unique bounded, Borel solution to the problem:
$$ v_\epsilon (x) = \left\{\begin{array}{ll}\mathcal{A}_\epsilon
v_\epsilon (x) & \mbox{ for } \; |x|\in (1, R)\vspace{1mm}\\ f_t(x) & \mbox{ for } \; |x|\not\in (1, R).
\end{array}\right.$$
Then we have:
\begin{enumerate}[leftmargin=7mm]
\item[(i)] $v_\epsilon \geq f_t$ in $\mathbb{R}^N$,
\item[(ii)] For every $\tilde R\in (1,R)$ there exists $\theta_{\tilde R, R}<1$,
depending only on $R,\tilde R, N, \mathrm{\bf p}, s$, such that for all $|x|\in [1,\tilde R]$ and $\epsilon<\epsilon_0$ there holds:
$$\exists \sigma_{I,0}\quad\forall\sigma_{II}\qquad
\mathbb{P}\big(|X^{\epsilon, x, \sigma_{I,0}, \sigma_{II}}_\tau|\geq R\big)\leq\theta_{\tilde R, R},$$
where $\tau$ denotes the first exit time from the annulus $B_{R}(0)\setminus \bar B_1(0)$.
\item[(iii)] For a given $r>0$, let $\bar \tau = \min\{i\geq 0;~
|X_i|\not\in (r, rR^2)\}$. Then, for every $|x|\in [r,rR]$ and
$\epsilon<r\epsilon_0$ there holds, with a constant $\theta_R<1$
depending only on $R, N,\mathrm{\bf p},s$:
$$\exists \sigma_{I,0}\quad\forall\sigma_{II}\qquad
\mathbb{P} \big(|X^{\epsilon, x, \sigma_{I,0}, \sigma_{II}}_{\bar \tau}|\geq rR^2\big)\leq\theta_R.$$
\end{enumerate}
\end{proposition}
\begin{proof}
{\bf 1.} To show (i), observe that by Corollary \ref{ft_ring} we have, for all $|x|\in (1,R)$:
\begin{equation}\label{bda}
\begin{split}
v_\epsilon(x) - f_t(x) & = \big(\mathcal{A}_\epsilon v_\epsilon (x) -
\mathcal{A}_\epsilon f_t(x)\big) + \big( \mathcal{A}_\epsilon f_t(x) -
f_t(x)\big) \\ & \geq \inf_{|y|=1} \fint_{T_\mathrm{\bf p}^{\epsilon,\infty}(y)}
(v_\epsilon -f_t)(x+z)\;\mathrm{d}\mu_s^N(z) + C_{N,\mathrm{\bf p},s}\epsilon^{2s} R^{-2s-t}\\ &
\geq \inf_{\mathbb{R}^N} (v_\epsilon-f_t) + C_{N,\mathrm{\bf p},s}\epsilon^{2s} R^{-2s-t}.
\end{split}
\end{equation}
Assume that $M_\epsilon \doteq \inf_{\mathcal{D}} (v_\epsilon-f_t)<0$, in which
case there also holds: $M_\epsilon =\inf_{\mathbb{R}^N} (v_\epsilon-f_t)$. Let
$\{x_n\}_{n=1}^\infty$ be a minimizing sequence in $\mathcal{D}$. Applying
(\ref{bda}) at each $x_n$ and passing to the limit $n\to\infty$, it
follows that: $M_\epsilon\geq M_\epsilon + C_{N,\mathrm{\bf p},s}\epsilon^{2s} R^{-2s-t}>M_\epsilon$, which is a contradiction.
\smallskip
{\bf 2.} To show (ii), fix $t=t_0$ and recall that (i) implies:
$$0\leq v_\epsilon(x) -f_t(x)
=\sup_{\sigma_I}\inf_{\sigma_{II}}\mathbb{E}\big[f_t\circ X^{\epsilon, x, \sigma_{I}, \sigma_{II}}_\tau
- f_t(x)\big].$$
Since $\frac{\tilde R^{-t} - R^{-t}}{2}>0$, it follows that
there exists $\sigma_{I,0}$ such that for all $\sigma_{II}$ there holds:
\begin{equation*}
\begin{split}
- \frac{\tilde R^{-t} - R^{-t}}{2}&\leq \mathbb{E}\big[f_t\circ X^{\epsilon, x, \sigma_{I,0}, \sigma_{II}}_\tau
- f_t(x)\big] \\ & = \int_{\{|X_\tau|\geq R\}} f_t(X_\tau) -
f_t(x)\;\mbox{d}\mathbb{P} + \int_{\{|X_\tau|\leq 1\}} f_t(X_\tau) - f_t(x)\;\mbox{d}\mathbb{P}
\\ & \leq \mathbb{P}(|X_\tau|\geq R) \big(R^{-t} - \tilde R^{-t} \big) +
(1-\mathbb{P}(|X_\tau|\geq R)) \big(2^{t} - \tilde R^{-t} \big)
\\ & = \mathbb{P}(|X_\tau|\geq R) \big(R^{-t} - 2^t\big) + 2^t - \tilde R^{-t}.
\end{split}
\end{equation*}
Consequently, we obtain: $\mathbb{P}(|X_\tau|\geq R)\leq \frac{\frac{1}{2} (\tilde R^{-t} - R^{-t})
+ 2^t -\tilde R^{-t}}{2^t- R^{-t}} = \frac{ 2^t - \frac{1}{2} (\tilde
R^{-t} + R^{-t})}{2^t- R^{-t}} \doteq\theta_{\tilde R, R}<1.$
The statement in (iii) follows by scaling invariance after applying
(ii) to $R<R^2$ in place of $\tilde R<R$, so that $\theta_R\doteq\theta_{R, R^2}$. This ends the proof.
\end{proof}
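As a side remark, the explicit constant obtained at the end of the proof of part (ii) is easy to sanity-check numerically. The short sketch below uses arbitrary sample values of $t$, $\tilde R$, $R$ (not tied to the text) and confirms that $\theta_{\tilde R,R}\in(0,1)$ whenever $1<\tilde R<R$ and $t>0$:

```python
def theta(t, R_tilde, R):
    # theta_{R_tilde, R} = (2^t - (R_tilde^{-t} + R^{-t})/2) / (2^t - R^{-t}),
    # as computed at the end of the proof of part (ii)
    return (2**t - 0.5 * (R_tilde**(-t) + R**(-t))) / (2**t - R**(-t))

# arbitrary sample parameters with 1 < R_tilde < R and t > 0
for t, R_tilde, R in [(2.0, 1.5, 3.0), (5.0, 2.0, 10.0), (0.5, 1.1, 4.0)]:
    assert 0.0 < theta(t, R_tilde, R) < 1.0
```

Indeed, the numerator differs from the denominator by $\frac12(R^{-t}-\tilde R^{-t})<0$, so the quotient is strictly below $1$.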
\medskip
We are finally ready to give:
\smallskip
\noindent {\bf Proof of Theorem \ref{cone_thengood}.}
The cone condition implies the existence of $d>1$ and $\bar r>0$ such
that for all $r<\bar r$ there is a ball $B_r(\bar x)\subset \mathbb{R}^N\setminus
\mathcal{D}$, centered at some $\bar x$ with $|x-\bar x|=rd$. Define $R=2d-1$, so that $x\in B_{rR}(\bar x)$.
Given $\delta>0$, let $r<\bar r$ be such that: $\delta\geq rR^2 + rd =
r\big(R^2 + \frac{R+1}{2}\big)$. Letting $\hat\delta =r(d-1)$ we get:
\begin{equation}\label{g}
B_{\hat\delta}(x)\subset B_{rR}(\bar x)\setminus \bar
B_r(\bar x)\quad\mbox{ and }\quad B_{rR^2}(\bar x) \subset B_\delta(x).
\end{equation}
Fix $x_0\in B_{\hat\delta}(x)\cap\mathcal{D}$ and $\epsilon< r\epsilon_0$,
where $\epsilon_0$ is as in Proposition \ref{fe_prob} (iii). Denote
by $\bar\tau$ the exit time from the annulus $B_{rR^2}(\bar x) \setminus \bar B_r(\bar x)$.
Then, there exists $\sigma_{I,0}$ such that for all $\sigma_{II}$ there holds:
$$\mathbb{P}\big(\exists n<\tau \quad X_n^{\epsilon, x_0,\sigma_{I,0},
\sigma_{II}}\not\in B_\delta(x)\big)\leq
\mathbb{P}\big(X_{\bar\tau} \not\in B_{rR^2}(\bar x)\big)\leq \theta_R\doteq\bar\theta. $$
The first inequality above follows from (\ref{g}), while the second
inequality is a direct consequence of Proposition \ref{fe_prob} (iii).
\hspace*{\fill}\mbox{\ \rule{.1in}{.1in}}\medskip
\section{Approximate equicontinuity of solutions to \ref{dppe}}\label{sec_convergence}
In this section, we assume that the open, bounded domain
$\mathcal{D}\subset\mathbb{R}^N$ satisfies the external cone condition. Our goal is
to show that the family $\{u_\epsilon\}_{\epsilon\to 0}$ of solutions
to \ref{dppe} with a given bounded, uniformly continuous $F:\mathbb{R}^N\to \mathbb{R}$
is then approximately equicontinuous, namely:
\begin{equation}\label{ecpe}
\forall\xi>0\quad \exists \hat\epsilon, \delta>0 \quad
\forall \epsilon\in (0,\hat\epsilon)\quad \forall x, \bar
x\in\mathbb{R}^N\qquad |x-\bar
x|<\delta \implies |u_\epsilon(x) - u_\epsilon(\bar x)|\leq\xi.
\end{equation}
Together with the uniform boundedness of the family $\{u_\epsilon\}_{\epsilon\to
0}$, the above condition yields, via the Ascoli-Arzel\`a theorem, that
every sequence in this family has a further subsequence that converges
uniformly, as $\epsilon\to 0$, to some continuous limit function.
\begin{lemma}\label{bdary_enough}
Condition (\ref{ecpe}) is implied by the following weaker
equicontinuity statement:
\begin{equation}\label{ecpe1}
\begin{split}
\forall\xi>0\quad \exists \hat\epsilon, \delta>0 \quad
\forall \epsilon\in (0,\hat\epsilon)\quad & \forall x\in\mathcal{D},~ \bar
x\in\partial\mathcal{D}\qquad \\ & |x-\bar x|<\delta \implies |u_\epsilon(x) - u_\epsilon(\bar x)|\leq\xi.
\end{split}
\end{equation}
\end{lemma}
\begin{proof}
Since $u_\epsilon = F$ on $\mathbb{R}^N\setminus\mathcal{D}$, we get (\ref{ecpe}) for
$x,\bar x\not\in\mathcal{D}$ in view of the uniform continuity of $F$. By
(\ref{ecpe1}), it suffices to consider the case $x,\bar x\in\mathcal{D}$. Fix
$\xi>0$ and choose $\hat\epsilon, \hat\delta>0$ such that:
$$\epsilon\in (0,\hat\epsilon),~~ z\in\mathbb{R}^N\setminus \mathcal{D}^{\hat\delta},~~
|w|\leq\hat\delta \implies |u_\epsilon(z) - u_\epsilon(z+w)|\leq \xi,$$
where we denoted the inner set:
$$\mathcal{D}^{\hat\delta} = \big\{x\in \mathcal{D};~ \mathrm{dist}(x,\mathbb{R}^N\setminus\mathcal{D})>\hat\delta\big\}.$$
Fix $x,\bar x\in \mathcal{D}$ such that $|x-\bar x|<\frac{\hat\delta}{2}$
and consider the function $\bar u_\epsilon:\mathbb{R}^N\to\mathbb{R}$ given by:
$$\bar u_\epsilon(z) \doteq u_\epsilon\big(z-(x-\bar x)\big)+\xi. $$
Observe that $\bar u_\epsilon$ solves \ref{dppe} on $\mathcal{D}^{\hat\delta}$,
subject to its own external data $\bar u_\epsilon$ on
$\mathbb{R}^N\setminus\mathcal{D}^{\hat\delta}$, as:
\begin{equation*}
\begin{split}
\mathcal{A}_\epsilon \bar u_\epsilon(z) & = \frac{1}{2}
\big(\inf_{|y|=1} + \sup_{|y|=1}\big)\fint_{T_\mathrm{\bf p}^{\epsilon,
\infty}(y)} u_\epsilon\big(z-(x-\bar x)+\hat
z\big)+\xi\;\mathrm{d}\mu_s^N(\hat z) \\ & = u_\epsilon(z-(x-\bar x)) +\xi
= \bar u_\epsilon(z) \qquad \mbox{for all }\; z\in \mathcal{D}^{\hat\delta}.
\end{split}
\end{equation*}
On the other hand, $u_\epsilon$ solves the same problem (with its own external data). Since:
$$u_\epsilon(z)-\bar u_\epsilon(z) = u_\epsilon(z) -
u_\epsilon(z-(x-\bar x))-\xi\leq 0 \qquad\mbox{for all }\; z\not\in \mathcal{D}^{\hat\delta},$$
the monotonicity of the solution operator to \ref{dppe}
(see Theorem \ref{th_exists}) implies that
$u_\epsilon\leq\bar u_\epsilon$ in $\mathbb{R}^N$, so in particular we get:
$u_\epsilon(x) \leq u_\epsilon(\bar x)+\xi$.
The reverse inequality $u_\epsilon(x) \geq u_\epsilon(\bar x)-\xi$ can
be shown by a symmetric argument. This ends the proof.
\end{proof}
\medskip
We now replace condition (\ref{ecpe1}) by a boundary game-regularity
condition in the spirit of (\ref{gr1}). More precisely, we say that $x\in\partial\mathcal{D}$ is
game-regular when:
\begin{equation}\label{gr}
\forall \xi,\delta>0\quad\exists \hat\delta,
\hat\epsilon>0\quad\forall \epsilon<\hat\epsilon,~ \bar x\in B_{\hat\delta}(x)\cap\mathcal{D} \quad
\exists \sigma_{I,0}\quad\forall\sigma_{II}\quad
\mathbb{P}\big(X_\tau^{\epsilon, \bar x,\sigma_{I,0}, \sigma_{II}}\not\in B_\delta(x)\big)\leq\xi.
\end{equation}
\begin{lemma}\label{gr_then_good}
If every boundary point $x\in\partial\mathcal{D}$ satisfies (\ref{gr}), then
(\ref{ecpe1}) holds for every bounded, uniformly continuous data function $F:\mathbb{R}^N\to \mathbb{R}$.
\end{lemma}
\begin{proof}
Fix $\xi>0$ and choose $\delta>0$ such that:
$$|F(x) - F(\bar x)|\leq\frac{\xi}{3}\qquad\mbox{for all }\; |x-\bar x|<\delta.$$
By (\ref{gr}) there exist $\hat\delta, \hat\epsilon>0$ so that for
all $\epsilon<\hat\epsilon$, $x\in\partial\mathcal{D}$ and $\bar x\in
B_{\hat\delta}(x)\cap\mathcal{D}$ there exists $\sigma_{I,0}$ with:
$$\mathbb{P}\big(X_\tau^{\epsilon, \bar x,\sigma_{I,0}, \sigma_{II}}\not\in
B_\delta(x)\big)\leq\frac{\xi}{1+6\|F\|_{L^\infty}}\qquad\mbox{for all }\; \sigma_{II}.$$
Taking $\epsilon, x, \bar x$ as indicated, we obtain:
\begin{equation*}
\begin{split}
u_\epsilon(\bar x) - u_\epsilon(x) & = u^\epsilon_I(\bar x) - F(x) \geq
\inf_{\sigma_{II}}\mathbb{E}\big[ F\circ X_\tau^{\epsilon, \bar x, \sigma_{I,0}, \sigma_{II}}
-F(x)\big] \\ & \geq \mathbb{E}\big[ F\circ X_\tau^{\epsilon, \bar x, \sigma_{I,0}, \sigma_{II,0}} -F(x)\big]-\frac{\xi}{3}
\\ & = \int_{\{X_\tau\in B_\delta(x)\}}F(X_\tau)-F(x)\;\mbox{d}\mathbb{P} + \int_{\{X_\tau\not\in B_\delta(x)\}}F(X_\tau)-F(x)\;\mbox{d}\mathbb{P}
-\frac{\xi}{3} \\ & \geq -2\|F\|_{L^\infty}\cdot \mathbb{P}(X_\tau\not\in
B_\delta(x)) - \frac{\xi}{3} - \frac{\xi}{3} \geq -\xi,
\end{split}
\end{equation*}
where we used an almost-infimizing strategy $\sigma_{II,0}$. The
inequality $u_\epsilon(\bar x) - u_\epsilon(x)\leq \xi$ follows by a
symmetric argument. This ends the proof.
\end{proof}
\begin{proposition}\label{pomoc_gr}
Fix $\delta>0$, $k\geq 2$, $\epsilon<\frac{\delta}{k}$ and $|x|<\frac{\delta}{k}$.
Then there holds:
$$\forall \sigma_{I},~~\sigma_{II}\qquad
\mathbb{P}\big(|X^{\epsilon, x, \sigma_{I}, \sigma_{II}}_{\bar{\bar \tau}}|\geq
\delta\big)\leq\big(\frac{2}{k-1}\big)^{2s}\doteq a_k,$$
where we defined the stopping time: $\bar{\bar\tau}\doteq \min\big\{i\geq 0;~ |X_i|\geq
\frac{\delta}{k}\big\}$.
\end{proposition}
\begin{proof}
Let $\delta,k,\epsilon,x$ be as in the statement of the result. It follows that:
\begin{equation*}
\begin{split}
&\mathbb{P}\big(|X_{\bar{\bar \tau}}|\geq\delta\big) \\ & \leq \sup\Big\{
\frac{\mu_s^N\big((x+T_\mathrm{\bf p}^{\epsilon,\infty}(y))\setminus
B_\delta(0)\big) + \mu_s^N\big((x+T_\mathrm{\bf p}^{\epsilon,\infty}(\bar y))\setminus B_\delta(0)\big)}
{\mu_s^N\big((x+T_\mathrm{\bf p}^{\epsilon,\infty}(y))\setminus
B_{\delta/k}(0)\big) + \mu_s^N\big((x+T_\mathrm{\bf p}^{\epsilon,\infty}(\bar y))\setminus B_{\delta/k}(0)\big)};
~ |x|<\frac{\delta}{k},~ |y|=|\bar y|=1\Big\} \\ &\leq
\sup\Big\{ \frac{\mu_s^N\big((x+T_\mathrm{\bf p}^{\epsilon,\infty}(y))\setminus
B_\delta(0)\big)}{\mu_s^N\big((x+T_\mathrm{\bf p}^{\epsilon,\infty}(y))\setminus
B_{\delta/k}(0)\big)}; ~ |x|<\frac{\delta}{k},~ |y|=1\Big\},
\end{split}
\end{equation*}
where we used the fact that $\frac{\alpha_1+\beta_1}{\alpha_2+\beta_2}\leq\max\big\{
\frac{\alpha_1}{\alpha_2}, \frac{\beta_1}{\beta_2}\big\}$. Further, denoting:
\begin{equation*}
a= \inf_{|x|<\delta/k}\mathrm{dist}(x,\mathbb{R}^N\setminus B_\delta(0)) =
\delta - \frac{\delta}{k},\qquad
b= \sup\big\{|z-x|;~ |x|, |z|<\frac{\delta}{k}\big\} = 2 \frac{\delta}{k},
\end{equation*}
leads to:
\begin{equation*}
\mathbb{P}\big(|X_{\bar{\bar \tau}}|\geq\delta\big) \leq
\frac{\sup\Big\{ \mu_s^N\big((x+T_\mathrm{\bf p}^{\epsilon,\infty}(y))\setminus
B_\delta(0)\big);~ |x|<\frac{\delta}{k},~ |y|=1\Big\} }
{\inf\Big\{ \mu_s^N\big((x+T_\mathrm{\bf p}^{\epsilon,\infty}(y))\setminus
B_{\delta/k}(0)\big);~ |x|<\frac{\delta}{k},~ |y|=1\Big\} }\leq
\frac{\mu_s^N(T_\mathrm{\bf p}^{a,\infty})}{\mu_s^N(T_\mathrm{\bf p}^{b,\infty})}.
\end{equation*}
Since $\mu_s^N(T_\mathrm{\bf p}^{a,\infty})= \frac{C(N,s) |A_\mathrm{\bf p}|}{2s
a^{2s}}$, we obtain that $\mathbb{P}\big(|X_{\bar{\bar
\tau}}|\geq\delta\big)\leq \big(\frac{b}{a}\big)^{2s}$, as claimed.
\end{proof}
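The closed form used in the last step factors into an angular part $C(N,s)|A_\mathrm{\bf p}|$ and the radial integral $\int_a^\infty r^{-1-2s}\,\mathrm{d}r = \frac{1}{2s\,a^{2s}}$, which also yields the scaling $\mu_s^N(T_\mathrm{\bf p}^{a,\infty})/\mu_s^N(T_\mathrm{\bf p}^{b,\infty}) = (b/a)^{2s}$. The following sketch (sample values of $a$ and $s$, truncated numerical integration on a logarithmic grid) checks this radial identity:

```python
import math

def radial_tail(a, s, R=1e6, n=20000):
    """Midpoint approximation of the integral of r^(-1-2s) over [a, R]
    on a logarithmic grid (the tail beyond R is negligible here)."""
    lo, hi = math.log(a), math.log(R)
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        r = math.exp(lo + (i + 0.5) * h)
        total += r ** (-1.0 - 2.0 * s) * r * h  # dr = r du under r = e^u
    return total

a, s = 2.0, 0.75                     # sample values, not from the text
exact = a ** (-2.0 * s) / (2.0 * s)  # closed form 1 / (2 s a^{2s})
assert abs(radial_tail(a, s) - exact) / exact < 1e-3
# scaling of the full measure: mu(T^{a,inf}) / mu(T^{2a,inf}) = 2^{2s}
assert abs(radial_tail(a, s) / radial_tail(2 * a, s) - 2 ** (2 * s)) < 1e-2
```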
\medskip
Here is the main result of this section:
\begin{theorem}\label{main_equi_thm}
Let $\mathcal{D}\subset\mathbb{R}^N$ be an open, bounded domain satisfying the external
cone condition. Then (\ref{gr}) holds for every $x\in\partial\mathcal{D}$.
\end{theorem}
\begin{proof}
{\bf 1.} Fix $\xi,\delta>0$ and $x\in\partial\mathcal{D}$. Without loss of
generality $x=0$. Fix $k_0\geq 4$ such that
$a_{k_0}=\big(\frac{2}{k_0-1}\big)^{2s}<\frac{\xi}{2}$. It follows by
Proposition \ref{pomoc_gr} that for all $\epsilon<\frac{\delta}{k_0}$ we have:
\begin{equation}\label{ajeden}
\begin{split}
\mathbb{P} \big(X_{\tau} \not \in B_\delta(0)\big) & = \mathbb{P} \Big(\big\{|X_{
\tau}|\geq\delta\big\}\cap \big\{ \exists n<\tau \quad
|X_n|\in\big [\frac{\delta}{k_0},\delta\big )\big\} \Big) \\ & \quad
\qquad\qquad + \mathbb{P} \Big(\big\{|X_{\tau}|\geq\delta\big\}\cap \big\{ \not\exists n<\tau \quad
|X_n|\in\big [\frac{\delta}{k_0},\delta\big )\big\} \Big)
\\ & \leq \mathbb{P} \Big(\exists n<\tau \quad |X_n|\geq\frac{\delta}{k_0}\Big) + \frac{\xi}{2}.
\end{split}
\end{equation}
Denote: $\epsilon_0 = \delta_1=\frac{\delta}{k_0}$. We now show that:
\begin{equation}\label{gr2}
\begin{split}
\exists \hat\delta<\delta_1,~~\hat\epsilon<\epsilon_0\quad\forall \epsilon<\hat\epsilon,~~
|x_0|<\hat\delta\quad\exists\sigma_{I,0}\quad\forall\sigma_{II}\qquad \mathbb{P}\big(
\exists n<\tau \quad |X_n|\geq \delta_1\big) \leq \frac{\xi}{2}.
\end{split}
\end{equation}
Together with (\ref{ajeden}), (\ref{gr2}) will establish the result.
\smallskip
{\bf 2.} By Proposition \ref{pomoc_gr}, there exists $k\geq k_0$ such
that $\bar\theta + a_k<1$. Let $m\geq 2$ satisfy:
\begin{equation}\label{adwa}
\big(\bar\theta + a_k\big)^m\leq\frac{\xi}{2},
\end{equation}
where $\bar\theta$ is as in (\ref{gr1}).
We now define $\{\epsilon_i\}_{i=1}^m$ and $\{\delta_i\}_{i=2}^m$
by applying Theorem \ref{cone_thengood} to each $\delta_i$ in place of
$\delta$, recursively via:
\begin{equation}\label{set}
\epsilon_i = \min\{\epsilon_{i-1},\hat\epsilon(\delta_i)\},\quad
\delta_i = \frac{\hat\delta(\delta_{i-1})}{k}.
\end{equation}
We also set:
$$\hat\epsilon\doteq \min\{\epsilon_m, \frac{\delta_m}{k}\},\quad \hat\delta \doteq\hat\delta(\delta_m),
\quad \mbox{ and }\quad \kappa_i\doteq\min\{j\geq 0;~ |X_j|\geq \delta_i\} \quad \mbox{for all }\;
i=1\ldots m.$$
Given $\epsilon<\hat\epsilon$ and $|x_0|<\hat\delta$, define the strategy $\sigma_{I,0}$ as follows:
\begin{itemize}[leftmargin=7mm]
\item For $j<\kappa_m$, we utilize the strategy $\sigma_{I,0,m}$ from
(\ref{gr1}), chosen for the starting point $x_0$:
$$\sigma_{I,0}^{j}\big(x_0, (x_1,z_1,s_1),\ldots, (x_j, z_j,s_j)\big) =
\sigma_{I,0,m}^{j} \big(x_0, (x_1,z_1,s_1),\ldots, (x_j, z_j,s_j)\big). $$
\item If $|X_{\kappa_m}|\geq k\delta_m$, we keep the definition above for all $j\geq \kappa_m$.
\item If $|X_{\kappa_m}|\in [\delta_m, k\delta_m)$, then for
$j\in[\kappa_m, \kappa_{m-1})$ we utilize the strategy $\sigma_{I,0,m-1}$ from
(\ref{gr1}), chosen for the starting point $X_{\kappa_m}$:
$$\sigma_{I,0}^{j}\big(x_0, (x_1,z_1,s_1),\ldots, (x_j, z_j,s_j)\big) =
\sigma_{I,0,m-1}^{j-\kappa_m} \big(x_{\kappa_m}, (x_{\kappa_m+1},z_{\kappa_m+1},s_{\kappa_m+1}),\ldots, (x_j, z_j,s_j)\big). $$
\item Continue in this fashion, concatenating the strategies
$\sigma_{I,0,i}$ for the remaining indices $i=m-2,\ldots, 1$. Each
$\sigma_{I,0,i}$ is chosen from (\ref{gr1}) for the starting point $X_{\kappa_{i+1}}$.
\end{itemize}
\smallskip
By several applications of (\ref{gr1}) and Proposition \ref{pomoc_gr}, it follows that:
\begin{equation*}
\begin{split}
\mathbb{P} \big(\exists n&<\tau \quad |X_n^{\epsilon, x_0, \sigma_{I,0},
\sigma_{II}}|\geq\delta_1\big) = \mathbb{P} \big(\kappa_1<\tau\big)
\\ & = \mathbb{P} \big(\kappa_2<\kappa_1<\tau \mbox{ and } |X_{\kappa_2}|\in [\delta_2, k\delta_2)\big)
+ \mathbb{P} \big(\kappa_2\leq \kappa_1<\tau \mbox{ and } |X_{\kappa_2}|\geq k\delta_2\big)
\\ & \leq \bar\theta \cdot \mathbb{P} \big(\kappa_2<\tau \mbox{ and } |X_{\kappa_2}|\in [\delta_2, k\delta_2)\big)
+ \mathbb{P} \big(|X_{\kappa_2}|\geq k\delta_2\big) \cdot \mathbb{P} \big(\kappa_2<\tau\big) \\ & \leq
\big(\bar\theta + a_k\big) \mathbb{P} (\kappa_2<\tau).
\end{split}
\end{equation*}
An iteration of the above argument and one final application of (\ref{gr1}) together with (\ref{adwa}) yield:
\begin{equation*}
\begin{split}
\mathbb{P} \big(\exists n&<\tau \quad |X_n|\geq\delta_1\big) \leq
\big(\bar\theta + a_k\big)^{m-1} \mathbb{P} (\kappa_m<\tau) \leq \big(\bar\theta + a_k\big)^{m}
\leq\frac{\xi}{2}.
\end{split}
\end{equation*}
Thus (\ref{gr2}) has been verified.
\end{proof}
\medskip
By Lemma \ref{gr_then_good}, Lemma \ref{bdary_enough} and invoking the
Ascoli-Arzel\`a theorem, we immediately get, in view of
Theorem \ref{th_viscosity} and Remark \ref{remi_visc}:
\begin{corollary}
Let $\mathcal{D}\subset\mathbb{R}^N$ be an open, bounded domain satisfying the external
cone condition. For a given bounded, uniformly continuous data
function $F:\mathbb{R}^N\to\mathbb{R}$, consider the family $\{u_\epsilon\}_{\epsilon\to 0}$ of solutions to \ref{dppe}.
Then, every sequence $\{u_\epsilon\}_{\epsilon\in J, \epsilon\to 0}$
has a further subsequence that converges uniformly as $\epsilon\to 0$
to a viscosity solution of (\ref{nD}).
\end{corollary}
\section{Introduction}
Since their inception, sequential Monte Carlo (SMC) resampling methods (a.k.a., particle filters)~\cite{Doucet2000,Doucet2001} have emerged as a useful tool to estimate and track targets with non-linear and/or non-Gaussian dynamics. Unlike the Kalman filter~\cite{Kalman1960} and its variants~\cite{Wan2000}, particle filters (PF) do not assume a fixed functional form of the posterior probability density function (PDF). Instead, they employ a finite number of points, called ``particles'', to discretely approximate the posterior PDF in state space~\cite{Thrun2005}.
A standard PF algorithm consists of two parts: (i) sequential importance sampling (SIS) and (ii) resampling~\cite{Doucet2001}. A popular combined implementation of these two parts is the sequential importance resampling (SIR) algorithm. Depending on the application, SIR may need a large number of particles to adequately sample the state space. This demands substantial computational resources that scale linearly with the number of particles and may hinder the realization of many practical real-time applications.
Here, we introduce the \textit{piecewise constant SIR} (pcSIR) algorithm, which reduces the computational cost of SIR while providing tracking accuracy comparable to standard SIR. The main idea behind pcSIR is to group particles in state space (i.e., creating \textit{bins}) and to represent each group of particles by a single \textit{representative particle}. Only the weight of this representative dummy particle is then updated. We choose the dummy particle to sit at the center of mass of the group of particles it represents and to carry the mean properties of all the particles in the respective group. This is inspired by first-order multipole expansions from particle function approximation theory~\cite{greengard1987fast}. Once the weight of the dummy particle is computed, all other particles in the same group receive the same weight, which is copied from the dummy instead of being re-computed through the likelihood model for each individual particle, as in the original SIR. This way, a pcSIR-based PF can outperform a classical SIR-based PF by orders of magnitude in overall runtime in applications where evaluation of the likelihood is computationally expensive. Expensive likelihoods are particularly common when tracking objects in images, where each likelihood evaluation entails a numerical simulation of the image-formation process (see, e.g., Ref.~\cite{smaltmi}).
We outline the mathematical roots of pcSIR and derive an upper bound on the expected approximation error with respect to the chosen \textit{bin} (i.e., Cartesian mesh \textit{cell} in 2D) size. This error stems from the point-wise approximation of the likelihood function and is quantified using mid-point Riemann-sum error analysis~\cite{Davis1967,Thomas1984}. We numerically quantify the errors in the state estimates (based on the posterior distribution) obtained by SIR and pcSIR as a function of the number of particles used, and show that there is almost no difference between SIR and pcSIR in terms of tracking accuracy. Furthermore, with a focus on biological image processing, we show that relating the bin size to the pixel size of an image provides satisfactory, and sometimes even higher-quality results in pcSIR compared with standard SIR.
The structure of this manuscript is as follows: Section~2 summarizes similar approaches to PF for state estimation. Section~3 recapitulates the classical SIR algorithm, whereas Section~4 introduces our new pcSIR method, discusses the theoretical framework behind pcSIR, and provides detailed pseudocode. In Section~5, we benchmark pcSIR against SIR in terms of tracking accuracy, runtime, and error convergence using two different likelihood functions and different types of images. Finally, Section~6 discusses the results and concludes the manuscript with an outlook.
\section{Related Works}
\label{sec:relwork}
Recent years have seen great interest in challenging tracking problems where the targets usually have non-linear and/or non-Gaussian dynamics. Two nonparametric algorithms, namely histogram filters (HF) and particle filters (PF), stand out as the main classes of algorithms that successfully tackle difficult tracking problems~\cite{Thrun2005}. In both variants, posterior distributions are approximated by a finite set of values.
In HF, the state space is decomposed into smaller -- usually rectangular -- boxes and only a single value is used to represent the cumulative posterior in each box. In a mathematical sense, HFs can be seen as piecewise constant approximations to a posterior distribution. The size and the number of the boxes affect the computational runtime and tracking accuracy of an application.
In PF, random samples (i.e., point particles) are drawn from the posterior distribution and typically a large number of particles is required to track targets successfully. This increases the computational resources needed, and many PF-based applications are limited by their computational cost.
Combining ideas from PF and HF, the box particle filter (BPF)~\cite{Abdallah2008} uses box-shaped particles. While BPF resembles HF with mobile boxes, these box particles are propagated based on interval analysis~\cite{Jaulin2001}, which is fundamentally different from PF and HF. BPF is especially useful in situations where imprecise measurements yield wide posterior densities~\cite{Gning2013}. Despite its advantages, however, BPF is not well understood and lacks important theoretical background, such as a proof of convergence and insight into the resampling step based on interval analysis~\cite{Gning2013}. Also, its exact computational cost remains to be investigated and compared with traditional HF and PF.
\section{The Classical SIR Particle Filter}
\label{sec:sir}
Recursive Bayesian importance sampling~\cite{Geweke1989} of an unobserved and discrete Markov process $\{\mathbf{x}_{k}\}_{k=1,\ldots ,K}$ is based on three components: (i) the measurement vector $\mathbf{Z}^k=\{\mathbf{z}_{1},\ldots ,\mathbf{z}_{k}\}$, (ii) the dynamics (i.e., state transition) probability distribution $p(\mathbf{x}_{k} | \mathbf{x}_{k-1})$, and (iii) the likelihood $p(\mathbf{z}_{k} | \mathbf{x}_{k})$. Then, the state posterior $p(\mathbf{x}_{k} | \mathbf{Z}^{k})$ at time $k$ is recursively computed as:
\begin{equation}
\underset{\text{posterior}}{\underbrace{p(\mathbf{x}_{k} | \mathbf{Z}^{k})}} = \frac{\overset{\text{likelihood}} {\overbrace{ p(\mathbf{z}_{k} | \mathbf{x}_{k})}}\,\, \overset{\text{prior}} {\overbrace{p(\mathbf{x}_{k} | \mathbf{Z}^{k-1})}}}{\underset{\text{normalization}}{\underbrace{p(\mathbf{z}_{k} | \mathbf{Z}^{k-1})}}}\, ,
\end{equation}
where the prior is defined as:
\begin{equation}
p(\mathbf{x}_{k} | \mathbf{Z}^{k-1}) = \int p(\mathbf{x}_{k} | \mathbf{x}_{k-1}) \, p(\mathbf{x}_{k-1} | \mathbf{Z}^{k-1}) \, \mathrm{d}\mathbf{x}_{k-1}.
\end{equation}
In the PF approach, the posterior at each time point $k$ is approximated by $N$ weighted samples (i.e., particles) $\{\mathbf{x}^i_k, w^i_k\}_{i=1,\ldots ,N}$. This approximation is achieved by drawing a set of particles from an importance function (i.e., proposal distribution) $\pi(\cdot)$ and updating their weights according to the dynamics PDF and the likelihood. This process is called sequential importance sampling (SIS)~\cite{Doucet2001}. However, SIS suffers from \textit{weight degeneracy}, where small particle weights become ever smaller and eventually no longer contribute to the posterior. To overcome this, a \textit{resampling} step is performed~\cite{Doucet2001} whenever the effective sample size falls below a preset threshold. Using the standard notation, as in Refs.~\cite{Bashi2003,Doucet2001}, the complete SIR algorithm is given in Algorithm~\ref{alg:sir}.
\begin{algorithm}
\caption{Sequential Importance Resampling (SIR)} \label{alg:sir}
\begin{algorithmic}[1]
\Procedure{SIR}{}
\For{$i=1 \to N$} \Comment{Initialization, $k$=0}
\State $w_0^{i} \gets 1/N$
\State Draw $\mathbf{x}_0^{i}$ from $\pi(\mathbf{x}_0)$
\EndFor
\For{$k=1 \to K$}
\For{$i=1 \to N$} \Comment SIS step
\State Draw a sample $\tilde{\mathbf{x}}_k^i$ from $\pi(\mathbf{x}_k | \mathbf{x}_{k-1}^i,\mathbf{Z}^{k})$
\State Update the importance weights
\State $\tilde{w}_k^i \gets w_{k-1}^i \frac{p(\mathbf{z}_{k} | \tilde{\mathbf{x}}_{k}^i) p(\tilde{\mathbf{x}}_{k}^i | \mathbf{x}_{k-1}^i)}{\pi(\tilde{\mathbf{x}}_k^i | \mathbf{x}_{k-1}^i,\mathbf{Z}^{k})}$
\EndFor
\For{$i=1 \to N$}
\State $w_k^i \gets \tilde{w}_k^i / \sum_{j=1}^{N} \tilde{w}_k^j$
\EndFor
\Comment Calculate the effective sample size
\State $\widehat{N}_{\textrm{eff}} \gets 1 / \sum_{j=1}^{N} (w_{k}^{j})^2$
\If{$\widehat{N}_{\textrm{eff}}<N_{\textrm{threshold}}$} \Comment Resampling step
\State Sample a set of indices $\{s(i)\}_{i=1,\ldots ,N}$ distributed such that $\Pr[s(i)=l]=w_{k}^{l}$ for $l= 1 \to N$.
\For{$i=1 \to N$}
\State $\mathbf{x}_{k}^{i} \gets \tilde{\mathbf{x}}_{k}^{s(i)}$
\State $w_{k}^{i} \gets 1/N$ \Comment Reset the weights
\EndFor
\EndIf
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
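For concreteness, Algorithm~\ref{alg:sir} in its popular bootstrap form (proposal $\pi(\mathbf{x}_k | \mathbf{x}_{k-1}^i,\mathbf{Z}^{k}) = p(\mathbf{x}_k|\mathbf{x}_{k-1})$, so that the weight update reduces to the likelihood) can be sketched in a few lines of plain Python. The one-dimensional Gaussian dynamics and likelihood below, as well as the resampling threshold $N/2$ and the use of multinomial resampling, are illustrative choices, not part of the algorithm itself:

```python
import math
import random

def sir_step(particles, weights, z, sigma_dyn=1.0, sigma_obs=0.5):
    """One bootstrap-SIR step: propagate through the dynamics, reweight
    by the likelihood p(z | x), normalize, and resample when the
    effective sample size N_eff drops below N/2."""
    N = len(particles)
    particles = [x + random.gauss(0.0, sigma_dyn) for x in particles]
    weights = [w * math.exp(-0.5 * ((z - x) / sigma_obs) ** 2)
               for x, w in zip(particles, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]
    n_eff = 1.0 / sum(w * w for w in weights)  # effective sample size
    if n_eff < N / 2:                          # resampling step
        particles = random.choices(particles, weights=weights, k=N)
        weights = [1.0 / N] * N
    return particles, weights

random.seed(1)
particles, weights = [0.0] * 200, [1.0 / 200] * 200
for _ in range(5):                             # track a static target at z = 1
    particles, weights = sir_step(particles, weights, z=1.0)
estimate = sum(x * w for x, w in zip(particles, weights))
```

After a few steps the weighted mean of the particle cloud concentrates near the observed value, as expected for this toy model.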
\section{The Piecewise Constant SIR Particle Filter}
\label{sec:asir}
In classical SIR, all particle weights are updated according to the likelihood, which may impart a high computational load. Moreover, the computational cost scales linearly with the number of particles. Therefore, depending on the application, the likelihood evaluation often constitutes the most time-consuming part of a PF.
To address this problem, we propose the pcSIR algorithm, which aims at reducing the computational cost of the importance weight update by exploiting the nature of the particle function approximation underlying SIR~\cite{greengard1987fast}. We do this by grouping the particles into non-overlapping multi-dimensional \textit{bins} (i.e., Cartesian mesh cells in higher dimensions), which are then represented by only a single dummy particle positioned at the center of mass of the real particles in that bin. The center of mass is computed using the state vectors and weights of all particles within the bin and is solely used to \textit{represent} that bin by a single \textit{dummy particle}. This amounts to a first-order multipole expansion of the PDF approximated by the particles~\cite{greengard1987fast}. Higher-order approximations are easily possible by storing on the dummy particle not only the mean, but also higher-order moments of the particle distribution in the bin. However, the overall error of a PF is dominated by the Monte-Carlo sampling error, which is of order 1/2. A first-order function approximation is hence sufficient.
The importance weight update is then only applied to the dummy particle. All other particles in the same bin are assigned the same weight that the dummy particle received. Thus, we approximate the likelihood by a mixture of uniform PDFs and bypass the costly likelihood update step for all particles. The pcSIR algorithm differs from SIR only in the SIS part, where the particles are binned and several averaging operations are performed. This makes it straightforward to implement pcSIR in any existing SIR code. The detailed pseudo-code is given in Algorithm~\ref{alg:asir}.
\begin{algorithm}
\caption{Piecewise Constant Sequential Importance Resampling (pcSIR)} \label{alg:asir}
\begin{algorithmic}[1]
\Procedure{pcSIR}{}
\For{$i=1 \to N$} \Comment{Initialization, k=0}
\State $w_0^{i} \gets 1/N$
\State Draw $\mathbf{x}_0^{i}$ from $\pi(\mathbf{x}_0)$
\EndFor
\State Create $B$ bins of equal size $I_{1,\ldots ,B}$
\For{$k=1 \to K$}
\For{$i=1 \to N$} \Comment piecewise constant SIS step
\State Draw a sample $\tilde{\mathbf{x}}_k^i$ from $\pi(\mathbf{x}_k | \mathbf{x}_{k-1}^i,\mathbf{Z}^{k})$
\State Assign $\tilde{\mathbf{x}}_k^i$ to a bin
\EndFor
\For{$j=1 \to B$} \Comment Visit all bins \\
\Comment Create a \textit{representative} particle that has the mean values of the state vector of all particles in the same bin
\State $\mathbf{x}_{\textrm{dum}} \gets \textrm{mean}
\{\tilde{\mathbf{x}}_k^1,\ldots ,\tilde{\mathbf{x}}_k^{N_{I_{j}}}\}$
\State Update the importance weights
\State $w_{\textrm{dum}_k} \gets w_{\textrm{dum}_{k-1}} \frac{p(\mathbf{z}_{k} | \mathbf{x}_{\textrm{dum}}) p(\mathbf{x}_{\textrm{dum}} | \mathbf{x}_{k-1}^i)}{\pi(\mathbf{x}_{\textrm{dum}} | \mathbf{x}_{k-1}^i,\mathbf{Z}^{k})}$
\For{all $\tilde{\mathbf{x}}_k^i$ in bin $I_j$}
\State $w_k^i \gets w_{\textrm{dum}_k}$
\EndFor
\EndFor
\Comment Calculate the effective sample size
\State $\widehat{N}_{\textrm{eff}} \gets 1 / \sum_{j=1}^{N} (w_{k}^{j})^2$
\If{$\widehat{N}_{\textrm{eff}}<N_{\textrm{threshold}}$} \Comment Resampling step
\State Sample a set of indices $\{s(i)\}_{i=1,\ldots ,N}$ distributed such that $\Pr[s(i)=l]=w_{k}^{l}$ for $l= 1 \to N$.
\For{$i=1 \to N$}
\State $\mathbf{x}_{k}^{i} \gets \tilde{\mathbf{x}}_{k}^{s(i)}$
\State $w_{k}^{i} \gets 1/N$ \Comment Reset the weights
\EndFor
\EndIf
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
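The piecewise constant weight update at the core of Algorithm~\ref{alg:asir} can be sketched as follows (a one-dimensional state space, a generic likelihood function, and arbitrary bin width and particle positions, chosen only for illustration). Note that the likelihood is evaluated once per occupied bin, at the weighted center of mass, instead of once per particle:

```python
import math
from collections import defaultdict

def pc_weight_update(particles, weights, likelihood, bin_width):
    """Group particles into bins of equal width, evaluate the likelihood
    once at the weighted center of mass of each bin (the dummy particle),
    and copy that value to every particle in the bin.
    Assumes strictly positive weights (as after normalization)."""
    bins = defaultdict(list)
    for i, x in enumerate(particles):
        bins[math.floor(x / bin_width)].append(i)
    new_weights = list(weights)
    for members in bins.values():
        mass = sum(weights[i] for i in members)
        x_dum = sum(weights[i] * particles[i] for i in members) / mass
        L = likelihood(x_dum)          # one evaluation per occupied bin
        for i in members:
            new_weights[i] = weights[i] * L
    total = sum(new_weights)
    return [w / total for w in new_weights]

calls = []
def lik(x):                            # instrumented toy likelihood
    calls.append(x)
    return math.exp(-x)

w = pc_weight_update([0.1, 0.2, 1.1], [1 / 3] * 3, lik, bin_width=1.0)
```

Here three particles fall into two bins, so the (possibly expensive) likelihood is called twice instead of three times, and the two particles sharing a bin receive identical weights.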
The final function approximation used in pcSIR is related to BPF, where the box support is also approximated by a mixture of piecewise constant functions~\cite{Gning2010}. However, the theoretical motivation and the algorithmic implementation of this piecewise constant approximation are very different in pcSIR and in BPF. Gning \textit{et al.}~\cite{Gning2010,Gning2013} used interval analysis~\cite{Jaulin2001} to show that the uniform PDF approximation of the {\em posterior} becomes more accurate as the number of intervals increases. In pcSIR, the piecewise constant approximation of the {\em likelihood} is rooted in particle function approximation theory and can be understood as a first-order multipole expansion~\cite{greengard1987fast}. This dispenses with the need for interval analysis and provides a different algorithmic implementation and error analysis.
Unlike BPF, pcSIR still uses point particles. Thus, state estimation problems that result in narrow posterior densities can easily be handled by pcSIR, which is not the case for BPF. With pcSIR, we provide a simple way of using uniform PDFs to approximate the {\em likelihood} function, which eventually results in a satisfactory posterior representation through the Bayesian formulation. Moreover, it requires only a few modifications to the classical SIR, which makes pcSIR an attractive choice for practical implementations.
\subsection{Theoretical framework}
The pcSIR algorithm is a function-approximation algorithm. It divides the $n$-dimensional state space into $n$-dimensional bins. In each bin, a sufficiently differentiable likelihood function is approximated by a constant value. The error analysis of such piecewise constant approximations is well understood on the basis of Taylor's theorem for multivariate functions.
For the sake of example, we present the theoretical framework of pcSIR with a focus on image processing. When processing a sequence of 2D images, the likelihood function $p(\mathbf{z}_{k}|\mathbf{x}_{k})$ is typically a two-dimensional function that is discretized over a finite set of particles. In SIR, the likelihood is approximated by $N$ particles, where the particle number $N$ defines the accuracy for the specific application. Therefore, the approximation error of SIR is denoted $\mathtt{E}_{SIR}(N)$.
With pcSIR, in the considered application, only the positions of the particles play a role in the likelihood update. This allows pcSIR to bin the state space. Therefore, the approximation error in $p(\mathbf{z}_{k}|\mathbf{x}_{k})$ depends on both the number of particles $N$ and the maximum lengths of the bins $l_x$ and $l_y$ in both dimensions. Hence, the overall approximation error of pcSIR is denoted $\mathtt{E}_{pcSIR}(N,l_x,l_y)$.
First, we analyze the effect of bin size on $\mathtt{E}_{pcSIR}(N,l_x,l_y)$. For that purpose, we consider two cases: The first considers bins of varying rectangular shapes (i.e., $l_x \neq l_y$). In this setting, we fix $N$ and let the approximation error depend on the bin lengths in both dimensions, hence $\mathtt{E}_{pcSIR}(l_x,l_y)$. In the second case, all cells are squares of edge length $l$. The pcSIR approximation error can then be expressed as $\mathtt{E}_{pcSIR}(l)$.
Second, we compare SIR with a pcSIR variant in which each bin corresponds to a single pixel in a ``pseudo''-tracking test case (see Section~\ref{sec:benchmark}). In this comparison, we assume Gaussian and uniform priors of different sizes. A smooth likelihood function is approximated by SIR and pcSIR and later applied to the prior. Thus, we obtain the estimation errors for the state. We call this experiment ``pseudo''-tracking, since by eliminating the explicit dynamics PDF, we can focus on the approximation error and its convergence with increasing $N$.
\subsection{The effect of cell size on $\mathtt{E}_{pcSIR}$}
The particle locations in a cell cannot be determined \textit{a priori} since the movement of the particles depends on the data. We hence assume that for small cells and statistically large numbers of particles, we have a uniform particle distribution within a cell. The approximation errors introduced by the pcSIR algorithm in 2D are described in detail in the Appendix.
In pcSIR, the state space is decomposed into non-overlapping cells. Choosing an appropriate cell size is hence crucial for pcSIR. Similar to histogram filters~\cite{Thrun2005}, the accuracy of pcSIR is determined by the cell size. At the highest possible resolution, there is one particle per cell, which recovers the classical SIR algorithm.
In image processing, it is convenient to choose the image pixels as the cells of pcSIR. This constitutes a good choice since in typical image-processing applications, the pixel size already reflects the sizes of the objects represented in the image in order not to under-sample the objects and not to store unnecessary data. We call pcSIR with single-pixel cells pcSIR-1x1. Due to the characteristics of the likelihood function, however, there may be cases where sub-pixel resolution or higher accuracy is needed. Therefore, we also investigate pcSIR-2x2, where each pixel is divided into four cells. In the following Section, we empirically benchmark the effect of cell size on pcSIR performance and accuracy.
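The assignment of a particle position to its pcSIR cell can be sketched as follows for pixel-aligned bins (Python sketch; \texttt{cells\_per\_pixel} is our naming: a value of 1 corresponds to pcSIR-1x1 and 2 to pcSIR-2x2):

```python
import math

def cell_index(x, y, cells_per_pixel=1):
    """Map a (sub-)pixel position to its pcSIR cell index.
    cells_per_pixel = 1 gives pcSIR-1x1 (one cell per image pixel);
    cells_per_pixel = 2 gives pcSIR-2x2 (four cells per pixel)."""
    return (math.floor(x * cells_per_pixel),
            math.floor(y * cells_per_pixel))
```

For example, a particle at sub-pixel position $(3.7, 2.2)$ falls into pixel cell $(3, 2)$ under pcSIR-1x1, but into the finer cell $(7, 4)$ under pcSIR-2x2.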
\section{Experimental Results}
\label{sec:benchmark}
We study the performance of pcSIR by considering a biological image-processing application: the tracking of sub-cellular (here, ``cell'' refers to the biological cell being imaged and is not to be confused with the pcSIR bin cells) objects imaged by fluorescence microscopy~\cite{akhmanova2005, komarova2009mammalian,helmuth2009deconvolving}. There, intracellular structures such as endosomes, vesicles, mitochondria, or viruses are labeled with fluorescent dyes and imaged over time with a confocal microscope. Many biological studies start from analyzing the dynamics of those structures and extracting parameters that characterize their behavior, such as average velocity, instantaneous velocity, spatial distribution~\cite{helmuth2010beyond,Shivanandan:2013}, motion correlations, etc.
\subsection{Dynamics model}
The motion of sub-cellular objects can be represented by a variety of dynamics models, ranging from random walks to constant-velocity models to more complex dynamics where switching between motion types occurs~\cite{smal_media,godinez2012identifying}.
Here, we use a nearly-constant-velocity model, which is frequently used in practice~\cite{smaltmi,rong2003survey}. The state vector in this case is $\mathbf{x}=(\hat{x}, \hat{y}, v_x, v_y, I_0)^T$, where $\hat{x}$ and $\hat{y}$ are the $x$- and $y$-positions of an object, $(v_x,v_y)$ its velocity vector, and $I_0$ its fluorescence intensity.
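One prediction step under this model can be sketched as follows (Python sketch; the noise standard deviations are illustrative placeholders, not the values used in our experiments):

```python
import random

def propagate(state, dt=1.0, sigma_pos=0.5, sigma_v=0.1, sigma_int=1.0,
              rng=random):
    """One prediction step of the nearly-constant-velocity model for the
    state (x, y, v_x, v_y, I0); noise magnitudes are illustrative only."""
    x, y, vx, vy, i0 = state
    return (x + vx * dt + rng.gauss(0.0, sigma_pos),   # position update
            y + vy * dt + rng.gauss(0.0, sigma_pos),
            vx + rng.gauss(0.0, sigma_v),              # velocity random walk
            vy + rng.gauss(0.0, sigma_v),
            i0 + rng.gauss(0.0, sigma_int))            # intensity random walk
```

In the noise-free limit, the particle simply drifts with its velocity, e.g. $(0,0,2,3,I_0) \mapsto (2,3,2,3,I_0)$ for $\Delta t = 1$.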
\subsection{Likelihood / Appearance model}
Many sub-cellular objects are smaller than what can be resolved by the microscope, making them appear in a fluorescence image as diffraction-limited bright spots with an intensity profile given by the impulse-response function of the microscope, the so-called point-spread-function (PSF)~\cite{smaltmi,smal_media,helmuth2009deconvolving}.
In practice, the PSF of a fluorescence microscope is well approximated by a 2D Gaussian~\cite{thomann2002automatic,zhang2007gaussian}.
Object appearance in a 2D image is hence modeled as:
\begin{equation}
\label{eqn:object}
I(x,y;x_0, y_0) = I_0 \exp\left(-\frac{(x-x_0)^2 + (y-y_0)^2}{2\sigma^2_{\textrm{PSF}}}\right) + I_{\textrm{bg}},
\end{equation}
where $(x_0, y_0)$ is the position of the object, $I_0$ is its intensity, $I_{\textrm{bg}}$ is the background intensity, and $\sigma_{\textrm{PSF}}$ is the standard deviation of the Gaussian PSF. Typical microscope setups yield images with pixel edge lengths corresponding to 60 to 200\,nm real-world length in the imaged sample. For the images used here, the pixel size is 67\,nm and the microscope has $\sigma_{\textrm{PSF}}=78$\,nm (or 1.16 pixels). During image acquisition, the ``ideal'' intensity profile $I(x,y)$ is corrupted by measurement noise, which in the case of fluorescence microscopy has mixed Gaussian-Poisson statistics. For the resulting noisy image $\mathbf{z}_k=Z_k(x,y)$ at time point $k$, the likelihood $p(\mathbf{z}_k|\mathbf{x}_{k})$ is:
\begin{equation}
\label{eqn:likelihood}
p(\mathbf{z}_k|\mathbf{x}_{k}) \varpropto \exp\!\!\left(\!-\frac{1}{2\sigma^2_{\xi}}\!\!\sum_{(x_i, y_i)\in\mathbb{S}_{\mathbf{x}}}\!\!\!\!\!\!\left [Z_k(x_i, y_i)-I(x_i, y_i;\hat{x}, \hat{y})\right ]^2\!\!\right)\!\!,
\end{equation}
where $\sigma_{\xi}$ controls the peakiness of the likelihood, $(x_i, y_i)$ are the integer coordinates of the pixels in the image, $(\hat{x}, \hat{y})$ are the spatial components of the state vector $\mathbf{x}_k$, and $\mathbb{S}_{\mathbf{x}}$ defines a small region in the image centered at the object location specified by the state vector $\mathbf{x}_k$. Here, $\mathbb{S}_{\mathbf{x}}=[\hat{x} - 3\sigma_{\textrm{PSF}}, \hat{x} + 3\sigma_{\textrm{PSF}}]\times[\hat{y} - 3\sigma_{\textrm{PSF}}, \hat{y} + 3\sigma_{\textrm{PSF}}]$.
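A direct transcription of Eqs.~(\ref{eqn:object}) and (\ref{eqn:likelihood}) reads as follows (Python sketch; the image-access convention, boundary handling, and default parameter values are our assumptions):

```python
import math

def psf_intensity(x, y, x0, y0, i0=1.0, i_bg=0.0, sigma_psf=1.16):
    """Object appearance model: isotropic 2D Gaussian PSF over a
    constant background (all lengths in pixel units)."""
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return i0 * math.exp(-r2 / (2.0 * sigma_psf ** 2)) + i_bg

def likelihood(z, x_hat, y_hat, i0=1.0, i_bg=0.0,
               sigma_psf=1.16, sigma_xi=10.0):
    """Unnormalized likelihood: squared residuals between the image
    z[y][x] and the PSF model, summed over the window S_x of half-width
    3*sigma_psf around the candidate position (x_hat, y_hat)."""
    r = math.ceil(3.0 * sigma_psf)
    ssq = 0.0
    for yi in range(int(y_hat) - r, int(y_hat) + r + 1):
        for xi in range(int(x_hat) - r, int(x_hat) + r + 1):
            if 0 <= yi < len(z) and 0 <= xi < len(z[0]):
                resid = z[yi][xi] - psf_intensity(xi, yi, x_hat, y_hat,
                                                  i0, i_bg, sigma_psf)
                ssq += resid * resid
    return math.exp(-ssq / (2.0 * sigma_xi ** 2))
```

On a noise-free image generated by the same model, the likelihood is maximal (equal to one) at the true object position and decays as the candidate position moves away.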
\subsection{Experimental setup}
We focus on single sub-cellular object tracking (a problem which is related to the ``track-before-detect'' problem~\cite{ristic2004beyond}) and compare pcSIR with SIR in two test cases, which differ in the size of the tracked object. We consider two different object sizes in order to compare cases where the likelihood is computationally cheap to evaluate with cases where this is more costly. Twenty synthetic image sequences of different quality (i.e., different signal-to-noise ratios, SNR) are generated by simulating a microscope. Each sequence is composed of 50 frames of size 512$\times$512 pixels. The movies show a single object moving according to the dynamics model. Examples are shown in Fig.~\ref{image_data}.
The two object sizes correspond to $\sigma_{\textrm{PSF}}=1.16$ and $\sigma_{\textrm{PSF}}=13$, and are named ``small object tracking'' and ``large object tracking'', respectively (Fig.~\ref{image_data}(a-c)). The positions and directions of motion of the objects are randomly chosen within the image plane. The speed (i.e., the displacement in pixels per frame) is drawn uniformly at random over the interval $[2, 7]$ for large objects and over $[2, 4]$ for small objects. The SNR of the images of large objects is 2 (ca. 6 dB), that for small objects is 4 (ca. 12 dB). We use the SNR definition for Poisson noise~\cite{cheezum2001quantitative}. In the literature on sub-cellular object tracking, a SNR of 4 is considered critical, as for lower SNRs many of the available tracking methods fail~\cite{thomann2002automatic}.
\begin{figure}[]
\centering
\includegraphics[width=0.8\columnwidth]{paper_imageData}
\caption{Examples of object appearance for different object sizes and SNR: (a) $\sigma_{\textrm{PSF}} = 1.16$, SNR=2, (b) $\sigma_{\textrm{PSF}} = 1.16$, SNR=4, (c) $\sigma_{\textrm{PSF}} = 13$, SNR=2. (d/e): Typical object trajectories generated using the nearly-constant-velocity dynamics model.}
\label{image_data}
\end{figure}
Knowing the ground-truth object positions and those estimated by the PF, we quantify the tracking accuracy by the root-mean-square error (RMSE) in units of pixels. The likelihood kernel for the large objects has a support of 65$\times$65 pixels and is correspondingly costly to evaluate. The kernel for the small objects has a support of 9$\times$9 pixels and is cheaper to evaluate. Examples of noise-free and noisy object profiles, together with their likelihood kernels, are shown in Fig.~\ref{Likelihoods}.
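The RMSE in pixels over a trajectory of estimated versus ground-truth positions can be computed as follows (Python sketch):

```python
import math

def rmse(estimated, ground_truth):
    """Root-mean-square position error (in pixels) between estimated
    and ground-truth trajectories given as lists of (x, y) tuples."""
    n = len(estimated)
    sq = sum((ex - tx) ** 2 + (ey - ty) ** 2
             for (ex, ey), (tx, ty) in zip(estimated, ground_truth))
    return math.sqrt(sq / n)
```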
Using double-precision arithmetic, a single PF particle requires 52\,bytes (i.e., six doubles and one integer) of computer memory. The particles are initialized at the ground-truth location and all tests are repeated 50 times for different realizations of the image-noise process on a single core of a 12-core Intel\textregistered \, Xeon\textregistered \, E5-2640 2.5\,GHz CPU with 128\,GB DDR3 800\,MHz memory on MPI-CBG's MadMax computer cluster. All algorithms are implemented in Java (v. 1.7.0\_13) within the Parallel Particle Filtering (\texttt{PPF}) library~\cite{demirel2013ppf}. The results are summarized in Figs.~\ref{asirvssir_bigObj} and \ref{asirvssir_smallObj} for large and small objects, respectively.
\begin{figure}[]
\centering
\includegraphics[width=0.8\columnwidth]{LikelihoodsGrid}
\caption{Examples of likelihood profiles. The noise-free objects are shown in (a, e), and the noisy (SNR=2) object in (i) with $\sigma_{\textrm{PSF}} = 1.16$. We show the corresponding likelihood kernels (b, f, j), the approximated likelihoods used by pcSIR-1x1 (c, g, k), and the approximated likelihoods used by pcSIR-2x2 (d, h, l). In (b, c, d) the parameter $\sigma_{\xi}$ is 30, for the rest $\sigma_{\xi}=10$. The distance between the grid-lines corresponds to the size of the image pixel.}
\label{Likelihoods}
\end{figure}
\subsection{Results}
When tracking large objects (Fig.~\ref{asirvssir_bigObj}), both pcSIR versions provide significant speedups over the classical SIR algorithm. For 12\,800 particles, pcSIR-1x1 is more than two orders of magnitude faster than SIR with a 2.4\% loss in tracking accuracy. pcSIR-2x2 provides an up to 5.8\%
better tracking accuracy than SIR while running over 50 times faster. Since SIR is also an approximation of the actual posterior distribution, in some cases pcSIR may provide a better representation of the posterior and thus a higher tracking accuracy. This phenomenon has been previously described~\cite{koblentspopulation}.
When tracking small objects, the likelihood support requires sub-pixel resolution and the effect of bin size is more visible (Fig.~\ref{asirvssir_smallObj}). pcSIR-1x1 uses rather coarse bins compared to the likelihood support (Fig.~\ref{Likelihoods}), resulting in a pronounced loss of tracking accuracy. Visually, however, the trajectories produced by SIR and pcSIR-1x1 are virtually indistinguishable, since the tracking accuracy of pcSIR-1x1 is still in the sub-pixel regime (about 0.27 pixel). When finer bins (pcSIR-2x2) are used, the tracking accuracy of pcSIR is again better than that of SIR, and pcSIR runs more than five times faster than SIR.
\begin{figure}[]
\centering
\includegraphics[width=0.92\columnwidth]{pcSIR-1x1-2x2_vs_SIR_largeObj}
\caption{Runtime performance and tracking accuracy of pcSIR-1x1 ($\times$) and pcSIR-2x2 ($\triangledown$) compared with SIR ($\circ$) for a 65 pixel wide likelihood kernel. The number of particles used starts from 100 and is doubled for each case until 12\,800. The timings of all three methods are presented in $\log$-$\log$ scale (upper left), whereas the relative speedups of the pcSIR methods over SIR are shown in the upper-right plot. The accuracy loss (lower right) of pcSIR-1x1 drops rapidly as the number of particles in the system is increased. Error bars show standard deviations across the 50 repetitions of each experiment.}
\label{asirvssir_bigObj}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=0.92\columnwidth]{pcSIR-1x1-2x2_vs_SIR_smallObj}
\caption{Runtime performance and tracking accuracy of pcSIR-1x1 ($\times$) and pcSIR-2x2 ($\triangledown$) compared with SIR ($\circ$) for a nine-pixel wide likelihood kernel. The number of particles used starts from 8\,000 and is doubled for each case until 1\,024\,000. The timings of all three methods are presented in $\log$-$\log$ scale (upper left), whereas the relative speedups of the pcSIR methods over SIR are shown in the upper-right plot. For the accuracy comparisons (lower left), we show only the results for pcSIR-2x2 and SIR, since pcSIR-1x1's coarse bin resolution results in a 150\% worse tracking accuracy than SIR. Error bars show standard deviations across the 50 repetitions of each experiment.}
\label{asirvssir_smallObj}
\end{figure}
\subsection{Convergence of SIR and pcSIR}
Both SIR and pcSIR employ particle approximations of a smooth, differentiable function, the order of accuracy of which depends on the number of particles $N$.
In order to eliminate uncertainties resulting from the dynamics model, we assume the prior $p(\mathbf{x}_k|\mathbf{Z}^{k-1})$ to be either a uniform distribution over 3$\times$3 or 5$\times$5 pixels, or a Gaussian with $\sigma_{\textrm{prior}}=\{0.5,0.8\}$, respectively. We then evaluate the likelihood in Eq.~(\ref{eqn:likelihood}) with $\sigma_{\xi} =20$ using both SIR and pcSIR. We call this a ``pseudo''-tracking experiment.
The object is a single PSF (Eq.~(\ref{eqn:object})) with $\sigma_{\textrm{PSF}}=1.16$. Visualizations of the object, likelihood, and prior are shown in Fig.~\ref{fig:spotandlikelihood}.
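Sampling a particle position from these priors can be sketched as follows (Python sketch; the \texttt{kind} and \texttt{size} arguments are our naming, with \texttt{size} denoting the side length of the uniform support or $\sigma_{\textrm{prior}}$ of the Gaussian):

```python
import random

def sample_prior(kind, center, size, rng=random):
    """Draw a particle position from the 'pseudo'-tracking priors:
    kind='uniform' -> uniform square of side length `size` around `center`;
    kind='gauss'   -> isotropic Gaussian with sigma_prior = `size`."""
    cx, cy = center
    if kind == "uniform":
        return (cx + rng.uniform(-0.5 * size, 0.5 * size),
                cy + rng.uniform(-0.5 * size, 0.5 * size))
    return (cx + rng.gauss(0.0, size), cy + rng.gauss(0.0, size))
```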
\begin{figure}[]
\centering
\includegraphics[width=\columnwidth]{SpotandLikelihood2}
\caption{The ``pseudo''-tracking experiment: (a) the object with $\sigma_{\textrm{PSF}} = 1.16$, SNR=2; (b) the corresponding likelihood with $\sigma_{\xi} = 20$; (c) a uniform prior of support 5$\times$5 pixel; (d) a Gaussian prior with $\sigma_{\textrm{prior}}=0.5$; (e) a uniform prior of support 3$\times$3 pixel; (f) a Gaussian prior with $\sigma_{\textrm{prior}}=0.8$. Thin white lines indicate the image pixel grid.}
\label{fig:spotandlikelihood}
\end{figure}
We compare two versions of pcSIR, which differ in the placement of the dummy particles: In pcSIR-CoC, the dummy particles are placed at the geometric centers of the bins, whereas in pcSIR-CoM, the centers of mass of the state vectors of all particles inside that bin are used. Each convergence experiment is repeated 1000 times for different realizations of the random process, and the number of particles is increased up to 100\,000. We quantify the RMSE of the state estimation as a function of the number of particles used. The resulting convergence plots for pcSIR and SIR are shown in Fig.~\ref{fig:asirconvergence05}.
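The placement of the dummy particle in the two variants can be sketched as follows (Python sketch; restricted to the two positional state components and square cells for brevity):

```python
def dummy_particle(cell_particles, cell_origin, cell_size, mode="CoM"):
    """Position of a cell's dummy particle: the geometric cell center
    (pcSIR-CoC) or the center of mass of the particle positions inside
    the cell (pcSIR-CoM)."""
    if mode == "CoC":
        return (cell_origin[0] + 0.5 * cell_size,
                cell_origin[1] + 0.5 * cell_size)
    n = len(cell_particles)
    return (sum(p[0] for p in cell_particles) / n,   # center of mass
            sum(p[1] for p in cell_particles) / n)
```

For instance, for the unit cell with origin $(2,3)$, CoC always yields $(2.5, 3.5)$, whereas CoM follows the actual particle distribution inside the cell.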
We observe no significant differences between SIR and the two pcSIR variants. The error of pcSIR-CoM is always slightly lower than that of pcSIR-CoC. SIR is generally the most accurate, but is outperformed by pcSIR-CoM in some cases, confirming our experimental tests as well as the findings in Ref.~\cite{koblentspopulation}. As $N$ increases, the errors of all methods decrease at the same rate. In all cases, however, the runtimes of both pcSIR variants were significantly less than that of SIR.
\begin{figure}[]
\centering
\includegraphics[width=0.44\columnwidth]{Rplot_sigmaprior05}
\includegraphics[width=0.44\columnwidth]{Rplot_sigmaprior08}
\caption{The ``pseudo''-tracking experiment results for the Gaussian prior with $\sigma_{\textrm{prior}}=0.5$ and the uniform prior with 3$\times$3-pixel support (left), and for the Gaussian prior with $\sigma_{\textrm{prior}}=0.8$ and the uniform prior with 5$\times$5-pixel support (right). The state estimation errors of pcSIR relative to SIR range from $-12\%$ to $+6\%$. The difference between pcSIR and SIR decreases as $N$ increases. Both pcSIR and SIR converge with increasing number of particles. The RMSE is reduced by about 30\% every time the number of particles doubles, corresponding to a convergence rate of $1/\sqrt{N}$, as expected for a Monte Carlo method. Error bars are below symbol size.}
\label{fig:asirconvergence05}
\end{figure}
\section{Conclusions}
\label{sec:conclusion}
We proposed a fast approximate SIR algorithm, called pcSIR. pcSIR is based on spatially binning particles in \textit{cells} and representing each cell by a single dummy particle at the center of mass of the cell's particle distribution, carrying the average state vector of all particles in that cell. This approximates the likelihood by a first-order multipole expansion~\cite{greengard1987fast}. pcSIR significantly reduces the computational cost of SIR and enables tackling larger problems as well as tackling mid-size problems in real time. In some configurations, especially when sub-pixel resolution is used for the bins, pcSIR may yield more accurate results than SIR.
We performed both theoretical and experimental error analysis of pcSIR. We showed that the error in the posterior decreases as the number of particles increases. Moreover, pcSIR converges at the same rate as SIR, since the Monte-Carlo sampling error masks the error from the function approximation. We presented theoretical upper bounds on the likelihood approximation error as a function of cell size in pcSIR.
We experimentally tested the tracking accuracy and runtime performance of two pcSIR variants for image processing: pcSIR-1x1 and pcSIR-2x2. In our benchmarks, pcSIR showed significant speedups over SIR. As more particles are used, the relative speedup over SIR seems to grow exponentially for large-object tracking scenarios, where the likelihood is costly to evaluate. In the presented benchmarks with 12\,800 particles, SIR required 5 minutes to track the large object through a 50-frame 2D image sequence. pcSIR-1x1 needed only 2.3 seconds to accomplish the same task at the expense of a 2.4\% loss in tracking accuracy. pcSIR-2x2 completed the task in 5.3 seconds with a 5.8\% better tracking accuracy than SIR. This improvement stems from the fact that for some posterior distributions, the piecewise constant likelihood approximation of pcSIR may be a more regular representation than that generated by SIR. This is a known phenomenon~\cite{koblentspopulation}. The relative speedups of the two pcSIR variants over classical SIR were 130-fold and 57-fold for 12\,800 particles, respectively. For larger numbers of particles, we expect even larger speedups.
For small-object tracking, both pcSIR variants showed an average 5-fold improvement in execution time for the largest tested particle number. However, the tracking accuracy of pcSIR-1x1 is greatly reduced, since the likelihood function has a narrow support that is not well sampled by the coarse bins. While the errors are in the range of 150\%, they are barely visible in the final trajectories since the average RMSE is only about 0.27 pixels. Interestingly, pcSIR-2x2 shows improvements both in overall runtime (5-fold) and in tracking accuracy (1\%), which suggests that pcSIR-2x2 may be a good algorithm for tracking small objects.
We believe that pcSIR can be used in many PF applications that require large numbers of particles, costly likelihood evaluations, or real-time performance. When tracking accuracy is not critical, pcSIR-1x1 can offer orders of magnitude speedup in image-processing applications. If a loss in tracking accuracy is undesired, pcSIR-2x2 still offers significant speedups while in some cases even improving accuracy over SIR. In other applications, one can adjust the size of the averaging bins according to the desired accuracy. Future improvements could involve adaptive bin sizes.
\section*{Appendix: Approximation error of pcSIR in 2D}
\label{app:asir1}
Approximating integrable functions by piecewise constant functions is well understood in mathematics on the basis of Riemann integral theory~\cite{Davis1967,Thomas1984}. We formulate the approximation error $\mathtt{E}_{pcSIR}(l_x,l_y)$ of pcSIR with rectangular cells and then simplify it to $\mathtt{E}_{pcSIR}(l)$ for square cells. All results can be extended to higher-dimensional settings.
Let the likelihood $p(\mathbf{z}_{k}|\mathbf{x}_{k})$ be a twice continuously differentiable function $f(x,y)$ within a domain $D \in \mathbb{R}^{[x_0,x_{n}]\times[y_0,y_{m}]}$, which is divided into $B=n\times m$ non-overlapping rectangular cells. Further, $l_{k_i}$ and $l_{k_j}$ denote the width (i.e., in $x$-direction) and the height (i.e., in $y$-direction) of cell $I_k$ in $D$, where $D= \bigcup_{k=1}^{B}I_k$. The indices $i$ and $j$ are given by $i=1,\ldots ,n$ and $j=1,\ldots ,m$, and the maximum side lengths in both dimensions are defined as $l_x = \max_{k_i}(l_{k_i})$ and $l_y = \max_{k_j}(l_{k_j})$ where $l_{k_i}=x_k-x_{k-1}$ and $l_{k_j}=y_k-y_{k-1}$. Then, the total approximation error $\mathtt{E}_{pcSIR}(l_x,l_y)$ of the likelihood in $D$ obtained by pcSIR (Algorithm~\ref{alg:asir}) is bounded by:
\begin{equation*}
\begin{split}
\mathtt{E}_{pcSIR}(l_x,l_y) \leq & \frac{1}{24} \left [ \max_{[D]}|f_{xx}| \, l_x^3 \, l_y + \max_{[D]}|f_{yy}| \, l_x l_y^3 \right ] ,
\end{split}
\end{equation*}
where $\max_{[D]}|f_{xx}|$ and $\max_{[D]}|f_{yy}|$ are the maxima of the absolute values of $\frac{\partial^2 f}{\partial x^2}$ and $\frac{\partial^2 f}{\partial y^2}$ in $D$, respectively.
This result can be derived by mid-point Riemann-sum approximation of an integral. While the dummy particle does not have to be located at the center of a cell, for the sake of simplicity of the derivation, we assume that pcSIR uses the mid-point for piecewise constant likelihood approximation. Assume that $f(x,y)$ is twice continuously differentiable in region $D \in \mathbb{R}^{[x_0,x_{n}]\times[y_0,y_{m}]}$ and the following partial derivatives are defined: $\frac{\partial^2 f}{\partial x^2}=f_{xx}$, $\frac{\partial^2 f}{\partial y^2}=f_{yy}$ and $\frac{\partial^2 f}{\partial x \partial y}=f_{xy}$. The approximation error $\mathtt{E}_{I_{k}}(l_x,l_y)$ can be calculated by integrating the multivariate Taylor approximation
\begin{equation}
\label{eqn:asir1}
\begin{split}
\mathtt{E}_{I_{k}}(l_x,l_y) &= f(x,y)-f(a,b) \\
& = f_x(a,b)(x-a)+f_y(a,b)(y-b) \\
& + \frac{1}{2!} [f_{xx}(a,b)(x-a)^2 + f_{yy}(a,b)(y-b)^2 \\
& + 2 f_{xy}(a,b)(x-a)(y-b)]
\end{split}
\end{equation}
over the two-dimensional interval $I_k=[x_{k-1},x_k]\times[y_{k-1},y_k]$, where $a=\frac{x_{k-1}+x_k}{2}$, $b=\frac{y_{k-1}+y_k}{2}$, and $D= \bigcup_{k=1}^{B}I_k$, hence:
\begin{equation}
\label{eqn:asir1b}
\begin{split}
\mathtt{E}_{I_{k}}(l_x,l_y) =& f_x(a,b) \int \!\!\! \int_{I_k} (x-a) \,\mathrm{d}x\,\mathrm{d}y \\
+ & f_y(a,b)\int \!\!\! \int_{I_k} (y-b) \,\mathrm{d}x\,\mathrm{d}y \\
+ & \frac{1}{2!} \left [ f_{xx}(a,b) \int \!\!\! \int_{I_k} (x-a)^2 \,\mathrm{d}x\,\mathrm{d}y \right. \\
+ & f_{yy}(a,b) \int \!\!\! \int_{I_k} (y-b)^2 \,\mathrm{d}x\,\mathrm{d}y \\
+ & \left. 2 f_{xy}(a,b) \int \!\!\! \int_{I_k} (x-a)(y-b) \,\mathrm{d}x\,\mathrm{d}y \right ].
\end{split}
\end{equation}
The first-order and mixed-derivative integrals vanish by symmetry of $I_k$ about its mid-point $(a,b)$. Substituting $a$ and $b$ in Eq.~(\ref{eqn:asir1b}), we then find:
\begin{equation}
\label{eqn:asir2}
\begin{split}
\mathtt{E}_{I_{k}}(l_x,l_y) = & \frac{1}{2!} \left [ f_{xx}(a,b) \int \!\!\! \int_{I_k} \left (x-\frac{x_{k-1}+x_k}{2} \right)^2 \,\mathrm{d}x\,\mathrm{d}y \right. \\
+ & \left. f_{yy}(a,b) \int \!\!\! \int_{I_k} \left (y-\frac{y_{k-1}+y_k}{2}\right)^2 \,\mathrm{d}x\,\mathrm{d}y \right ].
\end{split}
\end{equation}
We substitute $l_{k_i}=x_k-x_{k-1}$, $l_{k_j}=y_k-y_{k-1}$ and evaluate the integrals. Then Eq.~(\ref{eqn:asir2}) becomes
\begin{equation}
\label{eqn:asir3}
\begin{split}
\mathtt{E}_{I_{k}}(l_x,l_y) = \frac{1}{24} \left [ f_{xx}(a,b)\, l_{k_i}^3 \, l_{k_j} + f_{yy}(a,b)\, l_{k_j}^3 \, l_{k_i} \right ].
\end{split}
\end{equation}
Next, we sum the absolute values of the partial errors in all $I_k$ regions in order to provide an upper bound on the total approximation error in the closed region $[D]$ as:
\begin{equation}
\label{eqn:asir4}
\begin{split}
\mathtt{E}(l_x,l_y) \leq & \frac{1}{24} \left [ \max_{[D]}|f_{xx}| \, \max_{k_i}(l_{k_i})^3 \,\max_{k_j}(l_{k_j}) \right. \\
&\quad \left.+ \max_{[D]}|f_{yy}| \, \max_{k_j}(l_{k_j})^3 \, \max_{k_i}(l_{k_i}) \right ].
\end{split}
\end{equation}
By substituting $l_x$ and $l_y$ into Eq.~(\ref{eqn:asir4}), we find the total error in the closed region $[D]$:
\begin{equation}
\label{eqn:asir4.5}
\begin{split}
\mathtt{E}(l_x,l_y) \leq & \frac{1}{24} \left [ \max_{[D]}|f_{xx}| \, l_x^3 \, l_y + \max_{[D]}|f_{yy}| \, l_x l_y^3 \right ].
\end{split}
\end{equation}
For equi-sized square cells, a tighter bound on the approximation error can be derived by repeating the steps that lead to Eq.~(\ref{eqn:asir3}). The derivation differs in that all cells $I_k$ in the region $D \in \mathbb{R}^{[x_0,x_{n}]\times[y_0,y_{m}]}$, where $D= \bigcup_{k=1}^{B}I_k$, share the same side length. When $I_k$ is a square with $l=l_{k_i}=x_k-x_{k-1}=l_{k_j}=y_k-y_{k-1}$, we obtain the bound on $\mathtt{E}_{I_{k}}(l)$ as:
\begin{equation}
\label{eqn:asir5}
\begin{split}
\mathtt{E}_{I_{k}}(l) \leq & \frac{l^4}{24} \left [ |f_{xx}(a,b)| + |f_{yy}(a,b)| \right ].
\end{split}
\end{equation}
By summing the absolute values of the errors in all $I_k$ regions, we get the total error in closed region $[D]$:
\begin{equation}
\label{eqn:asir6}
\begin{split}
\mathtt{E}(l) \leq & \frac{l^4}{24} \left [ \max_{[D]}|f_{xx}| + \max_{[D]}|f_{yy}| \right ].
\end{split}
\end{equation}
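The square-cell analysis can be checked numerically: for $f(x,y)=\sin(x)\cos(y)$ on the unit square, $\max_{[D]}|f_{xx}|$ and $\max_{[D]}|f_{yy}|$ are both bounded by $1$, and the measured total mid-point approximation error stays below the sum over all $B$ cells of the per-cell bound of Eq.~(\ref{eqn:asir5}) (Python sketch; the per-cell integrals are themselves approximated on a fine sub-grid):

```python
import math

def pc_midpoint_error(f, n, m=8):
    """Total mid-point piecewise-constant approximation error of f on
    the unit square divided into n*n square cells of side l = 1/n.
    The per-cell integral of f - f(midpoint) is estimated on an m*m
    sub-grid, and the absolute partial errors are summed."""
    l = 1.0 / n
    h = l / m
    total = 0.0
    for i in range(n):
        for j in range(n):
            a, b = (i + 0.5) * l, (j + 0.5) * l      # cell mid-point
            cell = 0.0
            for p in range(m):
                for q in range(m):
                    xs = i * l + (p + 0.5) * h
                    ys = j * l + (q + 0.5) * h
                    cell += (f(xs, ys) - f(a, b)) * h * h
            total += abs(cell)                       # sum of |partial errors|
    return total, l

err, l = pc_midpoint_error(lambda x, y: math.sin(x) * math.cos(y), n=8)
# summed per-cell bounds, using max|f_xx|, max|f_yy| <= 1 on [0,1]^2
bound = 8 * 8 * (l ** 4 / 24.0) * (1.0 + 1.0)
```

For $n=8$ the measured error lies well below this summed bound, consistent with the $l^4$ scaling of the per-cell error.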
\section*{Acknowledgments}
The authors thank the MOSAIC Group (MPI-CBG, Dresden) for fruitful discussions and the MadMax cluster team (MPI-CBG, Dresden) for operational support. \"{O}mer Demirel was funded by grant \#200021--132064 from the Swiss National Science Foundation (SNSF), awarded to I.F.S. Ihor Smal was funded by a VENI grant (\#639.021.128) from the Netherlands Organization for Scientific Research (NWO).
\bibliographystyle{unsrt}
\subsection{An Illustrative Example}
To further illustrate our MaTrust trust inference problem (Problem~\ref{P:matrust}), we give an intuitive example as shown in Fig.~\ref{F:example}.
\begin{figure*}[!t]
\centering
\subfigure[The observed locally-generated pair-wise trust relationships]{
\label{F:example:events}\centering
\includegraphics[width=1.5in]{events.eps}}
\hspace{0.1in}
\subfigure[The partially observed trust matrix \textbf{T}]{
\label{F:example:t}\centering
\includegraphics[width=2.0in]{TFG-1.eps}}
\hspace{0.1in}
\subfigure[The inferred trustor matrix \textbf{F} and trustee matrix \textbf{G}]{
\label{F:example:fg}\centering
\includegraphics[width=3.0in]{TFG-2.eps}}
\caption{An illustrative example for MaTrust.}
\label{F:example}\centering
\end{figure*}
\hide{
\begin{eqnarray}
{\mat{T}=\begin{pmatrix}
/ & 1 & 1 & 1 & 1\\
0.5 & / & 1 & ? & ?\\
? & 1 & / & ? & ?\\
0.5 & ? & ? & / & ?\\
0.5 & ? & ? & 1 & /\\
\end{pmatrix} }
\nonumber \\
{\mat{F}=\begin{pmatrix}
1 & 1 \\
1 & 0 \\
1 & 0 \\
0 & 1 \\
0 & 1 \\
\end{pmatrix} } ~~~
{\mat{G}=\begin{pmatrix}
0.5 & 0.5 \\
1 & 0 \\
1 & 0 \\
0 & 1 \\
0 & 1 \\
\end{pmatrix} } \nonumber
\end{eqnarray}
}
In this example, we observe several locally-generated pair-wise trust relationships between five users (e.g., `{\em Alice}', `{\em Bob}', `{\em Carol}', `{\em David}', and `{\em Elva}') as shown in Fig.~\ref{F:example:events}. Each observation contains a trustor, a trustee, and a numerical trust rating from the trustor to the trustee. We then model these observations as a $5 \times 5$ partially observed matrix $\textbf{T}$ (see Fig.~\ref{F:example:t}) where $\textbf{T}(i,j)$ is the trust rating from the $i^{th}$ user to the $j^{th}$ user if the rating is observed and $\textbf{T}(i,j)$ = `?' otherwise. Notice that we do not consider self-ratings and thus represent the diagonal elements of $\textbf{T}$ as `$/$'. By setting the number of factors $s = 2$, our goal is to infer two $5 \times 2$ matrices $\textbf{F}$ and $\textbf{G}$ (see Fig.~\ref{F:example:fg}) from the input matrix $\textbf{T}$. Each row of the two matrices is the stereotype for the corresponding user, and each column of the matrices represents a certain aspect/factor in trust inference (e.g., `delivering time', `product price', etc).
For example, we can see that {\em Alice} trusts others strongly with respect to both `delivering time' and `product price' (based on matrix $\mat F$), and she is in turn moderately trusted by others with respect to these two factors (based on matrix $\mat G$). On the other hand, both {\em Bob} and {\em Carol} put more emphasis on the delivering time, while {\em David} and {\em Elva} care more about the product price.
Once $\textbf{F}$ and $\textbf{G}$ are inferred, we can use these two matrices to estimate the unseen trustworthiness scores (i.e., the `?' elements in $\textbf{T}$). For instance, the trustworthiness from {\em Carol} to {\em Alice} can be estimated as $\mat{\hat T}(3,1) = \textbf{F}(3,:) \textbf{G}(1,:)' = 0.5$. This estimation is reasonable because Carol has the same stereotype as Bob and the trustworthiness score from {\em Bob} to {\em Alice} is also $0.5$. As another example, {\em David} and {\em Elva} have similar preferences (i.e., the same stereotypes), and thus we conjecture that they would trust each other (i.e., $\mat{\hat T}(4,5)$ = $\textbf{F}(4,:) \textbf{G}(5,:)' = 1$). In the rest of the paper, we will mainly focus on how to characterize $\textbf{F}$ and $\textbf{G}$ from the partially observed input matrix $\mat T$.
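The estimates above can be reproduced directly from the stereotype matrices, since $\mat{\hat T}(i,j) = \textbf{F}(i,:)\textbf{G}(j,:)'$ is just an inner product (Python sketch; users and factors are 0-indexed in code, so {\em Carol} is row 2 and {\em Alice} is row 0):

```python
def estimate_trust(F, G, i, j):
    """Estimated trust from user i to user j as the inner product of
    trustor stereotype F[i] and trustee stereotype G[j]."""
    return sum(fi * gj for fi, gj in zip(F[i], G[j]))

# stereotype matrices from the illustrative example (Fig. 1c)
F = [[1, 1], [1, 0], [1, 0], [0, 1], [0, 1]]
G = [[0.5, 0.5], [1, 0], [1, 0], [0, 1], [0, 1]]
```

With these matrices, the trust from {\em Carol} to {\em Alice} evaluates to $0.5$ and the trust from {\em David} to {\em Elva} to $1$, as discussed above.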
\subsection{The Basic Formulation}
Formally, Problem~\ref{P:matrust} can be formulated as the following optimization problem: \begin{eqnarray}\label{E:optbasic}
\min_{\textbf{F},\textbf{G}} \sum_{(i,j)\in \mathcal{K}} (\textbf{T}(i,j) - \textbf{F}(i,:)\textbf{G}(j,:)')^2 + \lambda ||\textbf{F}||_{fro}^2 + \lambda ||\textbf{G}||_{fro}^2
\end{eqnarray}
\noindent where $\lambda$ is a regularization parameter; $||\textbf{F}||_{fro}$ and $||\textbf{G}||_{fro}$ are the Frobenius norms of the trustor and trustee matrices, respectively.
By this formulation, MaTrust aims to minimize the squared error over the set of observed trust ratings. Notice that in Eq.~\eqref{E:optbasic}, we add two regularization terms ($||\textbf{F}||_{fro}^2$ and $||\textbf{G}||_{fro}^2$) to improve the solution stability; the parameter $\lambda \ge 0$ controls the strength of this regularization. Based on the resulting $\textbf{F}$ and $\textbf{G}$, the unseen trustworthiness score $\mat{\hat T}(u,v)$ can then be estimated from the corresponding stereotypes $\textbf{F}(u,:)$ and $\textbf{G}(v,:)$ as: \begin{equation}
\mat{\hat T}(u,v) = \textbf{F}(u,:)\textbf{G}(v,:)'
\end{equation}
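To make the basic formulation concrete, the following Python sketch fits $\textbf{F}$ and $\textbf{G}$ on a toy $3 \times 3$ trust matrix by plain stochastic gradient descent over the observed pairs. This is only an illustration of Eq.~\eqref{E:optbasic}: the function name, toy data, and hyper-parameters are our own, and it is not the alternating ridge-regression solver developed later in the paper.

```python
import numpy as np

def matrust_basic(T, observed, r=2, lam=0.01, lr=0.05, epochs=2000, seed=0):
    """Minimize sum_{(i,j) in K} (T(i,j) - F(i,:)G(j,:)')^2
    + lam*(||F||_fro^2 + ||G||_fro^2) by stochastic gradient descent."""
    rng = np.random.default_rng(seed)
    n = T.shape[0]
    F = rng.normal(scale=0.3, size=(n, r))   # trustor matrix
    G = rng.normal(scale=0.3, size=(n, r))   # trustee matrix
    for _ in range(epochs):
        for (i, j) in observed:
            e = T[i, j] - F[i] @ G[j]        # residual on one observed rating
            Fi = F[i].copy()
            F[i] += lr * (e * G[j] - lam * F[i])
            G[j] += lr * (e * Fi - lam * G[j])
    return F, G

# toy trust matrix; (0, 2) and (2, 0) are the unseen '?' entries
T = np.array([[0.9, 0.8, 0.0],
              [0.7, 0.9, 0.6],
              [0.0, 0.8, 0.9]])
observed = [(i, j) for i in range(3) for j in range(3)
            if (i, j) not in {(0, 2), (2, 0)}]
F, G = matrust_basic(T, observed)
T_hat = F @ G.T   # estimated scores: T^(u,v) = F(u,:) G(v,:)'
```

The unseen scores are then read off $\mat{\hat T}$, e.g.\ `T_hat[0, 2]` for the hidden pair.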
\subsection{Incorporating Bias}
The above formulation can naturally incorporate some prior knowledge such as trust bias into the inference procedure. In this paper, we explicitly consider the following three types of trust bias: \emph{global bias}, \emph{trustor bias}, and \emph{trustee bias}, although other types of bias can be incorporated in a similar way.
\begin{itemize}
\item The {\em global bias} represents the average level of trust in the community. The intuition behind this is that users tend to rate optimistically in some reciprocal environments (e.g., e-Commerce~\cite{resnick2002trust}) while they are more conservative in others (e.g., security-related applications). As a result, it might be useful to take such global bias into account and we model it as a scalar $\mu$.
\item The {\em trustor bias} is based on the observation that some trustors tend to be generous and give higher trust ratings than others. This bias reflects the propensity of a given trustor to trust others, and it may vary considerably among trustors. Accordingly, we model the trustor bias as a vector $\textbf{x}$, with $\textbf{x}(i)$ indicating the trust propensity of the $i^{th}$ trustor.
\item The third type of bias ({\em trustee bias}) captures the fact that some trustees are more capable of being trusted than others. Similar to the trustor bias, we model it as a vector $\textbf{y}$, where $\textbf{y}(j)$ indicates the overall capability of the $j^{th}$ trustee relative to the average.
\end{itemize}
Each of these three types of bias can be represented as a {\em specified} factor for our MaTrust model, respectively. By incorporating such bias into Eq.~\eqref{E:optbasic}, we have the following formulation: \begin{eqnarray}\label{E:optbias}
\min_{\textbf{F},\textbf{G}} \sum_{(i,j)\in \mathcal{K}} &~& (\textbf{T}(i,j) - \textbf{F}(i,:)\textbf{G}(j,:)')^2 + \lambda ||\textbf{F}||_{fro}^2 + \lambda ||\textbf{G}||_{fro}^2\nonumber\\
\textrm{Subject~to:~}&~& \mat{F}(:,1) = \mu\mat{1},~\mat{G}(:,1)=\alpha_1\mat{1}~~~\textrm{(global~bias)}\nonumber\\
&~& \mat{F}(:,2) = \mat{x},~\mat{G}(:,2)=\alpha_2\mat{1}~~~\textrm{(trustor~bias)}\nonumber\\
&~& \mat{F}(:,3) = \alpha_3\mat{1},~\mat{G}(:,3)=\mat{y}~~~\textrm{(trustee~bias)}\end{eqnarray}
\noindent where $\alpha_1,\alpha_2,$ and $\alpha_3$ are the weights of bias that we need to estimate based on the existing trust ratings.
In addition to these three {\em specified} factors, we refer to the remaining factors in the trustor and trustee matrices as {\em latent} factors. Let $r$ be the number of latent factors; we collect them in two $n \times r$ sub-matrices $\mat{F}_0 = \mat{F}(:, 4\!:\!r\!+\!3)$ and $\mat{G}_0 = \mat{G}(:, 4\!:\!r\!+\!3)$, where each column of $\mat{F}_0$ and $\mat{G}_0$ corresponds to one latent factor. With this notation, we have the following equivalent form of Eq.~\eqref{E:optbias}: \begin{eqnarray}\label{E:optfinal}
\min_{\textbf{F}_0,\textbf{G}_0,\alpha_1,\alpha_2,\alpha_3} \sum_{(i,j)\in \mathcal{K}}&~& (\textbf{T}(i,j) - (\alpha_1 \mu + \alpha_2 \textbf{x}(i) + \alpha_3 \textbf{y}(j) + \nonumber\\ &~& \textbf{F}_0(i,:)\textbf{G}_0(j,:)'))^2 + \lambda\|\mat{F}\|_{fro}^2 + \lambda\|\mat{G}\|_{fro}^2
\end{eqnarray}
Notice that there is no coefficient before $\textbf{F}_0(i,:)\textbf{G}_0(j,:)'$ as it will be automatically absorbed into $\textbf{F}_0$ and $\textbf{G}_0$. Once we have inferred all the parameters (i.e., $\textbf{F}_0$, $\textbf{G}_0$, $\alpha_1$, $\alpha_2$, and $\alpha_3$) of Eq.~\eqref{E:optfinal}, the unseen trustworthiness score $\mat{\hat T}(u,v)$ can be immediately estimated as: \begin{equation}\label{E:onlineinfer}
\mat{\hat T}(u,v) = \textbf{F}_0(u,:)\textbf{G}_0(v,:)' + \alpha_1 \mu + \alpha_2 \textbf{x}(u) + \alpha_3 \textbf{y}(v)
\end{equation}
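Note that once $\textbf{F}_0$ and $\textbf{G}_0$ are held fixed, Eq.~\eqref{E:optfinal} is linear in $(\alpha_1, \alpha_2, \alpha_3)$, so the bias weights can be recovered by a small least-squares fit on the observed pairs with feature vector $(\mu, \textbf{x}(i), \textbf{y}(j))$. The sketch below illustrates this inner step on synthetic data; the function name and toy values are our own, and in the full algorithm this step would be interleaved with the factor updates.

```python
import numpy as np

def fit_bias_weights(T, observed, mu, x, y, F0, G0):
    """With F0 and G0 fixed, the objective is an ordinary least-squares
    problem in (a1, a2, a3) with features [mu, x(i), y(j)]."""
    A = np.array([[mu, x[i], y[j]] for (i, j) in observed])
    b = np.array([T[i, j] - F0[i] @ G0[j] for (i, j) in observed])
    alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
    return alpha   # (a1, a2, a3)

# synthetic check: build T from known weights, then recover them
rng = np.random.default_rng(1)
n, r = 3, 2
mu = 0.6
x = np.array([0.1, -0.1, 0.0])
y = np.array([0.05, -0.05, 0.0])
F0 = rng.normal(size=(n, r))
G0 = rng.normal(size=(n, r))
true_alpha = np.array([1.0, 0.5, 2.0])
observed = [(i, j) for i in range(n) for j in range(n)]
T = np.array([[true_alpha[0]*mu + true_alpha[1]*x[i] + true_alpha[2]*y[j]
               + F0[i] @ G0[j] for j in range(n)] for i in range(n)])
alpha = fit_bias_weights(T, observed, mu, x, y, F0, G0)
```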
In the above formulations, we need to specify the three types of bias, i.e., to compute $\mu$, $\textbf{x}$, and $\textbf{y}$. Remember that the only information we need for MaTrust is the existing trust ratings. Therefore, we simply estimate the bias information from $\textbf{T}$ as follows: \begin{equation}\label{E:bias}
\left\{ \begin{aligned}
\mu = \sum_{(i,j)\in \mathcal{K}} \textbf{T}(i,j) / |\mathcal{K}|~~~~~~~~~~~~~~~\\
\textbf{x}(i) = \sum_{j,(i,j)\in \mathcal{K}} \textbf{T}(i,j)/|row_i| - \mu \\
\textbf{y}(j) = \sum_{i,(i,j)\in \mathcal{K}} \textbf{T}(i,j)/|col_j| - \mu \\
\end{aligned} \right.
\end{equation}
where $|row_i|$ is the number of the observed elements in the $i^{th}$ row of $\textbf{T}$, and $|col_j|$ is the number of the observed elements in the $j^{th}$ column of $\textbf{T}$.
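As a small illustration of Eq.~\eqref{E:bias}, the sketch below estimates $\mu$, $\textbf{x}$, and $\textbf{y}$ from a toy fully observed $2 \times 2$ trust matrix. The function name and data are our own; a boolean mask plays the role of the observed set $\mathcal{K}$.

```python
import numpy as np

def estimate_bias(T, mask):
    """Estimate global bias mu, trustor bias x, and trustee bias y
    from the observed entries of T (mask[i,j] is True iff (i,j) in K)."""
    mu = T[mask].mean()                                    # global bias
    x = np.array([T[i, mask[i]].mean() - mu for i in range(T.shape[0])])
    y = np.array([T[mask[:, j], j].mean() - mu for j in range(T.shape[1])])
    return mu, x, y

T = np.array([[0.9, 0.5],
              [0.3, 0.7]])
mask = np.ones_like(T, dtype=bool)   # here every pair is observed
mu, x, y = estimate_bias(T, mask)
# mu = 0.6; x = [0.1, -0.1] (the first trustor rates generously); y = [0, 0]
```

A row or column with no observed entries would need a fallback (e.g., bias $0$), which this sketch omits.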
\subsection{Discussions and Generalizations}
We further present some discussions and generalizations of our optimization formulation.
First, it is worth pointing out that our formulation in Eq.~\eqref{E:optbasic} differs from standard matrix factorization (e.g., SVD): in the objective function, we minimize the square loss {\em only} over the observed trust pairs, because the majority of trust pairs are missing from the input trust matrix $\mat T$. In this sense, our problem setting is conceptually similar to standard collaborative filtering, as in both cases we aim to fill in missing values in a partially observed matrix (trustor-trustee matrix vs. user-item matrix). Indeed, if we fix the coefficients $\alpha_1=\alpha_2=\alpha_3=1$ in Eq.~\eqref{E:optbias}, it reduces to the collaborative filtering algorithm in~\cite{koren2009matrix}. Our formulation in Eq.~\eqref{E:optbias} is more general as it also allows us to learn the optimal coefficients from the input trust matrix $\mat T$. Our experimental evaluations show that this subtle treatment is crucial and leads to further performance improvement over these existing techniques.
Second, although our MaTrust is a subjective trust inference metric where different trustors may form different opinions on the same trustee~\cite{massa2005controversial}, as a side product, the proposed MaTrust can also be used to infer an objective, unique trustworthiness score for each trustee. For example, this objective trustworthiness score can be computed based on the trustee matrix $\textbf{G}$. We will compare this feature of MaTrust with a well studied objective trust inference metric EigenTrust~\cite{kamvar2003eigentrust} in the experimental evaluation section.
Finally, we would like to point out that our formulation is flexible and can be generalized to other settings. For instance, our current formulation adopts the square loss in the objective function; in other words, we implicitly assume that the residuals of the pair-wise trustworthiness scores follow a Gaussian distribution, and in our experimental evaluations we found that this works well. Nonetheless, the proposed MaTrust algorithm can be generalized to {\em any} Bregman divergence in the objective function. We can also naturally incorporate additional constraints (e.g., non-negativity, sparseness) on the trustor and trustee matrices. After we infer all the parameters (i.e., the bias coefficients and the trustor and trustee matrices), we use a linear combination (i.e., the inner product) of the trustor stereotype $\textbf{F}(u,:)$ and the trustee stereotype $\textbf{G}(v,:)$ to compute the trustworthiness score $\mat{\hat T}(u,v)$. This linear form can further be generalized to non-linear combinations, such as the logistic function. For the sake of clarity, we skip the details of such generalizations in this paper.
\subsection{Data Sets Description}
Many existing trust inference models design specific simulation studies to verify their underlying assumptions. In contrast, we focus on two widely used, real benchmark data sets in order to compare the performance of different trust inference models.
The first data set is {\em advogato}\footnote{http://www.trustlet.org/wiki/Advogato\_dataset.}. It is a trust-based social network for open source developers. To allow users to certify each other, the network provides 4 levels of trust assertions, i.e., `{\em Observer}', `{\em Apprentice}', `{\em Journeyer}', and `{\em Master}'. These assertions can be mapped into real numbers which represent the degree of trust. To be specific, we map `{\em Observer}', `{\em Apprentice}', `{\em Journeyer}', and `{\em Master}' to 0.1, 0.4, 0.7, and 0.9, respectively (a higher value means more trustworthiness).
The second data set is {\em PGP} (short for Pretty Good Privacy)~\cite{hang2009operators}. PGP adopts the concept of `web of trust' to establish a decentralized model for data encryption and decryption. Similar to {\em advogato}, the web of trust in PGP data set contains 4 levels of trust as well. In our experiments, we also map them to 0.1, 0.4, 0.7, and 0.9, respectively.
Table~\ref{T:statistics} summarizes the basic statistics of the two resulting partially observed trust matrices $\mat T$. Notice that the {\em advogato} data set contains six different snapshots, i.e., {\em advogato-1}, {\em advogato-2}, ..., {\em advogato-6}. We use the largest snapshot (i.e., {\em advogato-6}) in the following unless otherwise specified.
\begin{figure}[!t]
\centering
\subfigure[Trustor bias distribution on advogato]{
\label{F:trustorbias:advogato}\centering
\includegraphics[width=1.5in]{trustorbias-advogato.eps}}
\hspace{0.1in}
\subfigure[Trustee bias distribution on advogato]{
\label{F:trusteebias:advogato}\centering
\includegraphics[width=1.5in]{trusteebias-advogato.eps}}
\hspace{0.1in}
\subfigure[Trustor bias distribution on PGP]{
\label{F:trustorbias:pgp}\centering
\includegraphics[width=1.5in]{trustorbias-pgp.eps}}
\hspace{0.1in}
\subfigure[Trustee bias distribution on PGP]{
\label{F:trusteebias:pgp}\centering
\includegraphics[width=1.5in]{trusteebias-pgp.eps}}
\caption{The distributions of trustor bias and trustee bias.}
\label{F:biasdistribution}\centering
\end{figure}
Fig.~\ref{F:biasdistribution} presents the distributions of trustor bias and trustee bias. As we can see, many users in advogato are close to the average both in trusting others and in being trusted by others. On the other hand, a considerable portion of PGP users are cautiously trusted by others, and even more users tend to rate others strictly. The global bias is 0.6679 for advogato and 0.3842 for PGP. This also confirms that the security-related PGP network is a more conservative environment than the developer-based advogato network.
\subsection{Effectiveness Results}
\begin{figure}[!t]
\centering
\subfigure[{\em advogato} data set]{
\label{F:effectiveness:advogato}\centering
\includegraphics[width=2.5in]{effectiveness.eps}}
\hspace{0.1in}
\subfigure[{\em PGP} data set]{
\label{F:effectiveness:pgp}\centering
\includegraphics[width=2.5in]{effectiveness-pgp.eps}}
\caption{Comparisons with subjective trust inference models. The proposed MaTrust significantly outperforms all the other models wrt both RMSE and MAE on both data sets.}
\label{F:effectiveness}\centering
\end{figure}
We use both {\em advogato} (i.e., {\em advogato-6}) and {\em PGP} for effectiveness evaluations. For both data sets, we hide a randomly selected sample of 500 observed trustor-trustee pairs as the test set, and apply the proposed MaTrust as well as other compared methods on the remaining data set to infer the trustworthiness scores for those hidden pairs. To evaluate and compare the accuracy, we report both the root mean squared error (RMSE) and the mean absolute error (MAE) between the estimated and the true trustworthiness scores. Both RMSE and MAE are measured on the 500 hidden pairs in the test set. We set $\lambda = 1.0$, $r = 10$, $m_1 = 10$, $m_2 = 100$, and $\xi_1 = \xi_2 = 10^{-6}$ in our experiments unless
otherwise specified.
{\em (A) Comparisons with Existing Subjective Trust Inference Methods}. We first compare the effectiveness of MaTrust with several benchmark trust propagation models, including \emph{CertProp}~\cite{hang2009operators}, \emph{MoleTrust}~\cite{massa2005controversial}, \emph{Wang\&Singh}~\cite{wang2006trust,wang2007formal}, and \emph{Guha}~\cite{guha2004propagation}. For all these subjective methods, the goal is to infer a pair-wise trustworthiness score (i.e., to what extent the user $u$ trusts another user $v$).
The results are shown in Fig.~\ref{F:effectiveness}. We can see that the proposed MaTrust significantly outperforms all the other trust inference models wrt both RMSE and MAE on both data sets. For example, on the {\em advogato} data set, MaTrust improves the best existing method (CertProp) by 37.1\% in RMSE and by 23.0\% in MAE. On the {\em PGP} data set, it improves the best existing method (Wang\&Singh) by 25.3\% in RMSE and by 34.3\% in MAE. These results suggest that the multi-aspect nature of trust indeed plays a very important role in the inference process.
{\em (B) Comparisons with Existing Objective Trust Inference Methods}. Although MaTrust is a subjective trust inference metric, as a side product it can also be used to infer an objective trustworthiness score for each trustee. To this end, we set $r=1$ in the MaTrust algorithm, and aggregate the resulting trustee matrix/vector $\mat {G}_0$ with the bias (the global bias $\mu$ and the trustee bias $\textbf{y}$). We compare the result with a widely-cited objective trust inference model, {\em EigenTrust}~\cite{kamvar2003eigentrust}, in Table~\ref{T:eigentrust}. As we can see, MaTrust outperforms EigenTrust in terms of both RMSE and MAE on both data sets. For example, on the {\em advogato} data set, MaTrust is 58.6\% and 68.9\% better than EigenTrust wrt RMSE and MAE, respectively.
\begin{table}[!t]
\caption{Comparisons with {\em EigenTrust}. MaTrust is better than EigenTrust wrt both RMSE and MAE on both data sets.}
\label{T:eigentrust}\centering
\begin{tabular}{c||c|c}
\hline
RMSE/MAE & advogato & PGP \\ \hline\hline
EigenTrust & 0.700 / 0.653 & 0.519 / 0.371 \\
MaTrust & 0.290 / 0.203 & 0.349 / 0.280 \\\hline
\end{tabular}
\end{table}
\begin{table}[!t]
\caption{The importance of trust bias. Trust bias significantly improves trust inference accuracy.}
\label{T:bias}\centering
\begin{tabular}{c||c|c}
\hline
RMSE/MAE & advogato & PGP \\ \hline\hline
MaTrust without trust bias & 0.228 / 0.164 & 0.244 / 0.135 \\
MaTrust & 0.169 / 0.119 & 0.192 / 0.111 \\\hline
\end{tabular}
\end{table}
\begin{table}[!t]
\caption{Comparisons with {\em SVD} and {\em KBV}~\cite{koren2009matrix}. MaTrust outperforms both of them.}
\label{T:koren}\centering
\begin{tabular}{c||c|c}
\hline
RMSE/MAE & advogato & PGP \\ \hline\hline
SVD & 0.629 / 0.579 & 0.447 / 0.306 \\
KBV & 0.179 / 0.125 & 0.217 / 0.133 \\
MaTrust & 0.169 / 0.119 & 0.192 / 0.111 \\\hline
\end{tabular}
\end{table}
{\em (C) Trust Bias Evaluations}. We next show the importance of trust bias by comparing MaTrust with the results when trust bias is not incorporated. The result is shown in Table~\ref{T:bias}. As we can see, MaTrust performs much better when trust bias is incorporated. For example, on {\em PGP} data set, trust bias helps MaTrust to obtain 21.3\% and 17.8\% improvements in RMSE and MAE, respectively. This result confirms that trust bias also plays an important role in trust inference.
{\em (D) Comparisons with Existing Matrix Factorization Methods}. We also compare MaTrust with two existing matrix factorization methods: {\em SVD} and the collaborative filtering algorithm~\cite{koren2009matrix} for recommender systems (referred to as {\em KBV}).
The result is shown in Table~\ref{T:koren}. As we can see from the table, MaTrust again outperforms both SVD and KBV on both data sets. {\em SVD} performs poorly as it treats all the unobserved trustor-trustee pairs as zero elements in the trust matrix $\mat T$. MaTrust also outperforms {\em KBV}. For example, MaTrust improves {\em KBV} by 11.5\% in RMSE and by 16.5\% in MAE on {\em PGP} data set. As mentioned before, {\em KBV} can be viewed as a special case of the proposed MaTrust if we fix all the coefficients as $1$. This result confirms that by simultaneously learning the bias coefficients from the input trust matrix $\mat T$ (i.e., the relative weights for different types of bias), MaTrust leads to further performance improvement.
\begin{figure}[!t]
\centering
\subfigure[RMSE and MAE of MaTrust wrt $r$. We fix $r = 10$.]{
\label{F:rank}\centering
\includegraphics[width=2.7in]{rank.eps}}
\hspace{0.1in}
\subfigure[RMSE and MAE of MaTrust wrt $\lambda$. We fix $\lambda = 1.0$.]{
\label{F:lamda}\centering
\includegraphics[width=2.7in]{lamda.eps}}
\caption{The sensitivity evaluations. MaTrust is robust wrt both parameters.}
\label{F:parameters}\centering
\end{figure}
{\em (E) Sensitivity Evaluations}. Finally, we conduct a parametric study of MaTrust. The first parameter is the latent factor size $r$. We can observe from Fig.~\ref{F:rank} that, in general, both RMSE and MAE stay stable wrt $r$, with a slight decreasing trend. For example, compared with the results at $r=2$, the RMSE and MAE decrease by 3.1\% and 4.3\% on average if we increase $r$ to $20$. The second parameter in MaTrust is the regularization coefficient $\lambda$. As we can see from Fig.~\ref{F:lamda}, both RMSE and MAE decrease as $\lambda$ increases up to $0.8$, and they stay stable for $\lambda > 0.8$. Based on these results, we conclude that MaTrust is robust wrt both parameters. For all the other results we report in this paper, we simply fix $r=10$ and $\lambda=1.0$.
\subsection{Efficiency Results}
For efficiency experiments, we report the average wall-clock time. All the experiments were run on a machine with two 2.4GHz Intel Cores and 4GB memory.
\begin{figure}[!t]
\centering
\subfigure[Wall-clock time on {\em advogato} data set]{
\label{F:efficiency:advogato}\centering
\includegraphics[width=2.7in]{efficiency.eps}}
\hspace{0.1in}
\subfigure[Wall-clock time on {\em PGP} data set]{
\label{F:efficiency:pgp}\centering
\includegraphics[width=2.7in]{efficiency-pgp.eps}}
\caption{Speed comparison. MaTrust is much faster than all the other methods.}
\label{F:efficiency}\centering
\end{figure}
{\em (A) Speed Comparison}. We first compare the on-line response time of MaTrust with \emph{CertProp}, \emph{MoleTrust}, \emph{Wang\&Singh}, and \emph{Guha}. Again, we use the {\em advogato-6} snapshot and {\em PGP} in this experiment, and the result is shown in Fig.~\ref{F:efficiency}. Notice that the y-axis is in logarithmic scale.
We can see from the figure that the proposed MaTrust is much faster than all the alternative methods on both data sets. For example, MaTrust is 2,000,000 - 3,500,000x faster than MoleTrust. This is because once we have inferred the trustor/trustee matrices as well as the bias coefficients (Steps 1-12 in Algorithm~\ref{A:matrust}), it only takes {\em constant} time for MaTrust to output a trustworthiness score (Step 13 in Algorithm~\ref{A:matrust}).
Among all the alternative methods, Guha is the most efficient. This is because its main workload can also be completed in advance. However, the pre-computation of Guha needs additional $O(n^2)$ space as the model fills nearly all the missing elements in the trust matrix, making it unsuitable for large data sets. In contrast, MaTrust only requires $O(|\mathcal{K}| + nr + r^2)$ space, which is usually much smaller than $n^2$.
\begin{figure}[!t]
\centering
\subfigure[Wall-clock time vs. $n$ on advogato]{
\label{F:scalability:advogato}\centering
\includegraphics[width=1.5in]{scalability1.eps}}
\hspace{0.1in}
\subfigure[Wall-clock time vs. $|{\cal{K}}|$ on advogato]{
\label{F:scalability:advogato:k}\centering
\includegraphics[width=1.5in]{scalability2.eps}}
\hspace{0.1in}
\subfigure[Wall-clock time vs. $n$ on PGP]{
\label{F:scalability:pgp}\centering
\includegraphics[width=1.5in]{scalability1-pgp.eps}}
\hspace{0.1in}
\subfigure[Wall-clock time vs. $|{\cal{K}}|$ on PGP]{
\label{F:scalability:pgp:k}\centering
\includegraphics[width=1.5in]{scalability2-pgp.eps}}
\caption{Scalability of the proposed MaTrust. MaTrust scales linearly wrt the data size ($n$ and $|{\cal{K}}|$).}
\label{F:scalability}\centering
\end{figure}
{\em (B) Scalability}. Finally, we present the scalability results of MaTrust by reporting the wall-clock time of the pre-computation stage (i.e., Steps 1-12 in Algorithm~\ref{A:matrust}). For the {\em advogato} data set, we directly report the results on all six snapshots (i.e., {\em advogato-1}, ..., {\em advogato-6}). For {\em PGP}, we use its subsets to study the scalability. The result is shown in Fig.~\ref{F:scalability}, and it is consistent with the complexity analysis in Section~\ref{sec:algorithmanalysis}. As we can see from the figure, MaTrust scales linearly wrt both $n$ and $|\mathcal{K}|$, indicating that it is suitable for large-scale applications.
\subsection{Trust Propagation Models}
To date, many trust inference models are based on trust propagation, where trust is propagated along connected users in the trust network, i.e., the web of locally-generated trust ratings. Based on the interpretation of trust propagation, we further categorize these models into two classes: \emph{path interpretation} and \emph{component interpretation}.
In the first category of path interpretation, trust is propagated along a path from the trustor to the trustee, and the propagated trust from multiple paths can be combined to form a final trustworthiness score. For example, Wang and Singh~\cite{wang2006trust,wang2007formal} as well as Hang et al.~\cite{hang2009operators} propose operators to concatenate trust along a path and aggregate trust from multiple paths. Liu et al.~\cite{liu2010optimal} argue that not only trust values but social relationships and recommendation role are important for trust inference. In contrast, there is no explicit concept of paths in the second category of component interpretation. Instead, trust is treated as random walks on a graph or on a Markov chain~\cite{richardson2003trust}. Examples of this category include~\cite{guha2004propagation,massa2005controversial,ziegler2005propagation,kuter2007sunny,nordheimer2010trustworthiness}.
Different from these existing trust propagation models, the proposed MaTrust focuses on the multi-aspect nature of trust and directly characterizes several factors/aspects from the existing trust ratings. Compared with trust propagation models, MaTrust has several unique advantages: (1) it captures the multi-aspect property of trust, and (2) it naturally incorporates various types of prior knowledge. In addition, one known problem with these propagation models is their slow on-line response speed~\cite{yao2012subgraph}, while MaTrust enjoys {\em constant} on-line response time and {\em linear} scalability for pre-computation.
\subsection{Multi-Aspect Trust Inference Models}
Researchers in social science have explored the multi-aspect property of trust for several years~\cite{sirdeshmukh2002consumer}. In computer science, there also exist a few trust inference models that {\em explicitly} explore the multi-aspect property of trust. For example, Xiong and Liu~\cite{xiong2004peertrust} model the value of the transaction in trust inference; Wang and Wu~\cite{wang2011multi} take competence and honesty into consideration; Tang et al.~\cite{tang2012mtrust} model an aspect as a set of similar products on product review sites; and Sabater and Sierra~\cite{sabater2002reputation} divide trust in e-commerce environments into three aspects: price, delivery time, and quality.
However, all these existing multi-aspect trust inference methods require more information (e.g., value of transaction as well as user's preference for it, product and its category, etc.) and therefore become infeasible when such information is not available. In contrast, MaTrust does not require any information other than the locally-generated trust ratings, and could therefore be used in more general scenarios.
In terms of trust bias, Mishra et al.~\cite{mishra2011finding} propose an iterative algorithm to compute trustor bias. In contrast, our focus is to incorporate various types of trust bias as specified factors/aspects to increase the accuracy of trust inference.
\subsection{Other Related Methods}
Recently, researchers have begun to apply machine learning models to trust inference. Nguyen et al.~\cite{nguyen2009trust} learn the importance of several trust-related features derived from a social trust framework. Our method takes a further step by simultaneously learning the latent factors and the importance of bias. A seemingly similar concept of stereotypes for trust inference is also used by Liu et al.~\cite{liu2009stereotrust} and Burnett et al.~\cite{burnett2010bootstrapping}. These methods learn stereotypes from the user profiles of the trustees that the trustor has interacted with, and then use these stereotypes to reflect the trustor's first impression of unknown trustees. In contrast, MaTrust builds its stereotypes from the existing trust ratings to capture multiple aspects for trust inference. There is also some recent work on using link prediction approaches to predict the {\em binary} trust/distrust relationship~\cite{leskovec2010predicting,chiang2011exploiting,hsieh2012low}. In this paper, we focus on the more general case where we want to infer a {\em continuous} trustworthiness score from the trustor to the trustee.
Finally, multi-aspect methods have been extensively studied in recommender systems~\cite{bell2007modeling,koren2009matrix,ma2009learning}. In terms of methodology, the closest related work is the collaborative filtering algorithm in~\cite{koren2009matrix}, which can be viewed as a special case of the proposed MaTrust. As mentioned before, MaTrust is more general in that it learns the optimal weights for the prior knowledge, which leads to further performance improvement. On the application side, the goal of recommender systems is to predict users' preferences for items. It is interesting to point out that (1) on one hand, trust between users could help to predict these preferences, as we may give more weight to the recommendations provided by trusted users; and (2) on the other hand, trust itself might be affected by similarity of taste, since users usually trust others with similar tastes~\cite{golbeck2009trust}. Although out of the scope of this paper, using recommendation to further improve trust inference accuracy might be an interesting topic for future work.
\section{Detailed Algorithm~1}
Here, we present the complete algorithm for updating the trustor/trustee matrices when the bias coefficients are fixed (i.e., Algorithm~\ref{A:skeleton} for Eq.~\eqref{E:optsimple}). As mentioned above, we apply the alternating strategy: we alternately fix one of the two matrices and optimize the other. For simplicity, let us consider how to update $\textbf{F}_0$ when $\textbf{G}_0$ is fixed; updating $\textbf{G}_0$ when $\textbf{F}_0$ is fixed can be done in the same way. By fixing $\textbf{G}_0$, Eq.~\eqref{E:optsimple} can be further simplified as follows: \begin{equation}\label{E:optf}
\min_{\textbf{F}_0} \sum_{(i,j)\in \mathcal{K}} (\textbf{P}(i,j) - \textbf{F}_0(i,:)\textbf{G}_0(j,:)')^2 + \lambda ||\textbf{F}_0||_{fro}^2
\end{equation}
In fact, the above optimization problem in Eq.~\eqref{E:optf} now becomes convex wrt $\textbf{F}_0$. It can be further decoupled into many independent sub-problems, each of which only involves a single row in $\mat F_0$: \begin{equation}\label{E:optfline}
\min_{\textbf{F}_0(i,:)} \sum_{j,(i,j)\in \mathcal{K}} (\textbf{P}(i,j) - \textbf{F}_0(i,:)\textbf{G}_0(j,:)')^2 + \lambda ||\textbf{F}_0(i,:)||^2
\end{equation}
The optimization problem in Eq.~\eqref{E:optfline} can now be solved by standard ridge regression wrt the corresponding row $\textbf{F}_0(i,:)$.
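As an illustration, the row-wise ridge update $\textbf{F}_0(i,:)' \leftarrow (\textbf{G}'_1 \textbf{G}_1 + \lambda \textbf{I})^{-1} \textbf{G}'_1 \textbf{d}$ can be sketched in pure Python as follows (the function names \texttt{solve} and \texttt{ridge\_row\_update} are ours, and the Gauss-Jordan solver is one of several possible choices for the small $r\times r$ system):

```python
def solve(A, b):
    """Solve the small dense linear system A x = b by Gauss-Jordan
    elimination with partial pivoting (adequate for r x r with small r)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                fac = M[r][c] / M[c][c]
                M[r] = [mr - fac * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ridge_row_update(d, G1, lam):
    """Return argmin_f sum_k (d[k] - f . G1[k])^2 + lam ||f||^2, i.e. the
    normal-equation solution (G1'G1 + lam I)^{-1} G1' d."""
    r = len(G1[0])
    A = [[sum(row[a] * row[b] for row in G1) + (lam if a == b else 0.0)
          for b in range(r)] for a in range(r)]
    rhs = [sum(row[a] * dk for row, dk in zip(G1, d)) for a in range(r)]
    return solve(A, rhs)
```

Here $\textbf{G}_1$ stacks the rows $\textbf{G}_0(j,:)$ for the observed columns $j$ and $\textbf{d}$ holds the corresponding entries $\textbf{P}(i,j)$.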
\begin{algorithm}[h]
\caption{alternateUpdate($\textbf{P}, \textbf{F}_0, \textbf{G}_0$).}\label{A:alternating}
\begin{algorithmic}[1]
\REQUIRE {The $n \times n$ matrix $\textbf{P}$, the $n \times r$ matrix $\textbf{F}_0$, and the fixed $n \times r$ matrix $\textbf{G}_0$}
\ENSURE {The updated matrix $\textbf{F}_1$ of $\textbf{F}_0$}
\STATE $\textbf{F}_1$ $\leftarrow$ $\textbf{F}_0$;
\FOR {i = 1 : n}
\STATE $\textbf{a}$ $\leftarrow$ the vector of column indices of existing elements in $\textbf{P}(i,j)~(j=1,2,...,n)$;
\STATE column vector $\textbf{d}$ $\leftarrow$ $\textbf{0}_{|\textbf{a}| \times 1}$;
\STATE matrix $\textbf{G}_1$ $\leftarrow$ $\textbf{0}_{|\textbf{a}| \times r}$;
\FOR {j = 1: $|\textbf{a}|$}
\STATE $\textbf{d}(j)$ $\leftarrow$ $\textbf{P}(i,\textbf{a}(j))$;
\STATE $\textbf{G}_1(j,:)$ $\leftarrow$ $\textbf{G}_0(\textbf{a}(j), :)$;
\ENDFOR
\STATE $\textbf{F}_1(i,:)$ $\leftarrow$ $(\textbf{G}'_1 \textbf{G}_1 + \lambda \cdot \textbf{I}_{r \times r})^{-1} \textbf{G}'_1 \textbf{d}$;
\ENDFOR
\RETURN $\textbf{F}_1$;
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{A:alternating} presents the overall solution for updating the trustor matrix $\mat{F}_0$. Based on Algorithm~\ref{A:alternating}, we present Algorithm~\ref{A:factorization} to alternately update the trustor and trustee matrices $\textbf{F}_0$ and $\textbf{G}_0$. The algorithm first randomly generates two $n \times r$ matrices for $\textbf{F}_0$ and $\textbf{G}_0$. At each iteration, the algorithm then alternately calls Algorithm~\ref{A:alternating} to update the two matrices. The iteration ends when the stopping criteria are met, i.e., either the $L_2$ norm between successive estimates of both $\textbf{F}_0$ and $\textbf{G}_0$ is below the threshold $\xi_2$ or the maximum iteration number $m_2$ is reached.
\begin{algorithm}[t]
\caption{updateMatrix(\textbf{P}, $r$).}\label{A:factorization}
\begin{algorithmic}[1]
\REQUIRE {The $n \times n$ matrix $\textbf{P}$, and the latent factor size $r$}
\ENSURE {The $n \times r$ trustor matrix $\textbf{F}_0$, and the $n \times r$ trustee matrix $\textbf{G}_0$}
\STATE generate the $n \times r$ matrices $\textbf{F}_0$ and $\textbf{G}_0$ randomly;
\WHILE {not convergent}
\STATE $\textbf{F}_0$ $\leftarrow$ alternateUpdate($\textbf{P}$, $\textbf{F}_0$, $\textbf{G}_0$);
\STATE $\textbf{G}_0$ $\leftarrow$ alternateUpdate($\textbf{P}'$, $\textbf{G}_0$, $\textbf{F}_0$);
\ENDWHILE
\RETURN [$\textbf{F}_0$, $\textbf{G}_0$];
\end{algorithmic}
\end{algorithm}
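As a toy illustration of the alternating procedure, the sketch below runs the updates of Algorithm~\ref{A:factorization} for the special case $r=1$ and a fully observed $\textbf{P}$, where the ridge update for row $i$ has the closed form $\textbf{F}_0(i)=\sum_j \textbf{P}(i,j)\textbf{G}_0(j)/(\sum_j \textbf{G}_0(j)^2+\lambda)$ (all names are ours; this is a sketch rather than the full sparse implementation, and the stopping rule is simplified to a fixed number of sweeps):

```python
def als_rank1(P, lam=0.0, sweeps=5):
    """Alternating least squares for P ~ f g' with a single latent factor."""
    n = len(P)
    f = [1.0] * n
    g = [1.0] * n
    for _ in range(sweeps):
        # update f with g fixed (closed-form ridge solution for r = 1)
        gg = sum(v * v for v in g) + lam
        f = [sum(P[i][j] * g[j] for j in range(n)) / gg for i in range(n)]
        # update g with f fixed
        ff = sum(v * v for v in f) + lam
        g = [sum(P[i][j] * f[i] for i in range(n)) / ff for j in range(n)]
    return f, g
```

For a rank-one $\textbf{P}$ the product $f_i g_j$ reproduces $\textbf{P}(i,j)$ after a few sweeps.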
\section{Proofs for Lemmas}
Next, we present the proofs for the lemmas in Section~\ref{sec:algorithmanalysis}.
\noindent \textbf{Proof Sketch for Lemma~\ref{L:effectiveness}:} (P1) First, Eq.~\eqref{E:optfline} is convex and therefore Step 10 in Algorithm~\ref{A:alternating} finds the global optimum for updating a single row in the matrix $\mat{F}_0$. Notice that the set of optimization problems in Eq.~\eqref{E:optfline} is equivalent to the problem in Eq.~\eqref{E:optf}, and thus we have proved that Algorithm~\ref{A:alternating} finds the globally optimal solution for the optimization problem in Eq.~\eqref{E:optf}.
(P2) Next, based on (P1) and the alternating procedure in Algorithm~\ref{A:factorization}, we have that Algorithm~\ref{A:factorization} finds a local minimum for the optimization problem in Eq.~\eqref{E:optsimple}.
(P3) Finally, based on (P2) and the alternating procedure in Algorithm~\ref{A:matrust}, Lemma~\ref{L:effectiveness} holds. \hfill $\Box$ \hfill
\noindent \textbf{Proof of Lemma~\ref{L:efficiency}:} (P1) In Algorithm~\ref{A:alternating}, the time cost for Step~1 is $O(n r)$. Let $a_i$ denote the number of elements in $\textbf{a}$ at the $i^{th}$ iteration. The time cost for Step~3-5 is then $O(a_i r)$ since we store $\textbf{P}$ in sparse format. We need another $O(a_i r)$ time in the inner iteration (Step~6-9). The time cost of Step~10 is $O(a_i r^2 + r^2 + r^3 + a_i r^2 + a_i r + r) = O(r^3+a_i r^2)$. Therefore, the total time cost for the algorithm is $O(n r) + O(\sum_i (r^3 + a_i r^2)) = O(nr^3 + |\mathcal{K}| r^2)$ since $\sum_i a_i = |\mathcal{K}|$.
(P2) In Algorithm~\ref{A:factorization}, the time cost for Step~1 is $O(n r)$. As indicated by (P1), we need $O(n r^3 + |\mathcal{K}| r^2)$ time for both Step~3 and Step~4. The total time cost is $O(n r^3 m_2 + |\mathcal{K}| r^2 m_2)$.
(P3) In Algorithm~\ref{A:matrust}, the time cost for Step~1 is $O(|\mathcal{K}|)$ as we store $\textbf{T}$ in sparse format. Step~2 needs $O(1)$ time. We need $O(|\mathcal{K}|)$ time for Step~4-6. As indicated by (P2), we need $O(n r^3 m_2 + |\mathcal{K}| r^2 m_2)$ time for Step~7. We need $O(|\mathcal{K}| r)$ time for Step~8-10. As for updating the coefficients, we need $O(|\mathcal{K}| c^2 + c^3)$ time where $c$ is the number of specified factors, which is $3$ in our case. Therefore, the time cost for Step~11 is $O(|\mathcal{K}|)$. The total time cost is $O(n r^3 m_1 m_2 + |\mathcal{K}| r^2 m_1 m_2)$, which completes the proof. \hfill $\Box$ \hfill
\noindent \textbf{Proof of Lemma~\ref{L:efficiency2}:} (P1) In Algorithm~\ref{A:alternating}, we need $O(nr)$ space for Step~1 and $O(1)$ space for Step~2. We need another $O(nr)$ space for Step 3-5. For Step 6-9 we only need $O(1)$ space. We need $O(nr + r^2)$ space for Step~10. Among the different iterations of the algorithm, we can re-use the space from the previous iteration. Finally, the overall space cost is $O(|\mathcal{K}| + nr + r^2)$.
(P2) In Algorithm~\ref{A:factorization}, we need $O(nr)$ space for Step~1. Step~3 and Step~4 need $O(|\mathcal{K}| + nr + r^2)$ space. The space for each iteration can be re-used. The total space cost is $O(|\mathcal{K}| + nr + r^2)$.
(P3) In Algorithm~\ref{A:matrust}, we need $O(|\mathcal{K}|)$ space for the input since we store $\textbf{T}$ in sparse format. We need $O(n)$ space for Step~1 and $O(1)$ space for the Step~2. We need another $O(|\mathcal{K}|)$ space for Step~4-6. By (P2), Step~7 needs $O(|\mathcal{K}| + nr + r^2)$ space. Step 8-10 can re-use the space from Step 4-6. Step~11 needs $O(|\mathcal{K}|)$ space. For each iteration, the space can be re-used. The total space cost of Algorithm~\ref{A:matrust} is $O(|\mathcal{K}| + nr + r^2)$, which completes the proof. \hfill $\Box$ \hfill
\subsection{The MaTrust Algorithm}
Unfortunately, the optimization problem in Eq.~\eqref{E:optfinal} is not jointly convex wrt the coefficients ($\alpha_1$, $\alpha_2$, and $\alpha_3$) and the trustor/trustee matrices ($\textbf{F}_0$ and $\textbf{G}_0$) due to the coupling between them. Therefore, instead of seeking a global optimal solution, we try to find a local minimum by alternately updating the coefficients and the trustor/trustee matrices, fixing one while updating the other.
The alternating procedure leads to a local optimum when the convergence criteria are met, i.e., either the $L_2$ norm between successive estimates of both $\textbf{F}$ and $\textbf{G}$ (which are equivalent to $\alpha_1$, $\alpha_2$, $\alpha_3$, $\textbf{F}_0$, and $\textbf{G}_0$) is below the threshold $\xi_1$ or the maximum iteration number $m_1$ is reached.
\subsubsection{Sub-routine 1: updating the trustor/trustee matrices}
First, let us consider how to update the trustor/trustee matrices ($\textbf{F}_0$ and $\textbf{G}_0$) when we fix the coefficients ($\alpha_1$, $\alpha_2$, and $\alpha_3$). For clarity, we define an $n \times n$ matrix $\textbf{P}$ as follows: \begin{equation}\label{E:t2p}
\textbf{P}(i,j) = \left\{ \begin{array}{ll}
\textbf{T}(i,j) - (\alpha_1 \mu + \alpha_2 \textbf{x}(i) + \alpha_3 \textbf{y}(j)) & \textrm{if $(i,j)\in {\cal {K}}$}\\
\textrm{`?'} & \textrm{otherwise}
\end{array} \right.
\end{equation}
where $\alpha_1$, $\alpha_2$, and $\alpha_3$ are some fixed constants.
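In code, forming $\textbf{P}$ amounts to subtracting the weighted bias terms from the observed entries only. A minimal sketch, assuming the observed entries of $\textbf{T}$ are stored in a Python dict keyed by $(i,j)$ (an implementation choice of ours, not specified in the paper), reads:

```python
def bias_residual(T_obs, mu, x, y, a1, a2, a3):
    """Eq. (t2p): P(i,j) = T(i,j) - (a1*mu + a2*x[i] + a3*y[j]) on the
    observed pairs; unobserved entries ('?') are simply absent."""
    return {(i, j): t - (a1 * mu + a2 * x[i] + a3 * y[j])
            for (i, j), t in T_obs.items()}
```

Keeping only the observed pairs preserves the sparse storage of $\textbf{T}$.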
Based on the above definition, Eq.~\eqref{E:optfinal} can be simplified (by ignoring some constant terms) as: \begin{equation}\label{E:optsimple}
\min_{\textbf{F}_0,\textbf{G}_0} \sum_{(i,j)\in \mathcal{K}} (\textbf{P}(i,j) - \textbf{F}_0(i,:)\textbf{G}_0(j,:)')^2 + \lambda ||\textbf{F}_0||_{fro}^2 + \lambda ||\textbf{G}_0||_{fro}^2
\end{equation}
Therefore, updating the trustor/trustee matrices with the coefficients fixed becomes a standard matrix factorization problem with missing values. Many existing algorithms (e.g.,~\cite{koren2009matrix,ma2009learning,buchanan2005damped}) can be plugged in to solve Eq.~\eqref{E:optsimple}. In our experiments, we found that the so-called alternating strategy, where we recursively update one of the two trustor/trustee matrices while keeping the other fixed, works best, and we thus recommend it in practice. A brief skeleton of the algorithm is shown in Algorithm~\ref{A:skeleton}, and the detailed algorithms are presented in the appendix for completeness.
\begin{algorithm}[t]
\caption{updateMatrix(\textbf{P}, $r$). (See the appendix for the details)}\label{A:skeleton}
\begin{algorithmic}[1]
\REQUIRE {The $n \times n$ matrix $\textbf{P}$, and the latent factor size $r$}
\ENSURE {The $n \times r$ trustor matrix $\textbf{F}_0$, and the $n \times r$ trustee matrix $\textbf{G}_0$}
\STATE [$\textbf{F}_0$, $\textbf{G}_0$] = alternatingFactorization(\textbf{P}, $r$);
\RETURN [$\textbf{F}_0$, $\textbf{G}_0$];
\end{algorithmic}
\end{algorithm}
\subsubsection{Sub-routine 2: updating the coefficients}
Here, we consider how to update the coefficients ($\alpha_1$, $\alpha_2$, and $\alpha_3$) when the trustor/trustee matrices are fixed.
Fixing the trustor and trustee matrices ($\mat{F}_0$ and $\mat{G}_0$), let: \begin{equation}\label{E:t2p2}
\textbf{P}(i,j) = \left\{ \begin{array}{ll}
\textbf{T}(i,j) - \textbf{F}_0(i,:)\textbf{G}_0(j,:)' & \textrm{if $(i,j)\in {\cal {K}}$}\\
\textrm{`?'} & \textrm{otherwise}
\end{array} \right.
\end{equation}
Eq.~\eqref{E:optfinal} can then be simplified (by dropping constant terms) as: \begin{eqnarray}\label{E:optsimplecoeff}
\min_{\alpha_1, \alpha_2, \alpha_3} \sum_{(i,j)\in \mathcal{K}} (\textbf{P}(i,j) - (\alpha_1 \mu + \alpha_2 \mat{x}(i) + \alpha_3 \mat{y}(j)))^2 + n\lambda\sum_{i=1}^3 \alpha_i^2
\end{eqnarray}
To simplify the description, let us introduce another scalar $k$ to index each pair $(i,j)$ in the observed trustor-trustee pairs $\cal K$, that is, $(i,j)\in{\cal{K}}\rightarrow k\in\{1,2,...,|{\cal{K}}|\}$. Let $\mat b$ denote a vector of length $|\mathcal{K}|$ with $\mat{b}(k)=\textbf{P}(i,j)$. We also define a $|\mathcal{K}|\times 3$ matrix $\mat A$ as: $\textbf{A}(k,1) = \mu$, $\textbf{A}(k,2) = \textbf{x}(i)$, $\textbf{A}(k,3) = \textbf{y}(j)$ $(k=1,2,...,|\mathcal{K}|)$; and a $3 \times 1$ vector $\mat{\alpha}=[\alpha_1, \alpha_2, \alpha_3]'$. Then, Eq.~\eqref{E:optsimplecoeff} can be formulated as a ridge regression problem wrt the vector $\mat{\alpha}$:\begin{eqnarray}
\min_{\mat{\alpha}} ||\mat{b} - \mat{A}\mat{\alpha}||_2^2 + n\lambda\|\mat{\alpha}\|^2
\end{eqnarray}
In practice, we shrink the regularization parameter in the above equation from $n\lambda$ to $\lambda$ to strengthen the importance of bias. Therefore, we can update the coefficients as: \begin{equation}\label{E:linearregression}
\mat{\alpha}=[\alpha_1, \alpha_2, \alpha_3]' = (\textbf{A}' \textbf{A} + \lambda\mat{I}_{3 \times 3})^{-1} \textbf{A}' \textbf{b}
\end{equation}
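The coefficient update in Eq.~\eqref{E:linearregression} is a $3\times 3$ ridge regression; a self-contained sketch (function name ours, with the small normal-equation system solved by Gauss-Jordan elimination) is:

```python
def update_coefficients(P_obs, mu, x, y, lam):
    """Solve (A'A + lam I) alpha = A'b for alpha = [a1, a2, a3]', where
    each observed pair (i, j) contributes a row [mu, x[i], y[j]] of A and
    an entry P(i, j) of b."""
    AtA = [[lam if a == b else 0.0 for b in range(3)] for a in range(3)]
    Atb = [0.0, 0.0, 0.0]
    for (i, j), val in P_obs.items():
        row = [mu, x[i], y[j]]
        for a in range(3):
            Atb[a] += row[a] * val
            for b in range(3):
                AtA[a][b] += row[a] * row[b]
    M = [AtA[a] + [Atb[a]] for a in range(3)]  # augmented 3x4 system
    for c in range(3):                          # Gauss-Jordan elimination
        piv = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(3):
            if r != c:
                fac = M[r][c] / M[c][c]
                M[r] = [mr - fac * mc for mr, mc in zip(M[r], M[c])]
    return [M[a][3] / M[a][a] for a in range(3)]
```

Note that $\textbf{A}'\textbf{A}$ and $\textbf{A}'\textbf{b}$ are accumulated in one pass over $\mathcal{K}$, matching the $O(|\mathcal{K}| c^2 + c^3)$ cost quoted in the analysis.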
\subsubsection{Putting everything together: MaTrust}
\begin{algorithm}[t]
\caption{MaTrust($\textbf{T}$, $\mathcal{K}$, $r$, $u$, $v$).}\label{A:matrust}
\begin{algorithmic}[1]
\REQUIRE {The $n \times n$ partially observed trust matrix $\textbf{T}$, the set of observed trustor-trustee pairs $\mathcal{K}$, the latent factor size $r$, trustor $u$, and trustee $v$}
\ENSURE {The estimated trustworthiness score $\mat{\hat T}(u,v)$}
\STATE [$\mu$, \textbf{x}, \textbf{y}] $\leftarrow$ computeBias($\textbf{T}$);
\STATE initialize $\alpha_1 = \alpha_2 = \alpha_3 = 1$;
\WHILE {not convergent}
\FOR {each $(i,j) \in \mathcal{K}$}
\STATE $\textbf{P}(i,j)$ $\leftarrow$ $\textbf{T}(i,j) - (\alpha_1 \mu + \alpha_2 \textbf{x}(i) + \alpha_3 \textbf{y}(j))$;
\ENDFOR
\STATE [$\textbf{F}_0$, $\textbf{G}_0$] = updateMatrix($\textbf{P}$, $r$);
\FOR {each $(i,j) \in \mathcal{K}$}
\STATE $\textbf{P}(i,j)$ $\leftarrow$ $\textbf{T}(i,j) - \textbf{F}_0(i,:) \textbf{G}_0(j,:)'$;
\ENDFOR
\STATE $[\alpha_1, \alpha_2, \alpha_3]'$ = updateCoefficient($\textbf{P}$, $\mu$, \textbf{x}, \textbf{y});
\ENDWHILE
\RETURN $\mat{\hat T}(u,v) \leftarrow \textbf{F}_0(u,:)\textbf{G}_0(v,:)' + \alpha_1 \mu + \alpha_2 \textbf{x}(u) + \alpha_3 \textbf{y}(v)$;
\end{algorithmic}
\end{algorithm}
Putting everything together, we propose Algorithm~\ref{A:matrust} for our MaTrust trust inference problem. The algorithm first uses Eq.~\eqref{E:bias} to compute the global bias, trustor bias, and trustee bias (Step~1), and initializes the coefficients (Step~2). Then the algorithm begins the alternating procedure (Step~3-12). First, it fixes $\alpha_1$, $\alpha_2$, and $\alpha_3$, and applies Eq.~\eqref{E:t2p} to incorporate bias. After that, the algorithm invokes Algorithm~\ref{A:skeleton} to update the trustor matrix $\textbf{F}_0$ and trustee matrix $\textbf{G}_0$. Next, the algorithm fixes $\textbf{F}_0$ and $\textbf{G}_0$, and uses ridge regression in Eq.~\eqref{E:linearregression} to update $\alpha_1$, $\alpha_2$, and $\alpha_3$. The alternating procedure ends when the stopping criteria of Eq.~\eqref{E:optfinal} are met. Finally, the algorithm outputs the estimated trustworthiness from the given trustor $u$ to the trustee $v$ using Eq.~\eqref{E:onlineinfer} (Step~13).
It is worth pointing out that Step 1-12 in the algorithm can be pre-computed and their results ($\textbf{F}_0$, $\textbf{G}_0$, $\alpha_1$, $\alpha_2$, and $\alpha_3$) stored in the off-line/pre-computational stage. When an on-line trust inference request arrives, MaTrust only needs to apply Step 13 to return the inference result, which requires only constant time.
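The on-line step of Eq.~\eqref{E:onlineinfer} is just a length-$r$ dot product plus the three weighted bias terms; as a sketch (names ours):

```python
def infer(u, v, F0, G0, alphas, mu, x, y):
    """Constant-time (O(r)) on-line step: latent dot product plus the
    weighted global/trustor/trustee bias terms."""
    a1, a2, a3 = alphas
    latent = sum(fu * gv for fu, gv in zip(F0[u], G0[v]))
    return latent + a1 * mu + a2 * x[u] + a3 * y[v]
```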
\subsection{Algorithm Analysis}\label{sec:algorithmanalysis}
Here, we briefly analyze the effectiveness and efficiency of our MaTrust algorithm and provide detailed proofs in the appendix.
The effectiveness of the proposed MaTrust algorithm is summarized in Lemma~\ref{L:effectiveness}, which says that, overall, it finds a local minimum. Given that the original optimization problem in Eq.~\eqref{E:optfinal} is not jointly convex wrt the coefficients ($\alpha_1$, $\alpha_2$, and $\alpha_3$) and the trustor/trustee matrices ($\textbf{F}_0$ and $\textbf{G}_0$), such a local minimum is acceptable in practice.
\begin{lemma}{\bf {Effectiveness of MaTrust}}.\label{L:effectiveness}
Algorithm~\ref{A:matrust} finds a local minimum for the optimization problem in Eq.~\eqref{E:optfinal}.
\end{lemma}
\proof See the Appendix \hfill $\Box$ \hfill
The time complexity of the proposed MaTrust is summarized in Lemma~\ref{L:efficiency}, which says that MaTrust scales {\em linearly} wrt the number of users and the number of observed trustor-trustee pairs.
\begin{lemma}{\bf {Time Complexity of MaTrust}}.\label{L:efficiency}
Algorithm~\ref{A:matrust} requires $O(nr^3m_1m_2 + |\mathcal{K}|r^2m_1m_2)$ time, where $m_1$ and $m_2$ are the maximum iteration numbers in Algorithm~\ref{A:matrust} and Algorithm~\ref{A:skeleton}, respectively.
\end{lemma}
\proof See the Appendix \hfill $\Box$ \hfill
The space complexity of MaTrust is summarized in Lemma~\ref{L:efficiency2}, which says that MaTrust requires {\em linear} space wrt the number of users and the number of observed trustor-trustee pairs.
\begin{lemma}{\bf {Space Complexity of MaTrust}}.\label{L:efficiency2}
Algorithm~\ref{A:matrust} requires $O(|\mathcal{K}| + nr + r^2)$ space.
\end{lemma}
\proof See the Appendix \hfill $\Box$ \hfill
Notice that for both time complexity and space complexity, we have a {\em polynomial} term wrt the number of latent factors $r$. In practice, this parameter is small compared with the number of users ($n$) or the number of observed trustor-trustee pairs ($|\mathcal{K}|$). For example, in our experiments, we did not observe significant performance improvement when the number of latent factors was larger than $10$ (see the next section for the detailed evaluations).
\section{Introduction}
\input{001intro}
\section{Related Work}\label{sec:relatedwork}
\input{005rel}
\section{Problem Definition}
\input{002probdef}
\section{The Proposed Optimization Formulation}
\input{003matrust}
\section{The Proposed MaTrust Algorithm}
\input{035alg}
\section{Experimental Evaluation}\label{sec:exp}
\input{004exp}
\section{Conclusion}
In this paper, we have proposed an effective multi-aspect trust inference model (MaTrust). The key idea of MaTrust is to characterize several aspects/factors for each trustor and trustee based on the existing trust relationships. The proposed MaTrust can naturally incorporate prior knowledge such as trust bias by expressing it as specified factors. In addition, MaTrust scales linearly wrt the input data size (e.g., the number of users, the number of observed trustor-trustee pairs, etc.). Our experimental evaluations on real data sets show that trust bias can truly improve the inference accuracy, and that MaTrust significantly outperforms existing benchmark trust inference models in both effectiveness and efficiency. Future work includes investigating the capability of MaTrust to address distrust as well as trust dynamics.
\bibliographystyle{abbrv}
\section{Introduction}
It has long been known that interactions can have drastic effects in low dimensional systems \cite{TG}. A striking example of this was elucidated by Kane and Fisher \cite{KF}. It was shown that a local impurity can be a relevant or irrelevant perturbation to a Luttinger liquid depending on the sign of the interaction in the liquid. For repulsive interactions amongst the fermions the strength of the impurity will grow at low energy and the one dimensional system will be split into two Luttinger liquids weakly coupled at their edges by a tunnelling term (Weak-Tunnelling Hamiltonian), while for attractive interactions the strength of the impurity will decrease and the system will heal itself. Hence one finds a vanishing conductance at the impurity site at low temperature in the first case and a perfect conductance in the second.
This has implications for many experimentally realisable quantum systems, amongst them chiral edge states of Quantum Hall materials \cite{LLRMP} and electronic quantum circuits \cite{circuit}. More exciting perhaps is the possibility to realise such a system with cold atomic gases \cite{IBRMP}. The measure of control afforded by these experiments, in addition to the ability to tune parameters including the interaction strength, makes this the perfect setting to study the effects of interactions on a localised impurity. Isolated one dimensional systems are readily achievable, and recent advances have made it possible to study transport, albeit with two-dimensional leads \cite{Coldatom,esslinger}.
In such isolated quantum systems, integrability also has a large effect. The existence of a large number of conserved quantities strongly constrains the dynamics \cite{NC} and will have implications for transport.
In this article we introduce a new type of coordinate Bethe Ansatz for use in quantum impurity models with bulk interaction. We present the method by solving exactly the Kane-Fisher model of an impurity in a Luttinger liquid with arbitrary boundary conditions. The method uses a scattering Bethe basis which incorporates the impurity scattering processes that lead to a varying number of left and right movers. The boundary condition problem leads to a Quantum Inverse Scattering problem which is in turn solved using the Off Diagonal Bethe Ansatz (ODBA) \cite{ODBA} approach of deriving the Bethe Ansatz equations. It has the advantage that it does not require an explicit reference state and so is suited to problems where one is absent, as is the case in the present model. Incorporating twisted boundary conditions, which is physically equivalent to driving a persistent current around the system, allows for the possibility of studying transport across the impurity.
We also study the Weak-Tunnelling Hamiltonian describing two separate Luttinger liquids coupled via a tunnelling parameter. The model is of great interest in its own right and is thought to describe the strong coupling fixed point of the Kane-Fisher model. We find that the Weak-Tunnelling Hamiltonian is solvable by the same procedure, requiring only simple modifications, and show that it is dual to the impurity model.
The rest of the article is organised as follows: In section II we introduce the scattering Bethe basis which incorporates the impurity's selecting-scattering mechanism and prove its consistency by introducing a generalization of the Yang-Baxter and reflection equations. In section III we provide a similar construction for the Weak-Tunnelling Hamiltonian.
The spectrum of the model is found in section IV. The system of Bethe Ansatz equations is shown to be formally similar to that of the open XXZ model with boundary terms. One diagonal boundary corresponds to the twist and the other describes the impurity. Using the ODBA we are able to obtain the eigenvalues and Bethe equations. The thermodynamics of the model are discussed in section V where we calculate the free energy and specific heat of the impurity as well as the difference in the impurity entropy in the UV and IR when interactions are repulsive. The Weak-Tunnelling Hamiltonian is examined, its complementarity with the Kane-Fisher model is shown and the thermodynamics in the attractive regime briefly discussed.
\section{Bethe Basis of the impurity-Luttinger model}
The Hamiltonian of the impurity model we seek to diagonalise is $H=H_{k}+H_{g}+H_I$ with the various terms given by,
\begin{eqnarray}\label{H}
H_{k}&=&\sum_{\sigma=\pm,a=\uparrow,\downarrow}\int \sigma\psi^\dag_{\sigma,a}\left(-i\partial_x- \cal{A} \right)\psi_{\sigma,a}(x),\\
H_g&=&\sum_{a,b}4g\int \psi_{+,a}^\dag\psi^\dag_{-,b}\psi_{-,b}\psi_{+,a},\\\nonumber
H_I&=&\sum_aU \left[\psi^\dag_{+,a}(0)\psi_{-,a}(0)+\psi_{-,a}^\dag (0)\psi_{+,a} (0)\right]\\
&&+U' \left[\psi^\dag_{+,a}(0)\psi_{+,a}(0)+\psi_{-,a}^\dag (0)\psi_{-,a} (0)\right].
\end{eqnarray}
Here $\psi^\dag_{\pm,a},~\psi_{\pm,a}$ with $a=\uparrow,\downarrow$ are creation and annihilation operators for the right ($+$) and left ($-$) moving fermions with spin, $U'$ and $U$ describe the forward and backward scattering off the impurity respectively, and $g$ is the fermion-fermion interaction strength. We have set $v_f=1$ and $\epsilon_f=0$. In addition we have included a gauge field $\mathcal{A}$ which, when the system is placed on a ring, means it is threaded by a flux $\Phi=\int_x \mathcal{A}$. Equivalently we may solve for the wavefunction with twisted boundary conditions. This will induce a persistent current throughout the system and allow the effect of the impurity on the current to be studied. Since we have chosen the interaction to be isotropic in spin we will leave these indices implicit in what follows.
To begin we discuss the construction of the eigenfunctions of $H$. In the presence of the impurity only the total number of fermions $N=N_++N_-$ is conserved, hence the wave functions must
consist of components of left and right movers consistent with $N$. We start with the single particle eigenstates, the most general form for which can be written as
\begin{eqnarray}\nonumber
\int \mathrm{d}x \left[\left(e^{ikx}A^{[10]}_+\psi^\dag_+(x)+e^{-ikx}A^{[10]}_-\psi^\dag_-(x)\right)\theta(-x)\right.\\\label{n1}
\left.+\left(e^{ikx}A^{[01]}_+\psi^\dag_+(x)+e^{-ikx}A^{[01]}_-\psi^\dag_-(x)\right)\theta(x)\right]\ket{0}.
\end{eqnarray}
Applying the Hamiltonian to the wave function fixes two of these amplitudes $A_\pm^{[\cdot\cdot]}$. Here we wish to take a physical picture and define an $S^{10}$ which maps a particle past the impurity. This is in contrast to what is standard in the Bethe ansatz, where the S-matrix maps between regions of configuration space to the left and right of the impurity. Therefore we consider $A_+^{[10]}$ and $A_-^{[01]}$ as the incoming amplitudes and $A_-^{[10]}$ and $A_+^{[01]}$ as the outgoing ones. The solution of the Schr\"odinger equation relates the two sets via
\begin{eqnarray}\label{s}
\begin{pmatrix}
A^{[01]}_+\\
A^{[10]}_-
\end{pmatrix}=S
\begin{pmatrix}
A^{[10]}_+\\
A^{[01]}_-
\end{pmatrix},~~
S=\begin{pmatrix}
\alpha && \beta\\
\beta &&\alpha
\end{pmatrix},\\~~~~\alpha=\frac{1-U^2/4+U'^2/4}{1+i U'+U^2/4-U'^2/4},\\
\beta=\frac{-iU}{1+i U'+U^2/4-U'^2/4}.
\end{eqnarray}
We recognise $\alpha$ and $\beta$ as the transmission and reflection coefficients respectively and note the unimportant role of the forward scattering term. Its presence merely redefines these coefficients but does not change the left-right mixing imposed by the backward scattering term. In what follows we set $U'=0$.
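As a quick consistency check, for $U'=0$ the matrix $S$ is unitary for any backscattering strength $U$, i.e.\ $|\alpha|^2+|\beta|^2=1$ and $\alpha\bar\beta+\beta\bar\alpha=0$. A small numerical sketch (names ours):

```python
def s_matrix_elements(U):
    """Transmission (alpha) and reflection (beta) amplitudes of the
    impurity S-matrix for U' = 0."""
    den = 1.0 + U * U / 4.0
    alpha = (1.0 - U * U / 4.0) / den
    beta = -1j * U / den
    return alpha, beta
```

For any $U$ the sum $|\alpha|^2+|\beta|^2$ evaluates to $1$ up to rounding, confirming current conservation at the impurity.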
The form in which we have written the above equation allows us to easily apply periodic or twisted boundary conditions,
\begin{equation}\label{PBc}
e^{-ikL}\begin{pmatrix}
A^{[10]}_+\\
A^{[01]}_-
\end{pmatrix}=\begin{pmatrix}
e^{i\Phi} && 0\\
0 &&e^{-i\Phi}
\end{pmatrix}
S
\begin{pmatrix}
A^{[10]}_+\\
A^{[01]}_-
\end{pmatrix}.
\end{equation}
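For $U'=0$ one checks that $\det\big[\mathrm{diag}(e^{i\Phi},e^{-i\Phi})S\big]=\alpha^2-\beta^2=1$ while the trace equals $2\alpha\cos\Phi$, so the allowed momenta satisfy $\cos(kL)=\alpha\cos\Phi$. The following numerical sketch of this observation (our illustration, not taken from the text) computes the eigenvalues of the monodromy matrix and checks that they lie on the unit circle:

```python
import cmath

def monodromy_eigenvalues(U, Phi):
    """Eigenvalues of M = diag(e^{i Phi}, e^{-i Phi}) S for U' = 0; the
    allowed values of e^{-ikL} are exactly these eigenvalues."""
    den = 1.0 + U * U / 4.0
    alpha = (1.0 - U * U / 4.0) / den
    beta = -1j * U / den
    M = [[cmath.exp(1j * Phi) * alpha, cmath.exp(1j * Phi) * beta],
         [cmath.exp(-1j * Phi) * beta, cmath.exp(-1j * Phi) * alpha]]
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = cmath.sqrt(tr * tr / 4.0 - det)  # det = 1 here
    return tr / 2.0 + disc, tr / 2.0 - disc
```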
We now proceed to the two particle case. The interaction term $H_g$ couples left movers to right movers and, unlike the impurity term, leaves their numbers unchanged. Thus in the absence of the impurity a state consisting of one left mover and one right mover takes the form $|F^{L,R}\rangle = \int dx \, dy\, F(x,y)
\psi^{\dagger}_+(x) \psi^{\dagger}_-(y) |0\rangle$, where the wave function $F(x,y)$ must satisfy the eigenvalue equation,
\begin{eqnarray}\nonumber
[-i(\partial_x - \partial_y) + 4 g \delta(x-y)] F(x,y)= EF(x,y)
\end{eqnarray}
The solution is easily found to be
\begin{eqnarray}\nonumber
F(x,y)= A e^{ik_1x-ik_2 y}[\theta(x-y) +e^{i\phi} \theta(y-x)]
\end{eqnarray}
with the scattering phase shift given by
\begin{eqnarray}\nonumber
e^{i\phi}=\frac{1-ig}{1+ig}.
\end{eqnarray}
For the scattering of two right movers or two left movers the phase shift is actually undetermined by the Schr\"odinger equation; we choose it to be $e^{i\phi_{++} }= e^{i\phi_{--}} =1$.
As seen in the single particle case, the impurity mixes the left and right movers. A non-interacting model could therefore be handled by utilising an odd-even basis $\psi_{e/o}(x)=(\psi_+(x) \pm \psi_-(-x))/\sqrt2$. However doing so for the full model will only serve to complicate the interaction term. On the other hand, in the absence of the impurity the left-right basis is appropriate. To diagonalise both we need to use a basis which naturally incorporates both aspects; we will refer to it as an in-out scattering Bethe basis.
To construct it we divide configuration space into 8 regions, to be labelled $Q$, which are specified not only by the ordering of $x_1$, $x_2$ and the impurity but also according to which position is closer to the origin. For example, if $x_1$ is to the left of the impurity and $x_2$ to its right, with $x_2$ closer to the impurity, then the amplitude in this region is denoted $A^{[102B]}_{\sigma_1\sigma_2}$, $\sigma_j=\pm$ being the chirality of the particle at $x_j$. The region in which $x_1$ is closer is denoted $A^{[102A]}_{\sigma_1\sigma_2}$. The consequence for the wavefunction is that we include Heaviside functions $\theta (x_Q)$ which have support only in a certain region, e.g. $\theta(x_{[102B]})=\theta(x_2)\theta(-x_1)\theta (-x_1-x_2)$. A general two particle eigenstate of $H$ can be written as,
\begin{eqnarray}\nonumber
\ket{k_1,k_2}=\sum_Q\sum_{\sigma_1\sigma_2}\int\theta (x_Q)A_{\sigma_1\sigma_2}^{Q}e^{\sigma_1ik_1x_1+\sigma_2ik_2x_2}\\
\times\psi^\dag_{\sigma_1}(x_1)\psi^\dag_{\sigma_2}(x_2)\ket{0}.
\end{eqnarray}
The form of this wavefunction requires some comment. The linear derivative acts as $\pm i(\partial_1-\partial_2)$ when the particles are of opposite chirality and as $\pm i(\partial_1+\partial_2)$ when they have the same chirality. This allows us to introduce an arbitrary function of $x_1\mp x_2$ when the particles are of the same or opposite chirality. Accordingly, applying the Hamiltonian to this ansatz fixes some but not all of the amplitudes. In particular, when switching between the regions weighted by $\theta(\pm(x_1-x_2))$ in the $\sigma_1=\sigma_2$ sector and $\theta(\pm(x_1+x_2))$ in the $\sigma_1=-\sigma_2$ sector, the linear derivative allows us to choose any S-matrix we like provided it does not mix the $\sigma_1=\sigma_2$ with the $\sigma_1=-\sigma_2$ amplitudes \footnote{This procedure is actually very natural and is required whenever a degenerate level is perturbed. In our case, the energy level $k_1+k_2$ is degenerate with $(k_1+q)+(k_2-q)$ for any $q$. Thus, as degenerate perturbation theory requires, an appropriate basis in the degenerate subspace needs to be found in which the perturbation can be turned on. This corresponds to the consistent choice of the S-matrices, as described}. The specific form of this additional S-matrix is dictated by the requirement that the wavefunction be consistent. Typically this would require the S-matrices to be solutions of the Yang-Baxter equation, but here the different configuration space set-up will modify this and lead to a generalised Yang-Baxter relation. To make these statements more explicit, let us form column vectors of the amplitudes,
\begin{eqnarray}
\centering\nonumber
&&\vec{A}_1=\begin{pmatrix}
A_{++}^{[120B]}\\
A_{+-}^{[102B]}\\
A_{-+}^{[201B]}\\
A_{--}^{[021B]}
\end{pmatrix}~\vec{A}_2=\begin{pmatrix}
A_{++}^{[210A]}\\
A_{+-}^{[102A]}\\
A_{-+}^{[201A]}\\
A_{--}^{[012A]}
\end{pmatrix}~\vec{A}_3=\begin{pmatrix}
A_{++}^{[201A]}\\
A_{+-}^{[012A]}\\
A_{-+}^{[210A]}\\
A_{--}^{[102A]}
\end{pmatrix}\\\nonumber
&&\vec{A}_4=\begin{pmatrix}
A_{++}^{[201B]}\\
A_{+-}^{[021B]}\\
A_{-+}^{[120B]}\\
A_{--}^{[102B]}
\end{pmatrix}
~\vec{A}_5=\begin{pmatrix}
A_{++}^{[021B]}\\
A_{+-}^{[201B]}\\
A_{-+}^{[102B]}\\
A_{--}^{[120B]}
\end{pmatrix}
~\vec{A}_6=\begin{pmatrix}
A_{++}^{[012A]}\\
A_{+-}^{[201A]}\\
A_{-+}^{[102A]}\\
A_{--}^{[210A]}
\end{pmatrix}
\\&&~~~~~~\vec{A}_7=\begin{pmatrix}
A_{++}^{[102A]}\\
A_{+-}^{[210A]}\\
A_{-+}^{[012A]}\\
A_{--}^{[201A]}
\end{pmatrix}
~~~~~~~\vec{A}_8=\begin{pmatrix}
A_{++}^{[102B]}\\
A_{+-}^{[120B]}\\
A_{-+}^{[021B]}\\
A_{--}^{[201B]}
\end{pmatrix}
\end{eqnarray}
We interpret $\vec{A}_1$ ($\vec{A}_2$) as the amplitudes where both particles are incident on the impurity but particle 2 (1) is closer, $\vec{A}_5$ ($\vec{A}_6$) as the amplitudes in which both particles are outgoing with particle 2 (1) closer to the impurity, $\vec{A}_8$ ($\vec{A}_3$) describes particle 2 (1) having scattered off the impurity while still being closer to the impurity than particle 1 (2), while $\vec{A}_7$ ($\vec{A}_4$) also describes particle 2 (1) having scattered but with particle 1 (2) closer. The Hamiltonian fixes the following relations between these amplitudes
\begin{eqnarray}
\vec{A}_8=S^{20}\vec{A}_1,~~~\vec{A}_3=S^{10}\vec{A}_2,\\
\vec{A}_5=S^{20}\vec{A}_4,~~~\vec{A}_6=S^{10}\vec{A}_7,\\
\vec{A}_7=S^{12}\vec{A}_8,~~~\vec{A}_4=S^{12}\vec{A}_3,\label{S1}
\end{eqnarray}
where
\begin{eqnarray}
S^{20}=S\otimes \mathbb{1},~~~S^{10}=\mathbb{1}\otimes S,\label{S}
\end{eqnarray}
and, as discussed above,
\begin{eqnarray}
S^{12}=\begin{pmatrix}
1&&0&&0&&0\\
0&&e^{i\phi}&&0&&0\\
0&&0&&e^{i\phi}&&0\\
0&&0&&0&&1
\end{pmatrix}.
\end{eqnarray}
The freedom mentioned previously enters upon considering $ \vec{A}_1\leftrightarrow \vec{A}_2$ and $ \vec{A}_5\leftrightarrow \vec{A}_6$. Again, these S-matrices are restricted only in that they cannot mix $\sigma_1=\sigma_2$ amplitudes with $\sigma_1=-\sigma_2$.
We choose to take
\begin{eqnarray}
\vec{A}_2=W^{12}\vec{A}_1,~~~\vec{A}_6=W^{12}\vec{A}_5,\\\label{W}
W^{12}=\begin{pmatrix}
1&&0&&0&&0\\
0&&0&&1&&0\\
0&&1&&0&&0\\
0&&0&&0&&1
\end{pmatrix}.
\end{eqnarray} This is dictated by the consistency of the wave function which requires the S-matrices to satisfy a reflection equation,
\begin{equation}
S^{20}S^{12}S^{10}W^{12}=W^{12}S^{10}S^{12}S^{20}
\end{equation}
\begin{figure}
\includegraphics[trim=30 30 30 30,width=.5\textwidth]{YB.pdf}
\caption{(Color online) The amplitudes are related by applying the operators as depicted here. For consistency we require that the amplitudes obtained by proceeding clockwise or counter-clockwise be the same, resulting in \eqref{RE}.}
\label{fRE}
\end{figure}
Inserting \eqref{W}, \eqref{S} and \eqref{S1} it is easy to see that this indeed holds. A schematic representation is given in Fig.~\ref{fRE}. By introducing the extra regions indexed by $A,B$ we have changed the consistency condition from the Yang-Baxter equation to a generalised version that takes the form of a reflection equation. As explained, the partition into these extra regions is dictated by the linear derivative and the degeneracies associated with it, which require us to choose the correct basis in the degenerate subspace. This basis, the Bethe basis, corresponds to the introduction of the S-matrix $W^{12}$ which satisfies the consistency conditions. Such a degeneracy is not present in a massive theory, in which case integrability is inconsistent with a nontrivial bulk interaction in the presence of a transmitting and reflecting impurity \cite{mussardo}.
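As a concrete numerical sanity check (not part of the derivation), the reflection relation $S^{20}S^{12}S^{10}W^{12}=W^{12}S^{10}S^{12}S^{20}$ can be verified directly with the matrices given above. The single-particle impurity S-matrix $S$ is taken here, as an assumption, to be an arbitrary symmetric $2\times2$ matrix; the relation then holds identically:

```python
import numpy as np

# Check S20 S12 S10 W12 = W12 S10 S12 S20 for a generic symmetric 2x2 S
# (hypothetical illustrative values; the relation is S-independent).
rng = np.random.default_rng(0)
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
S = np.array([[a, b], [b, a]])
phi = 0.7  # arbitrary interaction phase

I2 = np.eye(2)
S20 = np.kron(S, I2)   # S acting on the first chirality factor
S10 = np.kron(I2, S)   # S acting on the second chirality factor
S12 = np.diag([1, np.exp(1j * phi), np.exp(1j * phi), 1])
W12 = np.array([[1, 0, 0, 0],
                [0, 0, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 1]], dtype=complex)

lhs = S20 @ S12 @ S10 @ W12
rhs = W12 @ S10 @ S12 @ S20
assert np.allclose(lhs, rhs)
```

The check works for any symmetric $S$ because $W^{12}$ is the chirality-swap operator, which exchanges $S^{10}$ and $S^{20}$ and commutes with the diagonal $S^{12}$.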
The generalisation to $N$ particles is immediate. The $N$-particle eigenstate with energy $E=\sum_{j=1}^N k_j$ is,
\begin{eqnarray}
\ket{\vec{k}}=\sum_Q\sum_{\vec{\sigma}}\int\theta (x_Q)A_{\vec{\sigma}}^{Q}e^{i\sum \sigma_j k_jx_j}\prod \psi^\dag_{\sigma_j}(x_j)\ket{0}.
\end{eqnarray}
The sum is over the $2^N N!$ regions consisting of all orderings of $x_j$ and the origin and indexed by which particle is closest to the impurity. Just as in the two particle case the amplitudes $A_{\vec{\sigma}}^{Q}$ are related to each other by applying the S-matrices,
\begin{eqnarray}
S^{j0}&=&S_j\otimes_{k\neq j}\mathbb{1},\\\label{Sij}
S^{ij}&=&\begin{pmatrix}
1&&0&&0&&0\\
0&&e^{i\phi}&&0&&0\\
0&&0&&e^{i\phi}&&0\\
0&&0&&0&&1
\end{pmatrix}_{ij}\otimes_{k\neq i,j}\mathbb{1},~~\\\label{Wij}
~~W^{ij}&=&\begin{pmatrix}
1&&0&&0&&0\\
0&&0&&1&&0\\
0&&1&&0&&0\\
0&&0&&0&&1
\end{pmatrix}_{ij}\otimes_{k\neq i,j}\mathbb{1}.
\end{eqnarray}
The subscripts denote which particle spaces the operators act upon. In order for this wavefunction to be consistent it must satisfy the following Yang-Baxter and reflection equations,
\begin{eqnarray}\label{RE}
S^{k0}S^{jk}S^{j0}W^{jk}&=&W^{jk}S^{j0}S^{jk}S^{k0}\\\label{YB1}
W^{jk}W^{jl}W^{kl}&=&W^{kl}W^{jl}W^{jk}\\\label{YB2}
W^{jk}S^{jl}S^{kl}&=&S^{kl}S^{jl}W^{jk}.
\end{eqnarray}
Satisfying these is a sufficient condition for the consistency of the wavefunction because the S-matrices form a representation of the reflection group, just as those in other integrable models form a representation of the permutation group \cite{ZinnJustin}. This will be made evident in the next section when the continuous versions of the S-matrices and the Bethe equations are found.
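The three consistency conditions can likewise be checked numerically in the smallest nontrivial case, $N=3$ (chirality space of dimension $2^3=8$). Again the single-particle impurity S-matrix is assumed, for illustration only, to be a generic symmetric $2\times2$ matrix:

```python
import itertools
import numpy as np

# Hedged check of (RE), (YB1), (YB2) in the three-particle chirality space.
phi = 0.7
rng = np.random.default_rng(1)
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
S1 = np.array([[a, b], [b, a]])  # assumed symmetric single-particle S

basis = list(itertools.product([0, 1], repeat=3))

def S0(j):
    """S^{j0}: single-particle S-matrix on particle j, identity elsewhere."""
    M = np.array([[1.0 + 0j]])
    for k in range(3):
        M = np.kron(M, S1 if k == j else np.eye(2))
    return M

def Sij(i, j):
    """S^{ij}: diagonal phase e^{i phi} when chiralities i and j differ."""
    return np.diag([np.exp(1j * phi) if s[i] != s[j] else 1.0 for s in basis])

def Wij(i, j):
    """W^{ij}: swaps the chiralities of particles i and j."""
    M = np.zeros((8, 8))
    for col, s in enumerate(basis):
        t = list(s)
        t[i], t[j] = t[j], t[i]
        M[basis.index(tuple(t)), col] = 1
    return M

j, k, l = 0, 1, 2
# (RE):  S^{k0} S^{jk} S^{j0} W^{jk} = W^{jk} S^{j0} S^{jk} S^{k0}
assert np.allclose(S0(k) @ Sij(j, k) @ S0(j) @ Wij(j, k),
                   Wij(j, k) @ S0(j) @ Sij(j, k) @ S0(k))
# (YB1): W^{jk} W^{jl} W^{kl} = W^{kl} W^{jl} W^{jk}
assert np.allclose(Wij(j, k) @ Wij(j, l) @ Wij(k, l),
                   Wij(k, l) @ Wij(j, l) @ Wij(j, k))
# (YB2): W^{jk} S^{jl} S^{kl} = S^{kl} S^{jl} W^{jk}
assert np.allclose(Wij(j, k) @ Sij(j, l) @ Sij(k, l),
                   Sij(k, l) @ Sij(j, l) @ Wij(j, k))
```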
To determine the thermodynamic spectrum of the model we place the system on a ring of size $L$. The flux $\Phi=\mathcal{A} L$ through the loop then imposes twisted boundary conditions, so that upon traversing the entire system a particle picks up an additional phase $e^{i\sigma\Phi}$, $\sigma$ being the chirality of the particle. We obtain the following equations which determine the $k_j$,
\begin{eqnarray}\label{PBC}
&& e^{-ik_jL}A_{\sigma_1\dots\sigma_N}=\left(Z_j\right)^{\sigma'_1\dots\sigma'_N}_{\sigma_1\dots\sigma_N}A_{\sigma'_1\dots\sigma'_N}\\
&&Z_j=W^{j-1\,j}\dots W^{1j}B_jS^{1j}\dots S^{jN}S^{j0}W^{jN}\dots W^{j\,j+1}
\end{eqnarray}
where the matrix $Z_j$ transfers the $j$th particle around the ring. Here the matrices $B_j$ act in the $j$th particle chirality space and impose the twisted boundary conditions,
\begin{equation}
B_j=\begin{pmatrix}
e^{i\Phi}&&0\\0&&e^{-i\Phi}
\end{pmatrix}.
\end{equation}
Alternatively we could require hard wall boundary conditions at $x=\pm L/2$ by taking $B_j=\sigma_x$.
Using \eqref{RE}, \eqref{YB1} and \eqref{YB2} it can be shown that all transfer matrices $Z_j$ are equivalent, and so we restrict our attention to solving,
\begin{eqnarray}\nonumber
\left(B_1S^{12}\dots S^{1N} S^{10}W^{1N}\dots W^{12}\right)^{\sigma'_1\dots\sigma'_N}_{\sigma_1\dots\sigma_N}A_{\sigma'_1\dots\sigma'_N}\\\label{Z}
= e^{-ikL}A_{\sigma_1\dots\sigma_N}.
\end{eqnarray}
This is a feature of many quantum impurity models. It arises due to the lack of a dimensionful parameter in the Hamiltonian, which results in S-matrices that are $k$ independent. We denote the operator on the left-hand side by $Z$. Its eigenvalues determine the allowed values of the momenta $k_j$ and therefore the spectrum, $E= \sum_j k_j$. However, before proceeding to the diagonalization of the transfer matrix we turn to the solution of another closely related model, the Weak-Tunnelling model.
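A minimal sketch of this quantisation condition for $N=2$: the transfer matrix of \eqref{Z} reduces to $Z=B_1S^{12}S^{10}W^{12}$, and since every factor is unitary the eigenvalues $e^{-ikL}$ are pure phases, so the momenta are real. Here the single-particle impurity S-matrix is assumed, for illustration only, to have the same symmetric unitary form as $S_t$ below, with a hypothetical coupling value:

```python
import numpy as np

# Build Z = B_1 S^{12} S^{10} W^{12} for N = 2 and verify its eigenvalues
# are unimodular. The single-particle S-matrix used here is an assumption
# (same functional form as S_t), chosen only to be symmetric and unitary.
u_par, phi, Phi = 0.25, 0.6, 0.3
alpha = -4j * u_par / (1 + 4 * u_par**2)
beta = (1 - 4 * u_par**2) / (1 + 4 * u_par**2)
S = np.array([[alpha, beta], [beta, alpha]])   # symmetric and unitary

I2 = np.eye(2)
S12 = np.diag([1, np.exp(1j * phi), np.exp(1j * phi), 1])
S10 = np.kron(I2, S)
W12 = np.array([[1, 0, 0, 0],
                [0, 0, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 1]], dtype=complex)
B1 = np.kron(np.diag([np.exp(1j * Phi), np.exp(-1j * Phi)]), I2)

Z = B1 @ S12 @ S10 @ W12
assert np.allclose(Z.conj().T @ Z, np.eye(4))   # Z is unitary
eigs = np.linalg.eigvals(Z)
assert np.allclose(np.abs(eigs), 1)             # e^{-ikL} are pure phases
```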
\section{Bethe Ansatz eigenstates of the Weak-Tunnelling Hamiltonian}
The embedding of an impurity in a Luttinger liquid could be viewed from the complementary scenario of two liquids which are coupled by a weak link or tunnel junction.
Therefore, in addition to the impurity model we will also consider the Weak-Tunnelling Hamiltonian, $H_{WT}$, which is believed to govern the behaviour of the system in the vicinity of the strong coupling point. It includes two Luttinger liquids, each described by $H_k+H_g$, occupying the regions from $-L/2$ to $0$ and from $0$ to $L/2$, denoted by the subscripts $l$ and $r$ respectively. These are coupled to each other via the tunnelling term,
\begin{eqnarray}
H_t=t\big(\psi^\dag_{+,r}(0)+\psi^\dag_{-,r}(0)\big)\big(\psi_{+,l}(0)+\psi_{-,l}(0)\big)+\text{h.c.} \;\;\;
\end{eqnarray}
which allows for tunnelling between the otherwise disjoint Luttinger liquids.
The single particle solution of the Weak-Tunnelling Hamiltonian is of a similar form to \eqref{n1},
\begin{eqnarray}\nonumber
&&\int_{-\frac{L}{2}}^0 \left[e^{ikx}A^{[10]}_+\psi^\dag_{+,l}(x)+e^{-ikx}A^{[10]}_-\psi^\dag_{-,l}(x)\right]\ket{0}\\
&&+\int^{\frac{L}{2}}_0 \left[e^{ikx}A^{[01]}_+\psi^\dag_{+,r}(x)+e^{-ikx}A^{[01]}_-\psi^\dag_{-,r}(x)\right]\ket{0}.\;\;\;\;
\end{eqnarray}
Here we have used the same notation as in the impurity case so that $A_{\sigma}^{[10]}$ is the amplitude of a particle of chirality $\sigma$ in the left system and $A_{\sigma}^{[01]}$ in the right system. Acting on this with the Hamiltonian and using the boundary conditions $\psi^\dag_{+,l}(0)=\psi^\dag_{-,l}(0)$ and $\psi^\dag_{+,r}(0)=\psi^\dag_{-,r}(0)$ we find that
\begin{eqnarray}\label{t}
\begin{pmatrix}
A^{[01]}_+\\
A^{[10]}_-
\end{pmatrix}&=&S_t
\begin{pmatrix}
A^{[10]}_+\\
A^{[01]}_-
\end{pmatrix},~~~S_t=\begin{pmatrix}
\alpha_t && \beta_t\\
\beta_t &&\alpha_t
\end{pmatrix},\\
\alpha_t&=&\frac{-4it}{1+4t^2},~~~\beta_t=\frac{1-4t^2}{1+4t^2}.
\end{eqnarray}
The imposition of hard wall boundary conditions at $x=\pm L/2$ this time gives
\begin{eqnarray}
e^{-ikL}\begin{pmatrix}
A^{[01]}_+\\
A^{[10]}_-
\end{pmatrix}=\sigma_x
S_t
\begin{pmatrix}
A^{[01]}_+\\
A^{[10]}_-
\end{pmatrix}.
\end{eqnarray}
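A quick hedged check of this single-particle quantisation: $S_t$ as given in \eqref{t} is unitary for any $t$, and the hard-wall condition above then fixes $e^{-ikL}$ to the pure phases $\beta_t\pm\alpha_t$ (the eigenvalues of $\sigma_xS_t$). A hypothetical value of $t$ is used for illustration:

```python
import numpy as np

# Verify unitarity of S_t and that the eigenvalues of sigma_x S_t,
# which quantise k through e^{-ikL}, are unimodular.
t = 0.3  # illustrative tunnelling amplitude
alpha_t = -4j * t / (1 + 4 * t**2)
beta_t = (1 - 4 * t**2) / (1 + 4 * t**2)
S_t = np.array([[alpha_t, beta_t], [beta_t, alpha_t]])
sigma_x = np.array([[0, 1], [1, 0]])

assert np.allclose(S_t.conj().T @ S_t, np.eye(2))   # S_t is unitary
eigs = np.linalg.eigvals(sigma_x @ S_t)
assert np.allclose(np.abs(eigs), 1)                 # e^{-ikL} are phases
# The two eigenvalues are beta_t + alpha_t and beta_t - alpha_t.
assert np.allclose(sorted(eigs, key=np.angle),
                   sorted([beta_t + alpha_t, beta_t - alpha_t],
                          key=np.angle))
```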
The setup for higher particle number is the same as for the impurity model, and the analysis of the preceding section carries over to the present case. This enables us to construct consistent $N$-particle eigenstates. The two-particle S-matrices are given by \eqref{Sij} and \eqref{Wij}. The only difference is that the single-particle S-matrix $S^{j0}$ is replaced with $S^{j0}_t=S_{t\,j}\otimes^N_{k\neq j}\mathbb{1}$. These are readily seen to satisfy the consistency conditions \eqref{RE}-\eqref{YB2}.
As before we impose boundary conditions to determine the spectrum and obtain for hard walls at $x=\pm L/2$,
\begin{eqnarray}\nonumber
\left(\sigma_x S^{12}\dots S^{1N} S_t^{10}W^{1N}\dots W^{12}\right)^{\sigma'_1\dots\sigma'_N}_{\sigma_1\dots\sigma_N}A_{\sigma'_1\dots\sigma'_N}\\\label{Zt}
= e^{-ikL}A_{\sigma_1\dots\sigma_N}.
\end{eqnarray}
We could also have applied periodic or twisted boundary conditions by including $B_1$ instead of $\sigma_x$. The system with periodic or twisted boundary conditions no longer describes two disjoint liquids filling the left and right half lines but rather a ring containing a weak link. This is the dual system to the impurity model on a ring. To distinguish it from the impurity model we denote the operator above by $Z_t$.
In what follows we will be concerned with properties of the impurity and weak link which are independent of the type of boundary condition imposed.
\section{Off-Diagonal Bethe Ansatz}
In the previous section we showed that in order to determine the spectrum of $H$ or $H_{WT}$ we must diagonalise $Z$ or $Z_t$. To achieve this we will make use of the Off-Diagonal Bethe Ansatz (ODBA) \cite{ODBA}. This method allows one to determine the eigenvalues and eigenvectors of a transfer matrix when a proper reference state is absent. It has already been successfully used to obtain the exact solutions of many integrable models with a broken $U(1)$ symmetry. The present problem will be shown to be mappable onto one arising when an XXZ Hamiltonian is diagonalised with open boundary conditions, which is amongst those already considered \cite{ODBAXXZ}. We will use its solution to obtain the eigenvalues of $Z$ and $Z_t$. Although the following procedure can be used with any type of boundary conditions, we will do so only for twisted boundary conditions.
We begin by constructing the monodromy matrix, the central object of the quantum inverse scattering (QIS) and ODBA approaches. It is formed from an XXZ-like R-matrix and reflection matrices. The R-matrix is
\begin{equation}\label{R}
\mathcal{R}(u)=\begin{pmatrix}
1&&0&&0&&0\\
0&&\frac{\sinh{u}}{\sinh{(u+\eta)}}&&\frac{\sinh{\eta}}{\sinh{(u+\eta)}}&&0\\
0&&\frac{\sinh{\eta}}{\sinh{(u+\eta)}}&&\frac{\sinh{u}}{\sinh{(u+\eta)}}&&0\\
0&&0&&0&&1
\end{pmatrix},
\end{equation}
where $u$ is the spectral parameter and $\eta$ the crossing parameter, which encodes the interactions of the model. In our case we identify $e^{-\eta}=e^{i\phi}= \frac{1-ig}{1+ig}$, with $g$ the Luttinger liquid interaction coupling constant.
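As a hedged numerical aside, one can confirm that the R-matrix \eqref{R} satisfies the Yang-Baxter equation $\mathcal{R}_{12}(u-v)\mathcal{R}_{13}(u)\mathcal{R}_{23}(v)=\mathcal{R}_{23}(v)\mathcal{R}_{13}(u)\mathcal{R}_{12}(u-v)$, the property underlying the commuting-transfer-matrix construction below; the spectral and crossing parameters used are arbitrary test values:

```python
import numpy as np

def R(u, eta):
    """Normalised XXZ R-matrix of Eq. (R)."""
    d = np.sinh(u + eta)
    su, se = np.sinh(u) / d, np.sinh(eta) / d
    return np.array([[1, 0, 0, 0],
                     [0, su, se, 0],
                     [0, se, su, 0],
                     [0, 0, 0, 1]], dtype=complex)

def embed(op, i, j, n=3):
    """Place a two-site 4x4 operator on sites (i, j) of an n-site space."""
    T = op.reshape(2, 2, 2, 2)   # indices (i_out, j_out, i_in, j_in)
    dim = 2**n
    M = np.zeros((dim, dim), dtype=complex)
    for col in range(dim):
        bits = [(col >> (n - 1 - k)) & 1 for k in range(n)]
        for oi in (0, 1):
            for oj in (0, 1):
                amp = T[oi, oj, bits[i], bits[j]]
                if amp != 0:
                    nb = bits.copy()
                    nb[i], nb[j] = oi, oj
                    row = sum(v << (n - 1 - k) for k, v in enumerate(nb))
                    M[row, col] += amp
    return M

u, v, eta = 0.37, -0.21, 0.5  # arbitrary test values
R12 = embed(R(u - v, eta), 0, 1)
R13 = embed(R(u, eta), 0, 2)
R23 = embed(R(v, eta), 1, 2)
assert np.allclose(R12 @ R13 @ R23, R23 @ R13 @ R12)
```

Note also the regular point $u=0$, where $\mathcal{R}(0)$ reduces to the permutation matrix.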
The reflection or boundary matrices, $K^\pm(u)$, we use take the form of integrable boundary conditions for the XXZ model \cite{DeVega} with components,
\begin{eqnarray}\label{K}
K^-_{11}(u)&=&K^-_{22}(u)=2i\cosh{(c+\theta/2)}\cosh{u}\\
K^-_{12}(u)&=&K^-_{21}(u)=\sinh{2u}\\\nonumber
K^+_{11}(u)&=&2\left(\sinh{(-\theta)}\cosh{(i\Phi)}\cosh{(u+\eta)}\right.\\&&\left.+\cosh{(\theta)}\sinh{(i\Phi)}\sinh{(u+\eta)}\right)\\\nonumber
K^+_{22}(u)&=&2\left(\sinh{(-\theta)}\cosh{(i\Phi)}\cosh{(u+\eta)}\right.\\&&\left.-\cosh{(\theta)}\sinh{(i\Phi)}\sinh{(u+\eta)}\right)\\\label{k}
K^+_{12}(u)&=&K^+_{21}(u)=-\sinh{(2u+2\eta)}.
\end{eqnarray}
Herein we have introduced the parameter $c = \log{\left((1-U^2/4)/U\right)}$ for the impurity model, $U$ being the strength of the coupling of the impurity to the liquid, or $c=\log{\left(4t/(1-4t^2)\right)}$ for the Weak-Tunnelling model. Let us denote the latter by $c_t$ when a distinction is required. The logarithmic dependence on the bare coupling constant will be important later when considering thermodynamic quantities; we will see that it leads to the generation of a scale with power-law dependence on the bare parameters in \eqref{H}. In addition we have introduced an inhomogeneity parameter $\theta$ which will enable us to relate the monodromy matrix to $Z$ or $Z_t$. Using these definitions we construct the monodromy matrix,
\begin{eqnarray}\nonumber
\Xi_0(u)=\mathcal{C} K^+(u)\mathcal{R}_{01}(u+\theta/2)\dots \mathcal{R}_{0N}(u+\theta/2)\\\label{Xi}
\times K^-(u)\mathcal{R}_{0N}(u-\theta/2)\dots \mathcal{R}_{01}(u-\theta/2) \label{C}
\end{eqnarray}
with $\mathcal{C}=\frac{-\beta e^{-\eta}}{\sinh{\theta}\sinh{\frac{3\theta}{2}}}$ and $\beta\to\beta_t$ for the Weak-Tunnelling model. We have introduced an auxiliary space, indexed by $0$, which allows for a convenient formulation of the problem. The form of \eqref{Xi} is similar to that of the XXZ model with two boundaries described by $K^+$ and $K^-$. The transfer matrix is given by the trace over this auxiliary space,
\begin{eqnarray}
t(u)&=&\text{Tr}_0\,\Xi_0(u).
\end{eqnarray}
The judicious choice of boundary matrices means that the transfer matrices commute for differing spectral parameters, $[t(u),t(v)]=0$ \cite{Sklyannin}, and by expanding in powers of $u$ a set of operators which commute with $t(v)$ is generated. This establishes the integrability of the transfer matrix.
We now return to our original problem, the diagonalization of $Z$. The choice of \eqref{R} and \eqref{K}-\eqref{k} as well as the dependence of the monodromy matrix on $\theta$ means that we can relate this to the transfer matrix. In particular, setting $u=\theta/2$ we have,
\begin{equation}
Z=\lim_{\theta\rightarrow\infty}t(\theta/2).
\end{equation}
The same holds for $Z_t$ with the appropriate replacements. What we have shown, therefore, is that determining the spectrum of $Z$ or $Z_t$ is related to that of the open XXZ chain with prescribed inhomogeneities, boundaries and twists. In addition we have established the integrability of both the Kane-Fisher impurity and Weak-Tunnelling models.
At this point the QIS method ceases to be of use. The reason is that the non-diagonal nature of the boundary matrices means that there is no proper reference state upon which to build the eigenstates of $t(u)$ and determine the eigenvalues. This can be circumvented by means of the newly developed ODBA approach, which utilises certain algebraic properties of the transfer matrix to completely determine its eigenvalues in terms of an inhomogeneous T-Q relation. The eigenvalue is parametrised by Bethe roots, $\mu_j$, which are fixed by the Bethe equations. The states can then also be recovered by means of separation of variables \cite{SovODBA}. Presently we are only interested in the eigenvalues of $t(u)$ and so postpone any discussion of the states to future work.
The transfer matrix $t(u)$ has previously been considered in \cite{ODBAXXZ} wherein the eigenvalues, $\Lambda(u)$, and the Bethe equations were determined. Inserting \eqref{K}-\eqref{k} and \eqref{C} into their results we find for $N$ even,
\begin{eqnarray}\nonumber
\Lambda(\theta/2)=-4i\beta e^{i\phi}\frac{\sinh{(\theta-2i\phi)}\cosh{(c)}\cosh{(\theta/2)}}{\sinh{(\theta-i\phi)}\sinh{\theta}}\\\label{Lambda}
\times\cosh{(\theta/2-i\Phi)}\prod^N_j\frac{\sinh{(\theta/2-\mu_j+i\phi)}}{\sinh{(\theta/2+\mu_j-i\phi)}}.
\end{eqnarray}
We have restricted ourselves to $u=\theta/2$ since we are only interested in determining $e^{-ikL}=\lim_{\theta\to\infty} \Lambda(\theta/2)$. In addition we obtain the Bethe equations,
\begin{widetext}
\begin{eqnarray}\nonumber
\frac{\left[\cosh{\left(i(N+1)\phi+c+i\pi/2+i\Phi-\theta/2+2\sum^N_{j=1}\mu_j\right)}-1\right]\sinh{(2\mu_j-i\phi)}\sinh{(2\mu_j-2i\phi)}}{2i\cosh{(\mu_j+c+\theta/2-i\phi)}\cosh{(\mu_j-i\phi)}\cosh{(\mu_j-i\phi+i\Phi)}\sinh{(\mu_j-\theta-i\phi)}}\\\label{BAE}
=\prod^N_{l=1}\frac{\sinh{(\mu_j+\mu_l-i\phi)}\sinh{(\mu_j+\mu_l-2i\phi)}}{\sinh{(\mu_j+\theta/2-i\phi)}\sinh{(\mu_j-\theta/2-i\phi)}}
\end{eqnarray}
\end{widetext}
along with the selection rules $\mu_j\neq\mu_k$ and $\mu_j\neq\mu_k+i\phi$. These selection rules are analogous to the exclusion principle in other Bethe Ansatz problems \cite{Korepin}. Upon taking the limit $\theta\to\infty$, \eqref{Lambda} and \eqref{BAE} completely determine the spectrum of $Z$. Prior to doing so we should consider the dependence of $\mu_j$ on $\theta$. The dependence of the Bethe parameters on the inhomogeneity $\theta$ follows from the form of \eqref{Lambda} and \eqref{BAE}, with half the roots scaling as $-\theta/2$ while the other half go as $\theta/2$. This is also the case for $N$ odd, as $N+1$ Bethe parameters are required by the ODBA solution \cite{ODBAXXZ}. We separate out the $\theta$-dependent part and introduce two sets of Bethe parameters $\{ \lambda_j, \nu_j \}$,
\begin{equation}\label{mu}
\mu_j=\begin{cases} \lambda_j+i\phi/2+\theta/2 &
\text{if } j\leq \frac{N}{2}\\
-\nu_{j-N/2}+i\phi/2-\theta/2&
\text{if } j>\frac{N}{2}.
\end{cases}
\end{equation}
The validity of this assumption will be checked by recovering the Luttinger liquid spectrum when the impurity is removed. Inserting \eqref{mu} into \eqref{Lambda} the eigenvalues become
\begin{equation}\label{ee}
e^{-ikL}=\frac{-e^{-i\Phi}}{\alpha}\prod^{N/2}_j\frac{\sinh{(\lambda_j-i\phi/2)}}{\sinh{(\nu_j+i\phi/2)}}e^{-\lambda_j+\nu_j+i\phi}.
\end{equation}
Two sets of Bethe equations for $\lambda_j$ and $\nu_j$ are obtained from \eqref{BAE} and \eqref{mu},
\begin{eqnarray}\nonumber
&&\sinh^N{(\lambda_j-i\phi/2)}=-e^{-2\lambda_j-i\phi+2c+2i\Phi}e^{2\sum_k(2\lambda_k-\nu_k)}\\\label{LBAE}
&&~~~~~~~~\times\prod^{N/2}_k\sinh{(\lambda_j-\nu_k)}\sinh{(\lambda_j-\nu_k-i\phi)}\\\nonumber
&&\sinh^N{(\nu_j+i\phi/2)}=\frac{2i\cosh{(c-\nu_j-i\phi/2)}}{e^{\nu_j-c+i\phi/2}}e^{2\sum_k\lambda_k}\\\label{NBAE}
&&~~~~~~~~\times\prod^{N/2}_k\sinh{(\nu_j-\lambda_k)}\sinh{(\nu_j-\lambda_k+i\phi)}
\end{eqnarray}
with the selection rules now reading $\lambda_j\neq\nu_k,~\lambda_j\neq\lambda_k,~\nu_j\neq\nu_k$. The complexity of both the eigenvalues and the Bethe equations is a common feature of models solved by ODBA and accordingly makes them more difficult to treat. However, we can gain some insight into the structure of the solutions by considering the case of weak or vanishing impurity strength, $U \to 0$. This will also serve as a check on \eqref{mu} by correctly reproducing the spectrum of the Luttinger liquid. In this limit the impurity parameter diverges, $c\to\infty$. Inserting this in \eqref{LBAE}, \eqref{NBAE} we see that the solutions are either $\lambda_j=\nu_j$ or $\lambda_j=\nu_j+i\phi$. In terms of the original parameters these are $\mu_{j+N/2}=-\mu_j+i\phi$ or $\mu_{j+N/2}=-\mu_j+2i\phi$. This leaves half the parameters, $\mu_j, ~j\leq N/2$, undetermined. To fix these remaining $\mu_j$, we return to the expression for $\Lambda(u)$ as given by \cite{ODBAXXZ} and assume there are $M$ pairs such that $\mu_{j+N/2}=-\mu_j+i\phi$ while the other $N/2-M$ are of the form $\mu_{j+N/2}=-\mu_j+2i\phi$. Upon taking $c\to\infty$ we find that the latter $N/2-M$ pairs decouple and we are left with a T-Q relation in terms of $M$ parameters $\mu_j$ (see Appendix B). From this we derive the eigenvalues
\begin{eqnarray}\label{E}
e^{-ikL}=e^{Mi\phi-i\Phi}\prod^M_{j=1}\frac{\sinh{(\lambda_j-i\phi/2)}}{\sinh{(\lambda_j+i\phi/2)}}.
\end{eqnarray}
The Bethe equations are similar to those of the XXZ model,
\begin{eqnarray}\nonumber
\frac{\sinh^N{(\lambda_j-i\phi/2)}}{\sinh^N{(\lambda_j+i\phi/2)}}&=&e^{i(N-2M)\phi+2i\Phi}\\\label{llBae}
&&\times \prod^M_{k\neq j}\frac{\sinh{(\lambda_j-\lambda_k-i\phi)}}{\sinh{(\lambda_j-\lambda_k+i\phi)}}.
\end{eqnarray}
The extra phase factor in the Bethe equations will not change the structure of the solutions, which are either real or form strings in the thermodynamic limit \cite{Takahashi} for $-\pi\leq\phi\leq\pi$. It is, however, crucial in obtaining the correct energy of the Luttinger liquid.
Combining \eqref{E} and \eqref{llBae} we obtain,
\begin{eqnarray}\nonumber
E=\frac{2\pi}{L}\sum^N_kn_k-\frac{2\pi}{L}\sum_j^MI_j-\frac{2M(N-M)}{L}\phi\\\label{lle} +\frac{\Phi}{L}(N-2M).
\end{eqnarray}
Here $n_k$ and $I_j$ are the quantum numbers associated with the charge and chiral degrees of freedom. The last term is recognisable as $-\mathcal{A}(N_+-N_-)$. This validates our choice of \eqref{mu}.
Before proceeding to a study of the impurity thermodynamics we should note that strings represent gapless excitations of the Luttinger liquid and their structure depends heavily on the strength of the interaction. While we have successfully diagonalised the model for all $\phi$ and $U\geq0$, for clarity we hereafter restrict ourselves to the simplest string structure and take $|\phi|=\pi/\nu$ with $\nu >2$ an integer. This then fixes the allowed string lengths and parities. As in other integrable models we can have $j$-strings
\begin{equation}
\lambda^{(j,l)}=\lambda^j+i(2j+1-l)\phi/2,
\end{equation}
for $j=1,\dots,\nu-1$. These are said to have parity $v_j=1$. In addition we may also have strings of negative parity, $v_\nu=-1$, which are centred on the $i\pi/2$ axis. As a consequence of our choice of $\phi$, however, only $1$-strings of negative parity are allowed,
\begin{equation}
\lambda^\nu_\alpha+i\pi/2.
\end{equation}
Once again these represent bulk excitations and so will not be affected by the introduction of a local impurity. Our choice of scattering (Bethe) basis has dictated these as the appropriate excitations of the bulk which diagonalise the impurity.
The formal similarity between the Bethe Ansatz equations of the XXZ system with boundaries and those of the impurity Luttinger system arises from the analogy between the spin degrees of freedom in the first and the chiral degrees of freedom in the second, though their dynamics is of course very different. We note that for the XXZ model with generic boundary fields the residual $U(1)$ spin symmetry is broken by the off-diagonal elements of the boundary matrices, and it is this that necessitates the use of the ODBA. For the Luttinger liquid we also have a $U(1)$ symmetry (with charge $N_+-N_-$), which is why we are led to the XXZ R-matrix, while the inclusion of the impurity breaks this symmetry and forces us to adopt the ODBA.
\section{Thermodynamics}
Having shown how the spectra of $Z$ and $Z_t$ are described by \eqref{ee}, \eqref{LBAE} and \eqref{NBAE} we determine from it the spectrum of $H$ and $H_{WT}$ and proceed to study their thermodynamic behaviour. In particular we calculate the free energy and entropy of the impurity and tunnel junction. In doing so we are interested in impurity effects but not finite size effects. As a result we will lose sensitivity to the influence of the flux $\Phi$ \cite{Shastri}. In the following we set $\Phi$ to zero and will address transport properties through the Kubo formula.
Dealing directly with \eqref{LBAE} and \eqref{NBAE} is arduous due to their non-standard form, but methods have been developed to extract physical quantities in the thermodynamic limit \cite{BdryEnergy}. Here we will adopt a different approach. We have just seen that for $c\to\infty$ the eigenvalues and Bethe equations are given by \eqref{E} and \eqref{llBae}. For large but finite $c$, corresponding to $U\ll 1$, the form of these equations is modified by an impurity term which is necessarily of order $1/N$. Indeed, we know that bulk properties cannot be modified by introducing an impurity. Thus, we make the assumption that the Bethe parameters are either real, or form strings of positive parity such that
\begin{eqnarray}
\text{Im}\{\lambda^{(j,l)}\}=\text{Im}\{\nu^{(j,l)}\}=(2j+1-l)\phi/2
\end{eqnarray}
or of negative parity, Im$\{\lambda_j\}=$Im$\{\nu_j\}=\pi/2$, or come in pairs, $\text{Im}\{\lambda_j-\nu_j\}=\phi$, in the thermodynamic limit.
Proceeding from this assumption we can derive the continuous form of the Bethe Ansatz equations (BAE). The result is that the distributions of $j$-strings, $\rho_j(x)$, and of holes, $\rho_j^h(x)$ \cite{Takahashi}, satisfy
\begin{eqnarray}\label{TBA}
Na_j(x)+b_j(x)=\rho_j(x)+\rho_j^h(x)+\sum_k^\nu A_{jk}*\rho_k(x)\; \; \;\;\\
Na_\nu(x)+b_\nu(x)=-\rho_\nu(x)-\rho_\nu^h(x)+\sum_k^\nu A_{\nu k}*\rho_k(x)\; \; \;
\end{eqnarray}
where we define:
\begin{eqnarray}
a_j(x)&=&\frac{1}{2\pi}\d{}{x}p(x,n_j,v_j)\\
A_{jk}(x)&=&\frac{1}{2\pi}\d{}{x}\Theta_{jk}(x)\\
b_j(x)&=&-\frac{1}{4\pi}\d{}{x}p(x-c/\phi,n_j,-v_j)
\end{eqnarray}
with
\begin{eqnarray}
p(x,n_j,v_j)=2v_j\arctan{\left((\cot{n_j\phi/2})^{v_j}\tanh{\phi x}\right)} \;\;\;\;\;
\\
\Theta_{jk}(x)=p(x,|n_j-n_k|,v_jv_k)+p(x,n_j+n_k,v_jv_k) \nonumber\\
+2\sum_q p(x,|n_j-n_k|+2q,v_jv_k)
\end{eqnarray}
and $*$ denoting a convolution $f*g(x)=\int \mathrm{d} y\,f(x-y)g(y)$.
The change of sign for the $v=-1$ roots arises because $p(x,n_j,v_j)$ changes from monotonically increasing to decreasing when $v_j=1\to v_j=-1$. In order to have $\rho_\nu(x)\geq 0$ we need to introduce the sign. The energy in terms of these string configurations is
\begin{equation}\label{Estring}
E=-\sum_{j=1}^\nu D\int\rho_j(x)\left(p(x,n_j,v_j)+\theta(v_j)\pi\right).
\end{equation}
The form of the Bethe equations is very similar to that of the anisotropic Kondo model (AKM). Indeed, if we change the parity of the impurity terms, $b_j(x)$, from $-1$ to $1$, so that each becomes $a_j(x)$, we recover the equations for the AKM with zero external field \cite{TWAKM}. The change in the parity of the impurity term can be understood by noticing that the impurity we presently consider is not merely a particle at a fixed location but introduces a new aspect, the mixing of the left and right movers; this is in contrast to the Kondo model or AKM. In addition, the change in parity ensures that if the non-interacting limit $\phi\to 0$ is taken, the impurity term vanishes and the distributions are those of free fermions.
We now proceed to construct the free energy by means of the Yang-Yang approach and its generalisation. The approach is well known and we just provide the main steps. The free energy, $F=E-TS$, where $E$ is given by \eqref{Estring} and $S=\sum_j\int \left[(\rho_j+\rho_j^h)\log{(\rho_j+\rho_j^h)}-\rho_j\log{(\rho_j)}-\rho^h_j\log{(\rho^h_j)}\right]$ is the entropy associated to the distributions, is minimised with respect to $\rho_j$ which are solutions of the BAE.
The result of this minimisation gives the thermodynamic Bethe ansatz equations (TBA),
\begin{eqnarray}\nonumber
\log{\eta_j(x)}=s*\log{(1+\eta_{j+1}(x))(1+\eta_{j-1}(x))}~~~~~~~~\\\label{gt}
+\delta_{j,\nu-2}s*\log{(1+\eta^{-1}_\nu(x))}-\delta_{j,1}\frac{2D}{T}\arctan{e^{\pi x}}\\
\log{\eta_{\nu-1}(x)}=s*\log{(1+\eta_{\nu-2}(x))}=-\log{\eta_{\nu}(x)}~~~
\end{eqnarray}
with $\eta_j(x)=\rho_j^h(x)/\rho_j(x)$ and $s(x)=\frac{1}{2\cosh{\pi x}}$. The density $D=\frac{N}{L}$ also plays the role of the bandwidth, up to a factor of $\pi$, for the linear spectrum: setting $k_F=0$, the ground state is filled down to $-N \frac{2\pi}{L}$.
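A small hedged check on the kernel entering the TBA equations: $s(x)=1/(2\cosh\pi x)$ has total weight $1/2$, which is what will make the constant ($x$-independent) solutions of the hierarchy obey $\eta_j^2=(1+\eta_{j-1})(1+\eta_{j+1})$ when we analyse the fixed points below:

```python
import numpy as np

# Numerically confirm that the TBA kernel s(x) = 1/(2 cosh(pi x))
# integrates to 1/2 over the real line.
x = np.linspace(-20, 20, 400001)
s = 1.0 / (2.0 * np.cosh(np.pi * x))
weight = np.sum(s) * (x[1] - x[0])   # simple Riemann sum
assert abs(weight - 0.5) < 1e-3
```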
Having taken the thermodynamic limit and derived the TBA equations, we proceed to take the scaling limit to obtain universal quantities, eliminating any dependence on $D$. As we shall see, the model generates an energy scale $T_{KF}$ which will be held fixed as $D \to \infty$. Thus high and low temperature regimes will be defined with respect to $T_{KF}$ and are always small compared to $D$. With this in mind we introduce the universal functions \cite{TWAKM},
\begin{eqnarray}\label{U}
\varphi_j(x)&=&\frac{1}{T}\log{\big(\eta_j(x+\frac{1}{\pi}\log{\frac{T}{D}})\big)}.
\end{eqnarray}
Inserting these into \eqref{gt} and approximating the driving term, $-\frac{2D}{T}\arctan{e^{\pi (x+\frac{1}{\pi}\log{\frac{T}{D}})}}\simeq -2e^{\pi x}$, an approximation valid since only this range of values contributes to $\eta_1(x)$, we obtain the universal (or scaling) form of the TBA equations,
\begin{eqnarray}\nonumber
\varphi_j(x)&=&s*\log{(1+e^{\varphi_{j-1}(x)})(1+e^{\varphi_{j+1}(x)})}\\\label{UTBA}
&&-\delta_{j,1}2e^{\pi x},~~~j<\nu-2\\\nonumber
\varphi_{\nu-2}(x)&=&s*\log(1+e^{\varphi_{\nu-1}(x)})(1+e^{\varphi_{\nu-3}(x)})\\
&&~~~~~~~~~~~~~~~~~~~~~~~~\times(1+e^{-\varphi_\nu (x)}),\\
\varphi_{\nu-1}(x)&=&s*\log{(1+e^{\varphi_{\nu-2}(x)})}=-\varphi_{\nu}(x).
\end{eqnarray}
The free energy can then be written as:
\begin{eqnarray}
F=F^{LL}+F^{i}
\end{eqnarray}
with $F^{LL}=E_0-T N\int s(x)\log{(1+\exp{\varphi_1(x)})}$ being the bulk contribution ($E_0$ the ground state energy), while the impurity contribution is
\begin{eqnarray}\label{F}
F^{i}=-T\int\mathrm{d} x\, s(x+\frac{1}{\pi}\log\frac{T}{T_{KF}})
\log{(1+e^{\varphi_{\nu-1}(x)})}.
\end{eqnarray}
We note here the appearance of a scale $T_{KF}=De^{\pi c/\phi}$ which has been generated by the model. We will measure all temperatures relative to this scale, and we can obtain universal results by keeping $T_{KF}$ fixed while taking $D\to\infty$. In terms of the original parameters of the Hamiltonian this is
\begin{eqnarray}
T_{KF}
&=&D\left(\frac{U}{1-U^2/4}\right)^{\frac{\pi}{2\arctan g}}.
\end{eqnarray}
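The two forms of the scale can be checked against each other numerically. The sign convention $\phi=-2\arctan g$, following from $e^{i\phi}=(1-ig)/(1+ig)$, is an assumption of this check, and the parameter values are purely illustrative:

```python
import numpy as np

# Hedged consistency check: T_KF = D e^{pi c/phi} with
# c = log((1 - U^2/4)/U) and phi = -2 arctan(g) reproduces the
# power-law form quoted in the text.
U, g, D = 0.3, 0.8, 1.0                 # illustrative values
c = np.log((1 - U**2 / 4) / U)
phi = -2 * np.arctan(g)                 # from e^{i phi} = (1-ig)/(1+ig)
T_KF_scale = D * np.exp(np.pi * c / phi)
T_KF_bare = D * (U / (1 - U**2 / 4))**(np.pi / (2 * np.arctan(g)))
assert np.isclose(T_KF_scale, T_KF_bare)
```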
This scale depends on the impurity strength as a power law, with an exponent set by the interaction, which matches predictions made by Renormalisation Group techniques \cite{KF}. Having identified the scale we can determine the dependence of the impurity strength on the cutoff $D$. The behaviour depends on the sign of the interaction strength. For repulsive interactions, $g>0$,
\begin{equation}
U(D)\sim \left(\frac{T_{KF}}{D}\right)^{\frac{2\arctan g}{\pi}}
\end{equation}
which shows that $U\to 0$ as $D\to \infty$ or, running the argument backwards, that the impurity strengthens at small energy scales as $D$ is decreased. In contrast, for attractive interactions $U(D)$ grows with the scale, signifying the healing of the system at low energy.
Likewise, the Weak-Tunnelling Hamiltonian also generates a scale $T_{WT}=De^{\pi c_t/\phi}$. The complementary nature of these models is exposed when written in the bare parameters,
\begin{eqnarray}
T_{WT}=D\left(\frac{4t}{1-4t^2}\right)^{-\frac{\pi}{2\arctan g}}.
\end{eqnarray}
The change in the sign of the exponent causes the tunnelling parameter to run oppositely to the impurity strength. The two systems thus become disjoint when the interactions are repulsive and completely joined for attractive interactions at low energies.
Any thermodynamic calculations are valid only when the generated scale is less than the cutoff. Accordingly we are hereafter restricted to the repulsive regime of the impurity model and the attractive regime for the Weak-Tunnelling Hamiltonian. We will only present the former but the latter is similar with the appropriate replacement of the scale.
Having taken the scaling limit, we turn now to study the universal temperature dependence of the free energy. This requires the full solution of the TBA equations, which can be achieved only numerically. Here we shall consider the high-temperature, $T \gg T_{KF}$, and low-temperature, $T \ll T_{KF}$, limits and leave the study of the crossover to a later publication.
The free energy is given in terms of $\varphi_{\nu-1}$, which is coupled to all other $\varphi_j$, but we can still obtain some results at high and low temperature. At $T\gg T_{KF}$ the integral in \eqref{F} is dominated by the behaviour at $x\to-\infty$; in this limit the driving term drops out of \eqref{gt} and the solutions are constants. Denoting $e^{\varphi_j(-\infty)}=x_j$, we get,
\begin{eqnarray}\label{high}
x_j=(j+1)^2-1,~~~~x_{\nu-1}=\nu-1=1/x_\nu.
\end{eqnarray}
Similarly, for low temperatures $T\ll T_{KF}$ we look for solutions at $x\to\infty$. This time we denote $e^{\varphi_j(\infty)}=y_j$ and find
\begin{equation}\label{low}
y_j=j^2-1,~~~y_{\nu-1}=\nu-2=1/y_{\nu}.
\end{equation}
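Both sets of constants can be verified directly. Since the kernel $s$ has total weight $1/2$, the $x$-independent TBA equations away from the boundary nodes of the hierarchy reduce to $x_j^2=(1+x_{j-1})(1+x_{j+1})$, and the quoted values satisfy this recursion (a hedged check of the interior equations only, not of the special nodes $j=\nu-1,\nu$):

```python
# Check that the high-T constants x_j = (j+1)^2 - 1 and the low-T
# constants y_j = j^2 - 1 solve x_j^2 = (1 + x_{j-1})(1 + x_{j+1})
# in the interior of the TBA hierarchy.
def x_hi(j):
    return (j + 1)**2 - 1

def y_lo(j):
    return j**2 - 1

for j in range(2, 50):
    assert x_hi(j)**2 == (1 + x_hi(j - 1)) * (1 + x_hi(j + 1))
    assert y_lo(j)**2 == (1 + y_lo(j - 1)) * (1 + y_lo(j + 1))
```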
Using the expression for the free energy along with~\eqref{low} and \eqref{high} we can calculate the impurity free energy near the UV and IR fixed points,
\begin{eqnarray}
F^i_{UV}=\frac{T}{2}\log{(\nu)},~~F^i_{IR}=\frac{T}{2}\log{(\nu-1)}
\end{eqnarray}
The difference in the impurity entropy between the fixed points,
\begin{equation}\label{Ent}
S^i_{UV}-S^i_{IR}=\frac{1}{2}\log{\frac{\nu}{\nu-1}}
\end{equation}
shows the usual decrease as the system flows from the UV to the IR fixed point (a flow from the weak to the strong coupling regime for repulsive interactions), a decrease which in the language of the renormalisation group corresponds to the degrees of freedom that were integrated out.
The form of this result agrees with the values calculated for the boundary terms in both the boundary Sine-Gordon model \cite{FSW} and the XXZ chain with parallel boundary fields \cite{deSa}; however, the degrees of freedom as well as the interpretation of $\nu$ are different in those cases.
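Taking the quoted value \eqref{Ent} at face value, the flow can be phrased as a boundary g-theorem statement: the entropy drop is positive for every $\nu\ge2$ and shrinks as $\nu$ grows. A minimal numerical illustration (the function name is ours, purely illustrative):

```python
import math

def entropy_drop(nu):
    # S^i_UV - S^i_IR = (1/2) log(nu / (nu - 1)), Eq. (Ent)
    return 0.5 * math.log(nu / (nu - 1))

drops = [entropy_drop(nu) for nu in range(2, 10)]
assert all(d > 0 for d in drops)                     # entropy decreases along the flow
assert all(a > b for a, b in zip(drops, drops[1:]))  # and the drop shrinks with nu
```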
We now consider the corrections to the asymptotic limits
\eqref{high} and \eqref{low}, which can also be calculated \cite{deSa}. The corrections yield the specific heat, which is found to scale as,
\begin{eqnarray}\label{c}
C(T\ll T_{KF})&\sim & \left(\frac{T}{T_{KF}}\right)^\frac{2}{\nu-1}\\
C(T\gg T_{KF})&\sim &\left(\frac{T_{KF}}{T}\right)^\frac{2}{\nu}
\end{eqnarray}
indicating that both the strong and weak coupling fixed points are non-Fermi liquid in nature.
Using arguments from boundary conformal field theory \cite{AL} we can identify the leading irrelevant operators at both fixed points and thus determine the scaling of the conductance as given by Kubo's formula. At low temperature the conductance vanishes as $G \sim T ^\frac{2}{\nu-1}$, corresponding to the effective increase of the impurity strength $U$ as $D$ is decreased, noted earlier. Thus the low temperature physics is governed by the strong coupling Hamiltonian, where the wire is cut by the impurity and for which the Weak-Tunnelling model is the starting point. At high temperatures, in addition to the wire conductance $G_0= K e^2/h$, with $K=(\nu -1)/\nu$ for our choice of $\phi$, we have the impurity correction $ G \sim T^{-\frac{2}{\nu}}$, its vanishing at high temperatures corresponding to the healing of the wire \cite{KF}. We expect similar results to be obtained from finite size calculations on a ring threaded by flux $\Phi$.
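The power counting above can be collected in one place; the following sketch is mere bookkeeping of the quoted exponents (the function and key names are ours):

```python
def transport_exponents(nu):
    """Bookkeeping of the quoted power laws for phi = pi/nu (illustrative only):
    Luttinger parameter K = (nu - 1)/nu, low-T conductance G ~ T^(2/(nu - 1)),
    high-T impurity correction ~ T^(-2/nu) on top of G0 = K e^2/h."""
    return {
        "K": (nu - 1) / nu,
        "low_T_exponent": 2 / (nu - 1),
        "high_T_exponent": -2 / nu,
    }

# e.g. nu = 3 gives K = 2/3: G ~ T at low T, correction ~ T^(-2/3) at high T
ex = transport_exponents(3)
assert ex["low_T_exponent"] == 1.0
```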
\section{Elementary excitations}
In the previous section we derived the impurity thermodynamics of both the Kane-Fisher impurity model and the Weak-Tunnelling model with spin isotropic bulk interaction. Here we discuss the elementary excitations of the models, which we call {\it chirons} owing to their origin in the chiral degrees of freedom.
The ground state of the system contains only real roots whose distribution is governed by the $j=1$ equation of \eqref{TBA} with $\rho^h_1(x)=\rho_j(x)=0$ for $j>1$. Excitations above this ground state are obtained by adding holes in this distribution. The chiron energy, $\varepsilon=2D\arctan{e^{\pi x^h}}$, appears as the driving term in the TBA equations \eqref{gt}, with $x^h$ being the position of the hole in the distribution.
Using the method of \cite{4lectures} we can determine their phase shift as they scatter past the impurity. To do this we note that in the absence of the impurity the chiron energy should take on values $2\pi I^h/L$ (see Eq.~\eqref{lle}). The $1/L$ deviation of $\varepsilon$ from this value gives the chiron-impurity phase shift. Up to an overall constant phase the impurity S-matrix is
\begin{eqnarray}
S^{c,i}(\varepsilon)&=&e^{i\Delta^{c,i}(\frac{1}{\pi}\log{(\varepsilon/T_{KF})})},\\
\Delta^{c,i}(x)&=&\int\frac{\mathrm{d}\omega}{8\pi i\omega}\frac{\tanh{(\omega/2)}}{\sinh{\left((\pi/\phi-1)\omega/2\right)}}e^{i\omega x} \nonumber
\end{eqnarray}
This is valid for $\pi/\phi$ being an arbitrary rational number between $0$ and $1$. We see that the phase shift is nontrivial at both low and high energies, as both the IR and UV fixed points are nontrivial. This is to be compared with bare electrons, which are perfectly transmitted at high energies and reflected at low energies.
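Since $\tanh(\omega/2)/\sinh\left((\pi/\phi-1)\omega/2\right)$ is even in $\omega$, the phase shift reduces to the real integral $\Delta^{c,i}(x)=\frac{1}{4\pi}\int_{0}^{\infty}\mathrm{d}\omega\,\frac{\tanh(\omega/2)}{\omega\sinh\left((\pi/\phi-1)\omega/2\right)}\sin(\omega x)$, convenient for numerics. A rough quadrature sketch (the cutoff, step number and the choice $\phi=\pi/\nu$ with $\nu=3$ are ours, purely illustrative):

```python
import math

def delta_ci(x, nu=3, w_max=60.0, n=60000):
    """Trapezoid estimate of Delta^{c,i}(x) for phi = pi/nu (so pi/phi - 1 = nu - 1)."""
    a = nu - 1
    h = w_max / n

    def integrand(w):
        if w == 0.0:  # limit w -> 0 of tanh(w/2) sin(w x) / (w sinh(a w/2)) is x / a
            return x / a
        return math.tanh(w / 2) * math.sin(w * x) / (w * math.sinh(a * w / 2))

    s = 0.5 * (integrand(0.0) + integrand(w_max))
    s += sum(integrand(k * h) for k in range(1, n))
    return s * h / (4 * math.pi)

# the phase shift is odd in x and vanishes at x = 0, as it must
assert delta_ci(0.0) == 0.0
assert abs(delta_ci(1.5) + delta_ci(-1.5)) < 1e-12
```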
Adding two holes to the ground state distribution allows us to calculate the chiron-chiron phase shift in the same manner,
\begin{eqnarray}
S^{c,c}(\varepsilon_1,\varepsilon_2)&=&e^{i\Delta^{c,c}(\varepsilon_1-\varepsilon_2)}, \\
\Delta^{c,c}(x)&=&\int\frac{\mathrm{d}\omega}{4\pi i\omega}\frac{\sinh{((\pi/\phi-2)\omega/2)}e^{i\omega x} }{\cosh{(\omega/2)}\sinh{\left((\pi/\phi-1)\omega/2\right)}}\nonumber
\end{eqnarray}
with $\varepsilon_j$ the energies of the two chirons. The full physical spectrum is thus built up by adding holes and strings to the ground state distribution. The interpretation of the strings is commented on below.
We now turn to discuss the relation between our approach and the bootstrap approach, where the spectrum of the Hamiltonian and the various S-matrices are postulated on the basis of integrability properties. It is known that the impurity model without spin is related via bosonisation and folding procedures to the massless limit of the boundary Sine-Gordon model. Its spectrum is taken to consist of Solitons, anti-Solitons and their bound states known as Breathers. The dressed S-matrices, derived via the bootstrap method of \cite{Zam}, are non-diagonal for generic interaction strength, and calculating thermodynamic quantities leads to an equation similar in structure to \eqref{PBC}. For special values of the interaction, however, the bulk scattering becomes diagonal and the computations simplify considerably, the right hand side becoming a mere phase. The inclusion of spin in this method is more complicated and is only achieved in certain interaction regimes \cite{Qwire}.
In contrast, the present method constitutes a bottom-up approach. We have diagonalised the actual quantum Hamiltonian with spin for all values of the interaction; our restriction to $\phi=\pi/\nu$ is purely for the convenience of its simplified string structure. It is in this parameter regime that the TBA and free energy in both approaches coincide, allowing us to identify the first $\nu-2$ string distributions with Breathers and the last two with symmetric and anti-symmetric combinations of a Soliton and anti-Soliton.
\section{Conclusions}
In this paper we have solved exactly two related Hamiltonians, a spin isotropic Luttinger liquid coupled to an impurity or a tunnel junction with arbitrary boundary conditions. This was achieved via a new type of coordinate Bethe ansatz that incorporates the reflecting and transmitting properties of the impurity in conjunction with the Off Diagonal Bethe Ansatz. We found that determining the spectrum is equivalent to an analogous problem for an open XXZ chain with one boundary corresponding to the impurity and the other the boundary condition. The thermodynamics was then studied and it was shown that a scale is naturally generated by both models such that the impurity strength and tunnelling parameter run oppositely confirming the duality of the models. The impurity free energy for the simplest interaction regime was calculated and was seen to coincide with that obtained in \cite{FSW} for the case without spin. The diagonalisation of the
model allows us to view the system as a gas of excitations in the chiral degrees of freedom, chirons, which scatter with a pure phase off the impurity.
The methods presented herein are, we believe, quite general and provide a template for solving other impurity models with an interacting bulk. Indeed the coordinate Bethe ansatz is readily applied to the model with spin anisotropic interaction and with a Kondo impurity. Moreover, the formulation naturally allows arbitrary boundary conditions to be imposed, opening the way to studying the effects of impurities on mesoscopic rings with arbitrary flux \cite{Meso}.
\acknowledgements{This research was supported by NSF grant DMR 1410583. We are grateful to Sung-Po Chao, Yashar Komijani and Giuseppe Mussardo for useful discussions.}
\section{Introduction}
In this paper we further explore the relation between string theory
and certain 4-dimensional structures which do not appear as a result
of the usual compactifications or model-building in string theory.
These structures are rather involved in the mathematics of string
theory, but they are able to encode (in 4 dimensions) some dynamics
of brane configurations and the geometry of certain string backgrounds.
At the same time, these structures are of fundamental importance for
4-dimensional physics.
In our previous paper \cite{AsselmeyerKrol2011} we observed a correlation
between D- as well as NS-brane configurations in some backgrounds and
the appearance of exotic smoothness on the topological $\mathbb{R}^{4}$.
It is known that the $\mathbb{R}^{4}$ with its standard smoothness
structure is part of the string background. A variation of the brane
configurations induces a change of the smoothness structure, i.e.
one has to consider different smoothings of the $\mathbb{R}^{4}$.
This is a unique feature of $\mathbb{R}^{n}$ holding only for $n=4$,
the only dimension where a variety (actually a continuum) of smoothings
of $\mathbb{R}^{n}$ exists. In fact, in any other dimension $n\neq4$
there exists a unique standard smooth $\mathbb{R}^{n}$ \cite{Asselmeyer2007}.
Physics corresponding to exotic smooth $\mathbb{R}^{4}$ has been
gradually exhibited since the nineties. In a recent series of papers,
new aspects important for quantum gravity are being worked out \cite{AsselmeyerKrol2009,AsselmeyerKrol2009a,AsselmeyerKrol2010,Krol2010}
directly.
The recognition of the role of exotic $\mathbb{R}^{4}$ in string
theory relies so far on the following steps:
\begin{itemize}
\item Standard smooth $\mathbb{R}^{4}$ appears as part of an exact string
background;
\item The process of changing the exotic smoothness on $\mathbb{R}^{4}$
is capable of encoding the change in the configuration of specific D
or NS branes \cite{AsselmeyerKrol2011}.
\item All exotic $\mathbb{R}^{4}$'s appearing in this setup are \emph{small
exotic $\mathbb{R}^{4}$'s}, i.e. each embeds smoothly in the standard
smooth $\mathbb{R}^{4}$ as an open subset.
\end{itemize}
Thus, string configurations can be expressed inherently in terms of
4-dimensional structures, i.e. exotic smooth $\mathbb{R}^{4}$'s are
complex enough to encode some string configurations. In particular,
all these phenomena disappear when one changes the smoothness structure
to the standard one.
In this paper we consider the quantum regime of D-branes in string
theory. In particular, the correct setup for quantum branes is an
open problem. However, a natural proposal is the consideration of
(non-commutative) $C^{\star}$-algebras replacing (classical,
submanifold-like) branes as well as the manifold spacetime. In the
context of $C^{\star}$-algebras there are many important counterparts
of differential-geometric results, including Poincar\'e duality,
characteristic classes and the Riemann-Roch theorem. In particular,
one obtains a generalized formula for the charges of quantum D-branes
\cite{Szabo2008b,Szabo2008a}.
The basic technical ingredient of the analysis of small exotic
$\mathbb{R}^{4}$'s is the relation between exotic (small)
$\mathbb{R}^{4}$'s and non-cobordant codimension-1 foliations of
$S^{3}$, as well as gropes and wild embeddings, as shown in
\cite{AsselmeyerKrol2009}. The foliation of the 3-sphere is classified
by the Godbillon-Vey class as an element of the cohomology group
$H^{3}(S^{3},\mathbb{R})$. By using $S^{1}$-gerbes it was possible
to interpret the integral elements of $H^{3}(S^{3},\mathbb{Z})$ as
characteristic classes of an $S^{1}$-gerbe over $S^{3}$ \cite{AsselmeyerKrol2009a}.
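Let us recall for convenience the definition involved (a standard fact, stated in the form we use): a transversely orientable codimension-1 foliation of $S^{3}$ is the kernel of a 1-form $\omega$ with $\omega\wedge d\omega=0$, hence $d\omega=\omega\wedge\eta$ for some 1-form $\eta$, and the Godbillon-Vey class is
\[
GV=\left[\eta\wedge d\eta\right]\in H^{3}(S^{3},\mathbb{R})\,.
\]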
In the next section we will explain the whole complex of ideas more
carefully. Then we present some facts and definitions of K-homology
and KK-theory used to introduce stable D-branes as K-theory classes
in terms of tachyon condensation. These K-theory classes can be naturally
described by use of K-string theory (e.g. \cite{AsakawaSugimotoTerasima2002}).
Furthermore there is a canonical interpretation for spectral triples
including tachyon fields. This correspondence is further developed
into the realm of noncommutative $C^{\star}$-algebras, following
e.g. \cite{Szabo2008a,Szabo2008b}, in section \ref{sub:Branes-on-separable}.
Now a natural interpretation of quantum stable D-branes is given by
branes on the $C^{\star}$-algebra. In fact a categorical description
is necessary for an understanding of quantum D-branes: objects are
quantum D-branes and the morphisms in the category are KK-theory classes.
Then in section \ref{sec:Exotic--and-branes} we explore the notion
of stable D-branes in the convolution non-commutative algebra of the
foliations representing exotic $\mathbb{R}^{4}$'s. In section \ref{sub:Net-of-exotic}
we establish the (partial) correspondence between stable D-branes
as above and the net of exotic smooth $\mathbb{R}^{4}$'s embedded
in some exotic $\mathbb{R}^{4}$. A discussion of the results closes
the paper.
\section{Exotic $\mathbb{R}^{4}$ and codimension-one foliations of the 3-sphere}
The main line of the topological argumentation can be briefly described
as follows:
\begin{enumerate}
\item In Bizaca's exotic $\mathbb{R}^{4}$ one starts with the neighborhood
$N(A)$ of the Akbulut cork $A$ in the K3 surface $M$. The exotic
$\mathbb{R}^{4}$ is the interior of $N(A)$.
\item This neighborhood $N(A)$ decomposes into $A$ and a Casson handle
representing the non-trivial involution of the cork.
\item From the Casson handle we construct a grope containing Alexander's
horned sphere.
\item Akbulut's construction gives a non-trivial involution, i.e. the double
of that construction is the identity map.
\item From the grope we get a polygon in the hyperbolic space $\mathbb{H}^{2}$.
\item This polygon defines a codimension-1 foliation of the 3-sphere inside
of the exotic $\mathbb{R}^{4}$ with a wildly embedded 2-sphere,
Alexander's horned sphere \cite{Alex:24}.
\item Finally we get a relation between codimension-1 foliations of the
3-sphere and exotic $\mathbb{R}^{4}$.
\end{enumerate}
Now we will explain the details in this construction (see also \cite{AsselmeyerKrol2009}).
An exotic $\mathbb{R}^{4}$ is a topological space with $\mathbb{R}^{4}-$topology
but with a different (i.e. non-diffeomorphic) smoothness structure
than the standard $\mathbb{R}_{std}^{4}$ getting its differential
structure from the product $\mathbb{R}\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}$.
The exotic $\mathbb{R}^{4}$ is the only Euclidean space $\mathbb{R}^{n}$
with an exotic smoothness structure. The exotic $\mathbb{R}^{4}$
can be constructed in two ways: by the failure to arbitrarily split
a smooth 4-manifold into pieces (large exotic $\mathbb{R}^{4}$) and
by the failure of the so-called smooth h-cobordism theorem (small
exotic $\mathbb{R}^{4}$). Here we will use the second method.
Consider the following situation: one has two topologically equivalent
(i.e. homeomorphic), simply-connected, smooth 4-manifolds $M,M'$,
which are not diffeomorphic. There are two ways to compare them. First
one calculates differential-topological invariants like Donaldson
polynomials \cite{DonKro:90} or Seiberg-Witten invariants \cite{Akb:96}.
But there is another possibility: It is known that one can change
a manifold $M$ to $M'$ by using a series of operations called surgeries.
This procedure can be visualized by a 5-manifold $W$, the cobordism.
The cobordism $W$ is a 5-manifold having the boundary $\partial W=M\sqcup M'$.
If the embedding of both manifolds $M,M'$ into $W$ induces homotopy-equivalences
then $W$ is called an h-cobordism. Furthermore we assume that both
manifolds $M,M'$ are compact, closed (no boundary) and simply-connected.
As Freedman \cite{Fre:82} showed, an h-cobordism implies a homeomorphism,
i.e. h-cobordant and homeomorphic are equivalent relations in that
case. Furthermore, for that case a structure theorem for such
h-cobordisms was proved in \cite{CuFrHsSt:97}:\\
\emph{Let $W$ be a h-cobordism between $M,M'$. Then there are
contractible submanifolds $A\subset M,A'\subset M'$ together with
a sub-cobordism $V\subset W$ with $\partial V=A\sqcup A'$, so that
the h-cobordism $W\setminus V$ induces a diffeomorphism between $M\setminus A$
and $M'\setminus A'$.} \\
Thus, the smoothness of $M$ is completely determined (see also
\cite{Akbulut08,Akbulut09}) by the contractible submanifold $A$
and its embedding $A\hookrightarrow M$ determined by a map $\tau:\partial A\to\partial A$
with $\tau\circ\tau=id_{\partial A}$ and $\tau\not=\pm id_{\partial A}$ ($\tau$
is an involution). One calls $A$ the \emph{Akbulut cork}. According
to Freedman \cite{Fre:82}, the boundary of every contractible 4-manifold
is a homology 3-sphere. This theorem was used to construct an exotic
$\mathbb{R}^{4}$. Then one considers a tubular neighborhood of the
sub-cobordism $V$ between $A$ and $A'$. The interior $int(V)$
(as open manifold) of $V$ is homeomorphic to $\mathbb{R}^{4}$. If
(and only if) $M$ and $M'$ are homeomorphic, but non-diffeomorphic
4-manifolds then $int(V)\cap M$ is an exotic $\mathbb{R}^{4}$. As
shown by Bizaca and Gompf \cite{Biz:94a,BizGom:96} one can use $int(V)$
to construct an explicit handle decomposition of the exotic $\mathbb{R}^{4}$.
We refer for the details of the construction to the papers or to the
book \cite{GomSti:1999}. The idea is simply to use the cork $A$
and add some Casson handle $CH$ to it. The interior of this construction
is an exotic $\mathbb{R}^{4}$. Therefore we have to consider the
Casson handle and its construction in more detail. Briefly, a Casson
handle $CH$ is the result of attempts to embed a disk $D^{2}$ into
a 4-manifold. In most cases this attempt fails and Casson \cite{Cas:73}
looked for a substitute, which is now called a Casson handle. Freedman
\cite{Fre:82} showed that every Casson handle $CH$ is homeomorphic
to the open 2-handle $D^{2}\times\mathbb{R}^{2}$ but in nearly all
cases it is not diffeomorphic to the standard handle \cite{Gom:84,Gom:89}.
The Casson handle is built by iteration, starting from an immersed
disk in some 4-manifold $M$, i.e. a map $D^{2}\to M$ with injective
differential. Every immersion $D^{2}\to M$ is an embedding except
on a countable set of points, the double points. One can kill one
double point by immersing another disk into that point. These disks
form the first stage of the Casson handle. By iteration one can produce
the other stages. Finally consider not the immersed disk but rather
a tubular neighborhood $D^{2}\times D^{2}$ of the immersed disk,
called a kinky handle, including each stage. The union of all neighborhoods
of all stages is the Casson handle $CH$. So, there are two input
data involved with the construction of a $CH$: the number of double
points in each stage and their orientation $\pm$. Thus we can visualize
the Casson handle $CH$ by a tree: the root is the immersion $D^{2}\to M$
with $k$ double points, the first stage forms the next level of the
tree with $k$ vertices connected with the root by edges etc. The
edges are evaluated using the orientation $\pm$. Every Casson handle
can be represented by such an infinite tree.
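The combinatorial data just described — a rooted tree whose edges carry the orientations $\pm$ of the double points — can be modelled directly; the following is a small sketch of a finite truncation (all names and the depth parameter are ours, purely illustrative):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Stage:
    """One stage of a Casson handle: children are the disks immersed to kill
    this stage's double points; each edge carries the orientation +1 or -1."""
    children: List[Tuple[int, "Stage"]] = field(default_factory=list)

def simplest_handle(depth, kinks=1, sign=+1):
    """Finite truncation of the simplest kind of Casson handle:
    `kinks` double points, all of orientation `sign`, at every stage."""
    node = Stage()
    if depth > 0:
        node.children = [(sign, simplest_handle(depth - 1, kinks, sign))
                         for _ in range(kinks)]
    return node

def count_stages(tree):
    return 1 + sum(count_stages(child) for _, child in tree.children)

# a depth-3 truncation with 2 double points per stage has 1 + 2 + 4 + 8 = 15 nodes
assert count_stages(simplest_handle(3, kinks=2)) == 15
```

The actual Casson handle corresponds to the infinite tree, i.e. the limit of such truncations.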
The main idea is the construction of a grope, an infinite union of
surfaces with non-vanishing genus, from the Casson handle. But the
grope can be represented by a sequence of polygons in the two-dimensional
hyperbolic space $\mathbb{H}^{2}$. This sequence of polygons is replaced
by one polygon with the same area. From this polygon we can construct
a codimension-one foliation on the 3-sphere as done by Thurston \cite{Thu:72}.
This 3-sphere is part of the boundary $\partial A$ of the Akbulut
cork $A$. Furthermore one can show that the codimension-one foliation
of the 3-sphere induces a codimension-one foliation of $\partial A$
so that the area of the corresponding polygons agree.
Thus we are able to obtain a relation between an exotic $\mathbb{R}^{4}$
(of Bizaca as constructed from the failure of the smooth h-cobordism
theorem) and codimension-one foliation of the $S^{3}$. Two non-diffeomorphic
exotic $\mathbb{R}^{4}$'s imply non-cobordant codimension-one foliations
of the 3-sphere, described by the Godbillon-Vey class in $H^{3}(S^{3},\mathbb{R})$
(proportional to the area of the polygon). This relation is very strict,
i.e. if we change the Casson handle then we must change the polygon.
But that changes the foliation and vice versa. Finally we obtained
the result:\\
\emph{The exotic $\mathbb{R}^{4}$ (of Bizaca) is determined by
the codimension-1 foliations with non-vanishing Godbillon-Vey class
in $H^{3}(S^{3},\mathbb{R})$ of a 3-sphere seen as submanifold
$S^{3}\subset\mathbb{R}^{4}$. We say: the exoticness is localized
at a 3-sphere inside the small exotic $\mathbb{R}^{4}$.}
\section{Towards quantum D-branes via K-theory}
In this and subsequent sections we want to show that D-branes of string
theory are related to exotic smooth $\mathbb{R}^{4}$'s also beyond
the semi-classical limit, i.e. in the quantum regime of the theory
where one should deal rather with \emph{quantum branes}. What are
\emph{quantum branes,} is still in general an open and hard problem.
One appealing proposition, relevant for this paper, is to consider
branes in noncommutative spacetimes rather than on commutative manifolds
or orbifolds. This leads to abstract D-branes in general noncommutative
separable $C^{\star}$ algebras as counterparts for quantum D-branes.
The way from D-branes as submanifolds or K-homology classes on spaces
to K-theory cycles, spectral triples and $C^{\star}$ algebras is
presented in the following subsections.
\subsection{D-branes on spaces: K-homology and KK-theory \label{sub:D-branes-on-spaces:} }
The description of systems of stable Dp-branes of IIA,B string theories
via K-theory of topological spaces can be extended toward the branes
in noncommutative $C^{\star}$ algebras. A direct string representation
of the algebraic and K-theoretic ideas can be best explained in K-matrix
string theory where tachyons are elements of the spectral triple representing
the noncommutative geometry of the world-volumes for the configurations
of branes \cite{AsakawaSugimotoTerasima2002}.
First let us consider the case of a vanishing $H$-field. The charges
of D-branes are classified by suitable K-theory groups, i.e. $K^{0}(X)$
in IIB and $K^{1}(X)$ in IIA string theories, where $X$ is the background
manifold. On the other hand, world-volumes of Dp-branes correspond
to the cycles of K-homology groups, $K_{1}(X)$, $K_{0}(X)$, which
are dual to the K-theory groups. Let us see how K-cycles correspond
to the configurations of D-branes.
A K-cycle on $X$ is a triple $(M,E,\phi)$ where $M$ is a compact
${\rm {Spin}^{c}}$ manifold without boundary, $E$ is a complex vector
bundle on $M$ and $\phi:M\to X$ is a continuous map. The topological
K-homology $K_{\star}(X)$ is the set of equivalence classes of the
triples $(M,E,\phi)$ respecting the following conditions:
\begin{itemize}
\item[(i)] $(M_{1},E_{1},\phi_{1})\sim(M_{2},E_{2},\phi_{2})$ when
there exists a triple (bordism of the triples) $(M,E,\phi)$ such
that $(\partial M,E_{|\partial M},\phi_{|\partial M})$ is isomorphic
to the disjoint union $(M_{1},E_{1},\phi_{1})\cup(-M_{2},E_{2},\phi_{2})$
where $-M_{2}$ is the reversed ${\rm {Spin}^{c}}$ structure of $M_{2}$
and $M$ is a compact ${\rm {Spin}^{c}}$ manifold with boundary.
\item[(ii)] $(M,E_{1}\oplus E_{2},\phi)\sim(M,E_{1},\phi)\cup(M,E_{2},\phi)$,
\item[(iii)] Vector bundle modification: $(M,E,\phi)\sim(\widehat{M},\widehat{H}\otimes\rho^{\star}(E),\phi\circ\rho)$.
Here $\widehat{M}$ is an even-dimensional sphere bundle over $M$, $\rho:\widehat{M}\to M$
the projection, and $\widehat{H}$ is a vector bundle on $\widehat{M}$ which
gives the generator of $K(S_{q}^{2p})=\mathbb{Z}$ on every fibre $S_{q}^{2p}$
over each $q\in M$ \cite{Szabo2002a}.
\end{itemize}
The topological K-homology defined above has an abelian group structure
where the sum is the disjoint union of cycles. The triples $(M,E,\phi)$
with $M$ of even dimension determine $K_{0}(X)$. Similarly, $K_{1}(X)$
corresponds to odd dimensions of $M$. Thus $K_{\star}(X)$ decomposes
into a direct sum of abelian groups:
\[
K_{\star}(X)=K_{0}(X)\oplus K_{1}(X)\,.\]
K-homology is dual to K-theory and the decomposition of $K_{*}(X)$
is a direct consequence of Bott periodicity (see \cite{Ati:67}).
Now one can interpret the cycles $(M,E,\phi)$ as D-branes \cite{HarveyMoore2000}:
$M$ is the world-volume of the brane, $E$ the Chan-Paton bundle
on it and $\phi$ gives the embedding of the brane into the (background)
spacetime $X$. Moreover, $M$ has to be a ${\rm Spin}^{c}$ manifold
\cite{FreedWitten1999}, and $K_{0}(X)$ classifies stable D-brane
configurations in IIB, and $K_{1}(X)$ in IIA, string theory. The
equivalences of K-cycles as formulated in the conditions (i)-(iii)
correspond to natural relations for D-branes \cite{AsakawaSugimotoTerasima2002,Szabo2008b}.
The topological K-homology theory above can be obtained analytically
(analytic K-homology theory). This theory is a special, commutative,
case of the following construction on general $C^{\star}$ algebras
\cite{AsakawaSugimotoTerasima2002}: A Fredholm module over a $C^{\star}$
algebra ${\cal A}$ is a triple $({\cal H},\phi,F)$ such that
\begin{enumerate}
\item ${\cal H}$ is a separable Hilbert space,
\item $\phi$ is a $^{\star}$ homomorphism from the $C^{\star}$ algebra
${\cal A}$ to the $C^{\star}$ algebra ${\rm {\bf B}}({\cal H})$ of bounded
linear operators on ${\cal H}$,
\item $F$ is a self-adjoint operator in ${\rm {\bf B}}({\cal H})$ satisfying
\end{enumerate}
\[
F^{2}-id\in{\rm K}({\cal H})\,,\quad[F,\phi(a)]\in{\rm K}({\cal H})\:{\rm for}\:{\rm every}\: a\in{\cal A}\]
where ${\rm K}({\cal H})$ denotes the compact operators on ${\cal H}$. Now
let us see how a Fredholm module $({\cal H},\phi,F)$ describes a certain
configuration of IIA K-matrix string theory related to D-branes. To
this end we consider the operators $\Phi^{0},...,\Phi^{9}$ of the K-matrix theory
(infinite matrices) acting on the Hilbert space ${\cal H}$ as generating
the $C^{\star}$ algebra ${\cal A}_{M}$ \cite{AsakawaSugimotoTerasima2002}.
In the case of commuting $\Phi^{\mu}$, hence commutative ${\cal A}_{M}$,
we have the following correspondence (explaining the index $M$ in
${\cal A}_{M}$):
\begin{itemize}
\item Every commutative $C^{\star}$ algebra is isomorphic to the algebra
of continuous complex functions vanishing at infinity $C(M)$ on some
locally compact Hausdorff space $M$ (Gelfand-Naimark theorem and
Gelfand representation). A point $x\in M$ is determined by a character
of ${\cal A}_{M}$ which is a $^{\star}$ homomorphism $\phi_{x}:{\cal A}_{M}\to\mathbb{C}$.
\item $M$ serves as a common spectrum for $\Phi^{0},...,\Phi^{9}$ and
the choice of a point in $M$ represented as the eigenvalue of $\Phi^{\mu}$
fixes the position of the non-BPS instanton along $x^{\mu}$.
\item In this way $M$ is covered by the positions of infinitely many non-BPS
instantons and serves as the world-volume of some higher-dimensional
D-brane \cite{AsakawaSugimotoTerasima2002}.
\end{itemize}
Now let us explain the role of the tachyon $T$. $T$ is a self-adjoint
unbounded operator acting on the Chan-Paton Hilbert space ${\cal H}$.
${\cal A}_{M}$ is a $C^{\star}$ unital algebra generated by $\Phi^{0},...,\Phi^{9}$
which can be now a noncommutative algebra. The corresponding geometry
of the world-volume $M$ would be a noncommutative geometry (in the
sense of Connes) and given by some spectral triple. The spectral triple
is in fact $({\cal H},{\cal A},T)$ which means that the following
conditions are fulfilled \cite{AsakawaSugimotoTerasima2002}:
\[
(T-\lambda)^{-1}\in{\rm {\bf K}}({\cal H})\:\mbox{for every}\:\lambda\in\mathbb{C}\setminus\mathbb{R},\;[a,T]\in{\bf B}({\cal H})\:\mbox{for every}\: a\in{\cal A}_{M}\]
These conditions are fulfilled in our case of K-matrix string theory
for a tachyon field $T$, Chan-Paton Hilbert space ${\cal H}$ and
$C^{\star}$ algebra ${\cal A}_{M}$ generated by $\Phi^{\mu}$.
Thus the natural extension of the spacetime manifold as well as D-brane
world-volumes toward a noncommutative algebra and noncommutative world-volumes
of branes (represented by spectral triples) can be described by (see
e.g. \cite{AsakawaSugimotoTerasima2002}):
\begin{enumerate}
\item Fix the (spacetime) $C^{\star}$ algebra ${\cal A}$;
\item A $^{\star}$ homomorphism $\phi:{\cal A}\to{\bf B}({\cal H})$ generates
the embedding of the D-brane world-volume $M$ and its noncommutative
algebra ${\cal A}_{M}$ as ${\cal A}_{M}:=\phi({\cal A})$;
\item D-branes embedded in a spacetime ${\cal A}$ are represented by the
spectral triple $({\cal H},{\cal A}_{M},T)$;
\item Equivalently, a D-brane in ${\cal A}$ is given by an unbounded Fredholm
module $({\cal H},\phi,T)$.
\end{enumerate}
Thus the classification of stable D-branes in ${\cal A}$ is given
by the classification of Fredholm modules $({\cal H},\phi,T)$ using
analytical K-homology. In the particular case of commutative $C^{\star}$
algebras based on the isomorphism of the topological and analytical
K-homology groups, we have the classification of stable D-branes in
terms of K-cycles, as was already discussed. In terms of K-matrix
string theory we can say that stable configurations of D-instantons
determine the stable higher dimensional D-branes which are K-homologically
classified as above \cite{AsakawaSugimotoTerasima2002}.
Now let us turn to a more general situation than K-string theory of
D-instantons, i.e. backgrounds given by non-BPS Dp-branes or non-BPS
Dp-${\rm \overline{{\rm Dp}}}$-branes in type II string theory. Then
the stable configurations of Dq-branes are classified by generalized
K-theory namely Kasparov KK-theory \cite{AsakawaSugimotoTerasima2002}.
As in the case of D-branes in a $C^{\star}$ algebra ${\cal A}$ corresponding
to Fredholm modules, one defines an odd Kasparov module $({\cal H}_{{\cal B}},\phi,T)$,
where ${\cal H}_{{\cal B}}$ is a countably generated Hilbert module over the
$C^{\star}$ algebra ${\cal B}$, by
\begin{itemize}
\item a $\star$-homomorphism from ${\cal A}$ to the $C^{\star}$ algebra
of bounded linear operators on ${\cal H}_{{\cal B}}$, $\phi:{\cal A}\to{\rm {\bf B}}({\cal H}_{{\cal B}})$;
\item a self-adjoint operator $T$ from ${\rm {\bf B}}({\cal H}_{{\cal B}})$
satisfying:
\end{itemize}
\[
T^{2}-1\in{\rm {\bf K}}({\cal H}_{{\cal B}})\:\mbox{and}\:[T\,,\phi(a)]\in{\rm {\bf K}}({\cal H}_{{\cal B}})\:\mbox{for every}\, a\in{\cal A}\,,\]
where ${\rm {\bf K}}({\cal H}_{{\cal B}})$ is ${\cal B}\otimes{\bf {\rm K}}$.
$({\cal H}_{{\cal B}},\phi,T)$ is in fact a family of Fredholm modules
on the algebra ${\cal B}$. When ${\cal B}$ is $\mathbb{C}$ we have
an ordinary Fredholm module as before. The homotopy equivalence classes
of odd Kasparov modules $({\cal H}_{{\cal B}},\phi,T)$ determine
elements of $KK^{1}({\cal A},{\cal B})$. Also one defines even Kasparov
classes $KK^{0}({\cal A},{\cal B})=KK({\cal A},{\cal B})$ as homotopy
equivalence classes of the triples $({\cal H}_{{\cal B}}^{(0)}\oplus H_{{\cal B}}^{(1)},\phi^{(0)}\oplus\phi^{(1)},\left(\begin{array}{cc}
0 & T^{\star}\\
T & 0\end{array}\right))$. A natural $\mathbb{Z}_{2}$ grading appears due to the involution
${\cal H}_{{\cal B}}^{(0)}\oplus H_{{\cal B}}^{(1)}\to{\cal H}_{{\cal B}}^{(0)}\oplus-H_{{\cal B}}^{(1)}$.
Now one obtains the classification pattern for branes in spaces. Let
us introduce non-BPS unstable Dp-branes wrapping the $(p+1)$-dimensional
world-volume $B$. Then stable Dq-brane configurations embedded in
a space $A$ transverse to $B$ correspond to (are classified by)
the classes of $KK^{1}(A,B)$ (we identify the commutative algebras
$C(A)$, $C(B)$ with $A$, $B$ correspondingly). Similarly, given a
non-BPS unstable Dp-${\rm \overline{Dp}}$-brane system, the stable
Dq-branes embedded in $A$ transverse to $B$ ($(p+1)$-dimensional
world-volumes) are classified by elements of $KK^{0}(A,B)$. The case
of even $KK^{0}(A,B)$ contains the $\mathbb{Z}_{2}$ grading corresponding
to the Chan-Paton indices of the Dp- and ${\rm \overline{Dp}}$-branes.
\subsection{D-branes on separable $C^{\star}$ algebras and KK-theory \label{sub:Branes-on-separable}}
Thus the classification of D-branes in a spacetime manifold is given
by KK-theory, as sketched in the previous subsection. This can be extended
to noncommutative spacetimes and noncommutative D-branes, both represented
by separable $C^{\star}$ algebras, as the appearance of KK-theoretic
tools already suggests. First let us reformulate the ``classic''
case of spaces in a way that allows this extension \cite{Szabo2008c}.
In case of type II superstring theory, let $X$ be a compact part
of a spacetime manifold, i.e. $X$ is a compact ${\rm spin}^{c}$
manifold again with no background $H$-flux. As we saw, a flat D-brane
in $X$ is a Baum-Douglas K-cycle $(W,E,f)$. Here $f:W\hookrightarrow X$
is the embedding of the closed ${\rm spin}^{c}$ submanifold $W$
of $X$ and $E\to W$ is a complex vector bundle with connection (Chan-Paton
gauge bundle). It follows from the Baum-Douglas construction that
$E$ determines the stable class in the K-theory group $K^{0}(W)$
and all K-cycles form an additive category under disjoint union. The
set of all K-cycle classes, up to a kind of gauge equivalence
as in the Baum-Douglas construction, gives the K-homology of $X$. This
K-homology is also the set of stable homotopy classes of Fredholm
modules taken over the commutative $C^{\star}$ algebra $C(X)$ of
continuous functions on $X$. This defines the correspondence (isomorphism)
between a K-cycle $(W,E,f)$ and the unbounded Fredholm module $({\cal H},\rho,D_{E}^{W})$.
Here ${\cal H}$ is the separable Hilbert space of square integrable
spinors on $W$ taking values in the bundle $E$, i.e. $L^{2}(W,S\otimes E)$,
$\rho:C(X)\to{\rm {\bf B}}({\cal H})$ is the representation of the
$C^{\star}$ algebra $C(X)$ in ${\cal H}$ such that $C(X)\ni g\to a_{g\circ f}\in{\rm {\bf B}}({\cal H})$
where $a_{g\circ f}$ is the operator of point-wise multiplication
of functions in $L^{2}(W,S\otimes E)$ by the function on $W$, $g\circ f$,
and $f:W\hookrightarrow X$. $D_{E}^{W}$ is the Dirac operator twisted
by $E$ corresponding to the ${\rm spin}^{c}$ structure on $W$.
The dual K-homology class of a D-brane, $[W,E,f]$, uniquely determines
the K-theory class of the Chan-Paton bundle $E$, i.e. $[E]\in K^{0}(W)$.
In that way D-branes determine K-homology classes on $X$
which are dual to K-theory classes from $K^{r}(X)$ where $r$ is
the transversal dimension for the brane world-volume $W$. This K-theory
class is derived from the image of $[E]\in K^{0}(W)$ by the Gysin
K-theoretic map $f_{!}$. As we discussed already, the odd and even
classes of K-homology $K_{\star}(X)$ correspond to the parity of
the dimension of $W$. The K-cycle $(W,E,f)$ corresponds to a Dp-brane
and its gauge equivalence is given by Baum-Douglas construction using
the conditions (i)-(iii) in section \ref{sub:D-branes-on-spaces:}.
Thus we have \cite{Szabo2008b}:
Fact 1: \emph{There is a one-to-one correspondence between flat D-branes
in $X$, modulo Baum-Douglas equivalence, and stable homotopy classes
of Fredholm modules over the algebra $C(X)$.}
In the presence of a non-zero $B$-field on $X$, which is a $U(1)$-gerbe
with connection represented by the characteristic class in $H^{3}(X,\mathbb{Z})$
\cite{Szabo2008b,AsselmeyerKrol2009}, one can define twisted D-brane
on $X$ as \cite{Szabo2008b}:
\begin{definition}
A twisted D-brane in a B-field $(X,H)$ is a triple $(W,E,\phi)$,
where $\phi:W\hookrightarrow X$ is the embedding of a closed, oriented
submanifold $W$ with $\phi^{\star}H={\rm W}_{3}(W)$, and $E$ is the
Chan-Paton bundle on $W$, i.e. $E\in K^{0}(W)$; here ${\rm W}_{3}(W)\in H^{3}(W,\mathbb{Z})$
is the third integral Stiefel-Whitney class of the normal bundle of $W$.
\end{definition}
The condition $\phi^{\star}H={\rm W}_{3}(W)$ in the definition is
necessary for the cancellation of the Freed-Witten anomaly. Here
$H\in H^{3}(X,\mathbb{Z})$ represents the NS-NS $H$-flux. Since
${\rm W}_{3}(W)$ is the obstruction to the existence of a ${\rm spin}^{c}$
structure on $W$ \cite{HiHo:58},
in the case ${\rm W}_{3}(W)=0$ one recovers flat D-branes in $X$.
Thus equivalence classes of twisted D-branes on $X$ are represented
by twisted topological K-homology $K_{\star}(X,H)$ which is dual
to the twisted K-theory $K^{\star}(X,H)$.
Now, in the case of $S^{3}$ and integral classes $H\in H^{3}(S^{3},\mathbb{Z})$,
one has exotic $\mathbb{R}^{4}$'s which are determined by the
class $H$, when $S^{3}$ is a part of the boundary of the Akbulut
cork \cite{AsselmeyerKrol2010}. This is the same class $H$ which
twists the K-theory leading to $K^{\star}(S^{3},H)$. We can also
represent the $U(1)$ gerbes with connection on $S^{3}$, by the bundles
${\cal E}_{H}$ of algebras over $S^{3}$, such that the sections
of the bundle ${\cal E}_{H}$ define the noncommutative, twisted algebra
\emph{$C_{0}(X,{\cal E}_{H})$. }The Dixmier-Douady class of ${\cal E}_{H}$,
$\delta_{H}({\cal E}_{H})$, is again $H\in H^{3}(S^{3},\mathbb{Z})$
\cite{AsselmeyerKrol2009a,AtiyahSegal2004,Szabo2002a}.
The important relation is the following (\cite{Szabo2008b}, Proposition
1.15):
Fact 2: \emph{There is a one-to-one correspondence between twisted
D-branes in $(X,H)$ and stable homotopy classes of Fredholm modules
over the algebra $C_{0}(X,{\cal E}_{H})$.}
Since the algebra \emph{$C_{0}(X,{\cal E}_{H})$} determines its stable
homotopy classes of the Fredholm modules on it, then in the case $X=S^{3}$
one has the correspondence:
A. \emph{Let the exotic smooth $\mathbb{R}^{4}$'s be determined
by the integral third classes $H\in H^{3}(S^{3},\mathbb{Z})$. Then,
these exotic smooth $\mathbb{R}^{4}$'s correspond one-to-one to the
sets of twisted D-branes in $(S^{3},H)$, provided $S^{3}$ is a part
of the boundary of the Akbulut cork.}
Thus, given the complete collection of twisted D-branes in $(S^{3},H)$,
which take values in $K_{\star}(S^{3},H)$, one can determine, in
principle, the corresponding exotic $\mathbb{R}^{4}$. This exotic
$\mathbb{R}_{H}^{4}$ corresponds to the class $[H]\in H^{3}(S^{3})$
and the class $H$ twists the K-homology as dual to the twisted K-theory
$K^{\star}(S^{3},H)$ \cite{AsselmeyerKrol2009a,AsselmeyerKrol2010,Szabo2002a}.
In the following we try to convince the reader that the correspondence
of D-branes to 4-exotics can be extended to more general cases with
a closer relation.
Remembering that $S^{3}\subset\mathbb{R}^{4}$ is a part of the Akbulut
cork of the exotic structure, our previous observation can be restated
as:
B. \emph{The change of the exotic smoothness of $\mathbb{R}^{4}$,
$\mathbb{R}_{H_{1}}^{4}\to\mathbb{R}_{H_{2}}^{4}$, $H_{1}$, $H_{2}\in H^{3}(S^{3},\mathbb{Z})$,
$H_{1}\neq H_{2}$, corresponds to the change of the curved backgrounds
$(S^{3},H_{1})\to(S^{3},H_{2})$ hence the sets of stable D-branes.}
This motivates the formulation:
C. \emph{Some small exotic smoothness appearing on $\mathbb{R}^{4}$,
$\mathbb{R}_{H_{1}}^{4}$, can destabilize (or stabilize) D-branes
in $(S^{3},H_{2})$, where $S^{3}\subset\mathbb{R}^{4}$ lies at the
boundary of the Akbulut cork of $\mathbb{R}_{H_{1}}^{4}$. We say
that D-brane configurations in $(S^{3},H_{2})$ are }4-exotic-sensitive.
Next we extend the formalism of D-branes in spaces to \emph{quantum}
D-branes in general $C^{\star}$ algebras, including the correspondence
described above. Impressive counterparts of a variety of topological,
geometrical and analytical results, like Poincar\'e duality, characteristic
classes and the Riemann-Roch theorem, have recently been developed
for $C^{\star}$ algebras. Moreover, a generalized formula for the
charges of quantum D-branes in noncommutative separable
$C^{\star}$ algebras was worked out \cite{Szabo2008a,Szabo2008b}.
Thus one obtains a suitable framework for considering the quantum
regime of D-branes. Therefore we will try to find a relation to 4-exotics
also in this quantum regime of D-branes.
Following \cite{AsakawaSugimotoTerasima2002,Szabo2008a,Szabo2008b,Szabo2008c}
one can choose an obvious substitute for the category of quantum D-branes:
the category of separable $C^{\star}$ algebras where the morphisms
are elements of some KK-theory group. This means that for a pair $({\cal A},{\cal B})$
of separable $C^{\star}$ algebras a morphism $h:{\cal A}\to{\cal B}$
is lifted to an element of the group $KK({\cal A},{\cal B})$. Thus
we can consider a generalized D-brane in a separable $C^{\star}$
algebra ${\cal A}$ as corresponding to the lift $h!$ of a morphism
$h:{\cal A}\to{\cal B}$, where ${\cal B}$ represents a quantum D-brane.
More precisely, following \cite{Szabo2008a}, let us consider a subcategory
${\cal C}$ of the category of separable $C^{\star}$ algebras and
their morphisms, which consists of strongly K-oriented morphisms.
There exists a contravariant functor $!:{\cal C}\to KK$
such that ${\cal C}\ni f:{\cal A}\to{\cal B}$ is mapped to $f!\in KK_{d}({\cal B},{\cal A})$;
here $KK$ is the category of separable $C^{\star}$ algebras with
KK classes as morphisms. Strongly K-oriented morphisms and the functor
$!$ are subject to the following conditions:
\begin{enumerate}
\item Identity morphism $id_{{\cal A}}:{\cal A}\to{\cal A}$ is strongly
K-oriented (SKKO) for every separable $C^{\star}$ algebra ${\cal A}$
and $(id_{{\cal A}})!=1_{{\cal A}}$. Also, the 0-morphism $0_{{\cal A}}:{\cal A}\to{\cal A}$
is SKKO and $(0_{{\cal A}})!=0\in KK(0,{\cal A})$.
\item If $f:{\cal A}\to{\cal B}$ is SKKO then $f^{\circ}:{\cal A}^{\circ}\to{\cal B}^{\circ}$
is also SKKO, and $(f!)^{\circ}=(f^{\circ})!$ where ${\cal A}^{\circ}$
is the opposite $C^{\star}$ algebra to ${\cal A}$, i.e. the algebra
having the same underlying vector space but the reversed product.
\item Any morphism $f:{\cal A}\to{\cal B}$ is SKKO, provided ${\cal A}$
and ${\cal B}$ are strong Poincar\'e dual (PD) algebras. Then $f!$
is determined as: \begin{equation}
f!=(-1)^{d_{{\cal A}}}\Delta_{{\cal A}}^{\vee}\otimes_{{\cal A}^{\circ}}\left[f^{\circ}\right]\otimes_{{\cal B}^{\circ}}\Delta_{{\cal B}}\label{eq:K-orientation}\end{equation}
here $[f]$ is the class of $f:{\cal A}\to{\cal B}$ in $KK({\cal A},{\cal B})$.
$\Delta_{{\cal A}}$ is the fundamental class in $KK_{d_{{\cal A}}}({\cal A}\otimes{\cal A}^{\circ},\mathbb{C})=K^{d_{{\cal A}}}({\cal A}\otimes{\cal A}^{\circ})$,
$\Delta_{{\cal A}}^{\vee}$ its dual class in $KK_{-d_{{\cal A}}}(\mathbb{C},{\cal A}\otimes{\cal A}^{\circ})=K_{-d_{{\cal A}}}({\cal A}\otimes{\cal A}^{\circ})$
which exists by strong PD \cite{Szabo2008a}.
\end{enumerate}
K-orientability was introduced in its original form by A. Connes
to define the analogue of the ${\rm spin}^{c}$ structure for noncommutative
$C^{\star}$ algebras (see also \cite{Connes1984} and the next section).
The formulations of K-orientability and of strong PD $C^{\star}$ algebras
are crucial ingredients of noncommutative versions of the Riemann-Roch
theorem, Poincar\'e-like dualities and the Gysin K-theory map, and they
allow one to formulate a very general formula for noncommutative D-brane
charges \cite{Szabo2008b,Szabo2008a,Szabo2008c}. Let us note that if both
${\cal A}$ and ${\cal B}$ are PD algebras then any morphism $f:{\cal A}\to{\cal B}$
is K-oriented and the K-orientation for $f$ is given in (\ref{eq:K-orientation}).
In the special case of the proper smooth embedding $f:W\to X$ of
codimension $d$ between the smooth compact manifolds $W,X$, we choose
the normal bundle $\tau$ over $W$ to be ${\rm spin}^{c}$, where
$\tau$ is given by $TX=\tau\oplus f_{*}(TW)$. When $X$ is also
${\rm spin}^{c}$ then the ${\rm spin}^{c}$ condition on $\tau$
for vanishing $H$-flux in type II string theory formulated on $X$
is the Freed-Witten anomaly cancellation condition \cite{Szabo2008a}.
In this case any D-brane in $X$, given by the triple $(W,E,f)$,
determines the KK-theory element $f!\in KK(C(W),C(X))$. The construction
of a K-orientation for $f:M\to X$ between smooth compact manifolds can
be extended to smooth proper maps which are not necessarily embeddings.
Thus the general condition for K-orientability gives the correct analogue
for stable D-branes in $C^{\star}$ algebras.
\begin{definition}\label{enu:Def: q-Branes}
\emph{A generalized stable quantum D-brane} on a separable $C^{\star}$
algebra ${\cal A}$, represented by a separable $C^{\star}$ algebra
${\cal B}$, is given by the strongly K-oriented homomorphism of $C^{\star}$
algebras, $h_{{\cal B}}:{\cal A}\to{\cal B}$. The K-orientation means
that there is the lift $(h_{{\cal B}})!\in KK({\cal B},{\cal A})$
where $!$ fulfills the functoriality condition as in (\ref{eq:K-orientation}).
\end{definition}
This approach to quantum D-branes is a natural extension of the string
formalism in which $C^{\star}$ algebras replace spaces and branes;
it is currently a conjectural framework. This framework exceeds both
the dynamical Seiberg-Witten limit of superstring theory (inducing
noncommutative brane world-volumes) and the geometrical understanding
of branes, and places itself rather in a deep quantum regime of the
theory \cite{Szabo2008c}. On the other hand, in such a formal quantum
limit of string theory one can observe the relation with 4-dimensional
exotic open smooth structures, which relies on the natural relation
of exotic $\mathbb{R}^{4}$ with $C^{\star}$ algebras of foliations.
\section{Exotic $\mathbb{R}^{4}$ and branes in $C^{\star}$ algebras\label{sec:Exotic--and-branes}}
\subsection{Exotic $\mathbb{R}^{4}$ and stable D-branes configurations on foliated
manifolds\label{sub:D-branes-on-foliated}}
Now we want to tackle the problem to describe stable states of D-branes
in a more general geometry than used for spaces, namely the geometry
of foliated manifolds. The interesting case for us is a codimension-1
foliation of the 3-sphere $S^{3}$ admitting a noncommutative geometry
as we will show now. In general, to every foliation $(V,F)$ one can
associate its noncommutative $C^{\star}$ algebra $C^{\star}(V,F)$,
on the other hand a foliation determines its holonomy groupoid $G$
and the topological classifying space $BG$. Both cases, topological
K-homology of $G$ and $C^{\star}$algebraic K-theory, are in fact
dual. Analogously to our previous discussion of branes as K-cycles
on $X$, let us start with K-homology of $G$ and define D-branes
as K-cycles in $G$:
A K-cycle on a foliated geometry $X=(V,F)$ is a triple $(M,E,\phi)$
where $M$ is a compact manifold without boundary, $E$ is a complex
vector bundle on $M$ and $\phi:M\to BG$ is a smooth K-oriented map.
Due to the K-orientability in the presence of canonical $G$-bundle
$\tau$ on $BG$, the condition of ${\rm Spin}^{c}$ structure on
$M$ is lifted to the ${\rm Spin}^{c}$ structure on $TM\oplus\phi^{\star}\tau$
\cite{Connes1984}.
The topological K-homology $K_{\star,\tau}(X)=K_{\star,\tau}(BG)$
of the foliation $(V,F)$ is the set of equivalence classes of the
above triples, where the equivalence respects the following conditions:
\begin{itemize}
\item[(i)] $(M_{1},E_{1},\phi_{1})\sim(M_{2},E_{2},\phi_{2})$ when
there is a triple (bordism of the triples) $(M,E,\phi)$ such that
$(\partial M,E_{|\partial M},\phi_{|\partial M})$ is isomorphic to
the disjoint union $(M_{1},E_{1},\phi_{1})\cup(-M_{2},E_{2},\phi_{2})$,
where $-M_{2}$ denotes $M_{2}$ with the reversed ${\rm Spin}^{c}$ structure
on $TM_{2}\oplus\phi_{2}^{\star}\tau$ and $M$ is a compact manifold with boundary.
\item[(ii)] $(M,E_{1}\oplus E_{2},\phi)\sim(M,E_{1},\phi)\cup(M,E_{2},\phi)$,
\item[(iii)] Vector bundle modification $(M,E,\phi)\sim(\widehat{M},\widehat{H}\otimes\rho^{\star}(E),\phi\circ\rho)$
similarly as in the case of manifolds.
\end{itemize}
As in the case of spaces (manifolds) and the corresponding K-homology
groups representing stable D-branes of type II superstring theory
(see section \ref{sub:D-branes-on-spaces:}), we generalize stable
D-branes as being represented by the above triples in the case of
the geometry of foliated manifolds.
\begin{theorem}
The class of generalized stable D-branes on the $C^{\star}$ algebra
$C^{\star}(S^{3},F)$ (of a codimension-1 foliation of $S^{3}$)
corresponding to the K-homology classes $K_{\star,\tau}(S^{3}/F)$
determines an invariant of exotic smooth $\mathbb{R}^{4}$.
\end{theorem}
The result follows from the fact that $K_{\star,\tau}(S^{3}/F)$ is
isomorphic to $K_{\star,\tau}(BG)$ \cite{Connes1984} and this determines
a class of stable D-branes in $(S^{3},F)$. The foliations $(S^{3},F)$
correspond to different smoothings on $\mathbb{R}^{4}$ \cite{AsselmeyerKrol2009}.
$\square$
\subsection{The net of exotic $\mathbb{R}^{4}$'s and quantum D-branes in $C^{\star}(S^{3},F)$\label{sub:Net-of-exotic}}
The extension of string theory and D-branes to general noncommutative
separable $C^{\star}$ algebras can be considered as an approach to
quantum D-branes where D-branes are also represented by noncommutative
separable $C^{\star}$ algebras. A category of D-branes in a quantum
regime, is the category of separable $C^{\star}$ algebras where the
morphisms are elements of KK-theory groups. For a pair $({\cal A},{\cal B})$
of separable $C^{\star}$ algebras the morphisms $h:{\cal A}\to{\cal B}$
belong to $KK({\cal A},{\cal B})$. Abstract quantum D-branes in a
separable $C^{\star}$ algebra ${\cal A}$ correspond to $\phi:{\cal A}\to{\cal B}$
where ${\cal B}$ is the algebra representing a quantum D-brane and
$\phi$ is a strongly K-oriented map. A general formula for RR charges
in the noncommutative setting was obtained for these branes in \cite{Szabo2008a,Szabo2008b}.
D-branes, as considered in the previous subsection, correspond to
the lifted KK-theory classes. That is, if the D-brane corresponds
to the triple $(M,E,f)$ and $f:M\hookrightarrow G=V/F$ is a K-oriented
map, then $f!\in KK(M,V/F)$ represents the D-brane (see \cite{Connes1984}).
More generally (still following \cite{Connes1984}), given a K-oriented
map $f:X\to Y$, one can define (under certain conditions) a push-forward
map $f!$ in K-theory. The very important property of the
analytical group $K(V/F)$ of the foliation $(V,F)$ is its ``wrong
way'' (Gysin) functoriality, i.e. one associates to each K-oriented
map $f:V_{1}/F_{1}\to V_{2}/F_{2}$ of leaf spaces an element $f!$
of the Kasparov group $KK(C^{\star}(V_{1};F_{1});C^{\star}(V_{2};F_{2}))$.
Now given a small exotic $\mathbb{R}^{4}$, say $e_{1}$, embedded
in some small exotic $\mathbb{R}^{4}$, $e$, both are represented
by the $C^{\star}$ algebras of the codimension-1 foliations of $S^{3}$,
$C^{\star}(V_{1};F_{1})$ and $C^{\star}(V;F)$ respectively. The
embedding $i:e_{1}\hookrightarrow e$ determines the corresponding
K-oriented map of the leaf spaces $f_{i}:S^{3}/F_{1}\to S^{3}/F$
and the KK-theory lift $f_{i}!\in KK(C^{\star}(V_{1};F_{1});C^{\star}(V;F))$.
According to definition \ref{enu:Def: q-Branes} from section \ref{sub:Branes-on-separable},
we obtain
\begin{theorem}
\label{theo:quantum-exotic-R4}
Let $e$ be an exotic $\mathbb{R}^{4}$ corresponding to the codimension-1
foliation of $S^{3}$ which gives rise to the $C^{\star}$algebra
${\cal A}_{e}$. The exotic smooth $\mathbb{R}^{4}$ embedded in $e$
determines a generalized quantum D-brane in ${\cal A}_{e}$.
\end{theorem}
Given exotic $\mathbb{R}^{4}$'s, $\{e_{a},\, a\in I\}$, all embedded
in $e$, one has the family of $C^{\star}$ algebras, $\{{\cal A}_{a},\, a\in I\}$,
of the codimension-1 foliations of $S_{a}^{3},\: a\in I$. Now the
embeddings $e_{a}\to e$ determine the corresponding K-oriented maps
of the leaf spaces as before, and the $\star$-homomorphisms of algebras
$\phi_{a}:{\cal A}_{e}\to{\cal A}_{a}$. The corresponding classes
in KK-theory $KK({\cal A}_{e},{\cal A}_{a})$ represent the quantum
D-branes in ${\cal A}_{e}$. $\square$
However, the correspondence in the theorem is many-to-one: an exotic
smooth $\mathbb{R}^{4}$ embedded in $e$ can be represented (non-uniquely)
by a stable D-brane in ${\cal A}_{e}$, and not all abstract D-branes
in the algebra ${\cal A}_{e}$ are represented by some exotic $e'\subset e$.
Still one can consider D-branes represented by exotic $e_{a}$ in
$e$ as carrying 4-dimensional, hence potentially physical, information.
This is a kind of special ``superselection'' rule in superstring theory
and will be discussed separately.
\section{Discussion and conclusions}
In this paper we give further evidence that string theory is indeed
related to 4-dimensional nonstandard smoothness of open manifolds
like $\mathbb{R}^{4}$. Our concern here was the quantum limit of
D-branes. We show that, on the formal level, there are strong correlations
between formalism of quantum D-branes in a quantum spacetime, both
represented by separable $C^{\star}$-algebras, and exotic smooth
$\mathbb{R}^{4}$'s. These $\mathbb{R}^{4}$'s are also represented
by $C^{\star}$-algebras and embedded in some exotic $\mathbb{R}^{4}$.
These $C^{\star}$-algebras are the convolution algebras of the codimension
1 foliations of the 3-sphere when $S^{3}$ is taken as a part of the
boundary of the Akbulut cork for the small exotic $\mathbb{R}^{4}$.
Thus we model quantum D-branes in a quantum spacetime by exotic $\mathbb{R}^{4}$'s
embedded in an exotic $\mathbb{R}^{4}$. When the target ``spacetime''
$\mathbb{R}^{4}$ is taken to be the standard one, which is always
possible since exotic $\mathbb{R}^{4}$'s are \emph{small}, one recovers
the correlation with ``classic'' configurations of D and NS branes
in certain string backgrounds, as was described in our previous paper
\cite{AsselmeyerKrol2011}. Thus the way to abstract algebraic setting
of $C^{\star}$-algebras and quantum D-branes generalizes the correspondence
of branes (represented by submanifolds or K-homology classes) with
exotic $\mathbb{R}^{4}$ seen as smooth submanifolds of the standard
$\mathbb{R}^{4}$. These two facets of exotic $\mathbb{R}^{4}$, namely
the $C^{\star}$-algebraic one and the smooth (sub)manifold one, are
crucial for exhibiting the full range of a correspondence to string
theory. When the smoothness on $\mathbb{R}^{4}$ is standard we lose
the string information encoded in the 4-structures. Thus, one could
in some important cases translate stringy situations into the 4-smooth
setting and conversely, and this is not a duplicate of existing approaches
in string theory.
Thus we gain an additional and independent channel leading to 4 dimensions
from string theory. The crucial question is whether these 4-dimensional
data carry information on physics in 4 dimensions. This important point
was considered already in a series of research papers \cite{BraRan:93,Bra:94a,Bra:94b,Ass:96,AssBra:2002,Ass2010,Sla:96,Sladkowski2001}
as well in a textbook \cite{Asselmeyer2007}. In \cite{AsselmeyerKrol2009}
we showed that exotic smoothness of an open 4-region in spacetime
has the same effect as the existence of magnetic monopoles, i.e.
exotic smoothness induces the quantization condition for the electric
charge. Moreover, one can consider exotic $\mathbb{R}^{4}$'s as quantum
objects, i.e. the spacetime induces the quantization processes \cite{AsselmeyerKrol2010}.
However, the full-fledged presentation of the relation of exotic $\mathbb{R}^{4}$,
and other open smooth 4-manifolds, with string theory is out of reach
for the authors at present. We think that new analytical and topological
tools are needed. In a forthcoming paper we will present an effort
in this direction and try to understand quantum branes as a kind
of wild embedding based on the smoothness of 4-manifolds. Thus the
question ``Is it possible that string theory deals with
4-dimensional structures directly, neither by implementing compactifications
nor by phenomenological model-building, and would these structures
have a physical meaning?'' should be further explored and studied.
As we emphasized in the previous paper \cite{AsselmeyerKrol2011}
this effort should help with understanding both 4-dimensional physics
as appearing from string theory and exotic open 4-manifolds in mathematics
\cite{AssKrol2010ICM}.
\section*{Acknowledgment}
T.A. wants to thank C.H. Brans and H. Ros\'e for numerous discussions
over the years about the relation of exotic smoothness to physics.
J.K. benefited much from the explanations given to him by Robert Gompf
regarding 4-smoothness several years ago, and discussions with Jan
S{\l}adkowski.
\section{Introduction} \label{sec:intro}
Over the past two decades, many examples of galaxies with stellar masses $\log(M_*/M_\odot) \gtrsim 11$ out to redshift $z\sim6$ have been found (e.g., \citealt{caputi2011,stefanon2015,deshmukh2018,marsan2022}). However, towards the Epoch of Re-ionisation (EoR; $z\gtrsim6$), such massive galaxies become increasingly rare (e.g., \citealt{stefanon2021}). For instance, \citet{caputi2015spitzer} found virtually no galaxy with stellar mass $\log(M_*/M_\odot) > 11.0$ at such high redshifts over 0.8\,deg$^2$ of the COSMOS field \citep{scoville2007}. This result has been recently challenged by the apparent discovery of unusually massive galaxies at $z>6$ \citep{endsley2022_alma,labbe2022}. The possible existence of these sources would be in tension with galaxy formation theories assuming $\Lambda$CDM cosmology (e.g., \citealt{behroozi2018,boylan2022,menci2022}).
Galaxy stellar masses are usually estimated through spectral energy distribution (SED) fitting of photometric data. Many different SED fitting codes exist, which can lead to significantly different stellar mass estimates of the same object, especially for faint galaxies (e.g., \citealt{dahlen2013}). Therefore, a critical study of the effects of SED fitting approaches on the derived stellar masses at $z>6$ is of utmost importance.
For example, recent works have demonstrated that SED models assuming a non-parametric star formation history (SFH) yield higher stellar masses compared to traditional parametric descriptions \citep{tachella2022,topping2022,whitler2022}, although \citet{stefanon2022} found identical stellar masses between a constant and non-parametric SFH fit of a stacked sample of $z\sim10$ galaxies. In addition, the choice of initial mass function (IMF) also affects the derived stellar mass, and a Galactic IMF might not be the most suitable at $z>6$ \citep{steinhardt2022}.
In this letter, we present a case study of the source COS-87259, located in COSMOS. This galaxy was originally identified as a $z_{\rm phot}\approx 6.6$--6.9 Lyman Break galaxy in \citet{endsley2021}. Subsequently, \citet{endsley2022_radio} identified radio and X-ray emission coming from this source, concluding that this galaxy likely harbors an active galactic nucleus (AGN). Finally, \citet{endsley2022_alma} conducted follow-up spectroscopy with ALMA, identifying strong [CII]158\,\micron\ and dust continuum emission, establishing a precise spectroscopic redshift of $z_{\rm spec}=6.853 \pm 0.002$. Endsley and collaborators obtained different stellar mass estimates for COS-87259 in their different works, with the most recent one claiming that its best-estimate stellar mass is $\log(M_*/M_\odot) = 11.2$.
By using different SED fitting codes, in this work we assess whether this extremely high stellar mass value is necessarily the best estimate for COS-87259. We adopt a cosmology with $\rm{H_0 = 70\ km\ s^{-1}\ Mpc^{-1}}$, $\Omega_{\rm m}=0.3,$ and $\Omega_\Lambda=0.7$. All magnitudes and fluxes are total, with magnitudes referring to the AB system \citep{okegun1983}. Stellar masses correspond to a \citet{chabrier2003} IMF.
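As an illustrative cross-check (not part of the analysis itself), flux densities and AB magnitudes are related through the zero point $f_{\nu}=3631$\,Jy, so e.g. the $K_{\rm s}$-band flux of $0.837\,\mu$Jy measured below corresponds to $K_{\rm s}\approx24.1$\,AB:

```python
import math

# Illustrative sketch: convert a flux density in microJansky to an AB
# magnitude using the standard zero point f_nu = 3631 Jy for m_AB = 0.
def ujy_to_abmag(flux_ujy):
    return -2.5 * math.log10(flux_ujy * 1e-6 / 3631.0)

# The K_s-band flux of COS-87259 from Table 1, 0.837 uJy:
ks_mag = ujy_to_abmag(0.837)  # about 24.09 AB
```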
\section{Photometry} \label{sec:photometry}
\begin{deluxetable}{lcc}
\tablecaption{Optical and NIR flux density measurements for COS-87259, as obtained in this work. For non-detections, $3\sigma$ upper limits are reported. \label{tab:flux}}
\tablewidth{0pt}
\tablehead{
\colhead{Telescope/Instrument} & \colhead{Band} & \colhead{Flux} \\
\colhead{} & \colhead{} & \colhead{($\mu$Jy)}
}
\startdata
CFHT/MegaCam & $u$ & $<0.0039$ \\
Subaru/Suprime-Cam & $B$ & $<0.0050$ \\
Subaru/HSC & $g$ & $<0.0045$ \\
Subaru/Suprime-Cam & $V$ & $<0.014$ \\
Subaru/HSC & $r$ & $<0.0067$ \\
Subaru/Suprime-Cam & $r^+$ & $<0.013$ \\
Subaru/Suprime-Cam & $i^+$ & $<0.013$ \\
Subaru/HSC & $i$ & $<0.0079$ \\
Subaru/HSC & $z$ & $<0.0098$ \\
Subaru/Suprime-Cam & $z^{++}$ & $0.007 \pm 0.025$ \\
Subaru/HSC & $y$ & $0.097 \pm 0.053$ \\
VISTA/VIRCAM & $Y$ & $0.214 \pm 0.035$ \\
VISTA/VIRCAM & $J$ & $0.391 \pm 0.043$ \\
VISTA/VIRCAM & $H$ & $0.577 \pm 0.051$ \\
VISTA/VIRCAM & $K_{\rm s}$ & $0.837 \pm 0.081$ \\
Spitzer/IRAC & $[3.6]$ & $1.90 \pm 0.17$ \\
Spitzer/IRAC & $[4.5]$ & $1.91 \pm 0.18$ \\
\enddata
\end{deluxetable}
COS-87259 is part of the UltraVISTA ultra-deep catalogue presented in \citet{mierlo2022}. Based on the photometry in this catalogue, our initial photometric redshift for COS-87259 is $z_{\rm phot}=6.87^{+0.08}_{-0.07}$, in excellent agreement with the spectroscopic redshift from \citet{endsley2022_alma}. To obtain more precise flux measurements for this analysis, we redid the photometry for COS-87259, using the \textsc{Python} modules \textsc{Astropy} (version 5.0.4; \citealt{astropy}) and \textsc{Photutils} (version 1.4.1; \citealt{photutils}).
We included ultra-deep optical data from Data Release (DR) 3 of the Subaru Hyper Suprime-Cam (HSC) Strategic Program \citep{aihara2022} in the $g$, $r$, $i$, $z$, and $y$ bands. In addition, we consider CFHT Megacam $u$ and Subaru Suprime-Cam broad-band data, namely the $B$, $V$, $r^+$, $i^+$, and $z^{++}$ bands \citep{ilbert2009,taniguchi2015}. We also included the UltraVISTA DR4 VIRCAM $Y$,$J$,$H$, and $K_{\rm s}$ data \citep{mccracken2012} and IRAC 3.6 and 4.5 \micron\ imaging from the SMUVS programme \citep{ashby2018,deshmukh2018}. In total, we consider imaging in 17 rest-frame optical to near-infrared (NIR) broad bands.
We measured the photometry in $2"$ circular diameters at the position of COS-87259 measured from the $HK_{\rm s}$ stack, i.e., right ascension $\alpha = 149h 44m 34s.06$ and declination $\delta = +01d 39m 20s.10$. COS-87259 has only two faint low-redshift neighbours in a $5"$ radius that are both IRAC dark, such that we are not worried about flux contamination. The measured aperture fluxes were corrected to total using the curve-of-growths of nearby bright stars.
To derive the flux errors in all but the IRAC bands, we measured the background fluxes in $2"$ empty apertures over a $30"$ by $30"$ region around the source, and calculated the flux error as the standard deviation of the $\sigma$-clipped flux distribution. For the IRAC flux errors, we adopted a \textsc{SourceExtractor}-like approach \citep{bertin1996}.
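A minimal sketch of this empty-aperture error estimate, with a hand-rolled $\sigma$-clipping loop in place of the \textsc{Astropy} utilities and purely synthetic aperture fluxes:

```python
import numpy as np

def clipped_std(values, nsigma=3.0, maxiter=5):
    """Standard deviation after iterative sigma clipping about the
    median; a simple stand-in for astropy.stats.sigma_clipped_stats."""
    v = np.asarray(values, dtype=float)
    for _ in range(maxiter):
        keep = np.abs(v - np.median(v)) <= nsigma * v.std()
        if keep.all():
            break
        v = v[keep]
    return v.std()

# 200 background-dominated empty apertures plus one aperture that
# happens to contain a faint source; the outlier is rejected and the
# returned error reflects the true background scatter (~0.01).
rng = np.random.default_rng(1)
empty_fluxes = np.append(rng.normal(0.0, 0.01, 200), 0.5)
flux_err = clipped_std(empty_fluxes)
```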
Finally, all fluxes and error measurements were corrected for Galactic dust extinction using the \citet{schlafly2011} dust maps. In bands with non-detections, we adopt $3\sigma$ flux upper limits derived from the empty aperture fluxes. We present an overview of our flux measurements in each band in Table \ref{tab:flux}.
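The extinction correction itself is a simple multiplicative factor; the sketch below uses hypothetical values of $E(B-V)$ and of the extinction coefficient $k_{\lambda}$ rather than the ones actually adopted:

```python
def deredden(flux, ebv, k_lambda):
    """Correct an observed flux for Galactic foreground extinction of
    A_lambda = k_lambda * E(B-V) magnitudes."""
    return flux * 10 ** (0.4 * k_lambda * ebv)

# E.g. E(B-V) = 0.02 with k_lambda = 3.1 brightens a flux by about 6%.
f_corr = deredden(1.0, 0.02, 3.1)
```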
\begin{deluxetable*}{lllcc}[ht!]
\tablecaption{Resulting fit parameters from various SED fitting runs on the optical to NIR broad-band photometry of COS-87259. \label{tab:results}}
\tablewidth{0pt}
\tablehead{
\colhead{Code} & \colhead{Stellar Population Models} & \colhead{SFH} & \colhead{$\chi^2_{\nu}$} & \colhead{$M_*$}\\
\colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{($\log[M/M_{\odot}]$)}
}
\startdata
LePhare & BC03 & Parametric & 0.66 & $10.42^{+0.22}_{-0.05}$ \\
Prospector & FSPS & Parametric & 0.49 & $11.00^{+0.05}_{-0.07}$ \\
Prospector & FSPS & Binned SFH & 0.96 & $11.16^{+0.05}_{-0.06}$ \\
LePhare & STARBURST99 & Parametric & 1.19 & $10.24^{+0.18}_{-0.05}$ \\
EAZY & FSPS/\citet{carnall22} & Parametric & 1.55 & $10.53^{+0.09}_{-0.12}$ \\
\enddata
\end{deluxetable*}
\section{SED fitting} \label{sec:sedfit}
In this section, we describe the different SED fitting approaches we took to derive the physical properties of COS-87259. We ran each code on the 17-band photometry, with the redshift fixed to the spectroscopic redshift $z_{\rm spec} = 6.853$ from \citet{endsley2022_alma}.
\subsection{LePhare with BC03} \label{sec:lephare}
As a first code, we used the traditional, well-tested $\chi^2$-fitting algorithm \textsc{LePhare} \citep{arnouts1999,ilbert2006}. The galaxy models were sampled from the GALAXEV library (\citealt{bruzual2003}; BC03 hereafter).
We adopted different SFHs: a single stellar population and two parametric SFHs, i.e., an exponentially declining SFH ($\rm{SFR} \propto e^{-t/\tau}$) and a delayed exponentially declining ($\rm{SFR} \propto te^{-t/\tau}$), using star formation timescales $\tau = 0.01, 0.1, 0.3, 1.0, 3.0, 5.0, 10.0$, and 15 Gyr. We considered solar ($Z=Z_\odot$) and sub-solar ($Z=0.2Z_\odot$) metallicities.
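The delayed exponentially declining SFH used above can be written down directly. This is a hedged sketch with an assumed normalisation to the total formed mass, not \textsc{LePhare}'s internal implementation.

```python
import numpy as np

def delayed_tau_sfr(t_gyr, tau_gyr, mass_formed=1.0):
    """Delayed exponentially declining SFH, SFR(t) ∝ t exp(-t/tau),
    normalised (our choice) so the mass formed over [0, ∞) is
    `mass_formed`; uses ∫ t exp(-t/tau) dt = tau^2."""
    t = np.asarray(t_gyr, dtype=float)
    return mass_formed * t * np.exp(-t / tau_gyr) / tau_gyr**2
```

The SFR peaks at $t=\tau$ and then declines exponentially, which is what distinguishes this form from the plain $e^{-t/\tau}$ model.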
We adopted the \citet{calzetti2000} reddening law and left the colour excess free in the range $E(B-V)=0$--1. Emission lines were incorporated following the scaling relations from \citet{kennicutt1998} (see \citealt{ilbert2009} for a detailed description). Absorption by the intergalactic medium (IGM) of emission at wavelengths shorter than rest-frame 912\,\AA\ was implemented following \citet{madau1995}. \textsc{LePhare} rejects any modelled SED that produces fluxes higher than the $3\sigma$ upper limits in the non-detected bands.
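The reddening step can be illustrated with the published piecewise form of the \citet{calzetti2000} curve; applying it to a model flux as below is our own illustrative step, not \textsc{LePhare}'s internal implementation.

```python
import numpy as np

def calzetti_k(lam_um):
    """Calzetti et al. (2000) starburst attenuation curve k(lambda),
    valid for rest-frame 0.12-2.2 micron (lambda in microns)."""
    lam = np.asarray(lam_um, dtype=float)
    return np.where(
        lam >= 0.63,
        2.659 * (-1.857 + 1.040 / lam) + 4.05,
        2.659 * (-2.156 + 1.509 / lam - 0.198 / lam**2
                 + 0.011 / lam**3) + 4.05,
    )

def attenuate(flux, lam_um, ebv):
    """Apply the law: f_obs = f_intrinsic * 10^(-0.4 k(lambda) E(B-V))."""
    return flux * 10 ** (-0.4 * calzetti_k(lam_um) * ebv)
```

By construction $k(V)\simeq R_V=4.05$, so $E(B-V)=0.5$ dims the V band by about two magnitudes.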
\subsection{LePhare with STARBURST99} \label{sec:starburst99}
Young galaxies with strong nebular line and continuum emission can have significantly boosted broad-band flux measurements, so that their stellar masses may be overestimated by up to a factor 10 \citep{bisigello2019}. Therefore, we performed a separate SED fitting run with \textsc{LePhare} using stellar population models from the STARBURST99 library \citep{leitherer1999}, which include both stellar emission and nebular line and continuum emission. We considered five templates with sub-solar metallicity of $Z=0.05Z_\odot$, ages spanning $10^6$ to $10^8$ years, and constant SFRs between $0.01$--$10$ $M_\odot \mathrm{yr^{-1}}$. These templates were compiled into SED models and fitted to the photometry following the \textsc{LePhare} approach described in Sect.\,\ref{sec:lephare}.
\subsection{Prospector} \label{sec:prospector}
The second code we considered is \textsc{Prospector} \citep{johnson2021}, a Bayesian inference code, which recently has been used in numerous works to model the properties of very high-redshift galaxies, including \textit{James Webb} Space Telescope (\textit{JWST})-observed sources (e.g. \citealt{naidu2022a,tachella2022,whitler2022}).
\textsc{Prospector} uses the Flexible Stellar Population Synthesis code (FSPS; \citealt{conroy2009, conroy2010}). We tested both a delayed exponentially declining SFH and a flexible, non-parametric SFH.
Our parametric model involves six free parameters, using the default prior shapes with the following ranges: the formed stellar mass $M_* = 10^9$--$10^{12}\,M_\odot$, the metallicity $\log(Z/Z_\odot) = -2$--0.19, the e-folding time $\tau_{\rm SF} = 0.001$--15 Gyr, and the age $t_{\rm age} = 0.001$--13.8 Gyr. We modelled diffuse dust attenuation following \citet{calzetti2000} with $\tau_{\rm dust} = 0$--4. Lastly, we implemented IGM absorption following \citet{madau1995} and nebular emission, using the default parameters.
In the non-parametric model (``continuity prior''), the SFH is described by $N$ temporal bins, with a constant SFR in each bin, and \textsc{Prospector} fits the ratios between these bins. We largely adopted the approach outlined in \citet{tachella2022}, modelling six time bins, where the first bin spans 0--10 Myr in lookback time, and the remaining five bins are spaced equally in logarithmic time up to $z=20$. In addition, we fitted the formed stellar mass, metallicity, diffuse dust attenuation, IGM absorption factor, and gas ionisation parameter, following the \textsc{Prospector} parametric model.
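The bin construction can be sketched as follows, assuming an age of the Universe of roughly 0.78 Gyr at $z=6.853$ and 0.18 Gyr at $z=20$; both are approximate values we adopt for illustration, not numbers quoted in this work.

```python
import numpy as np

def sfh_time_bins(t_obs_gyr, n_bins=6, t_first_myr=10.0, t_z20_gyr=0.18):
    """Lookback-time bin edges for a continuity-prior SFH: first bin
    spans 0-10 Myr, remaining bins are log-spaced out to the lookback
    time of z = 20 (t_obs_gyr - t_z20_gyr, with t_z20_gyr the assumed
    age of the Universe at z = 20)."""
    t_first = t_first_myr * 1e-3            # Myr -> Gyr
    t_max = t_obs_gyr - t_z20_gyr           # lookback time of z = 20
    edges = np.concatenate(([0.0, t_first],
                            np.geomspace(t_first, t_max, n_bins)[1:]))
    return edges                            # n_bins + 1 edges, in Gyr
```

For a galaxy observed at $z=6.853$ this yields six bins covering lookback times from 0 to $\sim$0.6 Gyr.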
Lastly, \textsc{Prospector} treats non-detections by utilizing the $1\sigma$ flux limit as the flux error.
\subsection{EAZY} \label{sec:eazy}
As a third SED fitting code, we used the \textsc{Python} version of \textsc{EAZY} \citep{brammer08}. \textsc{EAZY} fits non-negative linear combinations of basis-set templates constructed with the FSPS models. Specifically, we used the \textsc{corr\_sfhz\_13} subset of models within \textsc{EAZY}. These models contain redshift-dependent SFHs, which, at a given redshift, exclude SFHs that start earlier than the age of the Universe. The maximum allowed attenuation is also tied to a given epoch. Additionally, we included the best-fit template of the \textit{JWST}-observed extreme emission line galaxy at $z=8.5$ (ID4590) from \citet{carnall22}, rescaled to match the normalisation of the FSPS models. This was done to adequately model potential emission lines with large equivalent widths.
To fit our object, we implemented a ``template error function'' within \textsc{EAZY} to account for any additional uncertainty related to unusual stellar populations, using the default value of 0.2 for the template error. For non-detected bands, \textsc{EAZY} utilises the $1\sigma$ flux upper limit in the fit.
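A simplified, wavelength-independent version of such a template error term simply adds a fractional flux uncertainty in quadrature; the real \textsc{EAZY} template error function is wavelength dependent, so this sketch only conveys the idea.

```python
import numpy as np

def total_flux_error(flux, err, tef=0.2):
    """Add a template error in quadrature: a fixed fractional
    uncertainty `tef` of the flux, absorbing template mismatch
    (simplified, wavelength-independent stand-in for EAZY's TEF)."""
    return np.sqrt(np.asarray(err, float) ** 2
                   + (tef * np.asarray(flux, float)) ** 2)
```

With the default `tef=0.2`, a band detected at very high signal-to-noise still carries an effective 20\% uncertainty in the fit.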
\section{Results} \label{sec:results}
\begin{figure*}
\gridline{\fig{plot_SED_30003833_manual_with-zspec-fixed.png}{0.5\textwidth}{(a)}
}
\gridline{\fig{plot_prospector-SED_parametric-sfh_nebe_median-posterior_30003833.png}{0.5\textwidth}{(b)}
\fig{plot_prospector-SED_non-parametric-sfh_nebe_median-posterior_30003833.png}{0.5\textwidth}{(c)}
}
\gridline{\fig{plot_SED_30003833_starburst99_10Msunyr_Z001_age106_err-34-66.png}{0.5\textwidth}{(d)}
\fig{plot_SED_vk_carnall.png}{0.5\textwidth}{(e)}
}
\caption{Best-fit SEDs corresponding to the five SED fitting set-ups presented in Table \ref{tab:results}. In each panel, the SED is shown in a black line. The observed fluxes and flux upper limits are shown in dark and light blue diamonds respectively. Each panel reports the SED fitting set-up, the spectroscopic redshift from \citet{endsley2022_alma}, the reduced $\chi^2$ value of the fit, and the stellar mass. For the \textsc{LePhare} run using STARBURST99 models, shown in the lower left panel, the yellow dotted line represents the contribution of nebular emission to the total SED.
\label{fig:seds}}
\end{figure*}
Here we compare the best-fit SEDs and associated stellar masses of COS-87259 obtained with the approaches outlined in Sect.\,\ref{sec:sedfit}.
For each SED fitting code and choice of stellar population models, we report the reduced $\chi^2$ and stellar mass in Table\,\ref{tab:results}. The corresponding best-fit SEDs are shown in Fig.\,\ref{fig:seds}. Each code has its own metric to determine the best-fit SED, which we explain as follows. With \textsc{LePhare}, the best-fit SED simply minimises the $\chi^2$ value compared to the observed photometry, and the stellar mass uncertainties reflect the \nth{34} and \nth{66} percentiles of the maximum likelihood distribution. The resulting SEDs and stellar masses obtained with \textsc{Prospector} correspond to the median of the posterior SED, and the errors on the stellar mass represent the \nth{34} and \nth{66} percentiles of the stellar mass posterior distribution. Finally, \textsc{EAZY} returns the best-fit SED based on the linear combination of templates that maximises the posterior probability distribution, with the errors on the stellar mass derived as the \nth{16} and \nth{84} percentiles.
To compare the goodness of fit for each model, we use the reduced $\chi^2$ metric, calculated directly from the resulting SED (excluding bands with flux upper limits). For the stellar mass estimates presented in Table \ref{tab:results}, we imposed a minimum error of 0.05 dex, which was derived from the signal-to-noise ratio in the observed band that samples the SED most closely to the rest-frame $K$-band, i.e., the IRAC 4.5\,\micron\ band.
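The goodness-of-fit metric can be sketched as follows; the `detected` mask excludes bands with flux upper limits, and `n_free` (the number of fitted parameters subtracted from the degrees of freedom) is a choice we assume for illustration.

```python
import numpy as np

def reduced_chi2(f_obs, f_err, f_model, detected, n_free=0):
    """Reduced chi^2 over detected bands only; `detected` is a boolean
    mask that excludes bands with flux upper limits."""
    det = np.asarray(detected, dtype=bool)
    f_obs = np.asarray(f_obs, float)[det]
    f_err = np.asarray(f_err, float)[det]
    f_model = np.asarray(f_model, float)[det]
    dof = max(f_obs.size - n_free, 1)
    return float(np.sum(((f_obs - f_model) / f_err) ** 2) / dof)
```

Bands flagged as upper limits simply drop out of the sum, so two fits are compared over the same set of secure detections.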
Our most important result is that, between the five set-ups of codes and stellar population models, the resulting stellar masses vary by up to 0.9 dex, while it is virtually impossible to determine which of these fits is most representative of the truth: the $\chi^2_{\nu}$ values are all close to one and differ by at most 1.06.
Between the runs performed with \textsc{LePhare} using the BC03 models and the \textsc{Prospector} parametric set-up, whose best-fit SEDs are shown in Fig.\,\ref{fig:seds}(a) and \ref{fig:seds}(b) respectively, the stellar masses differ by 0.6 dex and do not agree within the errors, even though we chose the run parameters to be as similar as possible. We believe this is partially because of the different template sets, but also because of the actual SED fitting prescriptions (even if the stellar mass is derived directly from the template normalisation), given that the stellar mass difference between the \textsc{LePhare}+BC03 and \textsc{LePhare}+STARBURST99 runs is only 0.18 dex.
According to the \textsc{LePhare}+BC03 result, this galaxy would be of solar metallicity, young at 0.01 Gyr old, actively undergoing star formation with $\tau=15$ Gyr, and quite dusty with $A_V=2.03$. Instead, \textsc{Prospector} finds that this source would be metal-poor with $Z=0.014Z_\odot$, relatively old with an age of 0.18 Gyr, and less dusty with $A_V=0.8$. Most importantly, its SFH e-folding time of only 0.003 Gyr and lack of nebular emission lines suggest that this galaxy would have undergone a short burst of star formation upon creation and evolved relatively passively afterwards. Based on the $\chi^2_{\nu}$ values of these fits, it is impossible to say with confidence which of these two conflicting descriptions of COS-87259's nature is more likely.
Using \textsc{Prospector}, we explicitly assess the dependency of stellar mass on the assumed SFH. We find a moderate difference of 0.16 dex in stellar mass between the two models, such that the stellar mass of $\log(M_*/M_\odot) = 11.16$ from the non-parametric SFH exceeds the parametric estimate of $\log(M_*/M_\odot) = 11.00$ beyond the quoted uncertainties. This effect has been observed in other works at similar redshift \citep{topping2022,whitler2022}. When we inspect our non-parametric SFH fit, we find that more than 80\,\% of the stellar mass was formed between lookback times of 0.18 Gyr and 0.042 Gyr, with a constant SFR of $1166\,M_\odot \mathrm{yr}^{-1}$. After this initial burst, the star formation rate falls off and continues at $42\,M_\odot \mathrm{yr}^{-1}$. This could explain the slightly higher stellar mass as compared to the parametric SFH, for which star formation ceases completely in the last $\sim 10^7$ years.
We find that the stellar mass estimate from \textsc{LePhare} using the STARBURST99 templates is the lowest of our considered set-ups, at $\log(M_*/M_\odot) = 10.24$ with $\chi^2_{\nu}=1.19$. The best-fit SED is shown in Fig.\,\ref{fig:seds}(d), and corresponds to a 0.01 Gyr old galaxy with a constant SFR of $10\,M_\odot \mathrm{yr}^{-1}$ and sub-solar metallicity $Z=0.05Z_\odot$. Upon decomposition of the SED, we find that 52\,\% of the integrated light is in fact nebular emission, resulting in a corrected stellar mass of only $\log(M_*/M_\odot) = 9.91$.
Finally, we show the best-fit SED obtained with \textsc{EAZY} in Fig.\,\ref{fig:seds}(e). The stellar mass from this fit is $\log(M_*/M_\odot) = 10.53$, with an associated $\chi^2_{\nu}=1.55$, which makes the \textsc{EAZY} SED the worst fit out of the five code set-ups considered here. \textsc{EAZY} identifies COS-87259 as a 0.05 Gyr old galaxy with moderate dust content ($A_{V}=0.87$) and a strong presence of emission lines.
\section{Combined stellar and dust emission SED fitting} \label{sec:discussion}
\begin{figure*}
\plotone{stardust_endsleyphot_fit.pdf}
\caption{Best-fit SED of COS-87259 obtained with \textsc{Stardust} on the observed UV to millimeter photometry from \citet{endsley2022_radio} and \citet{endsley2022_alma}. The flux measurements are shown in red, where the arrows correspond to flux upper limits. The individual components of the stellar, AGN and dust emission are shown in blue, green and red curves respectively. \label{fig:stardust}}
\end{figure*}
We have demonstrated how different SED fitting approaches on independent photometric data produce strongly varying stellar mass estimates for COS-87259. Moreover, our estimates are all lower than the $\log(M_*/M_\odot)=11.2$ value from \citet{endsley2022_alma}. However, it should be noted that our $K_{\rm s}-[3.6]$ colour of 0.9 is significantly bluer than the $K_{\rm s}-[3.6] = 1.5$ from \citet{endsley2022_radio}. At $z\sim7$, the $K_{\rm s}-[3.6]$ colour is sensitive to the 4000\,\AA\ break, but also to $[\mathrm{O\,III}]+\mathrm{H}\beta$ emission, which can boost the IRAC 3.6\,\micron\ flux. The stellar mass discrepancy might therefore be partially explained by this colour difference.
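The colour comparison above is just a flux ratio expressed in AB magnitudes; a one-line sketch:

```python
import numpy as np

def ab_color(f_blue, f_red):
    """AB colour (m_blue - m_red) from two fluxes in the same units,
    e.g. K_s and IRAC 3.6 micron fluxes in microJy."""
    return -2.5 * np.log10(f_blue / f_red)
```

A colour difference of $1.5-0.9=0.6$ mag corresponds to a factor $10^{0.4\times0.6}\approx1.7$ change in the $K_{\rm s}$-to-$[3.6]$ flux ratio between the two photometric catalogues.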
Another key difference that could explain the high stellar mass from \citet{endsley2022_alma} is the inclusion of FIR to millimeter data, especially given that COS-87259 likely harbors an AGN. So far we have only fitted the optical to NIR regime, so here we adopted the 5.8\,\micron\ to 1.4\,mm photometry for COS-87259 from \citet{endsley2022_alma} and Table 1 in \citet{endsley2022_radio}.
This combined suite of photometry is fitted with the SED-fitting code \textsc{Stardust} \citep{kokorev21}, again fixing the redshift to $z_{\rm spec}=6.853$. \textsc{Stardust} models light from stars, AGN, and infrared emission arising from the dust-reprocessed stellar continuum. Similarly to \textsc{EAZY}, \textsc{Stardust} fits independent linear combinations of templates, with the key advantage of not assuming energy balance between stellar and dust emission. For our fit we utilised UV--NIR templates adopted from \textsc{EAZY}, the empirically derived AGN templates of \citet{mullaney11}, and the dust models from \citet{draine07}. For the latter, the minimum radiation field intensity spans $U_{\rm min}=40$--50, and the fraction of dust contained in the photo-dissociation regions (PDRs) spans $\gamma=0.01$--0.3. When combined, these correspond to a range of luminosity-weighted dust temperatures ($T_{\rm dust}$) of 35--45 K.
The best-fit SED obtained from \textsc{Stardust} is shown in Fig.\,\ref{fig:stardust}. We note that \textsc{Stardust} treats any flux measurement with confidence level $<3\sigma$ as an upper limit instead, which brings the total number of secure detections to 14. The $\chi^2_{\nu}$ value for this fit is 1.90, and the resulting stellar mass is $\log(M_*/M_\odot)=10.81^{+0.03}_{-0.03}$, which is over 0.4 dex lower than the stellar mass from \citet{endsley2022_alma}. This new, independent estimate reinforces our previous conclusion obtained with the SED fitting of only the source's stellar emission: the stellar mass of COS-87259 is most likely $<10^{11}\, M_\odot$.
\section{Conclusion}\label{sec:conclusion}
In this letter, we reassessed the stellar mass of $z_{\rm spec}=6.853$ AGN-host galaxy candidate COS-87259, located in the UltraVISTA ultra-deep region in the COSMOS field. This source has been extensively studied in previous works: its most recent stellar mass estimate is very high with $\log(M_*/M_\odot) = 11.2$ \citep{endsley2022_alma}. Here, we took this galaxy as a case study to compare the best-fit SEDs and physical parameters obtained with different SED fitting routines. We measured independent photometry from 17 rest-frame optical to NIR broad-band images for COS-87259. These data were fitted with SED fitting codes \textsc{LePhare}, \textsc{Prospector}, \textsc{EAZY}, and \textsc{Stardust}, including 5.8\,\micron\ to 1.4\,mm photometry from \citet{endsley2022_alma} for the latter fit.
Between the six set-ups of codes and stellar population models, we find that the resulting stellar masses span $\log(M_*/M_\odot)=10.24$--11.16. In contrast, the reduced $\chi^2$ values of the fits are all close to unity, within $\Delta \chi^2_\nu=0.9$. Therefore, all SED fits are of comparable quality, making it virtually impossible to decide which stellar mass estimate is most representative of the truth.
We find that the combination of \textsc{Prospector} and a non-parametric description of the SFH (which has been frequently used to fit newly \textit{JWST}-discovered high-redshift galaxies) yields the highest stellar mass estimate in this work, $\log(M_*/M_\odot)=11.16$. Moreover, even when adopting a traditional parametric SFH, \textsc{Prospector} yields significantly higher stellar masses than any of the other considered codes. Lastly, by considering very young galaxy templates that have strong nebular line and continuum emission, we obtain our lowest stellar mass estimate of $\log(M_*/M_\odot)=10.24$ with $\chi^2_{\nu}=1.19$.
We emphasise that none of our six considered SED fitting routines can replicate the extremely high stellar mass result from \citet{endsley2022_alma}. A $\log(M_*/M_\odot) \geq 11$ solution for COS-87259 technically does not violate the $\Lambda$CDM number density upper limit \citep{boylan2022}, but this relies strongly on the assumption that COS-87259 is the only $z\sim7$ galaxy for which such a high stellar mass has been claimed in the entire COSMOS field.
In conclusion, in light of the recent discoveries of very massive EoR galaxies with \textit{JWST}, we emphasise the absolute importance of testing various SED fitting routines on these seemingly massive galaxies to obtain a confident stellar mass estimate. Otherwise, we may falsely conclude that \textit{JWST} is allowing us to probe an unexpectedly numerous population of massive galaxies, whereas in fact overestimation from novel SED fitting approaches is the main driver behind these results. As for the specific instance of COS-87259, this source will be observed with \textit{JWST} in the near future, hopefully bringing us yet again closer to a consensus on the nature of this undoubtedly interesting galaxy.
\begin{acknowledgments}
KIC and SvM acknowledge funding from the European Research Council through the award of the Consolidator Grant ID 681627-BUILDUP. KIC and VK acknowledge funding from the Dutch Research Council (NWO) through the award of the Vici Grant VI.C.212.036.
\end{acknowledgments}
\section{Introduction and preliminaries on spherical harmonics}
Many parts of analysis on Euclidean spaces and in particular the
theory of spherical harmonics provide an elegant and instructive
application of group theoretical concepts to various questions of
classical function theory. It suf\/f\/ices to mention points like:
coincidence of eigenspaces of spherical Laplacian with irreducible
components with respect to the natural action of the group ${\bf
SO}(d)$ within the $L^2$ space on the sphere; the role of
classical orthogonal polynomials, i.e.\ Gegenbauer polynomials, as
reproducing kernels for the spaces of spherical harmonics of a
given degree, or more generally, as providing an explicit
construction of symmetry adapted basis functions for those spaces
(cf.~\cite{Vi}), or the connection of the Fourier transform on the
Euclidean space to the Hankel transform obtained via restriction
to ${\bf SO}(d)$-f\/inite functions and various integral
identities of the Hecke--Bochner type resulting therefrom.
The goal of the present paper is to present under one cover recent
developments within this circle of ideas, obtained in
\cite{me3,BDS1,BDS2} by the present authors, partly in
collaboration with A.~D\c{a}browska. The central point of this
paper is a novel form of spherical expansion of zonal functions on
Euclidean spheres which we derive by purely dif\/ferential
methods. We show how it implies certain general formulae for the
Fourier transform of ${\bf SO}(d)$-f\/inite functions, including
recent generalizations of the famous Bochner formula and also
derive certain function theoretic consequences.
\subsection{Preliminaries and notations}
We denote by $(x\,|\, y)$ the standard (Euclidean) inner product
of points $x,y\in {\mathbb R}^d$ and by $r=|x|=(x\,|\,x)^{1/2}$ the
corresponding length (or radius) function. The unit sphere in
$\mathbb R^d$ is denoted by $S^{d-1}$ and for $x\neq0$ we shall
often write $x=r\xi$ with $\xi\in S^{d-1}$. We shall assume $d\ge
3$ and frequently use a constant $\alpha$ def\/ined by
$\alpha=(d-2)/2$. The sphere is regarded as a homogeneous space of
the group $K={\bf SO}(d)$ of (proper) rotations in ${\mathbb R}^d$
and for a point $\eta\in S^{d-1}$ its isotropy group
$K_\eta\subset K$ can be identif\/ied with the group ${\bf
SO}(d-1)$.
Let $\Delta = \sum\limits_{j=1}^{d}{ \partial^2}/{\partial x_j^2}$
denote the Laplacian in ${\mathbb R}^d$. We let ${\mathcal P}^l
={\mathcal P}^l ({\mathbb R}^d)$ denote the space consisting of
homogeneous polynomials of degree $l$ on ${\mathbb R}^d$ and
denote the kernel of $\Delta$ in ${\mathcal P}^l$ by~${\mathcal
H}^l $. This latter space consists of harmonic and homogeneous
polynomials of degree $l$ and its dimension is given by
\[
\dim {\mathcal
H}^l=\frac{2(l+\alpha)\Gamma(2\alpha+l)}{\Gamma(l+1)\Gamma(2\alpha+1)},
\]
where $\Gamma(z)$ denotes the Euler gamma function. Both these
spaces are invariant under the natural action of the group $K={\bf
SO}(d)$ on functions on ${\mathbb R}^d$ given by $k\cdot
f(x)=f(k^{-1}x)$ for $k\in K$, $x\in \mathbb R^d$. It is known
that this group acts irreducibly in ${\mathcal H}^l$ for each $l$,
and the representations so obtained are inequivalent for $l\neq
l'$.
The \emph{surface spherical harmonics of order} $l$ are by
def\/inition the restrictions of elements from~${\mathcal H}^l$
to the unit sphere $S^{d-1}$, and since the restriction map
commutes with the action of rotations the spaces of surface
spherical harmonics of any f\/ixed order are invariant and
irreducible under $K$.
We endow ${\mathcal P}^l={\mathcal P}^l({\mathbb R}^d)$ with an
inner product def\/ined by the formula
\begin{gather} \label{inner-prod}
\langle P\,|\, Q\rangle := \int_{S^{d-1}}
P(\xi)\overline{Q(\xi)}\,d\sigma(\xi),
\end{gather}
where $d\sigma(\cdot)$ is the Euclidean surface measure on the
unit sphere $S^{d-1}$ normalized so that the total area of the
sphere is 1. In fact, the integral on the right hand side extends
to the natural inner product on $L^2(S^{d-1}, d\sigma)$ and it is
known that the spaces of spherical surface harmonics of
dif\/ferent orders are orthogonal to each other with respect to
this inner product.
We recall that a function $f$, def\/ined on the unit sphere
$S^{d-1}$, is said to be \emph{a zonal function (relative to a
point $\eta\in S^{d-1}$)} if it is invariant with respect to the
isotropy group $K_\eta$ of $\eta$. Any such function depends in
fact on the inner product $(\xi \,|\, \eta)$ only, and so there
exists a function $\phi$ def\/ined on the closed unit interval
$[-1,1]\subset {\mathbb R}$ so that
\begin{gather}\label{eqn:profile}
f(\xi)= \phi((\xi\,|\, \eta)),\qquad \text{for all}\ \xi\in
S^{d-1}.
\end{gather}
We shall call $\phi$ the prof\/ile function of $f$.
\subsection{The canonical decomposition of homogeneous polynomials}
It is well known that every homogeneous polynomial of degree $l$
can be represented as a sum of products of harmonic homogeneous
polynomials with even powers of the radius,
cf.~equation~(\ref{can-decomp2}) below, and this decomposition is
usually called the canonical decomposition of homogeneous
polynomials. While the general form of the canonical
decomposition, i.e.\ equation~(\ref{can-decomp2}),
is stated in all sources concerned with this matter, the explicit formula for harmonic
components of a~given homogeneous polynomial is seldom quoted. The
only source known to us, where the formula can be found, is the
paper \cite{Lu} of Lucquiaud. We record it here since it is
essential for considerations to follow, in particular, in
establishing formula (\ref{expansions3}) giving expansion of
elementary zonal polynomials of the form $(x\,|\,\eta)^l$, where
$\eta \in S^{d-1}$ is an arbitrary unit vector.
\begin{theorem}[The canonical decomposition] \label{can-decomp}
The space ${\mathcal P}^l$ decomposes orthogonally as the sum
${\mathcal P}^l = \oplus_{k=0}^{[l/2]} r^{2k} {\mathcal H}^{l-2k}$
and the decomposition is invariant with respect to the action of
the group ${\bf SO}(d)$. Explicitly, every polynomial $P\in
{\mathcal P}^l$ may be written as
\begin{gather}\label{can-decomp2}
P=\sum_{k=0}^{[l/2]}r^{2k}h_{l-2k}(P), \qquad \text{where}\quad
h_{l-2k}(P)\in {\mathcal H}^{l-2k}
\end{gather}
are called the harmonic components of $P$ and are given by
\begin{gather}\label{eqn:harmon_proj}
h_{l-2k}(P)= \sum^{[l/2]-k}_{j=0} e^{l}_{\,j}(k) r^{2j}
\Delta^{k+j} P
\end{gather}
with the coefficients $e^{l}_{\,j}(k)$ determined by
\begin{gather}\label{eqn:harm_coeff}
e^{l}_{\,j}(k) = (-1)^j\frac
{(\alpha+l-2k)\Gamma(\alpha+l-2k-j)}{4^{k+j}k!j!\Gamma(\alpha+l+1-k)}.
\end{gather}
The maps $P\mapsto r^{2k} h_{l-2k}(P)\in {\mathcal P}^l$ are
projections onto ${\bf SO}(d)$-irreducible subspaces of
${\mathcal P}^l$ commuting with the action of the group ${\bf
SO}(d)$.
\end{theorem}
\section{Expansions of zonal functions}
\subsection{Elementary zonal functions and reproducing kernels}
To proceed, we need to recall an explicit formula for the
Gegenbauer polynomial $C^\alpha_l$ of degree $l$ and index
$\alpha$, given for example in \cite[Section 5.3.2]{MOS} or
\cite[Chapter 9.2]{Waw}, which reads as follows:
\begin{gather*}
C_l^\alpha(t) = \sum_{j=0}^{[l/2]}(-1)^j
\frac{\Gamma(\alpha+l-j)}{\Gamma(\alpha)\Gamma(j+1)\Gamma(l+1-2j)}
(2t)^{l-2j}.
\end{gather*}
Applying the formulae for the canonical decomposition given in
(\ref{eqn:harmon_proj}) and (\ref{eqn:harm_coeff}) to elementary
zonal polynomials $(x\,|\, \eta)^l$, where $\eta\in S^{d-1}$ and
$l$ is a nonnegative integer, we obtain
\begin{gather}\label{expansions3}
(x\,|\, \eta)^l = 2^{-l}\Gamma(\alpha)\Gamma(l+1)|x|^l
\sum_{k=0}^{[l/2]} \frac{(\alpha+l-2k)}{k!\Gamma(\alpha+l+1-k)}
C_{l-2k}^\alpha\bigl((\xi\,|\, \eta)\bigr), \qquad x=|x|\xi.
\end{gather}
In particular, the spherical harmonic obtained by restricting to
the unit sphere the harmonic component $h_l(P_{\eta})$ of the
highest degree of $P_\eta(x)=(x\,|\,\eta)^l$, which is given by
the formula
\begin{gather*}
h_l(P_\eta)(\xi)=2^{-l}\Gamma(\alpha)\Gamma(l+1)\frac{(\alpha
+l)}{\Gamma(\alpha +l+1)}C_l^{\alpha}((\xi\,|\,\eta)),
\end{gather*}
plays an important role in the group representation theoretic
interpretation of the decomposition~(\ref{can-decomp}). With
normalization given by
\begin{gather*}
Z^l_\eta(\xi)=
\bigl[C_l^{\alpha}(1)\bigr]^{-1}C_l^{\alpha}((\xi\,|\, \eta))
\end{gather*}
it satisf\/ies
\[
\dim{\mathcal H}^l\int_{S^{d-1}} Z^l_\eta(\xi)P(\xi)\,d\sigma(\xi)
= P(\eta).
\]
Because of this property, $Z^l_\eta(\xi)$ is called the
reproducing kernel for the space ${\mathcal H}^l$. Moreover, it
is uniquely (up to a scalar multiple) determined by the property
of being invariant under the action of the isotropy subgroup
$K_\eta$ of the point $\eta\in S^{d-1}$.
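Formula (\ref{expansions3}) is easy to test numerically. The following sketch (our own illustration; the function names are ours) generates the Gegenbauer polynomials by their standard three-term recurrence and compares both sides of (\ref{expansions3}) for sample values of $d$, $l$, $r$ and $t=(\xi\,|\,\eta)$.

```python
import math

def gegenbauer(l, alpha, t):
    """C_l^alpha(t) by the three-term recurrence
    n C_n = 2(n+alpha-1) t C_{n-1} - (n+2 alpha-2) C_{n-2}."""
    if l == 0:
        return 1.0
    c_prev, c = 1.0, 2.0 * alpha * t
    for n in range(2, l + 1):
        c_prev, c = c, (2.0*(n + alpha - 1)*t*c - (n + 2*alpha - 2)*c_prev)/n
    return c

def zonal_power_expansion(l, alpha, r, t):
    """Right-hand side of the expansion of (x|eta)^l, with x = r*xi, t = (xi|eta)."""
    total = sum((alpha + l - 2*k)/(math.factorial(k)*math.gamma(alpha + l + 1 - k))
                * gegenbauer(l - 2*k, alpha, t)
                for k in range(l//2 + 1))
    return 2.0**(-l)*math.gamma(alpha)*math.factorial(l)*r**l*total

# For d = 4 (alpha = 1) and l = 5 the two sides should agree:
lhs = (2.0*0.3)**5                         # (x|eta)^l = (r t)^l with r = 2, t = 0.3
rhs = zonal_power_expansion(5, 1.0, 2.0, 0.3)
```

Running the check for $d=4$, $l=5$ (and, say, $d=3$, $l=3$) confirms the agreement to machine precision.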
\subsection[A differential formula for expansions of smooth zonal functions]{A dif\/ferential formula for expansions of smooth zonal functions}
Now recall \cite{Mue, Vi} that every square integrable function on
the sphere can be written as a series of spherical harmonics (the
Fourier--Laplace expansion). For zonal functions this expansion,
thanks to an easy group representation theoretic argument, reduces
to
\begin{gather*}
f(\xi)= \sum_{m=0}^{\infty} f_m Z^m_\eta(\xi), \qquad \text{where}\quad
f_m = \dim{\mathcal H}^m\int_{S^{d-1}}f(\xi)Z^m_\eta(\xi)\,d\sigma(\xi).
\end{gather*}
By taking into account the invariance under $K_\eta$ of the
integrand and equation (\ref{eqn:profile}), the coef\/f\/icients
may be expressed as integrals
\begin{gather*}
f_m =
\frac{(\alpha+m)\Gamma(\alpha)}{\sqrt{\pi}\Gamma(\alpha+1/2)}
\int_{-1}^{1}\phi(t)C_m^\alpha(t)\big(1-t^2\big)^{\alpha-1/2}\,dt,
\end{gather*}
which reduces the problem of spherical expansion of zonal functions
to the expansion of prof\/ile functions $\phi$ with respect to the
(orthogonal) system of Gegenbauer polynomials. The following
result shows that the coef\/f\/icients $f_m$ of the expansion can
also be expressed in terms of the coef\/f\/icients of the Taylor
expansion of the prof\/ile function $\phi$, provided the latter
satisf\/ies suitable regularity assumptions.
\begin{theorem}\label{expan_of_f}
Assume $\phi:[-1,1] \rightarrow {\mathbb C}$ has the Taylor
expansion $\phi(t)= \sum\limits_{m=0}^{\infty}
\frac{\phi^{(m)}(0)}{m!}t^m$ that is absolutely convergent on the
closed interval $[-1,1]$, and let $f(\xi)=\phi((\xi \,|\, \eta))$
be the zonal function on the sphere $S^{d-1}$ corresponding to
$\phi$. Then the spherical Fourier expansion of $f(\xi)$ is given
by
\begin{gather}\label{exp_f}
f(\xi)=\Gamma(\alpha+1) \sum_{m=0}^{\infty} f_m \dim{\mathcal H}^m
Z^{m}_{\eta}(\xi),
\end{gather}
where the coefficients of the expansion can be expressed as
\begin{gather*}
f_m = \sum_{k=0}^{\infty}
\frac{\phi^{(m+2k)}(0)}{2^{m+2k}k!\Gamma(\alpha+m+k+1)}.
\end{gather*}
\end{theorem}
A detailed proof of this result is contained in the forthcoming
paper \cite{BDS2} of the authors, and here we shall present its
main line only. It consists in substituting the expansion formula
(\ref{expansions3}) into the Taylor series of $\phi$ and
rearranging terms so as to group together the terms corresponding
to the Gegenbauer polynomials of a given degree. This procedure
requires some estimates on the coef\/f\/icients which assure the
absolute convergence of the double series, which are presented in
detail in~\cite{BDS2}.
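The coefficient formula can also be checked against the integral representation of the coefficients given above: for a profile $\phi$ one should have $f^{\rm int}_m=\Gamma(\alpha+1)\dim{\mathcal H}^m\, f_m$, where $f^{\rm int}_m$ denotes the Gegenbauer integral. The sketch below (ours, for illustration only) does this for $\phi(t)=e^t$ and $d=4$, evaluating the integral by Simpson's rule after the substitution $t=\cos\theta$.

```python
import math

ALPHA = 1.0   # alpha = (d-2)/2 for d = 4

def gegenbauer(m, alpha, t):
    if m == 0:
        return 1.0
    c_prev, c = 1.0, 2.0 * alpha * t
    for n in range(2, m + 1):
        c_prev, c = c, (2.0*(n + alpha - 1)*t*c - (n + 2*alpha - 2)*c_prev)/n
    return c

def dim_H(m, alpha):
    """dim H^m = 2(m+alpha) Gamma(2 alpha+m) / (Gamma(m+1) Gamma(2 alpha+1))."""
    return 2.0*(m + alpha)*math.gamma(2*alpha + m)/(math.gamma(m + 1)*math.gamma(2*alpha + 1))

def f_m_series(m, alpha, terms=40):
    """Coefficient from the theorem for phi(t) = exp(t), where phi^(n)(0) = 1."""
    return sum(1.0/(2.0**(m + 2*k)*math.factorial(k)*math.gamma(alpha + m + k + 1))
               for k in range(terms))

def f_m_integral(m, alpha, steps=20000):
    """Gegenbauer-integral form of the coefficient, via t = cos(theta) and Simpson's rule."""
    h = math.pi/steps
    def g(theta):
        t = math.cos(theta)
        return math.exp(t)*gegenbauer(m, alpha, t)*math.sin(theta)**(2*alpha)
    s = g(0.0) + g(math.pi)
    for i in range(1, steps):
        s += (4 if i % 2 else 2)*g(i*h)
    integral = s*h/3.0
    return (alpha + m)*math.gamma(alpha)/(math.sqrt(math.pi)*math.gamma(alpha + 0.5))*integral

m = 3
lhs = f_m_integral(m, ALPHA)
rhs = math.gamma(ALPHA + 1.0)*dim_H(m, ALPHA)*f_m_series(m, ALPHA)
```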
Below we brief\/ly present two applications of this expansion for
obtaining new derivations of some classical results.
\subsection{The plane wave expansion}
A well known and very useful instance of the expansion
\eqref{exp_f} is the so called plane wave expansion, giving a
representation of the exponential function $e^{i(x|\eta)}$ as a
series of zonal harmonic polynomials. The expansion involves the
Bessel functions of the f\/irst kind of order $\nu \in {\mathbb
C}$ with $\mathop{\rm Re}\nolimits{\nu}>-1$, which are given as
\begin{gather}\label{Bessel}
J_\nu(t)= \left(\frac{t}{2}\right)^\nu\sum_{k=0}^{\infty}
\frac{(-1)^k}{\Gamma(k+1)\Gamma(k+\nu+1)}\left(\frac{t}{2}\right)^{2k},
\qquad t\in {\mathbb C}.
\end{gather}
For some purposes the results are better expressed with the aid of
the so called spherical Bessel functions, which are given as
\begin{gather*}
j_\nu(t)= \Gamma(\nu+1)\left(\frac{t}{2}\right)^{-\nu} J_\nu(t).
\end{gather*}
We point out that all classical proofs of this formula known to us
are obtained by applying certain integral identities of the
Hecke--Bochner type, as in \cite{far,Mue}. With the use of
\eqref{exp_f} we get it by direct dif\/ferentiation of the
exponential $e^{irt}$ and comparison with the power series
expansion of the Bessel function \eqref{Bessel}.
\begin{corollary}\label{d>2}
For an arbitrary unit vector $\eta\in S^{d-1}\subset {\mathbb
R}^d$ and $x\in {\mathbb R}^d$, the plane wave $e^{i(x|\eta)}$
admits the following expansion
\begin{gather*}
e^{i(x|\eta)}
= \sum_{m=0}^{\infty}i^m \dim {\mathcal H}^m \frac{\Gamma(\alpha+1)}{\Gamma(\alpha+m+1)}
\left(\frac{r}{2}\right)^m j_{\alpha+m}(r)Z^m_\eta(\xi), \qquad
x=r\xi, \quad \xi\in S^{d-1}.
\end{gather*}
The series converges absolutely on each sphere of radius $r$ and
uniformly with respect to $\xi,\eta\in S^{d-1}$.
\end{corollary}
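As a numerical sanity check of Corollary \ref{d>2} (the following sketch and its function names are ours), one can sum the series directly, computing $j_{\alpha+m}$ from the power series \eqref{Bessel} and using $Z^m_\eta(\xi)=C^\alpha_m((\xi\,|\,\eta))/C^\alpha_m(1)$:

```python
import cmath
import math

ALPHA = 0.5   # alpha = (d-2)/2 for d = 3

def gegenbauer(m, alpha, t):
    if m == 0:
        return 1.0
    c_prev, c = 1.0, 2.0 * alpha * t
    for n in range(2, m + 1):
        c_prev, c = c, (2.0*(n + alpha - 1)*t*c - (n + 2*alpha - 2)*c_prev)/n
    return c

def dim_H(m, alpha):
    return 2.0*(m + alpha)*math.gamma(2*alpha + m)/(math.gamma(m + 1)*math.gamma(2*alpha + 1))

def spherical_j(nu, x, terms=60):
    """j_nu(x) = Gamma(nu+1) (x/2)^(-nu) J_nu(x), via the defining power series."""
    return math.gamma(nu + 1.0)*sum(
        (-1)**k/(math.factorial(k)*math.gamma(k + nu + 1.0))*(x/2.0)**(2*k)
        for k in range(terms))

def plane_wave_series(r, t, alpha, terms=60):
    """Right-hand side of the plane wave expansion at x = r*xi, t = (xi|eta)."""
    total = 0.0 + 0.0j
    for m in range(terms):
        zonal = gegenbauer(m, alpha, t)/gegenbauer(m, alpha, 1.0)   # Z^m_eta(xi)
        total += (1j**m)*dim_H(m, alpha)*math.gamma(alpha + 1.0)/math.gamma(alpha + m + 1.0) \
                 * (r/2.0)**m * spherical_j(alpha + m, r) * zonal
    return total

r, t = 3.0, -0.6
lhs = cmath.exp(1j*r*t)
rhs = plane_wave_series(r, t, ALPHA)
```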
\subsection{The generating function of Gegenbauer polynomials}
Take a f\/ixed element $x\neq 0$ from the unit ball in ${\mathbb
R}^d$ and write $x=r\eta$, $\eta\in S^{d-1}$. The function
\[
S^{d-1}\ni \xi \mapsto |\xi- x|^{-2\alpha}=
\left(1-2r(\xi\,|\,\eta) +r^2\right)^{-\alpha}
\]
is clearly a zonal function with pole at $\eta$ corresponding to
the prof\/ile function $(1-2rt +r^2)^{-\alpha}$. The expansion
resulting from \eqref{exp_f} has the form
\begin{gather*}
(1-2r(\xi\,|\,\eta) +r^2)^{-\alpha}= \sum_{m=0}^{\infty} \frac{\Gamma(2\alpha+m)}{\Gamma(2\alpha)\Gamma(m+1)} r^m Z^m_\xi(\eta)
\end{gather*}
and can be reduced to the familiar formula for the generating
function of Gegenbauer polynomials
\begin{gather*}
\left(1-2rt +r^2\right)^{-\alpha}= \sum_{m=0}^{\infty}r^m
C^{\alpha}_m(t),
\end{gather*}
as given e.g.\ in \cite{AA,far,MOS}.
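This reduction is again easy to confirm numerically; the sketch below (ours) sums $\sum_m r^m C^\alpha_m(t)$ using the Gegenbauer three-term recurrence and compares it with $(1-2rt+r^2)^{-\alpha}$.

```python
import math

def gegenbauer(m, alpha, t):
    if m == 0:
        return 1.0
    c_prev, c = 1.0, 2.0 * alpha * t
    for n in range(2, m + 1):
        c_prev, c = c, (2.0*(n + alpha - 1)*t*c - (n + 2*alpha - 2)*c_prev)/n
    return c

def generating_series(r, t, alpha, terms=200):
    # Converges for |r| < 1; truncation is ample for the values below.
    return sum(r**m * gegenbauer(m, alpha, t) for m in range(terms))

alpha, r, t = 1.5, 0.4, -0.7
lhs = (1.0 - 2.0*r*t + r*r)**(-alpha)
rhs = generating_series(r, t, alpha)
```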
\section[The Fourier transforms of ${\bf SO}(d)$-finite functions]{The Fourier transforms of
$\boldsymbol{{\bf SO}(d)}$-f\/inite functions}
In this section we shall present some applications of the plane
wave expansion given in Corollary~\ref{d>2} to problems of Fourier
analysis in Euclidean space. The subject is related to the well
known Bochner formula (cf.\ e.g.\ \cite{AA, Mue}), which
describes the restriction of the Fourier transform to the space
of functions of the form $f(r)P(x)$, where $f$ is a radial
function and $P\in {\mathcal H}^l$ a homogeneous harmonic
polynomial. The result is again a product of the same harmonic
polynomial $P$ with a radial function, which is expressed as the so
called Hankel transform of the original radial factor and is given
by the formula~(\ref{Hankel_trans}) below. The result we present
in Corollary~\ref{Bochner_id} generalizes that relation for the
case of arbitrary homogeneous polynomials and is a combination of
results obtained by the authors in \cite{me3,BDS1} and by F.J.~Gonzalez-Vieli in \cite{Gon}.
\subsection[The case of ${\bf SO}(d)$-finite functions on the sphere]{The case
of $\boldsymbol{{\bf SO}(d)}$-f\/inite functions on the sphere}
We take the Fourier transform of suitable regular (e.g.
$L^1({\mathbb R}^d)$) functions on ${\mathbb R}^d$ as def\/ined by
\[
{\mathcal F} f(x)=(2\pi)^{-\frac{d}{2}}\int_{{\mathbb
R}^d}e^{i(x\,|\, y)}f(y)\,dy,\qquad x\in {\mathbb R}^d.
\]
Observe that the above def\/inition of the Fourier transform also
makes sense in the case when $f$ is a function def\/ined on the
unit sphere --- in this case it can be regarded as the Fourier
transform of a measure supported on the sphere. Especially
interesting is the case when the measure on the sphere comes from
restricting a homogeneous polynomial to the sphere, since in this
way we obtain a so-called ${\bf SO}(d)$-f\/inite measure (its
${\bf SO}(d)$-translates span a f\/inite-dimensional subspace).
\begin{theorem}
\label{Theo:FJG} If $P\in {\mathcal P}^l$, then the Fourier
transform of the measure $P(\xi) d\sigma(\xi)$ with support on the
unit sphere $S^{d-1}$ is given by the following equivalent
formulae
\begin{gather}
{\mathcal F}(P)=\int_{S^{d-1}}e^{i(x\,|\,\eta)}P(\eta)\,
d\sigma(\eta)\nonumber \\
\phantom{{\mathcal F}(P)}{} =
\biggl(\frac{i}{2}\biggr)^l\sum_{k=0}^{[l/2]}
\frac{(-1)^k2^{2k}\Gamma(\alpha+1)}{\Gamma(\alpha+l+1-2k)}
j_{\alpha+l-2k}(|x|)h_{l-2k}(P)(x)
\label{eqn:FT3} \\
\phantom{{\mathcal F}(P)}{}=\left(\frac{i}{2}\right)^l
\sum_{k=0}^{[l/2]}
\frac{(-1)^{k}\Gamma(\alpha+1)}{k!\Gamma(\alpha+l+1-k)}
j_{\alpha+l-k}(|x|)(\Delta^k P)(x) \label{eqn:FJG}
\end{gather}
with $h_{l-2k}(P)$ denoting the harmonic components of $P$ as in
equation~\eqref{can-decomp2}.
\end{theorem}
The formula \eqref{eqn:FJG} has been derived by
F.J.~Gonzalez-Vieli~\cite{Gon} and the equivalence of the two
forms was observed by the present authors and A.~D\c{a}browska in
\cite{BDS1}.
Theorem \ref{Theo:FJG} implies the following interesting
function-theoretic corollary. By comparing the two expressions
(\ref{eqn:FT3}) and (\ref{eqn:FJG}) one may derive the following
multi-step recurrence relation for spherical Bessel functions.
\begin{corollary}\label{Coro:multistep}
If $\alpha\ge0$ is a half odd integer (or an integer) then for any
$l\in {\mathbb Z}_+$ the following relations hold
\begin{gather*}
j_{\alpha+l-s}(r)=
\sum_{k=0}^{s} \frac{s!\,
\Gamma(\alpha+l+1-s)\Gamma(\alpha+l-k-s)}{k!(s-k)!\Gamma(\alpha+l+1-k)\Gamma(\alpha+l-2k)}
\left(\frac{r}{2}\right)^{2(s-k)}j_{\alpha+l-2k}(r)
\end{gather*}
for $s=1,\ldots, [l/2]$.
In terms of the Bessel functions of the first kind, this is
expressed by
\begin{gather}\label{Bess:mult}
\frac{1}{s!}\left(\frac{2}{r}\right)^{s}J_{\alpha+l-s}(r)=
\sum_{k=0}^{s}
\frac{\Gamma(\alpha+l-k-s)\Gamma(\alpha+l+1-2k)}{k!(s-k)!\Gamma(\alpha+l+1-k)\Gamma(\alpha+l-2k)}J_{\alpha+l-2k}(r).
\end{gather}
\end{corollary}
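Relation \eqref{Bess:mult} can be verified numerically from the power series of $J_\nu$ alone; the following sketch (ours) does so for $d=3$ and $l=6$.

```python
import math

def bessel_J(nu, x, terms=60):
    """J_nu(x) from its defining power series."""
    return (x/2.0)**nu * sum(
        (-1)**k/(math.factorial(k)*math.gamma(k + nu + 1.0))*(x/2.0)**(2*k)
        for k in range(terms))

def multistep_rhs(alpha, l, s, r):
    """Right-hand side of the multi-step recurrence for Bessel functions."""
    total = 0.0
    for k in range(s + 1):
        c = (math.gamma(alpha + l - k - s)*math.gamma(alpha + l + 1 - 2*k)
             / (math.factorial(k)*math.factorial(s - k)
                * math.gamma(alpha + l + 1 - k)*math.gamma(alpha + l - 2*k)))
        total += c * bessel_J(alpha + l - 2*k, r)
    return total

alpha, l, s, r = 0.5, 6, 2, 2.3    # d = 3; note s <= [l/2]
lhs = (2.0/r)**s * bessel_J(alpha + l - s, r) / math.factorial(s)
rhs = multistep_rhs(alpha, l, s, r)
```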
It is interesting to note that these latter relations
unify several classical relations satisf\/ied by Bessel functions
like \cite[equation~(4.6.11)]{AA} or \cite[Chapter~3.2.2]{MOS}.
In fact, taking into account the relation
\[
J_{-1/2}(t)=\left(\frac{1}{2}\pi t\right)^{-1/2}\cos{t},
\]
which follows directly from the expansion (\ref{Bessel}), and the
familiar recurrence relations satisf\/ied by~$J_\nu(t)$, namely
\begin{gather*}
\left(\frac{1}{t}\frac{d}{d t}\right)^l (t^{-\nu}J_\nu(t))=
(-1)^l t^{-\nu-l}J_{\nu+l}(t)
\end{gather*}
one can derive from \eqref{Bess:mult} the following relations.
\begin{corollary}[Finite expansions of Bessel functions]
The Bessel functions of integer order satisfy the relation
\begin{gather*}
\frac{1}{n!}\left(\frac{2}{t}\right)^{n}J_n(t)=\sum_{k=0}^{n}\epsilon_k\frac{1}{(n+k)!(n-k)!}J_{2k}(t),
\quad \textrm{where} \quad
\epsilon_k=\left\{\begin{array}{ll}
1, & k=0\\
2, & k=1,2,3\ldots,
\end{array}\right.
\end{gather*}
while those of half odd integer order satisfy
\begin{gather*}
J_{n+1/2}(t)= \sqrt{\frac{2}{\pi t}} \left\{
\sin(t-n\pi/2)\sum_{k=0}^{[n/2]}
\frac{(-1)^k(n+2k)!}{(2k)!(n-2k)!(2t)^{2k}}\right. \\
\left. \phantom{J_{n+1/2}(t)=}{}+
\cos(t-n\pi/2)\sum_{k=0}^{[(n-1)/2]}\frac{(-1)^k(n+2k+1)!}{(2k+1)!(n-2k-1)!(2t)^{2k+1}}
\right\}.
\end{gather*}
\end{corollary}
\subsection{The generalized Bochner formula and the Hankel transform}
For our last topic recall that for suitable functions on the
nonnegative real axis ${\mathbb R}_+$, say, with $\phi$ belonging
to the Schwartz space ${\mathcal S}({\mathbb R})$, one def\/ines
the Hankel transform by the formula
\begin{gather}\label{Hankel_trans}
H_\nu(\phi)(t) = \frac{2^{-\nu}}{\Gamma(\nu+1)}\int_0^\infty
\phi(s)j_\nu(st)s^{2\nu+1}ds.
\end{gather}
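For instance, for the Gaussian profile $\phi(s)=e^{-s^2}$ the transform \eqref{Hankel_trans} can be computed in closed form, $H_\nu(\phi)(t)=2^{-(\nu+1)}e^{-t^2/4}$; the following numerical sketch (ours; the truncation point and step count are ad hoc choices adequate for rapidly decaying profiles) confirms this.

```python
import math

def spherical_j(nu, x, terms=80):
    """j_nu(x) = Gamma(nu+1) (x/2)^(-nu) J_nu(x), via the defining power series."""
    return math.gamma(nu + 1.0)*sum(
        (-1)**k/(math.factorial(k)*math.gamma(k + nu + 1.0))*(x/2.0)**(2*k)
        for k in range(terms))

def hankel(phi, nu, t, upper=8.0, steps=4000):
    """H_nu(phi)(t), with the integral truncated at `upper` and evaluated
    by Simpson's rule."""
    h = upper/steps
    def g(s):
        return phi(s)*spherical_j(nu, s*t)*s**(2*nu + 1)
    total = g(0.0) + g(upper)
    for i in range(1, steps):
        total += (4 if i % 2 else 2)*g(i*h)
    return 2.0**(-nu)/math.gamma(nu + 1.0)*total*h/3.0

nu, t = 2.5, 1.3
numeric = hankel(lambda s: math.exp(-s*s), nu, t)
closed_form = 2.0**(-(nu + 1.0))*math.exp(-t*t/4.0)
```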
Theorem \ref{Theo:FJG} above now immediately implies that the
generalized Bochner formula which was previously obtained in
\cite{me3} can also be expressed by a pair of equivalent formulae.
\begin{corollary}[Generalized Bochner identity]\label{Bochner_id}
If $P\in {\mathcal P}^l$, then the Fourier transform of the
function $f(|x|)P(x)$ is given by the following expressions:
\begin{gather*}
(2\pi)^{-\frac{d}{2}}\int_{{\mathbb R}^{d}}f(|x|)P(x)e^{i(y| x)}dx
= i^l\sum_{k=0}^{[l/2]}
(-1)^k H_{\alpha+l-2k}(s^{2k}f)(|y|)h_{l-2k}(P)(y) \\
\phantom{(2\pi)^{-\frac{d}{2}}\int_{{\mathbb R}^{d}}f(|x|)P(x)e^{i(y|
x)}dx}{} = i^l\sum_{k=0}^{[l/2]} \frac{(-1)^k}{2^k k!}
H_{\alpha+l-k}(f)(|y|)\Delta^{k}P(y).
\end{gather*}
\end{corollary}
This implies in turn the following:
\begin{corollary}[Periodicity relation for the Hankel transform]
For any $\phi\in{\mathcal S}({\mathbb R})$ with~$\alpha$ and $l$
satisfying
conditions of Corollary {\rm \ref{Coro:multistep}} above, the Hankel transform satisfies the following relation
\begin{gather*}
t^{2}H_{\alpha+l}(\phi)(t) = 2(\alpha+l-1)H_{\alpha+l-1}(\phi)(t) -
H_{\alpha+l-2}\left(s^{2}\phi\right)(t).
\end{gather*}
\end{corollary}
\section{Conclusion}
In this paper we have demonstrated that dif\/ferential identities
for homogeneous polynomials like those implied by equations
\eqref{can-decomp2}--\eqref{eqn:harm_coeff} can be ef\/fectively
used for solving problems in harmonic analysis, which so far have
been approached by means of integral identities of the type of
Hecke--Bochner formula. In our opinion, other possibilities of
using this approach should certainly be explored further.
\subsection*{Acknowledgements}
The results contained in this paper were presented at the
conference Symmetry in Nonlinear Mathematical Physics in Kyiv,
June 20--26, 2005 and also at the Seminar Sophus Lie
in Nancy, June 10, 2005. We thank the organizers of those
meetings for enabling us to present these results there.
We are also obliged to the referees for remarks which, as we hope,
enabled us to improve the presentation in the paper. In
particular, the reference~\cite{CF} was indicated by the referee.
\section{Introduction}
In this paper, a graph is understood to be undirected and to have no loops or multiple edges. While graphs with multiple edges could be taken into consideration, it is not necessary to do so, as multiple edges do not add any difficulty or important properties.
A set of vertices in a graph $G$ is said to be independent if no two vertices in the set are joined by an edge. An independent set $M$ of $G$ is called maximal if no independent set of $G$ properly contains $M$. The largest (in terms of cardinality) maximal independent set (or sets) of $G$ is called a maximum independent set of $G$, and a graph is said to be well-covered if every maximal independent set of $G$ is also maximum. A well-covered graph could also be defined by the property of all maximal independent sets having the same cardinality. This notion was introduced by Plummer in \cite{P}. In \cite{BN}, Brown and Nowakowski defined a well-covered weighting of a graph $G$ as a function $w:V(G)\rightarrow \textbf F$ that assigns values to the vertices of $G$ in such a way that $\sum_{x\in M} w(x)$ is a constant for all maximal independent sets $M$ of $G$. It is immediate from the latter definition that one could re-define well-coveredness by saying that a well-covered graph is a graph that admits the constant function equal to $1$ as a well-covered weighting of $G$. We will use Brown and Nowakowski's presentation (notation, nomenclature, etc), although this problem was originally introduced by Caro, Ellingham, Ramey, and Yuster in \cite{CER} and \cite{CY}.
It is easy to show that, once a field $\textbf{F}$ is fixed, the set of all well-covered weightings of a graph $G$ is an $\textbf{F}$-vector space, which is called \emph{the well-covered space of $G$}. The dimension of this vector space over $\textbf{F}$ is called the \emph{well-covered dimension of $G$} and is denoted by $wcdim(G,\textbf F)$. If $wcdim(G, \textbf F)$ does not depend on the field used then the well-covered dimension of $G$ is instead denoted as $wcdim(G)$. The well-covered dimension of a graph $G$ can therefore be considered to be the cardinality of a set of vertices $S$ of $G$ whose weights are independent of each other and on which the weights of the remaining vertices of $G$ are dependent, provided that the weighting is a well-covered weighting. Most of the results obtained in this paper are independent of the characteristic of $\textbf F$, and when the characteristic becomes something to consider we will be careful to remark on it.
Our graph theoretic notation, algebraic notation, and matrix theoretic notation is standard; the reader can look at \cite{W} for any concepts we fail to define. The vertex set of a graph $G$ is denoted by $V(G)$. The cardinality of a set of vertices $V$ is denoted by $|V|$. A field with $q=p^h$ ($p$ prime) elements is denoted by $\textbf F_{q}$. The $n\times n$ identity matrix is denoted by $I_{n}$. The $n\times n$ matrix where each entry is a $1$ is denoted by $J_{n}$. An $m\times 1$ column vector where each entry is a $1$ is denoted by $\textbf 1_{m}$. An $m\times 1$ column vector where each entry is a $0$ is denoted by $\textbf 0_{m}$.
It is relatively simple to calculate the well-covered dimension of a graph $G$, provided $G$ is not too large. One first needs to find all the independent sets of $G$, which can be done using a greedy algorithm. Suppose that the maximal independent sets of $G$ are $M_{i}$ for $i=0,\ldots,k-1$. Then a well-covered weighting $w$ of $G$ is determined by a solution of the linear system of equations formed by selecting a maximal independent set, in this particular instance $M_0$, and setting the system $M_i - M_0=0$ for $i=1,\ldots,k-1$. This system is homogeneous, and can therefore be written in the form $A\textbf x=\textbf0$. Note that $A$ is an $m\times n$ matrix where $m=k-1$ and $n=|V(G)|$. As this system is homogeneous, the nullity of $A$ (here is where $char(\textbf{F})$ could be relevant) is equivalent to $wcdim(G,\textbf F)$. So, $wcdim(G,\textbf F)=n-rank(A)$. In the case when $n=rank(A)$, then $wcdim(G,\textbf F)=0$, which implies that in this case the only possible well-covered weighting is the $0$ function.
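The procedure just described is easy to mechanize. The sketch below (our illustration; the graph encoding and function names are ours) enumerates the maximal independent sets by brute force, forms the matrix $A$ over $\mathbb{Q}$, and computes $wcdim(G,\mathbb{Q})=n-rank(A)$ by Gaussian elimination with exact rational arithmetic.

```python
from fractions import Fraction
from itertools import combinations

def wcdim(n, edges):
    """Well-covered dimension over Q of the graph on vertices 0..n-1."""
    E = {frozenset(e) for e in edges}
    # All independent sets, by brute force (fine for small graphs).
    ind = [set(S) for r in range(n + 1) for S in combinations(range(n), r)
           if all(frozenset(p) not in E for p in combinations(S, 2))]
    M = [S for S in ind if not any(S < T for T in ind)]   # the maximal ones
    # The homogeneous system M_i - M_0 = 0, one row per i >= 1.
    rows = [[Fraction(int(v in Mi) - int(v in M[0])) for v in range(n)]
            for Mi in M[1:]]
    # Gaussian elimination over Q to get rank(A); wcdim = n - rank(A).
    rank, col = 0, 0
    while rank < len(rows) and col < n:
        piv = next((i for i in range(rank, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col] != 0:
                f = rows[i][col] / rows[rank][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return n - rank

# Two small examples: K_3 and the 4-cycle C_4.
K3 = [(0, 1), (0, 2), (1, 2)]
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
```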
For the remainder of this paper, we shall concern ourselves only with the determining of the well-covered dimensions for various individual graphs and graph families. We start by recalling a lemma from \cite{BN}, as it will allow us to focus only on connected graphs.
\begin{lemma}[Brown \& Nowakowski \cite{BN}]
Let $G$ and $H$ be graphs. Then
\[
wcdim(G \cup H, \textbf F) = wcdim(G, \textbf F)+wcdim(H, \textbf F)
\]
\end{lemma}
The family of (connected) graphs that is possibly the easiest to attack when looking for well-covered dimensions is the family of complete graphs. By simply looking at the maximal independent sets of $K_n$ we get that $wcdim(K_{n})=1$. Similarly, using the previous lemma, we get $wcdim(\overline{K_n})=n$.
\section{The Well-covered dimension of certain families of graphs}
Only using the technique mentioned above, it is easy to find the well-covered dimension of several families of graphs.
\begin{example}
(a) $wcdim(Pe)=0$, where $Pe$ is the Petersen graph. \\
(b) The two graphs in figure \ref{SG1andSG2} below are the smallest graphs such that their well-covered dimension depends on the characteristic of the base field ($SG_1$) or is zero ($SG_2$). In fact,
\[
wcdim(SG_{1},\textbf{F}) =
\left\{
\begin{array}{ll}
3 & \text{if } char(\textbf F)=2 \\
2 & \text{otherwise}
\end{array}
\right.
\]
\begin{figure}[ht]
\centering
\includegraphics[height=1.5in]{SG1picture.pdf} \hspace{1in}
\includegraphics[height=1.5in]{SG2picture.pdf}
\caption{$SG_1$ (left) and $SG_2$ (right).}
\label{SG1andSG2}
\end{figure}
\end{example}
\newpage
A similar result is obtained for crown graphs. Recall that, for any $n>2$, the crown graph $S_{n}^{0}$ is formed by removing a perfect matching from $K_{n,n}$. Though not specifically stated as such, it was proven in \cite{BN} that $wcdim(S_{n}^{0},\textbf F)=n-1$, if $char(\textbf F)=0$, and $wcdim(S_{n}^{0},\textbf F)=n$ if both $char(\textbf F)$ and $n$ are even. We shall extend this result to allow us to calculate the well-covered dimensions of all crown graphs over all fields.
\begin{theorem}
Let $S_{n}^{0}$ denote a crown graph, for all $n\in \mathbb{N}$. Then,
\[
wcdim(S_{n}^{0},\textbf F) =
\left\{
\begin{array}{ll}
n & \text{if } char(\textbf F)= p\neq 0 \text{ and } p\mid (n-2) \\
n-1 & \text{otherwise}
\end{array}
\right.
\]
\end{theorem}
\begin{proof}
Let $K_{V_{1},V_{2}}$ be the complete bipartite graph with $V_{1}=\{a_{1},\cdots ,a_{n}\}$ and $V_{2}=\{b_{1},\cdots ,b_{n}\}$ and let $a_{1}b_{1},\cdots ,a_{n}b_{n}$ be the perfect matching that is removed from $K_{V_{1},V_{2}}$ to form $S_{n}^{0}$. The maximal independent sets of $S_{n}^{0}$ are $\{a_{i}, b_{i}\}$ for $i=1,2,3,\cdots ,n$, and $V_{1}$ and $V_{2}$. Setting the sum of each of the weights on the maximal independent sets equal to that of the weights on the vertices of $V_{2}$, we find that the linear system corresponding to the well-covered weightings is $A\textbf{x}=0$, where
\[
A=\begin{pmatrix}
I_{n} & I_{n}-J_{n}\\
\textbf{1}_{n}^{T} &
-\textbf{1}_{n}^{T}\\
\end{pmatrix},
\]
an $(n+1)\times 2n$ matrix. Subtracting the top $n$ rows from the bottom yields
\[
\begin{pmatrix}
I_{n} & I_{n}-J_{n}\\
\textbf{0}_{n}^{T} & \left(n-2\right)\textbf{1}_{n}^{T}\\
\end{pmatrix}.
\]
It follows that we have two possibilities depending on whether or not $char(\textbf F)$ divides $n-2$. The theorem follows after finding the rank of this matrix in either case.
\end{proof}
\begin{theorem}\label{k-partite}
$\displaystyle{wcdim(K_{|n_{0}|,...,|n_{k-1}|}) =\sum_{i=0}^{k-1} |n_{i}| -(k-1)}$, where $K_{|n_{0}|,\ldots ,|n_{k-1}|}$ is the (obvious) complete $k$-partite graph.
\end{theorem}
\begin{proof}
Let $f$ be a well-covered weighting of $K_{|n_{0}|,...,|n_{k-1}|}$. The maximal independent sets of $K_{|n_{0}|,...,|n_{k-1}|}$ are $n_{i}$ for $i=0,\ldots ,k-1$. Setting the sum of each of the weights on the maximal independent sets equal to that of the weights on the vertices of $n_{k-1}$, we find that the linear system corresponding to the well-covered weightings is $A\textbf{x}=0$, where
\[
A=\begin{pmatrix}
\textbf{1}_{|n_{0}|}^{T} & \textbf{0}_{|n_{1}|}^{T} & \textbf{0}_{|n_{2}|}^{T} & \cdots & \textbf{0}_{|n_{k-2}|}^{T} & -\textbf{1}_{|n_{k-1}|}^{T}\\
\textbf{0}_{|n_{0}|}^{T} & \textbf{1}_{|n_{1}|}^{T} & \textbf{0}_{|n_{2}|}^{T} & \cdots & \textbf{0}_{|n_{k-2}|}^{T} & -\textbf{1}_{|n_{k-1}|}^{T}\\
\textbf{0}_{|n_{0}|}^{T} & \textbf{0}_{|n_{1}|}^{T} & \textbf{1}_{|n_{2}|}^{T} & \cdots & \textbf{0}_{|n_{k-2}|}^{T} & -\textbf{1}_{|n_{k-1}|}^{T}\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
\textbf{0}_{|n_{0}|}^{T} & \textbf{0}_{|n_{1}|}^{T} & \textbf{0}_{|n_{2}|}^{T} & \cdots & \textbf{0}_{|n_{k-2}|}^{T} & -\textbf{1}_{|n_{k-1}|}^{T}\\
\end{pmatrix},
\]
a $(k-1)\times (\sum_{i=0}^{k-1} |n_{i}|)$ matrix. $A$ has rank $k-1$. Hence the nullity is $\sum_{i=0}^{k-1} |n_{i}| -(k-1)$, which implies that $wcdim(K_{|n_{0}|,...,|n_{k-1}|}) =\sum_{i=0}^{k-1} |n_{i}| -(k-1)$.
\end{proof}
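Theorem \ref{k-partite} can be confirmed computationally for small cases. The self-contained sketch below (ours; the vertex encoding is our own convention) builds a complete multipartite graph, computes the well-covered dimension over $\mathbb{Q}$ by brute force, and compares it with the value $\sum_{i} |n_{i}| -(k-1)$.

```python
from fractions import Fraction
from itertools import combinations

def wcdim(n, edges):
    """Well-covered dimension over Q: n minus the rank of the system M_i - M_0 = 0."""
    E = {frozenset(e) for e in edges}
    ind = [set(S) for r in range(n + 1) for S in combinations(range(n), r)
           if all(frozenset(p) not in E for p in combinations(S, 2))]
    M = [S for S in ind if not any(S < T for T in ind)]
    rows = [[Fraction(int(v in Mi) - int(v in M[0])) for v in range(n)]
            for Mi in M[1:]]
    rank, col = 0, 0
    while rank < len(rows) and col < n:
        piv = next((i for i in range(rank, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col] != 0:
                f = rows[i][col] / rows[rank][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return n - rank

def complete_multipartite(sizes):
    """Vertices 0..sum(sizes)-1 split into consecutive parts; edges join
    every pair of vertices lying in different parts."""
    parts, start = [], 0
    for s in sizes:
        parts.append(range(start, start + s))
        start += s
    edges = [(u, v) for i, P in enumerate(parts) for Q in parts[i + 1:]
             for u in P for v in Q]
    return start, edges

sizes = [2, 3, 4]
n, edges = complete_multipartite(sizes)
predicted = sum(sizes) - (len(sizes) - 1)   # the value given by the theorem
```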
\begin{corollary}\label{turan}
$wcdim\left(T\left(n,r\right)\right)=\left(n\bmod r\right)\lceil n/r\rceil+\left(r-\left(n\bmod r\right)\right)\lfloor n/r\rfloor-(r-1)$, where $T(n,r)$ denotes a Tur\'an graph.
\end{corollary}
We can see immediately that if $n$ is divisible by $r$, then corollary \ref{turan} reduces to $wcdim\left(T\left(n,r\right)\right)=n-(r-1)$.
The behaviors, in terms of well-covered weightings, of paths and cycles are very similar. Hence, we will study these two families simultaneously.
Consider $G$ to be an $n$-path or an $(n+2)$-cycle, for $n\geq 6$. Label six `consecutive' vertices $a,b,c,d,e$ and $f$ as in the picture below. Let $w$ be a well-covered weighting of $G$, and let $M_1$ and $M_2$ be two maximal independent sets of $G$ that contain the same vertices except that $M_1$ contains $\{a,c,f\}$ and $M_2$ contains $\{a,d,f\}$ instead. Locally, these two independent sets are represented in the figure below.
\begin{figure}[ht]
\centering
\includegraphics[height=.7in]{paths2.pdf}
\caption{$M_1$ and $M_2$ on six consecutive vertices.}
\label{sixconsecutivevertices}
\end{figure}
Since $M_1$ and $M_2$ just `interchange' $c$ and $d$, we get $w(c)=w(d)$. It is now immediate that all vertices of $C_n$, for $n\geq 8$, have the same weight for all well-covered weightings of this graph. Hence, $wcdim(C_n) \leq 1$ for all $n\geq 8$.
Now consider two maximal independent sets $N_1$ and $N_2$ of $C_n$, with $n\geq 8$, that contain the same vertices outside of a string of seven consecutive vertices, where $N_1$ and $N_2$ contain four and three vertices respectively. These seven vertices, with the vertices contained in $N_1$ and $N_2$ are represented in figure \ref{differentnumberofelements} below.
\begin{figure}[ht]
\centering
\includegraphics[height=.7in]{paths3.pdf}
\caption{Two maximal independent sets with different cardinality.}
\label{differentnumberofelements}
\end{figure}
It follows that this graph admits maximal independent sets with different cardinalities; since all vertices carry the same weight, that common weight must be zero, and thus $wcdim(C_n) =0$ for all $n\geq 8$.
Similarly, from the argument associated to figure \ref{sixconsecutivevertices}, if $n\geq 6$ and $V(P_n)=\{ v_1, v_2, \cdots , v_n\}$ (edges connecting $v_i$ with $v_{i+1}$) then the vertices $v_3, \cdots , v_{n-2}$ must have the same weight for all well-covered weightings of $P_n$. Moreover, for small values of $n$ it is easy to see that these weights must be zero. For larger values of $n$, figure \ref{differentnumberofelements} provides a way to construct maximal independent sets with different cardinality, which forces $w(v_3) = \cdots =w(v_{n-2})=0$.
Finally, we can construct two maximal independent sets of $P_n$ that share all but one vertex, which is $v_1$ for one of them and $v_2$ for the other. This can be seen in the figure below.
\begin{figure}[ht]
\centering
\includegraphics[height=.7in]{paths1picture.pdf}
\caption{Two maximal independent sets on the first four vertices.}
\label{fourendvertices}
\end{figure}
It follows that $w(v_1)=w(v_2)$, and symmetrically that $w(v_{n-1})=w(v_n)$, for all well-covered weightings $w$ of $P_n$. Lastly, we want to remark that $w(v_1)$ is independent of $w(v_n)$, and thus, adding simple computations to the arguments above we obtain the following theorem:
\begin{theorem}
If $w$ is a well-covered weighting of $P_{n}$ and $n\geq 5$, then $w(v_{1})=w(v_{2})$, $w(v_{3})=\ldots=w(v_{n-2})=0$, and $w(v_{n-1})=w(v_{n})$. Moreover, $wcdim(P_{2})=1$ and $wcdim(P_{n})=2$ if $n>2$.
Also,
\begin{enumerate}
\item[(i)] $wcdim(C_{n})=0$, if $n\geq 8$,
\item[(ii)] $wcdim(C_{n})=1$, if $n=3,5,7$,
\item[(iii)] $wcdim(C_{6})=2$,
\item[(iv)] $wcdim(C_{4})=3$.
\end{enumerate}
\end{theorem}
\begin{remark}
The well-covered dimensions of paths have already been computed in \cite{CY} using methods different from the ones used in this paper.
\end{remark}
Now we look at the family of gear graphs. The gear graph on $2n+1$ vertices, denoted $G_{n}$, is the graph with vertex set $V(G_{n})=\{v_{0},...,v_{2n-1},v_{c}\}$ where:
\begin{enumerate}
\item $v_{i}$ is adjacent to $v_{i-1\bmod 2n}$ and $v_{i+1\bmod 2n}$ for $0\leq i\leq 2n-1$.
\item if $i\in 2\mathbb{Z}$, then $v_{i}$ is adjacent to $v_{c}$.
\end{enumerate}
We can compute the well-covered dimensions of the gear graphs using the same methods we used to compute the well-covered dimensions of the cycle graphs.
\begin{corollary}
Let $G_n$ be the gear graph on $2n+1$ vertices. Then,
\[
wcdim(G_{n})= \left\{
\begin{array}{ll}
3 & \mbox{if } n=3 \\
0 & \mbox{if } n>3
\end{array}
\right.
\]
\end{corollary}
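The values in the theorem and the corollary above can be checked by brute force for small graphs: enumerate all maximal independent sets, impose equal weight-sums, and compute the dimension of the solution space as $n-rank(N)$. The following Python sketch (ours, not part of the paper; the helpers \texttt{wcdim}, \texttt{path}, \texttt{cycle}, \texttt{gear} are introduced only for this illustration, and the computation is over the rationals) does exactly that.

```python
from itertools import combinations

import numpy as np

def maximal_independent_sets(n, edges):
    """Enumerate all maximal independent sets of a graph by brute force."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    sets = []
    for r in range(n + 1):
        for cand in combinations(range(n), r):
            S = set(cand)
            if any(v in adj[u] for u in S for v in S):
                continue  # not independent
            if all(adj[v] & S for v in range(n) if v not in S):
                sets.append(S)  # maximal: every outside vertex has a neighbour in S
    return sets

def wcdim(n, edges):
    """Dimension of the space of well-covered weightings (over the rationals)."""
    rows = np.array([[v in S for v in range(n)]
                     for S in maximal_independent_sets(n, edges)], dtype=int)
    N = rows[1:] - rows[0]  # set all weight-sums equal to the first one
    return n - (np.linalg.matrix_rank(N) if len(N) else 0)

def path(n):
    return n, [(i, i + 1) for i in range(n - 1)]

def cycle(n):
    return n, [(i, (i + 1) % n) for i in range(n)]

def gear(n):
    # cycle v_0, ..., v_{2n-1} plus a hub (vertex 2n) joined to the even v_i
    edges = [(i, (i + 1) % (2 * n)) for i in range(2 * n)]
    edges += [(i, 2 * n) for i in range(0, 2 * n, 2)]
    return 2 * n + 1, edges
```

For instance, `wcdim(*cycle(4))` returns 3 and `wcdim(*gear(3))` returns 3, matching the statements above.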
\section{Blowups and lexicographic products}
In this section we look at the well-covered dimension of graphs that can be constructed from known ones by using various techniques. We begin with a definition.
\begin{definition}
Let $G$ be a graph and $t\in \mathbb{N}$. A $t$-blowup of a vertex $v_i\in V(G)$ replaces $v_i$ by an independent set $V_{v_i} =\{v_{i1}, v_{i2}, \cdots , v_{it} \}$ that `takes the place' of $v_i$. More precisely, wherever there was an edge joining $v_i$ to $w\in V(G)$ there is an edge joining each $v_{ij}$ with $w$.
The graph obtained by the $t$-blowup of $v_i$ will be denoted $G(tv_i)$. Similarly, for $v,w\in V(G)$ and $s,t \in \mathbb{N}$ we denote the `double blowup' $G(tv)(sw)$ by $G(tv , sw)$. For multiple blowups we extend the notation for double blowups in the natural way.
\end{definition}
Note that $G(1v)=G$ for all $v\in V(G)$.
\begin{lemma}\label{G(tv)}
Let $G$ be a graph with $V(G) = \{v_1, \cdots , v_n\}$, and $m = wcdim(G, \textbf F)$. Let $H= G(tv_1)$, where $t \in \mathbb{N}$. Then, $wcdim(H, \textbf F) = m+t-1$.
\end{lemma}
\begin{proof}
We begin by noticing that a maximal independent set of $G$ not containing $v_1$ is also a maximal independent set of $H$, and if $S = \{v_{1}, v_{i_2}, \cdots, v_{i_r}\}$ is a maximal independent set of $G$ then
\[
S' = \{v_{11}, \cdots , v_{1t}, v_{i_2}, \cdots , v_{i_r}\}
\]
is a maximal independent set of $H$. Moreover, it is easy to see that every maximal independent set of $H$ must be of one of these two types.
Let $M$ and $M(t)$ be the matrices associated to the systems of equations arising from looking for well-covered weightings of $G$ and $H$ respectively. We notice that $M(t)$ has $t-1$ more columns than $M$ but that it has exactly the same number of rows, and in fact the same rank as $M$, which is $n-m$. The result follows.
\end{proof}
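The rank argument can be seen on a concrete instance (a sketch of ours, not part of the proof; the matrices below are the row-differenced systems, so their rank is $n-m$). Take $G=P_3$, whose maximal independent sets are $\{v_2\}$ and $\{v_1,v_3\}$, and blow up the middle vertex with $t=2$.

```python
import numpy as np

# G = P_3: maximal independent sets {v2} and {v1, v3} (columns v1, v2, v3).
rows_G = np.array([[0, 1, 0],
                   [1, 0, 1]])

# H = G(2 v2): v2 becomes the independent pair {v2a, v2b}; the maximal
# independent sets are {v2a, v2b} and {v1, v3} (columns v1, v2a, v2b, v3).
rows_H = np.array([[0, 1, 1, 0],
                   [1, 0, 0, 1]])

# Difference the rows: both systems have the same number of rows and the
# same rank, but H has t - 1 = 1 extra column.
rank_G = np.linalg.matrix_rank(rows_G[1:] - rows_G[0])  # 1
rank_H = np.linalg.matrix_rank(rows_H[1:] - rows_H[0])  # 1

wc_G = 3 - rank_G  # wcdim(P_3) = 2
wc_H = 4 - rank_H  # wcdim(H) = m + t - 1 = 2 + 2 - 1 = 3
```

The extra $t-1$ columns with unchanged rank are exactly what produces $wcdim(H)=m+t-1$.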
By using this lemma repeatedly in a graph that is constructed from $G$ by a sequence of blowups of vertices of $G$ we get the following theorem.
\begin{theorem}
Let $G$ be a graph with $V(G) = \{v_1, \cdots , v_n\}$ and $m = wcdim(G, \textbf F)$. Let $H= G(t_1v_1, t_2v_2, \cdots , t_nv_n)$, where $t_i \in \mathbb{N}$ for all $i=1,2,\cdots ,n$. Then,
\[
wcdim(H, \textbf F) = (m-n) + \sum_{i=1}^n t_i
\]
\end{theorem}
Now we will look at the lexicographic product of graphs. We start with a definition.
\begin{definition}
The lexicographic product of $G$ and $H$, denoted $G \bullet H$, is the graph with vertex set $V(G)\times V(H)$ and edges joining $(g,h)$ and $(g',h')$ if and only if either $gg' \in E(G)$, or $g=g'$ and $hh'\in E(H)$.
\end{definition}
\begin{corollary}\label{blowlex}
Let $G$ be a graph in $n$ vertices with $wcdim(G, \textbf F) =m$. Then,
\[
wcdim\left(G \bullet \overline{K_t}, \textbf F \right)=m +n (t-1)
\]
where $t\in \mathbb{N}$.
\end{corollary}
\begin{proof}
Assume that $V(G)= \{v_1, \cdots , v_n\}$. The result follows from the previous theorem and the fact that $G(tv_1, tv_2, \cdots , tv_n) \cong G \bullet \overline{K_t}$.
\end{proof}
The previous corollary is also a corollary of theorem \ref{lex}. In order to prove this theorem we need a couple of linear algebra results, which we state in full detail but do not prove.
\begin{lemma}\label{linalg1}
Let $M$ be an $n\times m$ matrix and let $N$ be the $(n-1) \times m$ matrix obtained by subtracting the first row $R_1$ of $M$ from all the other rows of $M$, and then deleting $R_1$. If $rank(N)=k$, then
\[
rank(M) = \left\{
\begin{array}{ll}
k & \mbox{if } R_1 \mbox{ is dependent on the other rows of } M \\
k+1 & \mbox{if } R_1 \mbox{ is independent from the other rows of } M
\end{array}
\right.
\]
\end{lemma}
For the next couple of results, we denote the Kronecker (or tensor) product of two matrices, $M$ and $A$, by $M\otimes A$.
\begin{remark}\label{linalg2}
Let $N$, $B$ and $C$ be the matrices obtained by the construction described in lemma \ref{linalg1} from the matrices $M$, $A$, and $M\otimes A$ respectively, where we re-arrange rows if necessary so that, when possible, the first row is dependent on the others. Then $rank(C)=rank(M\otimes A)$ whenever $M$ or $A$ has a row that depends on its other rows, since in these cases we can always choose a row of $M\otimes A$ that depends on the other rows of this matrix. On the other hand, if both $M$ and $A$ have linearly independent rows, then so does $M\otimes A$ (using $rank(A\otimes M) = rank(A)\,rank(M)$), and thus $rank(C)=rank(M\otimes A)-1$.
If we now use lemma \ref{linalg1}, and assume $rank(N)=k$ and $rank(B)=q$, then
\[
rank(C) = \left\{
\begin{array}{ll}
kq & \mbox{if both } M \mbox{ and } A \mbox{ have linearly dependent rows} \\
k(q+1) & \mbox{if } M \mbox{ has linearly dependent rows and } A \mbox{ does not} \\
(k+1)q & \mbox{if } A \mbox{ has linearly dependent rows and } M \mbox{ does not} \\
(k+1)(q+1)-1 & \mbox{if both } M \mbox{ and } A \mbox{ have linearly independent rows}
\end{array}
\right.
\]
\end{remark}
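The rank formulas of the remark can be checked numerically on small matrices (a sketch of ours; the concrete matrices are chosen only for illustration). Below, \texttt{construct} implements the operation of lemma \ref{linalg1}; the first example realizes the case where $A$ has dependent rows and $M$ does not, the second the case where both have independent rows.

```python
import numpy as np

def construct(T):
    """Subtract the first row from the others, then delete it (lemma's operation)."""
    return T[1:] - T[0]

# M has linearly independent rows; A has a dependent first row (row0 = row2 - row1).
M = np.eye(2)
A = np.array([[1., 1.], [0., 1.], [1., 2.]])

k = np.linalg.matrix_rank(construct(M))  # k = 1
q = np.linalg.matrix_rank(construct(A))  # q = 2

# 'A has linearly dependent rows and M does not': rank(C) = (k+1)q
C = construct(np.kron(M, A))
assert np.linalg.matrix_rank(C) == (k + 1) * q  # = 4

# 'both M and A have linearly independent rows': rank(C) = (k+1)(q+1)-1
A2 = np.eye(2)
q2 = np.linalg.matrix_rank(construct(A2))  # q2 = 1
C2 = construct(np.kron(M, A2))
assert np.linalg.matrix_rank(C2) == (k + 1) * (q2 + 1) - 1  # = 3
```

In the first case the first row of $M\otimes A$ is itself a difference of two other rows, so the construction does not lower the rank; in the second case it does, by exactly one.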
Now we have all the tools needed to prove the following theorem.
\begin{theorem}\label{lex}
Let $G$ and $H$ be graphs with $|V(G)|= a$, $|V(H)|= b$, $wcdim(G, \textbf F)=n$, and $wcdim(H, \textbf F)=m$. Then,
\[
wcdim(G\bullet H, \textbf F) = nb+am - nm +\delta_{b-m+1,\, i}(n-a) +\delta_{a-n+1,\, j}(m- b)
\]
where $\delta_{xy}$ represents the Kronecker delta, and $i$, $j$ are the number of maximal independent sets of $H$ and $G$ respectively.
\end{theorem}
\begin{proof}
We first notice that if $S = \{v_{i_1}, v_{i_2}, \cdots, v_{i_r}\}$ is a maximal independent set of $G$ then
\[
S' = \{w_{1i_1}, \cdots , w_{t_1i_1}, w_{1i_2}, \cdots , w_{t_2i_2}, \cdots , w_{1i_r}, \cdots , w_{t_ri_r}\}
\]
is a maximal independent set of $G\bullet H$, where $\{ w_{1i_j}, \cdots , w_{t_ji_j} \}$ is a maximal independent set of $H$ for all $j=1,2, \cdots , r$. Moreover, it is easy to see that every maximal independent set of $G\bullet H$ must be obtained this way.
Set the weight-sums of each of the independent sets of $G$ equal to zero. Let $M$ be the matrix representing that homogeneous system of equations. Note that the matrix $N$ needed to find $wcdim(G, \textbf F)$ is obtained from $M$ by using the construction described in lemma \ref{linalg1}. Similarly, by repeating this process with $H$ we obtain $B$ (needed for finding $wcdim(H, \textbf F)$) out of $A$ (found by setting the weight-sums of the maximal independent sets of $H$ equal to zero).
Now we notice that (because of the first paragraph in this proof) $A\otimes M$ is the matrix associated to the homogeneous system given by setting the weight-sums of all the independent sets of $G\bullet H$ equal to zero. It follows that we are interested in finding the rank of the matrix $C$ obtained from $A\otimes M$ by using the construction described in lemma \ref{linalg1}.
Since $rank(N) = a-n$, $rank(B)=b-m$, and $|V(G\bullet H)|=ab$, then using that a matrix has linearly dependent rows if and only if its rank is not equal to its number of rows, and remark \ref{linalg2}, we get
{\small
\[
wcdim(G\bullet H, \textbf F) = \left\{
\begin{array}{ll}
nb+am - nm & \mbox{if } i\neq b-m+1, \ j\neq a-n+1 \\
nb+am- nm+n-a & \mbox{if } i=b-m+1, \ j\neq a-n+1 \\
nb+am - nm+m-b & \mbox{if } i\neq b-m+1, \ j= a-n+1 \\
nb+am - nm +m-b +n- a & \mbox{if } i=b-m+1, \ j= a-n+1
\end{array}
\right.
\]
}
where $i$, $j$ represent the number of maximal independent sets of $H$ and $G$ respectively (which are the number of rows of $A$ and $M$ respectively).
The result follows from the definition of the Kronecker delta.
\end{proof}
\begin{corollary}
Let $G$ and $H$ be graphs with more maximal independent sets than vertices, and such that $|V(G)|= a$, $|V(H)|= b$, $wcdim(G, \textbf F)=n$ and $wcdim(H, \textbf F)=m$. Then,
\[
wcdim(G\bullet H, \textbf F) = nb+am - nm
\]
\end{corollary}
\begin{remark}
As mentioned above, corollary \ref{blowlex} is a corollary of theorem \ref{lex}. In order to see this we just need to notice that $\overline{K_t}$ has exactly one maximal independent set and that $wcdim(\overline{K_t}) = |V(\overline{K_t})|=t$.
\end{remark}
\section{Introduction}\label{sec:Introduction}
The aim of this paper is to present a new exact solution which represents a nonlinear internal water wave. The solution in this study is constructed by adapting Pollard's celebrated solution so that it successfully describes internal water waves. In 1970, Pollard \cite{Pollard1970} presented a surface wave solution in which he extended the remarkable solution of Gerstner \cite{Gerstner1809} by including the effects of the rotation of Earth.\\
An extensive analysis of Gerstner's solution was presented in \cite{Constantin2001b,Constantin2011,Henry2008a}. Recently, there has been a significant research activity deriving Gerstner-like solutions which model various geophysical oceanic waves including equatorially-trapped surface and internal waves \cite{Constantin2012b,Constantin2013b,Constantin2014,Henry2016b,Hsu2014a} or waves in the presence of depth invariant underlying currents \cite{Henry2013,Henry2015,Henry2016,Kluczek2016,Kluczek2017,Sanjurjo2017}. Furthermore, an instability analysis of Gerstner's solution was presented in \cite{Constantin-Germain2013}. The mathematical importance of the recently derived and analysed Gerstner-like solutions is presented in a form of a review paper in \cite{Henry2017,Ionescu-Kruse2017,Johnson2017}.\\
In the rotating flow of Pollard's solution the wave orbital motion experiences a very slight cross-wave tilt associated with the planetary vorticity. Therefore, a Pollard-like solution is better suited than Gerstner's to describe large-scale global waters; since Gerstner's solution describes the motion of a particle in the vertical plane \cite{Constantin2001b,Constantin2012b,Gerstner1809,Henry2008a}, it is more adequate for flows close to the Equator, where the force tilting the particle paths vanishes and the orbits are indeed vertical. The primary novel feature of this paper is that we present an exact solution representing an internal water wave. The Pollard-like internal water wave solution established in this paper still describes circular particle orbits, but now the orbits lie in a plane slightly tilted from the vertical; the solution is therefore fully three-dimensional and is essentially different from the internal water wave solutions derived for the equatorial region \cite{Constantin2013b,Constantin2014}; cf. \cite{Boyd2018} for a discussion of the oceanographical relevance of these solutions.\\
The internal water waves in a stably stratified ocean may describe the oscillation of a thermocline \cite{Constantin-Johnson2015,Cushman-Beckers2011}. The thermocline is a sharp interface separating two horizontal layers of ocean water with constant but different densities \cite{Cushman-Beckers2011,Garrison-Ellis2016,Vallis2006}. The thermocline is a phenomenon occurring also at higher latitudes; it is therefore important to emphasise the need for a solution which describes internal water waves applicable beyond the equatorial region, as is the case in this paper. The mechanism of generation of the oscillation of the thermocline is, regrettably, outside the scope of this paper; cf. \cite{Constantin-Johnson2015,Johnson-McPhaden-Firing,Johnson2017} for a detailed study of the thermocline and its interaction with the Equatorial Undercurrent.\\
Subsequently to the work on Gerstner-like solutions, there have been developments in the analysis of Pollard's solution for surface waves \cite{Constantin-Monismith2017,Ionescu-Kruse2015b,Ionescu-Kruse2016}. A Pollard-like solution for surface waves in the presence of mean currents and rotation was derived in a recent research paper \cite{Constantin-Monismith2017}, with an instability analysis of the Pollard-like solution presented in \cite{Ionescu-Kruse2016}. Moreover, the surface wave solution is globally dynamically possible \cite{Sanjurjo2018}. Our purpose is to modify Pollard's solution to obtain a valid model describing nonlinear internal water waves. By empirically examining the developed solution, we hope to produce a more complete understanding of internal oceanic flows \cite{Constantin-Johnson2016b}. We build on this analysis to identify the dispersion relation for the internal waves, describing the oscillation of the thermocline, which may be expressed, after a suitable non-dimensional transformation, as a polynomial of degree four. An analysis of the polynomial identifies one mode of the internal water wave that is a standard internal gravity wave modified very slightly by Earth's rotation.
\section{The governing equations}\label{sec:The governing equations}
The flow pattern we investigate is described in a rotating frame with the origin at a point on Earth's surface. Therefore, the $(x,y,z)$ Cartesian coordinates represent the directions of the longitude, latitude and local vertical, respectively. The governing equations for the geophysical ocean waves are given by the Euler equations \cite{Cushman-Beckers2011,Vallis2006}
\begin{equation*}\label{eq:Governing equation 1}
\begin{cases}
u_t+uu_x+vu_y+wu_z+2\Omega w\cos\phi-2\Omega v\sin\phi=-\frac{1}{\rho}P_x,\\
v_t+uv_x+vv_y+wv_z+2\Omega u\sin\phi=-\frac{1}{\rho}P_y,\\
w_t+uw_x+vw_y+ww_z-2\Omega u\cos\phi=-\frac{1}{\rho}P_z-g,
\end{cases}
\end{equation*}
coupled with the equation of mass conservation
\begin{equation*}\label{eq:Mass conservation}
\rho_t+u\rho_x+v\rho_y+w\rho_z=0,
\end{equation*}
together with the equation for incompressibility
\begin{equation}\label{eq:Incompressibility}
u_x+v_y+w_z=0.
\end{equation}
Here $t$ is time, $\phi$ represents the latitude, $g=9.81$m$s^{-2}$ is the constant gravitational acceleration at Earth's surface, $\rho$ is the water's density, and $P$ is the pressure, while $u$, $v$ and $w$ are the respective fluid velocity components. Earth is taken to be a sphere of radius $R=6371$ km, rotating with the constant rotational speed $\Omega = 7.29\times10^{-5}$rad$\cdot s^{-1}$ round the polar axis towards the east.
\begin{figure}[th]
\begin{center}
\begin{tikzpicture}
\draw[line width=0.5pt] (0, 0) .. controls(0.1,-0.1)and(0.3,-0.1) .. (0.4,0) .. controls(0.45,0.1) .. (0.5, 0);
\draw[line width=0.5pt] (0.5, 0) .. controls(0.6,-0.1)and(0.8,-0.1) .. (0.9,0) .. controls(0.95,0.1) .. (1, 0);
\draw[line width=0.5pt] (1, 0) .. controls(1.1,-0.1)and(1.3,-0.1) .. (1.4,0) .. controls(1.45,0.1) .. (1.5, 0);
\draw[line width=0.5pt] (1.5, 0) .. controls(1.6,-0.1)and(1.8,-0.1) .. (1.9,0) .. controls(1.95,0.1) .. (2, 0);
\draw[line width=0.5pt] (2, 0) .. controls(2.1,-0.1)and(2.3,-0.1) .. (2.4,0) .. controls(2.45,0.1) .. (2.5, 0);
\draw[line width=0.5pt] (2.5, 0) .. controls(2.6,-0.1)and(2.8,-0.1) .. (2.9,0) .. controls(2.95,0.1) .. (3, 0);
\draw[line width=0.5pt] (3, 0) .. controls(3.1,-0.1)and(3.3,-0.1) .. (3.4,0) .. controls(3.45,0.1) .. (3.5, 0);
\draw[line width=0.5pt] (3.5, 0) .. controls(3.6,-0.1)and(3.8,-0.1) .. (3.9,0) .. controls(3.95,0.1) .. (4, 0);
\draw[line width=0.5pt] (4, 0) .. controls(4.1,-0.1)and(4.3,-0.1) .. (4.4,0) .. controls(4.45,0.1) .. (4.5, 0);
\draw[line width=0.5pt] (4.5, 0) .. controls(4.6,-0.1)and(4.8,-0.1) .. (4.9,0) .. controls(4.95,0.1) .. (5, 0);
\draw[line width=0.5pt] (5, 0) .. controls(5.1,-0.1)and(5.3,-0.1) .. (5.4,0) .. controls(5.45,0.1) .. (5.5, 0);
\draw[line width=0.5pt] (5.5, 0) .. controls(5.6,-0.1)and(5.8,-0.1) .. (5.9,0) .. controls(5.95,0.1) .. (6, 0);
\draw[line width=0.5pt] (6, 0) .. controls(6.1,-0.1)and(6.3,-0.1) .. (6.4,0) .. controls(6.45,0.1) .. (6.5, 0);
\draw[line width=0.5pt] (6.5, 0) .. controls(6.6,-0.1)and(6.8,-0.1) .. (6.9,0) .. controls(6.95,0.1) .. (7, 0);
\draw[line width=0.5pt] (7, 0) .. controls(7.1,-0.1)and(7.3,-0.1) .. (7.4,0) .. controls(7.45,0.1) .. (7.5, 0);
\draw[line width=0.5pt] (7.5, 0) .. controls(7.6,-0.1)and(7.8,-0.1) .. (7.9,0) .. controls(7.95,0.1) .. (8, 0);
\draw[line width=0.5pt] (8, 0) .. controls(8.1,-0.1)and(8.3,-0.1) .. (8.4,0) .. controls(8.45,0.1) .. (8.5, 0);
\draw[line width=0.5pt] (8.5, 0) .. controls(8.6,-0.1)and(8.8,-0.1) .. (8.9,0) .. controls(8.95,0.1) .. (9, 0);
\draw[line width=0.5pt] (9, 0) .. controls(9.1,-0.1)and(9.3,-0.1) .. (9.4,0) .. controls(9.45,0.1) .. (9.5, 0);
\draw[line width=0.5pt] (9.5, 0) .. controls(9.6,-0.1)and(9.8,-0.1) .. (9.9,0) .. controls(9.95,0.1) .. (10, 0);
\draw[line width=0.5pt] (0, -1.5) .. controls(1.5,-1.5)and(1.6,-2) .. (1.9,-2) .. controls(2.2,-2)and(2.3,-1.5) .. (3.45,-1.5) .. controls(4.6,-1.5)and(4.7,-2) .. (5,-2) .. controls (5.3,-2)and(5.4,-1.5) .. (6.55,-1.5) .. controls(7.7,-1.5)and(7.8,-2) .. (8.1,-2) .. controls (8.4,-2)and(8.5,-1.5) .. (10,-1.5) ;
\draw[line width=1pt] (0, -3.95) .. controls(1.5,-3.95)and(1.6,-4.55) .. (1.9,-4.55) .. controls(2.2,-4.55)and(2.3,-3.95) .. (3.45,-3.95) .. controls(4.6,-3.95)and(4.7,-4.55) .. (5,-4.55) .. controls (5.3,-4.55)and(5.4,-3.95) .. (6.55,-3.95) .. controls(7.7,-3.95)and(7.8,-4.55) .. (8.1,-4.55) .. controls (8.4,-4.55)and(8.5,-3.95) .. (10,-3.95) ;
\draw (0,-6) -- (10,-6);
\node[draw=none] at (5,-0.75) {near-surface layer $\pazocal{L}(t)$};
\node[draw=none] at (5,-3) {layer $\pazocal{M}(t)$};
\node[draw=none] at (5,-5) {still water layer $\pazocal{S}(t)$};
\node[draw=none] at (9,-0.75) {$\rho_0$};
\node[draw=none] at (9,-3) {$\rho_0$};
\node[draw=none] at (9,-5) {$\rho_+$};
\node[draw=none,anchor=west] at (10.1,0.1) {free surface};
\node[draw=none,anchor=west] at (10.1,-1.45) {$z=\eta_{+}(x,y,t)$};
\node[draw=none,anchor=west] at (10.1,-3.8) {$z=\eta(x,y,t)$};
\node[draw=none,anchor=west] at (10,-6) {$z=-d$};
\node[draw=none,anchor=west] at (-0.5,-3.7) {thermocline};
\end{tikzpicture}
\caption{The depiction of the main flow regions at a fixed latitude $y$. The thermocline is described by a trochoid propagating at a speed $c$. The thermocline separates two layers of water of constant but different densities $\rho_0<\rho_+$ in a stable stratification. In the solution that we present the amplitude of the internal waves decays exponentially above the thermocline and is reduced to less than 4\% of its thermocline value at the height of half a wavelength above the thermocline.}
\label{fig:regions}
\end{center}
\end{figure}
The solution that we construct models the internal water waves describing the oscillation of a thermocline, and the hydrostatic model is presented as follows. The thermocline separates layers of ocean water of different densities \cite{Cushman-Beckers2011}. The layer of less dense water $\pazocal{M}(t)$ with density $\rho_0$ overlays the layer of denser water $\pazocal{S}(t)$ with density $\rho_+>\rho_0$. The wave motion in $\pazocal{M}(t)$ describes the oscillations of the thermocline. The layer $\pazocal{M}(t)$ is bounded by the thermocline $z=\eta(x,y,t)$ and by the upper boundary $z=\eta_+(x,y,t)$. In the solution which we present below the amplitude of the internal waves decays exponentially with the height above the thermocline. The amplitude of the internal waves is reduced to less than 4\% of its thermocline value at the height of half a wavelength above the thermocline, since $e^{-\pi}\approx0.04$ (cf. \cite{Constantin2012b}); therefore, for the purposes of this model, it is justifiable to consider that the layer $\pazocal{M}(t)$ is finite and bounded. The motion in the near-surface layer $\pazocal{L}(t)$ is neglected, as it is a small perturbation of the free surface caused primarily by the wind, and the geophysical effect has little bearing there. The layer $\pazocal{S}(t)$ of water under the thermocline describes a motionless abyssal deep-water region. The idea is to approximate the thermal structure of an ocean in the simplest form. We investigate the internal water waves in a relatively narrow ocean strip less than a few degrees of latitude wide, and so we regard the Coriolis parameters
\begin{equation*}\label{eq:Coriolis parameters}
f=2\Omega\sin\phi, \qquad \hat{f}=2\Omega\cos\phi,
\end{equation*}
as constants, where $f$ is called the Coriolis parameter and $\hat{f}$ has no traditional name but usually is called the reciprocal Coriolis parameter \cite{Cushman-Beckers2011}. The typical values of the Coriolis parameters at $45\degree$ on the Northern Hemisphere are $f=\hat{f}=10^{-4}s^{-1}$ \cite{Gill1982}. On a rotating sphere, such as Earth, the Coriolis term varies with the sine of latitude, however in the $\beta$-plane approximation the Coriolis parameter is set to vary linearly in space. Furthermore, this variation can be ignored and a value of Coriolis parameter appropriate for a particular latitude can be used in the whole domain \cite{Cushman-Beckers2011}. Thus, the Euler equations reduce in the $f$-plane approximation to
\begin{equation}\label{eq:Governing equation 2}
\begin{cases}
u_t+uu_x+vu_y+wu_z+\hat{f}w-fv=-\frac{1}{\rho}P_x,\\
v_t+uv_x+vv_y+wv_z+fu=-\frac{1}{\rho}P_y,\\
w_t+uw_x+vw_y+ww_z-\hat{f}u=-\frac{1}{\rho}P_z-g.
\end{cases}
\end{equation}
Water is still under the thermocline, which means that the velocity field there is of the form
\begin{equation*}\label{eq:still water}
(u,v,w)=(0,0,0) \mbox { for } z<\eta(x,y,t).
\end{equation*}
Since there is no motion in the layer $\pazocal{S}(t)$ the governing equations imply the hydrostatic pressure
\begin{equation*}\label{eq:Hydrostatic pressure}
P=P_0-\rho_+gz \qquad \mbox{for } z<\eta(x,y,t).
\end{equation*}
The governing equations for the internal water waves in the layer $\pazocal{M}(t)$ are
\begin{equation}\label{eq:Governing equation in M(t)}
\begin{cases}
u_t+uu_x+vu_y+wu_z+\hat{f}w-fv=-\frac{1}{\rho_0}P_x,\\
v_t+uv_x+vv_y+wv_z+fu=-\frac{1}{\rho_0}P_y,\\
w_t+uw_x+vw_y+ww_z-\hat{f}u=-\frac{1}{\rho_0}P_z-g.
\end{cases}
\end{equation}
The appropriate boundary conditions for the internal water waves are the dynamic and kinematic conditions,
\begin{eqnarray*}\label{eq:dynamic condition}
P=P_0-\rho_+gz \mbox{ on the thermocline } z=\eta(x,y,t)\\
w=\eta_t+u\eta_x+v\eta_y \mbox{ on the thermocline } z=\eta(x,y,t),
\end{eqnarray*}
respectively. The kinematic condition prevents mixing of particles between the abyssal water region and the layer $\pazocal{M}(t)$: a particle initially on the boundary stays on the boundary at all times.
\section{Discussion of the model}\label{sec:Discussion of the model}
\subsection{Exact and explicit solution}\label{subsec:Exact and explicit solution}
In this section we present an exact solution to the governing equations for the internal water waves in the layer $\pazocal{M}(t)$. The Pollard-like solution represents a periodic travelling wave in the longitudinal direction at a speed of propagation $c$. For the explicit description of this flow it is convenient to use the Lagrangian framework \cite{Bennett2006}. The Lagrangian positions $(x,y,z)$ of a fluid particle are given as functions of the labelling variables $(q,r,s)$, time $t$ and real parameters $a,b,c,d,k,m$. We show that the explicit solution to the governing equations~(\ref{eq:Governing equation in M(t)}) satisfying the incompressibility condition is given by
\begin{equation}\label{eq:Pollard explicit solution}
\left\{
\begin{array}{l}
x=q-be^{-ms}\sin[k(q-ct)],\\
y=r-de^{-ms}\cos[k(q-ct)],\\
z=s-ae^{-ms}\cos[k(q-ct)].
\end{array}
\right.
\end{equation}
The constant $k=2\pi/L$ is the wavenumber corresponding to the wavelength $L$. The parameter $q$ covers the real line, while $r\in[-r_0,r_0]$, for some $r_0$, because the solution is set up around a fixed latitude $\phi$. For every fixed value of $r\in[-r_0,r_0]$, we require $s\in[s_{0},s_{+}]$, where the choice $s=s_{0}\geq s^*>0$ represents the thermocline $z=\eta(x,y,t)$ at the latitude $\phi$, while $s=s_{+}>s_0$ prescribes the interface $z=\eta_+(x,y,t)$ separating $\pazocal{L}(t)$ and $\pazocal{M}(t)$ at the same latitude. We take the amplitude parameter $a>0$ and the wavenumber $k>0$, and for waves with amplitude decreasing above the thermocline we require $m>0$. The parameter $d$ varies from $d>0$ in the Southern Hemisphere and $d<0$ in the Northern Hemisphere to $d=0$ on the Equator, since it is related to the Coriolis parameter $f$, as we show later on. Moreover, the parameters $b,c,d$ must be suitably chosen in terms of $k,m,a$.
\begin{figure}[th]
\begin{center}
\begin{tikzpicture}
\node[draw=none] at (5,-0.75) {the Equator};
\node[draw=none] at (0,-1.5) {The North Pole};
\node[draw=none] at (10,-1.5) {The South Pole};
\draw[thick,<->] (0,-2.5) -- (10,-2.5);
\draw[thick,->] (5,-4) -- (5,-1);
\draw[line width=2pt] (5,-3.25) -- (5,-1.75);
\draw[line width=2pt] (0.95,-2.875) -- (2.25,-2.125);
\draw[line width=2pt] (3.15,-3.2) -- (3.65,-1.8);
\draw[line width=2pt] (6.35,-1.8) -- (6.85,-3.2);
\draw[line width=2pt] (7.75,-2.125) -- (9.05,-2.875);
\draw[thick,dashed,->] (1.6,-3.5) -- (1.6,-1.5);
\draw (2,-2.25) arc (30:90:0.5) ;
\draw[thick,dashed,<-] (1.2,-3.35) -- (2,-3.35);
\node[draw=none] at (1.1,-3.35) {\footnotesize y};
\draw[thick,dashed,->] (3.4,-3.5) -- (3.4,-1.5);
\draw (3.55,-2) arc (70:90:0.5) ;
\draw[thick,dashed,<-] (3,-3.35) -- (3.8,-3.35);
\node[draw=none] at (2.9,-3.35) {\footnotesize y};
\draw[thick,dashed,->] (6.6,-3.5) -- (6.6,-1.5);
\draw (6.6,-2) arc (100:120:0.5) ;
\draw[thick,dashed,<-] (6.2,-3.35) -- (7,-3.35);
\node[draw=none] at (6.1,-3.35) {\footnotesize y};
\draw[thick,dashed,->] (8.4,-3.5) -- (8.4,-1.5);
\draw (8.4,-2) arc (90:150:0.5) ;
\draw[thick,dashed,<-] (8,-3.35) -- (8.8,-3.35);
\node[draw=none] at (7.9,-3.35) {\footnotesize y};
\end{tikzpicture}
\caption{The figure presents the inclination of the particle orbits as the latitude increases. At the Equator the orbits become vertical.}
\label{fig:Inclination}
\end{center}
\end{figure}
Before we proceed to proving the validity of the explicit solution~(\ref{eq:Pollard explicit solution}) we provide a brief discussion of the particle trajectories. For the setting of a surface wave (cf. \cite{Constantin-Monismith2017}), it is shown that the solution~(\ref{eq:Pollard explicit solution}), with the parameters for the surface waves, describes circles, and this applies to the internal waves as well. A feature of the Pollard-like solution is that the path of a particle is a slightly tilted circle \cite{Constantin-Monismith2017,Pollard1970}, whereas the Gerstner-like solution describes circles in the vertical plane \cite{Henry2008a}. In the Pollard-like solution for the internal waves the top of the circle made by the particle is closer to the Equator and the bottom of the circle deviates towards the pole, at an angle of inclination $\arctan(-d/a)$ to the local vertical, which is the reverse of the situation for the surface waves \cite{Constantin-Monismith2017}. The angle of inclination increases with the distance from the Equator (figure~\ref{fig:Inclination}). The orbits of the water particles in three dimensions are presented in figure~\ref{fig:Circle}. The internal waves are in this setting in the shape of a trochoid (cf. \cite{Constantin2011}), whereas the surface wave is an inverted trochoid. The internal wave has narrow troughs and wide crests. The shape of the internal wave is depicted in figure~\ref{fig:Wave_profile}, taking into account the three-dimensional character. For a better explanation of the shape of the internal wave, the intersection of the wave and the vertical plane is presented in figure~\ref{fig:Projection}. Moreover, our setting of the internal wave evaluated on the Equator particularises to the Gerstner-like equatorial internal wave solution \cite{Hsu2014a}.
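The tilted circular orbits can also be verified symbolically. The precise relations between $b,c,d$ and $k,m,a$ are fixed by the dynamics; here we only assume the Pollard-type relation $b^2=a^2+d^2$ (cf. \cite{Pollard1970}; this relation is an assumption of the sketch, not derived here) to exhibit the circle. The following sympy sketch (ours) checks that the displacement from the labelling position lies in the tilted plane $a\,y'=d\,z'$ and has the constant length $be^{-ms}$.

```python
import sympy as sp

a, b, d, m, s, th = sp.symbols('a b d m s theta', real=True)
E = sp.exp(-m * s)

# displacement of a particle from its labelling position (q, r, s)
X = sp.Matrix([-b * E * sp.sin(th), -d * E * sp.cos(th), -a * E * sp.cos(th)])

# the orbit lies in the tilted plane a*y' = d*z' (inclination arctan(-d/a))
plane = sp.simplify(a * X[1] - d * X[2])
assert plane == 0

# with b**2 = a**2 + d**2 the distance from the centre is the constant b*e^{-ms}
radius2 = sp.simplify(X.dot(X).subs(b**2, a**2 + d**2))
assert sp.simplify(radius2 - (a**2 + d**2) * sp.exp(-2 * m * s)) == 0
```

The radius $be^{-ms}$ decreases with the height above the thermocline, in accordance with the exponential decay of the wave amplitude.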
Note that in Gerstner's and Pollard's surface waves \cite{Constantin2001b,Constantin-Monismith2017,Henry2008a,Pollard1970} the amplitude of the wave oscillations decreases as we descend into the fluid, which is the reverse of the present setting, whereby the amplitude of the internal waves decreases exponentially as we ascend above the thermocline \cite{Constantin2013b,Constantin2014}. Let us now verify that~(\ref{eq:Pollard explicit solution}) is indeed an exact solution of~(\ref{eq:Governing equation in M(t)}) representing the internal water waves. For notational convenience we set
\[
\theta=k(q-ct).
\]
\begin{figure}[th]
\begin{center}
\includegraphics[width=1\textwidth]{Internal_wave_circle.eps}
\caption{The path of the fluid particles when the wave propagates through water. The trajectory of the particle is a circle slightly tilted toward the Equator. The parameters of the wave-induced motion at the thermocline are $a=10$ m, $k=6.28\times 10^{-2}m^{-1}$, $\phi=45\degree$N and $\Delta\rho/\rho_0=4\times10^{-3}$. We present the motion at two depths in an ocean. The mean difference of the depths is 10 m.}
\label{fig:Circle}
\end{center}
\end{figure}
\begin{figure}[th]
\begin{center}
\includegraphics[width=1\textwidth]{Internal_wave_profile_3D.eps}
\caption{The trochoidal wave profile in three-dimensions generated by the oscillation of the thermocline. The wave is evaluated at two depths in an ocean (the mean difference of the depths is 10 m) for $a=10$ m, $k=6.28\times 10^{-2}m^{-1}$, $\phi=45\degree$N and $\Delta\rho/\rho_0=4\times10^{-3}$ at the thermocline. The wave profile is slightly tilted toward the Equator.}
\label{fig:Wave_profile}
\end{center}
\end{figure}
\begin{figure}[th]
\begin{center}
\includegraphics[width=1\textwidth]{Internal_wave_profile.eps}
\caption{The projection on the vertical plane of the wave generated by the oscillation of the thermocline for two different depths (the mean difference of the depths is 10 m) at the latitude $\phi=45\degree$ on the Northern Hemisphere. The parameters of the wave at the thermocline are $a=10$ m, $k=6.28\times 10^{-2}m^{-1}$, $\Delta\rho/\rho_0=4\times10^{-3}$. The amplitude of the internal wave decreases as we ascend above the thermocline. The internal water wave is in the shape of a trochoid with narrow troughs and wide crests.}
\label{fig:Projection}
\end{center}
\end{figure}
We require
\begin{equation*}\label{eq:Condition of r}
s\geq s^{*}>0,
\end{equation*}
so that $e^{-ms}<1$ throughout the layer $\pazocal{M}(t)$, since $ms\geq ms^{*}>0$. The Jacobian of the map relating the particle positions to the Lagrangian labelling variables is given by
\begin{equation}\label{eq:the Jacobian}
\begin{pmatrix}
\frac{\partial x}{\partial q} & \frac{\partial y}{\partial q} & \frac{\partial z}{\partial q} \\
\frac{\partial x}{\partial r} & \frac{\partial y}{\partial r} & \frac{\partial z}{\partial r} \\
\frac{\partial x}{\partial s} & \frac{\partial y}{\partial s} & \frac{\partial z}{\partial s}
\end{pmatrix}
=
\begin{pmatrix}
1-kbe^{-ms}\cos\theta & kde^{-ms}\sin\theta & kae^{-ms}\sin\theta \\
0 & 1 & 0 \\
mbe^{-ms}\sin\theta & mde^{-ms}\cos\theta & 1+mae^{-ms}\cos\theta
\end{pmatrix}
\end{equation}
The flow is volume preserving and the condition of incompressibility~(\ref{eq:Incompressibility}) holds in the layer $\pazocal{M}(t)$ if and only if the determinant of the Jacobian is time independent and different from zero. The Jacobian determinant of~(\ref{eq:Pollard explicit solution}) is precisely
\begin{equation*}\label{eq:the Jacobian determinant}
J=1+(ma-kb)e^{-ms}\cos\theta-kmabe^{-2ms}.
\end{equation*}
We need the condition
\begin{equation}\label{eq:first condition}
ma-kb=0,
\end{equation}
to ensure that the determinant of the Jacobian is time independent. We also require
\begin{equation*}
mkabe^{-2ms}\neq 1,
\end{equation*}
throughout the flow, so that~(\ref{eq:Pollard explicit solution}) defines a valid local diffeomorphism by means of the inverse function theorem. Due to the condition~(\ref{eq:first condition}) and $s\geq s^{*}>0$, this requirement is guaranteed by
\begin{equation}\label{eq:ame<1}
m^2a^2e^{-2ms^*}<1.
\end{equation}
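The determinant computation above can be spot-checked numerically. The sketch below (all parameter values are illustrative, not taken from any data set) assembles the Jacobian matrix~(\ref{eq:the Jacobian}) and confirms that, once $ma=kb$ holds, its determinant is independent of $\theta$ (and hence of time) and equals $1-kmab\,e^{-2ms}$:

```python
import numpy as np

# Illustrative parameter values (hypothetical, chosen only to satisfy ma = kb)
k, m, a = 6.28e-2, 6.5e-2, 10.0
b = m * a / k            # first condition: ma - kb = 0
d = 3.0                  # d does not affect the determinant once ma = kb

def jacobian(theta, s):
    """Jacobian matrix of the label-to-position map at phase theta, depth s."""
    e = np.exp(-m * s)
    return np.array([
        [1 - k*b*e*np.cos(theta), k*d*e*np.sin(theta),  k*a*e*np.sin(theta)],
        [0.0,                     1.0,                  0.0],
        [m*b*e*np.sin(theta),     m*d*e*np.cos(theta),  1 + m*a*e*np.cos(theta)],
    ])

s = 5.0
dets = [np.linalg.det(jacobian(theta, s)) for theta in np.linspace(0, 2*np.pi, 7)]
expected = 1 - k*m*a*b*np.exp(-2*m*s)    # = 1 - m^2 a^2 e^{-2ms} since b = ma/k
assert max(abs(det - expected) for det in dets) < 1e-9
assert 0 < expected < 1                  # consistent with m^2 a^2 e^{-2ms*} < 1
```

The same check fails (the determinant picks up a $\cos\theta$ term) as soon as $ma\neq kb$, which is precisely why condition~(\ref{eq:first condition}) is needed.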
From the explicit solution~(\ref{eq:Pollard explicit solution}) we can deduce that the upper bound for the amplitude of internal waves is $1/m$. The Euler equations can be rewritten in the form
\begin{equation}\label{eq:Euler eq in Lagranian}
\begin{cases}
\frac{Du}{Dt}+\hat{f}w-fv=-\frac{1}{\rho_{0}}P_{x},\\
\frac{Dv}{Dt}+fu=-\frac{1}{\rho_{0}}P_{y},\\
\frac{Dw}{Dt}-\hat{f}u=-\frac{1}{\rho_{0}}P_{z}-g,
\end{cases}
\end{equation}
where $D/Dt$ is the material derivative. From the direct differentiation of the system of coordinates in~(\ref{eq:Pollard explicit solution}), the velocity of each fluid particle may be expressed as
\begin{equation}\label{eq:Velocities above thermocline}
\left\{
\begin{array}{l}
u=\frac{Dx}{Dt}=kcbe^{-ms}\cos\theta,\\
v=\frac{Dy}{Dt}=-kcde^{-ms}\sin\theta,\\
w=\frac{Dz}{Dt}=-kcae^{-ms}\sin\theta,
\end{array}
\right.
\end{equation}
and the acceleration is
\begin{equation*}\label{eq:Acceleration above thermocline}
\left\{
\begin{array}{l}
\frac{Du}{Dt}=k^2c^{2}be^{-ms}\sin\theta,\\
\frac{Dv}{Dt}=k^2c^2de^{-ms}\cos\theta,\\
\frac{Dw}{Dt}=k^2c^{2}ae^{-ms}\cos\theta.
\end{array}
\right.
\end{equation*}
Due to the velocity and acceleration in the Lagrangian setting we can write~(\ref{eq:Euler eq in Lagranian}) as
\begin{equation}\label{eq:Partial derivatives of pressure wrt x,y,z}
\begin{array}{l}
P_x=-\rho_0(k^2c^2b-kca\hat{f}+kcdf)e^{-ms}\sin\theta,\\
P_y=-\rho_0kc(kcd+bf)e^{-ms}\cos\theta,\\
P_z=-\rho_0(k^2c^2ae^{-ms}\cos\theta-\hat{f}kcbe^{-ms}\cos\theta+g).
\end{array}
\end{equation}
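As a sanity check, the velocity field~(\ref{eq:Velocities above thermocline}), the acceleration, and the pressure gradient above satisfy the Euler equations~(\ref{eq:Euler eq in Lagranian}) identically in $\theta$ and $s$, for any choice of the constants. A minimal numerical sketch (all parameter values below are illustrative):

```python
import numpy as np

# Illustrative constants; the identities below hold for any choice
rho0, g = 1027.0, 9.81
k, m, c, a, b, d = 6.28e-2, 6.5e-2, 10.0, 10.0, 4.0, 2.0
f, fhat = 1.03e-4, 1.03e-4          # Coriolis parameters near 45 degrees N

for theta in np.linspace(0.0, 2*np.pi, 9):
    for s in (1.0, 5.0, 10.0):
        e = np.exp(-m * s)
        # velocities and accelerations in the Lagrangian frame
        u, v, w = k*c*b*e*np.cos(theta), -k*c*d*e*np.sin(theta), -k*c*a*e*np.sin(theta)
        Du = k**2 * c**2 * b * e * np.sin(theta)
        Dv = k**2 * c**2 * d * e * np.cos(theta)
        Dw = k**2 * c**2 * a * e * np.cos(theta)
        # pressure gradient components
        Px = -rho0*(k**2*c**2*b - k*c*a*fhat + k*c*d*f)*e*np.sin(theta)
        Py = -rho0*k*c*(k*c*d + b*f)*e*np.cos(theta)
        Pz = -rho0*(k**2*c**2*a*e*np.cos(theta) - fhat*k*c*b*e*np.cos(theta) + g)
        # the three Euler equations, each rearranged to equal zero
        assert abs(Du + fhat*w - f*v + Px/rho0) < 1e-9
        assert abs(Dv + f*u + Py/rho0) < 1e-9
        assert abs(Dw - fhat*u + Pz/rho0 + g) < 1e-9
```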
Since
\begin{equation*}\label{eq:Partial derivatives of Pressure in q,s,r}
\begin{pmatrix}
P_q \\
P_r \\
P_s
\end{pmatrix}
=
\begin{pmatrix}
\frac{\partial x}{\partial q} & \frac{\partial y}{\partial q} & \frac{\partial z}{\partial q} \\
\frac{\partial x}{\partial r} & \frac{\partial y}{\partial r} & \frac{\partial z}{\partial r} \\
\frac{\partial x}{\partial s} & \frac{\partial y}{\partial s} & \frac{\partial z}{\partial s}
\end{pmatrix}
\cdot
\begin{pmatrix}
P_x \\
P_y \\
P_z
\end{pmatrix}
\end{equation*}
we have
\begin{equation}\label{eq:Parital derivatives of Pressure in q,s,r 3}
\begin{array}{l}
P_q=-\rho_0\big[k^3c^2(a^2+d^2-b^2)e^{-ms}\cos\theta-\hat{f}kca+fkcd+k^2c^2b+kag\big]e^{-ms}\sin\theta,\\
P_r=-\rho_0kc[kcd+bf]e^{-ms}\cos\theta,\\
P_s=-\rho_0\big[k^2c^2m(a^2+d^2-b^2)e^{-2ms}\cos^2\theta-\hat{f}kcmabe^{-2ms}+fkcmbde^{-2ms}\\
+k^2c^2b^2me^{-2ms}+(k^2c^2a-kcb\hat{f}+mag)e^{-ms}\cos\theta+g\big].
\end{array}
\end{equation}
Making the natural assumption that the pressure in $\pazocal{M}(t)$ has continuous second-order mixed partial derivatives, we obtain the following conditions:
\begin{equation}\label{eq:second condition}
kcd+bf=0,
\end{equation}
\begin{equation}\label{eq:third condition}
mkc^2b+mcdf=k^2c^2a.
\end{equation}
We note that equation~(\ref{eq:second condition}) implies, by means of~(\ref{eq:Partial derivatives of pressure wrt x,y,z}), that the pressure is independent of the variable $y$ throughout the layer $\pazocal{M}(t)$. Moreover, the gradient of the following pressure distribution is precisely the right-hand side of~(\ref{eq:Parital derivatives of Pressure in q,s,r 3})
\begin{equation*}\label{eq:Pressure}
\begin{array}{l}
P=-\rho_0\big[-\frac{1}{2}k^2c^2(a^2+d^2-b^2)e^{-2ms}\cos^2\theta-\frac{1}{2}k^2c^2b^2e^{-2ms}+\frac{1}{2}\hat{f}kcabe^{-2ms}\\
-\frac{1}{2}fkcbde^{-2ms}+(ca\hat{f}-cdf-kc^2b-ag)e^{-ms}\cos\theta+gs\big]+\tilde{P}_0.
\end{array}
\end{equation*}
For Pollard-like internal water waves we impose
\begin{equation}\label{eq:P0-tildeP0}
\begin{array}{l}
P_0-\tilde{P}_0=-\rho_0\big[-\frac{1}{2}k^2c^2(a^2+d^2-b^2)e^{-2ms_0}\cos^2\theta-\frac{1}{2}k^2c^2b^2e^{-2ms_0}\\
+\frac{1}{2}\hat{f}kcabe^{-2ms_0}-\frac{1}{2}fkcbde^{-2ms_0}+gs_0\big]+\rho_+gs_0,
\end{array}
\end{equation}
to satisfy the dynamic condition. The solution $s_0$ to the equation~(\ref{eq:P0-tildeP0}) represents the thermocline. The right-hand side of~(\ref{eq:P0-tildeP0}) is a strictly increasing function of $s_0$ if
\begin{equation}\label{eq:Wavenumber}
k>4\Omega^2/\tilde{g}\approx5\times10^{-8}\mbox{m}^{-1},
\end{equation}
where $\tilde{g}=g(\rho_0-\rho_+)/\rho_0$ is called the coefficient of reduced gravity and $\Delta\rho/\rho_0=4\times10^{-3}$ is a typical value for the equatorial region \cite{Kessler1995}. Therefore, taking $\beta_0>P_0-\tilde{P}_0$ we can determine the solution $s_+$ representing the interface $z=\eta_+$ between the layer $\pazocal{M}(t)$ and $\pazocal{L}(t)$. Additionally, we require the continuity of pressure across the thermocline, which yields
\begin{equation}\label{eq:continuity of pressure}
\rho_+ga=\rho_0(kc^2b+cdf-ca\hat{f}+ag),
\end{equation}
and pressure must be time independent hence
\begin{equation*}\label{eq:fourth condition}
b^2=a^2+d^2.
\end{equation*}
From the equations~(\ref{eq:first condition}) and~(\ref{eq:second condition}) we get
\begin{equation*}\label{eq:b}
b=\frac{ma}{k},
\end{equation*}
\begin{equation*}\label{eq:d}
d=-\frac{fma}{k^2c}.
\end{equation*}
Therefore, the equation of continuity of the pressure~(\ref{eq:continuity of pressure}) becomes
\begin{equation}\label{eq:continuity of pressure 2}
\rho_0^2m^2(c^2k^2-f^2)^2=k^4(\rho_0c\hat{f}+g(\rho_+-\rho_0))^2.
\end{equation}
Moreover, the condition~(\ref{eq:third condition}) yields
\begin{equation*}\label{eq:m^2}
m^2=\frac{k^4c^2}{k^2c^2-f^2},
\end{equation*}
where we take $m>0$, since for $m<0$ the amplitude of the wave would increase as we ascend above the thermocline. Moreover, $m^2>0$ is ensured by~(\ref{eq:Wavenumber}) and $m=k$ at the Equator. Summarizing the aforementioned facts, we obtain the dispersion relation for the internal water waves describing the oscillation of the thermocline
\begin{equation*}\label{eq:dispersion relation}
\rho_0^2c^2(c^2k^2-f^2)=(\rho_0c\hat{f}+g(\rho_+-\rho_0))^2.
\end{equation*}
The dispersion relation can be simplified by including the coefficient of reduced gravity $\tilde{g}=g(\rho_0-\rho_+)/\rho_0$. Consequently, we get
\begin{equation}\label{eq:Dispersion relation with reduced gravity}
c^2(c^2k^2-f^2)=(c\hat{f}+\tilde{g})^2.
\end{equation}
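One can verify numerically that a root $c$ of~(\ref{eq:Dispersion relation with reduced gravity}), together with the derived expressions for $b$, $d$ and $m$, satisfies the compatibility conditions~(\ref{eq:first condition}), (\ref{eq:second condition}) and~(\ref{eq:third condition}). The sketch below uses illustrative mid-latitude values:

```python
import numpy as np

# Illustrative parameters: 45 degrees N, wavenumber as in the figures,
# reduced gravity from Delta rho / rho_0 = 4e-3
Omega = 7.29e-5
phi = np.deg2rad(45.0)
f, fhat = 2*Omega*np.sin(phi), 2*Omega*np.cos(phi)
k = 6.28e-2
gt = 9.81 * 4e-3                     # reduced gravity g-tilde

# c^2 (c^2 k^2 - f^2) = (c fhat + gt)^2, rewritten as a quartic in c
roots = np.roots([k**2, 0.0, -(f**2 + fhat**2), -2*fhat*gt, -gt**2])
c = max(r.real for r in roots if abs(r.imag) < 1e-9)   # the positive root
assert abs(c**2*(c**2*k**2 - f**2) - (c*fhat + gt)**2) < 1e-12

# derived constants
a = 10.0
m = np.sqrt(k**4 * c**2 / (k**2 * c**2 - f**2))
b, d = m*a/k, -f*m*a/(k**2 * c)

assert abs(m*a - k*b) < 1e-9                           # first condition
assert abs(k*c*d + b*f) < 1e-9                         # second condition
assert abs(m*k*c**2*b + m*c*d*f - k**2*c**2*a) < 1e-9  # third condition
assert abs(c - np.sqrt(gt/k)) < 0.01                   # close to sqrt(gt/k)
```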
Choosing suitable non-dimensional variables
\begin{equation}\label{eq:Non-dimensional variable}
X=c\sqrt{\frac{k}{\tilde{g}}} \qquad \varepsilon=\frac{f}{\sqrt{\tilde{g}k}} \qquad F=\frac{\hat{f}}{f},
\end{equation}
the dispersion relation~(\ref{eq:Dispersion relation with reduced gravity}) can be rewritten as a polynomial equation of degree four $P(X)=0$ where
\begin{equation}\label{eq:Polynomial P}
P(X)=X^4-\varepsilon^2(1+F^2)X^2-2F\varepsilon X-1.
\end{equation}
The roots of the polynomial $P(X)$ allow us to identify the wave speed by means of the non-dimensional variables. Moreover, we can prove that for fixed parameters there exists more than one phasespeed, and we can estimate the intervals containing the roots of~(\ref{eq:Polynomial P}) (see Section~\ref{sec:Solution of the dispersion relation}). The exact values of the roots can be found by Ferrari's method. However, we focus our attention only on the existence of the real roots of the polynomial $P(X)$. The relation $c=\sqrt{\tilde{g}/k}$ is the standard dispersion relation for internal waves when the Coriolis parameters are neglected \cite{Stuhlmeier2013}, which is analogous to the deep-water dispersion relation for surface waves \cite{Constantin2001b,Constantin2011,Constantin-Monismith2017,Henry2008a}.
\subsection{Equatorial region}
Let us now consider the special case of a solution close to the Equator in order to substantiate the validity of the Pollard-like solution. For the equatorial waves we take the Coriolis parameters
\begin{equation*}\label{eq:Coriolis parameters for Equatorial Waves}
f=0, \qquad \hat{f}=2\Omega,
\end{equation*}
and as a result, the dispersion relation~(\ref{eq:Dispersion relation with reduced gravity}) reduces to
\begin{equation}\label{eq:Dispersion relation for equatorial waves}
kc^2-2\Omega c-\tilde{g}=0.
\end{equation}
The solution to the quadratic equation~(\ref{eq:Dispersion relation for equatorial waves}) is
\begin{equation*}\label{eq:Dispersion relation for equatorial waves 2}
c=\frac{\Omega\pm\sqrt{\Omega^2+k\tilde{g}}}{k},
\end{equation*}
which readily agrees with the result for the internal equatorial water waves in the $f$-plane obtained in \cite{Hsu2014a}.
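With the standard value $\Omega\approx7.29\times10^{-5}\,\mbox{rad}\,\mbox{s}^{-1}$ and illustrative values of $k$ and $\tilde{g}$, both branches of this formula can be checked directly against~(\ref{eq:Dispersion relation for equatorial waves}):

```python
import numpy as np

Omega = 7.29e-5            # Earth's rotation rate (rad/s)
k = 6.28e-2                # illustrative wavenumber (1/m)
gt = 9.81 * 4e-3           # reduced gravity for Delta rho / rho_0 = 4e-3

c_plus = (Omega + np.sqrt(Omega**2 + k*gt)) / k    # positive branch
c_minus = (Omega - np.sqrt(Omega**2 + k*gt)) / k   # negative branch
for c in (c_plus, c_minus):
    assert abs(k*c**2 - 2*Omega*c - gt) < 1e-12
assert c_plus > 0 > c_minus
```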
\subsection{Vorticity}\label{Vorticity}
The vorticity plays an important part in determining the trajectories of fluid particles. For irrotational gravity-driven flows the absence of vorticity ensures that the particle paths are open loops \cite{Constantin2006,Henry2008b}. For Gerstner-like rotational flows the particle path is a closed circle \cite{Constantin2001b,Constantin2011,Henry2008a}. We prove that the Pollard-like solution we have constructed in~(\ref{eq:Pollard explicit solution}) is indeed rotational, which partly explains the fact that the particle paths are closed circles. The vorticity is obtained by considering the product
\begin{equation}\label{eq:Derivative of velocity wrt position variable}
\left(\frac{\partial(q,r,s)}{\partial(x,y,z)}\right) \left(\frac{\partial(u,v,w)}{\partial(q,r,s)}\right)=\left(\frac{\partial(u,v,w)}{\partial(x,y,z)}\right),
\end{equation}
where we exploit the inverse of~(\ref{eq:the Jacobian}) and the velocity field~(\ref{eq:Velocities above thermocline}). Moreover, the matrix~(\ref{eq:Derivative of velocity wrt position variable}) shows that the velocity field of the fluid in $\pazocal{M}(t)$ is independent of the variable $y$. We are now in a position to calculate the vorticity in the layer $\pazocal{M}(t)$:
\begin{equation}\label{eq:vorticity}
\begin{array}{l}
\omega=(w_y-v_z,u_z-w_x,v_x-u_y)=\frac{1}{1-m^2a^2e^{-2ms}}\times\\
\begin{pmatrix}
\frac{m^2af}{k}e^{-ms}\sin\theta\\ -c(m^2-k^2)ae^{-ms}\cos\theta+cma^2(m^2+k^2)e^{-2ms}\\ fma(\cos\theta-mae^{-ms})e^{-ms}
\end{pmatrix}^{T}.
\end{array}
\end{equation}
We can validate our result by considering the vorticity in the equatorial region. Taking the Coriolis parameters for the equatorial waves $f=0$, $\hat{f}=2\Omega$, the vorticity~(\ref{eq:vorticity}) takes the form
\begin{equation*}
\omega=\frac{1}{1-m^2a^2e^{-2ms}}(0,2kcm^2a^2e^{-2ms},0),
\end{equation*}
where taking the critical value of the amplitude of waves $a=1/m$ and $m=k$ we recover the vorticity for the internal equatorial water waves in the $f$-plane approximation \cite{Hsu2014a}
\begin{equation*}
\omega=\left(0,\frac{2kce^{-2ks}}{1-e^{-2ks}},0\right),
\end{equation*}
and it also coincides with the vorticity in the $\beta$-plane approximation \cite{Constantin2013b}.
\section{Solution of the dispersion relation}\label{sec:Solution of the dispersion relation}
This section presents an analytic approach to locating the roots of the polynomial~(\ref{eq:Polynomial P}). If we can find the roots of the polynomial~(\ref{eq:Polynomial P}), we can discern the wave phasespeed by means of the non-dimensional change of variables~(\ref{eq:Non-dimensional variable}). Moreover, we show that the polynomial $P(X)$, which is of degree four, has only two real roots, both of order $O(1)$, indicating two wave speeds. It is readily seen that the constants of~(\ref{eq:Polynomial P}) are positive on both hemispheres of Earth, and we can perform the analysis of the polynomial~(\ref{eq:Polynomial P}) on both hemispheres simultaneously; nonetheless, we exclude the Equator since $F$ is not defined there. We recall Cauchy's theorem \cite{Prasolov2010}.
\begin{theorem}\label{th:Cauchy theorem}
Let $f(x)=x^n-b_1x^{n-1}-\dots-b_n$ where all $b_i$ are non-negative and at least one of them is non-zero. The polynomial $f$ has a unique (simple) positive root $p$ and the absolute values of the other roots do not exceed $p$.
\end{theorem}
According to Cauchy's theorem the polynomial $P(X)$ has a unique positive root $X_0^+>0$. However, the polynomial~(\ref{eq:Polynomial P}) can still have three negative roots. We can easily compute the first derivative of the polynomial $P(X)$
\begin{equation*}\label{eq:Derivative of polynomial P(X)}
P'(X)=4X^3-2\varepsilon^2(1+F^2)X-2F\varepsilon
\end{equation*}
and its discriminant
\begin{equation*}\label{eq:Discriminant of P'(X)}
\Delta P'(X)=128\varepsilon^6(1+F^2)^3-1728F^2\varepsilon^2.
\end{equation*}
Making the assumption that we are outside the tropical zone, at latitudes exceeding $23\degree 26' 16''$, we have $|F|<2.4$. Since the water temperature in the subpolar regions of Earth is constant, the thermocline does not have favorable conditions to exist there and to produce internal wave motion \cite{Garrison-Ellis2016}. Moreover, for latitudes at least $15\degree$ away from the poles we have $|F|\geq 2-\sqrt{3}$, and therefore we infer that the polynomial $P'(X)$ for the mid-latitudes ($23\degree 26' 16''$--$75\degree$) has exactly one real root, as
\begin{equation*}\label{eq:Discriminant of P'(X) 2}
\Delta P'(X)<0,
\end{equation*}
which means that the polynomial $P(X)$ has one critical point. Together with $P(0)=-1$, this proves that there exists a unique positive root $X_0^+>0$ and a unique negative root $X_0^-<0$. For the polynomial $P(X)$ we can estimate
\begin{equation}\label{eq:Estimates}
\begin{array}{l}
P(\pm 1)=\mp 2\varepsilon F+O(\varepsilon^2),\\
P(1+\varepsilon F)=2\varepsilon F+O(\varepsilon^2)>0,\\
P(-1+\varepsilon F)=-2\varepsilon F+O(\varepsilon^2)<0,
\end{array}
\end{equation}
since $F=O(1)$ and $\varepsilon=O(10^{-2})$ for internal waves with wavelengths of 150--250 m. Hence, the estimates~(\ref{eq:Estimates}) yield that
\begin{equation*}
X_0^+-1\in(0,\varepsilon F) \qquad X_0^-+1\in (0,\varepsilon F)
\end{equation*}
for both hemispheres (see the analogous result for surface water waves in \cite{Constantin-Monismith2017}). We have therefore proved the existence of two real roots of the polynomial~(\ref{eq:Polynomial P}). The exact wave speeds for the internal water waves generated by the oscillation of the thermocline can be found via the non-dimensional change of variables~(\ref{eq:Non-dimensional variable}), indicating two phasespeeds in dimensional terms close to
\begin{equation*}\label{eq:Dimensional term}
c\approx\pm\sqrt{\frac{\tilde{g}}{k}}.
\end{equation*}
Therefore, the analysis identifies one mode of the internal wave that is the standard internal wave $c=\sqrt{\tilde{g}/k}$ \cite{Stuhlmeier2013}, very slightly modified by Earth's rotation.
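The root analysis of this section can be confirmed numerically. The sketch below uses the illustrative mid-latitude values $\varepsilon=10^{-2}$ and $F=1$ (i.e.\ $\phi=45\degree$):

```python
import numpy as np

eps, F = 1e-2, 1.0     # illustrative values; F = cot(45 deg) = 1

# discriminant of P'(X) is negative, so P(X) has a single critical point
assert 128*eps**6*(1 + F**2)**3 - 1728*F**2*eps**2 < 0

# roots of P(X) = X^4 - eps^2 (1+F^2) X^2 - 2 F eps X - 1
roots = np.roots([1.0, 0.0, -eps**2*(1 + F**2), -2*F*eps, -1.0])
real = sorted(r.real for r in roots if abs(r.imag) < 1e-9)
assert len(real) == 2                  # exactly two real roots
x_minus, x_plus = real
assert 0 < x_plus - 1 < eps*F          # X_0^+ - 1 lies in (0, eps F)
assert 0 < x_minus + 1 < eps*F         # X_0^- + 1 lies in (0, eps F)
```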
\subsection*{Acknowledgements}
The author acknowledges the support of the Science Foundation Ireland (SFI) research grant 13/CDA/2117.
\section{Introduction}
\label{sec:intro}
Using grid diagrams Ozsv\'{a}th, Szab\'{o} and Thurston \cite{grid} defined combinatorial invariants of Legendrian and transverse links in the tight 3-sphere. These invariants have been shown to be effective, meaning they distinguish some Legendrian and transverse knots having identical classical invariants. Grid diagrams for links in lens spaces have been studied in \cite{lensgridcomb}, and their relationship with Legendrian links in universally tight lens spaces laid out in \cite{lensgridleg}.
Roughly speaking, such a grid diagram is just the usual toroidal Heegaard diagram for $L(p,q)$ with $n$ parallel copies of the $\alpha$ and $\beta$ curves, along with some choice of $2n$ basepoints in their complements; $n$ is the \emph{index} of the grid diagram.
The invariants of \cite{grid} admit natural generalizations to the case of links in universally tight lens spaces:
\begin{theorem}
\label{thm:grid}
For a grid diagram $G$ encoding a link $K\subset L(p,q)$, let $L \subset (L(p,q),\xi_{UT})$ denote the corresponding oriented Legendrian representative of $K$. There are two associated cycles $\bold{x}^+,\bold{x}^-\in CFK^- (G)$ supported in Maslov gradings
\begin{align*}
M(\bold{x}^+) = tb_\mathbb{Q}(L) - rot_\mathbb{Q}(L) +\frac{1}{p} -d(p,q,q-1)\\
M(\bold{x}^-) = tb_\mathbb{Q}(L) + rot_\mathbb{Q}(L) +\frac{1}{p}-d(p,q,q-1).
\end{align*}
If $K$ is a knot, the cycles are supported in Alexander gradings
\begin{align*}
A(\bold{x}^+) = \frac{1}{2}\Big{(}tb_\mathbb{Q}(L) - rot_\mathbb{Q}(L) +1 \Big{)}\\
A(\bold{x}^-) =\frac{1}{2}\Big{(}tb_\mathbb{Q}(L) + rot_\mathbb{Q}(L) +1 \Big{)}.
\end{align*}
The homology classes $[x^+]$ and $[x^-]$ in $HFK^- (L(p,q),K)$, denoted $\lambda ^+ (L)$ and $\lambda ^- (L)$, are invariants of the oriented Legendrian isotopy class of $L$. Let $L^-$ (respectively $L^+$) denote the negative (respectively positive) Legendrian stabilization of $L$ in $(L(p,q),\xi_{UT})$. We have that
\begin{align*}
\lambda^+ (L^-) = \lambda^+(L) \quad \quad \quad \quad \lambda^- (L^-) = U\cdot\lambda^-(L)\\
\lambda^+ (L^+) = U\cdot \lambda^+(L) \quad \quad \quad \quad \lambda^- (L^+) = \lambda^-(L).
\end{align*}
\end{theorem}
In the case of a link, we compute the rational Alexander multi-gradings in Proposition \ref{prop:agradingcomp}.
Transverse isotopy classes correspond to Legendrian isotopy classes modulo negative Legendrian (de)stabilizations \cite{transverseapprox}. Let $T$ denote the positive transverse push-off of $L$ and define $\theta (T)$ to be $\lambda ^+ (L)$; the corollary below follows immediately:
\begin{corollary}
\label{corollary:grid}
The homology class $\theta(T)$ is an invariant of the transverse isotopy class of $T$, and is supported in Maslov grading
\begin{align*}
M(\theta(T)) = sl_{\mathbb{Q}}(T) +\frac{1}{p}-d(p,q,q-1).
\end{align*}
If $T$ is a knot, then the invariant is supported in Alexander grading
\[
A(\theta(T)) = \frac{1}{2}\Big{(}sl_\mathbb{Q}(T) +1\Big{)}.
\]
\end{corollary}
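The correction terms $d(p,q,i)$ appearing in the Maslov grading formulas can be computed recursively. The sketch below assumes the standard Ozsv\'{a}th--Szab\'{o} recursion with base case $d(1,0,0)=0$; conventions for orientations and the labelling of $\mathrm{spin}^c$ structures vary across the literature, so this is illustrative rather than a definitive match for the conventions used here:

```python
from fractions import Fraction

def d(p, q, i):
    """Correction term d(p, q, i) of a lens space via the recursion
    d(p, q, i) = ((2i + 1 - p - q)^2 - pq) / (4pq) - d(q, p mod q, i mod q),
    with base case d(1, 0, 0) = 0.  Conventions vary; see the lead-in."""
    if p == 1:
        return Fraction(0)
    return (Fraction((2*i + 1 - p - q)**2 - p*q, 4*p*q)
            - d(q, p % q, i % q))

# L(2,1) = RP^3 has correction terms {1/4, -1/4}
assert sorted(d(2, 1, i) for i in range(2)) == [Fraction(-1, 4), Fraction(1, 4)]
```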
We refer to the invariants $\lambda^\pm, \theta$ as the GRID invariants.
Suppose that $(B,\pi)$ is an open book supporting a contact manifold $(Y,\xi)$. Any link $K$ braided about this open book admits a natural transverse representative, and in fact via the general transverse Markov theorem \cite{pav} all transverse links arise as braids about $(B,\pi)$.
Using this perspective Baldwin, Vela-Vick and V\'{e}rtesi \cite{equiv} defined the BRAID invariant of transverse links in $(Y,\xi)$:
\[
t(K) \in HFK^- (-Y,K).
\]
The contact class in Heegaard Floer homology is characterized in terms of the Alexander filtration induced by the binding of any supporting open book on the Heegaard Floer chain complex \cite{contactclass}. The BRAID invariant admits a similar characterization. Suppose that $Y$ is a rational homology sphere. If $K$ is braided about $(B,\pi)$, we may consider the filtration on $CFK^- (-Y,K)$ induced by $-B$:
\[
\emptyset = \mathcal{F}^{-B}_i\subset\mathcal{F}^{-B}_{i+1}\subset \dots \subset \mathcal{F}^{-B}_j = {CFK}^-(-Y,K).
\]
Set
$bot := \min\{j \mid H_* (\mathcal{F}_j^{-B})\ne 0 \}$
and let $H_{top}(\mathcal{F}^{-B}_{bot})$ denote the summand of $H_*(\mathcal{F}^{-B}_{bot})$ of maximal rational Maslov grading. We prove in Subsection \ref{subsec:reformulation} that $H_{top}(\mathcal{F}^{-B}_{bot})$ is a rank one $\mathbb{F}[U_1,\dots, U_m]$-module, where $m$ is the number of components of $K$.
The invariant $t(K)$ is the image of a generator under the natural map $H_{top}(\mathcal{F}^{-B}_{bot}) \to HFK^- (-Y,K)$. \footnote{This is not exactly right, see Subsection \ref{subsec:reformulation} for precise statements.}
The above reformulation of $t(K)$ was first proven in the case of a braid about the unknot in $S^3$ \cite{equiv}. There, the authors show that the GRID invariant $\theta (K)$ for a transverse link $K\subset (S^3,\xi_{std})$ also admits such a reformulation, and use this to prove the equivalence of $t(K)$ and $\theta (K)$ in this special case. Generalizing their approach we prove the following:
\begin{theorem}
\label{thm:equivalence}
Let $K\subset (L(p,q),\xi_{UT})$ be a transverse link. Then the GRID and BRAID invariants are equivalent: there exists a graded isomorphism of $\mathbb{F}[U_1,\dots,U_m]$-modules
\[
HFK^-(-L(p,q),K)\to HFK^-(-L(p,q),K)
\]
mapping the class $\theta(K)$ to $t(K)$.
\end{theorem}
Our proof involves generalizing the reformulation of $t(K)$ to braids about rational open books having connected binding.
The lens space $L(p,q)$ can be obtained by $-p/q$ surgery on the unknot in $S^3$; the core of the filling torus is a rationally fibered knot $B\subset L(p,q)$ with $D^2$ fibers. We let $(B,\pi)$ denote this rational open book; the monodromy $\pi$ is a $2\pi q/p$ boundary twist. $(B,\pi)$ supports a universally tight contact structure $\xi_{UT}$ on $L(p,q)$ \cite{CCSMCM}. We refer to a braid about $(B,\pi)$ as a \emph{lens space braid}.
In Section \ref{sec:altchara} we prove an alternative reformulation of $t(K)$ involving the Alexander filtration induced by the unknotted Seifert cable of the rational binding $B$. We show in Section \ref{sec:gridchara} that the GRID invariant $\theta(K)$ of Corollary \ref{corollary:grid} satisfies the same reformulation involving the Seifert cable of the binding, allowing us to prove Theorem \ref{thm:equivalence}.
Lisca, Ozsv\'{a}th, Stipsicz and Szab\'{o} \cite{LOSS} defined invariants of null-homologous Legendrian and transverse knots in a contact manifold $(Y,\xi)$.
\[
\mathcal{L}(K),\mathcal{T}(K)\in HFK^- (-Y,K).
\]
These are referred to as the LOSS invariants. For a transverse knot $K$, it is proven directly in \cite{equiv} that $\mathcal{T}(K) = t(K)$; Theorem \ref{thm:equivalence} allows us to conclude that for a transverse knot $K\subset (L(p,q),\xi_{UT})$ the invariants $\theta (K)$ and $\mathcal{T}(K)$ are equivalent. The equivalence of Legendrian invariants also follows immediately:
\begin{corollary}
\label{corollary:same}
If $K\subset (L(p,q),\xi_{UT})$ is a Legendrian knot, the Legendrian GRID and LOSS invariants are equivalent, i.e.\ there
exists a graded isomorphism of $\mathbb{F}[U]$-modules
\[
HFK^-(-L(p,q),K)\to HFK^-(-L(p,q),K)
\]
mapping the class $\lambda^{+} (K)$ to $\mathcal{L}(K)$ and the class $\lambda^- (K)$ to $\mathcal{L}(-K)$, where $-K$ denotes the knot with reversed orientation.
\end{corollary}
\subsection{The Berge conjecture and simple knots}
These new invariants may give some insight into the Berge conjecture, which proposes an exhaustive list of knots in the 3-sphere admitting Dehn surgeries to lens spaces.
Dually, the Berge conjecture is equivalent to showing that any knot in a lens space admitting a surgery to the 3-sphere is \emph{simple}; i.e. can be encoded with an index one grid diagram.
Given a grid diagram for a knot $K\subset L(p,q)$, one can \emph{dualize} to obtain a grid diagram for the mirror $K\subset -L(p,q)$. Via this construction, a grid diagram gives rise to Legendrian representatives of the encoded knot $K$ and its mirror.
The GRID invariants can be used to characterize index one grid diagrams for knots admitting $S^3$ surgeries:
\begin{theorem}
\label{thm:gridmirror}
Suppose $K\subset L(p,q)$ admits an $S^3$ surgery. If a grid diagram $\mathcal{G}$ for $K$ gives rise to Legendrian representatives $L_0\subset (L(p,q),\xi_{UT})$ and $L_1\subset (L(p,p-q),\xi_{UT})$ then
\[
\mathcal{G} \text{ is an index one diagram} \iff \widehat{\lambda}^+(L_0),\widehat{\lambda}^-(L_0),\widehat{\lambda}^+(L_1),\widehat{\lambda}^-(L_1)\ne 0.
\]
\end{theorem}
The forward direction is immediate: if $\mathcal{G}$ is an index one diagram, the two complexes used to define the quadruple of invariants have trivial differentials. The reverse direction utilizes the Floer simplicity of $K$, see Section \ref{sec:gridmirror}.
\noindent
We formulate two conjectures:
\begin{conjecture}
\label{conj:rep}
Any knot $K\subset L(p,q)$ admitting a surgery to the 3-sphere has a Legendrian representative $L\subset (L(p,q),\xi_{UT})$ such that $\widehat{\lambda}^+(L),\widehat{\lambda}^-(L)\ne 0$.
\end{conjecture}
\begin{conjecture}
\label{conj:grid}
Given any Legendrian representatives (in universally tight lens spaces) of $K\subset L(p,q)$ and its mirror $m(K)\subset L(p,p-q)$ there exists a grid diagram giving rise to the pair of Legendrians.
\end{conjecture}
Assuming the validity of these conjectures, the Berge conjecture follows readily via an application of Theorem \ref{thm:gridmirror}.
Suppose $K\subset L(p,q)$ admits a surgery to the 3-sphere. Conjecture \ref{conj:grid} produces a grid diagram $\mathcal{G}$ for $K$ giving rise to the Legendrian representatives of $K$ and its mirror guaranteed by Conjecture \ref{conj:rep}, where the associated quadruple of invariants is non-vanishing. Theorem \ref{thm:gridmirror} implies that $\mathcal{G}$ is an index one diagram, hence $K$ is a simple knot.
Any knot $K\subset L(p,q)$ which admits a surgery to the 3-sphere is rationally fibered and supports a tight contact structure. Progress on Conjecture \ref{conj:rep} is likely to occur first for knots supporting a universally tight contact structure, and will probably appeal to properties of the LOSS invariant $\widehat{\mathcal{L}}$ via Corollary \ref{corollary:same}. Conjecture \ref{conj:grid} is known to hold for knots in the tight 3-sphere; this is a result of Dynnikov and Prasolov \cite{jonescon}.
\begin{comment}
\textbf{Organization} Section \ref{sec:contact} provides some contact geometric background. We review the correspondence between grid diagrams for links in lens spaces and Legendrian and transverse links in universally tight lens spaces, and lens space braids.
Section \ref{sec:HFK} provides the necessary background on knot Floer homology. We prove Theorem \ref{thm:grid} in Section \ref{sec:invariants}. In Section \ref{sec:braidinvt} we reformulate the BRAID invariant in terms of an Alexander filtration on the knot Floer complex induced by the binding. In Section \ref{sec:rationalchar} we use work of \cite{QOB} to extend the reformulation of the previous section to braids about rational open books whose pages have connected boundary.
In Section \ref{sec:diagram} we present a standard Heegaard diagram for lens space braids and identify an intersection point representing the BRAID invariant. In Section \ref{sec:altchara} we give an alternate reformulation of the BRAID invariant of a lens space braid in terms of the Alexander filtration on its knot Floer complex induced by the Seifert cable of the binding of $(B,\pi)$. Finally, in Section \ref{sec:gridchara} we show that the GRID invariant satisfies the reformulation of the previous section and prove Theorem \ref{thm:equivalence}.
\end{comment}
\subsection{Acknowledgements} We thank John Baldwin for many helpful discussions.
\section{Contact Preliminaries}
\label{sec:contact}
\subsection{Contact Geometry}
\label{subsec:contact}
We assume the reader has a certain knowledge of contact geometry.
For an introduction to the Giroux correspondence and open books consult the wonderful notes of Etnyre \cite{openbooks}. For a reference on transverse and Legendrian links we point the reader to Etnyre's survey \cite{legtransverse}.
Let $(Y,\xi)$ be a contact 3-manifold. Suppose that $(B,\pi)$ is an open book supporting $(Y,\xi)$. $B$ sits naturally as a transverse link. Any link braided about $B$ is also naturally a transverse link, as the contact plane field is very close to the plane field tangent to the pages away from the binding $B$. The following is a generalization of a theorem of Bennequin \cite{ben}:
\begin{theorem} \cite{pav}
Suppose $(B,\pi)$ is an open book supporting $(Y,\xi)$. Every transverse link in $(Y,\xi)$ is transversely isotopic to a braid with respect to $(B,\pi)$.
\end{theorem}
There is a notion of positive Markov stabilization for braids with respect to an arbitrary open book, defined in \cite{pav}. This operation increases the braid index by one, but preserves the transverse isotopy class of the braid. The following is a generalization of the transverse Markov theorem of Wrinkle \cite{wrinkle}.
\begin{theorem} \cite{pav}
Suppose $K_1$ and $K_2$ are braids with respect to an open book $(B,\pi)$ supporting $(Y,\xi)$. $K_1$ and $K_2$ are transversely isotopic if and only if they admit positive Markov stabilizations $K_1 ^+$ and $K_2 ^+$ which are braid isotopic with respect to $(B,\pi)$.
\end{theorem}
The binding $B$ of an open book supporting $(Y,\xi)$ sits naturally as a transverse link.
A copy of $B$ may be braided about the underlying open book, resulting in a braid of index $n$, where $n$ is the number of components of $B$.
Recall that the neighborhood of a transverse knot in any contact manifold is standard. If $K$ is transverse it admits a neighborhood contactomorphic to
\[
N_\epsilon = \{(r,\theta,z):r<\epsilon\} \subset \mathbb{R}^2 \times S^1
\]
where $\xi = \ker(\alpha) = \ker(dz+r^2 d\theta)$, and $K$ is identified with $\{(0,0)\}\times S^1$. In these coordinates, $K$ admits a parametrization $\gamma (t) = (0,t,t)$, where $t\in [0,2\pi)$. Consider the following transverse isotopy
\[
\Gamma _s (t) = (s,t,t)
\]
from $\gamma _0 (t)$ to $\gamma_{\epsilon /2} (t)$. Applying this isotopy to each component of $B$ realizes a copy of $B$ as an index $n$ braid.
We will also use the notion of a rational open book and how one supports a contact structure; the original reference is \cite{CCSMCM}. Whenever an open book is rational we will emphasize it; otherwise open books are assumed to be integral. Any link braided about a rational open book also sits naturally as a transverse link in the supported contact manifold.
The unknot $U\subset S^3$ is fibered with disk pages. We take the convention that the lens space $L(p,q)$ is obtained by $-p/q$ surgery on the unknot. Let $B\subset L(p,q)$ be the core of the filling torus in the Dehn surgery.
$B$ is the binding of a rational open book decomposition $(B,\pi)$ for $L(p,q)$. The open book has $D^2$ pages, and the monodromy $\pi$ is a counter-clockwise $2\pi q/p$-rotation, which we denote by $\delta ^{q/p}$.
Honda \cite{contact1} classifies the universally tight contact structures (those which lift to the tight contact structure on $S^3$) on lens spaces. There are at most two such contact structures on $L(p,q)$. The structure is unique if $q=p-1$, otherwise the two universally tight contact structures are related by co-orientation reversal.
We let $\xi _{UT}$ denote the contact structure on $L(p,q)$ supported by the rational open book $(B,\pi)$, see \cite{CCSMCM} for a proof that the contact structure is universally tight. $\xi_{UT}$ is also constructed explicitly as the kernel of a globally defined 1-form in section 3 of \cite{lensgridleg}.
We will need the classical invariants for rationally null-homologous Legendrian and transverse links, studied in \cite{RLCG}.
\begin{definition}
Let $L\subset Y$ be an oriented, rationally null-homologous link, which is partitioned into sublinks $L_1\cup\dots \cup L_l$. Let $r$ denote the least common multiple of the orders of the components of $L$ in $H_1 (Y;\mathbb{Z})$. Let $i:\Sigma \to Y$ be a \emph{uniform rational Seifert surface} for $L$, by which we mean an oriented surface whose boundary wraps $r$ times around each component of $L$ (even those components whose orders in homology are less than $r$).
\begin{itemize}
\item Given another oriented link $L'$, define the \emph{rational linking number} of $L$ with $L'$ to be
\[
lk_{\mathbb{Q}} (L,L') = \frac{1}{r}\Sigma \cdot L'
\]
where $\Sigma \cdot L'$ is the algebraic intersection number. In general the rational linking number may depend on the relative homology class of rational Seifert surface $\Sigma$ for $L$. We will only consider rational homology spheres, so we continue to suppress $\Sigma$ from the notation.
\item If $L$ is a Legendrian link in $(Y,\xi)$ let $L'$ denote the longitude for $L$ specified by the contact framing $\xi |_L \cap \nu (L)$. We define the \emph{rational Thurston-Bennequin number} of $L$ to be
\[
tb_{\mathbb{Q}}(L) = lk_\mathbb{Q} (L,L').
\]
\item Consider a trivialization of $\xi|_{\Sigma} \simeq \Sigma \times \mathbb{R}^2$. Since $L$ is oriented, the positive unit tangent vectors to $L$ give rise to a nonzero section $\sigma$ of this trivial bundle restricted to $\partial \Sigma$. We define the \emph{rational rotation number} of $L$ to be the winding number of $\sigma$ divided by $r$.
\[
rot_\mathbb{Q} (L) = \frac{1}{r} \operatorname{winding}(\sigma,\mathbb{R}^2)
\]
We also denote the winding of the section $\sigma$ restricted to the component of $\partial \Sigma$ which wraps around the sublink $L_i\subset L$ by
\[
rot^i_\mathbb{Q}(L).
\]
\item Suppose $L$ is a transverse link in $(Y,\xi)$. The map $i|_{\partial \Sigma} : \partial \Sigma \to L$ is an $r$-fold covering map. The map $i$ induces a map from a small neighborhood of the zero-section of $i^* \xi$ to a small neighborhood of the zero section of $\xi |_L$, which in turn is naturally identified with a neighborhood $\nu(L)$ of $L$ (because $L$ is transverse). The bundle $i^* \xi$ is trivial, so there exists a nonvanishing section $v$. A small and generic choice of section $v$ gives rise to a link $L'$ sitting in $\nu(L)\smallsetminus L$. We define the \emph{rational self-linking number} of $L$ to be
\[
sl_\mathbb{Q} (L) = \frac{1}{r}lk_\mathbb{Q}(L,L')
\]
The coefficient $\frac{1}{r}$ appears because $L'$ is an $r$-fold push-off of $L$.
We define
\[
sl^i_\mathbb{Q}(L) = \frac{1}{r}lk_\mathbb{Q}(L_i,L').
\]
\end{itemize}
\end{definition}
\subsection{Grid diagrams for links in lens spaces}
\label{subsec:contactgrid}
Grid diagrams for links in lens spaces were first studied in \cite{lensgridcomb}, and subsequently from a contact geometric perspective in \cite{lensgridleg}.
Let $T^2$ be the standard torus $\mathbb{R}^2/\mathbb{Z}^2$, where $\mathbb{Z}^2$ is the standard lattice generated by $(1,0)$ and $(0,1)$. Let $\pi :\mathbb{R}^2 \to \mathbb{R}^2/\mathbb{Z}^2 = T^2$ denote the quotient map.
\begin{definition}
Suppose that $0<q<p$. A \emph{grid diagram} for a link $K\subset L(p,q)$, with \emph{index} $n$ is a Heegaard diagram $\mathcal{G} = (T^2,\boldsymbol{\alpha},\boldsymbol{\beta},\bold{z},\bold{w})$ where
\begin{itemize}
\item $\boldsymbol{\alpha} = \{\alpha_0,\dots,\alpha_{n-1}\}$, where $\alpha_i$ is the image of the line $y=i/n$ under the map $\pi$. The $n$ annular components of $T^2 \smallsetminus \boldsymbol{\alpha}$ are called the \emph{rows} of $\mathcal{G}$. For $0\le i < n$ the row between $\alpha_{i}$ and $\alpha_{i+1}$ is called the $i^{th}$ row.
\item $\boldsymbol{\beta} = \{\beta_0,\dots,\beta_{n-1}\}$, where $\beta_i$ is the image of the line $y= -\frac{p}{q}(x-\frac{i}{pn})$ under the map $\pi$. The $n$ annular components of $T^2 \smallsetminus \boldsymbol{\beta}$ are called the \emph{columns} of $\mathcal{G}$. For $0\le i < n$ the column between $\beta_{i}$ and $\beta_{i+1}$ is called the $i^{th}$ column.
\item $\bold{z} = \{z_0,z_1,\dots,z_{n-1}\}$. For each $i$, the basepoint $z_i$ is in the $i^{th}$ column.
\item $\bold{w} = \{w_0,w_1,\dots,w_{n-1}\}$. For each $i$, the basepoint $w_i$ is in the $i^{th}$ column.
\item Each region of $T^2 \smallsetminus \boldsymbol{\alpha} \smallsetminus \boldsymbol{\beta}$ contains at most one basepoint of $\bold{z}\cup\bold{w}$.
\item Each row contains two basepoints, one from $\bold{z}$ and one from $\bold{w}$.
\end{itemize}
\end{definition}
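In combinatorial terms, since $z_i$ and $w_i$ lie in the $i^{th}$ column by definition, an index $n$ grid diagram is largely determined by recording the row of each basepoint, and the condition that each row contains one $\bold{z}$- and one $\bold{w}$-basepoint says that these row lists are permutations. The following sketch (our own illustrative encoding, not taken from \cite{lensgridcomb}) checks exactly these conditions; the finer requirement that each region of $T^2 \smallsetminus \boldsymbol{\alpha} \smallsetminus \boldsymbol{\beta}$ contains at most one basepoint would require positions within each row and is omitted here.

```python
def is_valid_grid(n, z_rows, w_rows):
    """Check the row/column basepoint conditions of an index-n grid diagram.

    z_rows[i] and w_rows[i] are the rows containing z_i and w_i, which by
    definition sit in the i-th column.  Each row must contain exactly one
    z- and one w-basepoint, so both lists must be permutations of 0..n-1.
    (The "at most one basepoint per region" condition is not checked.)
    """
    if len(z_rows) != n or len(w_rows) != n:
        return False
    return sorted(z_rows) == list(range(n)) and sorted(w_rows) == list(range(n))
```

For example, `is_valid_grid(3, [0, 1, 2], [1, 2, 0])` passes, while placing two $\bold{z}$-basepoints in the same row fails.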
In Proposition 4.3 of \cite{lensgridcomb} it is shown that every link $K\subset L(p,q)$ is represented by a grid diagram.
Baker and Grigsby \cite{lensgridleg} introduce the notion of a toroidal front diagram; these diagrams are similar to regular front diagrams for knots in $(\mathbb{R}^3, \xi_{std})$:
\begin{proposition}(Proposition 3.3 of \cite{lensgridleg})
A toroidal front diagram uniquely specifies a Legendrian link in $(L(p,q),\xi_{UT})$ up to Legendrian isotopy.
\end{proposition}
\begin{proposition}(Proposition 3.4 of \cite{lensgridleg})
Every Legendrian isotopy class in $(L(p,q),\xi_{UT})$ has a representative admitting a toroidal front projection.
\end{proposition}
To a grid diagram $\mathcal{G}$ for $K$ one can associate several rectilinear projections of $K$ onto $T^2$. Baker and Grigsby describe how to canonically perturb a rectilinear projection of $K$ into a Legendrian front; they prove:
\begin{proposition}(Lemma 4.5 and Proposition 4.6 of \cite{lensgridleg})
\label{prop:front}
A rectilinear projection associated to a grid diagram $\mathcal{G}$ for $K\subset L(p,q)$ uniquely specifies a toroidal front for $L$, a Legendrian in $(L(p,q),\xi_{UT})$. Moreover, the Legendrian isotopy class of
$L$ is independent of the choice of rectilinear projection coming from $\mathcal{G}$; hence a grid diagram uniquely specifies a Legendrian representative of $K$.
\end{proposition}
The assumption that $0<q<p$ is needed to obtain a toroidal front diagram: the pieces of the rectilinear projection inside the columns of the diagram must be negatively sloped.
\begin{remark}
\label{remark:ORIENTATIONS}
Baker and Grigsby actually show that a grid diagram for $K\subset L(p,q)$ induces a Legendrian representative of the mirror $K\subset -L(p,q)$. Their approach fits nicely with the conventions established in \cite{grid} and \cite{lensgridcomb} and can be easily modified to give the results stated in this paper. Our conventions will be more natural in proving Theorem \ref{thm:equivalence}.
\end{remark}
Baker and Grigsby classify the grid moves which preserve topological and Legendrian isotopy classes.
\begin{theorem}(Theorem 5.1 of \cite{lensgridleg})
\label{thm:leg}
Let $\mathcal{G}$ and $\mathcal{G} '$ be grid diagrams for $K$ and $K'$ in $L(p,q)$. Let $L$ and $L'$ denote the induced Legendrian representatives of $K$ and $K'$ by $\mathcal{G}$ and $\mathcal{G} '$ respectively.
\begin{itemize}
\item $K$ and $K'$ are isotopic if and only if $\mathcal{G}$ and $\mathcal{G} '$ are related by a sequence of elementary grid moves.
\item $L$ and $L'$ are Legendrian isotopic if and only if $\mathcal{G}$ and $\mathcal{G}'$ are related by a sequence of elementary Legendrian grid moves.
\end{itemize}
\end{theorem}
The \emph{elementary grid moves} consist of eight different types of (de)stabilizations along with row and column commutations. See Figures \ref{fig:stab} and \ref{fig:commutation}.
\begin{figure}[h]
\def400pt{175pt}
\input{WNWstab.pdf_tex}
\caption{A W:NW stabilization is pictured. A new pair of curves, $\alpha '$ and $\beta '$, is added, in addition to a pair of basepoints. The ordinal direction of the stabilization indicates which slot is basepoint free after stabilizing. The corresponding destabilization is the inverse of this procedure.}
\label{fig:stab}
\end{figure}
\begin{figure}[h]
\def400pt{200pt}
\input{commutation.pdf_tex}
\caption{A commutation of the first and second columns on an index three diagram for a link in $L(2,1)$. Because the basepoint pairs in the columns do not interleave we are able to perform an exchange of the basepoints. Row commutations are the analogue for two adjacent rows. These commutations can be performed so long as the markings do not interleave.}
\label{fig:commutation}
\end{figure}
\begin{lemma} (Compare with Lemma 4.2 of \cite{grid})
A stabilization of type Z:SE (respectively Z:NE, Z:NW, or Z:SW) is equivalent to a stabilization of type W:NW (respectively W:SW, W:SE, or W:NE) followed by a sequence of commutation moves on the torus.
\end{lemma}
\begin{proof}
After performing a stabilization near a $Z$ basepoint, we can perform a sequence of commutation moves to get the desired diagram.
\end{proof}
The \emph{elementary Legendrian grid moves} are comprised of commutations along with (de)stabilizations of types W:NE, and W:SW.
It is easy to see which elementary grid moves correspond to Legendrian (de)stabilization:
\begin{lemma}
(De)stabilizations of type W:SE and W:NW correspond to negative and positive Legendrian (de)stabilization respectively.
\end{lemma}
\begin{proof}
Consider the rectilinear projection $\lambda$ of $K$ constructed in the proof of Proposition \ref{prop:braiding}. This projection may be smoothed to a toroidal front projection of $L$, the associated Legendrian representative of $K$, having zero cusps. A stabilization of type W:SE has the effect of locally adding two upward oriented cusps, see Figure \ref{fig:negstab}. This can be thought of as taking place in a small Darboux ball, hence corresponds to negative Legendrian stabilization.
\begin{figure}[h]
\def400pt{200pt}
\input{negstab.pdf_tex}
\caption{The effect of a W:SE stabilization on an associated rectilinear projection of $K$ is pictured on top. The effect on the associated toroidal front projection is pictured below.}
\label{fig:negstab}
\end{figure}
Likewise, by rotating the figures 180 degrees, a stabilization of type W:NW can be seen to have the effect of adding two downward oriented cusps, and hence corresponds to positive Legendrian stabilization.
\end{proof}
\begin{remark}
Since $(B,\pi)$ supports $\xi_{UT}$, the contact planes are oriented so that the ``upward" braid induced by a grid diagram is a positive transverse link, see Proposition \ref{prop:braiding}. Reversing co-orientation preserves the set of elementary Legendrian grid moves and exchanges the two moves corresponding to positive and negative Legendrian (de)stabilization; the set of elementary transverse grid moves is not preserved under co-orientation reversal. Let $\overline{\xi_{UT}}$ denote $\xi_{UT}$ with the reverse co-orientation.
\end{remark}
Transverse isotopy classes are in one-to-one correspondence with Legendrian isotopy classes up to negative Legendrian (de)stabilization \cite{transverseapprox}. The transverse isotopy class associated to a Legendrian link is obtained via positive transverse push-off. A grid diagram for $K$ gives rise to a transverse representative $T$ by taking the positive transverse push-off of $L$. (De)stabilizations of type W:SE in addition to the elementary Legendrian grid moves comprise the \emph{elementary transverse grid moves}. The following is evident:
\begin{proposition}
\label{prop:trans}
Let $\mathcal{G}$ and $\mathcal{G} '$ be grid diagrams for $K$ and $K'$ in $L(p,q)$. Let $T$ and $T'$ denote the induced transverse representatives of $K$ and $K'$ by $\mathcal{G}$ and $\mathcal{G} '$ respectively.
$T$ and $T'$ are transversely isotopic if and only if $\mathcal{G}$ and $\mathcal{G} '$ are related by a sequence of elementary transverse grid moves.
\end{proposition}
\begin{proposition}
\label{prop:braiding}
Let $(B,\pi)$ denote the rational open book supporting ($L(p,q),\xi_{UT}$) described in subsection \ref{subsec:contact}.
Each grid diagram $\mathcal{G}$ naturally induces a braiding $\mathcal{B}$ of $K$ about $B$, and hence a transverse representative $T$ of $K$.
\end{proposition}
\begin{proof}
Given a grid diagram $\mathcal{G} = (T^2,\boldsymbol{\alpha},\boldsymbol{\beta},\bold{z},\bold{w})$, we specify a longitude $\lambda$ for $K$ in the following way. In each column draw an oriented arc $\gamma _i ^\beta$ upward (i.e. having tangent vector $-q\frac{d}{dx} +p\frac{d}{dy}$) from $w_i$ to $z_i$. For each $i$, draw an arc $\gamma^\alpha_i$ in the $i^{th}$ row from a point of $\bold{z}$ to a point of $\bold{w}$, oriented right to left (i.e. having tangent vector $-d/dx$). We push all horizontal arcs slightly into the $\boldsymbol{\alpha}$ handlebody, all vertical arcs into the $\boldsymbol{\beta}$ handlebody and set $\lambda = \bigcup_{i=0}^{n-1}( \gamma^\alpha_i \cup \gamma^\beta_i)$. The pages of $(B,\pi)$ meet $T^2$ in parallel copies of $\alpha_0$.
Note that $\lambda$ may be made positively transverse to all parallel copies of $\alpha_0$ in $T^2$ via a small isotopy, realizing $K$ as a braid about $B$.
\end{proof}
The \emph{elementary braid grid moves} consist of commutations along with (de)stabilizations of type W:SE and W:SW. These are precisely the grid moves which leave $\mathcal{B}$ unchanged.
It is elementary to check the following proposition.
\begin{proposition}
\label{prop:braid}
Let $\mathcal{G}$ and $\mathcal{G} '$ be grid diagrams for $K$ in $L(p,q)$. Let $\mathcal{B}$ and $\mathcal{B}'$ denote the induced braid representatives of $K$ by $\mathcal{G}$ and $\mathcal{G} '$ respectively.
$\mathcal{B}$ and $\mathcal{B}'$ are braid isotopic if and only if $\mathcal{G}$ and $\mathcal{G} '$ are related by a sequence of elementary braid grid moves.
\end{proposition}
A (de)stabilization of type W:NE corresponds to a generalized positive Markov (de)stabilization in the rational setting. A generalized positive Markov stabilization has the effect of connect summing the braid with an unknot of braid index $p$ along a positively half twisted band.
The following proposition is a generalization of the analogous statement for grid diagrams in $S^3$, proven in \cite{gridcommutative}. Their proof generalizes to our setting.
\begin{proposition}
\label{prop:commute}
The two transverse links associated to a grid diagram coincide, i.e. the diagram
\[
\xymatrix{
\mathcal{G} \ar[d] \ar[r] & L \ar[d]\\
\mathcal{B} \ar[r]&T}
\]
commutes.
\end{proposition}
\begin{comment}
\begin{proof}
Let $\lambda$ denote the longitude for $K$ on $T^2$ constructed in the proof of Proposition \ref{prop:braiding}, and let $\Lambda$ denote the associated toroidal front of $L$. The construction of $L$ in Proposition 3.3 of \cite{lensgridleg} ensures that $L$ misses open neighborhoods of the cores of the $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ handlebodies (this is expressed in their coordinates on $L(p,q)$ as $r_1 \in (0,1)$); in particular $L$ misses an open neighborhood $\nu (B)$ of the binding of $(B,\pi)$.
Let $T$ denote a small positive transverse push-off of $L$ (so that the push-off takes place in the complement of $\nu(B)$); in what is to come we think of $T$ as a fixed embedding, not a transverse isotopy class. Since $(B,\pi)$ supports $\xi_{UT}$ there exists an isotopy through contact structures from $\xi_{UT}$ to $\xi$, such that the resulting contact structure $\xi$ is arbitrarily close to the tangent plane field of the pages of $(B,\pi)$ in the complement of $\nu (B)$. Transversality is an open condition, so we may assume that $T$ remains transverse throughout this isotopy. It is now clear that $\mathcal{B}$ gives rise to $T$ in $(L(p,q),\xi)$, and hence in $(L(p,q),\xi_{UT})$.
\end{proof}
\end{comment}
We have a rational transverse Markov theorem in $(L(p,q),\xi_{UT})$:
\begin{theorem}
\label{theorem:Markov}
Two braids about $(B,\pi)$ represent transversely isotopic knots in $(L(p,q),\xi_{UT})$ if and only if they are related by a sequence of braid isotopies and generalized positive Markov (de)stabilizations.
\end{theorem}
\begin{proof}
The elementary transverse grid moves are the elementary braid grid moves in addition to (de)stabilization of type W:NE; this (de)stabilization corresponds to positive Markov (de)stabilization. Propositions \ref{prop:trans} and \ref{prop:braid} give the result.
\end{proof}
\subsection{Classical invariants and grid diagrams}
As in the previous subsection, let $\mathcal{G}$ denote a grid diagram for a link $K\subset L(p,q)$, and let $L$ and $T$ denote the induced Legendrian and transverse representatives. Cornwell \cite{BTI} has derived combinatorial formulas for the classical invariants of $L$ and $T$ coming from the grid diagram $\mathcal{G}$. These formulas generalize the well-known formulas for classical invariants of Legendrian and transverse links in the tight three-sphere coming from front projections.
Let $P$ denote a toroidal front projection of $K$ as in Proposition \ref{prop:front}. Let $w$ denote the writhe of this projection. Let $m$ denote the algebraic intersection number of $\alpha _0$ with $\lambda$, and $l$ the algebraic intersection number of $\beta_0$ with $\lambda$. There will be some cusps in the projection; let $c_d$ denote the number of downward oriented cusps, $c_u$ the number of upward oriented cusps, and $c$ the total number of cusps.
\begin{proposition} (Propositions 3.2, 3.6, and Corollary 3.7 of \cite{BTI})
\label{prop:classical}
Let $L$ and $T$ denote the induced Legendrian and transverse representatives of $K$ in $(L(p,q),\xi_{UT})$. We have the following formulas for the classical invariants of $L$ and $T$:
\begin{align*}
tb_\mathbb{Q} (L) = w - \frac{c}{2} - \frac{ml}{p}\quad\quad\quad
rot_\mathbb{Q} (L) = \frac{1}{2} (c_d - c_u) - \frac {l-m}{p}\\
sl_\mathbb{Q} (T) = w - c_d - \frac{ml +(m-l)}{p}
\end{align*}
\end{proposition}
Let $L=L_1\cup\dots \cup L_l$ and
let $w^i, c_d ^i, c_u ^i, l^i$ and $m^i$ denote the contributions to $w,c_d,c_u,l$ and $m$ coming from the $i^{th}$ sublink $L_i\subset L$. The proof of Proposition \ref{prop:classical} in fact shows that
\begin{align*}
rot^i_\mathbb{Q} (L) = \frac{1}{2} (c_d^i - c_u ^i) - \frac {l^i-m^i}{p}\\
sl^i_\mathbb{Q} (T) = w^i - c_d^i - \frac{m^il^i +(m^i-l^i)}{p}.
\end{align*}
Let $\pi :(S^3,\xi_{std})\to (L(p,q),\xi_{UT})$ denote the contact universal cover.
By taking $p$ copies of a grid diagram $\mathcal{G}$ for $K\subset L(p,q)$ and stacking them vertically (see Figure \ref{fig:stacking}), we obtain a grid diagram $\mathcal{G}'$ for $K' \subset S^3$. Let $L, T, L'$, and $T'$ denote the Legendrian and transverse representatives induced by $\mathcal{G}$ and $\mathcal{G}'$, respectively. By virtue of how a toroidal front projection induces Legendrian and transverse representatives (Proposition \ref{prop:front}) we have that $L' = \pi^{-1} (L)$ and $T' = \pi^{-1} (T)$.
\begin{figure}[h]
\def400pt{250pt}
\input{gridcover.pdf_tex}
\caption{Stacking an index two diagram $\mathcal{G}$ for a link $K\subset L(3,1)$ to obtain an index six diagram $\mathcal{G}'$ for $K'\subset S^3$. The solid and hollow dots depict $\bold{w}$ and $\bold{z}$ basepoints, respectively.}
\label{fig:stacking}
\end{figure}
The writhe and the number of downward or upward oriented cusps are all multiplied by $p$ when passing from $\mathcal{G}$ to $\mathcal{G}'$; the quantities $m$ and $l$ are preserved. Combining these facts with the formulas of Proposition \ref{prop:classical} it is easy to see how the classical invariants behave under this contact universal cover:
\begin{lemma}
\label{lem:cover}
Let $\pi :(S^3,\xi_{std})\to (L(p,q),\xi_{UT})$ denote the contact universal cover. Let $L, T \subset (L(p,q),\xi_{UT})$ be Legendrian and transverse links. Let $L' = \pi^{-1} (L)$ and $T' = \pi^{-1} (T)$. Then we have the following:
\begin{align*}
tb_\mathbb{Q} (L) = \frac{1}{p} tb(L')\quad\quad\quad
rot_\mathbb{Q} (L) = \frac{1}{p} rot(L')\quad\quad\quad
sl_\mathbb{Q} (T) = \frac{1}{p} sl(T')
\end{align*}
and
\begin{align*}
rot^i_\mathbb{Q} (L) = \frac{1}{p} rot^i(L')\quad\quad\quad
sl^i_\mathbb{Q} (T) = \frac{1}{p} sl^i(T')
\end{align*}
where the lift $L'$ has been partitioned into sublinks $L_1'\cup\dots \cup L_l '$.
\end{lemma}
If $T$ is the positive transverse push-off of a null-homologous Legendrian link $L$, it is well known that $sl(T)=tb(L)-rot(L)$. By the above Lemma, this equality holds for links in universally tight lens spaces as well.
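This equality, together with the covering behavior recorded in Lemma \ref{lem:cover}, follows mechanically from the formulas of Proposition \ref{prop:classical}. The following exact-arithmetic sketch (function and variable names are ours, mirroring $w, c_d, c_u, m, l, p$ above) samples both identities at several integer values:

```python
from fractions import Fraction
from itertools import product

def tb(w, c_d, c_u, m, l, p):
    # rational Thurston-Bennequin number, Proposition prop:classical
    return w - Fraction(c_d + c_u, 2) - Fraction(m * l, p)

def rot(w, c_d, c_u, m, l, p):
    # rational rotation number, Proposition prop:classical
    return Fraction(c_d - c_u, 2) - Fraction(l - m, p)

def sl(w, c_d, c_u, m, l, p):
    # rational self-linking number, Proposition prop:classical
    return w - c_d - Fraction(m * l + (m - l), p)

for w, c_d, c_u, m, l, p in product([-2, 0, 3], [0, 2], [0, 2], [-1, 2], [1, 3], [2, 5]):
    args = (w, c_d, c_u, m, l, p)
    # sl(T) = tb(L) - rot(L) for the positive transverse push-off
    assert sl(*args) == tb(*args) - rot(*args)
    # passing to the p-fold cover multiplies w, c_d, c_u by p and fixes m, l
    lift = (p * w, p * c_d, p * c_u, m, l, 1)
    assert tb(*lift) == p * tb(*args)
    assert rot(*lift) == p * rot(*args)
    assert sl(*lift) == p * sl(*args)
```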
Recall that $\delta^{q/p}$ denotes the counterclockwise $2\pi q/p$ boundary twist on the disk, which is the monodromy of a rational open book supporting $(L(p,q),\xi_{UT})$. If $\beta\in B_n$ is an element of the braid group, then we may consider the corresponding braid $\beta \circ \delta^{q/p}$ about this rational open book. The closure of such a braid is naturally a transverse link, just as in the integral case. We will often not distinguish the braid from its closure.
Let $\Delta_n\in B_n$ denote the Garside element
\[
\Delta_n = (\sigma_1 \cdots \sigma_{n-1})(\sigma_1\cdots\sigma_{n-2})\cdots(\sigma_1\sigma_2)(\sigma_1),
\]
which has square
\[
\Delta_n ^2 = (\sigma_1\cdots\sigma_{n-1})^n,
\]
the full twist on $n$ strands. Recall that the self-linking number of a braid $\beta\in B_n$ is given by $w(\beta)-n$, where $w(\beta)$ is the writhe.
\begin{lemma}
\label{lem:QSL}
The rational self-linking number of $\beta\circ\delta^{q/p}$ in $(L(p,q),\xi_{UT})$ is given by
\[
sl_{\mathbb{Q}}(\beta\circ\delta^{q/p})= w(\beta) +\frac{1}{p}(qn^2 -qn - n).
\]
\end{lemma}
\begin{proof}
Consider the contact universal cover $\pi :(S^3,\xi_{std})\to (L(p,q),\xi_{UT})$. The braid $\beta\circ\delta^{q/p}$ lifts to the braid $\beta^p\circ \Delta_n ^{2q}$. By Lemma \ref{lem:cover} we have that
\[
sl_{\mathbb{Q}}(\beta\circ\delta^{q/p}) = \frac{1}{p}(sl(\beta^p\circ \Delta_n ^{2q}))= \frac{1}{p}(w(\beta^p\circ \Delta_n ^{2q}) - n) = w(\beta) +\frac{1}{p}(qn^2 -qn - n).
\]
\end{proof}
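The arithmetic in the last step can be spelled out: $w(\Delta_n^2)=n(n-1)$ (each pair of the $n$ strands crosses twice positively in a full twist), so the lift $\beta^p\circ\Delta_n^{2q}$ has writhe $p\,w(\beta)+qn(n-1)$ while its braid index remains $n$. A quick exact-arithmetic check of the resulting closed formula (the function names are ours):

```python
from fractions import Fraction

def sl_Q(w_beta, n, p, q):
    """Rational self-linking of the closure of beta o delta^{q/p} via the lift.

    The lift beta^p o Delta_n^{2q} has writhe p*w(beta) + q*n*(n-1),
    since each full twist Delta_n^2 contributes n*(n-1) positive crossings;
    its self-linking number in S^3 is writhe minus braid index.
    """
    w_lift = p * w_beta + q * n * (n - 1)
    return Fraction(w_lift - n, p)

# agrees with the closed formula of Lemma lem:QSL
for w_beta, n, p, q in [(0, 1, 2, 1), (3, 4, 5, 2), (-2, 3, 7, 3)]:
    assert sl_Q(w_beta, n, p, q) == w_beta + Fraction(q * n * n - q * n - n, p)
```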
\subsection{Dual grid diagrams}
\label{subsec:dual}
We describe how to \emph{dualize} a grid diagram encoding a link to obtain one encoding the link's mirror. Our conventions differ from those of \cite{lensgridleg}.
Suppose $\mathcal{G}$ is a grid diagram encoding $K\subset L(p,q)$.
Reflecting $\mathcal{G}$ about the horizontal line $y=1/2$ and exchanging the sets of $\bold{z}$ and $\bold{w}$ basepoints gives rise to a Heegaard diagram $H$ encoding the mirror $K\subset -L(p,q) = L(p,p-q)$.
The $\boldsymbol{\beta}$-curves are now positively sloped, so $H$ is not a grid diagram.
Performing a shear homeomorphism of $T^2$ so that the $\boldsymbol{\beta}$-curves have slope $-\frac{p}{p-q}$ gives a grid diagram $\mathcal{G}_*$.
Suppose that the grid diagram $\mathcal{G}$ and its dual $\mathcal{G}_*$ give rise to Legendrians $L_0$ and $L_1$, respectively. Topologically, $L_1$ is the mirror of $L_0$. Let $g$ denote the index of $\mathcal{G}$.
\begin{proposition} (compare with Proposition 6.9 of \cite{lensgridleg})
\label{prop:index}
\[
tb_{\mathbb{Q}}(L_0)+tb_{\mathbb{Q}}(L_1) = -g
\]
\end{proposition}
\begin{proof}
Let $P$ denote some rectilinear projection of $L_0$ coming from $\mathcal{G}$, and let $w,l,m$ and $c$ be as in Proposition \ref{prop:classical}.
Let $P_*$ be the rectilinear projection of $L_1$ obtained by reflecting $P$ about $y=\frac{1}{2}$ and reversing the orientation of the link; $\mathcal{G}_*$ gives rise to $P_*$.
Let $w_*,l_*,m_*$ and $c_*$ be the corresponding numbers for $P_*$.
By the construction of $P_*$ it is clear that
\[
w=-w_*, \quad l=-l_*, \quad\text{and}\quad m=m_*.
\]
Moreover, each basepoint of $\mathcal{G}$ gives rise to a cusp of the toroidal front induced by either $P$ or $P_*$, i.e.
\[
c+c_* = 2g.
\]
Applying Proposition \ref{prop:classical}
\[
tb_{\mathbb{Q}}(L_0)+tb_{\mathbb{Q}}(L_1) = (w-\frac{c}{2}-\frac{ml}{p})+(w_*-\frac{c_*}{2}-\frac{m_*l_*}{p}) = -\frac{c+c_*}{2}=-g.
\]
\end{proof}
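The cancellation above is easy to confirm with exact arithmetic. A short sketch (our own function name, with $c$ the total cusp count), sampling the identity with $w_* = -w$, $l_* = -l$, $m_* = m$ and $c_* = 2g - c$:

```python
from fractions import Fraction

def tb(w, c, m, l, p):
    # rational Thurston-Bennequin number from Proposition prop:classical,
    # with c the total number of cusps
    return w - Fraction(c, 2) - Fraction(m * l, p)

for w, c, m, l, p, g in [(2, 2, 1, -1, 3, 4), (0, 4, 2, 3, 5, 3), (-1, 0, 0, 1, 2, 2)]:
    c_star = 2 * g - c                # each basepoint contributes one cusp in total
    dual = tb(-w, c_star, m, -l, p)   # w_* = -w, l_* = -l, m_* = m
    assert tb(w, c, m, l, p) + dual == -g
```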
\section{Knot Floer background}
\label{sec:HFK}
\subsection{Knot Floer Homology}
\label{subsec:HFK}
We provide a brief overview of knot Floer homology, working with $\mathbb{F}=\mathbb{Z}_2$ coefficients throughout the entire paper.
Let $L\subset Y$ be a rationally null-homologous, oriented link in a closed, oriented 3-manifold.
A multi-pointed Heegaard diagram for $(Y,L)$ is an ordered tuple $\mathcal{H} = (\Sigma,\boldsymbol{\alpha},\boldsymbol{\beta},\bold{z},\bold{w}\cup\bold{w}_F)$ where
\begin{itemize}
\item $\Sigma$ is a genus $g$ Riemann surface,
\item $\boldsymbol{\alpha} = \{\alpha_1,\dots,\alpha_{g+m+n-1}\}$ and $\boldsymbol{\beta} = \{\beta_1,\dots,\beta_{g+m+n-1}\}$ are sets of disjoint, simple closed curves on $\Sigma$ such that $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ each span half-dimensional subspaces of $H_1 (\Sigma;\mathbb{Z})$,
\item $\bold{z}$ and $\bold{w}$ are sets of $m$ \emph{linked} basepoints. Each component of $\Sigma \smallsetminus \{\alpha_1,\dots,\alpha_{g+m-1}\}$ and $\Sigma \smallsetminus \{\beta_1,\dots,\beta_{g+m-1}\}$ contains exactly one element of $\bold{z}$ and one of $\bold{w}$,
\item $\bold{w}_F$ is a set of $n$ \emph{free} basepoints. Every component of $\Sigma \smallsetminus \boldsymbol{\alpha}$ and $\Sigma\smallsetminus\boldsymbol{\beta}$ contains exactly one element of $\bold{w}\cup\bold{w}_F$.
\end{itemize}
$Y$ is specified by the Heegaard diagram $(\Sigma,\boldsymbol{\alpha},\boldsymbol{\beta})$. The link $L$ is obtained in the usual way as follows. Connect the points in $\bold{z}$ to those in $\bold{w}$ with $m$ oriented, disjoint, embedded arcs in $\Sigma \smallsetminus \boldsymbol{\alpha}$; form $\{\gamma^\alpha _1,\dots,\gamma^\alpha _{m}\}$ by pushing the interiors of the arcs into the $\boldsymbol{\alpha}$ handlebody. Likewise, connect the points in $\bold{w}$ to those in $\bold{z}$ with $m$ oriented, disjoint, embedded arcs in $\Sigma \smallsetminus \boldsymbol{\beta}$; form $\{\gamma^\beta _1,\dots,\gamma^\beta _{m}\}$ by pushing the interiors of the arcs into the $\boldsymbol{\beta}$ handlebody. The union
\[
L=\gamma^\alpha _1\cup\dots\cup\gamma^\alpha _{m}\cup\gamma^\beta _1\cup\dots\cup\gamma^\beta _{m}
\]
forms the link.
To each $w\in \bold{w}\cup\bold{w}_F$ we associate a formal variable $U_w$. Consider the totally real tori $\mathbb{T}_{\boldsymbol{\alpha}} = \alpha_1 \times \dots \times \alpha_{g+m+n-1}$ and $\mathbb{T}_{\boldsymbol{\beta}}= \beta_1 \times \dots \times \beta_{g+m+n-1}$ in the symmetric product $Sym^{g+m+n-1}(\Sigma)$. $CFK^- (\mathcal{H})$, the knot Floer complex, is a free $\mathbb{F} [\{ U_w \} _{w\in \bold{w}\cup\bold{w}_F} ]$-module generated by the intersections of $\mathbb{T}_{\boldsymbol{\alpha}}$ with $\mathbb{T}_{\boldsymbol{\beta}}$.
Let $\bold{x},\bold{y}\in\mathbb{T}_{\boldsymbol{\alpha}}\cap \mathbb{T}_{\boldsymbol{\beta}}$, $\phi\in\pi_2(\bold{x},\bold{y})$ be a Whitney disk, and suppose there is a suitable path of almost complex structures on $Sym^{g+m+n-1}(\Sigma)$; we denote the moduli space of pseudo-holomorphic representatives of $\phi$ by $\mathcal{M} (\phi)$. The formal dimension of $\mathcal{M}(\phi)$ is given by the Maslov index $\mu (\phi)$. $\widehat{\mathcal{M}} (\phi)$ denotes the quotient of $\mathcal{M}(\phi)$ by the natural translation action of $\mathbb{R}$.
For $p\in \Sigma$, we let
$n_p (\phi)$ denote the multiplicity of $D(\phi)$, the domain of $\phi$, at the point $p$. For a finite set of points $\bold{p} = \{p_1,\dots, p_k\}\subset \Sigma$, $n_\bold{p} (\phi)$ denotes the sum $n_{p_1} (\phi) +\dots +n_{p_{k}}(\phi)$.
\subsection{$Spin^C$ structures}
The correspondence between $Spin^C$-structures and homology classes of non-vanishing vector fields for 3-manifolds was first introduced by Turaev \cite{turaev}.
Ozsv\'{a}th and Szab\'{o} \cite{relspinc} generalized this construction to 3-manifolds having torus boundary components. If $L\subset Y$ is a link, a \emph{relative} $Spin^C$-\emph{structure} is a homology class of a non-vanishing vector field $v$ on $Y\smallsetminus \nu(L)$ such that $v$ points outwards along the boundary of $Y\smallsetminus \nu(L)$; we denote the set of such relative $Spin^C$-structures by $Spin^C (Y,L)$. There is an affine correspondence between $Spin^C (Y,L)$ and classes in $H^2 (Y,L;\mathbb{Z})$ which is analogous to the correspondence between $Spin^C (Y)$ and $H^2 (Y;\mathbb{Z})$; in particular there is an action of relative cohomology classes on relative $Spin^C$-structures.
There is a filling map
\[
G_{Y,L}: Spin^C (Y,L) \to Spin^C (Y)
\]
defined as follows. Let $v_L$ be a vector field on $Y\smallsetminus \nu(L)$ representing $\mathfrak{s}_L \in Spin^C(Y,L)$. Identifying $\nu (L)$ with $L\times D^2$, it is easy to see that there is a unique vector field $v_{\nu(L)}$, up to homotopy, on $\nu(L)$ which points inward along the boundary, is everywhere transverse to the $D^2$ factor, and has $L$ as an oriented closed orbit. Let $v$ denote the vector field on $Y$ obtained by gluing $v_L$ to $v_{\nu(L)}$.
We define $G_{Y,L} (\mathfrak{s}_L)$ to be the homology class of $v$.
This filling map is equivariant with respect to the action of cohomology, meaning that if $\eta \in H^2 (Y,L;\mathbb{Z})$ and $i:Y\smallsetminus L \to Y$ is the inclusion map, then
\[
G_{Y,L}(\mathfrak{s}_L +\eta) = G_{Y,L} (\mathfrak{s}_L) + i^* \eta.
\]
Let $\mathcal{H} = (\Sigma,\boldsymbol{\alpha},\boldsymbol{\beta},\bold{z},\bold{w}\cup\bold{w}_F)$ be a Heegaard diagram encoding $(Y,L)$. Ozsv\'{a}th and Szab\'{o} define a map
\[
\mathfrak{s}_{z,w} : \mathbb{T}_{\boldsymbol{\alpha}}\cap \mathbb{T}_{\boldsymbol{\beta}}\to Spin^C(Y,L)
\]
by explicitly constructing a vector field representing $\mathfrak{s}_{z,w}(\bold{x})$; the construction is similar to that of the map
\[
\mathfrak{s}_{z} : \mathbb{T}_{\boldsymbol{\alpha}}\cap \mathbb{T}_{\boldsymbol{\beta}}\to Spin^C(Y)
\]
in their earlier work.
These maps behave nicely with respect to the filling map defined above, in particular for a generator $\bold{x}\in \mathbb{T}_{\boldsymbol{\alpha}}\cap \mathbb{T}_{\boldsymbol{\beta}}$
\[
G_{Y,L} (\mathfrak{s}_{z,w} (\bold{x})) = \mathfrak{s}_{z} (\bold{x}).
\]
The complex $CFK^-(\mathcal{H})$ splits as a direct sum over both $Spin^C$ and relative $Spin^C$-structures.
The relative homological grading is called the Maslov grading. It is specified by
\[
M(\bold{x})-M(\bold{y}) = \mu(\phi)-2n_{\bold{w}\cup \bold{w}_F}(\phi)
\]
for $\bold{x},\bold{y}\in\mathbb{T}_{\boldsymbol{\alpha}}\cap\mathbb{T}_{\boldsymbol{\beta}}$ and any Whitney disk $\phi\in \pi_2 (\bold{x},\bold{y})$, and the fact that multiplication by each of the formal variables $U_w$ lowers Maslov grading by two.
If we are working in a summand corresponding to a torsion $Spin^C$ structure $\mathfrak{s}\in Spin^C (Y)$, the relative Maslov grading can be enhanced to an absolute $\mathbb{Q}$ grading \cite{absgrading}.
\subsection{The Alexander grading}
The set of relative $Spin^C$-structures determines a filtration of the chain complex $CFK^-(\mathcal{H})$ called the Alexander filtration. If $L$ is null-homologous, the filtration levels can be identified with the integers via the Alexander grading (\cite{holknots},\cite{ras}). Ni \cite{coversalex} later generalized this construction to rationally null-homologous links.
\begin{definition}
Let $L= L_1\cup\dots\cup L_l \subset Y$ be a rationally null-homologous link represented by a multi-pointed Heegaard diagram $\mathcal{H} = (\Sigma,\boldsymbol{\alpha},\boldsymbol{\beta},\bold{z},\bold{w}\cup\bold{w}_F)$. Let $F$ be a rational Seifert surface for $L$. For a generator $\bold{x}\in\mathbb{T}_{\boldsymbol{\alpha}}\cap\mathbb{T}_{\boldsymbol{\beta}}$ the \emph{$i^{th}$ Alexander grading of $\bold{x}$ with respect to F} is given by
\begin{align*}
A_{L_i} ^F (\bold{x}) =\frac{1}{2[\mu_i]\cdot[F]} \Big\langle c_1(\mathfrak{s}_{z,w}(\bold{x})) - (2n_i - 1)PD([\mu_i]),[F]\Big\rangle\\ =\frac{\big\langle c_1(\mathfrak{s}_{z,w}(\bold{x})),[F]\big\rangle}{2[\mu_i]\cdot[F]} - \Big(n_i - \frac{1}{2}\Big),
\end{align*}
where $\mu_i$ is an oriented meridian for $L_i$ and $n_i$ is the number of basepoint pairs used to encode $L_i$.
\end{definition}
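The second equality in the definition is elementary algebra: writing $d = [\mu_i]\cdot[F]$ and $x$ for the Chern-class pairing, and using $\langle PD([\mu_i]),[F]\rangle = [\mu_i]\cdot[F]$, one has $\frac{1}{2d}\big(x - (2n_i-1)d\big) = \frac{x}{2d} - \big(n_i - \frac{1}{2}\big)$. A spot check with exact arithmetic (the helper name is ours):

```python
from fractions import Fraction

def alexander(x, d, n_i):
    # first form of the Alexander grading: (x - (2*n_i - 1)*d) / (2*d)
    return Fraction(x - (2 * n_i - 1) * d, 2 * d)

for x, d, n_i in [(3, 1, 1), (-5, 2, 3), (7, 4, 2)]:
    # second form: x/(2d) - (n_i - 1/2)
    assert alexander(x, d, n_i) == Fraction(x, 2 * d) - (n_i - Fraction(1, 2))
```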
The Alexander grading only depends on the Seifert surface through its relative homology class. In this paper we will primarily be studying the case that $Y$ is a rational homology 3-sphere, where the choice of $F$ is irrelevant, or the Alexander grading induced by a binding of a rational open book, where the fiber is the preferred rational Seifert surface; we therefore often suppress $F$ from the notation.
Multiplication by $U_w$, for any $w\in \bold{w}_{L_i}$, lowers $A_{L_i}$ by one; multiplication by the other formal variables does not change $A_{L_i}$.
We denote the sum $A_{L_1} (\bold{x}) + \dots + A_{L_l}(\bold{x})$ by $A_L(\bold{x})$. The bigrading on the knot Floer homology of a link is comprised of the Maslov and collapsed Alexander gradings.
\begin{definition}
Let $K\subset Y$ be a rationally null-homologous knot.
We define the \emph{complexity} of $K$ to be
\[
||K|| = \inf \Big\{ \frac{-\chi(F)}{2[\mu]\cdot [F]}\Big\}
\]
where the infimum is taken over all rational Seifert surfaces $F$ for $K$ having no sphere components.
\end{definition}
Ni \cite{coversalex} has proven that the knot Floer homology of a link in a rational homology sphere detects the Thurston norm of the link complement.
This result specializes to the following theorem for knots, generalizing the analogous theorem in the $S^3$ setting due to Ozsv\'{a}th and Szab\'{o} \cite{genusdetection}.
\begin{theorem}
\label{thm:genusdetection}
Let $K$ be a knot in a rational homology sphere $Y$.
Let $A_{max}$ denote the maximal Alexander grading among all non-zero classes in $\widehat{HFK}(Y,K)$. Then
\[
||K|| = A_{max}- \frac{1}{2}.
\]
\end{theorem}
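\begin{remark}
As a consistency check, suppose that $K$ is null-homologous, so that $[\mu]\cdot[F] = 1$ for any Seifert surface $F$. If $F$ is a genus minimizing Seifert surface of genus $g\geq 1$, then $-\chi(F) = 2g-1$, so
\[
||K|| = g - \frac{1}{2},
\]
and Theorem \ref{thm:genusdetection} reads $A_{max} = g$, recovering the genus detection theorem of \cite{genusdetection}.
\end{remark}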
The following notion was introduced in \cite{QOB} and is very useful for studying the relative Alexander grading.
\begin{definition}
Let $L_1\cup \dots\cup L_l = L \subset Y$ be an $l$ component link, and let
\[
(\Sigma, \boldsymbol{\alpha},\boldsymbol{\beta},\bold{z}_{1}\cup\dots\cup\bold{z}_{l},\bold{w}_{1}\cup\dots\cup\bold{w}_{l})
\]
be a Heegaard diagram for $(Y,L)$ where the basepoints $\bold{z}_{i}$ and $\bold{w}_{i}$ encode the link component $L_i$. Suppose that $[L_i]$ has order $r$ in $H_1(Y)$. Let $\lambda_i \subset \Sigma$ be a longitude for $L_i$ constructed as above. Let $D_1,\dots,D_k$ denote the closures of the components of $\Sigma \smallsetminus (\lambda_i\cup\boldsymbol{\alpha}\cup\boldsymbol{\beta})$. A \emph{relative periodic domain} is a 2-chain $\mathcal{P} = \sum_{j=1}^{k} a_j D_j$, whose boundary satisfies
\[
\partial \mathcal{P} = r\lambda _i +\sum n_j \alpha _j + \sum m_j \beta_j.
\]
\end{definition}
A relative periodic domain $\mathcal{P}$ naturally corresponds to a homology class in $H_2 (Y\smallsetminus \nu (L_i),\partial (Y\smallsetminus \nu (L_i)))$.
\begin{lemma} (see Lemma 2.3 of \cite{QOB})
\label{lemma:relperiodic}
Let $L_i\subset L$ be as in the definition above. Let $\mathcal{P}$ be a relative periodic domain whose homology class agrees with that of some rational Seifert surface $F$ for $L_i$. For $\bold{x},\bold{y}\in \mathbb{T}_{\boldsymbol{\alpha}}\cap \mathbb{T}_{\boldsymbol{\beta}}$, we have
\[
A_{L_i}(\bold{x})-A_{L_i}(\bold{y}) = \frac{1}{r} (n_\bold{x} (\mathcal{P})-n_\bold{y}(\mathcal{P}))
\]
where the Alexander grading above is defined using the surface $F$.
\end{lemma}
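When $L_i$ is null-homologous, so that $r=1$ and rational Seifert surfaces are honest Seifert surfaces, Lemma \ref{lemma:relperiodic} specializes to the familiar formula
\[
A_{L_i}(\bold{x})-A_{L_i}(\bold{y}) = n_\bold{x} (\mathcal{P})-n_\bold{y}(\mathcal{P}),
\]
where $\mathcal{P}$ is a relative periodic domain representing the class of a Seifert surface for $L_i$.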
Ni has shown that the relative Alexander grading behaves nicely under covers.
Let $\mathcal{H} = (\Sigma,\boldsymbol{\alpha},\boldsymbol{\beta},\bold{z},\bold{w})$ be a Heegaard diagram for $L_1\cup \dots\cup L_l = L \subset Y$. If the universal cover of $Y$ is $S^3$ (as is the case for lens spaces), the covering map $\pi: S^3\to Y$ has some finite degree $p$. We may take a $p$-fold cover of $\mathcal{H}$ to get a diagram $\widetilde{\mathcal{H}} = (\widetilde{\Sigma},\widetilde{\boldsymbol{\alpha}},\widetilde{\boldsymbol{\beta}},\widetilde{\bold{z}},\widetilde{\bold{w}})$ for $(S^3,\widetilde{L})$, where $\widetilde{L} = \pi ^{-1} (L)$. We let $\widetilde{L}_i = \pi^{-1}(L_i)$, and note that $\widetilde{L}_i$ is a link having $p/r_i$ components, where $r_i$ is again the order of $[L_i]$ in $H_1 (Y;\mathbb{Z})$.
A generator $\bold{x}\in \mathbb{T}_{\boldsymbol{\alpha}}\cap \mathbb{T}_{\boldsymbol{\beta}}$ lifts to a generator $\widetilde{\bold{x}}\in \mathbb{T}_{\widetilde{\boldsymbol{\alpha}}}\cap \mathbb{T}_{\widetilde{\boldsymbol{\beta}}}$.
\begin{lemma}(see Lemma 4.2 of \cite{coversalex})
For $\bold{x},\bold{y}\in \mathbb{T}_{\boldsymbol{\alpha}}\cap \mathbb{T}_{\boldsymbol{\beta}}$,
\[
A_{L_i}(\bold{x})-A_{L_i}(\bold{y}) = \frac{1}{p} (A_{\widetilde{L_i}}(\widetilde{\bold{x}}) - A_{\widetilde{L_i}}(\widetilde{\bold{y}}))
\]
\end{lemma}
We will need to understand the behavior of the absolute grading under covers.
\begin{lemma}
\label{lem:Acovers}
\[
A_{L_i}(\bold{x})=\frac{1}{p} A_{\widetilde{L_i}}(\widetilde{\bold{x}}) +\frac{1}{2}\Big(1-\frac{1}{r_i}\Big)
\]
where $r_i$ denotes the order of $[L_i]$ in $H_1(Y;\mathbb{Z})$.
\end{lemma}
\begin{proof}
Suppose that $F$ is a rational Seifert surface for the link $L$. We may use $\widetilde{F} = \pi ^{-1} (F)$ to compute the Alexander grading with respect to $\widetilde{L}$, even though it may not be a Seifert surface.
By construction of the relative $Spin^C$-structures it is clear that
if $v$ is a vector field representing $\mathfrak{s}_{z,w} (\bold{x})$, then we may pull back $v$ to a vector field $\pi ^* v$ on $S^3 \smallsetminus \widetilde{L}$ representing $\mathfrak{s}_{\widetilde{z},\widetilde{w}} (\widetilde{\bold{x}})$.
It follows that $\pi ^* (c_1(\mathfrak{s}_{z,w} (\bold{x}))) = c_1 (\mathfrak{s}_{\widetilde{z},\widetilde{w}} (\widetilde{\bold{x}}))$ and
\begin{align*}
<c_1 (\mathfrak{s}_{\widetilde{z},\widetilde{w}} (\widetilde{\bold{x}})), [\widetilde{F}]> &= <\pi ^* (c_1(\mathfrak{s}_{z,w} (\bold{x}))), [\pi ^{-1}(F)]>\\ &= p <c_1(\mathfrak{s}_{z,w} (\bold{x})),[F]>.
\end{align*}
Let $K_1,\dots, K_{p/r_i}$ be the components of $\widetilde{L}_i$. Let $m_j$ denote a meridian for $K_j$, and let $\widetilde{n}_j$ denote the number of basepoint pairs encoding $K_j$. Note that
\begin{align*}
p[\mu_i]\cdot[F] = [\pi^{-1}(\mu_i)]\cdot[\widetilde{F}] = r_i[m_1]\cdot[\widetilde{F}].
\end{align*}
We can now compute:
\begin{align*}
A_{\widetilde{L_i}}(\widetilde{\bold{x}}) &= \sum_{j=1}^{p/r_i} A_{K_j}(\widetilde{\bold{x}}) \\
&= \sum_{j=1}^{p/r_i} \Big(\frac{<c_1 (\mathfrak{s}_{\widetilde{z},\widetilde{w}} (\widetilde{\bold{x}})),[\widetilde{F}]>}{2[m_j]\cdot[\widetilde{F}]} - \Big(\widetilde{n}_j - \frac{1}{2}\Big)\Big)\\
&= \sum_{j=1}^{p/r_i} \Big(\frac{p <c_1(\mathfrak{s}_{z,w} (\bold{x})),[F]>}{2[m_j]\cdot[\widetilde{F}]}\Big) - \Big(n_ip -\frac{p}{2r_i}\Big)\\
&= p\Big( \frac{p}{r_i} \frac{<c_1(\mathfrak{s}_{z,w} (\bold{x})),[F]>}{2[m_1]\cdot[\widetilde{F}]} - \Big(n_i - \frac{1}{2r_i}\Big)\Big)\\
&= p\Big(\frac{<c_1(\mathfrak{s}_{z,w} (\bold{x})),[F]>}{2[\mu_i]\cdot[F]} - \Big(n_i - \frac{1}{2r_i}\Big)\Big)\\
&= p\Big(\frac{<c_1(\mathfrak{s}_{z,w} (\bold{x})),[F]>}{2[\mu_i]\cdot[F]} - \Big(n_i - \frac{1}{2}\Big) - \frac{1}{2}+\frac{1}{2r_i}\Big)\\
&= p \Big(A_{L_i} (\bold{x}) - \frac{1}{2}\Big(1-\frac{1}{r_i}\Big)\Big).
\end{align*}
\end{proof}
\subsection{Knot Floer complexes and stabilizations}
The differential $\partial ^- : CFK^- (\mathcal{H})\to CFK^- (\mathcal{H})$ is defined as follows on generators
\[
\partial ^- (\bold{x}) := \sum\limits_{\bold{y}\in \mathbb{T}_{\boldsymbol{\alpha}}\cap \mathbb{T}_{\boldsymbol{\beta}}} \sum_{\substack{\phi\in\pi_2 (\bold{x},\bold{y})\\ \mu(\phi)=1\\ n_z(\phi) = 0\ \ \forall z\in\bold{z}}} \# \widehat{\mathcal{M}}(\phi) \cdot \prod\limits_{w\in \bold{w}\cup \bold{w}_F} U_w ^{n_w (\phi)}\cdot \bold{y},
\]
and extends linearly to the entire complex. We define the minus version of knot Floer homology to be
\[
HFK^- (Y,L):= HFK^- (\mathcal{H}) = H_* ( CFK^- (\mathcal{H}),\partial^-).
\]
If $w$ and $w'$ are in the same $\bold{w}_{L_i}$ for some $i$, the formal variables $U_w$ and $U_{w'}$ act identically on $HFK^- (Y,L)$. The formal variables corresponding to the free basepoints likewise all act identically on $HFK^- (Y,L)$. Letting $U_i$ denote the action of $U_w$ for $w\in \bold{w}_{L_i}$, and letting $w_f \in \bold{w}_F$ be some free basepoint, one can show that $HFK^- (Y,L)$ is an invariant of $L\subset Y$, well defined up to graded $\mathbb{F}[U_1,\dots,U_l,U_{w_f}]$-module isomorphism.
The complex $\widehat{CFK}(\mathcal{H})$ is obtained by setting $U_w=0$ for exactly one $w$ in each $\bold{w}_{L_i}$. We let $\widehat{\partial}$ denote the induced differential on $\widehat{CFK}(\mathcal{H})$, and let
\[
p:CFK^- (\mathcal{H})\to \widehat{CFK}(\mathcal{H})
\]
denote the natural projection. The hat version of knot Floer homology,
\[
\widehat{HFK}(Y,L):=\widehat{HFK}(\mathcal{H}):=H_* (\widehat{CFK}(\mathcal {H}),\widehat{\partial}),
\]
is an invariant of $(Y,L)$ up to graded $\mathbb{F}$-module isomorphism.
Setting $U_w = 0$ for all $w\in \bold{w}_F$, one obtains another chain complex, $CFK^{-,\bold{w}_F} (\mathcal{H})$. This complex plays a key role in our reformulation of the transverse invariant in subsequent sections. The homology
\[
HFK^{-,n} (Y,L):= HFK^{-,n} (\mathcal{H}):= H_* (CFK^{-,\bold{w}_F} (\mathcal{H}),\partial ^-)
\]
is an invariant of $(Y,L)$ and the number, $n$, of free basepoints up to graded $\mathbb{F}[U_1,\dots ,U_l]$-module isomorphism.
Any pair of multi-pointed Heegaard diagrams for $(Y,L)$ is related by a sequence of Heegaard moves in the complement of all basepoints. The moves are isotopy, handleslide, index 1/2 (de)stabilization, linked index 0/3 (de)stabilization, and free index 0/3 (de)stabilization. A pair of such diagrams with equal number of free basepoints may be related by a sequence of Heegaard moves not including free 0/3 (de)stabilization.
Isotopies and handleslides induce chain maps, via pseudo-holomorphic triangle counts, which induce isomorphisms on homology. Index 1/2 (de)stabilization induces an isomorphism of chain complexes. We describe the maps associated to linked and free 0/3 (de)stabilizations and their relationship to certain basepoint actions on the complex.
Suppose that $D$ is a region of $\Sigma \smallsetminus \boldsymbol{\beta}$ containing some $z\in \bold{z}$ and $w\in\bold{w}$. Performing a linked index 0/3 stabilization consists of adding basepoints $z'$ to $\bold{z}$ and $w'$ to $\bold{w}$, and curves $\alpha '$ to $\boldsymbol{\alpha}$ and $\beta '$ to $\boldsymbol{\beta}$, as depicted in Figure \ref{fig:linked03}. The two intersections of $\alpha ' $ and $\beta '$, denoted $x'$ and $y'$, must be in the region of $\Sigma \smallsetminus \boldsymbol{\alpha} \smallsetminus \boldsymbol{\beta}$ which contains $z$.
\begin{figure}[h]
\def400pt{300pt}
\input{linked03.pdf_tex}
\caption{Before and after a linked 0/3 stabilization.}
\label{fig:linked03}
\end{figure}
Let $CFK^{-,n}(\mathcal{H}')_2$ be the subcomplex of $CFK^{-,n}(\mathcal{H}')$ generated by elements of the form $\bold{x}\cup\{y'\}$, and let $CFK^{-,n}(\mathcal{H}')_1$ denote the quotient complex generated by elements of the form $\bold{x}\cup\{x'\}$, where $\bold{x}\in \mathbb{T}_{\boldsymbol{\alpha}}\cap \mathbb{T}_{\boldsymbol{\beta}}$. Define $f:CFK^{-,n}(\mathcal{H}')_1\to CFK^{-,n}(\mathcal{H}')_2$ by
\[
f(\bold{x}\cup \{x'\}) = (U_w + U_{w'})(\bold{x}\cup \{y'\}).
\]
The complex $CFK^{-,n} (\mathcal{H}')$ is isomorphic to the mapping cone of $f$, and it follows that the map from $CFK^{-,n}(\mathcal{H})$ to $CFK^{-,n}(\mathcal{H})$ defined on generators by sending $\bold{x}$ to $\bold{x}\cup \{y'\}$ induces an isomorphism on homology. Linked index 0/3 destabilization induces the inverse of this isomorphism.
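One way to organize this argument (a sketch, suppressing the identification of the differentials on the two factors with the differential of $CFK^{-,n}(\mathcal{H})$) is as follows: writing $C = CFK^{-,n}(\mathcal{H})$, the description above says
\[
CFK^{-,n}(\mathcal{H}') \simeq \mathrm{Cone}\Big(C[U_{w'}] \xrightarrow{\ U_w + U_{w'}\ } C[U_{w'}]\Big).
\]
Multiplication by $U_w + U_{w'}$ is injective, so the homology of the cone is identified with the homology of the cokernel $C[U_{w'}]/(U_w + U_{w'}) \simeq C$, the last isomorphism sending $U_{w'}$ to $U_w$.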
Given any $z'\in \bold{z}$ we define a chain map $\Psi_{z'}: CFK^{-,n}(\mathcal{H})\to CFK^{-,n}(\mathcal{H})$ by counting holomorphic disks which pass exactly once through $z'$.
On generators the map is defined as follows:
\[
\Psi_{z'}(\bold{x}) := \sum\limits_{\bold{y}\in \mathbb{T}_{\boldsymbol{\alpha}}\cap \mathbb{T}_{\boldsymbol{\beta}}} \sum_{\substack{\phi\in\pi_2 (\bold{x},\bold{y})\\ \mu(\phi)=1\\ n_{z'}(\phi) =1\\ n_z(\phi) = 0\ \ \forall z\in\bold{z} \smallsetminus \{z'\}}} \# \widehat{\mathcal{M}}(\phi) \cdot \prod\limits_{w\in \bold{w}\cup \bold{w}_F} U_w ^{n_w (\phi)}\cdot \bold{y}.
\]
Studying degenerations of holomorphic disks shows that $\Psi_{z'}$ is a chain map; let $\psi_{z'}$ denote the induced map on homology. Further degeneration arguments involving disks show that $\psi_{z'}^2 = 0$ and that if $z'\ne z \in \bold{z}$ then $\psi_{z'}\psi_{z} = \psi_{z}\psi_{z'}$. Standard degeneration arguments involving triangles show that $\psi_{z'}$ commutes with the isomorphisms associated to isotopies and handleslides. The map $\psi_{z'}$ also commutes with the maps associated to free and linked 0/3 (de)stabilizations, so long as $z'$ is not the basepoint being added (or removed).
Note that for the diagram $\mathcal{H}'$, obtained from $\mathcal{H}$ by linked 0/3 stabilization, we have that $CFK^{-,n} (\mathcal{H}')_1 = ker\ \Psi_{z'}$ and $CFK^{-,n} (\mathcal{H}')_2 = coker\ \Psi_{z'}$. Thus the summand $\bigcap_{z\in \bold{z}} coker(\psi_{z})$ is preserved by the isomorphism induced by any Heegaard move.
Free index 0/3 stabilization consists of adding a free basepoint $w'$ to $\bold{w}_F$, one curve $\alpha '$ to $\boldsymbol{\alpha}$ and one curve $\beta '$ to $\boldsymbol{\beta}$, in a region of $\Sigma\smallsetminus\boldsymbol{\alpha}\smallsetminus\boldsymbol{\beta}$ containing a point of $\bold{z}$, as depicted in Figure \ref{fig:free03}, to obtain a new diagram $\mathcal{H}'$. We say that $\alpha '$ and $\beta '$ form a \emph{small configuration} about $w'$.
\begin{figure}[h]
\def400pt{200pt}
\input{free03.pdf_tex}
\caption{Before and after a free 0/3 stabilization.}
\label{fig:free03}
\end{figure}
Let $CFK^{-,n+1}(\mathcal{H}')_1$ and $CFK^{-,n+1}(\mathcal{H}')_2$ be the subcomplexes generated by elements of the form $\bold{x}\cup \{x'\}$ and $\bold{x}\cup \{y'\}$, respectively, for $\bold{x}\in \mathbb{T}_{\boldsymbol{\alpha}}\cap \mathbb{T}_{\boldsymbol{\beta}}$.
The complex $CFK^{-,n+1}(\mathcal{H}')$ splits as a direct sum of complexes,
\[
CFK^{-,n+1}(\mathcal{H}') = CFK^{-,n+1}(\mathcal{H}')_1 \oplus CFK^{-,n+1}(\mathcal{H}')_2.
\]
The inclusion $i:CFK^{-,n}(\mathcal{H})\to CFK^{-,n+1}(\mathcal{H}')$, which sends $\bold{x}$ to $\bold{x}\cup \{x'\}$, is an isomorphism from $CFK^{-,n}(\mathcal{H})$ to $CFK^{-,n+1}(\mathcal{H}')_1 [1]$, where the $[1]$ indicates that the Maslov grading has been increased by 1. The projection $j$, sending generators $\bold{x}\cup \{x'\}$ to $\bold{x}$ and all others to zero, restricts to the inverse of $i$ on $CFK^{-,n+1}(\mathcal{H}')_1[1]$.
Given any $w\in \bold{w}_F$ we define a chain map $\Psi_{w}: CFK^{-,n}(\mathcal{H})\to CFK^{-,n}(\mathcal{H})$ by counting holomorphic disks which pass exactly once through $w$.
On generators the map is defined as follows:
\[
\Psi_{w}(\bold{x}) := \sum\limits_{\bold{y}\in \mathbb{T}_{\boldsymbol{\alpha}}\cap \mathbb{T}_{\boldsymbol{\beta}}} \sum_{\substack{\phi\in\pi_2 (\bold{x},\bold{y})\\ \mu(\phi)=1\\ n_{w}(\phi) =1\\ n_z(\phi) = 0\ \ \forall z\in\bold{z} }} \# \widehat{\mathcal{M}}(\phi) \cdot \prod\limits_{v\in \bold{w}} U_v ^{n_v (\phi)}\cdot \bold{y}.
\]
The map $\Psi_{w}$ is a chain map; let $\psi_{w}$ denote the induced map on homology, which we refer to as the free basepoint action associated to $w$. Standard degeneration arguments show that the basepoint actions associated to two distinct free basepoints commute, that any free basepoint action squares to zero, and that $\psi_w$ commutes with the maps induced by all Heegaard moves, including free 0/3 (de)stabilization so long as $w$ is not the free basepoint being added (or removed).
Note that for the diagram $\mathcal{H}'$, obtained from $\mathcal{H}$ by free 0/3 stabilization, we have that $CFK^{-,n+1} (\mathcal{H}')_1 = coker\ \Psi_{w'}$ and $CFK^{-,n+1} (\mathcal{H}')_2 = ker\ \Psi_{w'}$. Thus the direct sum decomposition above gives rise to the splitting on homology,
\[
HFK^{-,n+1} (\mathcal{H}') = coker\ \psi_{w'} \oplus ker\ \psi_{w'}.
\]
The inclusion $i$ induces, and the projection $j$ restricts to, isomorphisms which are inverses of each other:
\[
\begin{split}
i_*:HFK^{-,n}(\mathcal{H})\to coker\ \psi_{w'}[1]&\\
j_*: coker\ \psi_{w'}[1]\to HFK^{-,n}(\mathcal{H}).
\end{split}
\]
Suppose now that $\mathcal{H}'$ is obtained from $\mathcal{H}$ by $k$ free index 0/3 stabilizations. Let $w_1,\dots,w_k$ denote the free basepoints which are added in the stabilizations. Let $i^k$ and $j^k$ denote the obvious compositions of inclusion and projection maps
\[
\begin{split}
i^k:CFK^{-,n}(\mathcal{H})\to CFK^{-,n+k}(\mathcal{H}')&\\
j^k:CFK^{-,n+k}(\mathcal{H}')\to CFK^{-,n}(\mathcal{H}).
\end{split}
\]
On homology, the composition $i^k$ induces, and $j^k$ restricts to, isomorphisms which are inverses of each other:
\[
\begin{split}
i^k_*:HFK^{-,n}(\mathcal{H})\to \Big( \bigcap_{i=1}^k coker\ \psi_{w_i}\Big)[k]&\\
j^k_*: \Big( \bigcap_{i=1}^k coker\ \psi_{w_i}\Big)[k]\to HFK^{-,n}(\mathcal{H}).
\end{split}
\]
\begin{comment}
\subsection{Rational $\tau$ invariants}
\label{subsec:tau}
Let $\mathcal{H}'=(\Sigma, \boldsymbol{\alpha},\boldsymbol{\beta},\bold{z},\bold{w})$ be a Heegaard diagram encoding a knot $K$ in a rational homology 3-sphere $Y$. Suppose $\bold{x},\bold{y}\in \mathbb{T}_{\boldsymbol{\alpha}}\cap\mathbb{T}_{\boldsymbol{\beta}}$ are generators such that $\mathfrak{s}_{\bold{z}} (\bold{x}) = \mathfrak{s}_{\bold{w}} (\bold{y}) = \mathfrak{s} \in Spin^C (Y)$. Then we have that
\[
\mathfrak{s}_{\bold{z},\bold{w}} (\bold{x}),\mathfrak{s}_{\bold{z},\bold{w}}(\bold{y}) \in G_{Y,K}^{-1} (\mathfrak{s})
\]
and the difference $\mathfrak{s}_{\bold{z},\bold{w}} (\bold{x}) - \mathfrak{s}_{\bold{z},\bold{w}}(\bold{y})$ can be identified with an element of $ker(H^2(Y,K)\to H^2 (Y))$. For some integer $m$ we can write $\mathfrak{s}_{\bold{z},\bold{w}} (\bold{x}) - \mathfrak{s}_{\bold{z},\bold{w}}(\bold{y}) = m PD([\mu])$.
Noting that $c_1 ( m PD([\mu])) = 2m PD([\mu])$, we compute
\begin{align*}
A_K(\bold{x})-A_K(\bold{y}) = \frac{<2mPD([\mu]),[F]>}{2[\mu]\cdot [F]} = m.
\end{align*}
We see that within each summand $CFK^- (\mathcal{H}',\mathfrak{s})$ the relative Alexander grading is integer valued.
$\mathcal{H} = (\Sigma, \boldsymbol{\alpha},\boldsymbol{\beta},\bold{z})$ is a diagram encoding $Y$. $K$ induces a filtration on the associated Floer chain complex,
\[
\emptyset = \mathcal{F}_i ^{K}(\mathcal{H}) \subset \mathcal{F}_{i+1}^K (\mathcal{H})\subset \dots \subset F_j ^K (\mathcal{H}) = \widehat{CF}(\mathcal{H})
\]
which restricts to filtrations on the summands associated to each $\mathfrak{s}\in Spin^C(Y)$
\[
\emptyset = \mathcal{F}_i ^{K}(\mathcal{H},\mathfrak{s}) \subset \mathcal{F}_{i+1}^K (\mathcal{H},\mathfrak{s})\subset \dots \subset F_j ^K (\mathcal{H},\mathfrak{s}) = \widehat{CF}(\mathcal{H},\mathfrak{s}).
\]
\end{comment}
\subsection{Combinatorial knot Floer homology}
\label{subsec:combmaps}
Grid diagrams have been used to give a combinatorial definition of link Floer homology, first for links in $S^3$ \cite{MOST}, and subsequently for links in lens spaces \cite{lensgridcomb}. Also see the text \cite{gridhom} for a comprehensive treatment of grid homology for links in $S^3$.
Given a grid diagram $\mathcal{G} = (T^2,\boldsymbol{\alpha},\boldsymbol{\beta},\bold{z},\bold{w})$, consider the diagram $G = (T^2, \boldsymbol{\beta},\boldsymbol{\alpha},\bold{w},\bold{z})$ for $(-L(p,q),K)$. In this section we refer to $G$ as a grid diagram for $K$.
If $G$ has index $n$, the generators of $CFK^-(G)$ can be identified with $S_n \times (\mathbb{Z}/p\mathbb{Z})^n$, where $S_n$ denotes the symmetric group on $n$ elements. The differential on $CFK^-(G)$ is defined by counting certain pseudo-holomorphic disks (cf. Subsection \ref{subsec:HFK}); for a grid diagram all of the appropriate disks contributing to the differential have domains which are rectangles. Computing the homology $HFK^-(G)$ is therefore a combinatorial task; this is the basic idea behind grid homology.
\begin{definition}
Fix $\bold{x},\bold{y} \in \mathbb{T}_{\boldsymbol{\beta}}\cap\mathbb{T}_{\boldsymbol{\alpha}}$. A \emph{rectangle} from $\bold{x}$ to $\bold{y}$ is an embedded disk $r\subset T^2$ whose boundary consists of four arcs, each of which lies along some $\boldsymbol{\beta}$ or $\boldsymbol{\alpha}$ curve, satisfying the conditions:
\begin{itemize}
\item Four corners of $r$ are in $\bold{x}\cup\bold{y}$. Moreover $\bold{x}$ and $\bold{y}$ agree away from these four corners.
\item The portion of $\partial r$ along the $\boldsymbol{\alpha}$ curves is an oriented path from $\bold{y}$ to $\bold{x}$.
\end{itemize}
\end{definition}
The set of rectangles from $\bold{x}$ to $\bold{y}$ is denoted $Rect(\bold{x},\bold{y})$, and is either empty or consists of two rectangles. A rectangle $r\in Rect(\bold{x},\bold{y})$ is called \emph{empty} if its interior is disjoint from $\bold{x}$ and $\bold{y}$. The space of empty rectangles from $\bold{x}$ to $\bold{y}$ is denoted $Rect^\circ (\bold{x},\bold{y})$.
The differential on $CFK^-(G)$ can be expressed as
\[
\partial ^- (\bold{x}) := \sum\limits_{\bold{y}\in \mathbb{T}_{\boldsymbol{\beta}}\cap \mathbb{T}_{\boldsymbol{\alpha}}} \sum_{\substack{r\in Rect^\circ (\bold{x},\bold{y})\\ r\cap \bold{w} = \emptyset}} U_0 ^{z_0 (r)} U_1^{z_1(r)}\dots U_{n-1}^{z_{n-1}(r)} \cdot \bold{y},
\]
where $z_i(r)$ denotes the intersection number of $z_i$ with $r$.
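The count above is finite and purely combinatorial, so it can be implemented directly. The following Python sketch (an illustration using our own naming conventions, not code from \cite{MOST} or \cite{lensgridcomb}) enumerates the rectangles connecting two generators of an ordinary $n\times n$ grid in $S^3$, i.e.\ the case $p=1$, with a generator recorded as a permutation sending columns to rows, and tests them for emptiness.

```python
def strictly_between(a, b, t, n):
    # True if t lies strictly inside the cyclic interval running from a to b.
    return 0 < (t - a) % n < (b - a) % n

def rectangles(x, y, n):
    """Rectangles connecting generators x and y on an n x n toroidal grid.

    Generators are dicts {column: row}.  Rect(x, y) is empty unless x and y
    differ in exactly two columns, in which case it has two elements; each
    rectangle is recorded by its ordered pair of corner columns (a, b),
    spanning the cyclic column interval a -> b and the cyclic row interval
    x[a] -> x[b].
    """
    diff = [c for c in range(n) if x[c] != y[c]]
    if len(diff) != 2:
        return []
    c1, c2 = diff
    return [(c1, c2), (c2, c1)]

def is_empty(rect, x, n):
    # A rectangle is empty if no component of x (equivalently, of y) lies in
    # its interior; such a component sits in a column strictly between the
    # corner columns and a row strictly between the corner rows.
    a, b = rect
    return all(not strictly_between(x[a], x[b], x[c], n)
               for c in range(n) if strictly_between(a, b, c, n))
```

For instance, with $n=3$, $\bold{x}$ the identity permutation, and $\bold{y}$ obtained by transposing columns $0$ and $1$, exactly one of the two rectangles in $Rect(\bold{x},\bold{y})$ is empty: the other wraps around the torus and contains the third component of $\bold{x}$ in its interior.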
As explained in Subsection \ref{subsec:contactgrid}, a grid diagram naturally gives rise to Legendrian and transverse representatives of the link. In Section \ref{sec:invariants} we will use grid diagrams to define invariants of Legendrian and transverse links in universally tight lens spaces, naturally extending the invariants defined in \cite{grid}.
\begin{comment}
Suppose that $G = (T^2, \boldsymbol{\beta},\boldsymbol{\alpha},\bold{w}_1\cup\bold{w}_2\dots\cup\bold{w}_l,\bold{z}_1\cup\bold{z}_2\dots\cup\bold{z}_l)$ encodes $(-L(p,q),K)$, where $K=K_1\cup K_2\dots\cup K_l$ and the basepoints $\bold{w}_i$ and $\bold{z}_i$ correspond to $K_i$.
If $\bold{x}\in\mathbb{T}_{\boldsymbol{\beta}}\cap\mathbb{T}_{\boldsymbol{\alpha}}$, then we have the following
\begin{lemma}
\label{lem:rationalalexandergrading}
The $i^{th}$ Alexander grading of a generator $\bold{x}$ is given by
\[
A_{K_i} (\bold{x})= \frac{1}{2}(M_\bold{z} (\bold{x}) - M_\bold{w} (\bold{x}) - (n_i - \frac{1}{r_i}))
\]
where $M_\bold{z}(\bold{x})$ and $M_\bold{w}(\bold{x})$ denote the Maslov gradings of $\bold{x}$ in the complex $CFK^- (T^2,\boldsymbol{\beta},\boldsymbol{\alpha},\bold{z})$ and $CFK^- (T^2,\boldsymbol{\beta},\boldsymbol{\alpha},\bold{w})$, respectively, $n_i$ is the number of $\bold{w}_i$ basepoints, and $r_i$ is the order of $[K_i]$ in $H_1(L(p,q);\mathbb{Z})$.
\end{lemma}
\begin{proof}
We take a $p$-fold cover of $G$ to get a grid diagram $\tilde{G}=(T^2,\tilde{\boldsymbol{\beta}},\tilde{\boldsymbol{\alpha}},\tilde{\bold{w}},\tilde{\bold{z}})$ for $\tilde{K}=\tilde{K_1}\cup\dots\cup\tilde{K_l}$ in $-S^3$. The generator $\bold{x}$ lifts to a generator $\tilde{\bold{x}}\in\mathbb{T}_{\tilde{\boldsymbol{\beta}}}\cap\mathbb{T}_{\tilde{\boldsymbol{\alpha}}}$. The relative rational Alexander grading behaves predictably under covers \cite{coversalex}; this fixes the Alexander grading up to a constant term $C$, independent of the generator $\bold{x}$,
\[
A_{K_i}(\bold{x})=\frac{1}{p}(A_{\tilde{K_i}}(\tilde{\bold{x}})) +C.
\]
Furthermore by \cite{MOST} we have that
\[
A_{\tilde{K_i}}(\tilde{\bold{x}}) = \frac{1}{2}(M_{\tilde{\bold{z}}} (\tilde{\bold{x}})-M_{\tilde{\bold{w}}} (\tilde{\bold{x}}) - (n_ip - \frac{p}{r_i})),
\]
because $\tilde{K}_i$ is encoded with $n_ip$ basepoint pairs and has $\frac{p}{r_i}$ components.
The Maslov grading also behaves in the same fashion under covers \cite{coversmaslov}, we have that
\[
M_\bold{z} (\bold{x}) - M_\bold{w} (\bold{x}) = \frac{1}{p} (M_{\tilde{\bold{z}}} (\tilde{\bold{x}})-M_{\tilde{\bold{w}}} (\tilde{\bold{x}})).
\]
Combining these equations gives the result up to a constant $C$, which is independent of the generator $\bold{x}$
\[
A_{K_i} (\bold{x})= \frac{1}{2}(M_\bold{z} (\bold{x}) - M_\bold{w} (\bold{x}) - (n_i - \frac{1}{r_i}))+C.
\]
Lemma \ref{lem:reversal} forces $C=0$.
\end{proof}
\end{comment}
In \cite{MOST}, not only is a combinatorial method of computing $HFK^{-}(S^3,K)$ given, but a combinatorial proof of invariance is also presented. There are quasi-isomorphisms associated to commutations and the various stabilizations. These quasi-isomorphisms admit natural extensions to grid homology for links in lens spaces. We will later use properties of these extensions to prove invariance of the GRID invariants defined in Section \ref{sec:invariants}.
We now turn to the definition of the chain map for a column commutation; the case of a row commutation is similar. Suppose that $G'$ is obtained by commuting two adjacent columns of $G$. It is useful to draw both $G$ and $G'$ on $T^2$ simultaneously. Note that replacing $\beta _i \in \boldsymbol{\beta}$ by the curve $\gamma_i$ depicted in Figure \ref{fig:COMMcomb} gives the diagram $G'$. The curve $\gamma_{i}$ intersects $\beta_i$ in two points. Let $\theta\in\gamma_i\cap\beta_i$ denote the point at the top of the bigon in $T^{2}\smallsetminus (\gamma_i\cup\beta_i)$ whose left boundary consists of an arc along $\beta_i$.
We set $\boldsymbol{\gamma} = (\boldsymbol{\beta}\smallsetminus \{\beta_i\}) \cup \{\gamma_i\}$, so that $G' = (T^2, \boldsymbol{\gamma},\boldsymbol{\alpha},\bold{w},\bold{z})$.
\begin{figure}[h]
\def400pt{200pt}
\input{COMMcomb.pdf_tex}
\caption{The solid and hollow dots depict $\bold{w}$ and $\bold{z}$ basepoints, respectively. The brown square depicts $\theta$. The green curve is $\gamma_i$. The domain of a pentagon is shaded.}
\label{fig:COMMcomb}
\end{figure}
\begin{definition}
Fix $\bold{x}\in \mathbb{T}_{\boldsymbol{\beta}}\cap\mathbb{T}_{\boldsymbol{\alpha}}$ and $\bold{y}\in \mathbb{T}_{\boldsymbol{\gamma}}\cap\mathbb{T}_{\boldsymbol{\alpha}}$. A \emph{pentagon} from $\bold{x}$ to $\bold{y}$ is an embedded disk $p\subset T^2$ whose boundary consists of five arcs, each of which lies along some $\boldsymbol{\beta}$, $\boldsymbol{\gamma}$, or $\boldsymbol{\alpha}$ curve, satisfying the conditions:
\begin{itemize}
\item Four corners of $p$ are in $\bold{x}\cup\bold{y}$.
\item $p$ has multiplicity $1/4$ at each of its corners.
\item The portion of $\partial p$ along the $\boldsymbol{\alpha}$ curves is an oriented path from $\bold{y}$ to $\bold{x}$.
\end{itemize}
The set of pentagons from $\bold{x}$ to $\bold{y}$ is denoted $Pent(\bold{x},\bold{y})$.
\end{definition}
Note that $Pent(\bold{x},\bold{y})$ is empty unless $\bold{x}$ and $\bold{y}$ share $n-2$ components. Moreover the set of pentagons between two generators consists of at most one element. The fifth corner of any pentagon must be $\theta$.
A pentagon $p\in Pent(\bold{x},\bold{y})$ is said to be \emph{empty} if the interior of $p$ is disjoint from $\bold{x}$ and $\bold{y}$, the set of such pentagons is denoted $Pent^\circ (\bold{x},\bold{y})$.
We define a map $P: CFK^{-}(G)\to CFK^{-}(G')$ by
\[
P(\bold{x}) = \sum\limits_{\bold{y}\in \mathbb{T}_{\boldsymbol{\gamma}}\cap \mathbb{T}_{\boldsymbol{\alpha}}} \sum_{\substack{p\in Pent^\circ(\bold{x},\bold{y})\\ p\cap\bold{w}=\emptyset}} U_0 ^{z_0 (p)} U_1^{z_1(p)}\dots U_{n-1}^{z_{n-1}(p)} \cdot \bold{y}.
\]
The map $P$ is a chain map and induces an isomorphism on homology. The proof of these facts is a straightforward adaptation of the arguments appearing in \cite{MOST}.
We avoid defining the quasi-isomorphisms associated to destabilizations; instead, we define their restrictions to certain subcomplexes.
\begin{remark}
The chain maps, and all facts asserted about them, are discussed in detail in Chapter 5 of \cite{gridhom}. Slightly different conventions are used, and only grid diagrams for knots in $S^3$ are considered. Translating to our conventions and generality is just a matter of changing notation.
\end{remark}
Let $G'$ be obtained by performing a stabilization near a $\bold{w}$ basepoint of $G$. By renumbering the variables we think of $CFK^{-}(G)$ as an $\mathbb{F}[U_1,\dots,U_n]$-module, and $CFK^{-}(G')$ as an $\mathbb{F}[U_0,\dots,U_n]$-module. Let $CFK^{-}(G)[U_0]$ denote the bigraded complex $CFK^{-}(G)\otimes_{\mathbb{F}[U_1,\dots,U_n]} \mathbb{F}[U_0,\dots,U_n]$, where $x\otimes U_0 ^k$ has bigrading $(d-2k,s-k)$ for a homogeneous $x\in CFK^- (G)$ having bigrading $(d,s)$.
There is a natural projection
\[
\pi : H_* (CFK^- (G)[U_0]) \simeq HFK^-(G)[U_0]\to \frac{HFK^-(G)[U_0]}{U_0+U_1}\simeq HFK^- (G).
\]
Let $\beta '$ and $\alpha '$ be the pair of curves introduced in the stabilization. There is a distinguished intersection point $\eta \in\beta' \cap\alpha '$. Let $I(G')$ denote the points of $\mathbb{T}_{\boldsymbol{\beta '}}\cap \mathbb{T}_{\boldsymbol{\alpha '}}$ having $\eta$ as a component, and let $N(G')$ denote the complement of $I(G')$. Let $\bold{I}$ and $\bold{N}$ denote the submodules of $CFK^- (G')$ generated by $I(G')$ and $N(G')$ over $\mathbb{F}[U_0,\dots,U_n]$, respectively.
There is a natural one-to-one correspondence between $I(G')$ and $\mathbb{T}_{\boldsymbol{\beta}}\cap\mathbb{T}_{\boldsymbol{\alpha}}$. This extends to give an isomorphism of $\mathbb{F}[U_0,\dots,U_n]$-modules
\[
e:\bold{I}\to CFK^- (G)[U_0].
\]
If we are considering a stabilization of type W:NE or W:SW then $e$ is a bigraded map; otherwise it is homogeneous of degree $(1,1)$.
For stabilizations of type W:NW or W:SE, $\bold{N}$ is easily seen to be a subcomplex of $CFK^-(G')$. In these cases we will need a chain homotopy equivalence
\[
\mathcal{H}^I_{w_1}:\bold{N}\to\bold{I}
\]
defined by
\[
\mathcal{H}^I_{w_1}(\bold{x}) = \sum\limits_{\bold{y}\in I(G')} \sum_{\substack{r\in Rect^\circ (\bold{x},\bold{y})\\ r\cap\bold{w}=\{w_1\}}} U_0 ^{z_0 (r)} U_1^{z_1(r)}\dots U_{n}^{z_{n}(r)} \cdot \bold{y}.
\]
\begin{proposition}(See Proposition 5.4.1 of \cite{gridhom})
\label{prop:stabcomb}
If $G'$ is obtained from $G$ by a stabilization, then there is an isomorphism of bigraded $\mathbb{F}[U]$-modules from $HFK^{-}(G')$ to $HFK^{-}(G)$. In particular:
\begin{itemize}
\item
If the stabilization is of type W:NW or W:SE, the restriction of the above isomorphism to cycles coming from the subcomplex $\bold{N}$ is the map $\pi \circ (e\circ\mathcal{H}^I_{w_1})_*$.
\item
If the stabilization is of type W:NE or W:SW, then $\bold{I}$ is a subcomplex of $CFK^-(G')$. The restriction of the above isomorphism to cycles coming from the subcomplex $\bold{I}$ is given by $\pi \circ e_*$.
\end{itemize}
\end{proposition}
\section{The GRID invariants for links in lens spaces}
\label{sec:invariants}
Given a grid diagram $\mathcal{G} = (T^2,\boldsymbol{\alpha},\boldsymbol{\beta},\bold{z},\bold{w})$, consider the diagram $G = (T^2, \boldsymbol{\beta},\boldsymbol{\alpha},\bold{w},\bold{z})$ for $(-L(p,q),K)$, and the generator $\bold{x}^+ \in CFK^- (G)$ (respectively $\bold{x}^-$) having components which are in the upper left (respectively lower right) corners of regions in $T^2 \smallsetminus(\boldsymbol{\beta}\cup\boldsymbol{\alpha})$ containing points of $\bold{w}$. In this section we refer to $G$ as a grid diagram for $K$.
\begin{lemma}
\label{lem:cycle}
The generators $\bold{x}^+,\bold{x}^- \in CFK^- (G)$ are cycles.
\end{lemma}
\begin{proof}
The differential for the complex $CFK^-(G)$ counts parallelograms (Proposition 2.1 of \cite{lensgridcomb}). Suppose that $P$ is a parallelogram contributing to the differential of $\bold{x}^+$, and let $x$ denote the component of $\bold{x}^+$ in the top left corner of $P$. Then $P$ clearly contains a $\bold{w}$ basepoint, so $P$ cannot contribute. The proof for $\bold{x}^-$ is similar.
\end{proof}
For the case of a grid diagram $G$ representing a link in the three sphere, the following was proven in \cite{grid}:
\begin{theorem}(combination of Theorems 1.1 and 7.1 from \cite{grid})
\label{thm:OST}
If $G$ is a grid diagram for a link $K=K_1\cup K_2\cup\dots\cup K_l \subset S^3$, let $L$ and $T$ be the corresponding oriented Legendrian and transverse representatives of $K$.
The homology class $[\bold{x}^+]$ in $HFK^-(-S^3,L)$ is an invariant of $L$ and $T$ up to Legendrian and transverse isotopy, respectively.
The class $[\bold{x}^-]$ is also an invariant of $L$ up to Legendrian isotopy. These invariants are supported in multi-gradings
\begin{align*}
M(\bold{x}^+) &= tb(L) - rot(L) +1 = sl(T)+1\\
A_{L_i}(\bold{x}^+) &= \frac{1}{2}\Big(tb(L_i) - rot^i(L) +1\Big)= \frac{1}{2}\Big(sl^i(T) +1\Big)\\
M(\bold{x}^-) &= tb(L) + rot(L) +1\quad \quad \text {and }\quad\quad A_{L_i}(\bold{x}^-) = \frac{1}{2}\Big(tb(L_i) + rot^i(L) +1\Big).
\end{align*}
\end{theorem}
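\begin{remark}
Combining the grading formulas above, one obtains the relations $M(\bold{x}^+)-M(\bold{x}^-) = -2rot(L)$ and $A_{L_i}(\bold{x}^+)-A_{L_i}(\bold{x}^-) = -rot^i(L)$, so the two Legendrian invariants occupy the same multi-grading precisely when the relevant rotation numbers vanish.
\end{remark}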
\begin{remark}
The conventions used in \cite{grid} differ from ours; the above has been translated to our conventions. In \cite{equiv}, their version of $[\bold{x}^+]$ has been shown to agree with the BRAID invariant (also defined in \cite{equiv}). Using the same proof (in the case of $S^3$), the invariant we construct can also be shown to agree with the BRAID invariant, and in particular with the invariant defined in \cite{grid}.
\end{remark}
We establish Theorem \ref{thm:grid} in a sequence of Lemmas and Propositions.
Let $G$ be a grid diagram encoding a Legendrian link $L\subset (L(p,q),\xi_{UT})$.
We begin by showing that $[\bold{x}^+],[\bold{x}^-]\in HFK^- (G)$ are invariants of the oriented Legendrian isotopy class of $L$. In light of Theorem \ref{thm:leg} it suffices to prove invariance under the elementary Legendrian grid moves, which are commutations along with (de)stabilizations of types W:NE and W:SW.
\begin{lemma}
The classes $[\bold{x}^+]$ and $[\bold{x}^-] \in HFK^- (G)$ are invariant under commutations. In particular, if $G'$ is obtained from $G$ by a commutation move then the quasi-isomorphism
\[
P: CFK^-(G)\to CFK^-(G')
\]
sends $\bold{x}^+(G)$ and $\bold{x}^- (G)$ to $\bold{x}^+(G')$ and $\bold{x}^- (G')$, respectively.
\end{lemma}
\begin{proof}
Let $G$ and $G'$ be grid diagrams differing by a commutation. Recall that in Subsection \ref{subsec:combmaps} we defined a quasi-isomorphism
\[
P: CFK^-(G)\to CFK^-(G')
\]
which counts empty pentagons.
There is an obvious empty pentagon $p\in Pent^\circ (\bold{x}^+(G),\bold{x}^+(G'))$, see Figure \ref{fig:COMMcomb}. It is easy to see that $p$ is the unique empty pentagon connecting $\bold{x}^+(G)$ to any point of $\mathbb{T}_{\boldsymbol{\gamma}}\cap \mathbb{T}_{\boldsymbol{\alpha}}$. If $p'$ is a pentagon other than $p$ having upper right corner at a component of $\bold{x}^+(G)$, then $p'$ contains a parallelogram of $T^2\smallsetminus \{\boldsymbol{\beta}\cup\boldsymbol{\alpha}\}$
having a $\bold{w}$ basepoint, so $p'$ is not empty.
We have established that $P(\bold{x}^+(G))=\bold{x}^+(G')$. The proof that $P(\bold{x}^-(G))=\bold{x}^- (G')$ is similar; the relevant picture is obtained by rotating the diagram of Figure \ref{fig:COMMcomb} by 180 degrees.
\end{proof}
\begin{proposition}
Suppose that $G'$ is obtained from $G$ by applying a destabilization of type W:NE or W:SW. There is an isomorphism
\[
HFK^-(G')\to HFK^-(G)
\]
sending $[\bold{x}^+(G')]$ and $[\bold{x}^-(G')]$ to $[\bold{x}^+(G)]$ and $[\bold{x}^-(G)]$, respectively.
\end{proposition}
\begin{proof}
Suppose that $G'$ is obtained from $G$ by applying a destabilization of type W:NE. Note that the generator $\bold{x}^+(G')$ lies in the subcomplex $\bold{I}$ of $CFK^- (G')$. The second half of Proposition \ref{prop:stabcomb} tells us that the isomorphism
\[
HFK^-(G')\to HFK^-(G)
\]
induced by destabilization of type W:NE maps $[\bold{x}^+ (G')]$ to $\pi \circ e_* ([\bold{x}^+(G')]) = \pi ([\bold{x}^+(G)]\otimes 1)=[\bold{x}^+(G)]$. The case of a (de)stabilization of type W:SW is similar.
The proof of invariance for the class $[\bold{x}^-]$ is similar.
\end{proof}
We have shown that if a grid diagram $G$ encodes a Legendrian link $L\subset (L(p,q),\xi_{UT})$ then the classes $[\bold{x}^\pm (G)]\in HFK^- (G)$ are invariants of the oriented Legendrian link $L$ up to Legendrian isotopy; we denote them by $\lambda^\pm (L)$.
We next establish the behavior of the invariants $\lambda^\pm (L)$ under Legendrian stabilizations.
\begin{proposition}
Let $L^-$ (respectively $L^+$) denote a negative (respectively positive) Legendrian stabilization of $L$ along some component $L_i\subset L$, in $(L(p,q),\xi_{UT})$. We have that
\begin{align*}
\lambda^+ (L^-) = \lambda^+(L) \quad \quad \quad \quad \lambda^- (L^-) = U\cdot \lambda^-(L)\\
\lambda^+ (L^+) = U\cdot \lambda^+(L) \quad \quad \quad \quad \lambda^- (L^+) = \lambda^-(L).
\end{align*}
\end{proposition}
\begin{proof}
Let $G$ be a grid diagram encoding the Legendrian link $L$. Let $G^-$ ($G^+$) be a grid diagram obtained from $G$ by performing some W:SE (W:NW) stabilization; recall that stabilizations of this type correspond to negative (positive) Legendrian stabilizations.
Recall the notation established in the discussion preceding Proposition \ref{prop:stabcomb}. Note that both $\bold{x}^\pm (G^-)$ (respectively $\bold{x}^\pm (G^+)$) lie in the subcomplex $\bold{N}$ of $CFK^-(G^-)$ (respectively $CFK^-(G^+)$).
We will show that the compositions (there are really two such maps, one from the subcomplex of $CFK^-(G^-)$ and one from the subcomplex of $CFK^-(G^+)$)
\[
e\circ \mathcal{H}^I_{w_1}:\bold{N}\to CFK^-(G)[U_0]
\]
map the distinguished generators as follows:
\begin{align*}
\bold{x}^+(G^-)\to \bold{x}^+(G)\otimes 1\quad \quad\quad \bold{x}^+(G^+)\to \bold{x}^+(G)\otimes U_0\\
\bold{x}^-(G^-)\to \bold{x}^-(G)\otimes U_0\quad \quad\quad \bold{x}^-(G^+)\to \bold{x}^-(G)\otimes 1.
\end{align*}
The first half of Proposition \ref{prop:stabcomb} will then give the desired result.
\begin{figure}[h]
\def400pt{200pt}
\input{legstabinv.pdf_tex}
\caption{The domains of rectangles contributing to $\mathcal{H}^I_{w_1}(\bold{x}^\pm(G^+))$ or $\mathcal{H}^I_{w_1}(\bold{x}^\pm(G^-))$ are shaded above. In each case, the orange dots are components of $\bold{x}^\pm (G^\pm)$.}
\label{fig:legstabinv}
\end{figure}
The rectangles illustrated in Figure \ref{fig:legstabinv} are the only ones emanating from $\bold{x}^\pm (G^\pm)$ disjoint from all points of $\bold{w}\smallsetminus w_1$, and hence the only rectangles contributing to $\mathcal{H}^I_{w_1}(\bold{x}^\pm(G^+))$ or $\mathcal{H}^I_{w_1}(\bold{x}^\pm(G^-))$. In each case it is straightforward to post-compose with the map $e$ and check that the composition behaves as desired.
\end{proof}
\begin{proposition}
\label{prop:spincagrees}
The GRID invariants have $Spin^C$ structures agreeing with those of the contact plane fields. In particular
\[
\mathfrak{s}_\bold{w}(\theta) = \mathfrak{s}_\bold{w} (\lambda^+) = \mathfrak{s}_{\xi_{UT}}\quad \quad\text{ and }\quad\quad \mathfrak{s}_\bold{w}(\lambda^-) = \mathfrak{s}_{\overline{\xi_{UT}}}.
\]
\end{proposition}
\begin{proof}
Note that moving the $\bold{z}$ basepoints has no effect on the $Spin^C$ structure $\mathfrak{s}_\bold{w}(\bold{x}^+)$. In light of the previous two propositions, stabilizations of any kind have no effect on the $Spin^C$ structure either. Using these two types of moves (stabilizations and moving $\bold{z}$ basepoints arbitrarily), we may pass from an arbitrary grid diagram $G$ to an index one diagram for the binding $B$ of the standard rational open book supporting $\xi_{UT}$.
For this index one diagram, the generator $\bold{x}^+$ is easily seen to have maximal Alexander grading (this can be checked using Lemma \ref{lemma:relperiodic}). By Theorem 1.1 of \cite{QOB}, it follows that $[\bold{x}^+] \in \widehat{HF}(T^2,\boldsymbol{\beta},\boldsymbol{\alpha},\bold{w})$ represents the contact invariant $c(\xi_{UT})$; in particular $\mathfrak{s}_\bold{w}(\bold{x}^+) = \mathfrak{s}_{\xi_{UT}}$.
Rotating the grid diagram by 180 degrees, noting that this corresponds to co-orientation reversal of the contact structures, and carrying out the same argument gives the analogous result for $\bold{x}^-$.
\end{proof}
Corollary \ref{corollary:grid} follows immediately. Recall that if $T$ is the positive transverse push-off of a Legendrian $L$, then we denote by $\theta (T) = \lambda^+ (L)$ the transverse invariant.
We turn to the computation of the bigradings of $\lambda^\pm (L)$.
Recall that $\Delta_n\in B_n$ denotes the Garside element, so that $\Delta_n ^2$ is the full twist on $n$ strands.
\begin{proposition}
\label{prop:gradingcomparison}
Let $L$ and $T$ denote Legendrian and transverse links, respectively. The invariants $\lambda^+(L), \lambda^-(L)$ and $\theta(T)$ are supported in Maslov gradings
\begin{align*}
M(\lambda^+(L)) = tb_\mathbb{Q}(L) - rot_\mathbb{Q}(L) +\frac{1}{p} -d(p,q,q-1)\\
M(\lambda^-(L)) = tb_\mathbb{Q}(L) + rot_\mathbb{Q}(L) +\frac{1}{p}-d(p,q,q-1)\\
M(\theta(T)) = sl_{\mathbb{Q}}(T) +\frac{1}{p}-d(p,q,q-1)
\end{align*}
\end{proposition}
\begin{proof}
We prove the third equality.
Let $\beta\circ \delta^{q/p}$ be an $n$-braid representing $T$, and let $\tau_1$ denote the trivial 1-braid.
Let $G$ be a grid diagram for $\beta\circ \delta^{q/p}$. We may take a $p$-fold cover of $G$ to get a grid diagram for $\beta^p\circ (\Delta^2)^q$. Likewise, we may cover a grid diagram for the braid $\tau_1\circ \delta^{q/p}$ by a grid diagram for the 1-braid $\tau_1$.
The gradings of the GRID invariants in $(S^3,\xi_{std})$ have been computed, see Theorem \ref{thm:OST}, we have
\begin{align*}
M(\theta(\beta^p\circ (\Delta^2)^q)) = sl(\beta^p\circ (\Delta^2)^q)+1=pw(\beta)+(qn-1)(n-1)\\
M(\theta(\tau_1))=sl(\tau_1)+1 = 0
\end{align*}
where $w(\beta)$ is the writhe of $\beta$.
Theorem 2.6 of \cite{coversmaslov} allows us to compute the grading difference:
\begin{align*}
M(\theta(\beta\circ \delta^{q/p}))-M(\theta(\tau_1\circ\delta^{q/p}))\\ = \frac{1}{p}(M(\theta(\beta^p\circ (\Delta^2)^q))-M(\theta(\tau_1)))\\
=w(\beta)+\frac{1}{p}(qn^2-qn-n+1)\\
= sl_\mathbb{Q}(\beta\circ\delta^{q/p})+\frac{1}{p}
\end{align*}
where the last equality uses Lemma \ref{lem:QSL}.
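As a sanity check on the arithmetic above, the division-by-$p$ step can be verified numerically. The following sketch (not part of the source; it assumes the standard formula $sl = \text{writhe} - \text{braid index}$ for transverse braid closures in $(S^3,\xi_{std})$, and that the full twist on $n$ strands contributes writhe $n(n-1)$) checks that $\frac{1}{p}\big(sl(\beta^p\circ(\Delta^2)^q)+1\big) = w(\beta)+\frac{1}{p}(qn^2-qn-n+1)$ for sample values:

```python
from fractions import Fraction

def maslov_difference(w_beta, n, p, q):
    # (1/p) * (sl(cover) + 1), where the covering braid on n strands has
    # writhe p*w(beta) + q*n*(n-1) and self-linking = writhe - n.
    sl_cover = p * w_beta + q * n * (n - 1) - n
    return Fraction(sl_cover + 1, p)

# Agrees with w(beta) + (q n^2 - q n - n + 1)/p, as in the text.
for w, n, p, q in [(3, 2, 5, 2), (0, 4, 3, 1), (-2, 3, 7, 4)]:
    assert maslov_difference(w, n, p, q) == w + Fraction(q * n**2 - q * n - n + 1, p)
```

Exact rational arithmetic via `Fraction` avoids any floating-point ambiguity in the $\frac{1}{p}$ factor.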
Note that $\tau_1\circ\delta^{q/p}$ can be encoded with an index one diagram. In this diagram, the Maslov grading $M(\theta(\tau_1\circ\delta^{q/p}))$ has been computed in \cite{dinvt} to be $-d(p,q,q-1)$ where the function $d$ is recursively defined by
\begin{align*}
d(1,0,0) =0\\
d(p,q,i) = \Big{(}\frac{pq-(2i+1-p-q)^2}{4pq}\Big{)}-d(q,r,j)
\end{align*}
where $r$ and $j$ are the reductions of $p$ and $i$ modulo $q$, respectively. The minus sign comes from our orientation conventions being opposite to that of \cite{dinvt}.
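The recursion for $d$ is straightforward to implement. The following is a minimal sketch (not from the source), using exact rational arithmetic and the sign convention fixed above; it assumes each admissible triple $(p,q,i)$ reduces to the base case $(1,0,0)$, as in \cite{dinvt}:

```python
from fractions import Fraction

def d(p, q, i):
    # Recursively computed d-invariant, in the sign convention of this paper
    # (opposite to that of the cited source).
    if (p, q, i) == (1, 0, 0):
        return Fraction(0)
    r, j = p % q, i % q  # reductions of p and i modulo q
    return Fraction(p * q - (2 * i + 1 - p - q) ** 2, 4 * p * q) - d(q, r, j)
```

For instance, $d(2,1,0)=(2-4)/8=-1/4$ and $d(2,1,1)=1/4$.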
Note that the first equality follows from the third, for if $T$ is the positive transverse push-off of a Legendrian $L\subset (L(p,q),\xi_{UT})$ then we have that $sl_{\mathbb{Q}}(T) = tb_\mathbb{Q}(L) - rot_\mathbb{Q}(L)$.
The second equality follows from the first by another application of \cite{coversmaslov} and \cite{grid}. Let $\tilde{L}$ denote the pre-image of $L$ under the contact universal cover.
\begin{align*}
M(\lambda^+(L))-M(\lambda^-(L))\\ = \frac{1}{p}\Big{(}M(\lambda^+(\tilde{L}))-M(\lambda^-(\tilde{L}))\Big{)}\\
=\frac{1}{p}\Big{(}(tb(\tilde{L})-rot(\tilde{L})+1) - (tb(\tilde{L})+rot(\tilde{L})+1)\Big{)}\\
=\frac{1}{p}(-2\,rot(\tilde{L})) = -2\,rot_\mathbb{Q}(L).
\end{align*}
\end{proof}
\begin{proposition}
\label{prop:agradingcomp}
Let $L=L_1\cup\dots\cup L_{l}$ and $T=T_1\cup\dots\cup T_{l}$ denote Legendrian and transverse links, respectively. The invariants $\lambda^+(L), \lambda^-(L)$ and $\theta(T)$ are supported in Alexander gradings
\begin{align*}
A_{L_i}(\lambda^+(L)) = \frac{1}{2}\Big{(}tb_\mathbb{Q}(L_i) - rot^i_\mathbb{Q}(L) +1\Big{)}\\
A_{L_i}(\lambda^-(L)) = \frac{1}{2}\Big{(}tb_\mathbb{Q}(L_i) + rot^i_\mathbb{Q}(L) +1\Big{)}\\
A_{T_i}(\theta(T)) = \frac{1}{2}\Big{(}sl^i_\mathbb{Q}(T) +1\Big{)}.
\end{align*}
\end{proposition}
\begin{proof}
This is a straightforward combination of Lemmas \ref{lem:cover} and \ref{lem:Acovers} with Theorem \ref{thm:OST}.
\begin{comment}
Fix $\bold{x}^+\subset \mathbb{T}_{\boldsymbol{\beta}}\cap\mathbb{T}_{\boldsymbol{\alpha}}$ to be as before. We inherit notation from the proof of Lemma \ref{lem:rationalalexandergrading}.
Applying Lemma \ref{lem:rationalalexandergrading} we get
\[
A_{L_i} (\bold{x}^+)= \frac{1}{2}(M_\bold{z} (\bold{x}^+) - M_\bold{w} (\bold{x}^+) - (n_i - \frac{1}{r_i}))
\]
where $n_i$ is the number of basepoints pairs encoding $L_i$. By \cite{coversmaslov}, this equals
\[
\frac{1}{2}\Big{(} \frac{1}{p}(M_{\tilde{\bold{z}}} (\tilde{\bold{x}}^+) - M_{\tilde{\bold{w}}} (\tilde{\bold{x}}^+)) - (n_i - \frac{1}{r_i})\Big{)}.
\]
On the other hand, the Alexander grading of $\tilde{\bold{x}}^+$ is given in Theorem \ref{thm:OST},
\[
\frac{1}{2} (tb(\tilde{L}_i) - rot^i (\tilde{L}) + \frac{p}{r_i}) = A_{\tilde{L}_i}(\tilde{\bold{x}}^+) = \frac{1}{2} (M_{\tilde{\bold{z}}} (\tilde{\bold{x}}^+) - M_{\tilde{\bold{w}}} (\tilde{\bold{x}}^+)-(n_ip - \frac{p}{r_i}))
\]
Combining these equations with Lemma \ref{lem:cover} gives the result.
The computation of $A(\lambda^- (L))$ is similar.
\end{comment}
\end{proof}
This completes the proof of Theorem \ref{thm:grid}.
\begin{proposition}
\label{prop:nontorsion}
For any Legendrian link $L$ or transverse link $T$ in $(L(p,q),\xi_{UT})$ the homology classes $\lambda^+ (L)$ and $\theta(T)$ do not vanish; moreover, the classes are non-$U$-torsion, i.e. for any $n\ge 1$ we have that $U^n \cdot \lambda^+(L)\ne 0$ and $U^n \cdot \theta (T)\ne 0$.
\end{proposition}
\begin{proof}
Let $L\subset (L(p,q),\xi_{UT})$ be an arbitrary Legendrian encoded by a grid diagram $G$. Consider the complex
\[
C' (G) = CFK^- (G)/ \{U_i = 1\}_{i=0}^{n-1}.
\]
It suffices to show that the homology class $[\bold{x}^+]$ is non-zero in $H_* (C'(G))$. Note that the $\bold{z}$ basepoints do not have any effect on the differential, so we may move them as we like. The isomorphisms associated to each (de)stabilization preserve the class $[\bold{x}^+]$.
It is easy to go from $G$ to some index one diagram $D$ using some sequence of relocations of the $\bold{z}$ basepoints and destabilizations. There is an induced isomorphism
\[
H_* (C'(G))\to H_* (C'(D))
\]
taking $[\bold{x}^+(G)]$ to $[\bold{x}^+(D)]$. Since $D$ is an index one diagram, the complex $C'(D)$ has no differential and $[\bold{x}^+(D)]$ is non-zero.
\end{proof}
\section{The BRAID invariant}
\label{sec:braidinvt}
In this section we review the definition of the BRAID invariant, defined in \cite{equiv}. The definition is reminiscent of the definition of the contact invariant given in \cite{HKM}.
Let $(B,\pi)$ be an open book supporting $(Y,\xi)$. Let $(S,\phi)$ be the abstract open book corresponding to $(B,\pi)$, and let $g$ be the genus of the fiber. If $K$ is an index $k$ braid with respect to $(B,\pi)$, then $K$ is specified by a lift $\widehat{\phi} \in MCG(S\smallsetminus \{p_1,\dots , p_k\}, \partial S)$ of $\phi$.
\begin{definition}
A \emph{basis of arcs} $\{a_i\}_1 ^{2g+k-1}\subset S\smallsetminus \{p_1,\dots, p_k\}$ is a collection of properly embedded disjoint arcs which cut $S\smallsetminus \{p_1,\dots, p_k\}$ into $k$ discs, each containing precisely one of the points $p_i$.
\end{definition}
Let $\{b_i\}_1^{2g+k-1}$ be another basis of arcs, where $b_i$ is obtained by slightly moving the endpoints of $a_i$ in the oriented direction of $\partial S$, and isotoping in $S\smallsetminus \{p_1,\dots, p_k\}$ so that $b_i$ intersects $a_i$ transversely in a single point.
Let $\Sigma$ denote the surface $S_{1/2}\cup -S_0$. For each $i$, let
\begin{align*}
\alpha_i = a_i\times \{0,1/2\} \\
\beta_i = b_i\times \{1/2\}\cup \widehat{\phi}(b_i)\times \{0\}\\
z_i = p_i \times \{0\}\\
\text{and } w_i = p_i \times \{1/2\}
\end{align*}
Let $\boldsymbol{\alpha} = \{\alpha_1,\dots,\alpha_k\}$, $\boldsymbol{\beta} = \{\beta_1,\dots, \beta_k\}$, $\bold{z} = \{z_1,\dots,z_k\}$, and $\bold{w} = \{w_1,\dots,w_k\}$. Then $\mathcal{H} = (\Sigma,\boldsymbol{\beta},\boldsymbol{\alpha},\bold{w},\bold{z})$ is a multi-pointed Heegaard diagram encoding $(-Y,K)$.
Each $\alpha_i$ intersects $\beta_i$ in a single point in the region $S_{1/2}$ denoted $x_i$. Let $\bold{x} \in \mathbb{T}_{\boldsymbol{\beta}} \cap \mathbb{T}_{\boldsymbol{\alpha}}$ denote the generator having component $x_i$ on $\alpha _i$. The homology class $[\bold{x}] \in HFK^- (-Y,K)$ is an invariant of the transverse isotopy class of $K$ (Theorem 3.1 of \cite{equiv}). The invariant is denoted $t(K)$, and we refer to it as the BRAID invariant.
\subsection{The transverse invariant of a braid and its axis}
\label{subsec:binding}
Suppose that $(B,\pi)$ is an open book decomposition, with $B$ having $n$ components, supporting $(Y,\xi)$. Let $(S,\phi)$ be the abstract open book corresponding to $(B,\pi)$, where $S$ has genus $g$. As discussed in subsection \ref{subsec:contact} the binding $B$ is naturally a transverse link that may be braided about the open book via a transverse isotopy. Abusing notation, we denote the resulting $n$-braid by $B$.
$B$ is specified by a lift $\widehat{\phi} \in MCG( S\smallsetminus \{p_1,\dots,p_n\}, \partial S)$ of $\phi$. Thinking of $\phi$ as fixing a collar neighborhood $\nu(\partial S)$ of the boundary, one obtains $\widehat{\phi}$ by composing $\phi$ with $n$ push maps supported in $\nu (\partial S)$. See Figure \ref{fig:one} for the push maps and a basis of arcs $\{a_i\}_1^{2g+n-1}\cup \{a_{2,i}\}_{2g+1}^{2g+n-1}$ for $S\smallsetminus \{p_1,\dots,p_n\}$.
\begin{figure}[h]
\def400pt{250pt}
\input{one.pdf_tex}
\caption{The basis of arcs $\{a_i\}_1^{2g+n-1}\cup \{a_{2,i}\}_{2g+1}^{2g+n-1}$ for the case $n=3$ and $g=2$ is depicted in red. The push maps, supported in the shaded neighborhood $\nu (\partial S)$, go in the orientation of $\partial S$ and are depicted in blue.}
\label{fig:one}
\end{figure}
The basis of arcs, along with $\widehat{\phi}$, specifies a Heegaard diagram $D= (\Sigma, \boldsymbol{\beta}, \boldsymbol{\alpha}, \bold{w}_B, \bold{z}_B)$ for $(-Y,B)$, along with a generator $\bold{x}_D$ shown in Figure \ref{fig:two}, as described in Section \ref{sec:braidinvt}. The homology class $[\bold{x}_D]\in HFK^-(D)$ is the braid invariant $\widehat{t}(B)$. The labelling of the basis arcs induces a labelling of the $\boldsymbol{\beta}$ and $\boldsymbol{\alpha}$ curves.
\begin{figure}[h]
\def400pt{250pt}
\input{two.pdf_tex}
\caption{A portion of the Heegaard diagram $D$ for $(-Y,B)$ in the case $g=1$ and $n=3$. The $\bold{w}_B$ and $\bold{z}_B$ basepoints are depicted with solid and hollow dots, respectively. The homology class of the generator depicted by orange dots, in $\widehat{HFK}(D)$, is equal to the transverse invariant $\widehat{t}(B)$. The indexing of the basis in Figure \ref{fig:one} induces an indexing of the $\alpha$ and $\beta$ curves in $D$.}
\label{fig:two}
\end{figure}
By applying an isotopy to $D$, we obtain the Heegaard diagram $\mathcal{D}$ pictured in Figure \ref{fig:three}.
\begin{figure}[h]
\def400pt{250pt}
\input{three.pdf_tex}
\caption{A portion of the Heegaard diagram $\mathcal{D}$ obtained from $D$ via an isotopy. The generator $\bold{x}_\mathcal{D}$, whose homology class is $\widehat{t}(B)$, is depicted by orange dots. The purple multi-curve depicts an oriented (as $\partial S$) longitude for $B$.}
\label{fig:three}
\end{figure}
Now let $K$ be a link braided about $B$ having braid index $k$. We may add $k$ pairs of basepoints $\bold{w}$, $\bold{z}$, and curves to $\mathcal{D}$ to obtain a new diagram $\mathcal{H} = (\Sigma, \boldsymbol{\beta},\boldsymbol{\alpha}, \bold{w}\cup \bold{w}_B,\bold{z}\cup \bold{z}_B)$, see Figure \ref{fig:nine}, which encodes $(-Y,K\cup B)$. We reindex the $\boldsymbol{\beta}$ and $\boldsymbol{\alpha}$ curves for our convenience.
$\mathcal{H}$ is isotopic to the usual diagram appearing in the definition of the braid invariant when considering a particular basis of arcs for $S\smallsetminus \{p_1,\dots,p_{n+k}\}$. The generator $\bold{x}_{\mathcal{H}}$ pictured in the figure has homology class $\widehat{t}(K)\in \widehat{HFK}(\mathcal{H})$.
\begin{figure}[h]
\def400pt{250pt}
\input{nine.pdf_tex}
\caption{A portion of the diagram $\mathcal{H}$ in the case $k=2$, $g=1$, and $n=2$. The $k$ pairs of alpha/beta curves introduced to encode $K$ are indexed $1,\dots,k$ left to right. The rest of the curves are indexed as before (see Figures \ref{fig:one} and \ref{fig:two}) with a shift of $k$ in the first coordinate. The orange dots are components of $\bold{x}_\mathcal{H}$. The $\bold{w}_B\cup \bold{w}$ and $\bold{z}_B$ basepoints are depicted with solid and hollow dots, respectively.}
\label{fig:nine}
\end{figure}
In earlier work we used the diagrams $\mathcal{D}$ and $\mathcal{H}$ to prove Theorem \ref{thm:nonvanishing}, which plays an important role in the next section.
\begin{theorem} (Theorem 1.1 of \cite{braiddynamics})
\label{thm:nonvanishing}
Let $(B,\pi)$ be an open book supporting $(Y,\xi)$. If $K$ is braided about $B$, then $\widehat{t}(B\cup K)\in \widehat{HFK}(-Y,B\cup K)$ is nonzero.
\end{theorem}
\subsection{A Reformulation of the BRAID invariant $t$}
\label{subsec:reformulation}
Let $(B,\pi)$ be an open book decomposition supporting $(Y,\xi)$, where the binding $B$ has $n$ components. Let $(S,\phi)$ be the abstract open book corresponding to $(B,\pi)$, where $S$ has genus $g$. Let $K \subset Y$ be an index $k$ braid about $B$ having $m$ components.
In this section we reformulate the transverse invariant $t(K)$ in terms of the Alexander filtration induced by $-B$ on $CFK^- (-Y,K)$.
Let $\mathcal{H}=(\Sigma, \boldsymbol{\beta},\boldsymbol{\alpha}, \bold{w}\cup \bold{w}_B,\bold{z}\cup \bold{z}_B)$ be the diagram for $(-Y,K\cup B)$ of Figure \ref{fig:nine}. Let $\bold{w}_B = \bold{z}_{-B}$ and $\bold{z}_B = \bold{w}_{-B}$ as sets. Swapping these sets of basepoints corresponds to reversing the orientation of $B$, so $\mathcal{\widetilde{H}} = (\Sigma, \boldsymbol{\beta}, \boldsymbol{\alpha}, \bold{w}\cup \bold{w}_{-B}, \bold{z} \cup \bold{z}_{-B})$ is a diagram for $(-Y,K\cup -B)$.
$\mathcal{H}_0= (\Sigma, \boldsymbol{\beta}, \boldsymbol{\alpha}, \bold{w} , \bold{z} \cup \bold{z}_{-B})$ is a diagram for $(-Y,K)$ with $n$ free basepoints. Let $\bold{x}_0 \in CFK^{-,n}(\mathcal{H}_0)$ be the generator corresponding to $\bold{x}_{\mathcal{H}}$ of Figure \ref{fig:nine}. $-B$ induces an Alexander filtration on $CFK^{-,n} (\mathcal{H}_0)$:
\[
0 = \mathcal{F}_i^{-B}(\mathcal{H}_0) \subset \mathcal{F}_{i+1}^{-B}(\mathcal{H}_0) \subset \dots \subset \mathcal{F}_l^{-B}(\mathcal{H}_0) =CFK^{-,n} (\mathcal{H}_0).
\]
Let
\[
b= \min \{j \mid H_*(\mathcal{F}_j^{-B}(\mathcal{H}_0) ) \ne 0\}.
\]
Let $\mathcal{D} = (\Sigma, \{\beta_{k+1},\dots,\beta_{2g+2n+k-2}\}, \{\alpha_{k+1},\dots,\alpha_{2g+2n+k-2}\}, \bold{w}_B,\bold{z}_B)$ be the diagram for $(-Y,B)$ from the previous section, where we have preemptively reindexed the curves and basepoints, so that each entry of the tuple for $\mathcal{D}$ is a subset of the analogous entry for $\mathcal{H}$.
Consider the Heegaard diagram $\widetilde{\mathcal{D}} = (\Sigma, \{\beta_{k+1},\dots,\beta_{2g+2n+k-2}\}, \{\alpha_{k+1},\dots,\alpha_{2g+2n+k-2}\}, \bold{z}_{-B})$ for $-Y$ with $n$ basepoints.
As above, $-B$ induces a filtration on $\widehat{CF}(\widetilde{\mathcal{D}})$.
Let
\[
r= \min \{j \mid H_*(\mathcal{F}_j^{-B}(\widetilde{\mathcal{D}}) ) \ne 0\}.
\]
Note that $r=-g-n+1$.
\begin{lemma}
\label{lem:alexbot}
A generator of $CFK^{-,n} (\mathcal{H}_0)$ lies in $\mathcal{F}_b ^{-B}(\mathcal{H}_0) $ if and only if each of its components is in the region $S_{1/2}$.
Likewise, a generator of $\widehat{CF}(\widetilde{\mathcal{D}})$ lies in $\mathcal{F}_r ^{-B}(\widetilde{\mathcal{D}})$ if and only if each of its components is in the region $S_{1/2}$.
\end{lemma}
\begin{proof}
In both cases, the portion $S_{1/2}$ of the diagram is a relative periodic domain for $B$. The result follows immediately by applying Lemma \ref{lemma:relperiodic}.
\end{proof}
\begin{lemma}
As complexes we have $\mathcal{F}_b ^{-B}(\mathcal{H}_0) \simeq (V_1\otimes V_2\otimes \dots \otimes V_k) \otimes \mathcal{F}_r^{-B} (\widetilde{\mathcal{D}})$, where each $V_i$ is a free rank two $\mathbb{F}[U_1,\dots , U_m]$-module with basis $\{x_i,y_i\}$. Let $\sigma \in S_k$ denote the permutation of the points $\{p_1,\dots,p_k\}$ given by the monodromy $\widehat{\phi}$. The differential on $V_i$ is as follows
\[
\partial x_i=0 \ \ \ \ \ \ \ \ \ \ \ \ \
\partial y_i = (U_i + U_{\sigma (i)})x_i
\]
\end{lemma}
\begin{proof}
For $\bold{y}\in \mathbb{T}_{\boldsymbol{\beta}}\cap\mathbb{T}_{\boldsymbol{\alpha}}$ let $(\bold{y})_i$, $(\bold{y})^i$, denote the component of $\bold{y}$ on $\alpha_i$, $\beta_i$, respectively. Note that in the region $S_{1/2}$, for $1\le i \le k$, the curve $\alpha _i$ intersects only the curves $\beta_j$ with $j\le i$.
If a generator $\bold{y}$ lies in $\mathcal{F}_b ^{-B}(\mathcal{H}_0)$, it must be the case that $(\bold{y})_i = (\bold{y})^i$ for each $1\le i \le k$.
Let $x_i$, $y_i$, denote the point of $\alpha_i\cap \beta_i$ in the region $S_{1/2}$ of higher, respectively lower, Maslov grading.
For each $i\le k$ there are no disks leaving $x_i$ which contribute to the differential. There are exactly two disks from $y_i$ to $x_i$: one disk passes through the point $z_i$, the other through $z_{\sigma (i)}$.
\end{proof}
\begin{proposition}
\label{prop:rank}
If $Y$ is a $\mathbb{Q}HS^3$, then $H_{top}(\mathcal{F}_b ^{-B}(\mathcal{H}_0)) \simeq \mathbb{F}[U_1,\dots , U_m]$ and is generated by the homology class of $\bold{x}_0 \in \mathbb{T}_{\boldsymbol{\beta}} \cap \mathbb{T}_{\boldsymbol{\alpha}}$.
\end{proposition}
\begin{proof}
$\widehat{t}(B)=[\bold{x}_\mathcal{D}] \in \widehat{HFK}(-Y,B,-r)$ is nonzero (Theorem \ref{thm:nonvanishing}).
By Proposition 2.2 of \cite{linkgenus} $\widehat{HFK}(-Y,B,-r)$ is rank one.
It follows that $H_* (\mathcal{F}_r ^{-B}(\widetilde{\mathcal{D}}))\simeq \widehat{HFK}(-Y,-B,r)$ is generated by $[\bold{x}_{\widetilde{\mathcal{D}}}]$, the homology class of the generator having the same components as $\bold{x}_\mathcal{D}$ of Figure \ref{fig:three}.
In the previous lemma $\bold{x}_0$ is identified with $(x_1\otimes \dots \otimes x_k)\otimes \bold{x}_{\widetilde{\mathcal{D}}}$. Note that there are obvious domains (unions of index one disks) having positive Maslov index from the intersection point $\bold{x}_0$ to any other point of $V_1\otimes V_2\otimes \dots \otimes V_k$.
Thus $[\bold{x}_0]$ generates $H_{top}(\mathcal{F}_b ^{-B}(\mathcal{H}_0)) \simeq \mathbb{F}[U_1,\dots , U_m]$.
\end{proof}
\begin{remark}
\label{remark:maslov}
The assumption that $Y$ is a $\mathbb{Q}HS^3$ is in place so that the absolute Maslov $\mathbb{Q}$ grading is defined. This technical assumption may be replaced with the assumption that $\mathfrak{s}_{\xi}$ is torsion. Alternatively, one may consider the basepoint action $\psi_{w}$ on $H_*(\mathcal{F}_b ^{-B}(\mathcal{H}_0))$ for each $w \in \bold{w}$. Then $[\bold{x}_0]$ generates $\bigcap_{w\in \bold{w}} coker(\psi_{w})\simeq \mathbb{F}[U_1,\dots , U_m]$.
\end{remark}
We now relate the class $[\bold{x}_0]$ to $t(K)$.
Consider the triple diagram $(\Sigma, \boldsymbol{\beta}', \boldsymbol{\beta}, \boldsymbol{\alpha}, \bold{w}, \bold{z} \cup \bold{z}_{-B})$ shown in Figure \ref{fig:triangleone}. The set of curves $\boldsymbol{\beta}'$ is handleslide equivalent to $\boldsymbol{\beta}$ in the complement of all basepoints.
For $2\le i\le 2g+k$ and $2g+k+1\le j\le 2g+n+k-1$, the curves $\beta_i '$ and $\beta_{2,j}'$ are obtained by applying a small isotopy to $\beta_i$ and $\beta_{2,j}$, respectively.
Let $\mathcal{H}_1 = (\Sigma, \boldsymbol{\beta}', \boldsymbol{\alpha}, \bold{w} , \bold{z} \cup \bold{z}_{-B})$. Let $\boldsymbol{\Theta}$ denote the generator of $CFK^{-,n}(\Sigma,\boldsymbol{\beta}',\boldsymbol{\beta}, \bold{w} , \bold{z} \cup \bold{z}_{-B})$ in the top Maslov grading.\\
\begin{figure}[h]
\def400pt{300pt}
\input{triangleone.pdf_tex}
\caption{The $\boldsymbol{\alpha}$,$\boldsymbol{\beta}$ and $\boldsymbol{\beta '}$ curves are red, blue, and green, respectively. The generators $\bold{x}_0$, $\boldsymbol{\Theta}$ and $\bold{x}_1$ are represented by orange dots, brown squares, and yellow stars, respectively. The $\bold{w}$ and $\bold{z}_{-B}$ basepoints are depicted with solid and grey dots, respectively.}
\label{fig:triangleone}
\end{figure}
Consider the triple diagram $(\Sigma, \boldsymbol{\beta}', \boldsymbol{\alpha}, \boldsymbol{\alpha}',\bold{w} , \bold{z} \cup \bold{z}_{-B})$ shown in Figure \ref{fig:triangletwo}. The set of curves $\boldsymbol{\alpha}'$ is handleslide equivalent to $\boldsymbol{\alpha}$ in the complement of all basepoints.
For $2\le i\le 2g+k$ and $2g+k+1\le j\le 2g+n+k-1$, the curves $\alpha_i '$ and $\alpha_{2,j}'$ are obtained by applying a small isotopy to $\alpha_i$ and $\alpha_{2,j}$, respectively. Let $\mathcal{H}_2 = (\Sigma, \boldsymbol{\beta}', \boldsymbol{\alpha}', \bold{w} , \bold{z} \cup \bold{z}_{-B})$. Abusing notation, let $\boldsymbol{\Theta}$ denote the generator of $CFK^{-,n}(\Sigma,\boldsymbol{\alpha},\boldsymbol{\alpha}', \bold{w} , \bold{z} \cup \bold{z}_{-B})$ in the top Maslov grading.
\begin{figure}[h]
\def400pt{300pt}
\input{triangletwo.pdf_tex}
\caption{The $\boldsymbol{\alpha}$,$\boldsymbol{\beta} '$ and $\boldsymbol{\alpha} '$ curves are red, blue, and green, respectively. The generators $\bold{x}_1$, $\boldsymbol{\Theta}$ and $\bold{x}_2$ are represented by orange dots, brown squares, and yellow stars, respectively. To make the diagram simpler, we have performed an isotopy of $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}'$ curves and moved the leftmost basepoint $z_{-B}$ in the complement of said curves (before drawing the $\boldsymbol{\alpha}'$ curves).}
\label{fig:triangletwo}
\end{figure}
\begin{proposition}
\label{prop:maps}
Let
\[
\begin{split}
F_{0,1} : HFK^{-,n}(\mathcal{H}_0) \xrightarrow {\simeq} HFK^{-,n}(\mathcal{H}_1) &\\
F_{1,2} : HFK^{-,n}(\mathcal{H}_1) \xrightarrow {\simeq} HFK^{-,n}(\mathcal{H}_2)
\end{split}
\]
denote the isomorphisms induced by the triple diagrams above.
The composition $F_{0,2} = F_{1,2}\circ F_{0,1}$ sends the class $[\bold{x}_0]$ to the class $[\bold{x}_2]$.
\end{proposition}
\begin{proof}
The isomorphisms $F_{0,1}$ and $F_{1,2}$ are induced by pseudo-holomorphic triangle counting maps $f_{0,1}$ and $f_{1,2}$ respectively.
We first prove that $f_{0,1} (\bold{x}_0) = \bold{x}_1$.
Suppose that $u\in \pi_2 (\boldsymbol{\Theta},\bold{x}_0,\bold{y})$ is a Whitney triangle of Maslov index one which admits a pseudo-holomorphic representative. We claim that $\bold{y} = \bold{x}_1$ and that the domain $D(u)$ is a disjoint union of small triangles pictured in Figure \ref{fig:triangleone}. In this case $u$ has a unique pseudo-holomorphic representative. We prove this claim by analyzing the multiplicities of $D(u)$ near the generators $\bold{x}_0$ and $\boldsymbol{\Theta}$. First, we analyze the multiplicities near the small triangle shaded black in Figure \ref{fig:triangleone}.
The diagram in the upper right of the figure shows the local multiplicities in the regions near the triangle. The region just outside the triangle adjacent to the $\boldsymbol{\beta}'$ curve contains a basepoint of $\bold{w}$, so the local multiplicity in this region is zero. The region opposite to the triangle at the corner having a component of $\bold{x}_0$ contains a basepoint $z_{-B}$, so the local multiplicity is zero there as well. Since $\boldsymbol{\Theta}$ and $\bold{x}_0$ are corners of $D(u)$ it follows that $b+d = a+1$ and $b=a+c+1$. Subtracting the second equation from the first we get that $d=-c$. Because $u$ admits a pseudo-holomorphic representative all multiplicities of $D(u)$ must be non-negative; it follows that $d=c=0$ and $b=a+1$. If the component of $\bold{x}_1$ on the vertex of this small triangle is not a corner of $D(u)$, it follows that $b+e =0$, which in turn implies $b=e=0$ and $a=-1$, a contradiction.
Let $p\in \Sigma$ be the point denoted by a pink triangle in Figure \ref{fig:triangleone}. We have already shown that the multiplicities in three of the regions (all but $R$) which have a corner at $p$ are equal to zero. Since $p$ is not a corner of $D(u)$, it follows that the multiplicity in the region labeled $R$ is also equal to zero.
The multiplicities of regions near all the shaded small triangles now are identical to that of the multiplicities near the black triangle studied above. The claim follows.
To prove that $f_{1,2} (\bold{x}_1) = \bold{x}_2$ one uses a similar argument to show that the multiplicities of the domain of any triangle $v\in \pi_2 (\bold{x}_1,\boldsymbol{\Theta},\bold{y})$ admitting a pseudo-holomorphic representative are equal to $1$ in each shaded triangle of Figure \ref{fig:triangletwo} and zero elsewhere. As before, one must study multiplicities around the black triangle first to see that the multiplicity in the region labelled $R$ must be zero.
\end{proof}
Observe that in $\mathcal{H}_2$ we have small configurations about each point of $\bold{z}_{-B}$.
By performing $n$ free index 0/3 destabilizations to remove all points of $\bold{z}_{-B}$ we obtain a new diagram $\mathcal{T}$ and generator $\bold{x}$, see Figure \ref{fig:diagramT}. Let $T$ denote the diagram used in the definition of the BRAID invariant when using the basis for $S\smallsetminus \{p_1,\dots, p_k\}$ pictured on the left half of Figure \ref{fig:basisanddiagram}. We modify $T$ to obtain $\mathcal{T}$ by applying some finger moves along the $\beta$-curves in the region $-S_0$, which in turn may be realized via an isotopy of $\widehat{\phi}$. Since $t(K)$ is invariant under isotopy of $\widehat{\phi}$ (\cite{equiv}) we have that $[\bold{x}]=t(K)\in HFK^{-}(\mathcal{T})$.
\begin{figure}[h]
\def400pt{200pt}
\input{diagramT.pdf_tex}
\caption{The diagram $\mathcal{T}$. The generator $\bold{x}$ is depicted with orange dots.}
\label{fig:diagramT}
\end{figure}
\begin{figure}[h]
\def400pt{250pt}
\input{basisanddiagram.pdf_tex}
\caption{To the basis on the left we may associate the diagram on the right. Applying finger moves along the $\boldsymbol{\beta}$ curves in the region $-S_0$, which corresponds to isotopy of the monodromy, results in the diagram of Figure \ref{fig:diagramT}.}
\label{fig:basisanddiagram}
\end{figure}
The compositions of projection and inclusion maps
\[
\begin{split}
j^n : CFK^{-,n} (\mathcal{H}_2)\to CFK^- (\mathcal{T}) & \\ i^n : CFK^- (\mathcal{T}) \to CFK^{-,n} (\mathcal{H}_2)
\end{split}
\]
defined in subsection \ref{subsec:HFK}, send generators $\bold{x}_2$ to $\bold{x}$ and $\bold{x}$ to $\bold{x}_2$, respectively. In light of Proposition \ref{prop:maps} the following is evident:
\begin{proposition}
\label{prop:composition}
The compositions
\[
\begin{split}
(j^n)_*\circ F_{0,2} : HFK^{-,n} (\mathcal{H}_0)\to HFK^- (\mathcal{T}) & \\
F_{0,2}^{-1}\circ(i^n)_* : HFK^- (\mathcal{T}) \to HFK^{-,n} (\mathcal{H}_0)
\end{split}
\]
send $[\bold{x}_0]$ to $[\bold{x}] = t(K)$ and $[\bold{x}]=t(K)$ to $[\bold{x}_0]$, respectively. In particular, $[\bold{x}_0]$ lies in the summand $\bigcap_{z\in \bold{z}_{-B}} \operatorname{coker} (\psi _z)$.
\end{proposition}
\section{The BRAID invariant $t$ and Rational Open Books}
\label{sec:rationalchar}
In this short section we show that the reformulation of $t(K)$ of the previous section generalizes to braids about rational open books having connected binding.
Let $(B,\pi)$ denote a rational open book decomposition for $Y$ with connected binding; such an open book supports a unique contact structure $\xi$ (see Theorem 1.7 of \cite{CCSMCM}).
Let $K\subset Y$ denote a link braided about $B$ with $m$ components.
As in the integral case, $K$ is naturally a transverse link in $(Y,\xi)$.\\
We may choose a diagram \[
(\Sigma,\boldsymbol{\beta}_0',\boldsymbol{\alpha}_0,w_{-B},z_{-B}) \text{ for }(-Y,-B)
\]
such that $\beta_0'\subset \boldsymbol{\beta}_0'$ is a meridian for $-B$, and such that $\beta_0'$ intersects only $\alpha_0$ among all $\boldsymbol{\alpha}_0$ curves, and does so in a single point. In proving Proposition 3.1 of \cite{QOB}, Hedden and Plamenevskaya perform some finger moves on $\beta_0'$, obtaining a new curve $\beta_0$; this gives us a new Heegaard diagram
\[
\mathcal{B}=(\Sigma,\boldsymbol{\beta}_0,\boldsymbol{\alpha}_0,w_{-B},z_{-B}) \text{ for }(-Y,-B)
\]
such that replacing $w_{-B}$ with another point $w_{-B'}$ results in a diagram
\[\mathcal{B}' = (\Sigma, \boldsymbol{\beta}_0,\boldsymbol{\alpha}_0,w_{-B'},z_{-B}) \text{ for } (-Y,-B'),
\]
where $B'$ is some $(P,Pn+1)$ genuinely fibered cable of $B$. Using relative periodic domains for longitudes of $B$ and $B'$ to study the Alexander gradings, Hedden and Plamenevskaya identify the complexes $\widehat{CFK}(\mathcal{B},bot)$ and $\widehat{CFK}(\mathcal{B}',bot')$ with each other.
By starting with a diagram \[
(\Sigma,\boldsymbol{\beta}',\boldsymbol{\alpha},\bold{w} \cup w_{-B}, \bold{z} \cup z_{-B}) \text{ for }(-Y,K\cup -B)
\]
such that $\beta_0'\subset \boldsymbol{\beta}'$ is a meridian for $-B$, and such that $\beta_0'$ intersects only $\alpha_0$ among all $\boldsymbol{\alpha}$ curves, and does so in a single point, we may perform the same finger moves on $\beta_0'$ as in the proof of Proposition 3.1 of \cite{QOB}, obtaining a diagram
\[\widetilde{\mathcal{D}}=(\Sigma, \boldsymbol{\beta}, \boldsymbol{\alpha},\bold{w} \cup w_{-B}, \bold{z} \cup z_{-B})\text{ for }(-Y,K\cup -B)\]
such that replacing $w_{-B}$ with another point $w_{-B'}$ results in a diagram
\[\widetilde{\mathcal{D}}' =(\Sigma,\boldsymbol{\beta}, \boldsymbol{\alpha},\bold{w} \cup w_{-B}', \bold{z} \cup z_{-B})\text{ for }(-Y,K\cup -B').\]
Let $\mathcal{D}=(\Sigma,\boldsymbol{\beta}, \boldsymbol{\alpha},\bold{w} , \bold{z} \cup z_{-B})$. Both $-B$ and $-B'$ induce filtrations on $CFK^{-,1}(\mathcal{D})$. Copying the proof of Proposition 3.1 of \cite{QOB}, with minor changes in notation, shows that $\mathcal{F}_{bot}^{-B}(\mathcal{D}) = \mathcal{F}_{bot'}^{-B'}(\mathcal{D})$ as complexes.
\begin{proposition}
\label{prop:characterization}
Let $B\subset Y$ be a rationally fibered knot, with $Y$ a $\mathbb{Q}HS^3$. Let $K$ be a link braided about $B$ with $m$ components.
Let $\widetilde{\mathcal{G}} = (\Sigma, \boldsymbol{\beta}, \boldsymbol{\alpha}, \bold{w} \cup w_{-B}, \bold{z} \cup z_{-B})$ be a Heegaard diagram for $(-Y,K\cup -B)$ with a pair of basepoints encoding $-B$. Then $\mathcal{G}_0= (\Sigma, \boldsymbol{\beta}, \boldsymbol{\alpha}, \bold{w} , \bold{z}\cup z_{-B})$ is a Heegaard diagram for $(-Y,K)$ with one free basepoint. The knot $-B$ induces an Alexander filtration on the complex $CFK^{-,1}(\mathcal{G}_0)$
\[
\emptyset = \mathcal{F}_i^{-B}(\mathcal{G}_0) \subset \mathcal{F}_{i+1}^{-B}(\mathcal{G}_0) \subset \dots \subset \mathcal{F}_j^{-B}(\mathcal{G}_0) =CFK^{-,1} (\mathcal{G}_0)
\]
Then $H_{top} (\mathcal{F}_{bot}^{-B}(\mathcal{G}_0))$ is a rank one $\mathcal{F}[U_1,\dots,U_m]$-module.
\end{proposition}
\begin{proof}
The diagram $\widetilde{\mathcal{G}}$ is related to $\widetilde{\mathcal{D}}$ by a sequence of handleslides and isotopies avoiding all basepoints, together with index 1/2 (de)stabilizations and linked index 0/3 (de)stabilizations involving only the basepoints in $\bold{w} \cup \bold{z}$. This sequence of moves gives rise to chain maps, the composition of which
\[
g:CFK^{-,1}(\mathcal{G}_0)\to CFK^{-,1}(\mathcal{D})
\]
respects the Maslov grading and the filtration induced by $-B$.\\
\indent Similarly, the diagram $\widetilde{\mathcal{D}}'$ is related to $\widetilde{\mathcal{H}_0}$ of the previous section by a sequence of handleslides and isotopies avoiding all basepoints, together with index 1/2 (de)stabilizations and linked index 0/3 (de)stabilizations involving only the basepoints in $\bold{w} \cup \bold{z}$. This again gives rise to a chain map
\[
h: CFK^{-,1}(\mathcal{D}) \to CFK^{-,1}(\mathcal{H}_0)
\]
which also respects the Maslov grading and the filtration induced by $-B'$.\\
\indent
The isomorphism
\[
(h\circ g)_* :H_{top} (\mathcal{F}_{bot}^{-B}(\mathcal{G}_0))\to H_{top}(\mathcal{F}_{bot'} ^{-B'}(\mathcal{H}_0))
\]
combined with Proposition \ref{prop:rank}, shows that $H_{top} (\mathcal{F}_{bot}^{-B}(\mathcal{G}_0))$ is rank 1.
\end{proof}
The following is now evident:
\begin{lemma}
\label{lem:rationalmap}
Let
\[
Q : HFK^{-,1}(\mathcal{G}_0) \to HFK^{-,1}(\mathcal{H}_0)
\]
denote the isomorphism induced on homology by $h\circ g$.
If $[\bold{x}^{G}_0]$ generates $H_{top} (\mathcal{F}_{bot}^{-B}(\mathcal{G}_0))$, then $Q ([\bold{x}^{G}_0])=[\bold{x}_0]$.
\end{lemma}
Because the free index 0/3 (de)stabilization maps commute with all of the maps above, the reformulation of the BRAID invariant of the previous section extends to the case of braids about rational open books having connected binding.
\begin{remark}
The assumption that $Y$ is a $\mathbb{Q}HS^3$ can be weakened, see Remark \ref{remark:maslov}.
The case of disconnected rational binding can be dealt with similarly, although it is a bit more work. We have no use for this in the present paper, so we do not pursue this here.
\end{remark}
\section{A diagram for lens space braids}
\label{sec:diagram}
In this section we construct a Heegaard diagram $\mathcal{T}^G$ for $(-L(p,q),K)$ for a link $K$ braided about the standard rational open book $(B,\pi)$ supporting $(L(p,q),\xi_{UT})$ described in Subsection \ref{subsec:contact}. We use the reformulation of the transverse invariant to identify a generator $\bold{x}^G \in CFK^{-}(\mathcal{T}^G)$ whose homology class is $t(K)$.
Recall that the rational open book $(B,\pi)$ has disk fibers, the monodromy is a counterclockwise $2\pi q/p$ boundary twist, and that this rational open book is obtained from the standard open book for $S^3$ having disk pages by performing $-p/q$ surgery on the unknot $U\subset S^3$. The binding $B$ is the core of the filling torus.
Let $K' \subset S^3$ denote the pre-image of $K$ under the Dehn surgery described above. The pre-image $K'$ is braided about $U$; suppose that $K'$ has braid index $k$. Consider the Heegaard diagram pictured on the left side of Figure \ref{fig:2diagrams} for $(-S^3,K'\cup U)$; this is just the diagram from Figure \ref{fig:nine} in the case $g=0, n=1$, where $K'$ is a trivial 3-braid (in general the $\boldsymbol{\beta}$ curves may look different in $-S_0$, the bottom half of the diagram).
\begin{figure}[h]
\def400pt{400pt}
\input{2diagrams.pdf_tex}
\caption{Two diagrams for $(-S^3,K'\cup U)$. The second is obtained from the first by stabilization and handlesliding all the old beta curves over the new beta curve $\mu$. The $\bold{w}\cup w_U$ and $\bold{z}\cup z_U$ basepoints are depicted with solid and hollow dots, respectively.}
\label{fig:2diagrams}
\end{figure}
Stabilizing the diagram and performing a series of handle-slides, we obtain the diagram pictured on the right side of Figure \ref{fig:2diagrams}. A longitude for $U$, denoted $\lambda _U$, is also pictured in the diagram. The curves $\alpha_0$ and $\lambda_U$ divide the Heegaard torus into two large regions; let $A$ denote the top region and $\overline{A}$ the bottom region. These correspond to the regions $S_{1/2}$ and $-S_0$, respectively. For a nontrivial braid, the $\boldsymbol{\beta}$ curves will look different in the region $\overline{A}$.
\begin{comment}
\begin{remark}
\label{remark:pages}
$U$ is the core of the $\boldsymbol{\beta}$ solid torus, so the exterior of $U$ is the $\boldsymbol{\alpha}$ solid torus. The pages of the open book intersect the $\boldsymbol{\alpha}$ solid torus in disks and meet the Heegaard torus in curves parallel to $\alpha_0$, this is one way of seeing that $K$ is braided about $U$ from the diagram.
\end{remark}
\end{comment}
\begin{figure}[h]
\def400pt{400pt}
\input{planardiagrams.pdf_tex}
\caption{A diagram for a 3-braid in $L(3,1)$ is pictured on the right. A longitude $\lambda$ for $-B$ is pictured in purple.}
\label{fig:planardiagrams}
\end{figure}
\begin{remark}
The generator pictured on the right half of Figure \ref{fig:2diagrams} can be shown to represent $t(K'\cup U)$ using the reformulation of section \ref{subsec:reformulation}. Forgetting the basepoints $z_U$ and $w_U$, along with the pair of curves $\beta_1$ and $\alpha_1$ we get a diagram for $(-S^3,K')$. The generator representing $t(K')$ is easily identified as well, again using the reformulation.
\end{remark}
We prefer to draw the diagram on a fundamental domain for the torus; see the left side of Figure \ref{fig:planardiagrams}. The curve $\mu$ is a meridian for $U$. We may obtain a Heegaard diagram for $(-L(p,q),B\cup K)$ by replacing $\mu$ with another curve, $\beta_0$, and replacing the basepoints $z_U$ and $w_U$ with $z_B$ and $w_B$; see the right side of Figure \ref{fig:planardiagrams}.
The curves $\beta_0$, $\alpha_0$ and $\lambda_U$ cut $\Sigma$ into a large region $\Delta$ lying in $A$, a large region $\overline{\Delta}$ lying in $\overline A$, and a number of smaller regions. Depending on the braid, the $\boldsymbol{\beta}$ curves may look different in the region $\overline{\Delta}$.
Setting $z_{-B}=w_{B}$ and $w_{-B}=z_{B}$ we reverse the orientation of $B$. Forgetting the basepoint $w_{-B}$ we obtain a diagram $\mathcal{G}_0 = (\Sigma, \boldsymbol{\beta},\boldsymbol{\alpha}, \bold{w},\bold{z} \cup z_{-B})$ for $(-Y,K)$ with a single free basepoint $z_{-B}$. Let $\bold{x}_0^G \in CFK^{-}(\mathcal{G}_0)$ denote the generator depicted by orange dots. The following is inspired by Proposition 3.4 of \cite{QOB}.
\begin{proposition}
\label{prop:gen}
$[\bold{x}^{G}_0]$ generates $H_{top} (\mathcal{F}_{bot}^{-B}(\mathcal{G}_0))$.
\end{proposition}
\begin{proof}
The curve $\beta_0$ is a meridian for $B$. We may draw a longitude $\lambda$ for $-B$ on $\Sigma$ that is supported in a neighborhood of $\beta_0$, and intersects $\beta_0$ transversely in a single point as pictured in Figure \ref{fig:planardiagrams}. The curve $\beta_0$ is homologous to $-p\mu +q\alpha_0$, and $\lambda$ is homologous to $b\mu +a\alpha_0$ for $a$ and $b$ satisfying $pa+qb=-1$. Note that $b\beta_0 +p\lambda$ is homologous to $-\alpha_0$, so we may consider a relative periodic domain $\mathcal{P}$, whose homology class is the negative of that of the fiber (of the rational open book), having boundary $\alpha_0 +b\beta_0+p\lambda$. The multiplicity of $\mathcal{P}$ is 1 in the region $\overline{\Delta}$ and $0$ in the region $\Delta$.
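For the reader's convenience, the homology claim is verified by the computation
\[
b\beta_0 + p\lambda \sim b(-p\mu + q\alpha_0) + p(b\mu + a\alpha_0) = (pa+qb)\,\alpha_0 = -\alpha_0.
\]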
Lemma \ref{lemma:relperiodic} tells us that a generator $\bold{y} \in \mathbb{T}_{\boldsymbol{\beta}} \cap \mathbb{T}_{\boldsymbol{\alpha}}$ lies in $\mathcal{F}_{bot}^{-B}(\mathcal{G}_0)$ if and only if $n_\bold{y} (\mathcal{P})$ is minimized.
For a generator $\bold{y}$, we let $(\bold{y})_i$, $(\bold{y})^i$ denote the component of $\bold{y}$ on $\alpha_i$, $\beta_i$, respectively.
We first claim that any generator $\bold{y} \in \mathbb{T}_{\boldsymbol{\beta}} \cap \mathbb{T}_{\boldsymbol{\alpha}}$ minimizing $n_\bold{y} (\mathcal{P})$ must satisfy $(\bold{y})_0 = (\bold{y})^0$. Seeking a contradiction, suppose that $\bold{y}$ is a generator minimizing $n_\bold{y} (\mathcal{P})$ such that $(\bold{y})^0 = (\bold{y})_i$ for some $i>0$.
The four regions surrounding $(\bold{y})^0$ have multiplicities $m,m,m+b$ and $m+b$; this is because $\partial \mathcal{P}$ contains $\beta_0$ with multiplicity $b$ and does not contain $\alpha_i$.
There is an arc $\kappa \subset \beta_0$ from $(\bold{y})^0$ to a point $y_0\in\alpha_0\cap\beta_0$ which intersects neither $\alpha_0$ nor $\lambda$ in its interior. Because all of the alpha curves that $\kappa$ intersects in its interior have multiplicity zero in $\partial \mathcal{P}$, the multiplicities of $\mathcal{P}$ in the four regions surrounding $y_0$ are $m-1,m, m+b-1$ and $m+b$.
It must be the case that $(\bold{y})_0 = (\bold{y})^{j_1}$ for some $j_1 >0$. If $j_1 \ne i$ (in general this can happen, as the beta curves can be twisted in the region $\overline{\Delta}$), it follows that $(\bold{y})_{j_1} = (\bold{y})^{j_2}$ for some $j_2 \ne j_1$. In this way we construct a sequence $\{j_1, j_2,\dots j_t\}$ having length at most $k$ such that for $1\le s< t$, $(\bold{y})_{j_s} = (\bold{y})^{j_{s+1}}$ and $(\bold{y})_{j_t} = (\bold{y})^{i}$. Note that all of these intersections are in $\overline{\Delta}$, and thus have multiplicities equal to 1.
Let $\bold{y}'$ be obtained from $\bold{y}$ by replacing $(\bold{y})_0$ with $y_0$, $(\bold{y})_i$ with $(\bold{x}^{G}_0)_i$, and $(\bold{y})_{j_s}$ with $(\bold{x}^{G}_0)_{j_s}$ for each $1\le s \le t$. We conclude that $n_{\bold{y}} (\mathcal{P}) - n_{\bold{y}'} (\mathcal{P}) = 1+t > 0$, contradicting the minimality of $n_{\bold{y}} (\mathcal{P})$ and proving the claim.
In the following lemma we will show that $(\bold{x}^{G}_0)^0$ contributes minimally to $n_\bold{y} (\mathcal{P})$ among all points of $\beta_0\cap\alpha_0$. Assuming the lemma for now, we proceed with the proof.
For $n_\bold{y} (\mathcal{P})$ to be minimized, the other components of $\bold{y}$ must lie in the region $\Delta$ (otherwise a component lies in $\overline{\Delta}$, contributing strictly higher multiplicity). Let $x_i$ and $y_i$ denote the intersections between $\beta_i$ and $\alpha_i$ in the region $\Delta$ having higher and lower Maslov grading, respectively. It is now clear that $\mathcal{F}_{bot}^{-B}(\mathcal{G}_0) \simeq V_1 \otimes V_2 \otimes \dots \otimes V_k$, where each $V_i$ is a free rank two $\mathcal{F}[U_1,\dots,U_m]$-module, generated by $x_i$ and $y_i$, such that $\partial x_i = 0$. Since $(\bold{x}^{G}_0)^i=x_i$, the proposition follows.
\end{proof}
\begin{lemma}
$(\bold{x}^{G}_0)^0$ contributes minimally to $n_\bold{y} (\mathcal{P})$ among all points of $\beta_0\cap\alpha_0$.
\end{lemma}
\begin{proof}
Let $\mathcal{D} = (\Sigma, \beta_0,\alpha_0,z_{-B})$. As usual, $-B$ induces a filtration on $\widehat{CFK}(\mathcal{D})$. We aim to prove that $(\bold{x}^{G}_0)^0$ has minimal filtration level among all generators.
Proposition 3.4 of \cite{QOB} tells us that there is a unique generator $y$ having minimal filtration level, however they do not specify which generator.
Note that $pa+qb =-1$ implies $\gcd(b,p) = 1$. Thus we may find some $(r,s)$-cable $-\widetilde{B}$ of $-B$, with $r>0$, which is homologous to $- \mu$.
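To see that $\gcd(b,p)=1$, note that any common divisor of $b$ and $p$ also divides
\[
pa+qb = -1,
\]
and hence equals $\pm 1$.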
The knot $\widetilde{B}$ is also the binding of some rational open book for $L(p,q)$; moreover, Theorem 1.8 of \cite{CCSMCM} tells us that the contact structure supported by this new rational open book is contactomorphic to $\xi$.
$-\widetilde{B}$ also induces a filtration on $\widehat{CFK}(\mathcal{D})$; we claim that $(\bold{x}^{G}_0)^0$ has minimal filtration level among all generators. It will then follow from Theorem 1 of \cite{QOB} that $[(\bold{x}^{G}_0)^0] = [c(\xi)]= [y]$, hence $(\bold{x}^{G}_0)^0 = y$.
Consider the longitude $\widetilde{\lambda}$ for $-\widetilde{B}$ pictured in Figure \ref{fig:multiplicity}. There is a relative periodic domain $\widetilde{P}$, having homology class that of a negative fiber, with boundary $q\alpha_0 -\beta_0 +p\widetilde{\lambda}$. Analyzing the multiplicities of this domain is trivial; it is clear that $n_{(\bold{x}^{G}_0)^0} (\widetilde{P})$ is minimal.
\begin{tiny}
\begin{figure}[h]
\def400pt{150pt}
\input{multiplicity.pdf_tex}
\caption{The diagram $\mathcal{D}$ for $L(5,3)$ is pictured. A longitude $\widetilde{\lambda}$ for $-\widetilde{B}$ is pictured in purple. The multiplicities of $\widetilde{P}$ in each region are shown. The basepoints $w_{-\widetilde{B}}$ and $z_{-B}$ are depicted with solid and hollow dots, respectively.}
\label{fig:multiplicity}
\end{figure}
\end{tiny}
\end{proof}
\begin{figure}[h]
\def400pt{400pt}
\input{somecounts.pdf_tex}
\caption{The triple diagrams $(\Sigma, \boldsymbol{\beta}', \boldsymbol{\beta}, \boldsymbol{\alpha}, \bold{w} , \bold{z} \cup z_{-B})$ and $(\Sigma, \boldsymbol{\beta}', \boldsymbol{\alpha}, \boldsymbol{\alpha}', \bold{w} , \bold{z} \cup z_{-B})$ are shown on the left and right respectively. The $\boldsymbol{\alpha}, \boldsymbol{\alpha}', \boldsymbol{\beta}$ and $\boldsymbol{\beta}'$ curves are drawn red, purple, blue and green respectively. The generators $\bold{x}^{G}_0$ and $\bold{x}^{G}_2$ are depicted with orange dots. The generators $\boldsymbol{\Theta}$ are depicted with brown squares. The generator $\bold{x}^{G}_{1}$ is depicted with yellow stars. The $\bold{w}$, $\bold{z}$, and $\bold{z}_{-B}$ basepoints are depicted with solid, hollow, and grey dots, respectively.}
\label{fig:somecounts}
\end{figure}
Consider the triple diagrams $(\Sigma, \boldsymbol{\beta}', \boldsymbol{\beta}, \boldsymbol{\alpha}, \bold{w} , \bold{z} \cup z_{-B})$ and $(\Sigma, \boldsymbol{\beta}', \boldsymbol{\alpha}, \boldsymbol{\alpha}', \bold{w} , \bold{z} \cup z_{-B})$ shown in Figure \ref{fig:somecounts}.
The sets of curves $\boldsymbol{\beta}$ and $\boldsymbol{\alpha}$ are handleslide equivalent to $\boldsymbol{\beta}'$ and $\boldsymbol{\alpha}'$, respectively, in the complement of all basepoints.
For $i\ne 1$ the curve $\beta_i '$ is obtained from $\beta_i$ by a small isotopy, while $\beta_1 '$ is obtained by sliding $\beta_1$ over other $\beta$ curves. The curves $\boldsymbol{\alpha}'$ are obtained from $\boldsymbol{\alpha}$ via handleslides in the same way.
Let $\mathcal{G}_1 = (\Sigma, \boldsymbol{\beta}', \boldsymbol{\alpha}, \bold{w} , \bold{z} \cup z_{-B})$ and $\mathcal{G}_2 = (\Sigma, \boldsymbol{\beta}', \boldsymbol{\alpha}', \bold{w} , \bold{z} \cup z_{-B})$. Abusing notation, let $\boldsymbol{\Theta}$ denote both the top graded generator of $CFK^{-,1}(\Sigma,\boldsymbol{\beta}',\boldsymbol{\beta}, \bold{w},\bold{z}\cup z_{-B})$ and $CFK^{-,1}(\Sigma,\boldsymbol{\alpha},\boldsymbol{\alpha}', \bold{w},\bold{z}\cup z_{-B})$. Let $\bold{x}^{G}_1\in CFK^{-,1}(\mathcal{G}_1)$ denote the intersection point depicted with stars in Figure \ref{fig:somecounts}, and let $\bold{x}^{G}_2\in CFK^{-,1}(\mathcal{G}_2)$ denote the intersection point depicted with orange dots on the right half of the figure.
\begin{proposition}
\label{prop:relate}
Let
\[
\begin{split}
G_{0,1} : HFK^{-,1}(\mathcal{G}_0) \xrightarrow {\simeq} HFK^{-,1}(\mathcal{G}_1) &\\
G_{1,2} : HFK^{-,1}(\mathcal{G}_1) \xrightarrow {\simeq} HFK^{-,1}(\mathcal{G}_2)
\end{split}
\]
denote the isomorphisms induced by the triple diagrams above.
The composition $G_{0,2}=G_{1,2}\circ G_{0,1}$ sends the class $[\bold{x}^{G}_0]$ to the class $[\bold{x}^{G}_2]$.
\end{proposition}
\begin{proof}
The isomorphisms $G_{0,1}$ and $G_{1,2}$ are induced by pseudo-holomorphic triangle counting maps $g_{0,1}$ and $g_{1,2}$ respectively.
We outline the proof that $g_{0,1} (\bold{x}^{G}_0) = \bold{x}^{G}_1$; proving that $g_{1,2} (\bold{x}^{G}_1) = \bold{x}^{G}_2$ requires a similar argument.
Let $u\in \pi_2 (\boldsymbol{\Theta},\bold{x}^{G}_0,\bold{y})$ be a Whitney triangle having corner at some generator $\bold{y}\in \mathbb{T}_{\boldsymbol{\beta}'}\cap\mathbb{T}_{\boldsymbol{\alpha}}$ which misses the basepoints $\bold{w}$. We claim that $u$ has domain equal to the union of small gray and black triangles pictured in Figure \ref{fig:somecounts}, in which case it has a unique holomorphic representative. We count the multiplicities of the domain of $u$. Using the method presented in the proof of Proposition \ref{prop:maps}, it is immediate that for all $i>0$ we have that $(\bold{y})_i = (\bold{x}^{G}_1)_i$ and that the domain of $u$ contains the small gray triangles. Because the triple diagram corresponds to the identity cobordism, the induced triangle counting map should preserve $Spin^C$ structure. The only such generator $\bold{y}$ having the correct $Spin^C$ structure is $\bold{x}^{G}_1$, and the only Whitney triangle $u\in \pi_2 (\boldsymbol{\Theta},\bold{x}^{G}_0,\bold{x}^{G}_1)$ having no negative multiplicities is the one desired.
\end{proof}
Note that the diagram $\mathcal{G}_2$ has a small configuration about the point $z_{-B}$. Let $\mathcal{T}^G$ be the diagram obtained by performing the corresponding free index 0/3 destabilization. The maps
\[
\begin{split}
j : CFK^{-,1} (\mathcal{G}_2)\to CFK^- (\mathcal{T}^G) & \\ i : CFK^- (\mathcal{T}^G) \to CFK^{-,1} (\mathcal{G}_2)
\end{split}
\]
are defined in subsection \ref{subsec:HFK}. Let $\bold{x}^G$ denote $j(\bold{x}^G_2)$.
\begin{theorem}
\label{thm:diagram}
The generator $\bold{x}^G$ has homology class $t(K)$, i.e. $[\bold{x}^G] = t(K)\in HFK^- (-L(p,q),K)$.
\end{theorem}
\begin{proof}
We are now in position to apply the reformulation of section \ref{sec:rationalchar}. Let $Q$ be the map defined in Lemma \ref{lem:rationalmap}. Let $\bold{x}, F_{0,2}, \mathcal{H}_2$ and $\mathcal{T}$ be as in Propositions \ref{prop:maps} and \ref{prop:composition}. Combining that lemma and those propositions with Propositions \ref{prop:characterization} and \ref{prop:gen} we have that the composition
\[
F_{0,2}\circ Q\circ G_{0,2}^{-1}: HFK^{-} (\mathcal{G}_2)\to HFK^{-} (\mathcal{H}_2)
\]
is an isomorphism mapping $[\bold{x}^G_2]$ to $[\bold{x}_2]$.
Moreover, since the maps above commute with the free index 0/3 (de)stabilization maps, the composition
\[
(j)_*\circ F_{0,2}\circ Q\circ G_{0,2}^{-1}\circ (i)_* : HFK^{-} (\mathcal{T}^G)\to HFK^{-} (\mathcal{T})
\]
is an isomorphism mapping $[\bold{x}^G]$ to $[\bold{x}] = t(K)$.
\end{proof}
We will refer to $\mathcal{T}^G$ as the \textbf{standard braid diagram} for $K$.
\begin{lemma}
\label{lem:trivial}
Let $\tau_n\in B_n$ denote the trivial braid having index $n$. The Maslov gradings of the GRID and BRAID invariants agree for $\tau_n \circ \delta^{q/p}$; i.e.,
\[
M(\theta(\tau_n \circ \delta^{q/p}))=M(t(\tau_n \circ \delta^{q/p})).
\]
\end{lemma}
\begin{proof}
The $\boldsymbol{\beta}$ curves of the standard braid diagram for $\tau_n\circ\delta^{q/p}$ are particularly simple, and we can handleslide them to obtain a grid diagram having index $n$.
Consider the triple diagrams $(\Sigma,\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\alpha}',\bold{w},\bold{z})$ and $(\Sigma,\boldsymbol{\beta}',\boldsymbol{\beta},\boldsymbol{\alpha'},\bold{w},\bold{z})$ pictured in Figure \ref{fig:maslovtriple}. Here, we have initially isotoped the diagram $\mathcal{T}^G = (\Sigma,\boldsymbol{\beta},\boldsymbol{\alpha}, \bold{w},\bold{z})$ so that the final diagram appears more grid-like.
\begin{figure}[h]
\def400pt{400pt}
\input{maslovtriple.pdf_tex}
\caption{The triple diagrams $(\Sigma,\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\alpha}',\bold{w},\bold{z})$ and $(\Sigma,\boldsymbol{\beta}',\boldsymbol{\beta},\boldsymbol{\alpha'},\bold{w},\bold{z})$ pictured on the left and right, respectively. The $\boldsymbol{\beta}',\boldsymbol{\beta},\boldsymbol{\alpha}$ and $\boldsymbol{\alpha}'$ curves are drawn green, blue, red and purple, respectively. The $\bold{w}$ and $\bold{z}$ basepoints are depicted with solid and hollow dots, respectively. The generators $\bold{x}^G$ and $\bold{x}$ are depicted with orange dots. The generators $\boldsymbol{\Theta}$ and $\bold{x}'$ are depicted with brown squares and yellow stars, respectively.}
\label{fig:maslovtriple}
\end{figure}
Let $G = (\Sigma,\boldsymbol{\beta}',\boldsymbol{\alpha'},\bold{w},\bold{z})$ and $G' = (\Sigma,\boldsymbol{\beta},\boldsymbol{\alpha'},\bold{w},\bold{z})$. Let $\bold{x}$ and $\bold{x}'$ denote the generators pictured in the figure.
Let $\boldsymbol{\Theta}$ denote both the top graded generator in $CFK^- (\Sigma,\boldsymbol{\beta}',\boldsymbol{\beta},\bold{w},\bold{z})$ and $CFK^- (\Sigma,\boldsymbol{\alpha},\boldsymbol{\alpha}',\bold{w},\bold{z})$.
There are Maslov index zero Whitney triangles $u \in \pi _2 (\bold{x}^G,\boldsymbol{\Theta},\bold{x}')$ and $u' \in \pi_2 (\boldsymbol{\Theta},\bold{x}',\bold{x})$, whose domains are shaded in Figure \ref{fig:maslovtriple}. It follows that
\[
M(t(\tau_n\circ\delta^{q/p}))=M(\bold{x}^G)=M(\bold{x})=M(\theta(\tau_n\circ\delta^{q/p})).
\]
\end{proof}
\begin{proposition}
\label{prop:needaname}
For a transverse braid $K\subset (L(p,q),\xi_{UT})$, the Maslov gradings of the GRID and BRAID invariants agree; i.e.,
\[
M(\theta(K))=M(t(K)).
\]
\end{proposition}
\begin{proof}
Let $\beta\in B_n$ be an arbitrary index $n$ braid. One easily computes (as in the proof of Proposition \ref{prop:gradingcomparison}) that $M(\theta(\beta\circ\delta^{q/p}))-M(\theta(\tau_n\circ\delta^{q/p}))=w(\beta)$, the writhe of $\beta$. We will show that the Maslov grading of the BRAID invariant satisfies the same equation; the result will then follow from Lemma \ref{lem:trivial}.
We compare the Maslov gradings of $t(\beta\circ\delta^{q/p})$ and $t(\tau_n\circ\delta^{q/p})$.
Let $(T^2,\boldsymbol{\gamma},\boldsymbol{\alpha},\bold{w},\bold{z})$ and $(T^2,\boldsymbol{\beta},\boldsymbol{\alpha},\bold{w},\bold{z})$ denote the standard braid diagrams for $\beta\circ\delta^{q/p}$ and $\tau_n\circ\delta^{q/p}$, respectively. We draw all three sets of curves on a single torus (as in Figure \ref{fig:gradingshift}), obtaining the triple diagram $(T^2,\boldsymbol{\gamma},\boldsymbol{\beta},\boldsymbol{\alpha},\bold{w},\bold{z})$. Note that the diagram $(T^2,\boldsymbol{\gamma},\boldsymbol{\beta},\bold{w},\bold{z})$ can be identified with a diagram used to define $t(\beta)\in HFK^-(-S^3,\beta)$, connect-summed with the standard Heegaard diagram for $S^1\times S^2$. Let $\Theta\in HF^-(S^1\times S^2)$ denote the generator in the top Maslov grading.
Let $u$ denote the Whitney triangle having Maslov index $1-n$ whose domain is shaded in Figure \ref{fig:gradingshift}.
This Whitney triangle has corners at generators having homology classes $t(\beta)\otimes \Theta, t(\tau_n\circ\delta^{q/p})$ and $t(\beta\circ\delta^{q/p})$.
Using $u$ to compare Maslov gradings as in \cite{absgrading}, we see that
\[
M(t(\beta))+M(t(\tau_n\circ\delta^{q/p}))-M(t(\beta\circ\delta^{q/p}))=1-n.
\]
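Rearranging the above, we obtain
\[
M(t(\beta\circ\delta^{q/p}))-M(t(\tau_n\circ\delta^{q/p}))=M(t(\beta))+n-1.
\]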
The grading $M(t(\beta))$ was computed in \cite{equiv} to equal $sl(\beta)+1 = w(\beta)-n+1$. Combining these equations gives
\[
M(t(\beta\circ\delta^{q/p}))-M(t(\tau_n\circ\delta^{q/p}))=w(\beta).
\]
\begin{figure}[h]
\def400pt{200pt}
\input{gradingshift.pdf_tex}
\caption{The homology classes of the generators depicted by brown squares, yellow stars and orange dots are $t(\beta)\otimes \Theta$, $t(\tau_n\circ\delta^{q/p})$ and $t(\beta\circ\delta^{q/p})$, respectively. In this example $(p,q)=(2,1)$ and $\beta = \sigma_1$.}
\label{fig:gradingshift}
\end{figure}
\end{proof}
\section{Another reformulation of the BRAID invariant for lens space braids}
\label{sec:altchara}
Let $(B,\pi)$ denote the rational open book decomposition supporting $(L(p,q),\xi_{UT})$ studied in the previous section, and $K$ an index $k$ braid about $(B,\pi)$. Let $U\subset L(p,q)$ denote the Seifert cable of $B$; this is the cable specified by how a fiber $D$ of $(B,\pi)$ meets the boundary of a solid torus neighborhood of $B$. The braid $K$ intersects $D$ in $k$ points.
In this section we reformulate the invariant $t(K)\in HFK^{-}(-L(p,q),K)$ in terms of the Alexander filtration induced by $-U$ on the knot Floer chain complex. We will use this alternate reformulation in a subsequent section to prove that the GRID invariant for transverse links in $(L(p,q),\xi_{UT})$ is equivalent to $t(K)$.
\begin{remark}
In the following reformulation we use two pairs of basepoints to encode $-U$. One may easily formulate and prove a version using one pair of basepoints, but two pairs are better suited to proving GRID = BRAID.
\end{remark}
By adding a basepoint $w_{-U}$ to the diagram $\mathcal{G}_0$ of the previous section, and relabeling $z_{-B}$ as $z_{-U}$, we obtain a Heegaard diagram for $(-L(p,q),K\cup -U)$. Adding an extra pair of curves and basepoints for $-U$, we obtain the diagram $\mathcal{D}$ pictured in Figure \ref{fig:altchara}. Denote the new curves by $\alpha^s$ and $\beta^s$. Forgetting the basepoints $\bold{w}_{-U}$ we obtain a diagram \[\mathcal{D}_0 = (\Sigma,\boldsymbol{\beta},\boldsymbol{\alpha}, \bold{w},\bold{z}\cup\bold{z}_{-U})\text{ for }(-L(p,q),K)\] with two free basepoints. Let $\bold{x}_0^D$ denote the generator pictured in the figure.
Let $(\mathcal{F}_{bot}^{-U} (\mathcal{D}_0), \mathfrak{s}_{\xi})$ denote the summand of $\mathcal{F}_{bot}^{-U} (\mathcal{D}_0)$ whose generators $\bold{x}$ satisfy $\mathfrak{s}_{\bold{w}}(\bold{x})=\mathfrak{s}_{\xi}$.
\begin{figure}[h]
\def400pt{200pt}
\input{altchara.pdf_tex}
\caption{A Heegaard diagram for $(-L(3,1),K\cup -U)$. A longitude $\lambda$ for $-U$ is pictured in purple. $\mathcal{D}_0 = (\Sigma,\boldsymbol{\beta},\boldsymbol{\alpha},\bold{w},\bold{z}\cup \bold{z}_{-U})$. The components of $\bold{x}_0^D$ are orange dots.}
\label{fig:altchara}
\end{figure}
\begin{lemma}
\label{lem:needaname}
$\mathcal{F}_{bot}^{-U} (\mathcal{D}_0) \simeq (V_1 \otimes \dots \otimes V_k)^{\oplus p}$ and $(\mathcal{F}_{bot}^{-U} (\mathcal{D}_0), \mathfrak{s}_{\xi}) \simeq V_1 \otimes \dots \otimes V_k$, where each $V_i$ is a free rank two $\mathcal{F}[U_1,\dots,U_m]$-module, generated by $x_i$ and $y_i$, such that $\partial x_i = 0$.
\end{lemma}
\begin{proof}
Let $\mathcal{P}$ denote the obvious disk bounded by a longitude for $U$ on the Heegaard diagram in Figure \ref{fig:altchara}. Orientation reversal corresponds to inverting the Alexander grading, up to an overall shift. In order to minimize the Alexander grading induced by $-U$, we maximize the grading induced by $U$. Lemma \ref{lemma:relperiodic} tells us that a generator $\bold{y}$ lies in $\mathcal{F}_{bot}^{-U} (\mathcal{D}_0)$ if $n_{\bold{y}} (\mathcal{P})$ is maximized. If $n_{\bold{y}} (\mathcal{P})$ is maximal, it is immediate that $(\bold{y})_i = (\bold{y})^i$ for each $i$, and that the component of $\bold{y}$ on $\alpha^s\cap\beta^s$ is fixed. For each $i>0$ there are two possible values of $(\bold{y})_i$, namely $x_i$ and $y_i$; let $x_i$ denote the intersection point contributing the greater Maslov grading. The component of $\bold{y}$ on $\alpha_0$ is determined by the $Spin^C$ structure; there are $p$ possible values for $(\bold{y})_0$, corresponding to the different $Spin^C$ structures on $-L(p,q)$.
\end{proof}
It follows that $H_{top}(\mathcal{F}_{bot}^{-U} (\mathcal{D}_0), \mathfrak{s}_{\xi})$ is generated by $[\bold{x}_0^D]$. Next, we perform an isotopy of $\alpha^s$ to get a diagram $\mathcal{D}_1$, followed by a free 0/3-index destabilization to obtain the diagram $\mathcal{G}_0$ of the previous section, and then relate $[\bold{x}_0^D]$ to $[\bold{x}_0^G]$.
Consider the triple diagram $(\Sigma,\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\alpha}',\bold{w},\bold{z}\cup \bold{z}_{-U})$ shown in Figure \ref{fig:triple}. The set $\boldsymbol{\alpha}'$ is obtained by isotoping $\alpha^s$ to intersect only $\beta^s$, and $\alpha_i '$ is obtained by applying a small isotopy to $\alpha_i$. Let $\mathcal{D}_1 = (\Sigma,\boldsymbol{\beta},\boldsymbol{\alpha}',\bold{w},\bold{z}\cup \bold{z}_{-U})$, and
\[
D_{0,1}:HFK^{-,2}(\mathcal{D}_0) \xrightarrow {\simeq}HFK^{-,2}(\mathcal{D}_1)
\]
denote the isomorphism induced by the triple diagram.
Let $\bold{x}_1^D$ denote the generator of $CFK^{-,2}(\mathcal{D}_1)$ whose components are pictured in Figure \ref{fig:triple}.
\begin{proposition}
\label{prop:name2}
$D_{0,1} ([\bold{x}_0^D]) = [\bold{x}_1^D]$.
\end{proposition}
\begin{proof}
The proof is essentially that of Proposition \ref{prop:relate}.
Let $\boldsymbol{\Theta}$ denote the top graded generator of $CFK^{-,2}(\Sigma,\boldsymbol{\alpha},\boldsymbol{\alpha}', \bold{w},\bold{z}\cup \bold{z}_{-U})$.
Let $u\in \pi_2 (\bold{x}_0^D, \boldsymbol{\Theta},\bold{y})$ be a Whitney triangle having a corner at some generator $\bold{y}\in \mathbb{T}_{\boldsymbol{\beta}}\cap\mathbb{T}_{\boldsymbol{\alpha}'}$. As before, by studying the possible multiplicities of $u$, one can show that $u$ has domain equal to the union of the small gray and black triangles pictured in Figure \ref{fig:triple}. It is immediate that for each $i>0$, $(\bold{y})_i = (\bold{x}^D_1)_i$. The $Spin^C$ structure fixes $(\bold{y})_0 = (\bold{x}^D_1)_0$.
\end{proof}
\begin{figure}[h]
\def400pt{200pt}
\input{triple.pdf_tex}
\caption{The triple diagram $(\Sigma,\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\alpha}',\bold{w},\bold{z}\cup \bold{z}_{-U})$ encoding a 3-braid in $-L(3,1)$. The $\boldsymbol{\beta},\boldsymbol{\alpha}$ and $\boldsymbol{\alpha}'$ curves are blue, red, and green, respectively. The components of $\bold{x}_0^D, \bold{x}_1^D$, and $\boldsymbol{\Theta}$ are orange dots, stars, and brown squares, respectively.}
\label{fig:triple}
\end{figure}
The diagram $\mathcal{D}_1$ has a small configuration about one of the $\bold{z}_{-U}$ basepoints. Performing the index 0/3 free destabilization we see that $j (\bold{x}^D_1) = \bold{x}^G _0$ (where this generator is defined in the previous section). Proposition \ref{prop:relate} and Theorem \ref{thm:diagram} relate $[\bold{x}^G_0]$ to the BRAID invariant $t(K)$.
\section{A Reformulation of the GRID invariant $\theta$}
\label{sec:gridchara}
In this section we show that the GRID invariant $\theta$ can be reformulated in terms of the filtration on the knot Floer complex of a braid induced by the Seifert cable of the braid axis. This reformulation is the same as that of Section \ref{sec:altchara} for the BRAID invariant $t$, and we will use this to show that the two invariants are equivalent.
Let $U$ denote the Seifert cable of the binding $(B,\pi)$ of the standard rational open book for $(L(p,q),\xi_{UT})$.
Let $K\subset (L(p,q),\xi_{UT})$ be the transverse link encoded by a grid diagram $G$. Fixing a fundamental domain for the Heegaard torus, $G$ gives rise to a rectilinear braided projection of $K$ onto the fundamental domain, missing its left and right boundaries; see Figure \ref{fig:rectproj}. This rectilinear projection may be altered and enhanced to one for $K\cup -U$. We may encode this projection with a grid diagram $ (T^2,\boldsymbol{\beta},\boldsymbol{\alpha},\bold{w}\cup\bold{w}_{-U},\bold{z}\cup\bold{z}_{-U})$ encoding $K\cup -U$ having index at most
\[
n+2k+2
\]
where $n$ is the index of $G$ and $k$ is the braid index of $K$.
Consider the diagram $\mathcal{S}_0 =(T^2,\boldsymbol{\beta},\boldsymbol{\alpha},\bold{w},\bold{z}\cup\bold{z}_{-U})$ for $K$ with two free basepoints $\bold{z}_{-U} = \{z_0,z_1\}$. Let $\bold{x}^{S}_0\in \mathbb{T}_{\boldsymbol{\beta}}\cap\mathbb{T}_{\boldsymbol{\alpha}}$ denote the generator having components in the upper left corners of parallelograms containing points of $\bold{w}\cup\bold{z}_{-U}$.
\begin{figure}[h]
\def400pt{200pt}
\input{rectproj.pdf_tex}
\caption{A rectilinear projection for $K=\sigma_1 ^{-1} \circ \delta ^{1/4}$ coming from an index one diagram is pictured on the left. On the right we have a rectilinear projection of $K\cup -U$.}
\label{fig:rectproj}
\end{figure}
\begin{lemma}
\label{lem:thetagen}
The class $[\bold{x}^{S}_0]$ generates $H_{top} (\mathcal{F}^{-U}_{bot}(\mathcal{S}_0), \mathfrak{s}_{\xi})$.
\end{lemma}
\begin{proof}
Proposition \ref{prop:spincagrees} tells us that $\mathfrak{s}_{\bold{w}}(\bold{x}_0^S) = \mathfrak{s}_{\xi}$.
Using Lemma \ref{lemma:relperiodic} it is easy to see that the generator $\bold{x}^S_0$ is in the bottom-most filtration level, as there is an obvious relative periodic domain $D$ for $U$, and the generator $\bold{x}^S_0$ has maximal multiplicity $n_{\bold{x}^S_0} (D)$.
The class is non-zero by Proposition \ref{prop:nontorsion}.
The triangle counts in the following propositions relate the class $[\bold{x}^S_0]$ to the invariant $\theta(K)$; in particular, we will see that the Maslov grading of the generator $\bold{x}^S_0$ is $sl_\mathbb{Q} (L) + \frac{1}{p} - d(p,q,q-1) -2$.
It follows from the discussion following Lemma \ref{lem:needaname}, Proposition \ref{prop:needaname} and the results of the previous section that $top = sl_\mathbb{Q} (L) + \frac{1}{p} - d(p,q,q-1) -2$.
By the discussion following Lemma \ref{lem:needaname}, $H_{top} (\mathcal{F}^{-U}_{bot}(\mathcal{S}_0), \mathfrak{s}_{\xi})$ has rank one.
\end{proof}
We wish to relate the class $[\bold{x}^{S}_0]$ to $\theta (K)$. We can perform two sequences of handleslides followed by two free index-0/3 destabilizations to go from $\mathcal{S}_0$ to a grid diagram encoding $K$.
We have labelled $\bold{z}_{-U} = \{z_0,z_1\}$. Suppose that $z_0$ lies in the $i_0^{th}$ column and $j_0^{th}$ row of $\mathcal{S}_0$, and that $z_1$ lies in the $i_1^{th}$ column and $j_1^{th}$ row. Consider the triple diagram $(T^2,\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\alpha}',\bold{w},\bold{z}\cup\bold{z}_{-U})$ pictured in Figure \ref{fig:finaltriple}. For $r\ne j_0+1, j_1+1$ the curve $\alpha' _r$ is a small perturbation of the curve $\alpha_r$. For $r= j_0+1$ or $j_1+1$ the curve $\alpha' _r$ is obtained by handlesliding $\alpha_r$ over $\alpha_{r-1}$.
Also consider the triple diagram $(T^2,\boldsymbol{\beta}',\boldsymbol{\beta},\boldsymbol{\alpha}',\bold{w},\bold{z}\cup\bold{z}_{-U})$. For $r\ne i_0, i_1$ the curve $\beta'_r$ is a small perturbation of the curve $\beta_r$. For $r= i_0$ or $i_1$ the curve $\beta'_r$ is obtained by handlesliding $\beta_r$ over $\beta_{r+1}$.
\begin{figure}[h]
\def400pt{400pt}
\input{finaltriple.pdf_tex}
\caption{The case of a 1-braid $\tau_1\circ \delta^{1/3}$. The $\boldsymbol{\beta},\boldsymbol{\beta}',\boldsymbol{\alpha}$ and $\boldsymbol{\alpha}'$ curves are drawn blue, green, red and purple, respectively. The $\bold{w},\bold{z}$ and $\bold{z}_{-U}$ are solid, hollow and grey dots, respectively. The generators $\bold{x}_0^S$ and $\bold{x}_2^S$ are depicted with orange dots. The generator $\bold{x}_1^S$ is depicted with yellow stars. The brown squares depict $\boldsymbol{\theta}$.}
\label{fig:finaltriple}
\end{figure}
We let
\begin{align*}
\mathcal{S}_1 =(T^2,\boldsymbol{\beta},\boldsymbol{\alpha}',\bold{w},\bold{z}\cup\bold{z}_{-U})\\
\mathcal{S}_2 =(T^2,\boldsymbol{\beta}',\boldsymbol{\alpha}',\bold{w},\bold{z}\cup\bold{z}_{-U}),
\end{align*}
and the generators $\bold{x}^S _1 \in CFK^{-,2} (\mathcal{S}_1)$ and $\bold{x}^S_2 \in CFK^{-,2} (\mathcal{S}_2)$ be as pictured in Figure \ref{fig:finaltriple}.
\begin{proposition}
\label{prop:propS}
Let \begin{align*}
S_{0,1}:HFK^{-,2} (\mathcal{S}_0)\to HFK^{-,2} (\mathcal{S}_1)\\
S_{1,2}:HFK^{-,2} (\mathcal{S}_1)\to HFK^{-,2} (\mathcal{S}_2)
\end{align*}
denote the isomorphisms induced by the triple diagrams above. The composition $S_{0,2}=S_{1,2}\circ S_{0,1}$ sends the class $[\bold{x}^S _0]$ to $[\bold{x}^S_2]$.
\end{proposition}
\begin{proof}
The isomorphisms $S_{0,1}$ and $S_{1,2}$ are induced by pseudo-holomorphic triangle counts $s_{0,1}$ and $s_{1,2}$, respectively.
We will show that $s_{0,1}(\bold{x}^S_0) = \bold{x}^S_1$; the proof that $s_{1,2}(\bold{x}^S_1) = \bold{x}^S_2$ is similar.
We argue that the triple diagram $(T^2,\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\alpha}',\bold{w},\bold{z}\cup\bold{z}_{-U})$ is weakly admissible. Let $n$ denote the number of $\boldsymbol{\beta}$ curves.
Any doubly periodic domain of $(T^2,\boldsymbol{\alpha},\boldsymbol{\alpha}',\bold{w},\bold{z}\cup\bold{z}_{-U})$ missing all basepoints is a linear combination of periodic domains $\mathcal{P}_0,\dots,\mathcal{P}_{n-1}$, where
\begin{align*}
\partial \mathcal{P}_r = \alpha _r \cup \alpha_r ' \quad\quad \text{ for } r \ne j_0 +1, j_1+1\\
\partial \mathcal{P}_r = \alpha_{r}\cup\alpha_{r-1}'\cup \alpha_r ' \quad\quad \text{ for } r = j_0+1,j_1+1.
\end{align*}
Each of these has positive and negative coefficients, and it is easy to see that any linear combination also has this property. This establishes weak admissibility of $(T^2,\boldsymbol{\alpha},\boldsymbol{\alpha}',\bold{w},\bold{z}\cup\bold{z}_{-U})$. Any triply periodic domain $\mathcal{P}$ of $(T^2,\boldsymbol{\beta},\boldsymbol{\alpha},\boldsymbol{\alpha}',\bold{w},\bold{z}\cup\bold{z}_{-U})$ missing all basepoints will either be a doubly periodic domain of $(T^2,\boldsymbol{\alpha},\boldsymbol{\alpha}',\bold{w},\bold{z}\cup\bold{z}_{-U})$, in which case it has both positive and negative coefficients, or it will have some $\boldsymbol{\beta}$ curve in its boundary. This $\boldsymbol{\beta}$ curve must intersect a curve in $\boldsymbol{\alpha}\cup\boldsymbol{\alpha}'$, and near this intersection point multiplicities of both signs will appear.
Let $\boldsymbol{\theta}$ denote the top graded generator of $CFK^- (T^2,\boldsymbol{\alpha},\boldsymbol{\alpha}',\bold{w},\bold{z}\cup\bold{z}_{-U})$. We argue that the Whitney triangle $u_0 \in \pi_2 (\bold{x}_0^S,\boldsymbol{\theta},\bold{x}_1^S)$ whose domain $D(u_0)$ is shaded in Figure \ref{fig:finaltriple} is the unique triangle contributing to $s_{0,1}(\bold{x}^S_0)$. Let $u_0\ne u\in \pi_2(\bold{x}_0^S,\boldsymbol{\theta},\bold{y})$ be a Whitney triangle for some $\bold{y}\in\mathbb{T}_{\boldsymbol{\beta}}\cap\mathbb{T}_{\boldsymbol{\alpha}'}$. The domain $D(u)-D(u_0)$ has boundary consisting of arcs along the $\boldsymbol{\beta}$ and $\boldsymbol{\alpha}'$ curves and some total number of $\boldsymbol{\alpha}$ curves. It follows that for some doubly periodic domain $\mathcal{P}'$ of $(T^2,\boldsymbol{\alpha},\boldsymbol{\alpha}',\bold{w},\bold{z}\cup\bold{z}_{-U})$ the domain
\[
D=D(u)-D(u_0)-\mathcal{P}'
\]
has boundary consisting only of arcs along the $\boldsymbol{\beta}$ and $\boldsymbol{\alpha}'$ curves, so that $D\in\pi_2 (\bold{x}^S_1,\bold{y})$ is a Whitney disk. Any such disk can easily be seen to have some negative multiplicities. For any doubly periodic domain $\mathcal{P}$ of $(T^2,\boldsymbol{\alpha},\boldsymbol{\alpha}',\bold{w},\bold{z}\cup\bold{z}_{-U})$ the domain
$D(u_0)+\mathcal{P}$ does not fully cover any region of $T^2\smallsetminus \{\boldsymbol{\beta}\cup\boldsymbol{\alpha}'\}$; in particular, $D(u_0)+\mathcal{P}'$ does not cover the region in which $D$ has a negative multiplicity.
It follows that $D(u)$ must have a negative multiplicity in the same region, and that $u$ cannot admit a holomorphic representative. In summary, $u_0$ is the unique Whitney triangle having corners at $\bold{x}^S_0$ and $\boldsymbol{\theta}$ which admits a holomorphic representative. We conclude that $s_{0,1}(\bold{x}^S_0) = \bold{x}^S_1$.
\end{proof}
We see that the diagram $\mathcal{S}_2$ has small configurations about both of the $\bold{z}_{-U}$ basepoints. Let $\mathcal{S}$ be the diagram obtained by performing both free index 0/3 destabilizations.
The compositions of projection and inclusion maps
\begin{align*}
j^2 &: CFK^{-,2} (\mathcal{S}_2)\to CFK^- (\mathcal{S})\\
i^2 &: CFK^- (\mathcal{S}) \to CFK^{-,2} (\mathcal{S}_2)
\end{align*}
defined in subsection \ref{subsec:HFK}, send the generators $\bold{x}^S_2$ to $\bold{x}^+$ and $\bold{x}^+$ to $\bold{x}_2 ^S$, respectively.
\subsection{GRID=BRAID}
\label{subsec:equiv}
In this subsection we prove Theorem \ref{thm:equivalence}.
\begin{proof}
We inherit the notations of Propositions \ref{prop:relate}, \ref{prop:name2}, \ref{prop:propS} and Theorem \ref{thm:diagram}.
If we include the $\bold{w}_{-U}$ basepoints in the diagrams $\mathcal{D}_0$ and $\mathcal{S}_0$, both are diagrams for the link $K\cup -U \subset -L(p,q)$. It follows that $\mathcal{D}_0$ may be obtained from $\mathcal{S}_0$ by a sequence of isotopies and handleslides avoiding all basepoints, together with index 1/2 (de)stabilizations and linked index 0/3 (de)stabilizations not involving the basepoints $\bold{w}_{-U}\cup \bold{z}_{-U}$. Associated to this sequence of moves is a chain map which induces an isomorphism on homology
\[
F: HFK^{-,2}(\mathcal{S}_0)\to HFK^{-,2}(\mathcal{D}_0).
\]
Since the chain map respects the filtrations on both complexes induced by $-U$, it follows from the proof of Lemma \ref{lem:needaname} and Lemma \ref{lem:thetagen} that $F([\bold{x}^S_0])=[\bold{x}^D_0]$, since both of these generate $H_{top}(\mathcal{F}^{-U}_{bot}, \mathfrak{s}_{\xi})$.
By Propositions \ref{prop:relate}, \ref{prop:name2}, \ref{prop:propS} and Theorem \ref{thm:diagram}, the composition
\[
HFK^- (\mathcal{S})\xrightarrow[]{S_{0,2}^{-1}\circ(i^2)_*} HFK^{-,2}(\mathcal{S}_0)\xrightarrow[]{F } HFK^{-,2}(\mathcal{D}_0)\xrightarrow[]{(j)_* \circ D_{0,1} } HFK^{-,1} (\mathcal{G}_0)\xrightarrow[]{(j)_*\circ G_{0,2} } HFK^- (\mathcal{T}^G)
\]
is a graded isomorphism of $\mathbb{F}[U_1,\dots,U_m]$-modules mapping the class $\theta(K) = [\bold{x}^S]$ to $[\bold{x}^G] = t(K)$.
\end{proof}
\section{Proof of Theorem \ref{thm:gridmirror}}
\label{sec:gridmirror}
As mentioned in the introduction, the forward implication is immediate. Here we prove the reverse implication.
Suppose that $K\subset L(p,q)$ admits a surgery to the 3-sphere. Let $K'\subset L(p,q)$ denote the simple knot in the same homology class as $K$, i.e. $[K']=[K]\in H_1(L(p,q))$.
By the proof of Theorem 1.3 of \cite{realization}, there is an (Alexander, $Spin^C$)-graded isomorphism
\[
\widehat{HFK}(-L(p,q),K)\simeq \widehat{HFK}(-L(p,q),K').
\]
Let $\mathcal{G}$ be a grid diagram for $K$, and let $\mathcal{G}_*$ denote the dual diagram as constructed in Subsection \ref{subsec:dual}. This pair of diagrams gives rise to a pair of Legendrians $L_0$ and $L_1$, where $L_1$ is topologically the mirror of $L_0$.
Likewise, let $S_0$ and $S_1$ be the Legendrians induced by the index one diagram $\mathcal{H}$ for $K'$ and its dual.
We wish to show that if none of the four invariants $\widehat{\lambda}^{+}(L_0),\widehat{\lambda}^{-}(L_0), \widehat{\lambda}^{+}(L_1),\widehat{\lambda}^{-}(L_1)$ vanishes, then the diagram $\mathcal{G}$ has index one.
Clearly, $\widehat{\lambda}^{+}(S_0),\widehat{\lambda}^{-}(S_0)\ne 0$, because the complex used to define the pair of invariants has trivial differential.
Suppose that $\widehat{\lambda}^{+}(L_0),\widehat{\lambda}^{-}(L_0)\ne 0$.
By Proposition \ref{prop:spincagrees} and the fact that $K'$ is Floer-simple,
the $Spin^C$-graded isomorphism of $\widehat{HFK}$ groups maps $\widehat{\lambda}^{+}(L_0)$ to $\widehat{\lambda}^{+}(S_0)$, as these generate
the summand \[\widehat{HFK}(-L(p,q),K,\mathfrak{s}_{\xi_{UT}})\simeq \widehat{HFK}(-L(p,q),K',\mathfrak{s}_{\xi_{UT}})\simeq \mathbb{F}.\] Likewise, $\widehat{\lambda}^{-}(L_0)$ is mapped to $\widehat{\lambda}^{-}(S_0)$, as both generate the summand with grading $\mathfrak{s}_{\overline{\xi_{UT}}}$.
In particular, the Alexander gradings of the invariants must agree:
\begin{align*}
\frac{1}{2}\Big{(}tb_\mathbb{Q}(L_0) - rot_\mathbb{Q}(L_0) +1 \Big{)} = A(\widehat{\lambda}^{+}(L_0)) = A(\widehat{\lambda}^{+}(S_0)) = \frac{1}{2}\Big{(}tb_\mathbb{Q}(S_0) - rot_\mathbb{Q}(S_0) +1 \Big{)}\\
\frac{1}{2}\Big{(}tb_\mathbb{Q}(L_0) + rot_\mathbb{Q}(L_0) +1 \Big{)} = A(\widehat{\lambda}^{-}(L_0)) = A(\widehat{\lambda}^{-}(S_0)) = \frac{1}{2}\Big{(}tb_\mathbb{Q}(S_0) + rot_\mathbb{Q}(S_0) +1 \Big{)}.
\end{align*}
Adding the two equations, we see that $tb_\mathbb{Q}(L_0) = tb_\mathbb{Q}(S_0)$.
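Explicitly, the sum of the two equations reads
\[
tb_\mathbb{Q}(L_0)+1 = A(\widehat{\lambda}^{+}(L_0))+A(\widehat{\lambda}^{-}(L_0)) = A(\widehat{\lambda}^{+}(S_0))+A(\widehat{\lambda}^{-}(S_0)) = tb_\mathbb{Q}(S_0)+1\, .
\]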
Assuming that $\widehat{\lambda}^{+}(L_1),\widehat{\lambda}^{-}(L_1)\ne 0$ and applying the above argument, one concludes that $tb_\mathbb{Q}(L_1) = tb_\mathbb{Q}(S_1)$. Let $g$ denote the index of $\mathcal{G}$. Applying Proposition \ref{prop:index} twice gives the desired result:
\[
-g = tb_{\mathbb{Q}}(L_0)+tb_{\mathbb{Q}}(L_1) = tb_{\mathbb{Q}}(S_0)+tb_{\mathbb{Q}}(S_1) = -1.
\]
\nocite{IKess,IKquasi,automatic,VVbind,StipV,LOSS, equiv, Pla, QOB, torsionob, comultcontact, comultgrid, sigmapositive, geomorder, lensgridleg, lensgridcomb, HKM, contactsurgery,CCSMCM, contactclass, contact1, transverseapprox, openbooks, legtransverse, trivialbraid, coversmaslov, gridhom, MOST, dinvt, braiddynamics, absgrading, coversalex, turaev, relspinc, ras, disksgenus, holknots,genusdetection, linkgenus}
\newpage
\thispagestyle{empty}
{\small
\markboth{References}{References}
\bibliographystyle{myalpha}
The theory of cosmological perturbations (TCP) for a perfect fluid has
always been an important issue in cosmology. It enables us to
understand how small fluctuations seeded in the early universe eventually evolved
into the present large scale structure. Also, TCP has been extremely
useful to put constraints on various cosmological models.
TCP for a perfect fluid has been developed and studied at the level of
the basic equations of motion, i.e., the Einstein equations of general
relativity (GR) and the energy-momentum conservation law \cite{Kodama:1985bj,Mukhanov:1990me}.
Yet, TCP for a perfect fluid can be also studied at the level of the action.
Although these two approaches are classically equivalent,
the latter gives the following advantage.
In TCP, one first has to perturb all the fields
appearing in the equations of motion or in the action, such as the metric
components and the energy density.
However, as is well known, not all the perturbation fields are dynamical.
Actually, GR with a perfect fluid has only one dynamical field for
the scalar-type perturbation.
But an identification of this field as well as a derivation of its closed evolution equation by means
of the equations of motion alone are not straightforward.
The situation becomes worse when going to extended gravity models.
For illustration, $f(R,G)$ theories ($R$ being the Ricci scalar, and $G$ the Gauss-Bonnet term)
with a perfect fluid involve two dynamical fields for the scalar-type perturbation.
In such theories, the usual approach based on the equations of motion requires a rather strong intuition
because the closed evolution equations for those dynamical fields have to be extracted from rather
complicated coupled differential equations.
On the other hand, the action approach advocated in this paper allows a straightforward identification of the
auxiliary fields just by checking the absence of any kinetic terms in the second order action.
Once the auxiliary fields are found, they can easily be eliminated through their trivial equations of
motion. What is then left is an action containing only the dynamical fields, from which we can derive the closed evolution equations.
In \cite{DeFelice:2009ak,DeFelice:2009wp},
we have explicitly checked that the action approach indeed works for $f(R,G)$
gravity models with no matter and with a scalar field, respectively.
In this paper, we want to describe first-order TCP for GR with a perfect fluid
at the level of the action,
in a way consistent with the principles of thermodynamics.
To this end, we use the action for a perfect fluid proposed by Schutz \cite{Schutz}
and do the quantization of the perturbations,
which might also be of some interest beyond a pure academic point of view.
Indeed, the quantization of the background universe with a perfect fluid has been
discussed by many authors \cite{Pedram:2007ud,Pedram:2007er,Alvarenga:2001nm,Monerat:2005mx,Lemos:1995qu,Ivashchuk:1995uy,Peter:2006id,Brown:1989vb}.
But here, we prove that quantizing the perturbation fields leads to non-standard
commutation relations and, consequently, to unexpected effects upon the physical
properties of any perfect fluid in quantum cosmology.
A TCP action approach for fluids was first introduced in \cite{Garriga}
in the context of k-inflation. This approach was taken also in \cite{Boubekeur:2008kn}
to study non-linear cosmological perturbations in the matter dominated universe.
However, the fluid discussed there is the so-called scalar fluid whose energy-momentum
tensor is completely written in terms of a scalar field and its derivative.
By construction, the scalar fluid cannot have vector-type perturbations.
Although there is an exact correspondence between a perfect fluid and the scalar fluid
for the scalar-type perturbation at the linear order,
it is no longer true for higher order perturbations because of the mixture of
scalar and vector-type perturbations.
On the other hand, the Schutz action we will use here is genuinely that of a
perfect fluid. Therefore the action exactly describes the dynamics of a perfect fluid
at any order.
As far as we know, this is the first time TCP is fully developed within the Schutz action.
We believe our approach is suited for studying cosmology in extended gravity models.
Before introducing the action for a perfect fluid, let us briefly
review the thermodynamics needed to describe it. In this paper, we
consider a ``single'' fluid, that is, a fluid whose thermodynamical
quantities are completely determined by only two variables, e.g.\ the
chemical potential $\mu$ and the entropy per particle $s$
\cite{Misner}. In this sense one first needs to give two equations of
state, $n=n(\mu,s)$ and $T=T(\mu,s)$, where $n$ is the number density
and $T$ is the temperature of the fluid. Using then the first law of
thermodynamics, $\mathrm{d} p=n\mathrm{d}\mu-nT\mathrm{d} s$, one obtains the pressure as
$p=p(\mu,s)$. Finally, the energy density is given by $\rho\equiv\mu
n-p$. This is enough to describe the system thermodynamically.
Single fluids also satisfy particle number conservation, namely $N=n
V$ is a constant. The second law of thermodynamics imposes $\mathrm{d}(N
s)=N\mathrm{d} s\geq0$ such that $\mathrm{d} s=0$ at equilibrium.
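For later use, note that combining the first law with the definition $\rho\equiv\mu n-p$ gives the differential of the energy density,
\begin{equation}
\mathrm{d}\rho=\mu\,\mathrm{d} n+n\,\mathrm{d}\mu-\mathrm{d} p=\mu\,\mathrm{d} n+nT\,\mathrm{d} s\, .
\end{equation}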
A single perfect fluid is also defined through its stress-energy
tensor $T_{\mu\nu}=(\rho+p)u_\mu u_\nu+pg_{\mu\nu}$. In a
Friedmann-Lema\^\i tre-Robertson-Walker (FLRW) background the
conservation of energy-momentum, $T^{\mu\nu}{}_{;\nu}=0$, implies that
$\dot\rho+3H(\rho+p)=0$ or, equivalently, $\mathrm{d} (\rho V)+p\,\mathrm{d} V=0$
since $V\propto a^3$ with $a$, the cosmological scale factor, and
$H\equiv\dot a/a$, the Hubble parameter. This, in turn, implies that
$\mathrm{d} N=0$ and $\mathrm{d} s=0$. In any FLRW universe we thus have
$na^3=N=\mathrm{constant}$ and $\dot s=0$.
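One can check this explicitly: using $\mathrm{d}\rho=\mu\,\mathrm{d} n+nT\,\mathrm{d} s$, which follows from the first law and $\rho\equiv\mu n-p$, the conservation of energy reads
\begin{equation}
0=\mathrm{d} (\rho V)+p\,\mathrm{d} V=V\,\mathrm{d}\rho+(\rho+p)\,\mathrm{d} V=\mu\,\mathrm{d}(nV)+nVT\,\mathrm{d} s=\mu\,\mathrm{d} N+NT\,\mathrm{d} s\, ,
\end{equation}
so that, given particle number conservation $\mathrm{d} N=0$, the entropy per particle is conserved, $\mathrm{d} s=0$.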
\section{Action}
The action considered here has been introduced by Schutz \cite{Schutz} and is
defined as follows
\begin{equation}
\label{eq:act1}
S=\int d^4x\sqrt{-g}\left[\frac R{16\pi G}+p(\mu,s)\right] .
\end{equation}
Alternative functionals have been proposed, all being physically
equivalent as shown in \cite{sorkino}. We chose the version
(\ref{eq:act1}) as it was the most convenient for our purpose. The
four-velocity of the perfect fluid is defined via potentials \cite{Schutz}:
\begin{equation}
\label{eq:vel1}
u_{\nu}=\frac1\mu\, ( \partial_{\nu}\ell+\theta\partial_{\nu} s+A\partial_{\nu} B)\, ,
\end{equation}
where $\ell$, $\theta$, $A$ and $B$ are all scalar fields. The
normalization for the four-velocity, $u^\nu u_\nu=-1$, gives $\mu$ in
terms of the other fields. The fundamental fields over which the
action (\ref{eq:act1}) will be varied are $g_{\mu\nu}$, $\ell$,
$\theta$, $s$, $A$, and $B$.
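The normalization condition can be solved explicitly for the chemical potential:
\begin{equation}
\mu^2=-g^{\alpha\beta}\left(\partial_{\alpha}\ell+\theta\partial_{\alpha} s+A\partial_{\alpha} B\right)\left(\partial_{\beta}\ell+\theta\partial_{\beta} s+A\partial_{\beta} B\right).
\end{equation}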
Having chosen the Lagrangian for gravity to be the one of GR, we
recover $G_{\mu\nu}=8\pi G\, T_{\mu\nu}$ by varying with respect to
the metric field. Besides the conservation of particle number and
entropy already discussed, the other equations of motion derived from
Eq.\ (\ref{eq:act1}) are \cite{Schutz}:
\begin{equation}
\label{eq:therm}
u^\alpha\partial_\alpha \theta=T,\quad u^\alpha \partial_\alpha A=0,\quad u^\alpha\partial_\alpha B=0.
\end{equation}
In a FLRW universe, $u_i=0$ and $u_0=-1$ such that the solutions to
Eq.~(\ref{eq:therm}) are simply
\begin{equation}
\label{eq:backA}
A=A(\vec x)\, ,\quad B=B(\vec x)\, ,\quad\theta=\int^t T(t')\mathrm{d} t'+\tilde\theta(\vec x)\, .
\end{equation}
There is a complete freedom for the functions $A$, $B$, and
$\tilde\theta$\ \footnote{ Since $u_\nu=(-1,\vec 0)$, we also have
that $\ell=-\int^t \mu(t')\mathrm{d} t'+\tilde\ell$, and
$\vec\nabla\tilde\ell=-A\vec\nabla B$, which implies that
$\vec\nabla A\times\vec\nabla B=0$.}, any choice leading to the same
physical background. We will take advantage of this freedom to
simplify our study of the scalar and vector perturbations.
\section{Perturbations}
Once and for all, we work within a spatially flat FLRW universe. At
first order in perturbation theory we have $\delta u_0=\frac12\delta
g_{00}$ and%
\begin{equation}
\label{eq:Perto}
\delta u_i = \partial_i \left( \frac{\delta \ell + \theta \delta s + A \delta B}\mu \right)+
\frac{W_i}\mu\, ,
\end{equation}
with
\begin{equation}
W_i\equiv B_{,i} \delta A -A_{,i} \delta B - \tilde{\theta}_{,i}\delta s\equiv
\partial_i w_s+\bar u_i\, .
\label{eq:wi}
\end{equation}
Note that $W_i$ is gauge invariant since, following ref.\ \cite{Weinberg},
the perturbation fields transform respectively as
\begin{alignat}{2}
\delta\ell&\to\delta\ell+\mu\xi^0+A\,\partial_i B\,\xi^i,&&\\
\delta s&\to\delta s,&\qquad\delta \theta&\to\delta \theta-T\,\xi^0-\partial_i\tilde\theta\,\xi^i,\\
\delta A&\to\delta A-\partial_iA\,\xi^i,&\qquad\delta B&\to\delta B-\partial_iB\,\xi^i,
\label{eq:deltB}
\end{alignat}
under the gauge transformation $x^\alpha\to x^\alpha+\xi^\alpha$. In Eq.\ (\ref{eq:wi}) we have
decomposed $W_i$ into scalar ($w_s$) and divergence-less vector modes
($\bar u_i$). So, in general $W_i$ will generate both scalar and
vector perturbations. However, we can efficiently use the freedom of
choosing the time-independent background quantities $A$, $B$ and
$\tilde\theta$ given in Eq.\ (\ref{eq:backA}) to disentangle them. Any
such choice does not fix a gauge as no conditions are imposed on the
perturbation fields themselves.
\subsection{Scalar type perturbations}
Let us simply consider the choice
\begin{equation}
\label{eq:choi}
A=B=\tilde\theta=0\, ,
\end{equation}
to remove the vector perturbations arising from $W_i$. Regarding the
metric, $\delta g_{00}$ and $\delta g_{0i}$ are auxiliary fields such
that the only scalar component which will be dynamical is the
curvature perturbation $\phi$ defined by $\delta g_{ij}=2a^2\phi\,\delta_{ij}$,
with $\phi \to \phi-H \xi^0$ under a gauge transformation.
We introduce the new quantity $v=\delta\ell+\theta(t)\delta s$ such that $\delta u_i=\partial_i \left( v/\mu \right)$.
Therefore, $v$ represents the velocity perturbation of a perfect fluid.
We then define two gauge invariant fields, $\Phi=\phi+Hv/\mu$ and
$\delta\bar\theta=\delta\theta+Tv/\mu$, to expand the action
(\ref{eq:act1}) at second order, in a gauge-independent way:
\begin{align}
S_S&=\int \mathrm{d} t\mathrm{d}^3\vec x\left\{ \frac{a^3Q_S}2\left[\dot\Phi^2
-\frac {c_s^2}{a^2}(\vec\nabla\Phi)^2\right]
+C\delta s\dot\Phi-\frac D2 \delta s^2\right.\notag\\
&\qquad\left.{}
-E(\delta\bar\theta\dot{\delta s}
-\delta s\dot{\delta\bar\theta}
+\delta A\dot{\delta B}
-\delta B\dot{\delta A})\right\}.
\label{eq:act2}
\end{align}
The perturbation fields $\Phi$ and $\delta\bar\theta$ are related to the curvature
and temperature, respectively.
In the comoving gauge $v=0$, in which the perfect fluid remains static, $\Phi=\phi$ and ${\delta\bar\theta}=\delta \theta$.
The coefficients for the kinetic terms are given by\
\footnote{For $c_s^2$ we used the fact that $\dot p= \left(\partial
p/\partial\rho\right)_s\dot\rho+\left(\partial p/\partial
s\right)_\rho\dot s$.}
\begin{alignat}{2}
Q_S&=\frac{\rho+p}{c_s^2 H^2}\, ,&\qquad
c_s^2&\equiv\frac{\dot p}{\dot \rho}=
\left(\frac{\partial p}{\partial\rho}\right)_s ,
\label{eq:Qs}
\end{alignat}
whereas the remaining coefficients are
\begin{alignat}{4}
C&=\frac{na^3}{H}\left[\mu\left(\frac{\partial T}{\partial\mu}\right)_{\!s}-T\right],
\quad&
E&=\frac{na^3}{2}\, ,
\end{alignat}
and
\begin{align}
D&=na^3\left[T\left(\frac{\partial T}{\partial\mu}\right)_{\!s}+\left(\frac{\partial T}{\partial s}\right)_{\!\mu}\right] .
\end{align}
The general solutions for $\delta s$, $\delta A$, and $\delta B$ are simply
their initial values, since Eq.\ (\ref{eq:act2}) forces them to be
time-independent. As a consequence, the non-trivial equations of
motion are
\begin{align}
\label{eq:gPhi}
& \frac{1}{a^3Q_S}\frac{\mathrm{d}}{\mathrm{d} t}(a^3Q_S\dot\Phi)-\frac{c_s^2}{a^2}\nabla^2\Phi=-\frac{\dot C}{a^3Q_S}\,\delta s\,,\\
& n a^3\dot{\delta\bar\theta}-D\delta s+C\dot\Phi=0\, .
\end{align}
These equations exactly coincide with those derived by perturbing the
Einstein equations and the entropy conservation law, as they should. In
general, $\Phi$ is sourced by $\delta s$. For example, if the perfect
fluid is an ideal non-relativistic gas characterized by
$T=\frac25(\mu-m_0)$, $m_0$ being the mass of the particles, then
${\dot C} \neq 0$ and we have to solve two coupled equations to
know the time evolution of $\Phi$ and $\delta{\bar\theta}$.
However, if $T=f(s)\,\mu$, which is equivalent to having a barotropic
equation of state $p=p(\rho)$ \footnote{In this case, we obtain $\left(\partial \mu/\partial
s\right)_\rho= T$ such that $(\partial p/\partial s)_\rho=n[\left(\partial \mu/\partial s\right)_\rho-T]=0$.}, then $C=0$.
(Note that both radiation and dust fulfill this condition, while a cosmological constant has vanishing
$Q_S$ so that no contribution for perturbations arises, as is well known).
In this case, the sign of $Q_S$ cannot be known from the usual approach based on the equations of motion alone.
On the other hand, the action approach advertised here leads to an exact expression for $Q_S$,
which will be used to avoid ghost degrees of freedom when quantizing the perturbations.
We also conclude that the field $\Phi$ completely decouples from $\delta s$ and propagates with a
sound speed $c_s$ if $C=0$ and $c_s^2 >0$.
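As a simple check of Eq.\ (\ref{eq:Qs}), consider radiation, for which $p=\rho/3$ and $c_s^2=1/3$. In a spatially flat universe the Friedmann equation gives $H^2=8\pi G\rho/3$, so that
\begin{equation}
Q_S=\frac{\rho+p}{c_s^2H^2}=\frac{4\rho}{H^2}=\frac{3}{2\pi G}>0\, ,
\end{equation}
and the scalar perturbation is free of ghosts.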
\subsection{Vector type perturbations}
To arrive at the desired action via the shortest path, let us first
assume that all the perturbation variables propagate only in one
direction, say the $z$-direction. This should be allowed, as we know
that perturbations with different wavenumber vectors do not mix in a
FLRW universe. Once we obtain the action for this particular mode, we
can then easily infer the general action.
The vector contributions come only from the component $\bar u_i$ of
$W_i$ defined in Eq.\ (\ref{eq:wi}). It is not easy to extract $\bar
u_i$ from this equation since the functions $A,B$ and $\tilde{\theta}$
depend in general on the spatial coordinates. Yet, taking again advantage of
the freedom to select these background functions, we can make the
simplest choice that contains all the information needed for the
vector modes, namely
\begin{equation}
A=\tilde\theta=0\,,\quad
B_{,i}=b_i\, ,
\end{equation}
where ${\vec b}=(b,0,0)$ is a constant vector orthogonal to the
$z$-direction. With this assumption, we have $w_s=0$ and $\bar u_i=b_i
\,\delta A(t,z)$ for $W_i$.
Regarding the vector perturbation of the metric, we follow again ref.\ \cite{Weinberg}
and denote $\delta g_{0i}=aG_i$, and $\delta g_{ij}=a^2(C_{i,j}+C_{j,i})$, with
transverse conditions $G_{i,i}=C_{i,i}=0$, or, in our setup,
$G_z=C_z=0$. Then, we impose the gauge condition $\delta
B=0$. However, this condition alone does not completely fix the gauge,
as only the component of $\xi^i$ parallel to $\vec b$ gets frozen by
Eq.\ (\ref{eq:deltB}). Therefore we can still choose $\xi^y$ such that
$C_y=0$, and $\vec C=\vec C^\parallel$ is parallel to $\vec
b$. Finally, we find that the action for the vector perturbations is
given by
\begin{align}
\label{eq:VC1}
S_V&=\int\mathrm{d}^4 x \left\{ \frac{a}{32\pi G} \bigl[ {\left( \partial_z V_x \right)}^2+{\left( \partial_z V_y \right)}^2 \bigr]+na^3 b\delta A {\dot C_x} \right.\notag \\
&\qquad\left.{} + na^2 b V_x \delta A+2\pi Gb^2n^2a\delta A^2/\dot H \right\},
\end{align}
where $V_i \equiv G_i-a {\dot C_i}$ is a gauge invariant field. This
action can be immediately extended to the general case where the
perturbation variables depend now on $(x,y,z)$. In the gauge $\delta
B=0$, the result is given by
\begin{align}
\label{eq:VC2}
S_V&=\int\mathrm{d}^4 x \left[ \frac{a}{32\pi G} \left( \partial_j V_i \right)\left( \partial_j V_i \right)+a^3(\rho+p) {\dot C_i} \delta u_i \right.\notag \\
&\qquad\left.{} +a^2 (\rho+p) V_i \delta u_i-\tfrac{1}{2}\,a\,(\rho+p)\delta u_i \delta u_i \right],
\end{align}
where we substituted $\delta u_i$ for $b_i\delta A/\mu$. Variations
with respect to $V_i$ and $C_i$ yield the following equations,
\begin{align}
\label{eq:vectN}
\triangle V_i&=16\pi G a(\rho+p) \delta u_i, \\
\frac{\mathrm{d}}{\mathrm{d} t}[(\rho&+p)a^3\delta u_i]=0,
\label{eq:vectT}
\end{align}
respectively. Again, these equations exactly coincide with those
derived by perturbing the Einstein equations and the energy-momentum
conservation law \cite{Weinberg}. This provides thus a cross-check that the calculations presented here
are correct. In fact, the main novelty in our approach is to be found when we quantize the system.
To summarize section III, the known results on first-order TCP for a
perfect fluid can be directly derived from variations of the classical
action given in Eq.\ (\ref{eq:act1}). Note that a similar action
approach has already been followed in \cite{Garriga,Boubekeur:2008kn}.
Yet, the system studied there cannot represent a perfect fluid.
Indeed, as already mentioned in the introduction, the action proposed in \cite{Garriga,Boubekeur:2008kn}
is made of a real scalar field. By construction, such a system cannot have vector perturbations,
as the only new perturbed field is the scalar one, whereas a generic perfect fluid does support them.
It can be thought of as a scalar fluid, but not as a perfect fluid.
It is simply a different physical system whose squared sound speed $c_s^2$ is not equal to $\dot p/\dot\rho$.
\section{Quantization}
The most important advantage of the action approach proposed in this
paper is, of course, that it allows us to quantize the system. Although the
inhomogeneities of the present universe, such as the galaxy
distribution, are clearly described by the classical theory, the
quantization of a perfect fluid may have something to do with the
early universe if the seeds for structure formation are provided by
quantum fluctuations of fields generated during inflation. Yet,
besides its practical utility, our action approach also opens new
theoretical prospects, as discussed below. In the following, we will
again treat the quantization for the scalar and vector type
perturbations separately.
\subsection{Scalar type perturbations}
To quantize the scalar perturbations, let us first introduce the
canonical field $\psi\equiv\sqrt{a^3 Q_S} \Phi$. To avoid the
appearance of a ghost, we assume that $Q_S$ is positive. According to
Eq.\ (\ref{eq:Qs}), this means that $(\rho+p)/c_s^2 >0$. Such a
constraint, together with the stability of the perturbations,
$c^2_s>0$, leads to the null energy condition $\rho+p>0$. Using the
new variable $\psi$, the action (\ref{eq:act2}) is rewritten as
\begin{align}
\label{eq:act2.5}
S_S=\int \mathrm{d}^4x&\left[ \frac{\dot\psi^2}2
-\frac {c_s^2}{2a^2}(\vec\nabla\psi)^2
+C_1 \delta s\dot\psi+C_2 \delta s \psi\right.\notag\\
&-\left.\frac{N}2(\delta\bar\theta\dot{\delta s}
-\delta s\dot{\delta\bar\theta})-\frac D2 \delta s^2\right] ,
\end{align}
where we have neglected $\delta A$ and $\delta B$ as they do not contribute
to the Hamiltonian. The field $\psi$ has a canonical kinetic term,
whereas the quadratic terms for $\delta s$ and $\delta \bar\theta$ are
at most linear in their time derivatives. Yet, it is known
\cite{Jackiw} that a consistent quantization of such a singular
Lagrangian can be carried out provided one introduces the following
equal-time commutation relations,
\begin{align}
\bigl[\hat{\psi}(t,\vec x),\hat{\pi}(t,\vec y)\bigr]&=i\delta(\vec x-\vec y)\, ,\label{comm1}\\
\bigl[\hat{\delta s}(t,\vec x),\hat{\delta\bar\theta}(t,\vec y)\bigr]&=-\frac{i}{N}\delta(\vec x-\vec y)\, . \label{comm2}
\end{align}
All the other commutators are zero and $\pi$ is the canonical
conjugate momentum of $\psi$. The corresponding Hamiltonian is given
by
\begin{align}
\label{eq:hamiltonian}
{\hat H}=\int \mathrm{d}^3\vec x&\left[\frac12\,{\left({\hat \pi}-C_1 \hat{\delta s} \right)}^2+
\frac {c_s^2}{2a^2}(\vec\nabla {\hat \psi})^2\right.\notag\\
&-\left.C_2 \hat{\delta s} {\hat \psi}
+\frac{D}{2} {\hat {\delta s}}^2 \right].
\end{align}
One can easily check that the Heisenberg equations, with the help of
the commutation relations, yield the same equations of motion as the
classical ones derived from the variation of Eq.~(\ref{eq:act2.5}).
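The bracket (\ref{comm2}) can also be read off directly from the first-order (symplectic) part of the action, following the Faddeev-Jackiw treatment of \cite{Jackiw}. The following SymPy sketch (the variable names and sign conventions are ours, not part of the original derivation) inverts the symplectic two-form of the $(\delta s,\delta\bar\theta)$ sector and recovers the bracket $-1/N$:

```python
import sympy as sp

N = sp.symbols('N', positive=True)      # N = n a^3, the number of particles
s, th = sp.symbols('s th')              # stand-ins for delta s, delta theta_bar

# First-order (symplectic) part of the Lagrangian in this sector:
# L1 = -(N/2)*(th*ds/dt - s*dth/dt) = a_s*ds/dt + a_th*dth/dt
a_s  = -N*th/2
a_th =  N*s/2

# Symplectic two-form f_ij = d a_j/d xi_i - d a_i/d xi_j for xi = (s, th)
f01 = sp.diff(a_th, s) - sp.diff(a_s, th)
f = sp.Matrix([[0, f01], [-f01, 0]])

# Faddeev-Jackiw brackets are the entries of the inverse symplectic matrix;
# quantization then gives [delta s, delta theta_bar] = i*(-1/N) = -i/N
brackets = f.inv()
assert sp.simplify(brackets[0, 1] + 1/N) == 0   # {delta s, delta theta_bar} = -1/N
```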
The relation (\ref{comm2}) shows that $\hat{\delta s}$ and
$\hat{\delta\bar\theta}$ become non-commuting variables at the quantum level.
In Quantum Field Theory, different fields (i.e., different particles) can be simultaneously
observed at the same position.
Here, in contrast, the perturbation fields related to the entropy and the temperature,
to which we may individually attribute arbitrary values at the classical level,
cannot be measured at the same space-time point.
That this non-commutativity arises from the action of a perfect fluid is intriguing.
We concede that consequences directly linked to present observations are still missing.
However, at this level it is quite interesting to compare the action (\ref{eq:act2.5}) with
the one of the Landau problem \cite{Jackiw}, an archetype of non-commutative
geometry. Regarding $\hat{\delta s}$ and $\hat{\delta\bar\theta}$, the
action (\ref{eq:act2.5}) is essentially the same as the one for a
charged particle moving on a two-dimensional surface with a constant
magnetic field background in the transverse direction:
\begin{equation}
\label{eq:LPR}
S=\int \mathrm{d} t\left[\frac m2(\dot x^2+\dot y^2)-\frac{\cal B}2(\dot x y-\dot y x)-V(x,y)\right].
\end{equation}
Within this analogy, the perturbation fields $(\delta s, \delta \bar
\theta)$ correspond to the $(x,y)$ space coordinates for the particle,
and the number of particles $N=na^3$ plays the role of the constant
magnetic field ${\cal B}$. Interestingly enough, while the quite heuristic
non-commutative relation $[\hat x,\hat y]=-i/{\cal B}$ in the Landau
problem \cite{Jackiw} holds only in the absence of the kinetic term in
Eq.\ (\ref{eq:LPR}), which is valid in the large magnetic field limit,
the non-commutative relation (\ref{comm2}) of a perfect fluid is exact
for any finite number of particles. So, perfect fluids provide a nice
example of non-commutativity between different fields.
The other commutation relation (\ref{comm1}) also leads to an
interesting physical consequence. By using once more the Einstein
equations and the energy-momentum conservation law, we find that
the pressure perturbation in the comoving gauge ($v=0$) is given by
$\hat{\delta p}=-(\rho+p){\dot {\hat \phi}}/H$. Then, the commutator
between $\phi$ and $\delta p$ becomes
\begin{equation}
\bigl[\hat{\phi}(t,\vec x),\hat{\delta p}(t,\vec y)\bigr]=-i c_s^2 H\delta(\vec x-\vec y)/a^3.
\end{equation}
Consequently, local curvature and pressure perturbations cannot be measured simultaneously.
\subsection{Vector type perturbations}
Time derivatives of $V_i$ and $\delta u_i$ do not appear in the action
(\ref{eq:VC2}). They are therefore auxiliary fields that can be
eliminated through their equations of motion. The action
(\ref{eq:VC2}) becomes then a functional which depends only on $C_i$.
To make this action canonical, we introduce a new variable $F_i ({\vec
k},t)=\sqrt{a^3 Q_V (k,t)} C^{\parallel}_i({\vec k},t)$, where
$C^\parallel_i({\vec k},t)$ is the Fourier transform of $C^\parallel_i
({\vec x},t)$ and $Q_V$ is given by
\begin{equation}
Q_V (k,t)=\frac{a^2 k^2 (\rho+p)}{k^2+16\pi G a^2 (\rho+p)}.
\end{equation}
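As a cross-check of this expression, the elimination of the auxiliary fields can be done symbolically. The SymPy sketch below (with $\sigma\equiv\rho+p$, one Fourier polarization, and variable names of our own choosing) solves the stationarity conditions for $V$ and $\delta u$ and confirms that the reduced Lagrangian is $\tfrac12 a^3 Q_V \dot C^2$ with the $Q_V$ quoted above:

```python
import sympy as sp

a, G, k, sig, Cd = sp.symbols('a G k sigma Cdot', positive=True)
V, du = sp.symbols('V du')

# One Fourier polarization of the vector action, with sig = rho + p and
# Cd = dC/dt; the gradient term (partial_j V_i)^2 becomes k^2 V^2
L = a*k**2*V**2/(32*sp.pi*G) + a**3*sig*Cd*du + a**2*sig*V*du - a*sig*du**2/2

# V and du enter without time derivatives: eliminate them through their
# stationarity (constraint) equations
sol = sp.solve([sp.diff(L, V), sp.diff(L, du)], [V, du], dict=True)[0]
Leff = sp.simplify(L.subs(sol))

QV = a**2*k**2*sig/(k**2 + 16*sp.pi*G*a**2*sig)
assert sp.simplify(Leff - a**3*QV*Cd**2/2) == 0   # L_eff = (1/2) a^3 Q_V Cdot^2
```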
To avoid the appearance of ghosts, $Q_V$ must be positive. As for
the scalar modes, this requires $\rho+p>0$, i.e.\ the null energy
condition. In terms of $F_i$, the canonical action in Fourier
space is given by
\begin{equation}
S_V= \int \mathrm{d} t \mathrm{d}^3k \, \left( \tfrac{1}{2} \dot{F_i^\ast} \dot{F_i} -\tfrac{1}{2} m_k^2 F_i^\ast F_i \right),
\end{equation}
with
\begin{equation}
m_k^2=-\frac12\frac{\mathrm{d}^2}{\mathrm{d} t^2} \log (a^3 Q_V)-\frac{1}{4} {\left( \frac{\mathrm{d}}{\mathrm{d} t} \log a^3 Q_V \right)}^2.
\end{equation}
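This effective mass follows from the standard canonical rescaling: with $z\equiv a^3 Q_V$ and $F=\sqrt{z}\,C$, the Lagrangians $\tfrac12 z\dot C^2$ and $\tfrac12\dot F^2-\tfrac12 m_k^2 F^2$ differ only by a total time derivative. A short SymPy verification (an illustrative sketch in our own notation):

```python
import sympy as sp

t = sp.symbols('t')
z = sp.Function('z')(t)      # z(t) = a^3 Q_V
F = sp.Function('F')(t)      # canonical variable F = sqrt(z) C

C = F/sp.sqrt(z)
L_orig = z*sp.diff(C, t)**2/2                     # (1/2) a^3 Q_V Cdot^2

m2 = -sp.diff(sp.log(z), t, 2)/2 - sp.diff(sp.log(z), t)**2/4
L_canon = sp.diff(F, t)**2/2 - m2*F**2/2          # canonical form with m_k^2

# The two Lagrangians differ only by the boundary term d/dt[-(zdot/4z) F^2]
boundary = -sp.diff(z, t)/(4*z)*F**2
assert sp.simplify(L_orig - L_canon - sp.diff(boundary, t)) == 0
```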
Now the quantization is done by imposing the following canonical
condition for $F_i$ and its conjugate momentum
\begin{equation}
\bigl[\hat{F_i}(t,\vec k),{\hat{\pi_j}}^\dagger (t,\vec k')\bigr]=i \delta(\vec k-\vec k') \left( \delta_{ij}-\frac{k_i k_j}{k^2} \right). \label{comvec}
\end{equation}
The corresponding Hamiltonian is given by
\begin{equation}
{\hat H}= \int \mathrm{d}^3k \left( \tfrac{1}{2} {\hat \pi_i}^\dagger {\hat \pi_i} +\tfrac{1}{2} m_k^2 {\hat F_i}^\dagger \hat F_i \right),
\end{equation}
and the evolution of the operators is given by the Heisenberg equation
with the help of the commutation relation (\ref{comvec}). The quantum version of Eq.\ (\ref{eq:vectN}) implies $\bigl[\hat{V_i}(t,\vec x),\hat{\delta u_j}(t,\vec y)\bigr]=0$.
Therefore, the gauge invariant metric perturbation and the vorticity of the perfect fluid
can be measured at the same time, at the same position.
As for the tensor perturbations, they come only from the metric
perturbation. The action for the tensor perturbations and its quantum
aspects have been widely studied in the literature (e.g.\ \cite{Weinberg}), mainly in
connection with their quantum generation during inflation, so we do not
discuss them further.
\section{Conclusions}
We have studied the theory of cosmological perturbations for a perfect fluid in GR
at the action level.
Starting from the action proposed by Schutz, we first reproduced the known results derived from
the equations of motion alone.
This enabled us to illustrate the advantage of our action approach at the classical level.
Quantizing then the perturbation fields, we found that some of
them do not commute, thus leading to a non-commutative field
geometry. In particular, we pointed out that a simultaneous measurement of local curvature
perturbations and pressure inhomogeneities is not allowed at the quantum level.
Finally, we proved that both the null energy condition and a positive squared sound speed
have to hold at all times in order to avoid ghost degrees of freedom.
Another advantage of our action approach is that one can easily obtain the second order action
depending only on the dynamical fields.
Such an approach is thus suited to study cosmology in extended gravity models with more than one
dynamical field.
In particular, we expect that the approach presented here will be quite useful for the perturbation analysis
of $f(R,G)$ gravity models \cite{ANTONIO}, or for the treatment of non-gaussianities for the entropy and vector perturbations on perfect fluids following \cite{Boubekeur:2008kn}.
\begin{acknowledgments}
We thank Sean Murray for helpful discussions. This work is supported
by the Belgian Federal Office for Scientific, Technical and Cultural
Affairs through the Interuniversity Attraction Pole P6/11.
\end{acknowledgments}
\section{Introduction}
\label{introduction}
Reliability measures the probability that a system will continue to provide its desired service over a given period of time. Fault trees (FTs) \cite{DFT-survey} and reliability block diagrams (RBDs) \cite{hasan2015reliability} are the most commonly used reliability modeling techniques. FTs graphically model the sources of failure of a system or subsystem using FT gates. An RBD, on the other hand, is a graphical representation of the reliability of a system. The components of a system are modeled as blocks and are connected using connectors (lines) to create one or more paths from the RBD input to its output. These paths represent the blocks (system components) that are required to work for the system to operate successfully. The modeled system fails when components fail in such a manner that all the paths between the input and the output are disconnected. RBDs can be connected in a \textit{series}, \textit{parallel}, \textit{series-parallel} or \textit{parallel-series} fashion to create the appropriate modeling structure, depending on the behavior and the component redundancy of the modeled system, which provides flexible and extensible modeling configurations to represent complex systems. However, both traditional RBDs and FTs are unable to model the dynamic behavior of system components, where the change of state of one component can affect the state of other system components.\\
\indent Dynamic fault trees (DFTs) \cite{DFT-survey} extend traditional FTs by introducing dynamic gates, such as spare gates. However, the only dynamic behavior captured by DFTs is the effect of the failure of one system component on the failure or activation of other components. To overcome the modeling limitations of DFTs, RBDs have been extended to \textit{dynamic reliability block diagrams} (DRBDs), which model the dynamic dependencies among system components in several scenarios by introducing new constructs \cite{Distefano-Thesis}. These new constructs are DRBD blocks that enable modeling dynamic relationships among system components, such as a load sharing construct that captures the effect of sharing a load on the reliability of system components, and a spare construct that models the reliability of spare parts in a DRBD. \\
\indent Formal methods have been used in the analysis of both static (traditional) and dynamic RBDs. For instance, in \cite{xu2007formal}, a formal semantics of DRBD constructs in the Object-Z formalism \cite{smith2012object} is proposed. However, analyzing and verifying the behavior of DRBDs based on this formalism is not feasible in practice, since tool support for it is lacking. Therefore, in \cite{smith2012object}, the DRBDs are converted into a Colored Petri Net (CPN) to be analyzed using Petri net tools. An algorithm to automatically convert a DRBD into a CPN is also proposed in \cite{robidoux2010automated}. Since CPNs are used, only some state-based properties of the modeled system can be analyzed. In \cite{ahmed2016formalization}, the HOL4 theorem prover \cite{HOL4} is used to formalize several configurations of static RBDs. However, this formalization can only analyze the combinatorial behavior of systems and cannot handle DRBDs. Moreover, it cannot be tailored to support DRBDs, and thus a brand new higher-order logic (HOL) formalization is required to support this kind of analysis. \\
\indent In system engineering, it is important to be able to analyze DRBDs qualitatively, in order to identify the sources of system vulnerability, and quantitatively, in order to evaluate the system reliability. However, to the best of our knowledge, there exists so far no algebraic approach that mathematically models a given DRBD and enables expressing its structure function in terms of basic components, as the DFT algebra \cite{Merle-thesis} does for DFTs. Using such an algebra in the reliability analysis results in simpler and fewer proof steps than the DFT-based algebraic analysis \cite{Merle-thesis}, since the probabilistic principle of inclusion and exclusion need not be invoked. In this work, we propose a new algebraic approach for DRBD analysis that yields a DRBD expression usable for both qualitative and quantitative analyses.
We introduce new operators to mathematically model the dynamic behavior in DRBD structures and constructs. In particular, we use these operators to model the DRBD spare construct, besides the traditional series, parallel, series-parallel and parallel-series structures. Moreover, we provide simplification theorems that allow reducing the structure of a given DRBD, which can then be analyzed to obtain a generic expression of the system reliability. The reliability expressions obtained using this approach are generic and independent of the distribution and density functions that represent the system components. Although basic operators, such as OR and AND, were introduced in \cite{Distefano-Thesis}, they are only useful to model parallel and series constructs of dependent components, and no general mathematical expression is provided there that would allow reasoning about the behavior of DRBDs. In addition, the DRBD constructs of \cite{Distefano-Thesis} are quite complex, which complicates modeling large systems. Therefore, we use the constructs proposed in \cite{xu2007formal}, as they are much simpler, which facilitates defining the new algebra to model various DRBD constructs. Leveraging the expressive nature of HOL, we formally verify the soundness of the proposed DRBD algebra using HOL theorem proving. Although this development could be conducted in many theorem provers, we choose the HOL4 theorem prover, since our proposed DRBD algebra is compatible with the DFT algebra, whose existing HOL4 formalization can thus be reused. The contributions of this work can be summarized as follows:
\begin{compactitem}
\item A new DRBD algebra that includes DRBD operators and simplification theorems that allow expressing the structure of a given DRBD.
\item A HOL formalization of the introduced DRBD algebra, i.e., modeling the DRBD operators and verifying their simplification theorems using HOL4 to ensure the soundness of the proposed approach.
\item A mathematical expression and HOL formalization of the spare construct and its reliability.
\item Mathematical models and reliability expressions of the traditional series, parallel and deeper structures for an arbitrary number of inputs using the new DRBD operators with their HOL formalization.
\item Formal reliability analysis of two real-world systems.
\end{compactitem}
Our ultimate goal is to develop a formally verified algebra that follows the traditional reliability expressions of the series and parallel structures in an easily extensible manner and at the same time can capture the dynamic behavior of real-world systems.
Our formalization differs from and overcomes the limitations of the formalization of traditional RBDs presented in \cite{ahmed2016formalization} in the sense that it can formally express the structure function of a DRBD using the introduced DRBD operators. In addition, it can formally model and analyze DRBD spare constructs. Furthermore, we model the traditional RBD structures, i.e., series, parallel and deeper structures, in a way similar to the mathematical models available in the literature, which makes our development easy to follow by reliability engineers who are not familiar with HOL theorem proving. Finally, we illustrate the usefulness of the proposed developments by conducting the formal analysis of two real-world systems: the terminal reliability of a shuffle-exchange network and the reliability of a drive-by-wire system.
\section{DRBD Algebra}
\label{algebra}
In this section, we present the proposed algebra for DRBD analysis. This algebra allows modeling the structure function of DRBDs with spare constructs. Moreover, we present some simplification properties that enable reducing the structure function when possible. Throughout this work, we assume that system components or blocks are represented by random variables that represent their times to failure. In addition, we assume that system components are non-repairable, i.e., we are interested in expressing the reliability of the system considering that failed components will not be repaired. It is worth mentioning that our proposed algebra follows the general lines of the DFT algebra \cite{Merle-thesis}, which also allows converting DFTs into DRBDs for conducting their analysis.
The reliability of a single component, whose time to failure is represented by a random variable $X$, is mathematically defined as \cite{hasan2015reliability}:
\begin{equation}
\label{eq:rel}
R_{X}(t)=Pr\{s\ |\ X(s)\ >\ t\} = 1-Pr \{s\ |\ X(s)\ \leq\ t\} = 1-F_{X}(t)
\end{equation}
\noindent where $F_{X}(t)$ is the cumulative distribution function (CDF) of $X$.
We call $\{s\ |\ X(s)\ >\ t\}$ a DRBD event, as it represents the set whose probability we are interested in finding until time $t$:
\begin{equation}
\label{eq:event}
event\ (X,\ t)\ =\ \{s\ |\ X(s)\ >\ t\}
\end{equation}
\subsection{Identity Elements, Operators and Simplification Properties}
Similar to the identity elements of ordinary Boolean algebra and DFT algebra \cite{Merle-thesis}, we introduce two identity elements, i.e., ALWAYS and NEVER, that represent two states of any system block. The \textit{ALWAYS} element represents a system component that always fails, i.e., it fails from time $0$. While the \textit{NEVER} element represents a component that never fails, i.e., the time of its failure is $+\infty$. These identity elements play an important role in the reduction process of the structure functions of DRBDs, as will be introduced in the following sections.
\begin{equation}
ALWAYS = 0
\end{equation}
\begin{equation}
NEVER = +\infty
\end{equation}
We introduce operators to model the relationships between the various blocks in a DRBD. These operators can be divided into two categories: 1) the AND and OR operators, which are not concerned with the dependencies among system components; 2) the temporal operators, i.e., \textit{After}, \textit{Simultaneous} and \textit{Inclusive After}, which capture the dependencies between system components. It is worth mentioning that DRBDs model the several paths of success of a given system. Therefore, being interested in the success behavior of a DRBD until time $t$ means being interested in how the system does not fail until time $t$. As a result, we can use the time-to-failure random variables in modeling the time to failure of a given DRBD, i.e., its structure function. It is assumed that, for any two system components with continuous failure distribution functions, the possibility that both components fail at the same time can be neglected. \\
\indent In \cite{Distefano-Thesis}, AND and OR operators were introduced to model the parallel and series constructs between dependent components only without providing any mathematical model to these operators. We propose to use the AND ($\cdot$) and OR ($+$) operators to model series and parallel blocks in a DRBD, respectively without any restriction. We provide a mathematical model for each operator based on the time of failure of its inputs, as listed in Table~\ref{table:or_and_reliability}, to be used in the proposed algebra. The AND operator models the series connection between two or more system blocks, as shown in Figure~\ref{fig:two-block-DRBD}(a). For example, the DRBD in Figure~\ref{fig:two-block-DRBD}(a) will continue to work only if component $X$ and component $Y$ are working. Once one of these blocks stops working, then there will be no connection between the input and the output of the DRBD and thus the system will no longer work. We model the AND operator as the minimum time of its input arguments. Similarly, the OR operator models the connection between parallel components in a DRBD. For example, the DRBD in Figure~\ref{fig:two-block-DRBD}(b) will continue to work if $X$ is working or $Y$ is working. All the components in a parallel structure should fail for this DRBD to fail. Therefore, we model the OR operator as the maximum time of failure of its input arguments, which represents the time of failure of basic system blocks or sub-DRBDs. This approach facilitates using these operators to model even the complex structures.
\begin{figure}[!t]
\subfigure[Series DRBD]{
\makebox[0.5\textwidth]{
{\includegraphics[scale=0.7]{series_2_block1.jpg}}}}
\hfill
\subfigure[Parallel DRBD]{
\makebox[0.5\textwidth]{
{\includegraphics[scale=0.7]{parallel_2_block1.jpg}
}}}
\caption{Two-Block Series and Parallel DRBDs}\label{fig:two-block-DRBD}
\end{figure}
\begin{table}[b]
\centering
\caption{Mathematical and Reliability Expressions of AND and OR Operators}
\label{table:or_and_reliability}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|l|l|}
\hline
{Operator} & \multicolumn{1}{c|}{{Math. Expression}} & \multicolumn{1}{c|}{{Reliability}} \\ \hline \hline
{AND} & $ X \cdot Y= min\ (X,Y)$ & $ R_{(X\cdot Y)}(t)\ = R_{X}(t)\ \times\ R_{Y}(t)$ \\ \hline
{OR} & $ X + Y = max\ (X,Y)$ & $ R_{(X+ Y)}(t)\ = 1- ((1-R_{X}(t))\times(1-R_{Y}(t)))$ \\ \hline
\end{tabular}
\end{table}
If $X$ and $Y$ are independent, then the reliability of the systems shown in Figure \ref{fig:two-block-DRBD} can be expressed as in Table \ref{table:or_and_reliability}. To reach these expressions, it is first required to express the DRBD events as the intersection and the union of the individual events for the AND and OR operators, respectively:
\begin{equation}
\label{and-intersect}
event\ ((X\cdot Y),\ t)\ =\ event\ (X,\ t)\ \cap\ event\ (Y,\ t)
\end{equation}
\begin{equation}
\label{or-union}
event\ ((X+ Y),\ t)\ =\ event\ (X,\ t)\ \cup\ event\ (Y,\ t)
\end{equation}
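As an illustration, for two independent exponentially distributed times to failure, the reliability expressions of Table~\ref{table:or_and_reliability} can be checked by direct Monte Carlo sampling of the $min$ and $max$ models (a sketch only; the rates, horizon and seed are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu, t, n = 1.0, 2.0, 0.4, 200_000

# Independent exponential times to failure for blocks X and Y
X = rng.exponential(1/lam, n)
Y = rng.exponential(1/mu, n)
RX, RY = np.exp(-lam*t), np.exp(-mu*t)

# AND (series) fails at min(X, Y); OR (parallel) fails at max(X, Y)
R_series   = np.mean(np.minimum(X, Y) > t)
R_parallel = np.mean(np.maximum(X, Y) > t)

assert abs(R_series - RX*RY) < 0.01                    # R_X(t) * R_Y(t)
assert abs(R_parallel - (1 - (1-RX)*(1-RY))) < 0.01    # 1 - (1-R_X)(1-R_Y)
```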
In order to model the dynamic behavior of systems in DRBDs, we introduce new temporal operators: \textit{after} ($\rhd$), \textit{simultaneous} ($\Delta$) and \textit{inclusive after} ($\unrhd$), as listed in Table~\ref{table:temporal_operators}. The \textit{after} operator models a situation where one component continues to work after the failure of another. The time of failure of the after operator equals the time of failure of the component that is required to fail last. However, if the required sequence does not occur, then the output can never fail, i.e., its time of failure equals $+\infty$.
\begin{table}[t]
\caption{Mathematical Expressions of Temporal Operators}
\label{table:temporal_operators}
\centering
\begin{tabular}{|c||c||c|}
\hline
{After($\rhd$)}&{Simultaneous($\Delta$)} & {Inclusive After($\unrhd$)}\\ \hline \hline
$ X \rhd Y = \begin{cases} X, & X > Y\\[-1\jot] +\infty, & X \leq Y
\end{cases}$ & $ X \Delta Y = \begin{cases} X, & X = Y\\[-1\jot] +\infty, & X \neq Y
\end{cases} $ & $ X \unrhd Y = \begin{cases} X, & X \geq Y\\[-1\jot] + \infty, & X < Y
\end{cases} $ \\ \hline
\end{tabular}
\end{table}
The behavior of the simultaneous operator is similar to the one introduced in the DFT algebra \cite{Merle-thesis}. The output of this operator fails if both its inputs fail at the same time; otherwise it can never fail.
Finally, the inclusive after operator encompasses the behavior of both the after and simultaneous operators, i.e., it models a situation where one component is required to continue working after another one has failed or to fail at the same time as it; otherwise it can never fail. When dealing with basic components, the inclusive after operator behaves in the same way as the after operator, since simultaneous failures can be neglected. Therefore, for independent random variables, their probabilities can be expressed in the same way as:
\begin{equation}
R_{(X \rhd Y)}(t) = 1-\int_{0}^{t} f_{X}(x) \times F_{Y}(x)\ dx
\end{equation}
\noindent where $f_{X}$ is the probability density function (PDF) of $X$ and $F_Y$ is the CDF of $Y$.
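For independent exponential lifetimes this reliability admits a closed form, which can be checked against a direct simulation of the after operator (an illustrative sketch; the rates and seed are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, mu, t, n = 1.0, 0.5, 0.8, 200_000
X = rng.exponential(1/lam, n)
Y = rng.exponential(1/mu, n)

# After operator: X |> Y fails at X if X > Y, and never (+inf) otherwise
after = np.where(X > Y, X, np.inf)

# Closed form of int_0^t f_X(x) F_Y(x) dx for exponential X and Y
F_after = (1 - np.exp(-lam*t)) - lam/(lam + mu)*(1 - np.exp(-(lam + mu)*t))
assert abs(np.mean(after > t) - (1 - F_after)) < 0.01
```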
We introduce several simplification properties to reduce the structure function of a DRBD. These properties range from simple ones, such as the associativity and idempotence of the operators, to more complex theorems. The idea behind these properties is to reduce the algebraic expressions based on the times of failure. For example, $X \cdot ALWAYS = ALWAYS$ means that if a component in a series structure is not working, i.e., always fails, then the series structure is not working as well. Similarly, $X + NEVER = NEVER$ means that if a component in a parallel structure cannot fail, then the whole parallel structure cannot fail either.
$X+Y = Y+X$, $X\cdot Y = Y \cdot X$ and $X\Delta Y = Y \Delta X$ represent the commutativity of the OR, AND and simultaneous operators, respectively. An example of a more complex theorem is $X \rhd (Y\cdot Z) = (X \rhd Y)\cdot(X \rhd Z)$. In Section \ref{Formalization_in_hol}, a full list of the verified theorems is given.
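These operators and identity elements are straightforward to prototype, and the simplification properties can be sanity-checked on random samples (an illustrative Python sketch; the function names are ours and ties between continuous samples have probability zero):

```python
import numpy as np

ALWAYS, NEVER = 0.0, np.inf      # identity elements: fails at 0 / never fails

def AND(x, y):                   # series structure: earliest failure
    return min(x, y)

def OR(x, y):                    # parallel structure: latest failure
    return max(x, y)

def AFTER(x, y):                 # x |> y: x must fail strictly after y
    return x if x > y else np.inf

rng = np.random.default_rng(2)
for _ in range(1000):
    X, Y, Z = rng.exponential(1.0, 3)
    assert AND(X, ALWAYS) == ALWAYS and OR(X, NEVER) == NEVER
    assert AND(X, NEVER) == X and OR(X, ALWAYS) == X
    assert AND(X, Y) == AND(Y, X) and OR(X, Y) == OR(Y, X)
    # distribution of after over AND: X |> (Y.Z) = (X |> Y).(X |> Z)
    assert AFTER(X, AND(Y, Z)) == AND(AFTER(X, Y), AFTER(X, Z))
```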
\subsection{DRBD Constructs and Structures}
The spare construct, shown in Figure \ref{fig:spare} \cite{xu2007formal}, is introduced in DRBDs to model situations where a spare part is activated to replace the main part after its failure, with a spare controller activating the spare. Depending on the failure behavior of the spare part, we can have three variants, i.e., hot, warm and cold ($H|W|C$) spares. The hot spare possesses the same failure behavior in both its active and dormant states. The cold spare cannot fail in its dormant state and is only activated after the failure of the main part. The failure behavior of the warm spare in the dormant state is attenuated by a dormancy factor with respect to the active state. In order to distinguish between the dormant and active states of the spare, just like in the DFT algebra \cite{Merle-thesis}, we use two different symbols to model the spare part of the DRBD spare construct, one for the dormant state and one for the active state. For the spare construct of Figure~\ref{fig:spare}, the spare $X$ is represented by $X_{a}$ and $X_{d}$ for the active and dormant states, respectively. After the failure ($F$) of the main part $Y$, $X$ is activated ($A$) by the spare controller. We model the structure function of the spare construct ($Q_{spare}$) using the DRBD operators, based on the description of its behavior, as:
\begin{figure}[b]
\centering
\includegraphics[scale=0.7]{spare_DRBD1.jpg}
\caption{Spare Construct}
\label{fig:spare}
\end{figure}
\begin{equation}
\label{spare}
Q_{spare}= (X_{a} \rhd Y)\cdot (Y \rhd X_{d})
\end{equation}
Thus, we need two conditions to be satisfied in order for the spare construct to work. The first one is that the spare in its active state continues to work after the failure of the main part $(X_{a}\rhd Y)$. The second is that the main part continues to work after the failure of the spare in its dormant state $(Y \rhd X_{d})$. However, since the non-repairable spare part can only fail in one of its states ($X_{a}$, $X_{d}$) but not both, only one of the terms in Equation~(\ref{spare}) affects the behavior, while the other can never fail, i.e., it fails at $+\infty$.
Since the spare construct of the DRBD and the spare gate of the DFT exhibit complementary behavior, i.e., the DRBDs consider the success and the DFTs consider the failure, we can use the probability of failure of the spare DFT gate \cite{Merle-thesis} to find the reliability of the spare DRBD construct. It is assumed that the dormant spare and the main part are independent since the failure of one does not affect the failure of the other. However, the failure of the active spare is affected by the time of failure of the main part, since it will be activated after the failure of the main part. We express the reliability of the spare as:
\begin{equation}
\label{spare_prob}
\begin{split}
R_{spare}(t) = 1 - \int_{0}^{t} \int_{y}^{t} f_{(X_{a}|Y=y)}(x)\ f_{Y}(y) dx dy - \int_{0}^{t} f_{Y}(y)F_{X_{d}}(y)dy
\end{split}
\end{equation}
\noindent where $f_{(X_{a}|Y=y)}$ is the conditional density function of $X_{a}$ given that $Y$ failed at time $y$.
Equations (\ref{spare}) and (\ref{spare_prob}) represent the general behavior of the spare, i.e., the warm spare. The hot and cold spares represent special cases of the warm spare and can be expressed as:
\begin{equation}
\label{hot_spare}
Q_{hot\ spare} = X + Y
\end{equation}
\begin{equation}
\label{cold_spare}
Q_{cold\ spare} = X_{a} \rhd Y
\end{equation}
In Equation (\ref{hot_spare}), the spare part $X$ has the same behavior in both states and thus there is no need for a subscript to distinguish them. The probability of Equation~(\ref{hot_spare}) can be expressed using the reliability of the OR operator, given in Table~\ref{table:or_and_reliability}, while the reliability of the cold spare construct can be expressed as:
\begin{equation}
R_{cold\ spare}(t) = 1 - \int_{0}^{t} \int_{y}^{t} f_{(X_{a}|Y=y)}(x)\ f_{Y}(y)\ dx\ dy
\end{equation}
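As an informal sanity check of this expression (outside the HOL development), one can fix concrete distributions, say an exponential main part with rate $\mu$ and an active-spare lifetime that is exponential with rate $\lambda$ measured from the activation instant, so that $f_{(X_{a}|Y=y)}(x) = \lambda e^{-\lambda(x-y)}$ for $x > y$; both rates are arbitrary choices for illustration. The double integral then has a closed form that a Monte Carlo estimate reproduces:

```python
import math
import random

def csp_reliability_exact(t, mu=1.0, lam=0.5):
    # R(t) = 1 - int_0^t int_y^t lam*e^{-lam(x-y)} * mu*e^{-mu*y} dx dy,
    # evaluated in closed form for the exponential choice above
    fail = (1 - math.exp(-mu * t)) \
           - mu * math.exp(-lam * t) * (math.exp((lam - mu) * t) - 1) / (lam - mu)
    return 1 - fail

def csp_reliability_mc(t, mu=1.0, lam=0.5, n=200_000, seed=1):
    rng = random.Random(seed)
    working = 0
    for _ in range(n):
        y = rng.expovariate(mu)        # main part fails at y
        x = y + rng.expovariate(lam)   # spare activated at y, fails at x
        if x > t:                      # construct still up at time t
            working += 1
    return working / n

print(round(csp_reliability_exact(2.0), 4))  # 0.6004
```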
\begin{table}[!b]
\caption{Mathematical and Reliability Expressions of DRBD Structures}
\renewcommand{\arraystretch}{1.2}
\centering
\label{table_DRBD_structure}
\begin{tabular}{|c||c||c|}
\hline
{Structure}&{Math. Expression} &{Reliability} \\ \hline\hline
{Series}& $ \bigcap_{i=1}^{n}(event\ (X_{i},\ t))$ &$ \prod_{i=1}^{n}R_{X_{i}}(t)$ \\ \hline
{Parallel}& $ \bigcup_{i=1}^{n}(event\ (X_{i},\ t))$ & $ 1- \prod_{i=1}^{n}(1-R_{X_{i}}(t))$\\ \hline
{Series-Parallel} & $\bigcap_{i=1}^{m}\bigcup_{j=1}^{n}(event\ (X_{(i,j)},\ t))$ & $ \prod_{i=1}^{m}(1- \prod_{j=1}^{n}(1-R_{X_{(i,j)}}(t)))$\\ \hline
{Parallel-Series} & $ \bigcup_{i=1}^{n}\bigcap_{j=1}^{m}(event\ (X_{(i,j)},\ t))$ & $ 1-(\prod_{i=1}^{n}(1- \prod_{j=1}^{m}(R_{X_{(i,j)}}(t))))$\\ \hline
\end{tabular}
\end{table}
\begin{figure}[t]
\subfigure[Series]{
\makebox[0.25\textwidth]{
\includegraphics[scale=0.6]{series1.jpg}
}}
\subfigure[Parallel]{
\makebox[0.2\textwidth]{
\includegraphics[scale=0.6]{parallel1.jpg}}}
\subfigure[Series-Parallel]{
\makebox[0.25\textwidth]{
\includegraphics[scale=0.6]{series_parallel1.jpg}}}
\subfigure[Parallel-Series]{
\makebox[0.2\textwidth]{
\includegraphics[scale=0.6]{parallel_series1.jpg}}}
\caption{DRBD Structures}
\label{fig:rbd_structures}
\end{figure}
Table \ref{table_DRBD_structure} lists the mathematical and reliability expressions of these structures \cite{hasan2015reliability}. The series structure represents a collection of blocks that are connected in series, as shown in Figure \ref{fig:rbd_structures}(a). The system continues to work until the failure of one of these blocks. We define a series structure that represents the intersection of all events of the blocks in this structure as in Table \ref{table_DRBD_structure}, where $X_{i}$ represents the $i^{th}$ block in the series structure and $n$ is the number of blocks. Interestingly, any block in our proposed algebra can represent a basic system component or a complex structure, such as a spare construct. Moreover, since we are dealing with events, we can use the ordinary reliability expression for the series structure, assuming the independence of the individual blocks. The parallel structure, shown in Figure \ref{fig:rbd_structures}(b), represents a system that continues to work until the failure of the last block in the structure. The behavior of the parallel structure can be expressed using the OR operator. We represent the parallel structure as the union of the individual events of the blocks. The series-parallel structure, shown in Figure~\ref{fig:rbd_structures}(c), represents a series structure whose blocks are parallel structures. Its structure function can be expressed using AND of ORs operators. Table~\ref{table_DRBD_structure} lists the model for this structure with its reliability expression, where $n$ is the number of blocks in the parallel structure and $m$ is the number of parallel structures that are connected in series. The parallel-series structure represents a group of series structures that are connected in parallel, as shown in Figure~\ref{fig:rbd_structures}(d). Its structure function can be expressed using OR of ANDs operators.
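Assuming independent blocks with known reliabilities, the expressions of Table \ref{table_DRBD_structure} are direct to compute; the following illustrative Python snippet (not part of the formalization, with component reliabilities chosen arbitrarily) evaluates them for a small grid:

```python
import math

def series(rs):
    # R = prod_i R_i
    return math.prod(rs)

def parallel(rs):
    # R = 1 - prod_i (1 - R_i)
    return 1 - math.prod(1 - r for r in rs)

def series_parallel(grid):
    # series connection of parallel stages (AND of ORs)
    return math.prod(parallel(stage) for stage in grid)

def parallel_series(grid):
    # parallel connection of series branches (OR of ANDs)
    return 1 - math.prod(1 - series(branch) for branch in grid)

grid = [[0.9, 0.9], [0.9, 0.9]]
print(series_parallel(grid))   # 0.99 * 0.99 = 0.9801
print(parallel_series(grid))   # 1 - (1 - 0.81)^2 = 0.9639
```

As expected, the series-parallel arrangement of the same four blocks is more reliable than the parallel-series one.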
\section{Formalization of DRBDs in HOL}
\label{Formalization_in_hol}
In this section, we present our formalization for the proposed DRBD algebra including DRBD events, operators and constructs, simplification theorems and reliability expressions. First, we review some HOL probability theory preliminaries required for understanding the rest of the work.
\subsection{HOL Probability Theory}
\begin{table}[!b]
\centering
\setlength\tabcolsep{1pt}
\caption{HOL4 Probability Functions}
\footnotesize
\label{table:assumption}
\begin{tabular}{|p{6.5cm}|p{9cm}|}
\hline
Function & Explanation \\ \hline \hline
{$\!\begin{aligned}[t]
& {\texttt{rv\_gt0\_ninfinity L}}\end{aligned}$}
& Random variables in list $L$ are greater than $0$ and not equal to $+\infty$ \\ \hline
{$\!\begin{aligned}[t]
& {\texttt{indep\_var p lborel}}\\[-2\jot]&{\texttt{~~(real o X) lborel (real o Y)}}\end{aligned}$}
&
Independence of random variables defined from the probability space \texttt{p} to the Lebesgue Borel measure (\texttt{lborel})
\\ \hline
{$\!\begin{aligned}[t]
& {\texttt{distributed p lborel (real o X) f\textsubscript{x}}}\end{aligned}$}
&
Defines a density function \texttt{f\textsubscript{x}} for the real version of random variable $X$ defined from the probability space $p$ to the Lebesgue-Borel measure\\ \hline
{$\!\begin{aligned}[t]
&{\texttt{measurable\_CDF p (real o Y) }}\end{aligned}$}
&
Ensures that CDF (F\textsubscript{Y}) is measurable \\ \hline
{$\!\begin{aligned}[t]
& {\texttt{cont\_CDF p (real o Y) }}\end{aligned}$}
&
Ensures that CDF (F\textsubscript{Y}) is continuous \\ \hline
{$\!\begin{aligned}[t]
& {\texttt{cond\_density lborel lborel p }}\\[-2\jot]&{\texttt{~~(real o X)(real o Y) y f\textsubscript{xy} f\textsubscript{y} f\textsubscript{X\textsubscript{a}|Y} }}\end{aligned}$}
&
Defines a conditional density function f\textsubscript{X\textsubscript{a}$|$Y} using the joint density function f\textsubscript{xy} and the marginal density function f\textsubscript{y} \\ \hline
{$\!\begin{aligned}[t]
& {\texttt{den\_gt0\_ninfinity f\textsubscript{X\textsubscript{a}Y} f\textsubscript{Y} f\textsubscript{X\textsubscript{a}|Y}}}\end{aligned}$}
&
Ensures the proper values for the density functions; joint, marginal and conditional, respectively. 0 $\leq$ f\textsubscript{X\textsubscript{a}Y}, 0 $<$ f\textsubscript{Y} and 0 $\leq$ f\textsubscript{X\textsubscript{a}$|$Y} \\ \hline
{$\!\begin{aligned}[t]
& {\texttt{indep\_sets p X s}}\end{aligned}$}
&
Ensures that the group of sets X indexed by the numbers in set s are independent over the probability space p \\ \hline
\end{tabular}
\end{table}
\indent A probability space is defined in HOL as a measure space in which the measure (probability) of the entire space is 1, i.e., a triplet $(\Omega, \mathcal{A}, \mathcal{P}r)$, where $\Omega$ is the space, $\mathcal{A}$ is the set of probability events and $\mathcal{P}r$ is the probability \cite{Mhamdi-entropy}. Two functions, \texttt{p\_space p} and \texttt{events p}, are defined in HOL to return the space ($\Omega$) and the events ($\mathcal{A}$) of this triplet, respectively. A random variable is a measurable function that maps the probability space $p$ to another space. It is defined in HOL as \cite{Mhamdi-entropy}:
\begin{definition}
\label{DEF:random_variable}
{\small\textup{\texttt{$\vdash$ $\forall$X p s. random\_variable X p s $\Leftrightarrow$}\\
\mbox {\texttt{prob\_space p $\wedge$ X $\in$ measurable (p\_space p, events p) s }}}}
\end{definition}
\noindent where \texttt{X} is the random variable, \texttt{p} is the probability space and \texttt{s} is the space that the random variable maps to. In our work, we use the \texttt{borel} space, which is defined over the real line~\cite{Qasim-CICM}.
For a random variable $X$, the probability distribution is defined as the probability that this random variable belongs to a certain set \cite{Tarek-thesis}:
\begin{definition}
\label{DEF:distribution}
\emph{}\\
{\small\textup{\texttt{\texttt{$\vdash$ $\forall$p X. distribution p X = ($\lambda$s. prob p (PREIMAGE X s $\cap$ p\_space p)) }}}}
\end{definition}
The cumulative distribution function (CDF) is defined as \cite{elderhalli2019probabilistic}:
\begin{definition}
\label{DEF:Cumulative_density_function}
{\small\textup{\texttt{\texttt{$\vdash$ $\forall$p X t. CDF p X t = distribution p X \{y | y $\leq$ (t:real)\}} }}}
\end{definition}
\noindent where \texttt{p} is a probability space, \texttt{X} is a real-valued random variable and \texttt{t} is a variable of type real and represents time.
Independence of random variables is an important property that ensures that the probability of the intersection of the events of these random variables equals the product of the probabilities of the individual events. This definition is ported from Isabelle/HOL \cite{Isabelle} to HOL4 as \cite{Qasim-CICM}:
\begin{definition}
\label{indep_vars}
{\small
\textup{\texttt{$\vdash$ indep\_vars p M X ii = }\\
\mbox{\texttt{($\forall$i. i $\in$ ii $\Rightarrow$}}\\
\mbox{\texttt{~random\_variable (X i) p (m\_space (M i), measurable\_sets (M i))) $\wedge$}}\\
\mbox{\texttt{indep\_sets p }}\\
\mbox{\texttt{~($\lambda$i. \{PREIMAGE f A $\cap$ p\_space p|(f=X i) $\wedge$ A $\in$ measurable\_sets (M i)\}) ii}}}}
\end{definition}
This definition ensures that a group $X$ is composed of random variables indexed by the elements in set $ii$ and that the events represented by the preimage of these random variables are independent using \texttt{indep\_sets}. \texttt{indep\_var} is defined, based on Definition \ref{indep_vars}, to capture the behavior of independence for two random variables \cite{Qasim-CICM}.
Finally, the Lebesgue integral is defined in HOL4 based on positive simple functions and then extended for positive functions and functions with positive and negative values \cite{Mhamdi-entropy}. Throughout this work, we use the Lebesgue integral for positive functions, i.e., \texttt{pos\_fn\_integral}, since we are integrating cumulative distribution and probability density functions, which are always positive. The integration is over the real line and thus we use the Lebesgue-Borel measure (\texttt{lborel}) \cite{Qasim-CICM} for this purpose. The boundaries of this integral can be identified using an indicator function by specifying the set of elements used in the integration. For example, $\int_{A} f dx$ can be represented as \texttt{pos\_fn\_integral lborel ($\lambda$x. indicator\_fn A x * f x)}. However, for ease of understanding, we use the regular mathematical expressions, i.e., we use $\int f\ dx$ to express integrals instead. Table \ref{table:assumption} lists the probability theory functions used in the rest of the work~\cite{elderhalli2019probabilistic}.
\subsection{DRBD Event}
\indent In our formalization, we define the inputs, i.e., the random variables representing the time to failure of system components, as lambda abstracted functions with a return datatype of extended-real, which represents the real numbers together with $\pm\infty$. \\
\indent We define the DRBD event of Equation (\ref{eq:event}) as:
\begin{definition}
\label{DEF:DRBD_event}
\emph{}\\
{\small\textup{\texttt{\texttt{$\vdash$ $\forall$p X t. DRBD\_event p X t = \{s | Normal t < X s\} $\cap$ p\_space p}}}}
\end{definition}
\noindent where \texttt{Normal} typecasts the real value of \texttt{t} from real to extended-real.
This type conversion is required since we need real-valued random variables. However, we need to deal with the extended-real datatype to model the \texttt{NEVER} element. Therefore, we define the time-to-failure functions to return extended-real and typecast the values from extended-real to real using the function \texttt{real} and vice versa using \texttt{Normal}.\\
\indent We define the reliability as the probability of the DRBD event according to Equation~(\ref{eq:rel}):
\begin{definition}
\label{DEF:Rel}
{\small\textup{\texttt{\texttt{$\vdash$ $\forall$p X t. Rel p X t = prob p (DRBD\_event p X t)}}}}
\end{definition}
We verify the relationship between the reliability and the CDF of Equation (\ref{eq:rel}) as:
\begin{theorem}
\label{thm:Rel-CDF}
{\small\textup{\texttt{\texttt{$\vdash$ $\forall$p X t. rv\_gt0\_ninfinity [X] $\wedge$}}}}\\
{\mbox{\textup{\texttt{~~random\_variable (real o X) p borel $\Rightarrow$}}}}\\
{\mbox{\textup{\texttt{~~(Rel p X t = 1- CDF p (real o X) t)}}}}
\end{theorem}
\noindent where \texttt{real} typecasts the values of the random variable from extended-real to real as the CDF is defined for real-valued random variables, \texttt{random\_variable (real o X) p borel} ensures that \texttt{(real o X)} is a random variable over the real line represented by the \texttt{borel} space, and \texttt{rv\_gt0\_ninfinity} ensures that the random variable is greater than or equal to $0$ and not equal to $+\infty$, as described in Table \ref{table:assumption}, which means that the time of failure of any component cannot be negative or $+\infty$. Theorem \ref{thm:Rel-CDF} is verified based on the fact that the \texttt{DRBD\_event} and the set of the CDF are the complement of each other. Therefore, the probability of one of them equals one minus the other. For the rest of the work, we will denote \texttt{CDF p (real o X) t} by $F_{X}(t)$ to facilitate the understanding of the theorems.
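As a quick numerical illustration of Theorem \ref{thm:Rel-CDF} (outside HOL, with an exponentially distributed lifetime chosen arbitrarily), the relationship reduces to $R_X(t) = 1 - F_X(t) = e^{-\lambda t}$, which a Monte Carlo estimate reproduces:

```python
import math
import random

def rel_mc(t, lam=1.0, n=200_000, seed=7):
    # empirical reliability: fraction of sampled failure times exceeding t,
    # i.e. 1 - (empirical CDF at t)
    rng = random.Random(seed)
    return sum(rng.expovariate(lam) > t for _ in range(n)) / n

def rel_exact(t, lam=1.0):
    # 1 - F_X(t) for an exponential time to failure
    return math.exp(-lam * t)

print(abs(rel_mc(1.0) - rel_exact(1.0)) < 0.01)  # True
```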
\subsection{Identity Elements, Operators and Simplification Theorems}
Our formalization of the identity elements and the DRBD operators is listed in Table~\ref{table:element-operator}, where \texttt{extreal} is the extended-real datatype in HOL4, \texttt{PosInf} represents $+\infty$, \texttt{min} and \texttt{max} are HOL functions that return the minimum and maximum values of their arguments, respectively. This formalization follows the proposed definitions in Tables \ref{table:or_and_reliability} and \ref{table:temporal_operators}. However, we define the operators as lambda abstracted functions to be able to conduct the probabilistic analysis later. In addition, we verify several simplification theorems based on the properties of \texttt{extreal} numbers in HOL and the definitions of the DRBD operators.
For example, the following theorem represents the distributive property of the after operator over the AND:
\begin{theorem}
{\small\textup{\texttt{$\vdash \forall$X Y Z. X $\rhd$ (Y $\cdot$ Z) = (X $\rhd$ Y)$\cdot$(X $\rhd$ Z)}}}
\end{theorem}
Table \ref{table:simplification_theorems} lists the simplification theorems that we developed and verified in the proposed algebra.
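Although these identities are established by formal proof in HOL, they can also be spot-checked numerically. The sketch below (illustrative only) samples random failure times, including the $+\infty$ value, and asserts a few of the identities of Table \ref{table:simplification_theorems}:

```python
import math
import random

INF = math.inf  # the NEVER element (failure at +infinity)

def r_and(x, y): return min(x, y)
def r_or(x, y): return max(x, y)
def after(x, y): return x if x > y else INF

rng = random.Random(0)

def sample():
    # occasionally return +infinity to exercise the NEVER element
    return INF if rng.random() < 0.1 else rng.uniform(0.0, 5.0)

for _ in range(10_000):
    x, y, z = sample(), sample(), sample()
    assert r_or(x, r_and(x, y)) == x                                 # absorption
    assert r_or(after(x, y), after(y, x)) == INF                     # always NEVER
    assert after(x, r_and(y, z)) == r_and(after(x, y), after(x, z))  # |> over AND
    assert after(x, r_or(y, z)) == r_or(after(x, y), after(x, z))    # |> over OR
print("sampled identities hold")
```

Such random testing is of course no substitute for the HOL proofs; it merely makes the extended-real semantics of the operators tangible.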
\begin{table}[!t]
\caption{Definitions of Identity Elements and DRBD Operators}
\footnotesize
\centering
\label{table:element-operator}
\begin{tabular}{|l|l|p{7.5cm}|}
\hline
Element/Operator & Mathematical Expression & Formalization \\ \hline \hline
{\footnotesize{\texttt{Always element}}} &
$\!\begin{aligned}[b]
{\displaystyle ALWAYS\ =\ 0}
\end{aligned}$&$\!\begin{aligned}[c]
& \footnotesize{\texttt{$\vdash$ R\_ALWAYS = ($\lambda$s. (0:extreal)) }}\end{aligned}$ \\ \hline
{\footnotesize{\texttt{Never element}}} &
$\!\begin{aligned}[b]
{\displaystyle NEVER\ =\ \texttt{+$\infty$}}
\end{aligned}$& $\!\begin{aligned}[c]
& \footnotesize{\texttt{$\vdash$ R\_NEVER = ($\lambda$s. PosInf) }}\end{aligned}$ \\ \hline
{\footnotesize{\texttt{AND}}}&
$\!\begin{aligned}[b]
{{\displaystyle X \cdot Y= min (X ,Y)}} \end{aligned}$& $\!\begin{aligned}[c]
& \footnotesize{\texttt{$\vdash$ $\forall$X Y.
R\_AND X Y =($\lambda$s. min (X s) (Y s))}}\end{aligned}$
\\ \hline
{\footnotesize{\texttt{OR}}}&
$\!\begin{aligned}[b]
{{\displaystyle X + Y= max (X, Y)}
}
\end{aligned}$& $\!\begin{aligned}[c]
& \footnotesize{\texttt{$\vdash$ $\forall$X Y.
R\_OR X Y = ($\lambda$s. max (X s) (Y s))
}}\end{aligned}$
\\ \hline
{\footnotesize{\texttt{After}}}&
$\!\begin{aligned}[b]
{{\displaystyle X \rhd Y= }{\scriptsize
\begin{cases} X, &X > Y\\ +\infty, &X\leq Y
\end{cases}}
}
\end{aligned}$& $\!\begin{aligned}[c]
& \footnotesize{\texttt{$\vdash$ $\forall$X Y.
R\_AFTER X Y =}}\\[-1\jot]
&\footnotesize{\texttt{($\lambda$s. if Y s < X s then X s else PosInf)
}}\end{aligned}$
\\ \hline
{\footnotesize{\texttt{Simultaneous}}}& $\!\begin{aligned}[b]
{{\displaystyle X \Delta Y= }{\scriptsize
\begin{cases} X, &X = Y\\ +\infty, &X\neq Y
\end{cases}}
}
\end{aligned}$ & $\!\begin{aligned}[c]
& \footnotesize{\texttt{$\vdash$ $\forall$X Y.
R\_SIMULT X Y =}}\\[-1\jot]
&\footnotesize{\texttt{($\lambda$s. if X s = Y s then X s else PosInf)
}}\end{aligned}$ \\ \hline
{\footnotesize{\texttt{Inclusive After}}}& $\!\begin{aligned}[b]
{{\displaystyle X \unrhd Y=}{\scriptsize
\begin{cases} X, &X \geq Y\\ +\infty, &X < Y
\end{cases}}
}
\end{aligned}$ & $\!\begin{aligned}[c]
& \footnotesize{\texttt{$\vdash$ $\forall$ X Y.
R\_INCLUSIVE\_AFTER X Y =}}\\ &\footnotesize{\texttt{($\lambda$s. if Y s $\leq$ X s then X s else PosInf)
}}\end{aligned}$ \\ \hline
\end{tabular}
\end{table}
\begin{table}[!b]
\caption{Formally Verified Simplification Theorems}
\small
\centering
\label{table:simplification_theorems}
\begin{tabular}{|l|}
\hline
\multicolumn{1}{|c|}{Simplification Theorem} \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X. ($\forall$s. 0 $\scriptstyle\leq$ X s) $\Rightarrow$ (X $\cdot$ R\_ALWAYS = R\_ALWAYS)}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X Y Z. (X $\cdot$ Y) $\cdot$ Z = X $\cdot$ (Y $\cdot$ Z)}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X Y. X $\cdot$ Y = Y $\cdot$ X}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X. X $\cdot$ X = X}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X. X $\cdot$ R\_NEVER = X}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X. ($\forall$s. 0 $\scriptstyle\leq$ X s) $\Rightarrow$ (X + R\_ALWAYS = X)}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X Y Z. (X + Y) + Z = X + (Y + Z)}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X Y. X + Y = Y + X}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X. X + X = X}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X. X + R\_NEVER = R\_NEVER}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X Y. X + (X $\cdot$ Y) =X}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X Y Z. X $\rhd$ (Y $\rhd$ Z) = ((X $\rhd$ Y) + (X $\rhd$ Z)) $\rhd$ (Y $\rhd$ Z)}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X Y. (X $\rhd$ Y) + (Y $\rhd$ X) = R\_NEVER}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X Y Z. X $\rhd$ (Y $\cdot$ Z) = (X $\rhd$ Y) $\cdot$ (X $\rhd$ Z)}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X Y Z. X $\cdot$ (Y + Z) = (X $\cdot$ Y) + (X $\cdot$ Z)}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X Y Z. X + (Y $\cdot$ Z) = (X + Y) $\cdot$ (X + Z)}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X Y. X $\unrhd$ Y = (X $\rhd$ Y) $\cdot$ (X $\Delta$ Y)}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X Y Z. X $\rhd$ (Y + Z) = (X $\rhd$ Y) + (X $\rhd$ Z)}}
\end{aligned}$ \\ \hline
$\!\begin{aligned}[c]
& \small{\texttt{$\vdash$ $\forall$X Y. X $\Delta$ Y = Y $\Delta$ X}}
\end{aligned}$ \\ \hline
\end{tabular}
\end{table}
In order to verify the reliability of the DRBD constructs, such as the spare, we need first to verify the reliability of the DRBD operators that are used to express the structure function of these constructs. For the AND and OR operators, we verify their reliability expressions as in Theorems~\ref{thm-rel-and} and \ref{thm-rel-or}, respectively.
\begin{theorem}
\label{thm-rel-and}
{\small\textup{\texttt{\texttt{$\vdash$ $\forall$p X Y t. rv\_gt0\_ninfinity [X;Y] $\wedge$ }}}}\\
\mbox{\small{\textup{\texttt{~~~~indep\_var p lborel (real o X) lborel (real o Y) $\Rightarrow$}}}}\\
\mbox{\small{\textup{\texttt{~~~~(Rel p (X$\cdot$Y) t = Rel p X t * Rel p Y t)}}}}
\end{theorem}
\begin{theorem}
\label{thm-rel-or}
{\small\textup{\texttt{\texttt{$\vdash$ $\forall$p X Y t. rv\_gt0\_ninfinity [X;Y] $\wedge$ }}}}\\
\mbox{\small{\textup{\texttt{~~~~indep\_var p lborel (real o X) lborel (real o Y) $\Rightarrow$}}}}\\
\mbox{\small{\textup{\texttt{~~~~(Rel p (X + Y) t = 1 - (1 - Rel p X t) * (1 - Rel p Y t))}}}}
\end{theorem}
We verify Theorem \ref{thm-rel-and} by first rewriting using Definition \ref{DEF:Rel}. Then, we prove that the \texttt{DRBD\_event} of the AND operator equals the intersection of the individual events, as in Equation (\ref{and-intersect}). Utilizing the independence of the real-valued random variables \texttt{real o X} and \texttt{real o Y}, the probability of the intersection of their events equals the product of the probabilities of the individual events. Since \texttt{X} and \texttt{Y} are greater than $0$ and are not equal to $+\infty$, based on the function \texttt{rv\_gt0\_ninfinity}, the events in the probability space that correspond to \texttt{X} and \texttt{Y} are equal to the ones that correspond to \texttt{real o X} and \texttt{real o Y}. As a result, the \texttt{DRBD\_event}s of \texttt{X} and \texttt{Y} are independent. Hence, the probability of their intersection equals the product of the probabilities of the individual events, i.e., their reliabilities. Theorem~\ref{thm-rel-or} is verified in a similar way. However, we prove that the \texttt{DRBD\_event} of the OR operator equals the union of the individual events, as in Equation~(\ref{or-union}). We verify that this union of events equals the complement of the intersection of the complements of the individual events. Then, Theorem~\ref{thm-rel-or} can be proven using the independence of random variables.
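As an informal cross-check of Theorems \ref{thm-rel-and} and \ref{thm-rel-or} (outside HOL, with arbitrarily chosen independent exponential lifetimes), a simulation of the reliability of the AND and OR of two blocks matches the product forms:

```python
import math
import random

def simulate(t, lam_x=1.0, lam_y=0.5, n=200_000, seed=3):
    # empirical Rel of X.Y (AND) and X+Y (OR) for independent exponential
    # lifetimes; the rates are arbitrary illustration values
    rng = random.Random(seed)
    and_up = or_up = 0
    for _ in range(n):
        x, y = rng.expovariate(lam_x), rng.expovariate(lam_y)
        and_up += min(x, y) > t   # AND survives iff both survive
        or_up += max(x, y) > t    # OR survives iff at least one survives
    return and_up / n, or_up / n

t = 1.0
rx, ry = math.exp(-1.0 * t), math.exp(-0.5 * t)   # exact Rel of each block
rel_and, rel_or = simulate(t)
print(abs(rel_and - rx * ry) < 0.01)                    # True
print(abs(rel_or - (1 - (1 - rx) * (1 - ry))) < 0.01)   # True
```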
We extend the definition of the AND and OR operators to n-ary operators, \texttt{nR\_AND} and \texttt{nR\_OR}, that can be used to represent the relationship between an arbitrary number of elements. We formally define n-ary AND (\texttt{nR\_AND}) as:
\begin{definition}
\label{def:nR_AND}
\emph{}\\
{\small \textup{\texttt{$\vdash$ $\forall$X s. nR\_AND X s = ITSET ($\lambda$e acc. R\_AND (X e) acc) s R\_NEVER}}}
\end{definition}
\noindent where \texttt{ITSET} is the HOL function to iterate over sets. This definition applies the \texttt{R\_AND} over the elements of \texttt{X} indexed by the numbers in \texttt{s}. \texttt{R\_NEVER} is the identity element of the \texttt{R\_AND} operator.
Similarly, we formally define n-ary OR (\texttt{nR\_OR}) as:
\begin{definition}
\label{def:nR_OR}
\emph{}\\
{\small \textup{\texttt{$\vdash$ $\forall$X s. nR\_OR X s = ITSET ($\lambda$e acc. R\_OR (X e) acc) s R\_ALWAYS}}}
\end{definition}
\noindent where \texttt{R\_ALWAYS} is the identity element of the \texttt{R\_OR} operator.
The reliability of these two operators would be similar to the reliability of the series and parallel structures, respectively, as will be described in the following section.
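The iteration performed by \texttt{ITSET} can be mimicked by an ordinary fold; the sketch below (a Python analogy, not HOL code) folds the binary operators over a list of failure times, starting from the respective identity elements:

```python
import math
from functools import reduce

INF = math.inf   # NEVER: identity element of AND (min)
ALWAYS = 0.0     # ALWAYS: identity element of OR (max)

def n_and(xs):
    # n-ary AND (series of n blocks): fails at the earliest failure
    return reduce(min, xs, INF)

def n_or(xs):
    # n-ary OR (parallel of n blocks): fails at the latest failure
    return reduce(max, xs, ALWAYS)

print(n_and([3.0, 1.5, 2.0]))  # 1.5
print(n_or([3.0, 1.5, 2.0]))   # 3.0
print(n_and([]))               # inf  (empty AND = NEVER)
print(n_or([]))                # 0.0  (empty OR = ALWAYS)
```

Starting the fold from the identity element mirrors the role of \texttt{R\_NEVER} and \texttt{R\_ALWAYS} in Definitions \ref{def:nR_AND} and \ref{def:nR_OR}.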
Finally, we verify the reliability expression of the after operator utilizing our formalization in \cite{elderhalli2019probabilistic}, where the description of the assumptions is listed in Table \ref{table:assumption}:
\begin{theorem}
{\small \textup{\texttt{$\vdash$ $\forall$X Y p f\textsubscript{x} t. rv\_gt0\_ninfinity [X; Y] $\wedge$ 0 $\leq$ t $\wedge$}}\\
{\mbox{\textup{\texttt{~~~~indep\_var p lborel (real o X) lborel (real o Y) $\wedge$}}}}\\
{\mbox{\textup{\texttt{~~~~distributed p lborel (real o X) f\textsubscript{x} $\wedge$ ($\forall$x. 0 $\leq$ f\textsubscript{x} x) $\wedge$}}}}\\
{\mbox{\textup{\texttt{~~~~cont\_CDF p (real o Y) $\wedge$ measurable\_CDF p (real o Y) $\Rightarrow$}}}}\\
{\mbox{\textup{\texttt{~~~~(Rel p (X$\rhd$Y) t = 1- $\int_{0}^{t}$f\textsubscript{X}(x) $\times$ F\textsubscript{Y}(x) $d$x)}}}}}
\end{theorem}
The proof of this theorem is based on $Pr(Y < X < t) = \int_{0}^{t}f_{X}(x) \times F_{Y}(x)\ dx$, which has been verified in \cite{elderhalli2019probabilistic} using the properties of the Lebesgue integral and independence of random variables. The DRBD \textit{after} operator represents a situation where the system continues to work until two components fail in sequence. Thus, the above expressions allow us to verify the reliability expression of the \textit{after} operator, as the DRBD and DFT events complement one another.
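To illustrate this expression numerically (with independent exponential lifetimes chosen only as an example), $Pr(Y < X \leq t)$ has a closed form that a Monte Carlo estimate of the reliability of $X \rhd Y$ reproduces:

```python
import math
import random

def rel_after_exact(t, lam_x=1.0, lam_y=2.0):
    # Rel (X |> Y) (t) = 1 - int_0^t f_X(x) * F_Y(x) dx, evaluated in
    # closed form for independent exponential X and Y
    p_fail = (1 - math.exp(-lam_x * t)) \
             - lam_x / (lam_x + lam_y) * (1 - math.exp(-(lam_x + lam_y) * t))
    return 1 - p_fail

def rel_after_mc(t, lam_x=1.0, lam_y=2.0, n=200_000, seed=5):
    rng = random.Random(seed)
    up = 0
    for _ in range(n):
        x, y = rng.expovariate(lam_x), rng.expovariate(lam_y)
        # X |> Y survives past t unless Y fails first and X fails by t
        up += not (y < x <= t)
    return up / n

print(round(rel_after_exact(1.0), 4))  # 0.6846
```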
\subsection{DRBD Constructs and their Reliability Expressions}
As mentioned previously, the spare construct can have three variants according to the type of the spare block. We formally define the generic case, i.e., the warm spare (WSP) as:
\begin{definition}
\label{DEF:WSP}
{\small\textup{\texttt{\texttt{$\vdash$ $\forall$Y X\textsubscript{a} X\textsubscript{d}. R\_WSP Y X\textsubscript{a} X\textsubscript{d} = (X\textsubscript{a} $\rhd$ Y) $\cdot$ (Y $\rhd$ X\textsubscript{d}) }}}}
\end{definition}
Since the DRBD and DFT events complement one another, we use our formalization of the probability of failure of the warm spare gate \cite{elderhalli2019probabilistic} to verify the reliability of the WSP construct:
\begin{theorem}
\label{thm:WSP_prob}
{\small
\textup{\texttt{$\vdash$ $\forall$p Y X\textsubscript{a} X\textsubscript{d} t f\textsubscript{Y} f\textsubscript{X\textsubscript{a}Y} f\textsubscript{X\textsubscript{a}|Y}. 0 $\leq$ t $\wedge$}\\
{\mbox{\texttt{~~($\forall$s. ALL\_DISTINCT [X\textsubscript{a} s; X\textsubscript{d} s; Y s]) $\wedge$ DISJOINT\_WSP Y X\textsubscript{a} X\textsubscript{d} t $\wedge$}}}\\
{\mbox{\texttt{~~rv\_gt0\_ninfinity [X\textsubscript{a}; X\textsubscript{d}; Y] $\wedge$ den\_gt0\_ninfinity f\textsubscript{X\textsubscript{a}Y} f\textsubscript{Y} f\textsubscript{X\textsubscript{a}|Y} $\wedge$}}}\\
{\mbox{\texttt{~~($\forall$y. cond\_density lborel lborel p (real o X\textsubscript{a})(real o Y) y f\textsubscript{X\textsubscript{a}Y} f\textsubscript{Y} f\textsubscript{X\textsubscript{a}|Y}) $\wedge$}}}\\
{\mbox{\texttt{~~indep\_var p lborel (real o X\textsubscript{d}) lborel (real o Y) $\wedge$}}}\\
{\mbox{\texttt{~~cont\_CDF p (real o X\textsubscript{d}) $\wedge$ measurable\_CDF p (real o X\textsubscript{d}) $\Rightarrow$}}}\\
{\mbox{\texttt{~~$\big($Rel p (R\_WSP Y X\textsubscript{a} X\textsubscript{d}) t) =}}}\\
{\mbox{\texttt{~~~1 - $(\int_{0}^{t} f\textsubscript{Y}(y) * (\int_{y}^{t}$ f\textsubscript{(X\textsubscript{a}|Y=y)}(x) dx$)$ dy + $\int_{0}^{t}$ f\textsubscript{Y}(y)F\textsubscript{X\textsubscript{d}}(y)dy$)\big)$
}}}}}
\end{theorem}
\noindent where \texttt{ALL\_DISTINCT} ensures that the main and spare parts cannot fail at the same time, \texttt{DISJOINT\_WSP Y X\textsubscript{a} X\textsubscript{d} t} ensures that until time \texttt{t}, the spare can only fail in one of its states and \texttt{den\_gt0\_ninfinity}
ascertains the proper values of the density functions: joint ($f_{X_{a}Y}$), marginal ($f_{Y}$) and conditional ($f_{X_{a}|Y}$). The description of these assumptions is listed in Table \ref{table:assumption} \cite{elderhalli2019probabilistic}. More details about the formal definitions of these functions can be found in \cite{elderhalli2019probabilistic}.
Theorem \ref{thm:WSP_prob} is verified by first defining a conditional density function \texttt{f\textsubscript{X\textsubscript{a}|Y}} for the random variables (\texttt{real o X\textsubscript{a}}) and (\texttt{real o Y}). This is required as the failure of the spare part is affected by the time of failure of the main part. We then prove the expression based on the probability of failure of the DFT spare gate, which is verified using the properties of the Lebesgue integral.
We formally define the cold spare construct (CSP), which is a special case of the WSP, as:
\begin{definition}
\label{DEF:CSP}
{\small\textup{\texttt{\texttt{$\vdash$ $\forall$Y X. R\_CSP Y X = ($\lambda$s. if Y s < X s then X s else PosInf)}}}}
\end{definition}
This definition means that the CSP construct will continue to work until the main part fails; the spare part is then activated and eventually fails in its active state. It is worth noting that, since only one state of the spare part affects the behavior of the CSP, namely the active state, we do not use any subscript, as the dormant state plays no role here. We verify the reliability of the CSP construct based on the probability of failure of the CSP gate as \cite{elderhalli2019probabilistic}:
\begin{theorem}
\label{thm:CSP_prob}
{\small
\textup{\texttt{$\vdash$ $\forall$p X Y f\textsubscript{XY} f\textsubscript{Y} f\textsubscript{X|Y} t. 0 $\leq$ t $\wedge$}\\
{\mbox{\texttt{~~rv\_gt0\_ninfinity [X; Y] $\wedge$ den\_gt0\_ninfinity f\textsubscript{XY} f\textsubscript{Y} f\textsubscript{X|Y} $\wedge$}}}\\
{\mbox{\texttt{~~($\forall$y. cond\_density lborel lborel p (real o X)(real o Y) y f\textsubscript{XY} f\textsubscript{Y} f\textsubscript{X|Y}) $\Rightarrow$}}}\\
{\mbox{\texttt{~~$\big($Rel p (R\_CSP Y X) t) =~1 - $(\int_{0}^{t} f\textsubscript{Y}(y) * (\int_{y}^{t}$ f\textsubscript{(X|Y=y)}(x) dx$)$ dy $\big)$
}}}}}
\end{theorem}
The conditions required for this theorem are similar to the ones of Theorem \ref{thm:WSP_prob}, as the WSP exhibits the behavior of the CSP if the main part fails before the spare.
Finally, we define the hot spare construct (HSP) as:
\begin{definition}
\label{DEF:HSP}
{\small\textup{\texttt{\texttt{$\vdash$ $\forall$Y X. R\_HSP Y X = ($\lambda$s. max (Y s) (X s))}}}}
\end{definition}
This means that the HSP acts like the OR operator, where at least one of the main and spare parts should continue to work for the HSP construct to maintain its successful behavior. Therefore, we can use Theorem \ref{thm-rel-or} to express the reliability of the HSP construct.
We formally define the series structure as:
\begin{definition}
\label{def:DRBD_series}
\small{\textup{\texttt{$\vdash \forall$Y s. DRBD\_series Y s = $\displaystyle\bigcap_{i\in s}$ (Y i)}}}
\end{definition}
We define the series structure as a function that accepts a group of sets, \texttt{Y}, that are indexed by the numbers in set \texttt{s} and returns the intersection of these sets.
The parallel structure is defined in a similar way but it returns the union of the sets rather than the intersection. We formally define it as:
\begin{definition}
\label{def:DRBD_parallel}
\small{\textup{\texttt{$\vdash \forall$Y s. DRBD\_parallel Y s = $\displaystyle\bigcup_{i\in s}$ (Y i)}}}
\end{definition}
The group of sets, \texttt{Y}, in both structures represents a family of events, i.e., \texttt{Y} will be instantiated later with DRBD events. The reliability expressions of the series and parallel structures are given in Table \ref{table_DRBD_structure}. We verify these expressions as:
\begin{theorem}
\label{thm:rel_series}
\emph{}\\
\small{\textup{\texttt{$\vdash \forall$p X t s. s $\neq$ \{\} $\wedge$ FINITE s $\wedge$}}}\\\mbox{\small{\textup{\texttt{~indep\_sets p ($\lambda$i. \{rv\_to\_event p X t i\}) s $\Rightarrow$}}}}\\
\mbox{\small{\textup{\texttt{~(prob p (DRBD\_series (rv\_to\_event p X t) s) =}}}}\\
\mbox{\small{\textup{\texttt{~~Normal ($\displaystyle\prod_{i\in s}$ (real (Rel p (X i) t))))}}}}
\end{theorem}
\begin{theorem}
\label{thm:rel_parallel}
\emph{}\\
\mbox{\small{\textup{\texttt{$\vdash \forall$p X t s. s $\neq$ \{\} $\wedge$ FINITE s $\wedge$}}}}\\
\mbox{\small{\textup{\texttt{~indep\_sets p ($\lambda$i. \{rv\_to\_event p X t i\}) s $\wedge$}}}}\\
\mbox{\small{\textup{\texttt{~($\forall$i. i $\in$ s $\Rightarrow$ rv\_to\_event p X t i $\in$ events p) $\Rightarrow$}}}}\\
\mbox{\small{\textup{\texttt{~(prob p (DRBD\_parallel (rv\_to\_event p X t) s) =}}}}\\
\mbox{\small{\textup{\texttt{~~1 - Normal ($\displaystyle\prod_{i\in s}$ (real (1 - Rel p (X i) t))))}}}}
\end{theorem}
\noindent where \texttt{s$\neq$\{\} $\wedge$ FINITE s} ensures that the set of indices, \texttt{s}, is nonempty and finite. The reliability of the series structure is verified based on the independence of the input events using \texttt{indep\_sets}, which ensures that, for the probability space \texttt{p}, the given group of sets (\texttt{$\lambda$i. \{rv\_to\_event p X t i\}}), indexed by the numbers in set \texttt{s}, are independent, as described in Table \ref{table:assumption}. The family of sets (\texttt{$\lambda$i. \{rv\_to\_event p X t i\}}) represents the DRBD events of the group of time-to-failure functions, \texttt{X}. This is defined as:
\begin{definition}
\small{\textup{\texttt{$\vdash\forall$p X t. rv\_to\_event p X t = ($\lambda$i. DRBD\_event p (X i) t)}}}
\end{definition}
The function \texttt{rv\_to\_event} enables us to create the group of \texttt{DRBD\_event} of time-to-failure functions of system blocks (\texttt{X}).
Based on the independence of these sets and the definition of the series structure (intersection of sets), we verify that the probability of the series structure is equal to the product of the reliabilities of the individual blocks (\texttt{Rel p (X i) t}), where \texttt{i$\in$s}. The product function ($\scriptstyle\prod$) in HOL4 returns a real value while the probability returns an \texttt{extreal}; therefore, it is required to typecast the product to \texttt{extreal} using \texttt{Normal}. Similarly, the product function takes real-valued functions, so it is required to typecast the reliability function (\texttt{Rel}) to real using the \texttt{real} function. The parallel structure is verified in a similar way. We replace the parallel structure (the union of events) with the complement of the intersection of the complements of the events. Then, we verify that the probability of this complement equals one minus the probability of the intersection of the complements. This requires the added condition that all DRBD events created using \texttt{rv\_to\_event} belong to the events of the probability space \texttt{p}.
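These closed-form expressions are easy to sanity-check numerically. The following sketch (ours, not part of the HOL development) mirrors Theorems \ref{thm:rel_series} and \ref{thm:rel_parallel} for independent blocks with given reliabilities:

```python
from math import prod

def series_rel(rels):
    # Series structure: all blocks must work; for independent blocks
    # the reliabilities multiply (cf. Theorem "rel_series").
    return prod(rels)

def parallel_rel(rels):
    # Parallel structure: the system fails only if every block fails,
    # so R = 1 - prod(1 - R_i) (cf. Theorem "rel_parallel").
    return 1 - prod(1 - r for r in rels)

print(series_rel([0.9, 0.9]))    # ~ 0.81
print(parallel_rel([0.9, 0.9]))  # ~ 0.99
```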
In order to express the series and parallel structures using DRBD operators, we verify that these structures are equal to the DRBD events of the \texttt{nR\_AND} and \texttt{nR\_OR}, respectively:
\begin{theorem}
\label{thm:nR_AND_series}
{\textup{\small{\texttt{$\vdash\forall$p X t s.~FINITE s $\wedge$ s $\neq$ \{\} $\Rightarrow$}}}}\\
\mbox{\textup{\small{\texttt{~~~(DRBD\_event p (nR\_AND X s) t = DRBD\_series (rv\_to\_event p X t) s)}}}}
\end{theorem}
\begin{theorem}
\label{thm:nR_OR_parallel}
{\textup{\small{\texttt{$\vdash\forall$p X t s.~FINITE s $\wedge$ 0 $\leq$ t $\Rightarrow$}}}}\\
\mbox{\textup{\small{\texttt{~~~(DRBD\_event p (nR\_OR X s) t = DRBD\_parallel (rv\_to\_event p X t) s)}}}}
\end{theorem}
\noindent We verify Theorems \ref{thm:nR_AND_series} and \ref{thm:nR_OR_parallel} by inducting on set \texttt{s} using \texttt{SET\_INDUCT\_TAC}, which creates two subgoals: one for the empty set and another for inserting an element into a finite set. Furthermore, we use the fact that the DRBD events of the AND and OR operators equal the intersection and the union of the individual events, respectively. For Theorem \ref{thm:nR_OR_parallel}, an additional condition, \texttt{0$\leq$t}, is required to be able to manipulate the sets and reach the final form of the theorem.
Interestingly, these structures can be easily extended to model and verify more complex structures, such as two-level structures, i.e., series-parallel and parallel-series structures. We formally verify the reliability of the series-parallel structure as:
\begin{theorem}
\label{thm:rel_series_parallel}
\mbox{\textup{\small{\texttt{$\vdash \forall$p X t s J.}}}}\\
\mbox{\textup{\small{\texttt{~indep\_sets p}}}}\\
\mbox{\textup{\small{\texttt{~~($\lambda$i. \{rv\_to\_event p X t i\}) ($\displaystyle\bigcup_{j\in J}$ (s j)) $\wedge$}}}}\\
\mbox{\textup{\small{\texttt{~($\forall$i. i $\in$ J $\Rightarrow$ s i $\neq$ \{\} $\wedge$ FINITE (s i)) $\wedge$}}}}\\
\mbox{\textup{\small{\texttt{~~FINITE J $\wedge$ J $\neq$ \{\} $\wedge$ disjoint\_family\_on s J $\Rightarrow$}}}}\\
\mbox{\textup{\small{\texttt{~(prob p}}}}\\
\mbox{\textup{\small{\texttt{~~~(DRBD\_series}}}}\\
\mbox{\textup{\small{\texttt{~~~~($\lambda$j. DRBD\_parallel}}}}\\
\mbox{\textup{\small{\texttt{~~~~~(rv\_to\_event p X t) (s j)) J) =}}}}\\
\mbox{\textup{\small{\texttt{~~Normal}}}}\\
\mbox{\textup{\small{\texttt{~~~($\displaystyle\prod_{j\in J}$ (1 - $\displaystyle\prod_{i\in (s\ j)}$ (real (1 - Rel p (X i) t)))))}}}}
\end{theorem}
We formally verify the reliability of the parallel-series structure as:
\vspace{50pt}
\begin{theorem}
\label{thm:rel_parallel_series}
\emph{}\\
\mbox{\textup{\small{\texttt{$\vdash \forall$p X t s J.}}}}\\
\mbox{\textup{\small{\texttt{~indep\_sets p}}}}\\
\mbox{\textup{\small{\texttt{~~($\lambda$i. \{rv\_to\_event p X t i\}) ($\displaystyle\bigcup_{j\in J}$ (s j)) $\wedge$}}}}\\
\mbox{\textup{\small{\texttt{~($\forall$i. i $\in$ $\displaystyle\bigcup_{j\in J}$ (s j) $\Rightarrow$ rv\_to\_event p X t i $\in$ events p) $\wedge$}}}}\\
\mbox{\textup{\small{\texttt{~($\forall$i. i $\in$ J $\Rightarrow$ s i $\neq$ \{\} $\wedge$ FINITE (s i)) $\wedge$}}}}\\
\mbox{\textup{\small{\texttt{~FINITE J $\wedge$ J $\neq$ \{\} $\wedge$ disjoint\_family\_on s J $\Rightarrow$}}}}\\
\mbox{\textup{\small{\texttt{~(prob p}}}}\\
\mbox{\textup{\small{\texttt{~~(DRBD\_parallel}}}}\\
\mbox{\textup{\small{\texttt{~~~($\lambda$j. DRBD\_series}}}}\\
\mbox{\textup{\small{\texttt{~~~~(rv\_to\_event p X t) (s j)) J) =}}}}\\
\mbox{\textup{\small{\texttt{~~1 - }}}}\\
\mbox{\textup{\small{\texttt{~~Normal}}}}\\
\mbox{\textup{\small{\texttt{~~~($\displaystyle\prod_{j\in J}$ (1 - $\displaystyle\prod_{i \in (s\ j)}$ (real (Rel p (X i) t)))))}}}}
\end{theorem}
The main idea in building these two-level structures is to partition the family of blocks into distinct groups, where we use a set, \texttt{J}, to index these partitions, i.e., it includes the number of groups in the first top level. Then, for each group in this top level, we have another set, \texttt{\{s j| j $\in$ J\}}, that includes the indices of the blocks in the second level, i.e., the subgroups. For example, consider the parallel-series structure of Figure~\ref{fig:rbd_structures}(d): if $n=m=1$, then the outer parallel structure has two series structures, where each series structure has two blocks. Thus, \texttt{J = \{0;1\}}. For each \texttt{j$\in$J}, we have a certain set \texttt{s j} that has the indices of the blocks in the inner series structure. Thus, \texttt{s = ($\lambda$j. if j = 0 then \{0;1\} else \{2;3\})}. The same concept applies to the series-parallel structure. Therefore, the structure of the DRBD can be determined based on the given sets of indices.
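As a concrete illustration of this indexing scheme (a sketch in Python, ours), the parallel-series reliability of Theorem \ref{thm:rel_parallel_series} can be evaluated directly from the index sets \texttt{J} and \texttt{s} of the example above; the block reliabilities are illustrative:

```python
from math import prod

def parallel_series_rel(R, J, s):
    # Outer parallel of inner series structures:
    # R = 1 - prod_{j in J} (1 - prod_{i in s(j)} R[i])
    return 1 - prod(1 - prod(R[i] for i in s[j]) for j in J)

# The example from the text: J = {0;1}, s 0 = {0;1}, s 1 = {2;3}.
J = {0, 1}
s = {0: {0, 1}, 1: {2, 3}}
R = {0: 0.9, 1: 0.8, 2: 0.95, 3: 0.85}  # illustrative reliabilities
print(parallel_series_rel(R, J, s))
```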
We verify Theorems~\ref{thm:rel_series_parallel} and \ref{thm:rel_parallel_series} by extending the proofs of the series and parallel structures. However, it is required to deal with the intersection of unions in the case of the series-parallel structure and the union of intersections in the case of the parallel-series structure. Therefore, we need to extend the independence-of-sets properties to include the independence of unions and intersections of partitions of the events. We verify these properties as:
\begin{theorem}
\label{thm:indep_sets_collect_sigma_BIGUNION}
\small{\textup{\texttt{$\vdash \forall$p s J Y. indep\_sets p ($\lambda$i. \{Y i\}) $\bigcup_{j \in J}$ (s j) $\wedge$ J $\neq$ \{\} $\wedge$}}}\\
\mbox{\small{\textup{\texttt{~~($\forall$i. i $\in$ J $\Rightarrow$ countable (s i)) $\wedge$ FINITE J $\wedge$
disjoint\_family\_on s J $\Rightarrow$}}}}\\
\mbox{\small{\textup{\texttt{~~indep\_sets p ($\lambda$j. \{$\bigcup_{i \in s\ j}$ (Y i)\}) J}}}}
\end{theorem}
\begin{theorem}
\label{thm:indep_sets_collect_sigma_BIGINTER}
\small{\textup{\texttt{\hspace{-2pt}$\vdash\forall$p s J Y. indep\_sets p ($\lambda$i. \{Y i\}) $\bigcup_{j \in J}$ (s j) $\wedge$ J $\neq$ \{\} $\wedge$}}}\\
\mbox{\small{\textup{\texttt{~~($\forall$i. i $\in$ J $\Rightarrow$ countable (s i) $\wedge$ s i $\neq$ \{\}) $\wedge$ FINITE J $\wedge$}}}}\\
\mbox{\small{\textup{\texttt{~~disjoint\_family\_on s J $\wedge$ ($\forall$i. i $\in$ $\bigcup_{i \in J}$ (s j) $\Rightarrow$ Y i $\subset$ m\_space p) $\Rightarrow$}}}}\\
\mbox{\small{\textup{\texttt{~~indep\_sets p ($\lambda$j. \{$\bigcap_{i \in s\ j}$ (Y i)\}) J}}}}
\end{theorem}
\noindent where set \texttt{J} includes the indices of the partitions and \texttt{s} has the indices of the individual blocks of each partition, \texttt{disjoint\_family\_on} ensures that the indices of the blocks in different partitions are disjoint and \texttt{indep\_sets p ($\lambda$i. \{Y i\}) $\bigcup_{j \in J}$ (s j)} ensures the independence of the family of blocks \texttt{\{Y i\}} where the indices of the individual blocks are given by the union of \texttt{s}. In order to verify Theorems \ref{thm:indep_sets_collect_sigma_BIGUNION} and \ref{thm:indep_sets_collect_sigma_BIGINTER}, we need the fact that the $\sigma$-algebras generated by \texttt{($\lambda$j. $\bigcup_{i\in s\ j}$\{Y i\})} with index set \texttt{J} are independent. Then we verify that, for every \texttt{j $\in$ J}, the set \texttt{\{$\bigcup_{i\in s\ j}$ \{Y i\}\}} is a subset of the $\sigma$-algebra generated by \texttt{$\bigcup_{i\in s\ j}$\{Y i\}}. Finally, based on these intermediate verified steps and the definition of \texttt{indep\_sets}, we are able to verify both theorems.
In order to verify the reliability of the series-parallel structure, we need to ensure the independence of the individual blocks. Therefore, it is required to combine the indices of all blocks into a single set using \texttt{$\scriptstyle\bigcup_{j\in J}$ (s j)} to be used with \texttt{indep\_sets}. To be able to use the reliability of the series structure in this proof, we use Theorem \ref{thm:indep_sets_collect_sigma_BIGUNION} to verify the independence of the unions of partitions of events. This means verifying that the parallel structures are independent, i.e., the probability of the intersection of these parallel structures equals the product of the reliabilities of the parallel structures. Moreover, several assumptions related to sets \texttt{\{s i| i $\in$ J\}} and \texttt{J} are required, which include that these sets are finite and nonempty. Finally, it is required that every block has a unique index, which is ensured using \texttt{disjoint\_family\_on}. The reliability of the parallel-series structure is verified in a similar manner based on the reliability of the parallel structure. We verify the independence of the intersection of partitions of events rather than the union using Theorem \ref{thm:indep_sets_collect_sigma_BIGINTER}. In addition, it is required that all DRBD events belong to the events of the probability space.
We extend the reliability of the two-level series-parallel structure to verify the reliability of a more nested structure, i.e., series-parallel-series-parallel, as:
\begin{theorem}
\label{thm:nested-series-parallel}
\textup{\small{\texttt{$\vdash\forall$p X t s L A J.}}}\\
\mbox{\textup{\small{\texttt{~($\forall$i. i $\in$ nested\_BIGUNION s L A J $\Rightarrow$~rv\_to\_event p X t i $\in$ events p) $\wedge$}}}}\\
\mbox{\textup{\small{\texttt{~indep\_sets p ($\lambda$i. \{rv\_to\_event p X t i\}) (nested\_BIGUNION s L A J) $\wedge$}}}}\\
\mbox{\textup{\small{\texttt{~sets\_finite\_not\_empty s L A J $\Rightarrow$}}}}\\
\mbox{\textup{\small{\texttt{~(prob p}}}}\\
\mbox{\textup{\small{\texttt{~~(DRBD\_series ($\lambda$j.}}}}\\
\mbox{\textup{\small{\texttt{~~~~DRBD\_parallel ($\lambda$a.}}}}\\
\mbox{\textup{\small{\texttt{~~~~~DRBD\_series ($\lambda$l.}}}}\\
\mbox{\textup{\small{\texttt{~~~~~~DRBD\_parallel (rv\_to\_event p X t) (s l)) (L a)) (A j)) J) =}}}}\\
\mbox{\textup{\small{\texttt{~~Normal}}}}\\
\mbox{\textup{\small{\texttt{~~~($\prod_{j\in J}$}}}}\\
\mbox{\textup{\small{\texttt{~~~~(1 - $\prod_{a\in (A\ j)}$(1 - $\prod_{l\in (L\ a)}$ (1 - $\prod_{i\in (s\ l)}$(real (1 - Rel p (X i) t)))))))}}}}
\end{theorem}
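Numerically, the nested expression in Theorem \ref{thm:nested-series-parallel} is an alternation of products over the four index sets; the following recursive sketch (ours, with illustrative index sets and reliabilities) mirrors it:

```python
from math import prod

def parallel(vals):
    # 1 - prod(1 - v): at least one block of the group works.
    return 1 - prod(1 - v for v in vals)

def nested_rel(R, s, L, A, J):
    # series over J of (parallel over A j of (series over L a of
    # (parallel over s l of the individual block reliabilities)))
    return prod(
        parallel(
            prod(
                parallel(R[i] for i in s[l])
                for l in L[a])
            for a in A[j])
        for j in J)

# Degenerate check: all-singleton index sets collapse to a plain series.
R = {0: 0.9, 1: 0.8}
out = nested_rel(R,
                 s={0: {0}, 1: {1}}, L={0: {0}, 1: {1}},
                 A={0: {0}, 1: {1}}, J={0, 1})
```

Deeper hierarchies are obtained by adding further alternating levels, exactly as described for \texttt{nested\_BIGUNION}.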
For this four-level nested structure, we have four sets (indexed sets) that determine the structure of the DRBD, which are: \texttt{J}, \texttt{A}, \texttt{L} and \texttt{s}. This is similar to the two-level nested structure but with a deeper hierarchy. Therefore, in order to combine the indices of all the individual blocks in the DRBD in a single set, we define \texttt{nested\_BIGUNION s L A J} to take the union of the elements of all \texttt{s i}, where \texttt{i$\in$ L a}, \texttt{a$\in$ A j} and \texttt{j$\in$J}. This is done in a hierarchical manner and can be extended easily to deeper levels. We use the previously mentioned function to ensure that all the individual events belong to the probability events and are independent as well. Moreover, it is required to ensure that the sets are finite, disjoint and nonempty, just like the series-parallel structure. We combine these set-related conditions using the function \texttt{sets\_finite\_not\_empty}. Finally, we verify Theorem \ref{thm:nested-series-parallel} in two main steps. The first step is to verify the reliability of the outer series-parallel structure, which requires verifying the independence of the intersections of unions of partitions of the DRBD blocks, i.e., that the inner series-parallel structures are independent. The second step is to verify the reliability of the inner series-parallel structures, which can be done based on some set manipulation. This theorem can be used to verify even deeper structures, which would require verifying the independence of more nested structures. We use Theorem \ref{thm:nested-series-parallel} to verify the reliability of the series-parallel-series structure as it represents a special case of the series-parallel-series-parallel structure, where each of the innermost parallel structures has only one block. Our formalization follows the natural definitions of parallel and series structures.
Moreover, our verified lemmas of independence allow verifying deeper structures, which makes our formalization flexible and applicable to model the most complex systems. The proof script, which is available at \cite{Yassmeen-DRBDcode}, of our formalization required around 3200 lines. In the following section, we utilize our formalization in the verification of the reliability of two real-world systems.
\section{Applications}
\label{case_study}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.8]{DBW_DRBD1.jpg}
\caption{DRBD of drive-by-wire system}
\label{fig:DBW}
\end{figure}
To demonstrate the applicability of our proposed DRBD algebra, we present the formal reliability analysis of a drive-by-wire system (DBW) \cite{altby2014design} and a shuffle-exchange network (SEN) \cite{bistouni2014analyzing} to verify generic expressions that are independent of the failure distribution of the system components, i.e., we can use different types of distributions to model the failure of system components as long as they satisfy the required conditions, such as continuity. \\
\indent The DRBD of the DBW system, shown in Figure~\ref{fig:DBW},
models the successful behavior of a key part of the modern automotive industry. This system controls the functionality of the
vehicle using a computerized controller. We provide the analysis of
the throttle and brake subsystems. The throttle subsystem continues to work as long as the throttle (TF) and the engine (EF) are working. In addition, the system's successful operation requires the operation of the brake control unit
(BCU). The system includes a primary control
unit (PC) with a warm spare (SC) that replaces the main part after failure. Finally, the system needs the operation of the throttle sensor (TS) and the brake sensor (BS). The DRBD of this system is modeled as a series structure with a spare construct. We express the structure function of this DRBD using our operators:
\begin{equation*}
\small{\textup{\texttt{Q\textsubscript{DBW} = TF $\cdot$ EF $\cdot$ BCU $\cdot$ (R\_WSP PC SC\textsubscript{a} SC\textsubscript{d}) $\cdot$ TS $\cdot$ BS}}}
\end{equation*}
Then we verify the DBW reliability as:
\begin{thm}
\label{thm:Rel_DBW}
\textup{\small{\texttt{$\vdash\forall$p TF EF BCU PC SC\textsubscript{a} SC\textsubscript{d} TS BS t.}}}\\
\mbox{\textup{\small{\texttt{~DBW\_set\_req p TF EF BCU PC SC\textsubscript{a} SC\textsubscript{d} TS BS t $\Rightarrow$}}}}\\
\mbox{\textup{\small{\texttt{~(prob p (DRBD\_event p Q\textsubscript{DBW} t) =}}}}\\
\mbox{\textup{\small{\texttt{~~Rel p TF t * Rel p EF t * Rel p BCU t * Rel p (R\_WSP PC SC\textsubscript{a} SC\textsubscript{d}) t *}}}}\\ \mbox{\textup{\small{\texttt{~~Rel p TS t * Rel p BS t})}}}
\end{thm}
\noindent where \texttt{DBW\_set\_req} ensures the proper conditions for the independence of the blocks in the DBW system. In Figure~\ref{plot_DBW}, we evaluate, using MATLAB, the reliability of the DBW system assuming exponential distributions for the system components with failure rates as given in the figure and a dormancy factor of 0.5.
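The same evaluation can be sketched numerically in Python (ours). The failure rates below are illustrative placeholders, since the figure values are not listed in the text, and the warm-spare term uses a standard exponential warm-spare model -- an assumption for this sketch -- with the stated dormancy factor of 0.5:

```python
from math import exp

def wsp_rel(lp, ls, d, t, n=2000):
    # Standard exponential warm-spare model (an assumption here):
    # either the primary survives to t, or it fails at tau while the
    # spare -- dormant at rate d*ls until tau, active at rate ls
    # afterwards -- survives to t.  Midpoint-rule integration.
    h = t / n
    acc = 0.0
    for k in range(n):
        tau = (k + 0.5) * h
        acc += lp * exp(-lp * tau) * exp(-d * ls * tau) * exp(-ls * (t - tau)) * h
    return exp(-lp * t) + acc

# Illustrative failure rates (per hour); placeholders, not the figure's.
rates = {'TF': 1e-4, 'EF': 5e-5, 'BCU': 1e-4,
         'PC': 2e-4, 'SC': 2e-4, 'TS': 1e-4, 'BS': 1e-4}

def dbw_rel(t, rates, d=0.5):
    # Theorem "Rel_DBW": product of the series blocks with the warm
    # spare construct (PC with spare SC) in the middle.
    r = lambda k: exp(-rates[k] * t)
    return (r('TF') * r('EF') * r('BCU') *
            wsp_rel(rates['PC'], rates['SC'], d, t) *
            r('TS') * r('BS'))
```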
\begin{figure}[]
\centering
\makebox[1\textwidth]{
\includegraphics[scale=0.7]{DBW_rel_1.png}
}
\caption{Reliability of DBW system}
\label{plot_DBW}
\end{figure}
\begin{figure}[b]
\centering
{\includegraphics[scale=0.7]{SEN.jpg}}
\caption{DRBD of Shuffle-exchange Network with Spare Constructs}
\label{fig:SEN}
\end{figure}
In multi-processor systems, it is required to have an efficient communication method among system components, such as processors and memories. Multistage interconnection networks (MINs) can provide the necessary switching in multi-processor systems. A MIN consists of sources (inputs) and destinations (outputs) and is classified as either a single-path or a multiple-path MIN. In single-path MINs, there is only one possible path between each source and destination. Therefore, losing any of the intermediate connections may lead to a failure. A SEN is an example of a single-path MIN. In order to increase the reliability of such a network, additional switching elements are added to the network, which provide additional paths between each source and destination. A SEN having two paths between each source and destination is usually called SEN+. The terminal reliability analysis, which is the reliability of the connection between a given source and destination, is usually conducted using traditional RBDs \cite{bistouni2014analyzing}. Although the reliability of the system is increased going from SEN to SEN+, each source is always connected to a single switch, and the same applies to the destination. The failure of these single switches would lead to the failure of the connection. Therefore, we propose to further enhance the reliability of this connection by using spare parts for these single switches so they can be replaced after failure. The DRBD of the modified SEN+ is shown in Figure \ref{fig:SEN}, where $Y$ and $Z$ are the main single switches that are connected to the source and destination with their spares $Ys$ and $Zs$, respectively. The parallel structure in the middle represents the reliability model of the two alternative paths between the source and the destination. Therefore, this DRBD consists of a series of two spare constructs and one parallel structure that consists of two series structures.
Using our DRBD operators, we formally express the structure function of this DRBD as:
\begin{equation}
\begin{split}
\small{\textup{\texttt{Q\textsubscript{SEN}}}} = &\small{\textup{\texttt{ nR\_AND ($\lambda$i. if i = 0 then R\_WSP Y Ys\textsubscript{a} Ys\textsubscript{d}}}}\\
&{\small{\textup{\texttt{~~~~~~~~~~~~~~else if i = 1 then \big((nR\_AND X L1) + (nR\_AND X L2)\big)}}}}\\
&{\small{\textup{\texttt{~~~~~~~~~~~~~~else R\_WSP Z Zs\textsubscript{a} Zs\textsubscript{d}) \{0; 1; 2\}}}}}
\end{split}
\end{equation}
Thus, the outer series structure is expressed using the \texttt{nR\_AND} operator over the set $\{0;1;2\}$ as it comprises three substructures, i.e., two spare constructs and one parallel structure.
In order to re-utilize the verified expressions of reliability, it is required to express this DRBD using the series and parallel structures. Therefore, we verify that the DRBD event of the \texttt{Q\textsubscript{SEN}} is equal to a nested series-parallel-series structure as:
\begin{theorem}
\label{thm:SEN_nR_AND}
\textup{\small{\texttt{$\vdash\forall$p X Y Ys\textsubscript{a} Ys\textsubscript{d} Z Zs\textsubscript{a} Zs\textsubscript{d} t L1 L2.}}}\\
\mbox{\textup{\small{\texttt{~DISJOINT3 \{0; 3\} L1 L2 $\wedge$ FINITE L1 $\wedge$ FINITE L2 $\wedge$ L1 $\neq$ \{\} $\wedge$ L2 $\neq$ \{\} $\Rightarrow$}}}}\\
\mbox{\textup{\small{\texttt{~(DRBD\_event p Q\textsubscript{SEN} t =}}}}\\
\mbox{\textup{\small{\texttt{~~DRBD\_series ($\lambda$j.}}}}\\
\mbox{\textup{\small{\texttt{~~~~DRBD\_parallel ($\lambda$a.}}}}\\
\mbox{\textup{\small{\texttt{~~~~~~DRBD\_series ($\lambda$i.}}}}\\
\mbox{\textup{\small{\texttt{~~~~~~~~event\_set}}}}\\
\mbox{\textup{\small{\texttt{~~~~~~~~~[(DRBD\_event p (R\_WSP Y Ys\textsubscript{a} Ys\textsubscript{d}) t,0);}}}}\\
\mbox{\textup{\small{\texttt{~~~~~~~~~~(DRBD\_event p (R\_WSP Z Zs\textsubscript{a} Zs\textsubscript{d}) t,3)]}}}}\\
\mbox{\textup{\small{\texttt{~~~~~~~~~(rv\_to\_event p X t) i)}}}}\\
\mbox{\textup{\small{\texttt{~~~~~~~ind\_set [\{0\}; L1; L2; \{3\}] a))}}}}\\
\mbox{\textup{\small{\texttt{~~~~~(ind\_set [\{0\}; \{1; 2\}; \{3\}] j)) \{0; 1; 2\})}}}}
\end{theorem}
\noindent where \texttt{DISJOINT3} ensures that all sets are disjoint. Since \texttt{DRBD\_series} accepts a group of indexed sets, we define a function \texttt{event\_set} that accepts a list of pairs, each composed of a DRBD event and its index. This function also accepts the remaining blocks of the DRBD that have their indices embedded in a set (that can be generic of any size), such as the parallel structure of the SEN. We also define \texttt{ind\_set} that accepts a list of sets and returns a group of indexed sets. Since we are dealing with a series-parallel-series structure, we need three sets to identify the hierarchy of this nested structure. Set $\{0;1;2\}$ in Theorem \ref{thm:SEN_nR_AND} indicates that the outer series structure has three elements, i.e., three parallel structures. \texttt{ind\_set [\{0\}; \{1;2\}; \{3\}]} indicates that the first parallel structure has only one series structure with index $0$, the second parallel structure has two series structures with indices $1$ and $2$, and the third parallel structure has only one series structure with index $3$. Finally, \texttt{ind\_set [\{0\}; L1; L2; \{3\}]} implies that the first series structure has only one element with index $0$, the second and third series structures have an arbitrary number of blocks indexed by $L1$ and $L2$, and the last series structure has one element with index $3$. We verify Theorem \ref{thm:SEN_nR_AND} using Theorem \ref{thm:nR_AND_series} and the equivalence of the event of the OR with the union of events, together with some set-related theorems.
Based on Theorem \ref{thm:SEN_nR_AND}, we verify a generic expression for the reliability of the SEN system:
\begin{theorem}
\label{thm:Rel_SEN}
\textup{\small{\texttt{$\vdash\forall$p X Y Ys\textsubscript{a} Ys\textsubscript{d} Z Zs\textsubscript{a} Zs\textsubscript{d} t L1 L2.}}}\\
\mbox{\textup{\small{\texttt{~SEN\_set\_req p L1 L2 (ind\_set [\{0\}; L1; L2; \{3\}])}}}}\\
\mbox{\textup{\small{\texttt{~~~(ind\_set [\{0\}; \{1; 2\}; \{3\}]) \{0; 1; 2\}}}}}\\
\mbox{\textup{\small{\texttt{~~~(event\_set [(DRBD\_event p (R\_WSP Y Ys\textsubscript{a} Ys\textsubscript{d}) t,0);}}}}\\
\mbox{\textup{\small{\texttt{~~~~~~~~~~~~~~~(DRBD\_event p (R\_WSP Z Zs\textsubscript{a} Zs\textsubscript{d}) t,3)] (rv\_to\_event p X t)) $\Rightarrow$}}}}\\
\mbox{\textup{\small{\texttt{~(prob p (DRBD\_event p Q\textsubscript{SEN} t) =}}}}\\
\mbox{\textup{\small{\texttt{~~Rel p (R\_WSP Y Ys\textsubscript{a} Ys\textsubscript{d}) t * Rel p (R\_WSP Z Zs\textsubscript{a} Zs\textsubscript{d}) t *}}}}\\
\mbox{\textup{\small{\texttt{~~(1 - (1 - Normal ($\prod_{l\in L1}$ (real (Rel p (X l) t)))) *}}}}\\
\mbox{\textup{\small{\texttt{~~~~~~~(1 - Normal ($\prod_{l\in L2}$ (real (Rel p (X l) t))))))}}}}
\end{theorem}
\noindent where \texttt{SEN\_set\_req} ensures the required conditions of the input sets, including that the sets are finite and nonempty. It also ensures the independence of the input events over the probability space and that they belong to the probability events. We first rewrite the goal using Theorem \ref{thm:SEN_nR_AND}, then we use the reliability of the series-parallel-series structure to verify the final expression. The reliability of the spare constructs can be further rewritten using Theorem \ref{thm:WSP_prob} given that the required conditions are ensured, such as the continuity of the CDFs. The final theorem with the expressions of the reliability of the spare constructs is available in \cite{Yassmeen-DRBDcode}. Finally, we evaluate the reliability of the SEN system assuming the same failure rate of $1\times 10^{-5}$ for all switching elements. We also assume that each series structure has 16 switching elements. We evaluate the reliability for the SEN system without and with spare parts with a dormancy factor of 0.1, as shown in Figure~\ref{plot}. This result shows that considering the spares in the reliability analysis leads to a more reliable and realistic system model than traditional RBDs.
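This evaluation can be reproduced with a short numerical sketch (ours) implementing the expression of Theorem \ref{thm:Rel_SEN} with the stated rate of $1\times 10^{-5}$, 16 switching elements per path and dormancy factor 0.1; the spare constructs use a standard exponential warm-spare model, which is an assumption of this sketch:

```python
from math import exp

def wsp_rel(lp, ls, d, t, n=2000):
    # Standard exponential warm-spare model (assumed): primary survives
    # to t, or fails at tau while the spare -- dormant at rate d*ls
    # until tau, active afterwards -- survives to t.
    h = t / n
    acc = 0.0
    for k in range(n):
        tau = (k + 0.5) * h
        acc += lp * exp(-lp * tau) * exp(-d * ls * tau) * exp(-ls * (t - tau)) * h
    return exp(-lp * t) + acc

def sen_rel(t, lam=1e-5, n_path=16, d=0.1, spares=True):
    # Theorem "Rel_SEN":
    # R_WSP(Y) * R_WSP(Z) * (1 - (1 - prod_L1 R) * (1 - prod_L2 R)),
    # with identical switching elements on both 16-element paths.
    r = exp(-lam * t)
    middle = 1 - (1 - r ** n_path) ** 2
    end = wsp_rel(lam, lam, d, t) if spares else r
    return end * end * middle
```

Comparing \texttt{spares=True} against \texttt{spares=False} reproduces the qualitative gap shown in Figure~\ref{plot}.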
\begin{figure}[!t]
\centering
\makebox[1\textwidth]{
{\includegraphics[scale=0.7]{SEN_rel_1.png}}}
\caption{Reliability of SEN with/without spare constructs}
\label{plot}
\end{figure}
To sum up, we are able to provide a generic expression for the reliability of the SEN+ system, verified using HOL theorem proving, which cannot be obtained using any other formal method. In addition, through the verified reliability expressions of the SEN+ and DBW systems, we demonstrated that our formalization is flexible and can model complex systems with an arbitrary number of blocks by encoding their hierarchy using index sets that can be instantiated later to match a specific system structure, which is an added feature of our formalized algebra.
\section{Conclusion }
\label{Conclusion}
In this work, we proposed a new algebra to analyze dynamic reliability block diagrams (DRBDs). We developed the HOL formalization of this algebra in HOL4, which ensures its correctness and allows conducting the analysis within a theorem prover. Furthermore, this algebra provides formalized generic expressions of reliability that cannot be verified using other formal tools. This HOL formalization is the first of its kind that takes into account the system dynamics by providing the HOL formal model of spare constructs and temporal operators. The proposed algebra is compatible with the reliability expressions of traditional RBDs as demonstrated by the reliability expressions of the series and parallel structures. It also facilitates extending the verified reliability expressions to model complex systems using nested structures.
Finally, we demonstrated the usefulness of this work by formally conducting the analysis of a drive-by-wire system and a shuffle-exchange network to verify generic expressions of reliability, which are independent of the failure probability distribution of system components. We plan to extend this algebra to include other DRBD constructs, such as state dependencies, with their formalization in HOL, which would provide a more complete framework to algebraically analyze DRBDs within HOL theorem proving.
\bibliographystyle{unsrt}
Quantum gravity with a non-vanishing cosmological constant
formulated in Ashtekar's spin-connection variables
\cite{4,5,6,7,8} has
interesting physical states given by the exponential of the
Chern-Simons functional \cite{8,9,10} and appropriate
transformations thereof.
In order to elucidate the physical meaning of such states
it is interesting to consider their restrictions to
spatially homogeneous cosmological models. In a recent
paper \cite{0}, henceforth quoted as [I], we considered the
diagonal Bianchi
IX models with positive cosmological constant from this
point of view and found that five linearly independent
physical states in the metric representation could be
derived from the Chern-Simons functional. This set of
solutions was found to be in one to one correspondence
with the set of topologically different integration
contours which exist for the generalized
Fourier-transformation from the Ashtekar-representation to
the metric representation. Due to the positivity of the
cosmological constant the quantum states found in [I]
describe an expanding (or collapsing) classically
interpretable Lorentzian Universe at large scale
parameters. On the other hand, at sufficiently small
scale parameters the action defined by the exponent
of the wavefunction becomes imaginary and can be
associated only with a quantum mechanically allowed
Euclidean Universe. The two ``phases'' are separated
by a caustic surface in minisuperspace. It was found
that only one of the five linearly independent states
defines a normalizable probability distribution on this
caustic, that this state satisfies the no-boundary
condition of Hartle and Hawking \cite{15,16}
semi-classically for $\hbar \to 0$ (which means on
scales large compared to the Planck scale), and that,
again for $\hbar \to 0$, it picks out an initial condition
which evolves into a classically interpretable Lorentzian
Universe. For details and further literature we refer to [I].
It is now of interest to consider also what happens for
negative cosmological constant, even though it seems very
unlikely that our Universe has $\Lambda <0$ (the
age-problem resulting from recently measured high values
of the Hubble-parameter \cite{11}, the large measured ages
of globular star-clusters, and the observed high density of
galaxies with large red-shifts all seem to call for
$\Lambda >0$). Our motivation is rather to try the
method of [I] for a model-Universe which recollapses, i.e.,
for which quantum-mechanically
a classically interpretable Lorentzian evolution phase
is bounded for small {\em and} large values of the scale
parameter by Euclidean evolution phases. Therefore, we have
to expect the appearance of two caustic surfaces in the
minisuperspace of these models, one at small
and the other at large scale parameter.
Is there still a wavefunction, or are there even several,
which give a normalizable probability distribution on
these surfaces, and how are these wavefunctions related
to the no-boundary state?
To answer these questions we apply in section II the method
of [I] to obtain expressions for again five linearly
independent physical states and identify the caustic
surfaces in minisuperspace. In section III we determine
the behavior of the absolute square of the wavefunctions
on the caustic and identify a single physical state which
gives a normalizable probability distribution in this way.
In section IV we summarize our results. Within the narrow
class of models we consider here they seem to rule out,
with high probability, a classically evolving Universe
with $\Lambda <0$.
\section{Quantum states generated by the Chern-Simons
solution}
In this section we want to construct solutions of the
Wheeler-DeWitt equation for the diagonal Bianchi type
IX model with a cosmological constant $\Lambda <0$,
\begin{equation}\label{1.0}
\Biggl \{
\Bigl \lbrack \hbar \partial_{\alpha}\!-\!\Phi_{,\alpha} \Bigr \rbrack
\Bigl \lbrack \hbar \partial_{\alpha}\!+\!\Phi_{,\alpha} \Bigr \rbrack
\!-\!\Bigl \lbrack \hbar \partial_{+}\!-\!\Phi_{,+} \Bigr \rbrack
\Bigl\lbrack \hbar \partial_{+}\!+\!\Phi_{,+} \Bigr \rbrack
\!-\!\Bigl \lbrack \hbar \partial_{-}\!-\!\Phi_{,-} \Bigr \rbrack
\Bigl \lbrack \hbar \partial_{-}\!+\!\Phi_{,-} \Bigr\rbrack
\!+\!3\,(8 \pi)^{2}\Lambda \,e^{6 \alpha}
\Biggr \}
\,\Psi(\alpha,\beta_{\pm};\Lambda)=0\, ,
\end{equation}
\begin{equation}\label{1.0+}
\mbox{where}\ \ \Phi:=2 \pi\,e^{2 \alpha}\, \mbox{Tr}\,
e^{2 \mbox{{\footnotesize
$\beta\mbox{\hspace{-1.4 ex}}\beta$}}} \qquad \mbox{and}\qquad
\mbox{\boldmath $\beta$}=
(\beta_{i j}):=\mbox{diag}\left(\beta_{+}+\sqrt{3}\
\beta_{-}, \beta_{+}-
\sqrt{3}\ \beta_{-}, -2\, \beta_{+} \right )\ .
\end{equation}
\\
In this notation $\partial_{+}$ and $\partial_{-}$ denote
derivatives with respect to the variables $\beta_{+}$ and
$\beta_{-}$, respectively. By writing the Wheeler-DeWitt equation
in the form (\ref{1.0})
we have assumed a specific factor-ordering, which is suggested
by a supersymmetric extension of the
model \cite{20,21,22}. A different factor-ordering is obtained
by considering (\ref{1.0}) with $\Phi$ replaced by $-\Phi$. In
the present paper, as in [I], we will restrict ourselves
to the factor-ordering as in (\ref{1.0}), while a brief comment
on the solutions in the second case $\Phi \to -\Phi$ is given in
appendix A.
If the expression (\ref{1.0+}) for $\Phi$ is inserted into the
Wheeler-DeWitt equation (\ref{1.0}), the following more explicit
form is obtained
\begin{equation}\label{1.1}
\biggl\{
\frac{\hbar^{2}}{3\, \pi^{2}} \left \lbrack
\frac{\partial^{2}}{\partial \alpha^{2}}
-\frac{\partial^{2}}{\partial \beta_{+}^{\,2}}
-\frac{\partial^{2}}{\partial \beta_{-}^{\,2}} \right
\rbrack
-\frac{2\, \hbar}{\pi}\, a^{2}
\ \mbox{Tr}\, e^{2 \mbox{{\footnotesize
$\beta\mbox{\hspace{-1.4 ex}}\beta$}} } +a^{4}
\ \mbox{Tr} \left (e^{4 \mbox{{\footnotesize
$\beta\mbox{\hspace{-1.4 ex}}\beta$}}}-
2\,e^{-2 \mbox{{\footnotesize
$\beta\mbox{\hspace{-1.4 ex}}\beta$}}} \right )
+\Lambda\,a^{6}
\biggr \}\,\Psi(\alpha,\beta_{\pm};\Lambda)=0\ ,
\end{equation}
\\
where we have introduced the mean scale factor
$a:=2\,e^{\alpha}$.
As in the case $\Lambda > 0$, solutions of (\ref{1.1}) can be obtained by a
transformation to the Ashtekar representation,
where the Chern-Simons functional, restricted to the
Bianchi type IX case, turns out to be an exact solution.
Two of the Fourier
integrals which occur in the transformation back to the metric
representation can be
carried out analytically without any loss of generality
and afterwards the same one dimensional integral representation
as in [I] is obtained:\footnote{
Here, in contrast to [I], the {\em total} action, including the part
which effects the similarity transformation between Ashtekar
and metric variables, has been defined as the exponent of the
integrand.}
\begin{equation}\label{1.2}
\Psi(\kappa, \beta_{\pm}; \lambda)\, \propto
\int\limits_{{\cal C}} \mbox{d} u\ \exp\left\lbrack\,
\frac{1}{\lambda}\,
f(\sin u;\kappa,\beta_{\pm})\,\right\rbrack\ ,
\end{equation}
\begin{equation}\label{1.3}
\mbox{with} \qquad
f(z;\kappa,\beta_{\pm}):=2\, \kappa^{2} e^{-2 \beta_{+}}\,
\frac{z+\cosh \bigl (2 \sqrt{3} \,\beta_{-} \bigr)}
{1-z^{2}}-z^{2}+2\, \kappa e^{2 \beta_{+}} \left (z-\cosh
\bigl (2 \sqrt{3}\, \beta_{-}
\bigr ) \right)-
\kappa e^{-4 \beta_{+}}\ .
\end{equation}
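It is convenient to record the explicit form of the saddle-point
condition $\partial f/\partial z=0$: multiplying by $(1-z^{2})^{2}$
(which is legitimate away from the poles $z=\pm 1$) one obtains the
quintic
\begin{displaymath}
2\, \kappa^{2} e^{-2 \beta_{+}} \left (z^{2}+2\,z \cosh
\bigl (2 \sqrt{3}\, \beta_{-} \bigr )+1 \right )
+2 \left (\kappa\, e^{2 \beta_{+}}-z \right )
\left (1-z^{2} \right )^{2}=0\ ,
\end{displaymath}
whose five roots are the saddle-points discussed below. (This
rearrangement is not spelled out in [I]; we note it here merely for
convenience.)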
\\
Here we have introduced the new variable $\kappa$ and parameter
$\lambda$
\begin{equation}\label{1.4}
\kappa:=\frac{1}{12}\,\Lambda\,a^{2}\ ,\qquad
\lambda:=\frac{\hbar \Lambda}{6 \pi}\ ,
\end{equation}
\\
thereby effectively reducing the number of parameters occurring
in (\ref{1.2}), and we shall also make use of the variables $\kappa_{j}$
defined by
\begin{equation}\label{1.4+}
\kappa_{j}:=\kappa\,e^{-\beta_{j}}\ ,
\end{equation}
\\
where the $\beta_{j}$ are the
entries of the diagonal anisotropy matrix {\boldmath $\beta$}.
The integration contour ${\cal C}$ in the integral-representation
(\ref{1.2}) can
be chosen quite freely, as long as a sufficiently strong
fall-off for the integrand and its $u$-derivatives at the
borders $\partial {\cal C}$ of ${\cal C}$ is guaranteed.
The proportionality factor left open in (\ref{1.2}) may
depend on $\lambda$ and will be fixed later.
While in the case $\Lambda > 0$ the curves of steepest
{\em descent} of $\Re f$ were of interest, now, due to the
different sign of $\Lambda$, the curves of steepest {\em
ascent} lead to suitable integration contours. Moreover,
new possibilities for the location of the saddle-points
occur, which we classify as follows:
\begin{itemize}
\item By choosing $|\kappa |$ sufficiently small at fixed
$\beta_{\pm}$, it is always possible to make all five saddle
points of $f(z)$ lie on the real axis of the complex
$z$-plane, defining the {\em Euclid I}-region of
minisuperspace. Note, however, that the corresponding
points in the $u$-plane of fig.\ref{grcsd-}
(here $u=\arcsin z$) are real-valued only for
$|z| \leq 1$, whereas real $z$-values with $|z|>1$ are
mapped into complex conjugate pairs of points on the axes
$\Re \, u=\pm \frac{\pi}{2}$
and periodic repetitions thereof.
\item Except for the case $\beta_{\pm} =0$, where all five
saddle-points are on the real $z$-axis, there is the
possibility for two of the saddle-points to become
complex in the $z$-plane, which defines the
{\em Lorentzian regime}.
\item For large values of $| \kappa |$ one always enters
the {\em Euclid II} region, where again all five
saddle-points of $f(z)$ become real-valued.
\end{itemize}
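To make this classification concrete, the saddle-points can be located
numerically: clearing the denominator $(1-z^{2})^{2}$ in $\partial
f/\partial z=0$ turns the condition into a quintic polynomial in $z$,
whose five roots are found with standard routines. The following Python
sketch is our own illustration (it is not part of the original
calculation); the sample parameter values are arbitrary:

```python
import numpy as np

def f(z, kappa, bp, bm):
    """f(z; kappa, beta_+, beta_-) from the integral representation (1.3)."""
    c = np.cosh(2.0 * np.sqrt(3.0) * bm)
    return (2.0 * kappa**2 * np.exp(-2.0 * bp) * (z + c) / (1.0 - z**2)
            - z**2
            + 2.0 * kappa * np.exp(2.0 * bp) * (z - c)
            - kappa * np.exp(-4.0 * bp))

def saddle_points(kappa, bp, bm):
    """Roots of df/dz = 0, written as a quintic after multiplying by (1-z^2)^2."""
    a = kappa**2 * np.exp(-2.0 * bp)
    b = kappa * np.exp(2.0 * bp)
    c = np.cosh(2.0 * np.sqrt(3.0) * bm)
    # 2a (z^2 + 2cz + 1) + 2 (b - z) (1 - z^2)^2 = 0, expanded in powers of z:
    coeffs = [-2.0, 2.0 * b, 4.0,
              2.0 * a - 4.0 * b, 4.0 * a * c - 2.0, 2.0 * a + 2.0 * b]
    return np.roots(coeffs)

if __name__ == "__main__":
    # An arbitrary sample point; two of the five saddles form a complex
    # conjugate pair here, i.e. this point lies in the Lorentzian regime.
    print(np.sort_complex(saddle_points(-0.8, 0.2, 0.5)))
```

For sufficiently small or large $|\kappa|$ at fixed $\beta_{\pm}$ all
five roots come out real, reproducing the Euclid I and Euclid II
regions described above.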
\noindent
Some typical locations of the saddle-points in these
different regimes of minisuperspace
and the corresponding curves of steepest ascent are
presented in fig.\ref{grcsd-}.
\vspace{0.5 cm}
\begin{figure}
\begin{center}
\hskip 0 cm
\psfig{figure=grcsa.eps,width=16 cm}
\end{center}
\caption{Saddle-points and curves of steepest ascent of
$\Re\,f$ in the complex $u$-plane for $\Lambda <0$. The
picture given in the Lorentzian case only holds for
$\kappa_{3}=\mbox{min}\, \{\kappa_{j}\}$. The remaining
case can easily be constructed by reflecting this figure on
the imaginary axis. The dashed curves come from $- \infty$
with respect to $\Re\,f$ and are given just for
completeness.}
\label{grcsd-}
\end{figure}
\vspace{0.5 cm}
\noindent
By passing from one of
these regions to another, a
{\em marginal} situation occurs, where two of the
saddle-points coalesce. We will refer to the corresponding
hypersurface in minisuperspace as the {\em caustic}; it
has been calculated and is plotted in fig.\ref{grkaustik-}.
In contrast to the case $\Lambda > 0$ the caustic obtained
here consists of an upper and a lower branch, which are
connected just by a single {\em point} at $ \kappa=-2,\,
\beta_{\pm}=0 $. Furthermore, there are {\em kinks} at
$\beta_{+} >0,\, \beta_{-}=0$ and also at the other half-rays
of the $\beta_{\pm}$-plane, related to the former by the
typical $\beta_{\pm}$-symmetries of diagonal Bianchi IX.
Obviously, an exactly isotropic Universe $\beta_{\pm}=0$ has to stay
purely Euclidean throughout its evolution. On the other hand,
``large'' Universes with Lorentzian geometry must become
very anisotropic. Apart from the possibility of a negative
cosmological constant very close to zero, which would allow
for large scale parameters even at $|\kappa |$-values of
order one, it seems impossible for the model under
investigation to describe the Universe observed today.
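The distinguished point $\kappa=-2$, $\beta_{\pm}=0$ joining the two
branches can be checked directly: on the caustic two saddle-points
coalesce, i.e. the quintic obtained from $\partial f/\partial z=0$ by
clearing the denominator $(1-z^{2})^{2}$ acquires a multiple root. A
small Python check (our own illustration):

```python
import numpy as np

def saddle_poly(kappa, bp, bm):
    """Coefficients (highest power first) of the quintic df/dz = 0,
    multiplied by (1 - z^2)^2."""
    a = kappa**2 * np.exp(-2.0 * bp)
    b = kappa * np.exp(2.0 * bp)
    c = np.cosh(2.0 * np.sqrt(3.0) * bm)
    return np.array([-2.0, 2.0 * b, 4.0,
                     2.0 * a - 4.0 * b, 4.0 * a * c - 2.0, 2.0 * a + 2.0 * b])

p = saddle_poly(-2.0, 0.0, 0.0)
# The quintic factorizes as -2 (z+1)^4 (z-2): a highly degenerate root at
# z = -1 (where, for beta_- = 0, the pole of f is removable) and a simple
# root at z = 2.
print(np.sort_complex(np.roots(p)))
```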
\vspace{0.3 cm}
\begin{figure}[t]
\begin{center}
\hskip 0 cm
\psfig{figure=grkaustmps.eps,height=9 cm}
\end{center}
\caption{The caustic in minisuperspace for $\Lambda <0$}
\label{grkaustik-}
\end{figure}
\noindent
Nevertheless, let us now construct a basis of solutions to
the Wheeler-DeWitt equation (\ref{1.1}) by choosing
topologically independent integration contours ${\cal C}$
in the representation (\ref{1.2}).
\vspace{0.5 cm}
\begin{figure}
\begin{center}
\hskip 0 cm
\psfig{figure=intcm.eps,height=6 cm}
\end{center}
\caption{Basis set of integration curves}
\label{grintc}
\end{figure}
\noindent
Using the curves defined
in fig.\ref{grintc} we introduce the following
solutions
\begin{equation}\label{1.5}
\Psi_{0}:= \frac{-i\,e^{\mu}}{K_{0}(-\mu)}\,
\int \limits_{\mbox{{\footnotesize${\cal C}_{0}$}}} \mbox{d} u\,
\exp \left \lbrack \frac{1}{\lambda}\,
f(\sin u) \right \rbrack\ , \qquad
\Psi_{\varrho}:=\frac{e^{\mu}}{\pi\,I_{0}(\mu)}\,
\int \limits_{\mbox{{\footnotesize ${\cal C}_{\varrho}
\oplus {\cal C}_{\varrho}^{*}$}}} \mbox{d} u\, \exp \left
\lbrack \frac{1}{\lambda}\,f(\sin u) \right \rbrack\ ,\ \
\varrho \,\epsilon\, \{\mbox{{\footnotesize $-$}},
\mbox{{\footnotesize $+$}},3,\mbox{{\scriptsize $NB\,$}}\}
\ ,
\end{equation}
\begin{equation}\label{1.5+}
\mbox{with} \qquad \mu:=\frac{1}{2\,\lambda}\ ,
\end{equation}
\\
which, by definition, are real-valued and normalized in
accordance with
\begin{equation}\label{1.6}
\Psi_{\varrho}(a=0) \equiv 1\ \ ,\ \
\varrho \,\epsilon\, \{0,\mbox{{\footnotesize $-$}},
\mbox{{\footnotesize $+$}},3,\mbox{{\scriptsize $NB\,$}}\}
\ .
\end{equation}
\\
The functions $K_{0}$ and $I_{0}$ occurring in (\ref{1.5})
are the usual modified Bessel functions with index $0$.
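The normalization (1.6) rests on the standard integral representations
of these functions: at $a=0$ one has $\kappa=0$ and $f=-\sin^{2}u$, so
that $f/\lambda=-\mu+\mu\cos 2u$; integrating over a real period of $u$
then yields $\pi I_{0}(\mu)$, while along the imaginary $u$-axis
($u=iv$, $\sin^{2}u=-\sinh^{2}v$) the corresponding integral produces
$K_{0}(-\mu)$. A quick numerical confirmation of the two
representations (our own consistency check, with an arbitrary negative
$\mu$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv, kv

mu = -0.8   # mu = 1/(2 lambda) is negative for Lambda < 0

# I_0(mu): integral of exp(mu cos 2u) over one real period of length pi
val_i, _ = quad(lambda u: np.exp(mu * np.cos(2.0 * u)), 0.0, np.pi)
print(val_i, np.pi * iv(0, -mu))   # I_0 is even in its argument

# K_0(-mu): integral of exp(mu cosh 2v) along the imaginary u-axis
val_k, _ = quad(lambda v: np.exp(mu * np.cosh(2.0 * v)), 0.0, np.inf)
print(val_k, 0.5 * kv(0, -mu))
```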
It will be of some advantage to replace the solutions
$\Psi_{+}$ and $\Psi_{-}$ by
\begin{equation}\label{1.7}
\Psi_{1}:=\biggl\{
\begin{array}{ccc}
\Psi_{+}\!\!&,&\beta_{-} \geq 0\\
\Psi_{-}\!\!&,&\beta_{-} \leq 0
\end{array}\qquad ,\qquad
\Psi_{2}:=\biggl\{
\begin{array}{ccc}
\Psi_{+}\!\!&,&\beta_{-}\leq 0\\
\Psi_{-}\!\!&,&\beta_{-} \geq 0
\end{array}\ .
\end{equation}
\\
Then saddle-point expansions of the integrals (\ref{1.5})
in the limit $\Lambda \to 0$, $a$ and $\beta_{\pm}$ fixed, reveal
\begin{equation}\label{1.8}
\lim_{\Lambda \to 0} \, \Psi_{0}=\Psi_{\mbox{{\tiny $W\!H$}}}^{0}\ ,
\qquad\lim_{\Lambda \to 0} \, \Psi_{\mbox{{\tiny $N\!B$}}}=
\Psi_{\mbox{{\tiny $N\!B$}}}^{0}\ ,\qquad
\lim_{\Lambda \to 0} \, \Psi_{i}=\Psi_{i}^{0}\ ,\
i\,\epsilon\,\{1,2,3\}\ ,
\end{equation}
\\
where the upper index ``$0$'' denotes the solutions of the
$\Lambda=0$-model given in [I].
Without proof we mention that
$\Psi_{i},\,i\,\epsilon\,\{1,2,3\}$, are three asymmetric
solutions which generate each other by cyclic permutations
of the $\kappa_{j}$, so consequently the sum of these
states,
\begin{equation}\label{1.15}
\Psi_{\Sigma}:=\frac{1}{3}\ \sum_{i=1}^{3}\,\Psi_{i}\ ,
\end{equation}
\\
besides $\Psi_{0}$ and $\Psi_{\mbox{{\tiny $N\!B$}}}$, turns out to be
symmetric with respect to arbitrary
$\kappa_{j}$-permutations.
Up to different normalization factors, the asymptotic
behavior in the limit $\kappa \to - \infty$ can immediately
be extracted from the corresponding expansions in [I]
by taking account of the negative sign of $\Lambda$. The
only difficulty lies in the determination of the
saddle-points, which give the dominating contribution to
the different solutions in this limit. A detailed calculation
finally yields
\begin{equation}\label{1.9}
\Psi_{0}\ \ \ \sim^{^{\!\!\!\!\!\!\!\!\!\!\!\!
\!\mbox{{\scriptsize $\kappa\!
\to \! -\infty$}}}}\,
\frac{\sqrt{\hbar}}{K_{0}\left (-\frac{3 \pi}
{\hbar \Lambda}
\right )}\, \left (-\frac{3}{\Lambda} \right )^{\frac{1}
{4}}\
\left ( \frac{a}{2} \right )^{-\frac{3}{2}}\
\exp \left \lbrack - \frac{\pi\,a^{3}}{\hbar}\, \sqrt{
-\frac{\Lambda}{3}} \,\right \rbrack\ ,
\end{equation}
\begin{equation}\label{1.10}
\Psi_{\mbox{{\tiny $N\!B$}}}\ \ \ \sim^{^{\!\!\!\!\!\!\!\!\!\!\!\!
\!\mbox{{\scriptsize $\kappa\!
\to \! -\infty$}}}}\,
\frac{\sqrt{\hbar}}{\pi\,I_{0}\left (\frac{3 \pi}{\hbar
\Lambda}
\right )}\, \left (-\frac{3}{\Lambda} \right )^{
\frac{1}{4}}\
\left ( \frac{a}{2} \right )^{-\frac{3}{2}}\
\exp \left \lbrack + \frac{\pi\,a^{3}}{\hbar}\, \sqrt{
-\frac{\Lambda}{3}} \,\right \rbrack\ ,
\end{equation}
\\
at $\beta_{\pm}=0$, i.e. while $\Psi_{0}$ falls off rapidly for
$a \to \infty$, the wavefunction $\Psi_{\mbox{{\tiny $N\!B$}}}$ is strongly
divergent in the same limit. Moreover, since $\Psi_{\mbox{{\tiny $N\!B$}}}$
{\em always} gets its dominant contribution from the real
saddle-point $z \geq 1$ (corresponding to the points
$u_{\mbox{{\tiny $N\!B$}}}$ and $u_{\mbox{{\tiny $N\!B$}}}^{*}$ in
the complex $u$-plane of
fig.\ref{grcsd-} via $z=\sin u$), just Euclidean geometries
are described by this state, so we will reject $\Psi_{\mbox{{\tiny $N\!B$}}}$
as a physically relevant solution. Note, however, that it
is the only state which satisfies the {\em no-boundary}
condition in the limit $\hbar \to 0,\,
a \to 0$, hence the name of this wavefunction.
To give the asymptotic behavior in the limit
$\kappa \to - \infty$ for the states
$\Psi_{i},\,i\,\epsilon\,\{1,2,3\}$, it will be helpful
to consider
\begin{equation}
\Psi^{i}:=\frac{1}{2}\,\left (\Psi_{j}+\Psi_{k} \right )
\ ,\qquad
\varepsilon_{i j k}=1\ ,
\end{equation}
\\
instead. For these solutions the asymptotic expansions
\begin{equation}\label{1.12}
\Psi^{i}\ \ \ \sim^{^{\!\!\!\!\!\!\!\!\!\!\!\!
\!\mbox{{\scriptsize $\kappa\!
\to \! -\infty$}}}}\,
-\frac{\Psi_{\mbox{{\tiny $W\!H$}}}^{0}}{I_{0}(\mu)}\
\sqrt{-\frac{\lambda}{\pi}}\ \frac{2}{\kappa_{i}}\,
\left \{1-2\,\frac{\kappa_{j} \kappa_{k}}{\kappa_{i}^{3}}
\right \}\,\exp \left \lbrack \frac{1}{\lambda}\, \left (
\kappa_{i}^{2}-2\, \frac{\kappa_{j} \kappa_{k}}{\kappa_{i}}
\right )\,\right \rbrack\ , \
\varepsilon_{i j k}=1\ ,
\end{equation}
\\
hold, so they fall off very rapidly for $a \to \infty$
(remember the negative sign of $\lambda$!).
By considering additional asymptotic expansions for large
anisotropy it is possible to show that the four states
$\Psi_{i},\,i\,\epsilon\,\{0,1,2,3\}$, are all normalizable
on minisuperspace in the distribution sense (see [I] for a discussion
of this point for $\Lambda >0$), i.e. so far
we are still left with a four-dimensional space of
physically interesting solutions.
However, while in the Lorentzian regime $\Psi_{0}$ receives
saddle-point contributions exclusively from the
saddle-points at {\em complex} $z$ and thus describes a Lorentzian
Universe in this part of minisuperspace, the states
$\Psi_{i},\,i\,\epsilon\,\{1,2,3\}$,
get additional Euclidean contributions of similar order of
magnitude from {\em real} saddle-points and are therefore
hard to interpret.
The classical trajectories which are generated by $\Psi_{0}$
in the
semi-classical limit $\hbar \to 0$ {\em in the Lorentzian
regime} can be computed by solving the equations
\begin{equation}\label{1.13}
\frac{\mbox{d} \alpha}{\mbox{d} t}=-\frac{\mbox{d} \Im\,f(z_{0})}
{\mbox{d} \alpha}\ ,\qquad
\frac{\mbox{d} \beta_{\pm}}{\mbox{d} t}= \,\frac{\mbox{d} \Im\,f(z_{0})}
{\mbox{d} \beta_{\pm}}\ ,
\end{equation}
\\
where we have chosen the lapse-function to be
$N=\frac{1}{2}\,\Lambda a^{3}$. While the complex saddle-point $z_{0}$
occurring in (\ref{1.13}) is intended to correspond to the point $u_{0}$
of fig.\ref{grcsd-}, the complex conjugate saddle-point $z_{0}^{*}=
\sin u_{0}^{*}$, which describes the time-reversed classical evolution,
may equally well be considered. The corresponding second branch of
the classical evolution of the Universe is actually {\em needed}
to define the continuation of a classical trajectory which has reached
the caustic:
in approaching the caustic the saddle-points $z_{0}$ and
$z_{0}^{*}$ coalesce and become real-valued, so that, in accordance
with (\ref{1.13}), the time-derivatives of $\alpha$ and $\beta_{\pm}$ vanish.
To continue such a trajectory in time, the time-reversed version of
(\ref{1.13}) has to be considered. Since
the Universe is ``reflected'' in this way whenever it meets the caustic,
and since in the generic case the classical trajectories have both of
their endpoints on the caustic, {\em oscillating} Universes are described
by $\Psi_{0}$.
The numerical results for the classical trajectories which are
obtained in the plane
$\beta_{-}=0$ of the minisuperspace are presented in
fig.\ref{class.traj}.
\vspace{0.3 cm}
\begin{figure}
\begin{center}
\hskip 0 cm
\psfig{figure=kltraj.eps,height=6 cm}
\end{center}
\caption{Semi-classical trajectories generated by the
complex saddle-points in the Lorentzian regime. For
simplicity, we have restricted the plot to the plane
$\beta_{-}=0$. The arrows indicate the direction of
increasing time $t$ in eq.~(\ref{1.13}).}
\label{class.traj}
\end{figure}
\noindent
We should stress one important difference in
the pictures which are obtained for the different signs
of $\beta_{+}$:
While for $\beta_{+} >0, \beta_{-}=0$ all trajectories run
to infinite anisotropy (which is, indeed, a peculiarity of the
special $\beta_{\pm}$-direction, corresponding to a kink on the caustic, cf. fig.\ref{grkaustik-}),
in the case $\beta_{+}<0$ the trajectories meet the lower
branch of the caustic again at a finite $\beta_{+}$-value,
representing the general situation. This feature gives rise
to the existence of a special trajectory with coinciding
start- and endpoints, hence describing a Universe that
never really becomes Lorentzian. The corresponding points
in minisuperspace
can be calculated analytically, requiring
the solution of (\ref{1.13}) to be {\em tangential} to
the caustic, with the result
\begin{equation}\label{1.14}
\kappa=-\sqrt[\!3]{2}\,\left (\frac{2}{5} \right )^{
\frac{4}{3}}\ ,\qquad \beta_{+}+i\,\beta_{-}=\frac{1}{6}
(\ln 5-5 \ln 2 )\,
e^{\mbox{{\footnotesize $\frac{2 \pi i n}{3}$}}}\ ,\qquad
n\,\epsilon\,\{-1,0,1\}\ .
\end{equation}
\\
These points will play an important role in the following
section.
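That the points (1.14) indeed lie on the caustic can be verified
explicitly: inserting them into the quintic obtained from $\partial
f/\partial z=0$ by clearing the denominator $(1-z^{2})^{2}$, one finds
a double root, located at $z=1/5$ for $n=0$. A short Python check (our
own illustration):

```python
import numpy as np

kappa = -2.0**(1.0 / 3.0) * (2.0 / 5.0)**(4.0 / 3.0)
bp = (np.log(5.0) - 5.0 * np.log(2.0)) / 6.0   # beta_+ for n = 0, beta_- = 0

a = kappa**2 * np.exp(-2.0 * bp)   # comes out as exactly 32/125
b = kappa * np.exp(2.0 * bp)       # comes out as exactly -1/5
# Quintic coefficients for beta_- = 0, i.e. cosh(2 sqrt(3) beta_-) = 1:
coeffs = [-2.0, 2.0 * b, 4.0, 2.0 * a - 4.0 * b, 4.0 * a - 2.0, 2.0 * a + 2.0 * b]

# Both the quintic and its derivative vanish at z = 1/5: a double root,
# i.e. the point (1.14) lies on the caustic.
print(np.polyval(coeffs, 0.2), np.polyval(np.polyder(coeffs), 0.2))
```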
\section{Behavior on the caustic}
Since the classical Lorentzian evolution of the Universe described by
the wavefunctions $\Psi_{i},\,i\,\epsilon\,\{0,1,2,3\}$,
is bounded by the caustic in
minisuperspace, the value of $|\Psi|^{2}_{c}$ {\em on
the caustic} predicted by the different solutions is of
particular interest. In fact $|\Psi|^{2}_{c}$ governs the
realization of the different possible histories of the
Universe and may thus be interpreted as the ``initial'' value
distribution for the classical evolution.
However, at this stage a new problem arises due to the
different branches of the caustic. Since the semi-classical trajectories
allways can be passed through in both directions, it is impossible
to distinguish between their start- and endpoints. The distributions of
$|\Psi|^{2}_{c}$ on the upper and lower branch of the caustic
may therefore be considered on an equal footing, and we
will always discuss them together in the following.
The numerical results obtained for $|\Psi_{0}|^{2}_{c}$ and
$|\Psi_{\Sigma}|^{2}_{c}$ on the lower caustic are given
in fig.\ref{grkaustl}, and fig.\ref{grkaustu} shows the
behavior on the upper caustic, which is very similar for
the two different solutions. In the following the
additional indices ``$u$'' and ``$l$'' denote the upper
and lower branch of the caustic, respectively.
\vspace{0.3 cm}
\begin{figure}
\begin{center}
\hskip 0 cm
\psfig{figure=grk0ps.eps,width=7 cm}
\hskip 1.5 cm
\psfig{figure=grksps.eps,width=7 cm}
\end{center}
\caption{Initial value distributions generated by $\Psi_{0}$
and $\Psi_{\Sigma}$ on the lower caustic ($\Lambda=-3,\,
\hbar=2 \pi$). Like the caustic itself, the distributions
have kinks in some critical $\beta_{\pm}$-directions, which are
partially hidden in the figures.}
\label{grkaustl}
\end{figure}
\vspace{0.5 cm}
\begin{figure}
\begin{center}
\hskip 0 cm
\psfig{figure=ukaust.eps,width=8 cm}
\end{center}
\caption{Initial value distribution of $\Psi_{0}$ on the
upper caustic, normalized to unity at $\beta_{\pm} =0$. The
numerical plots obtained for the different wavefunctions
$\Psi_{0}$ and $\Psi_{\Sigma}$ (and thus for $\widehat
\Psi$ defined below) on the upper caustic look very
similar, so we restrict ourselves to a representation
of $|\Psi_{0}|^{2}_{c,u}$. The absolute values taken by
$|\Psi|^{2}_{c,u}$ at $\beta_{\pm} =0$ are given by
$1.45\!\cdot\! 10^{-11},\,3.56\!\cdot\! 10^{-13}$ and
$2.43\!\cdot\! 10^{-11}$ for the wavefunctions
$\Psi_{0},\,\Psi_{\Sigma}$ and $\widehat
\Psi$, respectively (here again $\Lambda =-3,
\hbar =2 \pi$). Since the lower and upper caustic
coincide at $\beta_{\pm} =0$, it is clear that these values
hold for the distributions on the lower caustic, too.
That is why we suggest considering the two distributions
obtained on the different branches of the caustic as
analytical continuations of one another through the
isotropic point.}
\label{grkaustu}
\end{figure}
\noindent
While on the upper caustic both distributions fall off
rapidly with increasing $\beta_{\pm}$ and may be shown to be
integrable over the $\beta_{\pm}$-plane, there are
$\beta_{\pm}$-directions on the lower caustic, in which
$|\Psi|^{2}_{c,l}$ approaches a finite value at infinity.
Consequently, the wavefunctions on the lower branch are
{\em not} square-integrable, and hence difficult to
interpret as probability distributions.
Nevertheless, as in the case $\Lambda >0$, one may construct
a new wavefunction as a linear combination of the two symmetric
wavefunctions $\Psi_{0}$ and $\Psi_{\Sigma}$:
By normalizing $\Psi_{0}$ and $\Psi_{\Sigma}$ to approach
unity in the critical $\beta_{\pm}$-directions, the difference
of these new functions is square-integrable on
the {\em full} caustic. To give an explicit expression
for the quantum state obtained in this way, we introduce
the integrals
\begin{displaymath}
{\cal J}_{0}^{(1)}(\nu):=\int \limits_{-\frac{\pi}{4}}^{
+\frac{\pi}{4}} \mbox{d} x \,
e^{-\nu \sin^{4} x}\ ,\qquad
{\cal J}_{0}^{(2)}(\nu):=\int \limits_{-\frac{\pi}{4}}^{
+\frac{\pi}{4}} \mbox{d} x \,
e^{-\nu \cos^{4} x}\ ,
\end{displaymath}
\begin{equation}\label{2.1}
{\cal K}_{0}^{(1)}(\mu):=\int \limits_{0}^{\infty} \mbox{d} t\,
\sin \left (4\,\mu \sinh t \right )\, e^{\mu \cosh 2 t}\ ,
\qquad
{\cal K}_{0}^{(2)}(\mu):=\int \limits_{0}^{\infty} \mbox{d} t\,
\cos \left (4\,\mu \sinh t \right )\, e^{\mu \cosh 2 t}\ ,
\end{equation}
\\
which, as far as we know, have no simple representation in
terms of tabulated functions. Defining now
\begin{equation}\label{2.2}
{\cal Q}(\lambda):=3 \pi\,e^{-3 \mu}\
\frac{I_{0}(\mu)}{K_{0}(-\mu)}\ \,
\frac{{\cal K}_{0}^{(2)}(\mu)}{2 {\cal J}_{0}^{(1)}(8 \mu)+
{\cal J}_{0}^{(2)}
(8 \mu)+e^{-3 \mu}\,{\cal K}_{0}^{(1)}(\mu)}\ \ ,\ \ \
\mbox{with} \ \ \mu=\frac{1}{2 \lambda}\ ,
\end{equation}
\\
the new state can be written in the form
\begin{equation}\label{2.3}
\widehat \Psi := \frac{\Psi_{0}-{\cal Q}\,\Psi_{\Sigma}}{1-{\cal Q}}
\ ,
\end{equation}
\\
where the overall normalization factor has again been chosen
to make $\widehat \Psi \equiv 1$ at $a =0$.
The behavior of $\widehat \Psi$ on the caustic has been
computed and is shown in fig.\ref{grkausth}. Taking account of the
full distribution, three maxima on the lower branch of
the caustic pick out special initial values for the
classical evolution of the Universe. The general
representation (\ref{1.2}) easily reveals that the
wavefunction becomes arbitrarily sharply concentrated
about these maxima in the limit $\lambda \to 0$, i.e. in
particular in the limit $\hbar \to 0$ at fixed $\Lambda$. Consequently, in
the semi-classical limit there are just three histories
of the Universe which occur with significant probability.
\vspace{.3 cm}
\begin{figure}
\begin{center}
\hskip 0 cm
\psfig{figure=grkhaps.eps,width=8 cm}
\end{center}
\caption{Initial value distribution generated by $\widehat
\Psi$ on the lower caustic ($\Lambda=-3,\,\hbar=2 \pi$).
For the distribution obtained on the upper caustic see
fig.6.}
\label{grkausth}
\end{figure}
\noindent
In the following we shall be interested in the special
points of minisuperspace where the maxima of $| \widehat
\Psi |^{2}_{c,l}$ arise.
Applying the saddle-point method for $\lambda \to 0$ to the
integrals defined in (\ref{2.1}), one obtains after some
calculation the asymptotic behavior
\begin{equation}\label{2.4}
{\cal Q} \ \, \to^{^{\!\!\!\!\!\!\!\!\!\!\!
\mbox{{\scriptsize $\lambda\! \to \!0$}}}}\,
\frac{3}{2} \sqrt{\frac{2}{\pi}}\,
\Gamma(\mbox{{\footnotesize $\frac{1}{4}$}})\,
(-\lambda)^{-\frac{1}{4}}\,
e^{\mbox{{\footnotesize $\frac{3}{\lambda}$}}}\ ,
\end{equation}
\\
and with this result the relation
\begin{equation}\label{2.5}
\widehat \Psi\, \ \, \sim^{^{\!\!\!\!\!\!\!\!\!\!\!
\mbox{{\scriptsize $\hbar\! \to \!0$}}}}\, \Psi_{0}
\end{equation}
\\
can be shown to hold at least on the lower caustic. Since
$\widehat \Psi$ is a real-valued, non-vanishing
wavefunction, the maxima of $|\widehat \Psi|^{2}$
coincide with the maxima of $\widehat \Psi$, and
using (\ref{2.5}) they may also be calculated from
$\Psi_{0}$ in the semi-classical limit. By performing
again a saddle-point expansion for $\lambda \to 0$, now
in the integral representation (\ref{1.5}) of $\Psi_{0}$,
the maxima of $\Psi_{0}$ on the caustic can be calculated
analytically with the result given exactly by
(\ref{1.14}).
Consequently, within the class of solutions considered here,
the {\em only} quantum state that is
square-integrable on the full caustic turns out to
predict a Universe which never becomes
Lorentzian in the classical limit (albeit classical
Lorentzian solutions
of the Bianchi type IX model with negative cosmological
constant actually do exist, cf. fig.\ref{class.traj}).
\section{Conclusion}
In the present paper we constructed exact quantum states for the
diagonal Bianchi type IX model with a negative cosmological constant.
We found
that the method presented in [I] for $\Lambda >0$ is indeed perfectly
applicable to the
model with $\Lambda <0$. As for $\Lambda >0$ it gives
five linearly independent solutions, which are generated
by the Chern-Simons state using topologically different
integration contours in the generalized
Fourier-transformation to the metric representation.
Imposing the condition that the wavefunction be
normalizable on the caustic, just one wavefunction remains,
which turns out to have some nice additional properties:
It is found to be normalizable in minisuperspace in the
distribution sense and it respects the symmetries of
the Bianchi type IX model. However, this state does
{\em not} satisfy the no-boundary condition in the
semi-classical limit in contrast to the case $\Lambda >0$,
and it turns out to predict a Universe that never becomes
Lorentzian, after all. Hence we obtain the result that,
{\em if} one allows for a non-zero cosmological constant
at all, it should be positive, at least as far as the
Chern-Simons functional related states of the quantized
Bianchi IX model are concerned.
\acknowledgements
Support of this work by the Deutsche Forschungsgemeinschaft
through the Sonderforschungsbereich ``Unordnung und gro{\ss}e
Fluktuationen'' is gratefully acknowledged.
\begin{appendix}
\section{Solutions in a different factor-ordering}
For completeness, and in order to obtain an important argument
for the factor-ordering chosen for the Wheeler-DeWitt equation
(\ref{1.1}), we shall make some comments on a further class of
solutions, which can again be discussed by using the methods of [I].
Considering the Wheeler-DeWitt equation in the form (\ref{1.0}), one
may ask why we have not chosen the different factor-ordering
obtained by changing $\Phi \to - \Phi$. This choice, of course,
would not have affected the classical Hamiltonian, but the quantum
correction $-\frac{2\, \hbar}{\pi}\,a^{2}\,\mbox{Tr}\,
e^{2 \mbox{{\footnotesize $\beta\mbox{\hspace{-1.4 ex}}\beta$}}}$ in (\ref{1.1})
would have changed its sign.\footnote{The factor-ordering obtained
in this way corresponds to the ${\cal A}^{+}$-representation
introduced by Kodama in \cite{9},
in contrast to the ${\cal A}^{-}$-representation,
which we have considered up to now.}
Since the coordinate transformation $a \to i\,a, \Lambda
\to -\Lambda$ has exactly the same effect as the above mentioned
change of the factor-ordering, it is possible to discuss the
solutions of the Wheeler-DeWitt equation in the new
factor-ordering by considering still equation (\ref{1.1}),
but substituting formally $a \to i\,a, \Lambda \to -\Lambda$
in the solutions. In the following it will be more convenient to
use the coordinates $\kappa_{j}$ and $\lambda$ introduced in (\ref{1.4})
and (\ref{1.4+}), which transform like $\kappa_{j} \to \kappa_{j},
\lambda \to -\lambda$ under this substitution.
It should be clear that the solutions of the Wheeler-DeWitt
equation (\ref{1.1}) are still of the form (\ref{1.2}),
but while we looked at the cases $\kappa >0,
\lambda >0$ in [I] and $\kappa <0, \lambda <0$ in
the present paper, now the remaining sectors
$\kappa >0, \lambda <0$ and $\kappa <0, \lambda >0$
are of interest, which, because of the formal substitution
$\lambda \to -\lambda$ just mentioned, describe
solutions for {\em positive} and {\em negative}
cosmological constant in the {\em new} factor-ordering,
respectively. It is easily checked that the location of
the saddle-points, and therefore the caustic, depends
only on $f(z;\kappa,\beta_{\pm})$ defined
in eq. (\ref{1.3}). This means that, irrespective
of the sign of $\lambda$, we deal with the caustic of
[I] in the case $\kappa >0$, and with the caustic
fig.\ref{grkaustik-} in the case $\kappa <0$. On
the other hand, it is just the sign of $\lambda$ which
decides whether the integration curves of [I]
(for $\lambda >0$) or of fig.\ref{grintc}
(for $\lambda <0$) give suitable integration contours.
However, constructing the solutions for the new
factor-ordering in this manner and applying the
saddle-point method to the integral representation
(\ref{1.2}) in the limit of large anisotropy
$\beta_{\pm}$, it finally turns out that
{\em any} solution to the Wheeler-DeWitt
equation in the new factor-ordering
{\em diverges} for $\beta_{\pm} \to \infty$,
at least in some $\beta_{\pm}$-sectors. In
other words, in the new factor-ordering there is
no solution which is normalizable in minisuperspace,
not even in the distribution sense. Furthermore, if
the behavior of the wavefunctions on the caustic is
considered, actually none of these solutions is
found to be square-integrable with respect to
$\beta_{\pm}$.
Comparing these results with the nice normalizability
properties of the solutions of the Wheeler-DeWitt
equation in the factor-ordering of (\ref{1.0})
presented in [I] and the present paper, we believe
we have a compelling argument for ruling out the new
factor-ordering.
It would be interesting to see whether this argument
can be extended to the general, inhomogeneous case of
quantum gravity.
\end{appendix}
\section{Supplemental Material to Spin Drag of a Fermi Gas in a Harmonic Trap}
\section{Derivation of the transport equation from Boltzmann's equation}
We consider an ensemble of spin 1/2 fermions of mass $m$. In the dilute limit, the statistical properties of the system are fully captured by the single-particle phase-space densities $f_s (\bm r,\bm p,t)$ of the spin species $s=\pm$. In the presence of a trapping potential $V$, the evolution of $f_s$ is given by Boltzmann's equation
\begin{equation}
\partial_t f_s+\frac{\bm p}{m}\cdot\partial_{\bm r}f_s+\bm F\cdot\partial_{\bm p}f_s=I_{\rm coll}[f_s,f_{-s}],
\label{Eq:1}
\end{equation}
where $\bm F=-\partial_{\bm r}V$ is the trapping force and $I_{\rm coll}$ is the collision operator. For low-temperature fermions, collisions between same-spin particles are suppressed and at low phase-space densities, the collision operator is given by
\begin{equation}
\begin{split}
&I_{\rm coll}[f_s,f_{-s}](\bm r,\bm p_1)=\\
&\int d^3\bm p_2 d^2\bm\Omega'v_{\rm rel}\frac{d\sigma}{d\Omega'}\left(f_{s,3}f_{-s,4}-f_{s,1}f_{-s,2}\right),
\end{split}
\end{equation}
where $\bm p_1$ and $\bm p_2$ ($\bm p_3$ and $\bm p_4$) are ingoing (outgoing) momenta satisfying energy and momentum conservation, $v_{\rm rel}=|\bm p_2-\bm p_1|/m$ is the relative velocity, $d\sigma/d\Omega'$ is the differential cross-section into the solid angle $\bm\Omega'$ and $f_{s,i}$ stands for $f_s(\bm r,\bm p_i)$.
When the populations of the two spin states are equal, the equilibrium solution of Eq.~(\ref{Eq:1}) is given by the Maxwell-Boltzmann distribution $f_+=f_-=f_0 ={\cal A}\exp\left[-\beta(p^2/2m+V)\right]$, where $\beta=1/k_B T$ and ${\cal A}$ is a normalization constant such that $\int d^3\bm rd^3\bm p f_0$ is the population of one spin state. We consider a spin perturbation of the form $f_s(\bm r,\bm p,t)=f_0(\bm r,\bm p)\left(1+s \alpha(\bm r,\bm p,t)\right)$. Assuming the perturbation is small, we can expand Boltzmann's equation to first order in $\alpha$ and obtain
\begin{equation}
\partial_t \alpha+\frac{\bm p}{m}\cdot\partial_{\bm r}\alpha+\bm F\cdot\partial_{\bm p}\alpha=-C[\alpha],
\label{Eq:2}
\end{equation}
where, for s-wave collisions, the linearized collisional operator $C$ is given by
\begin{equation}
C[\alpha](\bm r,\bm p_1)=\int d^3\bm p_2 f_0(\bm r,\bm p_2)v_{\rm rel}\sigma(v_{\rm rel})\left(\alpha_1-\alpha_2\right),
\end{equation}
and as above $\alpha_i=\alpha(\bm r,\bm p_i)$.
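The equilibrium claim above rests on detailed balance: since $f_0\propto e^{-\beta p^2/2m}$ at fixed position (the potential factors cancel at equal $\bm r$) and collisions conserve total momentum and energy, the gain and loss terms cancel identically. A minimal numerical check of this, in units $m=k_BT=1$ and with randomly drawn collisions:

```python
import numpy as np

# Check that the Maxwell-Boltzmann form makes the collision integrand vanish:
# with p3 = P/2 + q*n and p4 = P/2 - q*n (total momentum P and relative
# momentum magnitude q conserved), f0(p3) f0(p4) = f0(p1) f0(p2).
rng = np.random.default_rng(0)
f0 = lambda p: np.exp(-p @ p / 2)            # momentum part of f0 at fixed r

for _ in range(100):
    p1, p2 = rng.normal(size=3), rng.normal(size=3)
    P = p1 + p2                              # total momentum (conserved)
    q = np.linalg.norm(p1 - p2) / 2          # relative momentum (conserved)
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)                   # random outgoing direction
    p3, p4 = P/2 + q*n, P/2 - q*n
    assert abs(f0(p3)*f0(p4) - f0(p1)*f0(p2)) < 1e-12
print("detailed balance verified")
```

Because $p_3^2+p_4^2=P^2/2+2q^2=p_1^2+p_2^2$, the product of Gaussians is invariant for any outgoing direction, which is what the assertion verifies.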
In experiments, the trap can be described by a cylindrically-symmetric harmonic potential with frequency $\omega_z$ along the symmetry axis $z$ and $\omega_\perp$ in the transverse $(x,y)$ plane. In the rest of this Supplemental Material, we work in a unit system where $m=k_BT=\omega_\perp=1$ and we then write $V(x,y,z)=(\omega_z^2z^2+\rho^2)/2$, with $\bm\rho=(x,y)$.
We look for exponentially decaying solutions corresponding to small deviations from equilibrium, and therefore take $\alpha(\bm r,\bm p,t)=e^{-\gamma t}\widetilde\alpha (\bm r,\bm p)$. Eq.~(\ref{Eq:2}) then becomes
\begin{equation}
\left[-\gamma+p_z\partial_z-\omega_z^2z\partial_{p_z}\right]\widetilde\alpha =-\left[\bm\Pi\cdot\partial_{\bm\rho}-\bm\rho\cdot\partial_{\bm\Pi}+C\right]\widetilde\alpha,
\label{Eq:3}
\end{equation}
where $\bm\Pi=(p_x,p_y)$ is the projection of the momentum in the $(x,y)$ plane. We note that in the rhs of Eq. (\ref{Eq:3}) the only $z$-dependence is in the linearized collisional operator $C$, from $f_0\propto \exp(-\omega_z^2 z^2/2)$. Let $C=\bar n_0(z) \tilde C$, where $\bar n_0(z)=\int d^2\bm \rho d^3\bm p f_0(\bm r,\bm p)=\bar n_0(0) e^{-\omega_z^2 z^2/2}$ is the equilibrium 1D-density and $\tilde C$ no longer acts on the coordinate $z$. Taking $z'=\omega_z z$, we obtain
\begin{equation}
\left[-\gamma+\omega_z(p_z\partial_{z'}-z'\partial_{p_z})\right]\widetilde\alpha =-{\cal L}_{\bar n_0(z')}[\widetilde\alpha ],
\label{Eq:4}
\end{equation}
with
\begin{equation}
{\cal L}_{\bar n_0(z')}=\bar n_0(z')\tilde C+\bm\Pi\cdot\partial_{\bm\rho}-\bm\rho\cdot\partial_{\bm\Pi}.
\end{equation}
Note that ${\cal L}_{\bar n_0(z')}$ depends on the axial coordinate $z'$ only through the axial density. In particular, $z'$ is only a parameter of the operator since we neither integrate nor differentiate with respect to this coordinate.
Two properties of ${\cal L}_{\bar n}$ will be used below: (i) its kernel is spanned by the functions of $z'$ only \footnote{This result can be obtained by noting that if $\alpha$ belongs to the kernel of ${\cal L}$ we have first $\langle \alpha|C[\alpha]\rangle=\langle \alpha|{\cal L}[\alpha]\rangle=0$ where the scalar product is defined as in Eq. (\ref{scalar}). Moreover we have $\langle \alpha|C[\alpha]\rangle=\int d^2\bm\rho d^3\bm p_1 d^3\bm p_2 f_{0,1}f_{0,2}v_{\rm rel}\sigma (\alpha_1-\alpha_2)^2/2$. So we see that $\alpha$ should not depend on the momentum and that $C[\alpha]=0$. Using these properties in the equation ${\cal L}[\alpha]=0$, we see that $\alpha$ is not a function of $\bm\rho$ either.} and (ii) due to atom number conservation, we have for any $\widetilde\alpha$, $\int d^2\bm\rho d^3\bm p f_0 {\cal L}_{\bar n}[\widetilde\alpha ]=0$.
We look for solutions of Eq.~(\ref{Eq:4}) in the limit of a very elongated trap $\omega_z\rightarrow 0$. If we focus on the {\em slow} axial spin dynamics of the cloud studied experimentally in \cite{sommer2011universal}, we have also $\gamma\rightarrow 0$ and we can therefore expand $\gamma$ and $\widetilde\alpha $ as $\gamma=\sum_{n\ge 1}\gamma_n \omega_z^n$ and $\widetilde\alpha(\bm r,\bm p)=\sum_{n\ge 0}\omega_z^n a_n(\bm r,\bm p)$. Note that we are ultimately interested in the coefficient $\gamma_2$, since we take $\Gamma_{\rm SD}=\omega_z^2/\gamma$ as in \cite{sommer2011universal}.
Inserting these expansions in Eq.~(\ref{Eq:4}) we get to zero-th order ${\cal L}_{\bar n}[a_0]=0$ \footnote{Note that, strictly speaking, $\omega_z$ still appears in $C$ through the normalisation factor $\cal A$. We thus take the limit $\omega_z\rightarrow 0$ at constant peak-density to avoid this difficulty.}. According to property (i), $a_0$ is thus a function of $z'$ only. It is determined explicitly by the study of the next order terms of the expansion. For $n=1$ we obtain
\begin{equation}
-\gamma_1 a_0 + p_z \partial_{z'}a_0=-{\cal L}_{\bar n_0(z')}[a_1].
\end{equation}
Using property (ii), we see readily that $\gamma_1=0$, which then leads to the following relation:
\begin{equation}
p_z \partial_{z'}a_0=-{\cal L}_{\bar n_0(z')}[a_1].
\label{Eq:4b}
\end{equation}
Consider a uniform density $\bar n$ and assume for a moment that we know the solution $\chi_{\bar n} (\bm\rho,\bm p)$ of the integro-differential equation $p_z={\cal L}_{\bar n}[\chi_{\bar n}]$ (the properties of $\chi_{\bar n}$ will be discussed below). Since $\bar n_0 (z')$ is only a parameter, and by linearity of $\cal L$, the solution of Eq.~(\ref{Eq:4b}) is thus $a_1=-\chi_{\bar n_0(z')}\partial_{z'}a_0$.
Having expressed $a_1$ as a function of $a_0$, we close the set of equations by considering the $n=2$ term of the expansion. It reads:
\begin{equation}
\left(-\gamma_2 a_0 + (p_z\partial_{z'}-z'\partial_{p_z})a_1\right)=-{\cal L}_{\bar n_0(z')}[a_2].
\end{equation}
Using the expression of $a_1$ as well as the property (ii), we obtain after integration by parts
\begin{equation}
\gamma_2 \bar n_0(z')a_0(z')+\partial_{z'}\left(G(z')\partial_{z'}a_0(z')\right)=0,
\label{Eq:5}
\end{equation}
where
\begin{equation}
G(z')\equiv \int d^2\bm\rho d^3\bm p f_0 p_z \chi_{\bar n_0(z')}(\bm\rho,\bm p).
\end{equation}
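Equation (\ref{Eq:5}) is a generalized (Sturm-Liouville) eigenvalue problem determining the mode profile $a_0(z')$ and the rate $\gamma_2$. As an illustration of how it can be solved, the sketch below discretizes it with finite differences, taking for simplicity a constant conductance $G=1$, the Gaussian equilibrium density, and zero-flux boundaries on a finite box; these simplifications are ours for illustration, not the profiles used in the text:

```python
import numpy as np

# Illustrative discretization of gamma_2 * n0(z) a0 + d/dz (G d a0/dz) = 0
# with G = 1 and n0(z) = exp(-z^2/2) (dimensionless units of the text).
N, L = 400, 4.0
z = np.linspace(-L, L, N)
dz = z[1] - z[0]
w = np.exp(-z**2/2)                       # weight n0(z)/n0(0)

# Stiffness matrix K = -d^2/dz^2 with Neumann (zero-flux) boundaries
K = np.zeros((N, N))
for i in range(N):
    K[i, i] = 2.0
    if i > 0: K[i, i-1] = -1.0
    if i < N-1: K[i, i+1] = -1.0
K[0, 0] = K[-1, -1] = 1.0
K /= dz**2

# Generalized symmetric eigenproblem K a0 = gamma_2 diag(w) a0,
# symmetrized as W^{-1/2} K W^{-1/2}
S = K / np.sqrt(np.outer(w, w))
evals = np.sort(np.linalg.eigvalsh(S))

# The zero mode (a0 = const) is exact; the first nonzero eigenvalue
# plays the role of the slow rate gamma_2 in these units.
print(evals[:3])
```

The constant mode is annihilated exactly by the discrete operator, and the lowest nonzero eigenvalue corresponds to the odd (spin-dipole) profile relevant for axial spin transport.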
We now show that $G$ defined above can be identified with the spin conductance of an ideal gas of 1D density $\bar n$ in a cylindrical harmonic trap. Indeed, by definition, the conductance is obtained by solving Boltzmann's equation in a cylindrical trap in the presence of a spin-pulling force $\bm F_s=s F_0\bm e_z$. Expanding Eq. (\ref{Eq:1}) to first order in the perturbation, and taking as above $f_s=f_0 (1+s \alpha(\bm \rho,\bm p))$, we see that $\alpha$ is a solution of
\begin{equation}
F_0p_z = {\cal L}_{\bar n}[\alpha].
\end{equation}
We recognize here the same equation as for the definition of $\chi$ and we then have $\alpha=F_0\chi_{\bar n}$. Since the particle flux is given by $\Phi=\int d^3\bm pd^2\bm\rho f_0 \alpha p_z$, we see finally that, as claimed above, $G=\Phi/F_0=\int d^2\bm\rho d^3\bm p f_0p_z\chi_{\bar n}(\bm \rho,\bm p)$.
\section{Spin drag coefficient for the Maxwellian gas}
Consider the special case of the radially trapped Maxwellian gas for which $\sigma (p)=\Lambda/p$ where $\Lambda$ is some constant. This model is useful to interpolate between the weakly interacting ($\sigma=$const.) and the strongly interacting ($\sigma\propto1/p^2$) limits. Taking the ansatz $\alpha(\bm r,\bm p)=F_0 p_z H(\bm\rho,\bm\Pi)/2$, Boltzmann's equation for spin excitations turns into
\begin{equation}
\frac{1}{2}\left(\bm\Pi\cdot\partial_{\bm\rho}-\bm\rho\cdot\partial_{\bm\Pi}\right)H(\bm\rho,\bm\Pi)-1=-\frac{\Gamma_0}{2} e^{-\rho^2/2}H(\bm\rho,\bm\Pi),
\label{EqSOM1}
\end{equation}
with $\Gamma_0=\Lambda n_0$ the damping rate of the spin excitations of a homogeneous gas of density $n_0$. Using the rotational invariance around the $z$ axis, the phase space density can be expressed using the new variables
\begin{eqnarray*}
h&=&(\Pi^2+\rho^2)/2\\
u&=&(\Pi^2-\rho^2)/2\\
v&=&\bm\Pi\cdot\bm\rho.
\end{eqnarray*}
Let $u+iv=R e^{i\varphi}$. Eq. (\ref{EqSOM1}) then becomes
\begin{equation}
\partial_{\varphi}H(h,R,\varphi)-1=-\frac{\Gamma_0 e^{-h/2}}{2} e^{R\cos\varphi/2}H(h,R,\varphi).
\label{EqSOM2}
\end{equation}
Moreover, in these new variables, we have
\begin{equation}
\int d^2\bm\Pi d^2\bm\rho \cdots=2\pi\int_{h=0}^\infty\int_{x=0}^1\int_{\varphi=0}^{2\pi}\frac{xdx\, hdh\,d\varphi}{\sqrt{1-x^2}} \cdots .
\end{equation}
where $R=xh$ and the dots stand for any cylindrically symmetric function of $\bm\Pi$ and $\bm\rho$. In particular, the spin-conductance is given by
\begin{equation}
G=\frac{n_0}{2}\int\frac{xdx\, hdh\,d\varphi}{\sqrt{1-x^2}} e^{-h}H(h,R=xh,\varphi).
\label{EqSOM4}
\end{equation}
We now turn to the solution of Eq.~(\ref{EqSOM2}), focusing on the $\varphi$-dependence since $\varphi$ is the only variable appearing in the differential operator. Eq.~(\ref{EqSOM2}) takes the general form
\begin{equation}
H'(\varphi)+\mu A(\varphi) H(\varphi)=1,
\label{EqSOM3}
\end{equation}
with $\mu=\Gamma_0 e^{-h/2}/2$ and $A(\varphi)=\exp(R\cos(\varphi)/2)$, a $2\pi$-periodic function. Defining
\begin{equation}
K_{\mu}(\varphi)=\exp\left[-\mu\int_0^{\varphi}d\varphi' A(\varphi')\right],
\end{equation}
the general solution of (\ref{EqSOM3}) is
\begin{equation}
H(\varphi)=K_\mu(\varphi)\int_{\varphi_0}^\varphi\frac{d\varphi'}{K_\mu(\varphi')},
\end{equation}
where $\varphi_0$ is an integration constant that can be determined by imposing the periodicity of $H$. Taking $H(0)=H(2\pi)$, we finally obtain
\begin{equation}
H(\varphi)=K_\mu(\varphi)\left[\int_0^\varphi\frac{d\varphi'}{K_\mu(\varphi')}+\frac{K_{\mu}(2\pi)}{1-K_{\mu}(2\pi)}\int_0^{2\pi}\frac{d\varphi'}{K_\mu(\varphi')}\right].
\end{equation}
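The closed-form expression can be checked directly: for illustrative values of $\mu$ and $R$ (chosen arbitrarily below), it satisfies both the ODE $H'+\mu A H=1$ and the periodicity condition $H(0)=H(2\pi)$. A minimal numerical sketch:

```python
import numpy as np

# Verify the closed-form solution of H' + mu*A*H = 1 with
# A(phi) = exp(R cos(phi)/2); mu and R are arbitrary illustrative values.
mu, R = 0.7, 1.3
n = 20001
phi = np.linspace(0.0, 2*np.pi, n)
dphi = phi[1] - phi[0]
A = np.exp(R*np.cos(phi)/2)

# K_mu(phi) = exp(-mu * int_0^phi A), via a cumulative trapezoid rule
cumA = np.concatenate(([0.0], np.cumsum((A[1:] + A[:-1])/2)*dphi))
K = np.exp(-mu*cumA)

# H(phi) from the closed-form expression, including the periodic constant
inv_cum = np.concatenate(([0.0], np.cumsum((1/K[1:] + 1/K[:-1])/2)*dphi))
H = K*(inv_cum + K[-1]/(1 - K[-1])*inv_cum[-1])

# Residual of the ODE (central differences) and periodicity check
resid = np.gradient(H, phi) + mu*A*H - 1.0
print(abs(H[0] - H[-1]), np.max(np.abs(resid[1:-1])))
```

Both printed numbers are at the level of the discretization error, confirming the formula.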
Let us now discuss the behavior of the solutions in the collisionless and hydrodynamic limits.
\subsection{Collisionless limit $\mu\rightarrow 0$}
In this limit, $K_\mu\simeq 1-\mu\int_0^{\varphi}A(\varphi')d\varphi'$. The asymptotic behavior of $H$ is then dominated by the singularity due to the denominator $1-K_\mu(2\pi)$ that vanishes for $\mu=0$. To leading order, we see that $H$ does not depend on $\varphi$ and is given by
\begin{equation}
H=\frac{1}{\mu\bar A},
\end{equation}
where
\begin{equation}
\bar A=\frac{1}{2\pi}\int_0^{2\pi}A(\varphi')d\varphi'=I_0(R/2)
\end{equation}
is the average value of $A$ ($I_0$ is the zeroth-order modified Bessel function of the first kind).
Using the actual values of $\mu$ and $A$ we have
\begin{eqnarray}
G&\sim&\frac{n_0}{\Gamma_0}\int_{h=0}^\infty\int_{x=0}^1\frac{hdh\,xdx}{\sqrt{1-x^2}} \frac{2\pi e^{-h/2}}{{\rm I}_0(xh/2)}\\
&\sim& \frac{15.87}{\Lambda}.
\end{eqnarray}
Note in particular that $G$ does not depend on the density of the cloud.
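The numerical constant quoted above can be reproduced by direct quadrature of the dimensionless double integral. The sketch below substitutes $x=\sin\theta$ to remove the endpoint singularity and uses NumPy's built-in $I_0$:

```python
import numpy as np

# Dimensionless collisionless integral:
# I = 2*pi * int_0^inf dh int_0^1 dx  h x e^{-h/2} / (sqrt(1-x^2) I0(x h/2)).
# With x = sin(theta), dx/sqrt(1-x^2) = d(theta) and the integrand is smooth.
n_th, n_h, h_max = 1200, 2400, 100.0
theta = (np.arange(n_th) + 0.5) * (np.pi/2) / n_th    # midpoint rule
h = (np.arange(n_h) + 0.5) * h_max / n_h

I = 0.0
for t in theta:                                       # loop keeps memory low
    I += (h * np.sin(t) * np.exp(-h/2) / np.i0(h*np.sin(t)/2)).sum()
I *= 2*np.pi * (np.pi/2/n_th) * (h_max/n_h)
print(I)   # should reproduce the value 15.87 quoted above
```

The tail beyond $h=100$ is exponentially negligible, so the truncation does not affect the quoted digits.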
In this limit, the velocity field is given by
\begin{equation}
v(\rho)=
\frac{F_0}{2\pi\Gamma_0}\int \frac{e^{(\rho^2-\Pi^2)/4}\Pi d\Pi d\theta}{I_0\left(\frac{1}{2}\sqrt{\Pi^2\rho^2\cos^2\theta+(\Pi^2-\rho^2)^2/4}\right)}
\label{EqSOMMaxwell}
\end{equation}
\subsection{Hydrodynamic limit $\mu\rightarrow\infty$}
In the limit $\Gamma_0\rightarrow\infty$ we may neglect the transport term $H'$ in (\ref{EqSOM3}), yielding $H(\varphi)=1/\mu A(\varphi)\propto 1/\Gamma_0$. Denoting by $\Gamma(\bm \rho)=\Gamma_0 e^{-\rho^2/2}$ the local spin damping rate, we recover the expected result $v(\rho )\propto 1/\Gamma (\bm\rho)$, which was obtained in the main text using local density arguments.
\section{A variational result for the collisionless spin conductance}
The spin conductance in a transverse trap is obtained by solving the linearized Boltzmann equation
\begin{equation}
\left(\bm\Pi\cdot\partial_{\bm\rho}-\bm\rho\cdot\partial_{\bm\Pi}\right)\alpha-Fp_z=-C[\alpha],
\end{equation}
where $C$ is the linearized collisional operator defined by
\begin{equation}
C[\alpha](\bm\rho,\bm p_1)=\int d^3\bm p_2f_0(\bm\rho,\bm p_2)|\bm p_2-\bm p_1|\sigma \left[\alpha(\bm\rho,\bm p_1)-\alpha(\bm\rho,\bm p_2)\right].
\end{equation}
We recall that $C$ is symmetric and positive for the scalar product
\begin{equation}
\langle g_1|g_2\rangle=\int d^2\bm\rho d^3\bm p g_1(\bm\rho,\bm p)g_2(\bm\rho,\bm p)f_0(\bm\rho,\bm p), \label{scalar}
\end{equation}
where $f_0(\bm\rho,\bm p)=n_0e^{-(p^2+\rho^2)/2}/\sqrt{2\pi}^3$ is the static phase-space density, and $n_0$ is the density at the center of the trap.
The spin current is defined by $\Phi=\int d^3\bm p d^2\bm\rho f_0 \alpha p_z=\langle \alpha|p_z\rangle$ and the spin conductance is then $G=\Phi/F$. Letting $\alpha(\bm\rho,\bm p)=F a(\bm\rho,\bm p)$ we have more simply $G=\langle p_z|a\rangle$.
We work in the collisionless limit $\sigma\rightarrow 0$ and we thus take $\sigma (p_{\rm rel})=\varepsilon \hat\sigma (p_{\rm rel})$ and $C=\varepsilon C_2$ where $\varepsilon$ is small. Following the results obtained for the Maxwellian gas, we expand $a$ as
\begin{equation}
a(\bm\rho,\bm p)=\frac{a_0(\bm\rho,\bm p)}{\varepsilon}+a_1(\bm\rho,\bm p)+\varepsilon a_2(\bm\rho,\bm p)+\ldots
\end{equation}
Inserting this expansion in Boltzmann's equation, we obtain to leading order
\begin{equation}
\left(\bm\Pi\cdot\partial_{\bm\rho}-\bm\rho\cdot\partial_{\bm\Pi}\right)a_0=0.
\label{Eq3SOM}
\end{equation}
This equation is solved readily by introducing the variables $(p_z,h,x,\varphi)$ defined in the study of the Maxwellian gas.
In these coordinates, Eq.~(\ref{Eq3SOM}) becomes simply $\partial_\varphi a_0=0$. The set ${\cal F}_0$ of solutions of Eq.~(\ref{Eq3SOM}) is thus composed of functions whose value does not depend on the angle $\varphi$. To get the actual expression of $a_0$ we need to go one step further in the expansion. At the next order in $\varepsilon$, we have
\begin{equation}
\partial_\varphi a_1-p_z=-C_2[a_0].
\end{equation}
To get rid of $a_1$, we integrate over $\varphi$ and use the fact that the $a_n$ are periodic functions of $\varphi$. We then obtain the equation
\begin{equation}
\bar C_2[a_0]=p_z,
\label{Eq2}
\end{equation}
with $\bar C_2[a_0]=\int d\varphi C_2[a_0]/2\pi$ and where $a_0$ is now the only unknown.
We define on ${\cal F}_0$ the new scalar product
\begin{equation}
(a|b)=4\pi^2\int \frac{xdx\, hdh\, dp_z}{\sqrt{1-x^2}} f_0 a(x,h,p_z)b(x,h,p_z)
\end{equation}
which coincides with the old scalar product $\langle a|b\rangle$ when restricted to ${\cal F}_0$.
We then see readily that $(a|\bar C[b])=\langle a|C[b]\rangle$. Using the properties of $C$, we deduce that $\bar C_2$ is a symmetric, positive operator on ${\cal F}_0$. Eq. (\ref{Eq2}) then has the same structure as the ones used to calculate transport coefficients in homogeneous systems. We can then use the usual tricks to get a bound on the spin conductance \cite{smith1989transport}. We indeed write that for any real $\lambda$ and any function $b\in {\cal F}_0$, we have $( a_0+\lambda b|\bar C_2[a_0+\lambda b])\ge 0$, and using the fact that this second order polynomial in $\lambda$ is non-negative, we obtain from the non-positivity of the discriminant that for any $b$,
\begin{equation}
G\ge \frac{( p_z|b)^2}{( b|\bar C[b])}=\frac{\langle p_z|b\rangle^2}{\langle b|C[b]\rangle},
\end{equation}
the bound being reached for $b=a_0$. We take as a variational ansatz $b=p_z$, since as discussed earlier, the collisionless regime is associated with a rather flat velocity profile. We then obtain
\begin{equation}
G\ge \frac{\left(\int d^2\bm\rho e^{-\rho^2/2}\right)^2}{\int d^2\bm\rho e^{-\rho^2}}\frac{n_0}{\Gamma_0}=4\pi \frac{n_0}{\Gamma_0},
\end{equation}
with $\Gamma_0$ the spin drag on the axis of the trap. The prefactor is $4\pi\simeq 12.56$, not far from the result 15.87 found analytically for the Maxwellian gas, and the bound is indeed satisfied.
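The Gaussian-integral ratio giving the $4\pi$ prefactor is easy to verify numerically (radially, $\int e^{-a\rho^2}d^2\bm\rho=\pi/a$, so the ratio is $(2\pi)^2/\pi=4\pi$):

```python
import numpy as np

# Prefactor of the variational bound:
# (int e^{-rho^2/2} d^2rho)^2 / int e^{-rho^2} d^2rho = (2*pi)^2 / pi = 4*pi
n, r_max = 200000, 20.0
r = (np.arange(n) + 0.5) * r_max / n       # midpoint rule in the radius
dr = r_max / n
i_half = (2*np.pi * r * np.exp(-r**2/2)).sum() * dr   # equals 2*pi
i_one  = (2*np.pi * r * np.exp(-r**2)).sum() * dr     # equals pi
ratio = i_half**2 / i_one
print(ratio, 4*np.pi)
```

As stated in the text, this lower bound ($\simeq 12.57$) is indeed below the exact Maxwellian value 15.87.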
An improved variational bound can be obtained by using the exact result Eq. (\ref{EqSOMMaxwell}) found for the Maxwellian gas to estimate the spin conductance for constant or unitarity-limited cross-sections. For the Maxwellian gas, this gives by definition the exact result, while for the constant cross-section and the unitary gases, we obtain respectively $G\ge 14.5\,n_0/\Gamma_0$ and $G\ge 17\,n_0/\Gamma_0$.
\section{Interpolation scheme for the spin conductance}
We know that for a power-law cross-section \footnote{For a real cross-section, $G$ should also depend on $k_{\rm th}a$, although this dependence is weak.}, the spin conductance $G$ scales like $(n_0/\Gamma_0)f(1/\Gamma_0)$ where $f$ obeys the following asymptotic behaviors:
\begin{enumerate}
\item In the collisionless limit, $f$ converges to a constant value ($\simeq 15.87$ for the Maxwellian gas).
\item In the hydrodynamic limit, $f$ has a logarithmic singularity and scales like $2\pi \ln \Gamma_0$.
\end{enumerate}
To interpolate between these two limits we make use of the Bessel function $K_0$ which vanishes at $+\infty$ and diverges as $-\ln x$ at $x=0$. We thus approximate $f$ by
\begin{equation}
f(x)=2\pi K_0(x)+15.87\frac{x+a}{x+b},
\end{equation}
where $a$ and $b$ are determined by fitting the results of the molecular dynamics simulations (see Fig. \ref{Fig:1}). In the case of a constant cross-section, we obtain $a=0.11$ and $b=0.52$. We note that the largest relative error between the Pad\'e interpolation and the result of the molecular simulation is observed for the largest values of $\Gamma_0$ and amounts to $\simeq 6$~\% for $\Gamma_0\simeq 10$.
\begin{figure}
\centerline{\includegraphics[width=\columnwidth]{GraphPade.eps}}
\caption{Spin conductance $G$ for a constant cross-section. Dots: molecular dynamics simulation. Solid line: Pad\'e approximation. }
\label{Fig:1}
\end{figure}
\newcommand{\aob}[2]{\ens\big({#1 \atop #2}\big)}
\newcommand{\aobc}[2]{\ens\big\{{#1 \atop #2}\big\}}
\newcommand{\aonb}[2]{\ens\frac{#1}{#2}}
\newcommand{\aoboc}[3]{\lba{c}#1\\#2\\#3\ear}
\newcommand{\aobcod}[4]{\ens\big({#1 \atop #2}{\mbox{ } #3 \atop \mbox{ } #4}\big)}
\newcommand{\aobcodx}[4]{\ens\left[{#1 \atop #2}{\mbox{ } #3 \atop \mbox{ } #4}\right]}
\newcommand{\aobcodL}[4]{\ens\left[{#1 \atop #2}{ #3 \atop #4}\right]}
\newcommand{\abx}[2]{ {{#1 \atop #2}}}
\newcommand{\abL}[2]{\left[ {#1 \atop #2}\right]}
\newcommand{\abc}[3]{ \left[ { { #1 \atop #2} \atop _{#3}}\right]}
\newcommand{\abcL}[3]{ \left[ { { #1 \atop #2} \atop _{#3}}\right]}
\newcommand{\abcx}[3]{ { { #1 \atop #2} \atop _{#3}}}
\newcommand{\vecL}[3]{ \left[ { { #1 \atop #2} \atop _{#3}}\right]}
\newcommand{\vecx}[3]{ { { #1 \atop #2} \atop _{#3}}}
\newcommand{\abcdx}[4]{{ { #1 \atop #2} \atop {#3 \atop #4}}}
\newcommand{\ivecx}[4]{{ { #1 \atop #2} \atop {#3 \atop #4}}}
\newcommand{\abcdL}[4]{\left[{ { #1 \atop #2} \atop {#3 \atop #4}}\right]}
\newcommand{\ivecL}[4]{\left[{ { #1 \atop #2} \atop {#3 \atop #4}}\right]}
\newcommand{\tubytu}[4]{ \left[ \abx{#1}{#2} \abx{#3}{#4} \right]}
\newcommand{\abcS}[6]{ \left[ \abcx{#1}{#2}{#3} \abcx{#4}{#5}{#6} \right]}
\newcommand{\thbytu}[6]{ \left[ \abcx{#1}{#2}{#3} \abcx{#4}{#5}{#6} \right]}
\newcommand{\abcs}[6]{\left[ { #1 \atop #2} { #3 \atop #4} { #5 \atop #6} \right]}
\newcommand{\tubyth}[6]{\left[ { #1 \atop #2} { #3 \atop #4} { #5 \atop #6} \right]}
\newcommand{\abcM}[9]{ \left[ \abcx{#1}{#2}{#3} \abcx{#4}{#5}{#6}
\abcx{#7}{#8}{#9}\right]}
\newcommand{\thbyth}[9]{ \left[ \abcx{#1}{#2}{#3} \abcx{#4}{#5}{#6}
\abcx{#7}{#8}{#9}\right]}
\newcommand{\abcdex}[5]{{ {{{ #1 \atop #2} \atop {#3 \atop #4}} \atop _{#5}}} }
\newcommand{\abcdeL}[5]
{\left[{ {{{ #1 \atop #2} \atop {#3 \atop #4}} \atop _{#5}}}\right]}
\newcommand{\vvecx}[5]{{ {{{ #1 \atop #2} \atop {#3 \atop #4}} \atop _{#5}}} }
\newcommand{\vvecL}[5]
{\left[{ {{{ #1 \atop #2} \atop {#3 \atop #4}} \atop _{#5}}}\right]}
\newcommand{\dadb}[2]{\ens{\frac{\pde #1}{\pde #2}}}
\newcommand{\dadbd}[2]{\ens{\frac{d #1}{d #2}}}
\newcommand{\dsqdadb}[2]{\ens{\frac{\pde^2}{\pde #1 \pde #2}}}
\newcommand{\dsqadbdc}[3]{\ens{\frac{\pde^2 #1}{\pde #2 \pde #3}}}
\newcommand{\dsqadbdcd}[3]{\ens{\frac{d^2 #1}{d #2 d #3}}}
\newcommand{\dsqadbsq}[2]{\ens{\frac{\pde^2 #1}{\pde #2^2}}}
\newcommand{\dsqadbsqd}[2]{\ens{\frac{d^2 #1}{d #2^2}}}
\newcommand{\adotb}[2]{\ens{#1,\cdots,#2}}
\newcommand{\dash}{$"$}
\newcommand{\dai}{^{'}}
\newcommand{\dae}{$^{'}$}
\newcommand{\daes}{$^{'}$ }
\newcommand{\ddai}{^{''}}
\newcommand{\ddae}{$^{''}$}
\newcommand{\emu}[1]{\ens{e^{-#1}}}
\newcommand{\epu}[1]{\ens{e^{#1}}}
\newcommand{\asubb}[2]{\ens{#1_{#2}}}
\newcommand{\calx}[1]{\ens{{\cal #1}}}
\newcommand{\subc}[1]{\ens{_{(#1)}}}
\newcommand{\aorb}[5]{ #1 = \lcba {r@{\quad if\quad}l} #2 & #3 \\ #4 & #5 \end{array}\right. }
\newcommand{\aorborc}[7]{ #1 = \lcba {r@{\quad:\quad}l} #2 & #3 \\ #4 & #5\\ #6 & #7 \end{array}\right. }
\newcommand{\aorbx}[6]{ #1 #6 \lcba {r@{\quad if\quad}l} #2 & #3 \\ #4 & #5 \end{array}\right. }
\newcommand{\aorbs}[4]{\lcba {r@{\quad if\quad}l} #1 & #2 \\ #3 & #4 \end{array}\right. }
\newcommand{\intab}[2]{\ens{\int_{#1}^{#2}}}
\newcommand{\mbspace}[1]{\ens{$\mbox{ }$\hspace{#1 em}}}
\newcommand{\subab}[2]{\ens{#1_#2}}
\newcommand{\wov}[1]{\ens{\frac{1}{#1}}}
\newcommand{\sumab}[2]{\ens{\Sg_#1^#2}}
\newcommand{\wtv}[1]{\ens{1,\cdots,#1}}
\newcommand{\vtv}[2]{\ens{#1,\cdots,#2}}
\newcommand{\upcx}[1]{\ens{^{(#1)}}}
\newcommand{\subcx}[1]{\ens{_{(#1)}}}
\newcommand{\und}[1]{_{_#1}}
\newcommand{\snk}[1]{#1_{n,k}}
\newcommand{\wfig}{\begin{wrapfigure}{r}{0.5\textwidth}}
\newcommand{\winc}[1]{\includegraphics[width=0.48\textwidth]{#1}}
\newcommand{\wfigb}[1]{\begin{wrapfigure}[#1]{r}{0.5\textwidth}}
\newcommand{\wfige}{\end{wrapfigure}}
\newcommand{\alr}{\alpha_r}
\newcommand{\bmp}{\begin{minipage}}
\newcommand{\bfig}{\begin{figure}}
\newcommand{\bpf}{\begin{proof}}
\newcommand{\cip}{{_{p} \atop ^{\ra}}}
\newcommand{\call}{{\cal L}}
\newcommand{\calh}{{\cal H}}
\newcommand{\cale}{{\cal E}}
\newcommand{\cali}{{\cal I}}
\newcommand{\calp}{{\cal P}}
\newcommand{\calf}{{\cal F}}
\newcommand{\calg}{{\cal G}}
\newcommand{\cald}{{\cal D}}
\newcommand{\colw}{\columnwidth}
\newcommand{\domtpi}{\frac{d\omega}{2\pi}}
\newcommand{\ept}{\ep_t}
\newcommand{\epi}{\ep_i}
\newcommand{\emp}{\end{minipage}}
\newcommand{\efig}{\end{figure}}
\newcommand{\epf}{\end{proof}}
\newcommand{\fh}{\hat{f}}
\newcommand{\intoT}{\int_0^T}
\newcommand{\intot}{\int_0^t}
\newcommand{\intoi}{\int_0^\infty}
\newcommand{\intmii}{\int_{-\infty}^\infty}
\newcommand{\intow}{\int_0^1}
\newcommand{\intmww}{\int_{-1}^1}
\newcommand{\intmwo}{\int_{-1}^0}
\newcommand{\ion}{\frac{i}{n}}
\newcommand{\intiin}{\int_{\frac{i-1}{n}}^{\frac{i}{n}}}
\newcommand{\intmpp}{\int_{-\pi}^{\pi}}
\newcommand{\intopiot}{\int_0^{\pi/2}}
\newcommand{\limni}{\lim_{n\ra\infty}}
\newcommand{\limepo}{\lim_{\ep\ra 0}}
\newcommand{\limTi}{\lim_{T\rai}}
\newcommand{\nut}{\nu_t}
\newcommand{\omk}{\omega_k}
\newcommand{\odel}{o(\del)}
\newcommand{\phir}{\phi_r}
\newcommand{\quart}{\frac{1}{4}}
\newcommand{\qh}{\hat{q}}
\newcommand{\Qh}{\hat{Q}}
\newcommand{\rai}{\ra\infty}
\newcommand{\rao}{\ra 0}
\newcommand{\sgwn}{\Sg_1^n}
\newcommand{\sgwp}{\Sg_1^p}
\newcommand{\sgwm}{\Sg_1^m}
\newcommand{\sgmmm}{\Sg_{-m}^m}
\newcommand{\sgwi}{\Sg_1^\infty}
\newcommand{\sgmii}{\Sg_{-\infty}^\infty}
\newcommand{\sgoi}{\Sg_0^\infty}
\newcommand{\sgwN}{\Sg_1^N}
\newcommand{\sgwr}{\Sg_1^r}
\newcommand{\sgwM}{\Sg_1^M}
\newcommand{\sgwT}{\Sg_1^T}
\newcommand{\tpikon}{\frac{2\pi}{n}}
\newcommand{\upm}{^{-1}}
\newcommand{\upgi}{^{-}}
\newcommand{\upa}{\uparrow}
\newcommand{\woT}{\frac{1}{T}}
\newcommand{\won}{\frac{1}{n}}
\newcommand{\wtn}{1,\cdots,n}
\newcommand{\wtT}{1,\cdots,T}
\newcommand{\woN}{\frac{1}{N}}
\newcommand{\xt}{x_t}
\newcommand{\Xt}{X_t}
\newcommand{\Xsubi}{X_i}
\newcommand{\Xsi}{X_i}
\newcommand{\yt}{y_t}
\begin{document}
\title{State Space Methods for Granger-Geweke Causality Measures
\thanks{This work was partially supported by the NIH}}
\author{Victor Solo\thanks{v.solo@unsw.edu.au}\\
School of Electrical Engineering\\
University of New South Wales, Sydney, AUSTRALIA}
\date{January 16, 2015}
\maketitle
\begin{abstract}
Two recent developments have put the
spotlight on some significant gaps in the theory of
multivariate time series:
the recent interest in the dynamics of networks,
and the advent, across a range of applications,
of measuring modalities
that operate on different temporal scales.
Fundamental to the description of network dynamics
is the direction of interaction between nodes, accompanied
by a measure of the strength of such interactions.
Granger causality (GC) and its associated frequency domain
strength measures (GEMs) (due to Geweke) provide
a framework for the formulation and analysis of these issues.
In pursuing this setup, three significant unresolved issues emerge.
Firstly, computing GEMs
involves computing submodels of
vector time series models, for which reliable methods do not exist.
Secondly, the impact of filtering on GEMs has never been definitively
established. Thirdly, the impact of downsampling on GEMs has never
been established. In this work, using state space methods,
we resolve all these issues
and illustrate the results with some simulations.
Our discussion is motivated by some problems
in fMRI brain imaging but is of general applicability.
\end{abstract}
\newcommand{\anb}[2]{\ens{(#1 \mbox{ } #2)}}
\newcommand{\abcd}[4]{\ens{\big({#1 \atop #2} \mbox{ } {#3 \atop #4}\big)}}
\newcommand{\expj}[1]{exp(#1)}
\newcommand{\Lm}{L^{-1}}
\newcommand{\Lp}{L}
\newcommand{\Ab}{\bar{A}}
\newcommand{\Abm}{\bar{A}_m}
\newcommand{\As}{A_s}
\newcommand{\at}{a_t}
\newcommand{\bsubt}{b_t}
\newcommand{\aum}{A^m}
\newcommand{\cx}{C_X}
\newcommand{\cy}{C_Y}
\newcommand{\chisq}{\chi^2}
\newcommand{\bx}{B_X}
\newcommand{\by}{B_Y}
\newcommand{\bo}{B_o}
\newcommand{\ddash}{$"$}
\newcommand{\dlamtpi}{\frac{d\lambda}{2\pi}}
\newcommand{\evec}{eigenvector }
\newcommand{\bxt}{\bar{X}_t}
\newcommand{\byt}{\bar{Y}_t}
\newcommand{\bepxt}{\bar{\ep}_{X,t}}
\newcommand{\bnt}{\bar{n}_t}
\newcommand{\bzt}{\bar{Z}_t}
\newcommand{\dlamotpi}{\frac{d\lambda}{2\pi}}
\newcommand{\calc}{{\mathcal C}}
\newcommand{\calo}{{\mathcal O}}
\newcommand{\dy}{d_y}
\newcommand{\dx}{d_x}
\newcommand{\dxx}{D_{XX}}
\newcommand{\dxy}{D_{XY}}
\newcommand{\dyx}{D_{YX}}
\newcommand{\dyy}{D_{YY}}
\newcommand{\epa}{\ep_a}
\newcommand{\epb}{\ep_b}
\newcommand{\est}{e_t}
\newcommand{\epxt}{\ep_{X,t}}
\newcommand{\epyt}{\ep_{Y,t}}
\newcommand{\epbk}{\bar{\ep}_k}
\newcommand{\epbxk}{\bar{\ep}_{X,k}}
\newcommand{\epbyk}{\bar{\ep}_{Y,k}}
\newcommand{\epxoyt}{({_{\ep_{X,t}} \atop ^{\ep_{Y,t}}})}
\newcommand{\elam}{\expj{j\lambda}}
\newcommand{\elamm}{\expj{-j\lambda}}
\newcommand{\fyx}{F_{Y\ra X}}
\newcommand{\fxy}{F_{X\ra Y}}
\newcommand{\fyxh}{\hat{F}_{Y\ra X}}
\newcommand{\fxyh}{\hat{F}_{X\ra Y}}
\newcommand{\fbyx}{F_{\bar{Y}\ra \bar{X}}}
\newcommand{\fbxy}{F_{\bar{X}\ra \bar{Y}}}
\newcommand{\fyxlam}{f_{Y\ra X}(\lambda)}
\newcommand{\fxylam}{f_{X\ra Y}(\lambda)}
\newcommand{\fbyxlam}{f_{\bar{Y}\ra\bar{X}}(\lambda)}
\newcommand{\fbxylam}{f_{\bar{X}\ra\bar{Y}}(\lambda)}
\newcommand{\felam}{f_e(\lambda)}
\newcommand{\fxlam}{f_X(\lambda)}
\newcommand{\fylam}{f_Y(\lambda)}
\newcommand{\fbxlam}{f_{\bar{X}}(\lambda)}
\newcommand{\fbzlam}{f_{\bar{Z}}(\lambda)}
\newcommand{\fydx}{F_{Y.X}}
\newcommand{\fxoy}{F_{XoY}}
\newcommand{\fydxh}{\hat{F}_{Y.X}}
\newcommand{\fxoyh}{\hat{F}_{XoY}}
\newcommand{\fg}{f_G}
\newcommand{\fy}{f_Y}
\newcommand{\fxz}{f_X(\Lp)}
\newcommand{\fez}{f_e(\Lp)}
\newcommand{\fa}{f_a}
\newcommand{\fb}{f_b}
\newcommand{\Gc}{G_c}
\newcommand{\Go}{G_o}
\newcommand{\Goc}{G_{o,c}}
\newcommand{\hxik}{\hat{\xi}_k}
\newcommand{\hxikp}{\hat{\xi}_{k+1}}
\newcommand{\hxx}{H_{XX}}
\newcommand{\hxy}{H_{XY}}
\newcommand{\hyx}{H_{YX}}
\newcommand{\hyy}{H_{YY}}
\newcommand{\he}{H_e}
\newcommand{\hex}{H_{eX}}
\newcommand{\hexz}{H_{eX}(\Lp)}
\newcommand{\hbex}{H_{e\bar{X}}}
\newcommand{\Hb}{\bar{H}}
\newcommand{\gss}{(A,C,[Q,S,R])}
\newcommand{\iss}{(A,C,B,\Sg_\ep)}
\newcommand{\gamx}{\gamma_x}
\newcommand{\gamy}{\gamma_y}
\newcommand{\gzm}{g(z^{-1})}
\newcommand{\Gzm}{G(z^{-1})}
\newcommand{\heyxz}{H_{e,YX}(\Lp)}
\newcommand{\ho}{H_o}
\newcommand{\hoex}{H_{oeX}}
\newcommand{\hnt}{\hat{n}_t}
\newcommand{\hyet}{\hat{Y}_{E,t}}
\newcommand{\gamo}{\gamma_0}
\newcommand{\gamw}{\gamma_1}
\newcommand{\hxyz}{H_{XY}(L)}
\newcommand{\hxxz}{H_{XX}(L)}
\newcommand{\hyxz}{H_{YX}(L)}
\newcommand{\hyyz}{H_{YY}(L)}
\newcommand{\hc}{H_c}
\newcommand{\hbxyz}{\bar{H}_{XY}(L)}
\newcommand{\hbxxz}{\bar{H}_{XX}(L)}
\newcommand{\hbyxz}{\bar{H}_{YX}(L)}
\newcommand{\hbyyz}{\bar{H}_{YY}(L)}
\newcommand{\kbex}{K_{e\bar{X}}}
\newcommand{\km}{K_m^\ast}
\newcommand{\Kbm}{\bar{K}_m}
\newcommand{\lm}{L_m}
\newcommand{\kcx}{K_{(X)}}
\newcommand{\kcy}{K_{(Y)}}
\newcommand{\Kt}{K_t}
\newcommand{\Ks}{K_s}
\newcommand{\Lum}{L\upm}
\newcommand{\Jo}{J_o}
\newcommand{\Jb}{\bar{J}}
\newcommand{\ka}{k_a}
\newcommand{\kb}{k_b}
\newcommand{\kc}{k_c}
\newcommand{\nuxk}{\nu_{X,k}}
\newcommand{\nuyk}{\nu_{Y,k}}
\newcommand{\nuxl}{\nu_{X,l}}
\newcommand{\ozk}{\ol{z}_k}
\newcommand{\oxik}{\ol{\xi}_k}
\newcommand{\oxikp}{\ol{\xi}_{k+1}}
\newcommand{\oepk}{\ol{\ep}_k}
\newcommand{\nuk}{\nu_k}
\newcommand{\nt}{n_t}
\newcommand{\omx}{\Om_X}
\newcommand{\phizm}{\phi(\Lp)}
\newcommand{\Phizm}{\Phi(\Lp)}
\newcommand{\pcx}{P_{(X)}}
\newcommand{\phix}{\phi_x}
\newcommand{\Phix}{\Phi_X}
\newcommand{\phiy}{\phi_y}
\newcommand{\phia}{\phi_a}
\newcommand{\phib}{\phi_b}
\newcommand{\Phiz}{\Phi(\Lp)}
\newcommand{\phixyz}{\Phi_{XY}(\Lp)}
\newcommand{\phixxz}{\Phi_{XX}(\Lp)}
\newcommand{\phixx}{\Phi_{XX}}
\newcommand{\phiyxz}{\Phi_{YX}(\Lp)}
\newcommand{\phiyyz}{\Phi_{YY}(\Lp)}
\newcommand{\nxt}{n_{X,t}}
\newcommand{\nyt}{n_{Y,t}}
\newcommand{\py}{p_Y}
\newcommand{\Pt}{P_t}
\newcommand{\Ptp}{P_{t+1}}
\newcommand{\Pm}{P_m^\ast}
\newcommand{\Phib}{\bar{\Phi}}
\newcommand{\Psib}{\bar{\Psi}}
\newcommand{\Sgep}{\Sg_{\ep}}
\newcommand{\sgo}{\Sg_o}
\newcommand{\sgoo}{\Sg^o}
\newcommand{\sgep}{\Sg_{\ep}}
\newcommand{\sgepx}{\Sg_{X,\ep}}
\newcommand{\sgepy}{\Sg_{Y,\ep}}
\newcommand{\sgepyx}{\Sg_{YX,\ep}}
\newcommand{\sgepxy}{\Sg_{XY,\ep}}
\newcommand{\sgepygx}{\Sg_{(Y|X),\ep}}
\newcommand{\sgepxgy}{\Sg_{(X|Y),\ep}}
\newcommand{\sgxe}{\sgepx}
\newcommand{\sgyxe}{\sgepyx}
\newcommand{\sgxye}{\sgepxy}
\newcommand{\sgye}{\sgepy}
\newcommand{\sgxem}{\Sg_{X,\ep}^{-1}}
\newcommand{\qssr}{({_Q \atop ^{S^T}} {_{S} \atop ^R})}
\newcommand{\qsm}{Q_m}
\newcommand{\Qb}{\bar{Q}}
\newcommand{\Qbm}{\bar{Q}_m}
\newcommand{\Qs}{Q_s}
\newcommand{\Qm}{Q_m}
\newcommand{\Qmm}{Q_{m-1}}
\newcommand{\thrd}{\frac{1}{3}}
\newcommand{\ry}{r_Y}
\newcommand{\rg}{r_G}
\newcommand{\thty}{\theta_Y}
\newcommand{\thtg}{\theta_G}
\newcommand{\thtx}{\theta_x}
\newcommand{\sga}{\sg_a}
\newcommand{\sgb}{\sg_b}
\newcommand{\sm}{S_m}
\newcommand{\Sgb}{\overline{\Sg}}
\newcommand{\Rb}{\overline{R}}
\newcommand{\Sb}{\overline{S}}
\newcommand{\stsp}{state space }
\newcommand{\taua}{\tau_a}
\newcommand{\taub}{\tau_b}
\newcommand{\vm}{V_m^\ast}
\newcommand{\vbm}{\bar{V}_m}
\newcommand{\vcx}{V_{(X)}}
\newcommand{\supa}{^a}
\newcommand{\hxit}{\hat{\xi}_t}
\newcommand{\hxitp}{\hat{\xi}_{t+1}}
\newcommand{\wk}{w_k}
\newcommand{\wt}{w_t}
\newcommand{\vnu}{V_\nu}
\newcommand{\Vt}{V_t}
\newcommand{\Vtm}{V_t^{-1}}
\newcommand{\upcm}{^{(m)}}
\newcommand{\zimai}{(zI-A)^{-1}}
\newcommand{\wophixzm}{\frac{1}{1-\phi_X(z^{-1})}}
\newcommand{\wophiyzm}{\frac{1}{1-\phi_Y(z^{-1})}}
\newcommand{\wxk}{w_{X,k}}
\newcommand{\wyk}{w_{Y,k}}
\newcommand{\wtxk}{\tilde{w}_{X,k}}
\newcommand{\uph}{^{\half}}
\newcommand{\xit}{\xi_t}
\newcommand{\xitp}{\xi_{t+1}}
\newcommand{\zmt}{z^{-2}}
\newcommand{\zt}{z_t}
\newcommand{\xoyt}{({_{x_t} \atop ^{y_t}})}
\newcommand{\ybk}{\bar{y}_k}
\newcommand{\xbk}{\bar{x}_k}
\newcommand{\zbk}{\bar{z}_k}
\newcommand{\xbat}{\bar{x}_t}
\newcommand{\ybat}{\bar{y}_t}
\newcommand{\zbt}{\bar{z}_t}
\newcommand{\xabm}{X_{a,b}^{-}}
\newcommand{\xabp}{X_{a,b}^{+}}
\newcommand{\xoa}{X_a^0}
\newcommand{\xab}{X_a^b}
\newcommand{\xmt}{X_{-\infty}^t}
\newcommand{\ymt}{Y_{-\infty}^t}
\newcommand{\Xtp}{X_{t+1}}
\newcommand{\xupt}{X^t}
\newcommand{\xuptp}{X^{t+1}}
\newcommand{\yupt}{Y^t}
\newcommand{\yuptp}{Y^{t+1}}
\newcommand{\xuptpp}{X^{t+p}}
\newcommand{\yuptpp}{Y^{t+p}}
\newcommand{\xuptppp}{X^{t+p+1}}
\newcommand{\xtpupinf}{X_{t+1}^{\infty}}
\newcommand{\xmtmm}{X^{-1}_{-(t-1)}}
\newcommand{\ymtmm}{Y^{-1}_{-(t-1)}}
\newcommand{\Ytp}{Y_{t+1}}
\newcommand{\xb}{\bar{X}}
\newcommand{\xix}{\xi_x}
\newcommand{\xiy}{\xi_y}
\newcommand{\zetak}{\zeta_k}
\newcommand{\zimam}{(L^{-1}I-A)^{-1}}
\newcommand{\zimaumm}{(L^{-1}I-A^m)^{-1}}
\newcommand{\liima}{(L^{-1}I-A)^{-1}}
\newcommand{\imal}{(I-AL)^{-1}}
\newcommand{\liimam}{(L^{-1}I-A^m)^{-1}}
\newcommand{\imaml}{(I-A^mL)^{-1}}
\newcommand{\czl}{C_Z(L)}
\newcommand{\czk}{C_{Z,k}}
\newcommand{\czo}{C_{Z,0}}
\newcommand{\alp}{\alpha_\perp}
\newcommand{\cxl}{C_X(L)}
\newcommand{\cyl}{C_Y(L)}
\newcommand{\cyxl}{C_{YX}(L)}
\newcommand{\bcyxl}{\bar{C}_{YX}(L)}
\newcommand{\bczl}{\bar{C}_Z(L)}
\newcommand{\bcxl}{\bar{C}_X(L)}
\newcommand{\bepxs}{\bar{\ep}_{X,s}}
\newcommand{\bcx}{\bar{C}_X}
\newcommand{\hepxt}{\hat{\ep}_{X,t}}
\newcommand{\hepxms}{\hat{\ep}_{X,ms}}
\newcommand{\eptmk}{\ep_{t-k}}
\newcommand{\dombtpi}{\frac{d\bar{\omega}}{2\pi}}
\newcommand{\czom}{C_Z(e^{j\omega})}
\newcommand{\epxms}{\ep_{X,ms}}
\newcommand{\ejkom}{e^{jk\omega}}
\newcommand{\byes}{\bar{Y}_{E,s}}
\newcommand{\bns}{\bar{n}_s}
\newcommand{\bep}{\bar{\ep}}
\newcommand{\hep}{\hat{\ep}}
\newcommand{\bac}{\bar{C}}
\newcommand{\bys}{\bar{Y}_s}
\newcommand{\bxs}{\bar{X}_s}
\newcommand{\bzs}{\bar{Z}_s}
\newcommand{\byis}{\bar{Y}_{I,s}}
\newcommand{\hx}{H(X)}
\newcommand{\hxgy}{H(X|Y)}
\newcommand{\hygx}{H(Y|X)}
\newcommand{\ixy}{I(X;Y)}
\newcommand{\hy}{H(Y)}
\newcommand{\gz}{|Z}
\newcommand{\xcy}{X;Y}
\newcommand{\fzom}{f_Z(\omega)}
\newcommand{\gamzk}{\Gam_{Z,k}}
\newcommand{\gamzkm}{\Gam_{Z,km}}
\newcommand{\intpp}{\int_{-\pi}^\pi}
\newcommand{\intotpi}{\int_0^{2\pi}}
\newcommand{\fz}{f_Z}
\newcommand{\fbz}{f_{\bar{Z}}}
\newcommand{\fwom}{F_W(\omega)}
\newcommand{\grcs}{{_{GC} \atop ^{\lra}}}
\newcommand{\lgrcs}{{_{GC} \atop ^{\lra}}}
\newcommand{\wgrcs}{{_{WGC} \atop ^{\lra}}}
\newcommand{\wlgrcs}{{_{WGC} \atop ^{\lra}}}
\newcommand{\wgrcsa}{{_{WGC^\ast} \atop ^{\lra}}}
\newcommand{\sgrcs}{{_{SGC} \atop ^{\lra}}}
\newcommand{\sgrcsa}{{_{SGC^\ast} \atop ^{\lra}}}
\newcommand{\slgrcs}{{_{SGC} \atop ^{\lra}}}
\newcommand{\usgrcs}{{_{USGC} \atop ^{\lra}}}
\newcommand{\ugrcs}{{_{UGC} \atop ^{\lra}}}
\newcommand{\ulgrcs}{{_{UGC} \atop ^{\lra}}}
\newcommand{\haw}{\hat{\omega}}
\newcommand{\hyes}{\hat{Y}_{E,s}}
\newcommand{\lt}{$L_t$ }
\newcommand{\lot}{$L_t^0$ }
\newcommand{\omba}{\bar{\omega}}
\newcommand{\omb}{\bar{\omega}}
\newcommand{\limminf}{{_{lim} \atop ^{m\ra\infty}}}
\newcommand{\rt}{$R_t$ }
\newcommand{\rot}{$R_t^0$ }
\newcommand{\sgx}{\Sg_X}
\newcommand{\sgxy}{\Sg_{XY}}
\newcommand{\sgyx}{\Sg_{YX}}
\newcommand{\sgy}{\Sg_Y}
\newcommand{\xw}{X_1}
\newcommand{\xtwo}{X_2}
\newcommand{\xtre}{X_3}
\newcommand{\xat}{X_{\alpha,t}}
\newcommand{\xbt}{X_{\beta,t}}
\newcommand{\ztt}{Z_{\theta,t}}
\newcommand{\zbs}{\bar{Z}_s}
\newcommand{\zms}{Z_{ms}}
\newcommand{\xbs}{\bar{X}_s}
\newcommand{\xms}{X_{ms}}
\newcommand{\ybs}{\bar{Y}_s}
\newcommand{\yms}{Y_{ms}}
\newcommand{\wom}{\frac{1}{m}}
\newcommand{\wscs}{{_{WSC} \atop ^{\lra}}}
\newcommand{\sscs}{{_{SSC} \atop ^{\lra}}}
\newcommand{\wscsa}{{_{WSC^\ast} \atop ^{\lra}}}
\newcommand{\sscsa}{{_{SSC^\ast} \atop ^{\lra}}}
\newcommand{\wpcs}{{_{WPC} \atop ^{\lra}}}
\newcommand{\spcs}{{_{SPC} \atop ^{\lra}}}
\newcommand{\wpcsa}{{_{WPC^\ast} \atop ^{\lra}}}
\newcommand{\spcsa}{{_{SPC^\ast} \atop ^{\lra}}}
\newcommand{\yet}{Y_{E,t}}
\newcommand{\yit}{Y_{I,t}}
\newcommand{\upw}{^{-1}}
\newcommand{\yonm}{Y_1^{n-1}}
\newcommand{\zdi}{Z_i}
\newcommand{\zimw}{Z^{i-1}_1}
\newcommand{\zmimw}{Z^{-1}_{-(i-1)}}
\newcommand{\zwn}{Z_1^n}
\newcommand{\xmimw}{X^{-1}_{-(i-1)}}
\newcommand{\ymimw}{Y^{-1}_{-(i-1)}}
\newcommand{\zdm}{Z^{-}}
\newcommand{\xmmn}{X_{-n}^{-1}}
\newcommand{\ymmn}{Y_{-n}^{-1}}
\newcommand{\px}{p_x}
\newcommand{\xon}{X_1^n}
\newcommand{\xton}{X_1,\cdots,X_n}
\newcommand{\xtn}{X_1,\cdots,X_n}
\newcommand{\yon}{Y_1^n}
\newcommand{\xonm}{X_1^{n-1}}
\newcommand{\yonp}{Y_1^{n+1}}
\newcommand{\yton}{Y_1,\cdots,Y_n}
\newcommand{\zon}{Z_1^n}
\newcommand{\fon}{\frac{1}{n}}
\newcommand{\paperp}{\perp}
\newcommand{\calm}{{\cal M}}
\newcommand{\xupn}{X^n}
\newcommand{\xupnp}{X^{n+1}}
\newcommand{\xupinf}{X^{\infty}}
\newcommand{\yupn}{Y^n}
\newcommand{\yupnp}{Y^{n+1}}
\newcommand{\yupinf}{Y^{\infty}}
\newcommand{\xupnpp}{X^{n+p}}
\newcommand{\yupnpp}{Y^{n+p}}
\newcommand{\xupnppp}{X^{n+p+1}}
\newcommand{\xnpupinf}{X_{n+1}^{\infty}}
\newcommand{\xmnmm}{X^{-1}_{-(n-1)}}
\newcommand{\ymnmm}{Y^{-1}_{-(n-1)}}
\newcommand{\xo}{X^0}
\newcommand{\yo}{Y^0}
\newcommand{\zo}{Z^0}
\newcommand{\xm}{X^{-}}
\newcommand{\ym}{Y^{-}}
\newcommand{\zum}{Z^{-}}
\newcommand{\xp}{X^{+}}
\newcommand{\yp}{Y^{+}}
\newcommand{\zp}{Z^{+}}
\newcommand{\comdots}{,\cdots,}
\newcommand{\mn}{m_n}
\newcommand{\xn}{x_n}
\newcommand{\xnp}{x_{n+1}}
\newcommand{\yn}{y_n}
\newcommand{\ynp}{y_{n+1}}
\newcommand{\ns}{ance}
\newcommand{\ect}{empirical causality testing }
\newcommand{\ectp}{empirical causality testing procedure }
\newcommand{\evals}{eigenvalues }
\newcommand{\dt}{discrete time }
\newcommand{\ct}{continuous time }
\newcommand{\biva}{bivariate }
\newcommand{\casy}{causality }
\newcommand{\anly}{analysis }
\newcommand{\catch}{catchment }
\newcommand{\cata}{catchment area }
\newcommand{\cind}{conditional independence }
\newcommand{\cent}{conditional entropy }
\newcommand{\chr}{chain rule }
\newcommand{\cdl}{conditional }
\newcommand{\dcri}{{\bf DCRI} }
\newcommand{\dri}{{\bf DRI} }
\newcommand{\dpi}{{\bf DPI} }
\newcommand{\cdip}{conditional independence }
\newcommand{\Cdip}{Conditional independence }
\newcommand{\dngc}{does not Granger cause }
\newcommand{\dnsc}{does not Sims cause }
\newcommand{\dn}{does not }
\newcommand{\conl}{controllable }
\newcommand{\conly}{controllability }
\newcommand{\eval}{eigenvalue }
\newcommand{\gc}{Granger causality }
\newcommand{\grc}{Granger cause }
\newcommand{\gct}{Granger causality testing }
\newcommand{\gcs}{Granger causes }
\newcommand{\gctp}{Granger causality testing procedure }
\newcommand{\inft}{information theory }
\newcommand{\mult}{multivariate }
\newcommand{\nst}{nonstationary }
\newcommand{\nsty}{nonstationarity }
\newcommand{\obint}{observation interval }
\newcommand{\mui}{mutual information }
\newcommand{\Mui}{Mutual Information }
\newcommand{\nl}{nonlinear }
\newcommand{\nons}{nonstationary }
\newcommand{\pnd}{purely non-deterministic }
\newcommand{\obse}{observable }
\newcommand{\obsy}{observability }
\newcommand{\lamw}{\lambda_1}
\newcommand{\lamt}{\lambda_2}
\newcommand{\procs}{processes }
\newcommand{\proc}{process }
\newcommand{\stty}{stationarity }
\newcommand{\sint}{sampling interval }
\newcommand{\rivl}{river level }
\newcommand{\pev}{prediction error variance }
\newcommand{\pca}{principal component analysis }
\newcommand{\pc}{principal component }
\newcommand{\pdf}{probability density function }
\newcommand{\pdfs}{probability density functions }
\newcommand{\sym}{{\bf SYM} }
\newcommand{\scas}{Sims causality }
\newcommand{\wod}{Wold decomposition }
\newcommand{\varie}{variance }
\section{\bf Introduction}
Following the operational development of the notion
of causality by \cc{GRAN69} and \cc{SIMS72},
Granger causality (henceforth denoted GC)
analysis has become an important
part of time series and econometric testing and inference,
e.g. \cc{HAML94}. It has also been applied in the biosciences,
\cc{KAMI91}, \cc{BERN99}, \cc{DING00b};
climatology
(global warming) \cc{SUNA96}, \cc{STER97}, \cc{TRIA05};
and most recently
functional magnetic resonance imaging (fMRI).
Since its introduction into fMRI
\cc{GOEB05}, \cc{OZAK05} it has become the subject of
an intense debate: e.g. see \cc{ROEB11} and
associated commentary
on that paper.
There are two main issues in that debate, both of which
occur more widely in dynamic networks.
Firstly,
the impact of downsampling on GC.
In the fMRI neuro-imaging application
causal processes
may operate on a time-scale of order
tens of milliseconds
whereas the recorded
signals are only available on a one-second time-scale.
So it is natural to wonder if GC analysis on a slow
time-scale can reveal dynamics on a much faster time-scale.
Secondly, the impact of filtering on GC
due to the
hemodynamic response function
which relates the neural activity to the recorded fMRI
signal.
Since intuitively GC will be sensitive to
time delay,
the variability of the hemodynamic response
function, particularly spatially varying
time to onset and time to peak (confusingly
called delay in the fMRI literature)
has been suggested as a potential
source of problems \cc{DESP10a},\cc{HANW12}.
An important advance in GC theory and tools was made by \cc{GEWK82}
who provided measures of the strength of causality (henceforth
called GEM for Geweke causality measure)
including frequency domain decompositions of them.
Subsequently
it was pointed out that the GEMs are measures of mutual information
\cc{WAXR87}.
The GEMs were extended to
conditional causality in \cc{GEWK84}.
However GEMs have not found as wide application
as they should have, partly because of some technical difficulties
in calculating them discussed further below. But GEMs
(and their frequency domain versions) are
precisely the tool needed to pursue both the
GC downsampling and filtering questions.
In the econometric literature,
it was appreciated early that downsampling, especially
in the presence of aggregation could cause problems.
This was implicit in work of \cc{SIMS71};
it was also mentioned in
work of \cc{CREI87}, who gave an example
of contradictory causal analysis based
on monthly versus quarterly data, and
discussed in \cc{MARC85}. But precise general conditions
under which problems do and do not arise have
never been given. We do so below.
Some of the above econometric discussion is framed in terms
of sampling of continuous time models \cc{SIMS71},
\cc{MARC85},\cc{CREI87}. And authors such as \cc{SIMS71}
have suggested that models are best formulated initially
in continuous time. While this is a view the author
has long shared we deal with only discrete time models
here. To cast our development in terms of continuous
time models would require a considerable development
of its own without changing our basic message.
The issue at stake, in its simplest form, is the following.
Suppose that a pair of (possibly vector) processes possess a
unidirectional GC relation but suppose
measurements are only available at a slower time-scale
on filtered series.
Then two questions arise.
The first, which we call the \ul{forward} question, is this:
Is the unidirectional Granger causal relation preserved?
The second, which we call the \ul{reverse} question, is
harder. Suppose the downsampled filtered
series exhibit a unidirectional
GC relation; does that mean the underlying
unfiltered
faster
time-scale processes do?
The latter question is the more important and so far
has received no theoretical attention.
In order to resolve these issues we need to develop
some theory and some computational/modeling tools.
Firstly, to compute GEMs
one needs to be able to find submodels
from a larger
(i.e. one having more time series) model.
To compute the GEMs between time series $\xt,\yt$,
\cc{GEWK82},\cc{GEWK84} attempted to avoid this step by fitting
models separately to $\xt$ and to $\yt$ and
then also fitting a joint model to $\xt,\yt$.
Unfortunately this can generate negative values for some of the
frequency domain GEMs \cc{DING06}.
Properly computing submodels will
resolve this problem and previous work
has not accomplished this
(we discuss the attempts in \cc{DUFT10} and \cc{DING06} below).
Secondly one needs to be able to compute how models
transform when downsampled.
This has only been done in special cases
\cc{PANW83}
or by methods
that are not computationally realistic.
We provide
computationally reliable, state space based methods
for doing this here.
Thirdly we need to study the effect of filtering on GEMs.
And then using these tools
one can compute filtered downsampled GEMs and hence
study the effect of sampling and filtering on GEMs.
To sum up we can say that
previous discussions including those above as well
as \cc{GEWK78},\cc{TELS67},\cc{SIMS71},\cc{MARC85}
fail to provide general algorithms for finding
submodels or models induced
by downsampling.
Indeed both these problems have remained
open problems in multivariate time series in their own right
for several decades and we resolve them here.
Further there does not seem to have been any
theoretical discussion
of the effect of filtering on GEMs and we
resolve that here also. To do that it turns
out that state space models provide the proper framework.
Throughout this work we deal with the dynamic interaction between
two vector time series. It is well known
that if there is
a third vector time series involved in the dynamics but not accounted
for then spurious causality can occur
for reasons that have nothing to do with downsampling.
This situation has been
discussed by \cc{HSIAO82}; see also \cc{GEWK84}. Other causes of spurious
causality such as observation noise are also not discussed.
Of course the impact of downsampling in the presence of a third (vector)
variable is also of interest but will be pursued elsewhere.
Finally our whole discussion is carried out in
the framework of linear time series models.
It is of great interest to pursue nonlinear versions
of these issues but that will be a major task.
The remainder of the paper is organized as follows.
In section 2 we review and modify some
state space results important for
system identification or model
fitting and needed in the following sections.
In section 3 we develop \stsp
methods for computing submodels of innovations state space models.
In section 4 we develop
methods for transforming state space models under downsampling.
In section 5 we review
GC and
GEMs and extend them to a state space setting.
In section 6 we study the effect of filtering on GC via frequency
domain GEMs.
In section 7 we give theory
to explain when causality is preserved under
downsampling.
In section 8
we discuss the reverse problem
showing how spurious causality can be induced by
downsampling.
Conclusions are in section 9. There are three appendices.
\subsection{\bf Acronyms and Notation}
GC is Granger causality or Granger causes.
We use the GC designator alone where we make statements of
interest in both weak and strong cases.
dn-gc is does not Granger cause;
GEM is Geweke causality measure;
SS is state space or state space model;
ISS is innovations state space model;
VAR is vector autoregression;
VARMA is vector autoregressive moving average process;
wp1 is with probability 1.
$\xab$ denotes the values $x_a,x_{a+1},\cdots,x_b$; so $X_a^a\equiv x_a$.
For stationary processes we have $a=-\infty$.
$\zm=L$ is the lag or backshift operator;
LHS denotes left hand side etc.
If $M,N$ are positive semi-definite matrices
then $M\geq N$ means $M-N$ is positive semi-definite.
A square matrix is stable if all its \evals have modulus $<1$.
\section{\bf State Space}
\sco
The computational methods
we develop rely on state space techniques
and spectral factorization.
There is an intimate relation between
the steady state Kalman filter and
spectral factorization
which is fundamental to
our computational procedures.
So
in this section we review
and modify
some basic results in
state space theory, Kalman filtering
and spectral factorization.
In the sequel we deal with
two vector time series, which we collect together as,
$\zt=(\xt^T, \yt^T)^T$.
\subsection{\bf State Space Models}
We consider a general constant parameter SS model,
\begin{equation}
\xitp=A\xit+w_t ,\; \zt=C\xit+v_t \lab{ss}
\end{equation}
with positive semi-definite noise covariance,
$var\aob{w_t}{v_t}=\aobcod{Q}{S^T}{S}{R}$.
We refer to this as a SS model with parameters
(A,C,[Q,R,S]).
It is common with SS models to take $S=0$, but for
equivalence between the class of VARMA models
and the class of state space models it is necessary
to allow $S\neq 0$.
Now by matrix partitioning,
$|\aobcod{Q}{S^T}{S}{R}|=|R||\Qs|,\Qs=Q-SR\upm S^T$.
So we introduce:\\
\noi{\bf Noise Condition N}:
$R$ is positive definite.\\
Then $\Qs$ is positive semi-definite.
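As a sanity check (not part of the development), the partitioned determinant identity and the consequence of condition N can be verified numerically; the following Python sketch, with numpy assumed, uses a random positive definite joint covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
nw, nv = 3, 2
M = rng.standard_normal((nw + nv, nw + nv))
J = M @ M.T + 0.1 * np.eye(nw + nv)    # joint noise covariance, positive definite
Q = J[:nw, :nw]                        # var(w_t)
S = J[:nw, nw:]                        # E(w_t v_t^T)
R = J[nw:, nw:]                        # var(v_t), positive definite (condition N)

Qs = Q - S @ np.linalg.solve(R, S.T)   # Schur complement Q_s = Q - S R^{-1} S^T
# determinant identity |J| = |R| |Q_s|
assert np.allclose(np.linalg.det(J), np.linalg.det(R) * np.linalg.det(Qs))
# Q_s inherits positive (semi-)definiteness when R is positive definite
assert np.linalg.eigvalsh(Qs).min() > -1e-10
```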
\subsection{\bf Steady State Kalman Filter,
Innovations State Space (ISS) Models and
the Discrete Algebraic Riccati Equation (DARE)}
We now recall the Kalman filter
for mean square estimation of the unobserved state sequence
$\xit$ from the observed time series $\zt$. It is given by
\cc{KAIL00}(Theorem 9.2.1),
\EQ
\hxitp=A\hxit+K_t\est,\;\;
\est=\zt-C\hxit,\mbox{ or } \zt=C\hxit+\est
\EN
where $\est$ is the
innovations sequence of variance $\Vt=R+C\Pt C^T$ and
$K_t=(A\Pt C^T+S)\Vtm$ is the Kalman gain sequence
and $\Pt$ is the state error variance matrix generated
from the Riccati equation,
$\Ptp=A\Pt A^T+Q-\Kt\Vt \Kt^T$.
The Kalman filter is a time-varying filter
but we are interested in its steady state.
If there is a steady state
i.e. $\Pt\ra P$ as $t\rai$, then
the limiting state error
variance matrix $P$ will obey the so-called
discrete algebraic Riccati equation ({\bf DARE})
\begin{equation}
P=APA^T+Q-KVK^T, \lab{dare}
\end{equation}
where $V=R+CPC^T$ and $K=(APC^T+S)V\upm$ is the corresponding
steady state Kalman gain. With some clever algebra
\cc{KAIL00}(section 9.5.1) the DARE can be rewritten
(the Riccati equation can be similarly rewritten),
\begin{eqnarray*}
P=\As P\As^T+\Qs-\Ks V\Ks^T
\end{eqnarray*}
where $\As=A-SR\upm C$ and $\Ks=\As PC^TV\upm$.
We now introduce two assumptions.\\
\noi{\bf Stabilizability Condition St}:
$\As,\Qs\uph$ is stabilizable (see Appendix A)\\
\noi{\bf Detectability Condition De}:
$\As,C$ is detectable.
\noi In Appendix A
it is shown this is equivalent to $A,C$ being detectable.
And also it holds automatically if $A$ is stable.\\
The resulting steady state Kalman filter can be written as,
\begin{equation}
\hxitp=A\hxit+K\ept,\;\; \zt=C\hxit+\ept \lab{iss}
\end{equation}
where $\ept$ is the steady state innovation
process and has variance
matrix $V$ and Kalman gain $K$.
This steady state filter provides a new state
space representation
of the data sequence. We refer to it as an
innovations
state space (ISS) model with parameters $(A,C,K,V)$.
We summarize this in,
\noi{\bf Result I}.
Given the SS model (\ref{ss}) with parameters $(A,C,[Q,R,S])$,
then provided N,St,De hold:
(a)
The corresponding ISS model (\ref{iss})
with parameters $(A,C,K,V)$
can be found by solving the DARE
(\ref{dare}) which has a unique
positive definite solution $P$.
(b)
$V$ is positive definite,
$(A,C)$ is detectable and
$A-KC$ is stable so that $(A,K)$ is controllable.
{\it Proof}. See appendix A.\\
{\it Remarks}.
(i) Henceforth an ISS model
with parameters $(A,C,K,V)$ will be required to have
$V$ positive definite, $(A,C)$ detectable and
$(A,K)$ controllable so that $A-KC$ is stable.
(ii) It is well known that any VARMA
model can be represented as an
ISS model and vice versa
\cc{SOLO86},\cc{HDE88}.
(iii) Note that the ISS model with parameters $(A,C,K,V)$ can also be written
as the SS model with parameters $(A,C,[KVK^T,V,KV])$.
(iv) The DARE is a quadratic matrix equation but can be
computed using the
(numerically reliable) DARE command in matlab as follows.
Compute:
$[P,L_0,G]=DARE(A^T,C^T,Q,R,S,I)$
and then,
$V=R+CPC^T,K=G^T$.
(v) Note that stationarity is not required for this result.
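For readers not using matlab, the recipe of remark (iv) can be sketched in Python; this sketch assumes scipy, whose `solve_discrete_are` with transposed arguments plays the role of the matlab DARE call above:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(1)
nx, nz = 4, 2
A = rng.standard_normal((nx, nx))
A /= 1.1 * np.max(np.abs(np.linalg.eigvals(A)))   # make A stable
C = rng.standard_normal((nz, nx))
M = rng.standard_normal((nx + nz, nx + nz))
J = M @ M.T + 0.1 * np.eye(nx + nz)               # joint noise covariance, positive definite
Q, S, R = J[:nx, :nx], J[:nx, nx:], J[nx:, nx:]

# filter DARE via the transposed (control) DARE, cf. [P,L_0,G]=DARE(A^T,C^T,Q,R,S,I)
P = solve_discrete_are(A.T, C.T, Q, R, s=S)
V = R + C @ P @ C.T                               # innovations variance
K = (A @ P @ C.T + S) @ np.linalg.inv(V)          # steady state Kalman gain

assert np.allclose(P, A @ P @ A.T + Q - K @ V @ K.T)     # P solves the DARE
assert np.max(np.abs(np.linalg.eigvals(A - K @ C))) < 1  # A - KC stable, Result I(b)
```

The assertions restate Result I: the stabilizing DARE solution delivers the ISS parameters $(A,C,K,V)$ with $A-KC$ stable.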
\subsection{\bf Stationarity and Spectral Factorization}
Given an ISS model with parameters $(A,C,B,\Sg_\ep)$,
we now introduce,\\
\noi{\bf Condition Ev}:
$A$ has all \evals with modulus $<1$ i.e. $A$ is a stability matrix.\\
With this assumption we can obtain an infinite vector moving average (VMA)
representation, an infinite vector autoregressive (VAR)
representation and a spectral factorization.
The following result is based on \cc{KAIL00}[Theorem 8.3.2] and surrounding
discussion.\\
\noi{\bf Result II}. For the ISS
model $(A,C,B,\Sg_\ep)$ obeying condition Ev we have,
\noi(a) Infinite VMA or Wold decomposition,
\begin{equation}
\zt=H(L)\ept=(C\liima B+I)\ept =(C\imal B L+I)\ept \lab{issb}
\end{equation}
(b)
Infinite VAR representation,
\begin{equation}
\ept=G(L)\zt=[I-C(L\upm I -A+KC)\upm K]\zt \lab{ivar}
\end{equation}
(c) Spectral factorization.
Put $L=\exp(-j\lambda)$ then, $\zt$
has positive definite spectrum with spectral factorization as follows,
\begin{equation}
f_Z(\lambda)=[C\liima , I]
\aobcodL{Q}{S^T}{S}{R} \abL{(L I-A^T)\upm C^T}{I} \lab{spf}
=H(L)\sgep H^T(L\upm)
\end{equation}
(d) $H(L)$ is minimum phase i.e. its
inverse exists and is causal and stable.\\
\noi{\it Proof}.
\ul{(a)}. Just write (\ref{iss}) in operator form.
The series is convergent wp1 and in mean square since $A$ is stable.
\ul{(b)}. Rewrite (\ref{iss}) as,
$\hxitp=(A-KC)\hxit+K\zt,\ept=\zt-C\hxit$. Then write this in operator
form. The series is convergent wp1 and in mean square since $A-KC$ is stable
and $\zt$ is stationary.
\ul{(c)}.
Follows from standard formulae
for spectra
of filtered stationary time series applied to (a).
\ul{(d)}.
From (a),(b) $G(L)=H\upm(L)$
and by (b) $G(L)$ is causal and stable and the result follows.\\
\noi{\it Remarks}.
(i)
For further discussion of minimum phase
filters see \cc{GREE88},\cc{SOLO86}.
(ii) Result II is a special case of a general result
that given a full rank multivariate spectrum $f_Z(\lambda)$ there exists
a unique causal stable minimum phase spectral factor $H(L)$
with $H(0)=I$
and
positive definite innovations variance matrix
$\sgep$ such that (\ref{spf}) holds \cc{HDE88},\cc{GREE88}.
In general $det H(L)$ may have
some roots on the unit circle \cc{HANP88},\cc{GREE88}
but the assumptions in result II rule this case out.
Such roots mean that some linear combinations of $\zt$ can be perfectly
predicted from the past \cc{HANP88},\cc{GREE88} something that is not
realistic in the fMRI application.
(iii) Result II is also crucial from a system identification
or model fitting point of view. From that point of view all we can
know (from second order statistics) is the spectrum and so
if, naturally, we want a unique model,
the only model we can obtain is the causal stable minimum phase model
i.e. the ISS model. The standard approach to SS model fitting
is the so-called state space subspace method \cc{DEIS95},\cc{BAUR05}
and indeed it delivers an ISS model. The alternative approach
of fitting a VARMA model \cc{HDE88},\cc{LUTK93} is equivalent to getting an ISS model.
(iv) We need result I however since when we form submodels
we do not immediately get an ISS model, rather we must compute it.
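Result II can also be exercised numerically at a single frequency. The Python sketch below (numpy assumed; the ISS parameters are random, with $A$ and $A-KC$ forced stable) evaluates $H$, its inverse $I-C(L^{-1}I-A+KC)^{-1}K$ given by the matrix inversion lemma, and the factored spectrum at $L^{-1}=e^{j\omega}$:

```python
import numpy as np

rng = np.random.default_rng(2)
nx, nz = 3, 2
A = rng.standard_normal((nx, nx))
A /= 1.1 * np.max(np.abs(np.linalg.eigvals(A)))          # condition Ev
C = rng.standard_normal((nz, nx))
K = rng.standard_normal((nx, nz))
while np.max(np.abs(np.linalg.eigvals(A - K @ C))) >= 1:
    K *= 0.5                                             # enforce A - KC stable
Mv = rng.standard_normal((nz, nz))
V = Mv @ Mv.T + 0.1 * np.eye(nz)                         # innovations variance

z = np.exp(1j * 0.7)                                     # L^{-1} on the unit circle
H = np.eye(nz) + C @ np.linalg.solve(z * np.eye(nx) - A, K)          # VMA, part (a)
G = np.eye(nz) - C @ np.linalg.solve(z * np.eye(nx) - A + K @ C, K)  # VAR, part (b)
assert np.allclose(G @ H, np.eye(nz))                    # G = H^{-1}, part (d)

f = H @ V @ H.conj().T                                   # spectral factor, part (c)
assert np.allclose(f, f.conj().T)                        # spectrum is Hermitian
assert np.linalg.eigvalsh(f).min() > 0                   # and positive definite
```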
\section{\bf Submodels}
\sco
Our computation of causality measures requires
that we compute induced submodels.
In this section we show how to obtain an ISS
submodel from the ISS joint model.
Now we partition $\zt=(\xt^T,\yt^T)^T$ into subvector signals of interest
and partition the \stsp model correspondingly,
$C=\aob{\cx}{\cy}$ and $B=(\bx,\by)$.
We first read out a SS submodel for $\xt$ from the ISS model
for $\zt$. We have simply
$\xitp=A\xit+\wt,\;\; \xt=\cx\xit+\epxt$
where,
$\wt=B\epxoyt=\bx\epxt+\by\epyt$.
We need to calculate the covariance matrix,
$var\aob{\wt}{\epxt}=\abcd{Q}{\Sb^T}{\Sb}{\Rb}$.
We find, $\Rb=\sgepx$,
$Q=var(\wt)=B\sgep B^T$ and,
$\Sb=E(\wt\epxt^T)=B\aob{\sgepx}{\sgepyx}=\bo$.
This leads to
\noi{\bf Theorem I}.
Given the joint ISS model (\ref{iss}) or (\ref{issb}) for
$\zt$, then under condition Ev,
the corresponding ISS submodel for $\xt$ namely
$(A,\cx,\kcx,\omx)$
(the bracket notation $\kcx$ is used to avoid confusion
with e.g. $\cx,\sgepx$ which are submatrices)
can be found
by solving the DARE (\ref{dare})
with $[Q,\Rb,\Sb]=[B\sgep B^T,\sgepx,B_o]$.
\noi{\it Proof}.
Firstly we note by partitioning $|\sgep|=|\sgepx||\sgepygx|$
where $\sgepygx=\sgepy-\sgepyx\sgepx\upm\sgepxy$
so that $\sgepx$ and $\sgepygx$ are both positive definite.
Now we need only check conditions N,St,De of result I.
We need to show,
$\Rb=\sgepx$ is positive definite, $(A,\cx)$ is detectable and
$(A-\Sb\Rb\upm \cx,(B\sgep B^T-\Sb\Rb\upm\Sb^T)\uph)$ is
stabilizable; in fact we show it is controllable.
The first is already established.
The second follows trivially since $A$ is stable.
We use the PBH test (see Appendix A) to check the third.
Suppose controllability fails, then by the PBH test, there
exists $q\neq 0$ with
$\lambda q^T=q^T(A-\Sb\Rb\upm \cx)$
and
$0=q^T(B\sgep B^T-\Sb\Rb\upm\Sb^T)\uph
\Ra 0=q^T(B\sgep B^T-\Sb\Rb\upm\Sb^T)
=q^T(B\sgep B^T-B\aob{\sgepx}{\sgepyx}\sgepx\upm[\sgepx,\sgepxy]B^T)
=q^T(\bx,\by)\aobcod{0}{0}{0}{\sgepygx}\aob{\bx^T}{\by^T}\\
=q^T\by\sgepygx\by^T
\Ra 0=q^T\by\sgepygx\by^Tq
\Ra \pa\by^Tq\pa=0\Ra q^T\by=0$
since $\sgepygx$ is positive definite.
But then,
$\lambda q^T=q^T(A-(\bx,\by)\aob{\sgepx}{\sgepyx}\sgepx\upm\cx)
=q^T(A-\bx\cx-\by\sgepyx\sgepx\upm\cx)
=q^T(A-\bx\cx)$.
Thus $(A-\bx\cx,\by)$ is not controllable.
But this is a contradiction since we can find a matrix,
namely $\cy$, so that $A-\bx\cx-\by\cy=A-BC$ is stable.
\noi{\it Remarks}.
(i) For implementation in matlab
positive definiteness in constructing $Q$ can be an issue.
A simple resolution is to use a Cholesky factorization of
$\sgep=L_\ep L_\ep^T$ and form $B_\ep=BL_\ep$
and then form $Q=B_\ep B_\ep^T$.
(ii) The $\pcx$ matrix from DARE,\\
$[\pcx,L_0,G]=DARE(A^T,C_X^T,B\sgep B^T,\sgepx,B_o,I)$
obeys
$\pcx=A\pcx A^T+B\sgep B^T-\kcx\omx\kcx^T$ and then
$\kcx=(A\pcx\cx^T+B_o)\omx\upm,\omx=\sgepx+\cx\pcx\cx^T$.
(iii) \cc{DUFT10} discuss a method for obtaining submodels
but it is flawed.
Firstly it requires
the computation of the inverse of the
VAR operator.
While this might be feasible (analytically) on a toy example,
there is no known numerically reliable
way to do this in general
(computation of determinants is notoriously ill-conditioned).
Secondly it requires the solution of simultaneous
quadratic autocovariance
equations to determine VMA parameters for which no algorithm
is given. In fact these are precisely the equations required
for a spectral factorization of a VMA process.
There do exist reliable algorithms for doing this but given
the flaw already revealed
we need not discuss this approach any further.\\
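Theorem I translates into a few lines of Python (scipy assumed; the variable names are ours). As a check we verify that the submodel reproduces the $X$ block of the joint spectrum, which is the content of Theorem II below:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(3)
nst, n_x, n_y = 4, 2, 1                       # state, x and y dimensions
n_z = n_x + n_y
A = rng.standard_normal((nst, nst))
A /= 1.1 * np.max(np.abs(np.linalg.eigvals(A)))
C = rng.standard_normal((n_z, nst))
B = rng.standard_normal((nst, n_z))
while np.max(np.abs(np.linalg.eigvals(A - B @ C))) >= 1:
    B *= 0.5                                  # valid ISS model: A - BC stable
Ms = rng.standard_normal((n_z, n_z))
Sig = Ms @ Ms.T + 0.1 * np.eye(n_z)           # joint innovations variance

Cx = C[:n_x, :]
Q = B @ Sig @ B.T                             # var(w_t)
Rbar = Sig[:n_x, :n_x]                        # x-block of Sig
Sbar = B @ Sig[:, :n_x]                       # E(w_t eps_{X,t}^T)

P = solve_discrete_are(A.T, Cx.T, Q, Rbar, s=Sbar)
Om = Rbar + Cx @ P @ Cx.T                     # submodel innovations variance
Kx = (A @ P @ Cx.T + Sbar) @ np.linalg.inv(Om)

# the submodel spectrum equals the X block of the joint spectrum
z = np.exp(1j * 0.3)
H = np.eye(n_z) + C @ np.linalg.solve(z * np.eye(nst) - A, B)
fZ = H @ Sig @ H.conj().T
hX = np.eye(n_x) + Cx @ np.linalg.solve(z * np.eye(nst) - A, Kx)
fX = hX @ Om @ hX.conj().T
assert np.allclose(fX, fZ[:n_x, :n_x])
```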
Next we state an important corollary:\\
\noi{\bf Corollary I}.
Any submodel is in general a VARMA
model, not a VAR. To put it another way, the class of VARMA models
is closed under the forming of submodels whereas
the class of VAR models is not.
This means that VAR models are not generic and is a strong
argument against their use. Any vector time series can be
regarded as a submodel of a larger dimensional time series
and thus must in general obey a VARMA model. This result
(which is well known in time series folklore)
is significant for econometrics where VAR models are in widespread use.
For the next section we need,\\
\noi{\bf Theorem II}.
For the joint ISS model (\ref{iss}) or (\ref{issb}) for $\zt$
with conditions St,De holding and
with induced submodel for $\xt$ given in Theorem I,
we have,
\EBN
\fxlam&=&h_X(\elamm)\omx h_X^T(\elam) \lab{fxlam}\\
h_X(L)&=&I+\cx\liima \kcx =I+L \cx\imal\kcx \nn\\
ln|\omx|&=&\int_{-\pi}^{\pi}ln|\fxlam|\dlamtpi\nn
\EEN
\section{\bf Downsampling}
\sco
There are two approaches to the problem of finding
the model obeyed by a downsampled process;
frequency domain and time domain.
While the
general formula
for the spectrum of a sampled process has long been known,
it is not straightforward to use
and has not yielded any general computational
approach to finding submodels of parameterized spectra.
Otherwise the most complete (time domain) work seems to be that of
\cc{PANW83} who only treat the first and second
order scalar cases. There is work in the engineering
literature for systems with observed inputs but that
is also limited and in any case not helpful here.
We follow a SS route.
We begin with the ISS model (\ref{iss}).
Suppose we downsample the observed signal $\zt$ with
sampling multiple $m$.
Let $t$ denote the fine time scale and $k$ the
coarse time scale so
$t=mk$. The downsampled signal is $\ozk=z_{mk}$.
To develop the SS model for $\ozk$ we iterate
the SS model above to
obtain
\EQ
\xi_{t+l}=A^l\xit+\Sg_1^l A^{l-i}B\ep_{t+i-1}
\EN
Now set $t=mk,l=m$ and denote sampled signals,
$\oxik=\xi_{mk},\ozk=z_{mk},\oepk=\ep_{mk}$.
Then we find,
\EQ
\oxikp=A^m\oxik+\wk\;\;,
\ozk=C\oxik+\oepk
\EN
where $\wk=\Sg_1^m A^{m-i}B\ep_{km+i-1}$.
We now use result I to find the ISS model
corresponding to this
SS model.
We have first to calculate the model covariances,
\EBN
E(\oepk\oepk^T)&=&\sgep=R \nn\\
E(\wk\oepk^T)&=&A^{m-1}B\sgep=\sm \lab{isssa}\\
E(\wk\wk^T)&=&Q_m \nn\\
Q_m&=&\Sg_1^m A^{m-i}B\sgep B^T(A^T)^{m-i}
=\Sg_0^{m-1}A^r B\sgep B^T (A^T)^r \lab{isssq}\\
\Ra Q_m&=&AQ_{m-1}A^T+B\sgep B^T, m\geq 2, Q_1=B\sgep B^T\nn
\EEN
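The recursion for $Q_m$ is readily confirmed against the direct sum; a minimal Python check (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
n, nz, m = 3, 2, 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, nz))
Ms = rng.standard_normal((nz, nz))
Sig = Ms @ Ms.T + 0.1 * np.eye(nz)

# direct sum: Q_m = sum_{r=0}^{m-1} A^r B Sig B^T (A^T)^r
Qm_direct = sum(np.linalg.matrix_power(A, r) @ B @ Sig @ B.T
                @ np.linalg.matrix_power(A, r).T for r in range(m))
# recursion: Q_1 = B Sig B^T, Q_m = A Q_{m-1} A^T + B Sig B^T
Qm = B @ Sig @ B.T
for _ in range(m - 1):
    Qm = A @ Qm @ A.T + B @ Sig @ B.T
assert np.allclose(Qm, Qm_direct)
```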
We now obtain,
\noi{\bf Theorem III}.
Given the ISS model (\ref{iss}), then under condition Ev,
for $m>1$, the ISS
model for the downsampled process $\ozk=z_{mk}$ is
$(A^m,C,\km,\vm)$ obtained by solving the DARE
with SS model $(A^m,C,[Q_m,R,\sm]$) where $Q_m$
is given in (\ref{isssq})
and $\sm$ is given in (\ref{isssa}).
\noi{\it Proof}. Using result I we need to show the following.
$R$ is positive definite, $(A^m,C)$ is detectable
and $(A^m-\sm R\upm C,(Q_m-\sm R\upm \sm^T)\uph)$
is stabilizable; in fact we show controllability.
The first holds trivially; the second also since $A$
is stable and thus so is $A^m$. For the third
we use the PBH test.
Suppose \conly fails. Then there is a left \evec $q$
(possibly complex)
with $\lambda q^T=q^T(A^m-\sm R\upm C)=q^T A^{m-1}(A-BC)$
and
$q^T(Q_m-\sm R\upm \sm^T)
=0=q^T \Sg_0^{m-2}A^r B\sgep B^T(A^T)^r
\Ra q^H \Sg_0^{m-2}A^r B\sgep B^T(A^T)^rq=0$.
Since $\sgep$ is positive definite this delivers
$\Sg_0^{m-2}\pa B^T(A^T)^rq\pa^2=0\Ra B^T(A^T)^rq=0$
for $r=0,\cdots,m-2$.
Using this, we now find,
$\lambda q^T=q^T A^{m-1}(A-BC)
=q^T A^{m-2}(A-BC)^2+q^T A^{m-2}BC(A-BC)
=q^T A^{m-2}(A-BC)^2$.
Iterating this yields, $\lambda q^T=q^T(A-BC)^m$.
Thus if $\lambda_m$ is an $m$-th root of $\lambda$
then $\lambda_m q^T=q^T(A-BC)$. Since also $q^TB=0$
we thus conclude $(A-BC,B)$ is not controllable.
But this is a contradiction since,
$(A-BC)+BC=A$ is stable.
\noi{\it Remarks}.
(i) In matlab we would compute,
$[\Pm,L_0,G_m]=\mbox{DARE}((A^m)^T,C^T,Q_m,\sgep,\sm,I)$,
yielding $\vm=\sgep+C\Pm C^T$ and $\km=G_m^T$.
(ii) More specifically $\Pm$ ($m>1$) obeys,
$\Pm=\Abm \Pm \Abm^T+\Qbm-\Kbm\vm\Kbm^T$,\\
where $\Abm=A^m-\sm\sgep\upm C=A^{m-1}(A-BC)$;
$\Qbm=\Qm-\sm\sgep\upm\sm^T=\Qmm;
\vm=\sgep+C\Pm C^T;\km=\Abm\Pm C^T(\vm)\upm$.
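Theorem III can be validated by comparing autocovariances: the downsampled ISS model must reproduce $\gamzkm$, the fine-scale autocovariance at lags that are multiples of $m$. A Python sketch (scipy assumed; the helper `acov` is ours, not from the paper):

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

rng = np.random.default_rng(5)
n, nz, m = 3, 2, 3
A = rng.standard_normal((n, n))
A /= 1.2 * np.max(np.abs(np.linalg.eigvals(A)))   # condition Ev
C = rng.standard_normal((nz, n))
B = rng.standard_normal((n, nz))
while np.max(np.abs(np.linalg.eigvals(A - B @ C))) >= 1:
    B *= 0.5                                      # valid ISS model
Ms = rng.standard_normal((nz, nz))
Sig = Ms @ Ms.T + 0.1 * np.eye(nz)

# SS model of the downsampled process and its ISS form (Theorem III)
Qm = B @ Sig @ B.T
for _ in range(m - 1):
    Qm = A @ Qm @ A.T + B @ Sig @ B.T             # recursion for Q_m
Sm = np.linalg.matrix_power(A, m - 1) @ B @ Sig   # S_m
Am = np.linalg.matrix_power(A, m)
Pm = solve_discrete_are(Am.T, C.T, Qm, Sig, s=Sm)
Vm = Sig + C @ Pm @ C.T
Km = (Am @ Pm @ C.T + Sm) @ np.linalg.inv(Vm)

def acov(A, C, K, V, k):
    """Autocovariance at lag k of the ISS model (A, C, K, V)."""
    Pi = solve_discrete_lyapunov(A, K @ V @ K.T)  # state variance
    if k == 0:
        return C @ Pi @ C.T + V
    return C @ np.linalg.matrix_power(A, k - 1) @ (A @ Pi @ C.T + K @ V)

# coarse autocovariances match the fine ones at lags mk
for k in range(3):
    assert np.allclose(acov(Am, C, Km, Vm, k), acov(A, C, B, Sig, m * k))
```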
\section{\bf Granger Causality}
\sco
In this section we review
and extend some basic results in
Granger causality. In particular
we extend GEMs to the state space setting
and show how to compute them reliably.
Since the development of Granger causality
it has become clear \cc{DUFR98},\cc{DUFT10}
that in general one cannot
address the causality issue with only one step ahead
measures as commonly used; one needs to look at causality
over all forecast horizons.
However one step measures are sufficient when one is considering
only two vector time series, as we are here \cc{DUFR98}[Proposition 2.3].
\subsection{\bf Granger Causality Definitions}
Our definitions
of one step Granger causality
naturally draw on \cc{GRAN63},
\cc{GRAN69},\cc{SIMS72},\cc{SOLO86}
but are also influenced by
\cc{CAIN76b}, who, drawing on work of \cc{PIERH77}, distinguished
between weak and strong GC or what Caines calls weak and strong feedback
free processes.
We introduce:\\
\noi{\bf Condition WSS}:
The vector time series $\xt,\yt$ are jointly second order
stationary.
\noi{\bf Definition: Weak Granger Causality}.\\
Under WSS, we say $\yt$
does not \ul{weakly} \grc (dn-wgc) $\xt$ if, for all $t$
\EQ
var(\Xtp|\xmt,\ymt)=var(\Xtp|\xmt)
\EN
Otherwise we say $\yt$ weakly \gcs (wgc) $\xt$.
Because of the elementary identity,
$var(X|Z)=E[var(X|Z,W)|Z]+var[E(X|Z,W)|Z]
=E[var(X|Z,W)|Z]+E[(E(X|Z,W)-E(X|Z))(E(X|Z,W)-E(X|Z))^T|Z]$
the equality of variance matrices in the definition
also ensures the equality of predictions,
$E(\Xtp|\xmt,\ymt)=E(\Xtp|\xmt)$.
This definition agrees with \cc{GRAN69},\cc{CAIN75}
who do not use the designator weak and \cc{CAIN76b},\cc{SOLO86} who do.
\noi{\bf Definition: Strong Granger Causality}.\\
Under WSS, we say $\yt$
does not \ul{strongly} \grc (dn-sgc) $\xt$ if, for all $t$,
\EQ
var(\Xtp|\xmt,\ymt,\Ytp)=var(\Xtp|\xmt)
\EN
Otherwise we say $\yt$ strongly \gcs (sgc) $\xt$.
Again equality of the variance matrices ensures equality
of predictions, $E(\Xtp|\xmt,\ymt,\Ytp)=E(\Xtp|\xmt)$.
This definition agrees with \cc{CAIN76b} and \cc{SOLO86}.
\noi{\bf Definition: FBI}. Feedback Interconnected.\\
If $\xt$ \gcs $\yt$ and $\yt$ \gcs $\xt$
then we say $\xt,\yt$ are feedback interconnected.\\
\noi{\bf Definition: UGC}. Unidirectionally Granger Causes.\\
If $\xt$ \gcs $ \yt$ but $\yt$ dn-gc $\xt$ we say
$\xt$ unidirectionally Granger causes $ \yt$.
\subsection{\bf Granger Causality for Stationary State Space Models}
Now we partition $\zt=(\xt,\yt)^T$ into subvector signals of interest
and partition the vector MA or \stsp model (\ref{issb})
correspondingly,
\EBN
\abL{{\xt}}{{\yt}}&=&
[I+
\abL{{\cx}}{{\cy}}
\zimam
\anb{{\bx}}{{\by}}
\abL{{\epxt}}{{\epyt}} \lab{issc}\\
&=&\aobcodL{{\hxxz}}{{\hyxz}}{{\hxyz}}{{\hyyz}}
\abL{{\epxt}}{{\epyt}} \lab{issd}\\
\Sg_\ep&=&var\abL{{\epxt}}{{\epyt}}
=\aobcodL{\sgxe}{{\sgyxe}}{{\sgxye}}{{\sgye}} \nn\\
\aobcodL{{\hxxz}}{{\hyxz}}{{\hxyz}}{{\hyyz}}
&=&\aobcodL{\cx\zimam\bx+I}{\cy\zimam\bx}{\cx\zimam\by}{\cy\zimam\by+I}\nn
\EEN
Now we recall results of \cc{CAIN76b}:\\
\noi{\bf Result III}:
If $\zt=\aob{\xt}{\yt}$ obeys a Wold model of the form
$\zt=H_Z(L)\ept$ where $H_Z(L)$ is a one-sided square summable
moving average polynomial with $H_Z(0)=I$
which is partitioned as in (\ref{issd})
then:
(a) $\yt$ dn-wgc $\xt$ iff $\hxyz=0$.
(b) $\yt$ dn-sgc $\xt$ iff $\hxyz=0$ and $\sgxye=0$.
We can now state a new SS version of this result:\\
\noi{\bf Theorem IV}.
For the stationary ISS model (\ref{issc},\ref{issd}):
(a) $\yt$ dn-wgc $\xt$ iff $\cx A^r \by=0, r\geq 0$.
(b) $\yt$ dn-sgc $\xt$ iff $\cx A^r \by=0, r\geq 0$ and $\sgxye=0$.
\noi{\it Proof}. Follows immediately from result III since
$\hxyz=\Sg_0^\infty \cx A^r\by L^{r+1}$.\\
\noi{\it Remarks}.
(i) By the Cayley-Hamilton Theorem we can replace (a)
with: $\cx A^r\by=0,0\leq r\leq n-1,n=dim(\xit)$.
(ii) Collecting these equations together gives
$\cx(\by,A\by,\cdots,A^{n-1}\by)=0$ which says that
the pair $(A,\by)$ is not controllable. Also we have,\\
$\by^T(\cx^T,A^T\cx^T,\cdots,(A^T)^{n-1}\cx^T)=0$ which
says that the pair $(\cx,A)$ is not observable.
Thus the representation of $\hxyz$ is not minimal.
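Theorem IV is easy to check numerically. The sketch below (Python/NumPy; the model matrices are hypothetical, chosen so that the span of $B_Y$ is $A$-invariant and annihilated by $C_X$) verifies $\cx A^r\by=0$ for $0\leq r\leq n-1$, which by the Cayley-Hamilton remark suffices, and shows that a small perturbation of $B_Y$ breaks the condition.

```python
import numpy as np

# Hypothetical 2-state ISS model, partitioned as in Theorem IV.
# A is lower triangular, so the span of B_Y is A-invariant and
# orthogonal to C_X, giving C_X A^r B_Y = 0 for all r >= 0.
A = np.array([[0.5, 0.0],
              [0.2, 0.3]])
C_X = np.array([[1.0, 0.0]])   # x-rows of C
B_Y = np.array([[0.0],
                [1.0]])        # y-columns of B
n = A.shape[0]

# By Cayley-Hamilton it suffices to check r = 0, ..., n-1.
h = [C_X @ np.linalg.matrix_power(A, r) @ B_Y for r in range(n)]
print(h)  # all zero => y_t does not weakly Granger cause x_t

# A perturbed B_Y breaks the condition:
B_Yp = np.array([[0.1],
                 [1.0]])
hp = [C_X @ np.linalg.matrix_power(A, r) @ B_Yp for r in range(n)]
print(hp)  # nonzero => weak Granger causality from y_t to x_t
```

The check only involves the first $n$ Markov parameters of $\hxyz$, so it is cheap even for large state dimensions.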
From a data analysis point of view we need
to embed this result
in a well behaved hypothesis test. Results of \cc{GEWK82}, suitably modified,
allow us to do this.
\subsection{\bf Geweke Causality Measures for SS Models}
Although much of the discussion in \cc{GEWK82}
is in terms of VARs we can show it applies more generally.
We begin as \cc{GEWK82} did with the following definitions.
Firstly,
$\fyx=ln\frac{|\Om_X|}{|\sgxe|}$ is a
measure of the gain in using
the past of $y$ to predict $x$ beyond using just the past of $x$;
similarly introduce $\fxy=ln\frac{|\Om_Y|}{|\sgye|}$. Next define the instantaneous
influence measure, $\fydx=ln\frac{|\sgxe||\sgye|}{|\Sg_\ep|} $.
These are then joined in the
fundamental decomposition \cc{GEWK82},
\begin{equation}
\fxoy=\fyx+\fxy+\fydx \lab{decomp}
\end{equation}
where, $\fxoy=ln\frac{|\Om_X||\Om_Y|}{|\sgep|}$.
\cc{GEWK82} then proceeds to decompose these measures in the frequency domain.
Thus the frequency domain GEM for the dynamic influence
of $\yt$ on $\xt$ is given by \cc{GEWK82},
\begin{equation}
\fyx=\intpp\fyxlam\dlamtpi \mbox{ where }
\fyxlam=ln\frac{|f_X(\lambda)|}{|f_e(\lambda)|} \lab{fyxlam}
\end{equation}
and $f_e(\lambda)$ is assembled
(following \cc{GEWK82}) as follows.
Introduce
$W=\sgyxe\sgxe\upm$
and note that
$\epyt-W\epxt$ is uncorrelated with $\epxt$ and has variance
$\sgepygx=\sgye-\sgyxe\sgxe^{-1}\sgxye$. Then rewrite (\ref{issd}) as
\begin{eqnarray*}
\abL{\xt}{\yt}
&=&
\aobcodL{{\hxxz}}{{\hyxz}}{{\hxyz}}{{\hyyz}}
\aobcodL{I}{W}{0}{I}
\aobcodL{I}{-W}{0}{I}
\abL{\epxt}{\epyt}\\
&=&
\aobcodL{{\hxxz+\hxyz W,}}{{\hyxz+\hyyz W,}}{{\hxyz}}{{\hyyz}}
\abL{\epxt}{\epyt-W\epxt}
\end{eqnarray*}
This corresponds to (3.3) in \cc{GEWK82} and yields the
following expressions corresponding to those in \cc{GEWK82}.
\EBN
\fxz&=&\fez +\hxyz\sgepygx\hxy^T(\Lm) \lab{fxz}\\
\fez&=&\hex(\Lp)\sgxe \hex^T(\Lm) \lab{fez}\\
\hex(\Lp)&=&\hxxz+\hxyz W \nn
\EEN
Using the SS expressions above we rewrite $\hex(L)$
in a form more suited to computation as,
\EBN
\hex(\Lp)&=&[\cx\liima B_o+I] \lab{hex}\\
B_o&=&B_X+B_Y\sgxye^T\sgepx\upm
=B\aob{{\sgepx}}{{\sgyxe}}\sgepx\upm \nn
\EEN
Note that then, using Theorem II,
\EBN
\fyx&=&\intpp ln|\fxlam|\dlamotpi-\intpp ln|\felam|\dlamotpi \nn\\
&=&ln|\Om_X|-ln|\sgxe|
=ln\frac{|\Om_X|}{|\sgxe|} \lab{fyx}
\EEN
Clearly, with $L=exp(-j\lambda)$, $\fxz\geq \fez\Ra \fyx\geq 0$.
Also the instantaneous causality measure is,
\EBN
\fydx&=&ln\frac{|\sgxe||\sgye|}{|\Sg_\ep|}
=ln\frac{|\sgxe||\sgye|}{|\sgepxgy||\sgye|}
=ln\frac{|\sgxe|}{|\sgepxgy|} \lab{fydx}\\
\sgepxgy&=&\sgxe-\sgxye\sgye^{-1}\sgyxe \nn
\EEN
Clearly $\sgxe\geq \sgepxgy$ so that $\fydx\geq 0$.
Introduce the normalised cross covariance based matrix,
$\Gam_{x,y}=\sgye^{-\half}\sgyxe\sgxe\upm\sgxye\sgye^{-\half}$.
Then using a well-known partitioned matrix determinant
formula \cc{NEUD99} we find $\fydx=-ln|I-\Gam_{x,y}|$.
This means that the instantaneous causality measure
depends only on the canonical correlations
(which are the eigenvalues of $\Gam_{x,y}$) between
$\epxt,\epyt$, \cc{SEBR84},\cc{KSIH72}.
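As a numerical check of the determinant identity (with the sign convention that keeps $\fydx\geq 0$, i.e. $\fydx=ln|I-\Gam_{x,y}|^{-1}=-ln|I-\Gam_{x,y}|$), the sketch below uses a hypothetical innovation covariance with scalar blocks and correlation $0.5$ and evaluates the measure both ways.

```python
import numpy as np

# Hypothetical innovation covariance: x and y each scalar, correlation 0.5.
Sxx = np.array([[1.0]]); Syy = np.array([[1.0]])
Sxy = np.array([[0.5]]); Syx = Sxy.T
Sig = np.block([[Sxx, Sxy], [Syx, Syy]])

# Definition of the instantaneous measure F_{X.Y}
F_inst = np.log(np.linalg.det(Sxx) * np.linalg.det(Syy)
                / np.linalg.det(Sig))

# Normalised cross-covariance matrix Gamma_{x,y}
iSyy_h = np.linalg.inv(np.linalg.cholesky(Syy))  # Syy^{-1/2} (scalar case)
Gam = iSyy_h @ Syx @ np.linalg.inv(Sxx) @ Sxy @ iSyy_h.T

F_det = -np.log(np.linalg.det(np.eye(1) - Gam))  # = -ln|I - Gamma|
print(F_inst, F_det)  # both equal ln(4/3) ≈ 0.2877
```

Since the eigenvalues of $\Gam_{x,y}$ lie in $[0,1)$, $-ln|I-\Gam_{x,y}|$ is nonnegative, consistent with $\fydx\geq 0$.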
To implement these formulae,
we need expressions for $\Om_X,\Om_Y,\fxlam$. To get them \cc{GEWK82}
fits separate models to each of $\xt$ and $\yt$. But this causes
positivity problems with $\fyxlam$ \cc{DING06}.
Instead we obtain the required quantities from the correct submodel
obtained in the previous section. We have,\\
{\bf Theorem Va}.
The GEMs can be obtained
from the joint ISS model (\ref{issc})
and the submodel in Theorem II, as follows,
(a) $\fyx=ln\frac{|\Om_X|}{|\sgxe|}$ where
$\omx$ is obtained from the submodel in Theorem II.
(b) The frequency domain GEM $\fyxlam$ (\ref{fyxlam}) can be computed
from (\ref{fez}),(\ref{fxlam}),(\ref{hex}).
\noi And $\Om_Y,\fxy,\fxylam$ can be obtained similarly.
Now pulling all this together with the help of result III we have
an extension of the results of \cc{GEWK82} to the
state space/VARMA case.
\noi{\bf Theorem Vb}:
For the joint ISS model (\ref{issc}),
(a) $\fyx\geq 0,\fydx\geq 0$ and
$\fyx+\fydx=ln\frac{|\Om_X|}{|\sgepxgy|}$.
(b) $\yt$ dn-wgc $\xt$ iff,
$\fxz=\fez$ which holds iff $\fyx=0$ i.e. iff $\Om_X=\sgxe$.
(c) $\yt$ dn-sgc $\xt$ iff $\fxz=\fez$ and $\sgepxgy=\sgxe$
i.e. iff $\fyx=0$ and $\fydx=0$ i.e.
iff $\fyx+\fydx=0$ i.e. iff $\Om_X=\sgepxgy $.
\noi{\it Remarks}.
(i) A very nice nested hypothesis testing
explanation of the decomposition (\ref{decomp})
is given by Parzen in the discussion to \cc{GEWK82}.
(ii) It is straightforward to see that the GEMs
are unaffected by scaling of the variables.
This is a problem for other GC measures \cc{EDIN10}.
(iii) For completeness we state extensions of the inferential
results in \cc{GEWK82} without proof.
Suppose we fit a SS model to data $\zt,t=1,\cdots,T$
using e.g. so-called state space subspace methods \cc{DEIS95},\cc{BAUR05}
or VARMA methods in e.g. \cc{LUTK93}.
Let $\fyxh,\fxyh,\fydxh,\fxoyh$ be the corresponding GEM estimators.
If we denote true values
with a superscript $0$, we find under some regularity conditions:
\begin{eqnarray*}
&&H_0:\fyx^0=0\Ra
T\fyxh\Ra \chisq_{2n\px\py},\mbox{ as }T\ra\infty\\
&&\mbox{and } H_0:\fydx^0=0\Ra T\fydxh\Ra \chisq_{\px\py}
\end{eqnarray*}
So to test for strong GC we put these together,
\EQ
H_0:\fyx^0=0,\fydx^0=0\Ra
T(\fyxh+\fydxh)\Ra\chisq_{(2n+1)\px\py}
\EN
Together with similar asymptotics for $\fxyh,\fxoyh $
we see that the fundamental decomposition (\ref{decomp})
has a sample version involving
a decomposition of a chi-squared into sums of
smaller chi-squared statistics.
(iv) \cc{DING06} attempts also to derive
$\fyx$ without fitting separate models to $\xt,\yt$.
However the proposed procedure to compute $\fxlam$
involves a two sided filter and is thus in error.
The only way to get $\fxlam$ is by spectral factorization
(which produces one-sided or causal filters)
as we have done.
(v) Other kinds of causality measures have emerged
in the literature e.g. \cc{KAMI91} but it is not known whether
they obey the properties in Theorems Va,Vb. However these
properties are crucial to our subsequent analysis.
\section{\bf Effect of Filtering on Granger Causality Measures}
Now the import of the frequency
domain GEM becomes apparent since it allows us
to determine the effect of one-sided
(or causal) filtering on GC.
We need to be clear on the situation envisaged here.
The unfiltered time series are the underlying
series of interest but we only have access to the filtered
time series. So we can only find the GEMs from
the spectrum of the filtered time series. What we need to know
is when those \dae filtered \dae GEMs are the same as the
underlying GEMs.
We have,\\
\noi{\bf Theorem VI}.
Suppose we filter $\zt$ with a stable, full rank, one-sided filter
$\Phi(L)=\aobcod{\phixxz}{0}{0}{\phiyyz}$ then,
(a) If $\Phi(L)$ is minimum phase then
the GEMs (and so GC) are unaffected by filtering.
(b) If
$\Phi(L)$ has the form $\Phi(L)=\psi(L)\Phib(L)$ where $\psi(L)$
is a scalar all pass filter and $\Phib(L)$ is
stable, minimum phase
then the GEMs (and so GC) are unaffected by filtering.
(c) If $\Phi(L)$
is nonminimum phase and case (b) does not hold
then the GEMs (and so GC) are changed by filtering.
\noi{\it Proof}. Denote $\zbt=\Phi(L)\zt=\Phi(L)H(L)\ept$
by Result II(a).
Then for the frequency
domain GEM we need to find,
$\fbyxlam=ln\frac{|\fbxlam|}{|\hbex(L)\sgepx\hbex^T(\Lum)|}$
where $L=\exp(-j\lambda)$.
We find trivially that $|\fbxlam|=|\Phix(L)\fxlam\Phix(\Lum)|
=|\Phix(L)||\fxlam||\Phix(\Lum)|$.
Finding $\hbex(L)$ is
much more complicated; we need the minimum phase vector
moving average or \stsp model corresponding to (\ref{issd}).
Taking $\Phi(L)$ to be non-minimum
phase we carry out a spectral factorization,
$\fbzlam=\Hb(L)\Sgb\Hb^T(\Lum)$
where $\Hb(L)$ is causal, stable,
minimum phase with $\Hb(0)=I$
and then from
appendix C, $\Hb(L)$ can be written,
$\Hb(L)=\Phi(L)H(L)D(\Lum),D(\Lum)=JE^T(\Lum)\Jb\upm$
where $E(L)$ is all pass and $J,\Jb$ are
constant matrices (Cholesky factors).
Writing this in partitioned form,
\EQ
\Hb(L)=\aobcodL{\Phix(L)}{0}{0}{\Phi_Y(L)}
\aobcodL{\hxxz}{\hyxz}{\hxyz}{\hyyz}
\aobcodL{\dxx(\Lum)}{\dyx(\Lum)}{\dxy(\Lum)}{\dyy(\Lum)}
\EN
yields $\hbex(L)=\Phix(L)\kbex(L)$ where,
\begin{eqnarray*}
\kbex(L)&=&\hxxz(\dxx(\Lum)+\dxy(\Lum)\sgepyx\sgepx\upm)\\
&+&\hxyz(\dyy(\Lum)\sgepyx\sgepx\upm+\dyx(\Lum))
\end{eqnarray*}
Thus in $\fbyxlam$
the $|\Phix(L)|$ factors cancel giving,
$\fbyxlam=ln\frac{|\fxlam|}{|\kbex(L)\sgepx\kbex^T(\Lum)|}$.
This will reduce to $\fyxlam$ iff
$\kbex(L)=\hex(L)K(\Lum)$
where $K(\Lum)$ is all pass
which occurs iff
$\dxy(\Lum)=0,\dyx(\Lum)=0,
\dxx(\Lum)=\psi(\Lum)I,\dyy(\Lum)=\psi(\Lum)I$
where $\psi(\Lum)$ is a scalar all-pass filter.
Results (a),(b),(c) now follow.
We now give two examples.
{\it Example I}. Differential delay.
Suppose $\aob{\xt}{\yt}=\aobcod{1}{\rho}{0}{1}\aob{\at}{\bsubt}$
and $\Phi(L)=\aobcod{1}{0}{0}{L}$. So the two series are white noises
that exhibit an instantaneous GC. The filtering delays one series relative to the other.
Then we have,
$\zbt=\aobcod{1}{0}{0}{L}\aobcod{1}{\rho}{0}{1}\aob{\at}{\bsubt}
=\aobcod{1}{L\rho}{0}{1}\aob{\at}{\bsubt}=\Hb(L)\aob{\at}{\bsubt}$. And
we see that $\Hb(0)=I$ while $\Hb(L)$ is stable, causal and invertible,
indeed $\Hb\upm(L)=\aobcod{1}{-L\rho}{0}{1}$. Thus we see that
the differential delay has introduced a
spurious dynamic GC relation and
the original purely instantaneous GC is lost.
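The covariance bookkeeping in this example can be verified directly. The sketch below (illustrative value $\rho=0.5$, unit-variance independent white noises) computes the autocovariances of the filtered series from the MA coefficient matrices of $\Hb(L)$.

```python
import numpy as np

rho = 0.5  # illustrative value of the instantaneous coupling

# MA coefficients of the filtered series: zbar_t = Hb(L)(a_t, b_t)^T with
# Hb(L) = [[1, rho*L], [0, 1]]  =>  Hb_0 = I, Hb_1 = [[0, rho], [0, 0]]
H0 = np.eye(2)
H1 = np.array([[0.0, rho],
               [0.0, 0.0]])

# Autocovariances Gamma(k) = E[zbar_t zbar_{t-k}^T] = sum_j H_{j+k} H_j^T
G0 = H0 @ H0.T + H1 @ H1.T
G1 = H1 @ H0.T

print(G0[0, 1])  # cov(xbar_t, ybar_t)     = 0   : instantaneous GC destroyed
print(G1[0, 1])  # cov(xbar_t, ybar_{t-1}) = rho : spurious dynamic GC appears
```

The lag-zero cross-covariance of the original series ($\rho$) has migrated entirely to lag one, which is exactly the spurious dynamic relation described above.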
{\it Example II}. fMRI Hemodynamic
Response is non-minimum phase.
A number of stylized or `canonical' HRFs
based on the double gamma (i.e. difference of
two gamma functions)
have been presented in the literature
e.g. \cc{HENS03},\cc{GLOV99}.
These stylized HRFs capture two essential
features of empirical HRFs; namely
a slow rise to a peak followed by
a small negative undershoot.
And past practice
has been to use one of them for all voxels
in a slice or even volume.
Here we illustrate with a motor cortex HRF
from \cc{GLOV99}
\EQ
h(t)=f_a(\frac{t}{\tau_am})^m\epu{-(t/\tau_a-m)}
-f_b\alpha(\frac{t}{\tau_bp})^p\epu{-(t/\tau_b-p)}
\EN
where $(\tau_a,m)=(1.1,5)$ and $(\tau_b,p)=(.9,12)$
while $\alpha=.4$.
Also we have scaled each term to have maximum
value of $1$.
Here $f_a,f_b$ are amplitudes to be found
in a model fitting exercise.
In Fig.1
we show a plot of the HRF with
$f_a=1=f_b$ and the zeros on a log scale.
One zero has magnitude $>1$ showing
the HRF is non-minimum phase.
\begin{figure}
\begin{center}
\resizebox{!}{8cm}{\includegraphics{hrfmotor}}
\caption{Canonical Motor Cortex HRF and its (log) Roots}
\end{center}
\label{hrfm}
\end{figure}
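The non-minimum phase claim can be explored numerically. The sketch below samples the HRF above with $f_a=f_b=1$ (the $1$\,s sampling step and $30$\,s support are assumptions made here, not taken from the paper), confirms the peak near $t=m\tau_a$ and the negative undershoot, and computes the roots of the resulting FIR polynomial; in the convention of Fig.~1 a root of magnitude exceeding one flags a non-minimum phase response.

```python
import numpy as np

# Glover-style motor-cortex HRF with the stated parameters; f_a = f_b = 1.
ta, m = 1.1, 5
tb, p = 0.9, 12
alpha = 0.4

def hrf(t):
    # Each term peaks at value 1 (at t = m*ta and t = p*tb respectively).
    return ((t / (ta * m)) ** m * np.exp(-(t / ta - m))
            - alpha * (t / (tb * p)) ** p * np.exp(-(t / tb - p)))

t = np.arange(0.0, 30.0, 1.0)   # assumed 1 s sampling over 30 s
h = hrf(t)
print(h.max() > 0.9, h.min() < 0.0)  # peak near t = 5.5 s, then undershoot

# Roots of the FIR polynomial sum_k h[k] L^k (np.roots wants the
# highest-degree coefficient first, hence the reversal).
roots = np.roots(h[::-1])
print(np.abs(roots).max())  # compare with the (log) roots plotted in Fig. 1
```

The exact root locations depend on the assumed sampling step, so the printed maximum modulus is indicative rather than a reproduction of Fig.~1.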
\section{\bf Downsampling and Forwards Granger Causality}
\sco
We now consider to what extent GC
is preserved under downsampling.
Using the sampled notation of our discussion above,
and defining $\zbk=\aob{{\xbk}}{{\ybk}}$,
we have the following result:\\
\noi{\bf Theorem VII}.
Forwards Causality.
(a) If $\yt$ dn-sgc $\xt$ then
$\ybk$ dn-sgc $\xbk$.
(b) If $\yt$ dn-wgc $\xt$ then
in general $\ybk$ wgc $\xbk$.
\noi{\it Remarks}.
(i) Part (a) is new although technically a special
case of a result of the author\dae s
established in a non SS framework.
(ii) We might consider taking part(b)
as a formalization
of long standing folklore in econometrics
\cc{CREI87},\cc{MARC85} that downsampling
can destroy unidirectional Granger causality.
However that same folklore is flawed because
it failed to recognize the possibility of (a).
The folklore is further flawed because it failed
to recognize the more serious reverse problem
discussed below.
\noi\ul{ Proof of (a)}.
We use the partitioned expressions
in the discussion leading up to result III.
We also refer to the discussion leading up to Theorem VI.
This allows us to write two decompositions.
Firstly $\wk=\wxk+\wyk$ where
\EQ
\wxk=\sgwm A^{m-i}\bx\ep_{X,km+i-1}\;\;,
\wyk=\sgwm A^{m-i}\by\ep_{Y,km+i-1}
\EN
From result III and the definition of dn-sgc
\begin{equation}
\wxk\mbox{ is uncorrelated with }w_{Y,l}\mbox{ for all }k,l \lab{unca}
\end{equation}
The other decomposition is
$\epbk=\aob{{\epbxk}}{{\epbyk}}$
and
\begin{equation}
\epbxk\mbox{ is uncorrelated with }\bar{\ep}_{Y,l}\mbox{ for all }k,l \lab{uncb}
\end{equation}
Next we note from Theorem IV that
$\cx(\Lm I-\aum)\upm A^p\by=\cx\sgwi A^{m+r-1}\by L^{r}=0$
for all $p\geq 0$.
Thus we deduce
\begin{equation}
\cx\wyk=0,\mbox{ for all }k \lab{wyk}
\end{equation}
We can now write
\EBN
\xbk&=&\cx(\Lm I-\aum)\upm\wk+\epbxk \nn\\
&=&\cx(\Lm I-\aum)\upm\wxk+\epbxk \lab{xbk}\\
\ybk&=&\cy\zimaumm\wxk+\cy\zimaumm\wyk+\epbyk \nn
\EEN
Based on (\ref{xbk}) we now introduce the ISS model for $\xbk$
\EQ
\xbk=\cx\zimaumm\kcx\nuxk+\nuxk
\EN
where $\nuxk$ is the innovations sequence.
Using this we introduce the estimator $\kcx\nuxk$ of $\wxk$
and the estimation error $\wtxk=\wxk-\kcx\nuxk$.
Below we show
\begin{equation}
\wtxk \mbox{ is uncorrelated with }\nu_{X,l}\mbox{ for all }k,l \lab{uncc}
\end{equation}
We thus rewrite the model for $\ybk$ as,
\begin{eqnarray*}
\ybk&=&\cy\zimaumm\kcx\nuxk+\zetak\\
\zetak&=&\cy\zimaumm[\wyk+\wtxk]+\epbyk
\end{eqnarray*}
Now we can construct an ISS model for
$\zetak=(I+\cy\zimaumm\kcy)\nuyk$ where $\nuyk$ is the
innovations sequence. In view of (\ref{unca},\ref{uncb},\ref{uncc})
$\nuxk$ and $\nu_{Y,l}$ are uncorrelated for all $k,l$.
Thus we have constructed the joint ISS model
\begin{eqnarray*}
\abL{{\xbk}}{{\ybk}}&=&\bar{H}(z)\abL{{\nuxk}}{{\nuyk}}\\
\bar{H}(z)&=&\aobcodL{I+\cx\zimaumm\kcx}{\cy\zimaumm\kcx}
{0}{I+\cy\zimaumm\kcy}
\end{eqnarray*}
From this we deduce that $\ybk$ dn-sgc $\xbk$ as required.
\ul{Proof of (\ref{uncc})}. Consider then\\
$E(\wtxk\nuxl^T)=E((\wxk-\kcx\nuxk)\nuxl^T)=E(\wxk\nuxl^T)-\kcx E(\nuxk\nuxl^T)$.
The second term vanishes for $k\neq l$. The first term vanishes for $k>l$
since $\wxk$ is uncorrelated with the past and hence $\nuxl$;
for $l>k$ it vanishes since $\nuxl$ is uncorrelated with the past.
For $k=l$ it vanishes by the definition of $\kcx$ \cc{KAIL00}.
\ul{Proof of (b)}.
A perusal of the proof of (a) shows that
we cannot construct the block lower triangular
joint ISS model; in general we obtain a full block ISS model.
\section{\bf Downsampling and Reverse Granger Causality}
We now come to the more serious issue of whether unidirectional
Granger causality might arise from downsampling even though not
present on the original timescale.
To establish this we have simply to exhibit a numerical
example but that is not as simple as one might hope.
\subsection{\bf Simulation Design}
Designing a procedure to generate a wide class of
examples of spurious causality
is not as simple as one might hope.
We develop such a procedure for a
bivariate vector autoregression of
order one; a bivariate VAR(1). On the one hand this is
about the simplest example one can consider;
on the other hand it is general enough to generate
important behaviours.
The \biva VAR(1) model is then,
\EQ
\aob{\xt}{\yt}=A\aob{x_{t-1}}{y_{t-1}}+\aob{\epxt}{\epyt}\;\;,
A=\aobcod{\phix}{\gamy}{\gamx}{\phiy}\;\;,
\Sg=\aobcod{\sg_a^2}{\rho\sg_a\sg_b}{\rho\sg_a\sg_b}{\sg_b^2}
\EN
where $\Sg$ is the \varie matrix of the zero mean white
noise $\aob{\epxt}{\epyt}$;
$\rho$ is a correlation.
We note that this model can be written as an ISS model
with parameters, $A,I,-A,\Sg$. Hence all the computations
described above are easily carried out.
But the real issue is how to select the parameters.
By a straightforward scaling
argument it is easy to see that we may set $\sg_a=1=\sg_b$
without loss of generality. Thus we need to choose
only $A,\rho$.
Some reflection shows that there are two issues.
Firstly we must ensure
the process is \stty i.e. for the \evals $\lamw,\lamt$
of $A$ we must have $|\lamw|<1,|\lamt|<1$.
Secondly to design a simulation we need to choose
$\fyx,\fxy$; but these quantities depend on the parameters
$A,\rho$
in a highly nonlinear way so it is not obvious how to
do this. And five parameters is already too many
to pursue this by trial and error.
For the first issue we have
$trace(A)=\lamw+\lamt=\phix+\phiy$
and $det(A)=\lamw\lamt=\phix\phiy-\gamx\gamy$.
Our approach is to select $\lamw,\lamt$ and then
find $\phix,\phiy$ to satisfy
$\phix+\phiy=\lamw+\lamt,\phix\phiy=\lamw\lamt+\gamx\gamy$.
This requires solution of a quadratic equation. If we
denote the solutions as $r_{+},r_{-}$ then
we get two cases:
$(\phix,\phiy)=(r_{+},r_{-})$ and $(\phiy,\phix)=(r_{+},r_{-})$.
This leaves us to select $\gamx,\gamy$.
In Appendix B we show that
$\fyx=ln\frac{\sg_x^2}{\sg_a^2}
\geq ln(1+\xix)$ where
$\xix=\gamx^2(1-\rho^2)$.
And similarly $\fxy\geq ln(1+\xiy)$
where $\xiy=\gamy^2(1-\rho^2)$.
But we also show that $\xix=0\Ra \fyx=0$
and $\xiy=0\Ra \fxy=0$.
So we select $\xix,\xiy$ thereby setting
lower bounds on $\fyx,\fxy$. This seems to be the best
one can do and as we see below works quite well.
So given $\xix,\xiy$ compute
$\gamx=\pm \frac{\sqrt{\xix}}{\sqrt{1-\rho^2}}$
and $\gamy=\pm \frac{\sqrt{\xiy}}{\sqrt{1-\rho^2}}$.
This gives four cases and together with the
two cases above yields eight cases.
This is not quite the end of the story since
the $\gamx,\gamy$ values need to be consistent with
the $\phix,\phiy$ values. Specifically the quadratic
equation to be solved for $\phix,\phiy$ must have real roots.
Thus the discriminant must be $\geq 0$. So
$(\phix+\phiy)^2-4(\phix\phiy)
=(\lamw+\lamt)^2-4(\lamw\lamt+\gamx\gamy)
=(\lamw-\lamt)^2-4\gamx\gamy\geq 0$.
There are four cases; two with real roots, two with complex roots.
If $\lamw,\lamt$ are real
then we require
$(\lamw-\lamt)^2\geq 4\gamx\gamy
\Ra (\lamw-\lamt)^2\geq
4sign(\gamx\gamy)\frac{\sqrt{\xix\xiy}}{1-\rho^2}$.
This always holds if $sign(\gamx\gamy)\leq 0$.
If $sign(\gamx\gamy)>0$ then we have a
binding constraint which restricts the sizes of $\xix,\xiy$.
If $\lamw,\lamt$ are complex conjugates
then $(\lamw-\lamt)^2$ is negative.
If $sign(\gamx\gamy)\geq 0$ then the condition never holds.
If $sign(\gamx\gamy)<0$ then there is a binding
constraint which restricts the sizes of $\xix,\xiy$.
In particular note that if $sign(\gamx\gamy)=0$ then
one cannot have complex roots for $A$.
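The design procedure just described is straightforward to code. The Python sketch below (illustrative parameter values; real eigenvalues with $sign(\gamx\gamy)<0$, so the discriminant condition holds automatically) backs out $A$ and confirms that it has the prescribed eigenvalues.

```python
import numpy as np

# VAR(1) design: pick the eigenvalues of A and the GEM lower-bound
# parameters, then back out the coefficients (sigma_a = sigma_b = 1).
lam1, lam2 = 0.9, 0.3          # desired (real) eigenvalues of A
xi_x, xi_y = 0.5, 0.2          # lower bounds: fyx >= ln(1+xi_x), etc.
rho = 0.2
sx, sy = +1.0, -1.0            # sign choices; sign(gx*gy) < 0 here

gx = sx * np.sqrt(xi_x) / np.sqrt(1 - rho ** 2)
gy = sy * np.sqrt(xi_y) / np.sqrt(1 - rho ** 2)

# phi_x + phi_y = lam1 + lam2,  phi_x * phi_y = lam1*lam2 + gx*gy
s, pr = lam1 + lam2, lam1 * lam2 + gx * gy
disc = s ** 2 - 4 * pr
assert disc >= 0               # guaranteed here since gx*gy < 0
phix = (s + np.sqrt(disc)) / 2
phiy = (s - np.sqrt(disc)) / 2

A = np.array([[phix, gy],
              [gx, phiy]])
print(np.sort(np.linalg.eigvals(A).real))  # recovers [0.3, 0.9]
```

Swapping $(\phix,\phiy)$ and flipping the signs of $\gamx,\gamy$ generates the remaining cases enumerated above.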
We now use this design procedure to illustrate reverse causality.
\subsection{\bf Computation}
We describe the steps used to generate the results below.
We assume the \stsp model for $\zt=\aob{\xt}{\yt}$ comes
in ISS form.
Since standard state space subspace model fitting
algorithms \cc{LARI83},\cc{OBDM96},\cc{BAUR05} generate ISS models
this is a reasonable assumption. Otherwise
we use result I to generate the corresponding
ISS model.
Given a sampling multiple $m$ we first
use Theorem III to generate the subsampled ISS model and hence
$\Sgep\upcm$. To obtain the GEMs
we use Theorem I to generate the marginal models for
$\xt,\yt$ yielding $\Om_X\upcm,\Om_Y\upcm$.
And now $\fyx\upcm,\fxy\upcm$ are obtained from
the formulae (\ref{fyx}),(\ref{fydx}) and the comment following
Theorem Va.
\subsection{\bf Scenario Studies}
We now illustrate
the various results above with some bivariate simulations.
\noi\ul{Example 1}. GEMs decline gracefully.
\noi\ul{Table 1}. GEMs for various sampling intervals for\\
$(\lamw,\lamt,\xix,\xiy,\rho)
=(-.95 \expj{j\times .1}, -.95 \expj{-j\times .1},1.5,.2,.2)$
$\Ra A=\aobcod{-.204}{.452}{-1.24}{-1.69}$.\\
\bt{|l|l|l|l|l|l|l|l|l|l|l|}\hl
m & 1 & 2 & 3 & 4 & 5 & 6 & 10 & 20 & 30 & 40 \\\hl
$\fyx$
& 1.3761
& 1.657
& 1.408
& 1.169
& 0.994
& 0.864
& 0.551
& 0.151
& 0.001
& 0.014
\\\hl
$\fxy$
& 0.19834
& 0.253
& 0.287
& 0.308
& 0.319
& 0.322
& 0.293
& 0.109
& 0.001
& 0.011
\\\hl
\et\\
Here, for the underlying process,
$y$ pushes $x$ much harder than $x$ pushes $y$.
This pattern is roughly preserved with slower sampling,
but the relative strengths change.
\noi\ul{Example 2}. GEMs Reverse
\noi\ul{Table 2}. GEMs for various sampling intervals for\\
$(\lamw,\lamt,\xix,\xiy,\rho)
=(.95 \expj{j\times .1}, .95 \expj{-j\times .1},1.5,.2,.2)$
$\Ra A=\aobcod{1.69}{.452}{-1.24}{.204}$.\\
\bt{|l|l|l|l|l|l|l|l|l|l|l|}\hl
m & 1 & 2 & 3 & 4 & 5 & 6 & 10 & 20 & 30 & 40 \\\hl
$\fyx$
& 0.92983
& 0.879
& 0.766
& 0.683
& 0.62
& 0.57
& 0.418
& 0.131
& 0.001
& 0.013
\\\hl
$\fxy$
& 1.0476
& 1.824
& 2.006
& 1.795
& 1.527
& 1.3
& 0.751
& 0.18
& 0.002
& 0.016
\\\hl
\et\\
In this case the underlying processes push each other with
roughly equal strength. But subsampling yields a false picture
with $x$ pushing $y$ much harder than the reverse.
\noi\ul{Example 3}. Near Equal Strength Dynamics
Becomes Nearly Unidirectional
\noi\ul{Table 3}. GEMs for various sampling intervals for\\
$(\lamw,\lamt,\xix,\xiy,\rho)
=(.995 , -.7,1,.5,.7)$
$\Ra A=\aobcod{1.45}{-.84}{1.18}{-1.16}$.\\
\bt{|l|l|l|l|l|l|l|l|l|l|l|}\hl
m & 1 & 2 & 3 & 4 & 5 & 6 & 10 & 20 & 30 & 40 \\\hl
$\fyx$
& 1.487
& 0.051
& 0.245
& 0.057
& 0.111
& 0.052
& 0.038
& 0.019
& 0.012
& 0.009
\\\hl
$\fxy$
& 1.685
& 0.167
& 0.638
& 0.258
& 0.467
& 0.294
& 0.289
& 0.212
& 0.159
& 0.125
\\\hl
\et\\
In this case the underlying relation is one of near
equal strength feedback interconnection.
But almost immediately a very unequal relation appears
under subsampling which soon decays to a near
unidirectional relation.
\noi\ul{Example 4}. Near Unidirectional Dynamics Becomes Near
Equal Strength
\noi\ul{Table 4}. GEMs for various sampling intervals for\\
$(\lamw,\lamt,\xix,\xiy,\rho)
=(.99 \expj{j\times .25}, .99 \expj{-j\times .25},.1,3,-.8)$
$\Ra A=\aobcod{1.883}{2.236}{-0.408}{0.036}$.\\
\bt{|l|l|l|l|l|l|l|l|l|l|l|}\hl
m & 1 & 2 & 3 & 4 & 5 & 6 & 10 & 20 & 30 & 40 \\\hl
$\fyx$
& 0.023
& 0.284
& 0.381
& 0.428
& 0.451
& 0.457
& 0.309
& 0.454
& 0.407
& 0.168
\\\hl
$\fxy$
& 2.937
& 2.384
& 1.617
& 1.243
& 1.019
& 0.859
& 0.372
& 0.532
& 0.453
& 0.178
\\\hl
\et\\
In this case a near unidirectional dynamic relation
immediately becomes one of significant but unequal
strengths and then one of near equal strength.
There is nothing pathological about these examples
and using the design procedure developed above
it is easy to generate other similar kinds of examples.
They make it emphatically clear that GC cannot be
reliably discerned from
subsampled data.
\section{\bf Conclusions}
This paper has given a theoretical and computational analysis
of the use of Granger causality in fMRI.
There were two main issues:
the effect of downsampling and the effect of
hemodynamic convolution.
To deal with these issues a number of
novel results in multivariate time series
and Granger causality were developed
via state space methods
as follows.
\ben
\item[(a)] Computations of submodels via the DARE (Theorems I,IV).
\item[(b)] Reliable computation of GEMs via the DARE
(Theorems Va,Vb).
\item[(c)] Effect of filtering on GEMs (Theorem VI).
In particular the destructive effect of the
non-minimum phase property of HRFs.
\item[(d)] Computation of downsampled models via the DARE.
\een
Using these results we were able to develop, in
section 8, a framework for generating downsampling induced
spurious Granger causality `on demand' and provided a number
of illustrations.
All this leads to the conclusion that
Granger causality analysis of fMRI data cannot be used to discern
neuronal level driving relationships.
Not only is the time-scale too slow but
even with faster sampling
the non-minimum phase aspect
of the HRF will still compromise the method.
Future work would naturally include an extension
of the Granger causality results to handle
the presence of a third vector time series.
And also extensions to deal with time-varying Granger
causality.
Non-Gaussian versions could mitigate the non-minimum phase
problem to some extent but there does not seem
to be any evidence for the non-Gaussianity of fMRI data.
Extensions to nonlinear Granger causality
are currently of great interest but need a considerable development.
\section{\bf Stabilizability, Detectability and DARE}
In this section we restate and modify for our purposes
some standard state space results.
We rely mostly on
\cc{KAIL00}[Appendices E,C].
We denote an eigenvalue of a matrix by $\lambda$
and a corresponding eigenvector by $q$.
We say $\lambda$ is a \ul{stable} eigenvalue if $|\lambda|<1$;
otherwise $\lambda$ is an \ul{unstable} eigenvalue.
\subsection{\bf Stabilizability}
The pair $(A,B)$ is \ul{controllable} if
there exists a matrix $G$ so that $A-BG$ is stable
i.e. all eigenvalues of $A-BG$ are stable.
$(A,B)$ is controllable iff any of the following conditions hold,
(i) Controllability matrix:
$\calc=[B,AB,\cdots, A^{n-1}B]$ has rank $n$.
(ii) Rank Test:
$rank[\lambda I-A, B]=n$ for all eigenvalues $\lambda$ of $A$.
(iii) PBH test:
There is no left eigenvector
of $A$ that is orthogonal to $B$ i.e. if $q^TA=\lambda q^T$
then $q^TB\neq 0$.
\noi The pair $(A,B)$ is \ul{stabilizable} if:
$rank[\lambda I-A,B]=n$ for all \ul{unstable} eigenvalues of $A$.
Three useful tests for stabilizability are:
(i) PBH Test:
$(A,B)$ is stabilizable iff there is no left eigenvector of $A$
corresponding to an \ul{unstable} \eval
that is orthogonal to $B$ i.e. if
$q^TA=\lambda q^T$ and $|\lambda|\geq 1$
then $q^TB\neq 0$.
(ii) $(A,B)$ is stabilizable if $(A,B)$ is controllable.
(iii) $(A,B)$ is stabilizable if $A$ is stable.
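These tests are easy to apply numerically. The sketch below (a hypothetical $2\times 2$ example with a repeated eigenvalue) computes the rank of the controllability matrix for two input matrices and cross-checks the PBH test with the left eigenvector $q^T=(0,1)$.

```python
import numpy as np

def ctrb_rank(A, B):
    # Rank of the controllability matrix [B, AB, ..., A^{n-1}B]
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, r) @ B for r in range(n)])
    return np.linalg.matrix_rank(C)

A = np.array([[0.5, 1.0],
              [0.0, 0.5]])
B1 = np.array([[0.0], [1.0]])   # (A, B1): rank 2 => controllable
B2 = np.array([[1.0], [0.0]])   # (A, B2): rank 1 => not controllable
print(ctrb_rank(A, B1), ctrb_rank(A, B2))

# PBH test: q is a left eigenvector of A for lambda = 0.5
q = np.array([0.0, 1.0])
print(q @ A - 0.5 * q)   # ~0, confirming q^T A = lambda q^T
print(q @ B2)            # 0  => PBH fails => (A, B2) not controllable
print(q @ B1)            # nonzero => PBH passes for B1

# Note: A is stable here, so (A, B2) is still stabilizable (test (iii))
# even though it is not controllable.
```

The same routine applied to $(A^T,C^T)$ gives an observability/detectability check, by the duality used in the next subsection.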
\subsection{\bf Theorem DARE}
\cc{KAIL00}(Theorem E6.1, Lemma 14.2.1,section 14.7)\\
Under conditions, N,St,De the DARE has a unique positive
semi-definite solution $P$
which is stabilizing, i.e. $\As-\Ks C$ is a stable matrix.
Further if we initialize $P_0=0$ then
$\Pt$ is nondecreasing and $\Pt\ra P$ as $t\rai$.
\noi{\it Remarks}.
(i)
$\As-\Ks C$ stable means $A-KC$ is
stable (see below). And this implies
that $(A,K)$ is controllable (see below).
(ii)
Since $V\geq R$, condition N implies that $V$ is positive definite.
\noi\ul{\it Proof of (i)}.
We first note (taking limits in) \cc{KAIL00}(equation 9.5.12)
$\Ks=K-SR\upm$.
We have then $\As-\Ks C=A-SR\upm C-(K-SR\upm)C=A-KC$.
So $\As-\Ks C$ is stable iff $A-KC$ is stable.
But then $(A,K)$ is controllable.
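The iterative claim in Theorem DARE can be illustrated in the scalar case. The sketch below (illustrative parameters $a=0.5$, $c=1$, $q=1$, $r=1$, $s=0$) iterates the filtering Riccati recursion from $P_0=0$; the iterates are nondecreasing and converge to the positive root of $P^2-0.25P-1=0$, and the limiting gain is stabilizing.

```python
import numpy as np

# Scalar Kalman filtering Riccati iteration, started at P_0 = 0.
a, c, q, r = 0.5, 1.0, 1.0, 1.0

P, hist = 0.0, []
for _ in range(200):
    hist.append(P)
    K = a * P * c / (r + c * P * c)           # Kalman gain for current P
    P = a * P * a + q - K * (r + c * P * c) * K  # Riccati update

print(P)                 # ~1.13278, positive root of P^2 - 0.25 P - 1 = 0
print(abs(a - K * c) < 1)  # stabilizing: |a - Kc| < 1
```

The monotone convergence from $P_0=0$ is exactly the behaviour asserted in the theorem; the residual of the fixed-point equation can be checked the same way for matrix-valued examples.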
\subsection{\bf Detectability}
The pair $(A,C)$ is \ul{detectable} if $(A^T,C^T)$ is stabilizable.
\noi{\it Remarks}.
(i)
If $\As$ is stable (all eigenvalues have modulus $<1$)
then conditions St,De automatically hold.
(ii)
Condition D can be replaced with the detectability
of $(A,C)$ which is the way \cc{KAIL00} states the result.
We show equivalence below (this is also noted
in a footnote in \cc{KAIL00}(section 14.7)).
\noi\ul{\it Proof of Remark(ii)}.
Suppose $(\As,C)$ is detectable but $(A,C)$ is not.
Then by the PBH test there is a right eigenvector $p$ of $A$
corresponding to an unstable \eval of $A$
with $Ap=\lambda p,Cp=0$. But then $\As p=(A-SR\upm C)p=Ap=\lambda p$
while $Cp=0$ which contradicts the detectability of $(\As,C)$.
The reverse argument is much the same.\\
\ul{Proof of Result I}.
(a) follows from the discussion leading to theorem DARE.
(b) follows from the remarks after theorem DARE.
\section{\bf GEMs for Bivariate VAR(1)}
Applying formula (\ref{fxz}) and reading off $\hxx $ etc. from
the VAR(1) model yields,
\begin{eqnarray*}
\fxlam&=&[(1+\phiy^2-2\phiy \cos(\lambda))\sga^2
+\gamx^2\sgb^2-2\rho\sga\sgb(\gamx\phiy-\gamx \cos(\lambda))]
/|D(\epu{j\lambda})|^2\\
D(L)&=&(1-\phix L)(1-\phiy L)-\gamx\gamy L^2
\end{eqnarray*}
This can clearly be written as an ARMA(2,1) spectrum
$\sg_x^2|1-\thtx\emu{j\lambda}|^2/|D(\epu{j\lambda})|^2$.
Equating coefficients gives
\begin{eqnarray*}
\gamo&=&\sg_x^2(1+\thtx^2)
=(1+\phiy^2)\sga^2+\sgb^2\gamx^2-2\rho\sga\sgb\gamx\phiy
=\sga^2(1+\xix+\dx^2)\\
\gamw&=&\sg_x^2\thtx
=\sga^2\dx
\end{eqnarray*}
where $\xix=(1-\rho^2)\gamx^2\sgb^2$ and
$\dx=\phiy-\rho\gamx\frac{\sgb}{\sga}$.
We thus have $\thtx^2=\gamw^2/\sg_x^4$ and using this
in the first equation gives,
$\gamo=\sg_x^2+\gamw^2/\sg_x^2$
or $\sg_x^4-\sg_x^2\gamo+\gamw^2=0$.
This has, of course, two solutions
\begin{eqnarray*}
\sg_x^2&=&\half(\gamo\pm\sqrt{\gamo^2-4\gamw^2})\\
\Ra\frac{\sg_x^2}{\sga^2}&=&
\half(\frac{\gamo}{\sga^2}\pm
\sqrt{\frac{\gamo^2}{\sga^4}-4\frac{\gamw^2}{\sga^4}})
=\half(1+\xix+\dx^2\pm\sqrt{(1+\xix+\dx^2)^2-4\dx^2})
\end{eqnarray*}
Note that if $\xix=0$ this delivers
$\frac{\sg_x^2}{\sga^2}=\half(1+\dx^2+\sqrt{(1-\dx^2)^2})=1$.
We must choose the solution which ensures
$|\thtx|<1
\equiv \frac{\gamw^2}{\sg_x^4}<1
\equiv \frac{\gamo}{\sg_x^2}<2
\equiv \gamo/\sga^2<2\sg_x^2/\sga^2$
$\equiv 1+\xix+\dx^2<1+\xix+\dx^2\pm \sqrt{(1+\xix+\dx^2)^2-4\dx^2}$.
And so we must choose the \dae$+$\dae solution.
Continuing, we now claim,
\EQ
\frac{\sg_x^2}{\sga^2}\geq\half(1+\xix+\dx^2+\sqrt{(1+\xix-\dx^2)^2})
=\half(1+\xix+\dx^2+1+\xix-\dx^2)
=1+\xix
\EN
This follows if,
$(1+\xix+\dx^2)^2-4\dx^2\geq(1+\xix-\dx^2)^2
\equiv (1+\xix+\dx^2)^2-(1+\xix-\dx^2)^2\geq4\dx^2
\equiv 4(1+\xix)\dx^2\geq4\dx^2
\equiv 1+\xix\geq1$,
which holds.
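These bounds are easy to confirm numerically. The sketch below evaluates the `$+$' root formula at an illustrative point $(\xix,\dx)=(0.5,0.7)$ (with $\sga=1$) and checks both the lower bound $\sg_x^2/\sga^2\geq 1+\xix$ and the invertibility condition $|\thtx|<1$.

```python
import numpy as np

# Illustrative parameter point for the Appendix B formulas.
xi, d = 0.5, 0.7               # xi_x and d_x
g0 = 1 + xi + d ** 2           # gamma_0 / sigma_a^2
ratio = 0.5 * (g0 + np.sqrt(g0 ** 2 - 4 * d ** 2))  # sigma_x^2/sigma_a^2
theta = d / ratio              # theta_x = gamma_1 / sigma_x^2 (sigma_a = 1)

print(ratio, theta)
print(np.log(ratio) >= np.log(1 + xi))  # fyx >= ln(1 + xi_x)
```

At this point the ratio is about $1.702\geq 1.5=1+\xix$, so the bound is comfortably slack rather than tight.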
\section{\bf Spectral Factorization}
Suppose $\ept$ is a white noise sequence with
$E(\ept)=0,var(\ept)=\Sg$.
Let $G(L)$
be a stable causal possibly non-minimum phase filter.
Then $\zbt=G(L)\ept$ has spectrum
$\fbzlam=G(L)V G^T(\Lum)$ where $L=exp(-j\lambda)$.
We can then find a unique causal, stable minimum phase spectral
factorization, $\fbzlam=\Go(L)V_o\Go^T(\Lum)$.
Let $V,V_o$ have Cholesky factorizations,
$V=JJ^T,V_o=\Jo\Jo^T$
and set $\Gc(L)=G(L)J,\Goc(L)=\Go(L)\Jo$.
Then $\fbzlam=\Gc(L)\Gc^T(\Lum)=\Goc(L)\Goc^T(\Lum)$.
Since $\Goc(L)$ is minimum phase we can introduce
the causal filter $E(L)=\Goc\upm(L)\Gc(L)\Ra
E(L)E^T(\Lum)=I$.
Such a filter is called an \ul{all} \ul{pass} filter
\cc{HDE88},\cc{GREE88}.
Now, $\Gc(L)=\Goc(L) E(L)$
or $G(L)=\Go(L)\Jo E(L)J\upm$ i.e. a
decomposition of a non-minimum phase
(matrix) filter into a product of a minimum phase
filter and an all pass filter. We can also write this as,
$\Go(L)=G(L)J E\upm(L)\Jo\upm
=G(L)JE^T(\Lum)\Jo\upm$ showing how
the non-minimum phase filter is transformed to yield a spectral factor.
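In the scalar case the decomposition can be made completely explicit. The sketch below (an illustrative example, not tied to any particular model above) takes the non-minimum phase MA(1) filter $G(L)=1+2L$, whose zero at $L=-1/2$ lies inside the unit circle, reflects the zero to obtain the minimum phase factor $1+\frac{1}{2}L$ with innovation variance $4V$, and checks on a frequency grid that the two spectral factorizations agree and that the all-pass filter has unit modulus on the unit circle:

```python
import numpy as np

V = 1.0
lam = np.linspace(0.0, np.pi, 1001)
L = np.exp(-1j * lam)               # L = exp(-j*lambda) on the unit circle

G = 1.0 + 2.0 * L                   # zero at L = -1/2: non-minimum phase
G_o = 1.0 + 0.5 * L                 # zero reflected to L = -2: minimum phase
V_o = 4.0 * V                       # innovation variance rescaled accordingly

f = np.abs(G)**2 * V                # spectrum from the original factorization
f_o = np.abs(G_o)**2 * V_o          # spectrum from the minimum phase factorization

E = (G / G_o) * np.sqrt(V / V_o)    # scalar all-pass filter
spec_gap = np.max(np.abs(f - f_o))
allpass_gap = np.max(np.abs(np.abs(E) - 1.0))
```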
\section{Introduction}
Supersymmetry(SUSY) is one of the most appealing theories for
physics beyond the Standard Model(SM), as it solves the
hierarchy problem and could provide a good candidate
for the cold dark matter(CDM) in the universe.
SUSY predicts the existence of a super-partner
for each SM particle, sharing the same
quantum numbers but differing by half a unit of spin.
Despite numerous attempts, SUSY particles(``sparticles'')
have not yet been observed;
this means that SUSY must be broken and
sparticles must be heavier than their SM counterparts.
A new quantum number is introduced, R-Parity($R_P$),
which is $+1$ for SM and $-1$ for SUSY particles.
The conservation of $R_P$ implies that the lightest
supersymmetric particle(LSP) is stable and
escapes the experimental apparatus undetected,
causing a striking experimental signature of large missing transverse
energy($\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_{\rm T}\:$}$).
Most SUSY theories predict the sparticle spectrum to become
accessible at the TeV scale;
in these scenarios the Tevatron might be able to discover
SUSY before the advent of the LHC.
We will present searches for various sparticles
at the CDF
and D\O\ experiments; the results are interpreted
in $R_P$ conserving and violating(RPV) models.
\section{Charginos and Neutralinos}
We present searches for associated production of
the lightest chargino($\tilde{{\chi}}^\pm_{1}$) and
second to lightest neutralino($\tilde{{\chi}}^{0}_{2}$),
in three different models.\\
$\bullet$ {\it mSugra: three leptons plus missing transverse energy.}
The scenario of minimal supergravity (mSugra) is a Grand Unified
Theory that includes gravity; owing to the small
number of free parameters ($M_{0}$, $M_{1/2}$, $A_{0}$, ${\rm tan}\beta$, $\mathrm{sign}(\mu)$),
mSugra is very popular
in experimental searches. In mSugra
the lightest neutralino($\tilde{{\chi}}^{0}_{1}$)
is the LSP and a CDM candidate.
The LEP2 limit of $M_{\tilde{{\chi}}^\pm_{1}} >$ 103.5~\ensuremath{\GeV\!/c^2}\ implies
that the squark ($\tilde{q}$) and gluino ($\tilde{g}$)
masses
are $\raisebox{-.7ex}{$\stackrel{\textstyle >}{\sim}$} 300~\ensuremath{\GeV\!/c^2}$, meaning that
strong sparticle production
at the Tevatron is suppressed.
Thus, the associated \nobreak{production}
\begin{table}[t]
\begin{minipage}[c]{0.48\textwidth} {
\vspace*{-0.0cm}
\caption{The number of observed events and the SM expectation in 320~$\pbinv$ of
data for the D\O\ trilepton analyses and their sum.} \label{tab::d0_trilep}
\begin{center} \resizebox{\textwidth}{!} {
\begin{tabular}{|c c c c|}
\hline
{\bf mode}
& $\mathbf{p_T^{\ell_1},p_T^{\ell_2},p_T^{\ell_3}}$(\ensuremath{\GeV\!/c})
& {\bf SM expected} & {\bf Observed} \\%& $\mathrm{\mathbf{\sigma\cdot Br\, limit}(pb)}$ & {\bf $m_{\tilde{{\chi}}^\pm_{1}}$ limit(\ensuremath{\GeV\!/c^2})}\\
\hline
$ee+\ell$ &12,8,4 & 0.21$\pm$0.12 & 0 \\
$e\mu+\ell$ &12,8,7 & 0.31$\pm$0.13 & 0 \\
$\mu\mu+\ell$ &11,5,3 & 1.75$\pm$0.57 & 2 \\
$\mu^\pm\mu^\pm$ &11,5 & 0.64$\pm$0.38 & 1 \\
$e\tau+\ell$ & 8,8,5 & 0.58$\pm$0.14 & 0 \\
$\mu\tau+\ell$ &14,7,4 & 0.36$\pm$0.13 & 1 \\ \hline
Total & & 3.85$\pm$0.75 & 4 \\
\hline
\end{tabular} }
\end{center}
}
\end{minipage}\hfill
\begin{minipage}[c]{0.5\textwidth} {
of $\tilde{{\chi}}^\pm_{1}\tilde{{\chi}}^{0}_{2}$ would
be the dominant SUSY production mechanism if
sparticles are accessible at these energies.
D\O~\cite{d0_np} has looked for the ``tri-lepton'' signal
from $p\bar{p}\rightarrow\tilde{{\chi}}^\pm_{1}\tilde{{\chi}}^{0}_{2}$
followed by $\tilde{{\chi}}^{0}_{2}\rightarrow\ell\bar{\ell}\tilde{{\chi}}^{0}_{1}$
and~$\tilde{{\chi}}^\pm_{1}\rightarrow\ell\nu\tilde{{\chi}}^{0}_{1}$.
Events are selected with large $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_{\rm T}\:$}$ and
two isolated leptons
satisfying analysis
dependent
topological cuts.
The identification requirements
on the third lepton are loosened
to increase acceptance and include
hadronic decays of $\tau$ leptons.
The dominant SM
}\hfill
\end{minipage}
\vspace{-0.7cm}
\end{table}
\noindent
background sources are
dibosons and QCD production.
The number of observed events is
reported in Tab.~\ref{tab::d0_trilep},
together with the lepton selection criteria
and the number of
SM expected events in each channel.
No evidence of SUSY is observed.
This translates into a limit on
$\sigma\cdot Br$
\begin{figure}[h]
\begin{minipage}[c]{0.46\textwidth} {
\vspace*{-0.2 cm}
of 0.2~pb and on the mass
of the chargino of 116~\ensuremath{\GeV\!/c^2}\ for the scenario with
$M(\tilde{\ell})\simeq M(\tilde{{\chi}}^{0}_{2})$
and no
slepton
mixing(``3-l max''~\cite{d0_np}).
The combined limit is shown
in
Fig.~\ref{fig::d0_trilep};
the main systematic uncertainties
on the limit calculation are those
related to the
statistics of the Monte Carlo background
samples and the modelling of the QCD background.
This result
reaches for the first time
beyond the Run~I and LEP2 limit in mSugra
(for this
choice of parameter space).
CDF also has preliminary results~\cite{cdf_web} for searches for
$\tilde{{\chi}}^\pm_{1}\tilde{{\chi}}^{0}_{2}$ in the $ee+\ell$ channel; the results in the other
channels are imminent.
}
\end{minipage}\hfill
\begin{minipage}[c]{0.5\textwidth} {
\vspace*{-0.3 cm}
\begin{center}
\epsfig{figure=my_trilep_d0_limit.eps,width=\textwidth,height=5.cm}
\vspace*{-1.0cm}\begin{flushright}{\mbox{\small{\bf Chargino Mass (\ensuremath{\GeV\!/c^2})}}}\end{flushright}
\end{center}
\caption{D\O\ combined limit
on the $\sigma\cdot Br$ for
$p\bar{p}\rightarrow\tilde{{\chi}}^\pm_{1}\tilde{{\chi}}^{0}_{2}\rightarrow 3\ell+\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_{\rm T}\:$}$
as a function of the chargino mass for three different mSugra models.
}\label{fig::d0_trilep}
}
\end{minipage}\hfill
\end{figure}
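As an aside on how such a null result is turned into a $\sigma\cdot Br$ limit: the underlying arithmetic can be sketched as a Poisson counting experiment, in which the 95\% C.L. upper limit on the number of signal events is divided by efficiency times luminosity. The efficiency value below is purely hypothetical, and the actual analyses include background subtraction and systematic uncertainties, so this is illustrative only:

```python
import math

def poisson_upper_limit(n_obs, b, cl=0.95):
    """Smallest signal s with P(n <= n_obs | s + b) <= 1 - cl, by bisection."""
    def p_le(mu):
        return sum(math.exp(-mu) * mu**k / math.factorial(k)
                   for k in range(n_obs + 1))
    lo, hi = 0.0, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if p_le(mid + b) > 1.0 - cl:
            lo = mid
        else:
            hi = mid
    return hi

n_up = poisson_upper_limit(0, 0.0)      # zero observed, no background: ~3.0 events

# Hypothetical signal efficiency; 320 pb^-1 is the luminosity quoted above
eff, lumi_pb = 0.015, 320.0
sigma_br_limit_pb = n_up / (eff * lumi_pb)
```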
\newline$\bullet$ {\it GMSB: diphotons plus missing transverse energy.}
In Gauge-Mediated Supersymmetry Breaking(GMSB) models
a light gravitino is the LSP and
the next to lightest SUSY particle(NLSP)
\vspace*{-0.75cm}
\begin{figure}[h]
\begin{minipage}[c]{0.46\textwidth} {
\begin{center}
\epsfig{figure=ggMetCombo_v6.eps,width=\textwidth,height=4.9cm}
\end{center}
\caption{CDF and D\O\ combined cross section times branching ratio limit
for the search for GMSB SUSY events in the $\gamma\gamma\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_{\rm T}\:$}$ channel.
}\label{fig::ggmet}
}
\end{minipage}\hfill
\begin{minipage}[c]{0.53\textwidth} {
is
expected to decay into a photon and the LSP if $R_P$
is conserved.
The main production mode
at the Tevatron is
predicted to be $p\bar{p}\rightarrow\tilde{{\chi}}^\pm_{1}\tilde{{\chi}}^\mp_{1}$ or $\tilde{{\chi}}^\pm_{1}\tilde{{\chi}}^{0}_{2}$, where
the gaugino pair cascades down to two $\tilde{{\chi}}^{0}_{1}$s, leading to a final
state with $\gamma\gamma\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_{\rm T}\:$}$.
Both CDF and D\O\ have performed
searches in this
channel~\cite{ggmet_run2}.
CDF(D\O) selects events requiring two
photons above 13(20)~\ensuremath{\mathrm{Ge\kern -0.1em V}}\
and $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_{\rm T}\:$}$ above 45(40)~\ensuremath{\mathrm{Ge\kern -0.1em V}}. CDF(D\O) observes zero(two)
events, with 0.3$\pm$0.1(3.7$\pm$0.6) SM events expected.
With these results CDF and D\O\ are able to set a limit
on
the mass of the chargino of 167
and 195~\ensuremath{\GeV\!/c^2}\ respectively.
The main systematic uncertainty
comes from
the photon identification efficiency in both
}
\end{minipage}\hfill
\vspace*{-0.5 cm}
\end{figure}
\noindent cases.
CDF and D\O\ combined the two analyses
obtaining a limit on $M_{\tilde{{\chi}}^\pm_{1}}$ of
209~\ensuremath{\GeV\!/c^2}
(see Fig.~\ref{fig::ggmet}),
significantly improving over the single experiments and Run~I
results.\vspace{0.4cm}
\\
$\bullet$ {\it Other models: CHAMPS.}
D\O\ has looked for electrically charged long-lived
massive particles(CHAMPS) in 390~$\pbinv$ of data.
Using timing information in the muon system,
events are selected by requiring two isolated muons with
\ensuremath{p_{\rm T}}$>$15~\ensuremath{\GeV\!/c}\ and
speed significantly smaller than $c$.
No events have been observed, with 0.66$\pm$0.06 events expected,
measured in ${ Z}^0\rightarrow{ \mu}^+ { \mu}^-$ data.
The main systematic uncertainty comes from
the $\mu$ efficiencies and
the time measurement.
D\O\ sets
a limit of 174~\ensuremath{\GeV\!/c^2}\ on the mass of the stable chargino in
Anomaly Mediated Supersymmetry Breaking(AMSB) models where the
$\tilde{{\chi}}^\pm_{1}$ is the NLSP and a long-lived particle.
This is the most stringent limit to date
in this model. The result has also been interpreted in
GMSB with the lightest stau NLSP
and in AMSB for long-lived charged higgsinos~\cite{d0_np}.
\section{Squarks and Gluinos}
$\bullet$ {\it mSugra: jets plus missing transverse energy.}
In the most general mSugra models with low ${\rm tan}\beta$ the first five squarks
are predicted to be heavy and almost degenerate in mass.
The cross-section for $p\bar{p}\rightarrow\tilde{q}\bar{\tilde{q}}$
should then effectively be the sum of the cross-sections over the ten
squark species and would be large at hadron colliders if
the squarks are kinematically accessible.
The signature for
$\tilde{q}\tilde{q},\tilde{q}\tilde{g}$ and
$\tilde{g}\tilde{g}$ production and decay is
two to four jets
(from $\tilde{q}\rightarrow q\tilde{\chi}$ if $M(\tilde{q})<M(\tilde{g})$
and $\tilde{g}\rightarrow q\bar{q}\tilde{\chi}$ if vice versa) and
$\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_{\rm T}\:$}$
coming from the LSPs.
\begin{table}[t]
\caption{Number of events and $\tilde{g},\tilde{q}$ mass limits in
the D\O\ squark-gluino searches in 310~$\pbinv$.} \label{tab::sqgl}
\begin{center} \resizebox{\textwidth}{!} {
\begin{tabular}{|c c c c c c c|}
\hline
{\bf mode}
& $\mathbf{{\it E_T^{j}}}$(\ensuremath{\mathrm{Ge\kern -0.1em V}})
& $\mathrm{\mathbf{\Sigma {\it E_T^j}\,(\ensuremath{\mathrm{Ge\kern -0.1em V}})}}$ & $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_{\rm T}\:$}$ ($\ensuremath{\mathrm{Ge\kern -0.1em V}}$)
& {\bf SM expected} & {\bf Observed} & {\bf mass limit($\ensuremath{\GeV\!/c^2}$)}\\
\hline
di-jet &60,50 & 250 & 175 & 12.8$\pm$5.4 & 12 & $M_{\tilde{q}}>$318 \\
three jets &60,40,25 & 325 & 100 & 6.1$\pm$3.1 & 5 & $M_{\tilde{q}}>$333 \\
four jets &60,40,30,25 & 175 & 75 & 7.1$\pm$0.9 & 10 & $ M_{\tilde{g}}>$233 \\
\hline
\end{tabular} }
\end{center}
\end{table}
\begin{figure}
\vspace*{-0.5cm}
\begin{center}
\epsfig{figure=sqgl_lim.eps,width=0.45\textwidth,height=5cm}
\epsfig{figure=stop_lim.eps,width=0.45\textwidth,height=5cm}
\end{center}
\caption{{\it On the left:} mSugra limits on $M_{\tilde{q}}$ and
$M_{\tilde{g}}$ with ${\rm tan}\beta=3,A_{0}=0,\mu<0$ and $\tilde{q}=\tilde{u},\tilde{d},
\tilde{c},\tilde{s},\tilde{b}$. The red region corresponds to
the D\O\ analyses.
The di-jet analysis limit
lies above the dashed area in the region of
$M(\tilde{g})\simeq 350\,\ensuremath{\GeV\!/c^2}$
and $M(\tilde{q})\simeq 320\,\ensuremath{\GeV\!/c^2}$. The 3-jets limit is on the
diagonal, for $M(\tilde{q})= M(\tilde{g})\simeq 333\,\ensuremath{\GeV\!/c^2}$, whilst the
4-jets limit is on the right of the dashed line (corresponding
to the expected limit) at $M(\tilde{g})\simeq 230~\ensuremath{\GeV\!/c^2}$.
{\it On the right:} CDF 95\% C.L. on $\sigma(p\bar{p}\rightarrow \tilde{t}_1\bar{\tilde{t}}_1 \rightarrow(c\tilde{{\chi}}^{0}_{1})(\bar{c}\tilde{{\chi}}^{0}_{1}))$ as a function of $M(\tilde{t}_1)$.
}\label{fig::sqgl}
\end{figure}
D\O\ has looked for squarks and gluinos in 310~$\pbinv$
of data with three analyses, optimised to
search for $\tilde{q}\bar{\tilde{q}}$, $\tilde{q}\tilde{g}$
or $\tilde{g}\tilde{g}$ assuming
$M(\tilde{q})<M(\tilde{g})$,
$M(\tilde{q})\simeq M(\tilde{g})$ or
$M(\tilde{q})>M(\tilde{g})$
respectively.
Table~\ref{tab::sqgl} shows the selection requirements,
the SM expected and the observed
events for the three cases.
The main backgrounds are
$Z(\rg\nu\bar{\nu})+jets$,
$W(\rg\tau\nu)+jets$ and $t\bar{t}\rightarrow b\bar{b}jj\ell\nu$
for two, three and four jets respectively, while
the main systematic comes from the jet energy scale.
No evidence of SUSY
is observed, which leads to a limit on $M(\tilde{q},\tilde{g})$
(see Table~\ref{tab::sqgl}
and Fig.~\ref{fig::sqgl}, left).
CDF has also recently obtained results in this channel,
which are compatible with D\O.
\\$\bullet$ {\it Stop searches.}
CDF has searched~\cite{cdf_web} for
$\tilde{t}_1\bar{\tilde{t}}_1$ pair production
assuming $Br(\tilde{t}_1\rightarrow c\tilde{{\chi}}^{0}_{1})$=1
and $M_{\tilde{{\chi}}^{0}_{1}(LSP)}>40~\ensuremath{\GeV\!/c^2}$.
The decay $\tilde{t}_1\rightarrow c\tilde{{\chi}}^{0}_{1}$,
a flavour-changing process proceeding via a one-loop diagram,
dominates if the tree-level channels are kinematically closed, i.e. if
$M_{\tilde{t}_1}<M_{b}+M_{\tilde{{\chi}}^\pm_{1}}$,
$M_{\tilde{t}_1}<M_W+M_{b}+M_{\tilde{{\chi}}^{0}_{1}}$,
$M_{\tilde{t}_1}<M_{b}+M_{\tilde{\nu}}$ and
$M_{\tilde{t}_1}<M_{b}+M_{\tilde{\ell}}$.
The signature for this process is a pair
of acollinear heavy flavour jets in the transverse plane,
large $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_{\rm T}\:$}$ and no isolated high \ensuremath{p_{\rm T}}\ leptons.
The existing CDF(D\O) Run~I limit is
$M_{\tilde{t}_1}>119(122)~\ensuremath{\GeV\!/c^2}$ for $M_{\tilde{{\chi}}^{0}_{1}}$
up to 40(45)~$\ensuremath{\GeV\!/c^2}$.
CDF~\cite{cdf_web} has looked for this signature,
with and without the requirement of the jets to be tagged by the Silicon
Vertex Detector.
In 163~$\pbinv$ of data CDF expects 105$\pm$12 SM events
in the ``pre-tag'' sample and 8.3$\pm$2.3 in the ``tag''
sample, and observes 119 and 11 respectively; the
background is dominated by QCD multi-jet
and $W/Z$+jets production.
The main systematic uncertainty
comes from the jet energy scale.
Fig.~\ref{fig::sqgl}(right) shows the 95\% confidence limit on the $\sigma\cdot Br$.
This result does not improve the Run~I limit yet.
\section{R-Parity Violation}\label{sec::rpv}
Three analyses have been performed~\cite{d0_np} by D\O, looking for RPV
in the decays
$\tilde{{\chi}}^{0}_{1}\rightarrow\mu e\nu_e, e e\nu_\mu ,$\\
$\tilde{{\chi}}^{0}_{1}\rightarrow\mu\mu\nu_e, \mu e \nu_\mu $ and
$\tilde{{\chi}}^{0}_{1}\rightarrow e \tau\nu_\tau, \tau\tau \nu_e$.
Here $R_P$ is assumed to be conserved in the production, with the
main process being $p\bar{p}\rightarrow\tilde{{\chi}}^\pm_{1}\tilde{{\chi}}^{0}_{2}$.
The signature for
these analyses is four charged leptons and missing energy coming
from the neutrinos.
The events have been selected with at least three leptons
and missing
energy. No evidence of RPV SUSY is observed,
which translates into a limit on the masses of the $\tilde{{\chi}}^{0}_{1}$ and the
$\tilde{{\chi}}^\pm_{1}$ (see Table~\ref{tab::rpv}).\\
D\O\ has also searched~\cite{d0_np} for the RPV process
$u\bar{d}\rightarrow\tilde{\mu}$,
with
$\tilde{\mu}\rightarrow\mu\tilde{{\chi}}^{0}_{1}$
followed by $\tilde{{\chi}}^{0}_{1}\rightarrow\mu\tilde{\mu}^*$,
where the virtual smuon decays through
$\tilde{\mu}^* \rightarrow u\bar{d}$,
again violating $R_P$.
The same RPV coupling $\lambda^\prime_{211}$ is involved
in both vertices; all the other couplings are
assumed to be zero.
Events are selected requiring two jets with \ensuremath{E_{\rm T}}$>15\,\ensuremath{\mathrm{Ge\kern -0.1em V}}$
and two high \ensuremath{p_{\rm T}}\ isolated muons.
The number of events and the corresponding limit are summarised
in Table~\ref{tab::rpv}.\\
CDF has performed a search~\cite{cdf_web}
for $\tilde{t}_1\bar{\tilde{t}}_1$
production
followed by $R_P$ violating
decay of the stop into $b\tau$
with $Br(\tilde{t}_1 \rightarrow b\tau)$=1.
The signature for this analysis is
either an electron or a muon (from
$\tau\rightarrow\ell\bar{\nu}_\ell\nu_\tau$),
a hadronically decaying tau and at least
two jets.
CDF expects 2.6$\pm$0.6 $e\tau$ and 2.2$\pm$0.5 $\mu\tau$
events
from SM processes, and observes 2 and 3
respectively.
The dominant uncertainty on the mass limit comes from the
PDFs(10\%) and $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_{\rm T}\:$}$ estimate(3\%).
As the data are in good agreement with the
expectation, a limit of 129~$\ensuremath{\GeV\!/c^2}$
is set on the mass of the stop
as shown in Table~\ref{tab::rpv}.
\begin{table}[t]
\caption{The number of SM expected and observed events for the
RPV analyses described in section~\ref{sec::rpv}.} \label{tab::rpv}
\vspace{0.2cm}
\begin{center} \resizebox{0.9\textwidth}{1cm} {
\begin{tabular}{|c c c c c c c|}
\hline
{\bf Experiment} & {\bf RPV process} & {\bf RPV coupling}
& $\mathcal{L}(\pbinv)$
& {\bf SM expected} & {\bf Observed} & {\bf mass limit(\ensuremath{\GeV\!/c^2})}\\
\hline
D\O\ &$\tilde{{\chi}}^{0}_{1}\rightarrow\mu\mu\nu_e, \mu e \nu_\mu$ &$\lambda_{122}$& 160 & 0.6$\pm$1.9 & 2 & $M({\tilde{{\chi}}^\pm_{1}})>$165,$M({\tilde{{\chi}}^{0}_{1}})>$84\\
D\O\ &$\tilde{{\chi}}^{0}_{1}\rightarrow\mu e\nu_e, e e\nu_\mu$ &$\lambda_{121}$& 238 & 0.5$\pm$0.4 & 0 & $M({\tilde{{\chi}}^\pm_{1}})>$181,$M({\tilde{{\chi}}^{0}_{1}})>$95\\
D\O\ &$\tilde{{\chi}}^{0}_{1}\rightarrow e \tau\nu_\tau, \tau\tau \nu_e$&$\lambda_{133}$ & 200 & 1.0$\pm$1.4 & 0 & $M({\tilde{{\chi}}^\pm_{1}})>$118,$M({\tilde{{\chi}}^{0}_{1}})>$66\\
\hline
D\O\ &$\tilde{\mu}\rightarrow\tilde{{\chi}}^{0}_{1}\mu$, $\tilde{\mu} \rightarrow u\bar{d}$
&$\lambda^\prime_{211}$=0.07 & 154 & 1.1$\pm$0.4 & 2 &$M({\tilde{\mu}})>$255,$M({\tilde{{\chi}}^{0}_{1}})\simeq$100\\
CDF &$\tilde{t}_1 \rightarrow b\tau$ &$\lambda^\prime_{333}$ & 200 & 4.8$\pm$0.7 & 5 &$M({\tilde{t}_1})>$129\\
\hline
\end{tabular} }
\end{center}
\vspace*{-0.3cm}
\end{table}
This result can also be interpreted as a limit on the third
generation leptoquark ($LQ_3$), assuming $Br(LQ_3\rightarrow \tau b$)=1.
\section{Conclusions and prospects}
The CDF and D\O\ detectors are running efficiently
and have already collected more than three times the luminosity
of Run~I.
Most of the SUSY limits we have presented
are an improvement
over the Run~I results, and are
probing the region beyond the LEP2 limits.
Many more results are to come
with the present and future data; the exciting era has just begun.
\section*{References}
\makeatletter
\def\section{\@startsection{section}{1}%
\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}
{\bfseries
\centering
}}
\def\@secnumfont{\bfseries}
\makeatother
\setlength{\textheight}{19.5 cm}
\setlength{\textwidth}{12.5 cm}
\usepackage{amsmath,amssymb, amsbsy}
\usepackage{subfigure}
\usepackage{graphpap,latexsym,epsf}
\usepackage{color,psfrag}
\usepackage[dvips]{graphicx}
\usepackage{enumerate}
\usepackage{bbm}
\usepackage{relsize}
\newcommand{\displaystyle}{\displaystyle}
\newcommand{{\mathbb{R}}}{{\mathbb{R}}}
\newcommand{{\mathbb H}}{{\mathbb H}}
\newcommand{{\mathbb S}}{{\mathbb S}}
\newcommand{{\mathbb N}}{{\mathbb N}}
\newcommand{{\mathbb{R}^n}}{{\mathbb{R}^n}}
\newcommand{{\mathcal F}}{{\mathcal F}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\begin{eqnarray*}}{\begin{eqnarray*}}
\newcommand{\end{eqnarray*}}{\end{eqnarray*}}
\newcommand{\hat{f}}{\hat{f}}
\newcommand{\put(260,0){\rule{2mm}{2mm}}\\}{\put(260,0){\rule{2mm}{2mm}}\\}
\newcommand{\mathlarger{\mathlarger{\mathbbm{1}}}}{\mathlarger{\mathlarger{\mathbbm{1}}}}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remark}
\numberwithin{equation}{section}
\setcounter{page}{1}
\begin{document}
\title[Optimal portfolios for different anticipating integrals]{Optimal portfolios for different anticipating integrals under insider information}
\author[Carlos Escudero]{Carlos Escudero*}
\thanks{* This work has been partially supported by the Government of Spain (Ministerio de Ciencia, Innovaci\'on y Universidades) through Project PGC2018-097704-B-I00.}
\address{Carlos Escudero: Departamento de Matem\'aticas Fundamentales, Universidad Nacional de Educaci\'on a Distancia, Spain}
\email{cescudero@mat.uned.es}
\author{Sandra Ranilla-Cortina}
\address{Sandra Ranilla-Cortina: Departamento de An\'alisis Matem\'atico y Matem\'atica Aplicada, Universidad Complutense de Madrid, Spain}
\email{sranilla@ucm.es}
\subjclass[2010] {60H05, 60H07, 60H10, 60H30, 91G80}
\keywords{Insider trading, Hitsuda-Skorokhod integral, Russo-Vallois forward integral, Ayed-Kuo integral, anticipating stochastic calculus, optimal portfolios.}
\begin{abstract}
We consider the non-adapted version of a simple problem of portfolio optimization in a financial market that results from the presence of insider information. We analyze it via anticipating stochastic calculus and compare the results obtained by means of the Russo-Vallois forward, the Ayed-Kuo, and the Hitsuda-Skorokhod integrals. We compute the optimal portfolio for each of these cases with the aim of establishing
a comparison between these integrals in order to clarify their potential use in this type of problem.
Our results give a partial indication that, while the forward integral yields a portfolio that is financially meaningful, the Ayed-Kuo and the Hitsuda-Skorokhod integrals do not provide an appropriate investment strategy for this problem.
\end{abstract}
\maketitle
\section{Introduction}\label{introduction}
Many mathematical models in the applied sciences are
expressed in terms of stochastic differential equations such as
\begin{equation}\label{white}
\frac{dx}{dt}= a(x,t) + b(x,t) \, \xi(t),
\end{equation}
where $\xi(t)$ is a ``white noise''. Of course, this equation cannot be
understood in the sense of the classical differential calculus of Leibniz and Newton.
Instead, the use of stochastic calculus provides a precise meaning to these models. However, the choice of a particular notion of stochastic integration has
generated a debate that has expanded along decades~\cite{mmcc,kampen}.
This debate, which has been particularly intense in the physics literature, has been focused mainly on the choice between the It\^o integral~\cite{ito1,ito2}, which leads to the notation
\begin{equation*}\label{ito}
dx = a(x,t) \, dt + b(x,t) \, dB(t),
\end{equation*}
and the Stratonovich integral~\cite{stratonovich} usually denoted as
\begin{equation*}\label{str}
dx = a(x,t) \, dt + b(x,t) \circ dB(t),
\end{equation*}
where $B(t)$ is a Brownian motion. Also, quite often in the physics literature,
one finds different meanings associated to equation~\eqref{white}~\cite{mmcc}.
The use of different notions of stochastic integration leads, in general, to different dynamics and, when they exist, to different stationary probability distributions~\cite{hl}; but it may also lead to different numbers of solutions~\cite{ce,escudero2}.
Since the preeminent place for this debate on the noise interpretation has been the physics literature, the focus has been put on diffusion processes, perhaps due to the influence of the seminal works by Einstein~\cite{einstein} and Langevin~\cite{langevin}. Herein we move out of even the Markovian setting and, following the steps of~\cite{bastonsescudero,escudero}, we concentrate on stochastic differential equations with non-adapted terms.
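To make the dependence on the integration convention concrete, the following Monte Carlo sketch (with illustrative parameters, not part of the model below) integrates $dx=\sigma x\,dB$ with an Euler-Maruyama scheme, which converges to the It\^o solution, and with a Heun predictor-corrector scheme, which converges to the Stratonovich solution; the exact means are $x_0$ and $x_0 e^{\sigma^2 T/2}$ respectively:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, T, x0 = 0.5, 1.0, 1.0
n_steps, n_paths = 200, 50_000
h = T / n_steps

x_ito = np.full(n_paths, x0)
x_str = np.full(n_paths, x0)
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(h), n_paths)
    x_ito = x_ito + sigma * x_ito * dB                 # Euler-Maruyama -> Ito
    pred = x_str + sigma * x_str * dB                  # predictor step
    x_str = x_str + 0.5 * sigma * (x_str + pred) * dB  # Heun -> Stratonovich

mean_ito = x_ito.mean()   # exact Ito mean: x0
mean_str = x_str.mean()   # exact Stratonovich mean: x0*exp(sigma^2*T/2)
```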
Non-adaptedness arises in financial markets concomitantly to the presence of insider traders.
Let us exemplify this with a model that is composed by two assets, the first of which is free of risk such as a bank account
\begin{eqnarray}\nonumber
d S_0 &=& \rho \, S_0 \, d t \\ \nonumber
S_0(0) &=& M_0,
\end{eqnarray}
and the second of which is risky, such as a stock
\begin{eqnarray}\nonumber
d S_1 &=& \mu \, S_1 \, d t + \sigma \, S_1 \, d B(t) \\ \nonumber
S_1(0) &=& M_1.
\end{eqnarray}
This model is characterized by the set of positive parameters
$\{M_0, M_1, \rho, \mu, \sigma\}$, each one of which has an economic significance:
\begin{itemize}
\item $M_0$ is the initial wealth to be invested in the bank account,
\item $M_1$ is the initial wealth to be invested in the stock,
\item $\rho$ is the interest rate of the bank account,
\item $\mu$ is the appreciation rate of the stock,
\item $\sigma$ is the volatility of the stock.
\end{itemize}
Therefore the total initial wealth is $M = M_0+M_1$. Moreover, we assume the inequality $\mu > \rho$, which reflects the higher
expected return of the risky investment. We allow only buy-and-hold strategies in which the trader divides the fixed initial amount $M$ between the two assets; that is, long-only strategies are allowed.
The total wealth at time $t$ is
$$
S^{\text{(I)}}(t):=S_0(t)+S_1(t),
$$
that is, the sum of the returns from the bank account and the stock.
Assuming a fixed time horizon $T$ and
using It\^o calculus we can compute the expected final total wealth, which is given by
\begin{eqnarray}\nonumber
\mathbb{E}\left[S^{\text{(I)}}(T)\right]
&=& M_0 \, e^{\rho T} + M_1 \, e^{\mu T}
\\ \nonumber
&=& M \left[ \frac{M_0}{M} \, e^{\rho T} + \frac{M_1}{M} \, e^{\mu T} \right] \\ \nonumber
&=& M \left[ \frac{M_0}{M} \, e^{\rho T} + \frac{M-M_0}{M} \, e^{\mu T} \right].
\end{eqnarray}
The last expression, written in terms of a convex linear combination, shows that this quantity can be maximized by choosing the optimal pair
\begin{eqnarray} \nonumber
M_0 &=& 0 \\ \nonumber
M_1 &=& M-M_0=M.
\end{eqnarray}
The corresponding maximal expected wealth therefore reads
\begin{equation*}\label{average}
\mathbb{E}\left[S^{\text{(I)}}(T)\right] = M e^{\mu T}.
\end{equation*}
This optimization problem is simple enough to allow for
an analytical approach to the extension we will consider herein. In particular, we will permit random and non-adapted initial conditions that will model the knowledge of an insider trader. Under these conditions, we will derive the optimal portfolio for different notions of stochastic integration. The precise problem is presented in the next section.
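The closed-form expectation above can be cross-checked by sampling the terminal stock price directly from its lognormal law; the parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
M0, M1, rho, mu, sigma, T = 40.0, 60.0, 0.03, 0.08, 0.2, 1.0

# S1(T) = M1 * exp((mu - sigma^2/2) T + sigma B(T)), with B(T) ~ N(0, T)
B_T = rng.normal(0.0, np.sqrt(T), 1_000_000)
S1_T = M1 * np.exp((mu - 0.5 * sigma**2) * T + sigma * B_T)
S0_T = M0 * np.exp(rho * T)             # deterministic bank account

mc_wealth = S0_T + S1_T.mean()          # Monte Carlo estimate of E[S(T)]
exact_wealth = M0 * np.exp(rho * T) + M1 * np.exp(mu * T)
```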
\section{Insider trading}\label{insider}
We consider a financial market in which
an insider trader, who possesses information on the future price of a stock, is present.
Precisely, we assume that the trader knows the value $S_1(T)$ at time $t=0$, which we implement mathematically as knowledge of the value $B(T)$.
Moreover we only consider buy-and-hold strategies for which shorting is not allowed.
Mathematically this translates into her control of the anticipating initial condition $f(B(T))$, where $f \in L^{\infty}(\mathbb{R})$ and $0 \leq f \leq 1$, for the stock. Therefore, in our two-asset market, we find the following two equations that model the insider wealth:
\begin{subequations}
\begin{eqnarray}\label{rode1}
d S_0 &=& \rho \, S_0 \, d t \\ \label{rode2}
S_0(0) &=& M \left(1-f(B(T))\right),
\end{eqnarray}
\end{subequations}
and
\begin{subequations}
\begin{eqnarray}\label{s1}
d S_1 &=& \mu \, S_1 \, d t + \sigma \, S_1 \, d B(t) \\ \label{s10}
S_1(0) &=& M f(B(T)).
\end{eqnarray}
\end{subequations}
This system of equations, as written in~\eqref{s1} and \eqref{s10}, is ill-posed. While problem~\eqref{rode1} and \eqref{rode2} could still be considered a random differential equation, the non-adaptedness of initial condition~\eqref{s10}
implies that equation~\eqref{s1} is ill-posed as an It\^o stochastic differential equation.
There is a way out of this pitfall that consists in changing the notion of stochastic integration from the It\^o integral to one of its generalizations that admit non-adapted integrands.
The following sections analyze three different possibilities: the Russo-Vallois forward, the Ayed-Kuo and the Hitsuda-Skorokhod stochastic integrals. We compute the optimal investment strategy provided by each of these integrals, and subsequently we compare them. We will show that, while any of these anticipating stochastic integrals guarantees the well-posedness of the problem at hand, the financial consequences of each choice might be very different.
The aim of this work is to clarify the suitability of the use of each of these stochastic integrals in this particular portfolio optimization.
We anticipate that the forward integral is the only one among these that provides an optimal investment strategy that takes advantage of the anticipating condition in the financial sense.
Therefore our analysis further supports the use of this integral in a financial context as employed,
for instance, in~\cite{bo,noep,do1,do2,do3,leon,nualart}. It is also worth mentioning that the problem of insider trading has been studied by means of different approaches, notably enlargement of filtrations~\cite{jyc,pk},
but semimartingale treatments are not as general as the use of anticipating stochastic calculus~\cite{noep}.
Let us conclude this section by mentioning that this problem can be approached by means of much simpler
mathematical techniques, as is done in section~\ref{frc}. However, the goal of this work is not to
solve this simplified financial question which, as said, is relatively simple to address. Our objective
is to give a partial indication of the use of three different anticipating stochastic integrals in
finance. For this purpose we need a simple enough problem at least for two reasons. The first is that
the availability of explicit solutions to stochastic differential equations interpreted in either the
Hitsuda-Skorokhod or Ayed-Kuo senses is limited~\cite{noep,hksz}. The second is that we want to maximize
the clarity of our derivations. Once the role of the different anticipating integrals in the context of
finance is clear, one is able to approach much more general questions as done for instance
in~\cite{bo,noep,do1,do2,do3,leon,nualart} with the forward integral. Herein we aim to highlight
some of the contrasts between the uses of these integrals in a financial context, so the nature
of this work is fundamentally methodological.
\section{The Russo-Vallois integral}\label{rvi}
The Russo-Vallois forward integral was introduced by F. Russo and P. Vallois in 1993 in~\cite{russovallois}. This stochastic integral generalizes the It\^o one, in the sense that it allows one to integrate anticipating processes, but it produces the same results as the latter when the integrand is adapted~\cite{noep}.
\begin{definition}
A stochastic process $\{\varphi(t), t \in[a,b]\}$ is \textit{forward integrable} (in the weak sense) over $[a,b]$ with respect to Brownian motion $\{B(t), t\in[a,b]\}$ if there exists a stochastic process $\{I(t), t \in [a,b]\}$ such that
\begin{eqnarray}\nonumber
\sup_{t\in[a,b]} \left | \int_a^t \varphi(s) \frac{B(s+\varepsilon)-B(s)}{\varepsilon} ds - I(t) \right| \to 0, \ \ \ \ \mbox{as} \ \varepsilon \to 0^+,
\end{eqnarray}
in probability. In this case, $I(t)$ is the \textit{forward integral} of $\varphi(t)$ with respect to $B(t)$ on $[a,b]$ and we denote
\begin{eqnarray}\nonumber
I(t) &:=& \int_a^t \varphi(s) \, d^- B(s), \ \ \ \ t \in [a,b].
\end{eqnarray}
\end{definition}
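For intuition, the forward integral can be approximated by left-endpoint Riemann sums even when the integrand anticipates the future of $B$. The sketch below takes $\varphi(s)\equiv B(T)$, for which the left-endpoint sum telescopes exactly to $B(T)^2$ on every partition; for comparison, the Hitsuda-Skorokhod integral of the same integrand is known to equal $B(T)^2-T$ (stated here only as a contrast, not computed):

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 1.0, 1000
dB = rng.normal(0.0, np.sqrt(T / n), n)
B = np.concatenate(([0.0], np.cumsum(dB)))    # Brownian path on a grid, B[0] = 0
B_T = B[-1]

# Left-endpoint sum for int_0^T B(T) d^-B(s); the integrand is the
# anticipating constant B(T), so the sum telescopes to B(T)*(B(T)-B(0))
forward_sum = np.sum(B_T * (B[1:] - B[:-1]))
```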
When the choice is the Russo-Vallois integral, we face the initial value problem
\begin{subequations}
\begin{eqnarray}\label{rv1}
d^- S_1 &=& \mu \, S_1 \, dt + \sigma \, S_1 \, d^- B(t) \\ \label{rv2}
S_1(0) &=& M f(B(T)),
\end{eqnarray}
\end{subequations}
where $d^-$ denotes the Russo-Vallois forward stochastic differential, and $f$ was defined in the previous section. It follows directly from the results in~\cite{noep} that this problem possesses a unique solution.
In the next statement, we establish the optimal investment strategy for the insider trader under Russo-Vallois forward integration.
\begin{theorem}\label{mainthrv}
Let $f$ be a function of $B(T)$ such that $f \in L^{\infty}(\mathbb{R})$ and $0 \leq f \leq 1$. The optimal investment strategy under Russo-Vallois integration is
\begin{eqnarray}\nonumber
f(B(T)) &=& \mathlarger{\mathlarger{\mathbbm{1}}}_{ \big \lbrace B(T) > \frac{T}{\sigma} \left(\rho - \mu + \frac{1}{2}\sigma^2\right) \big \rbrace},
\end{eqnarray}
for model~\eqref{rode1}-\eqref{rode2} and~\eqref{rv1}-\eqref{rv2}.
\end{theorem}
\begin{proof}
The Russo-Vallois integral preserves the rules of classical It\^o calculus~\cite{noep}, so using this stochastic calculus it is possible to solve problem~\eqref{rv1}-\eqref{rv2} explicitly to find
\begin{subequations}
\begin{eqnarray}\nonumber
S_0(t) &=& M \left(1-f(B(T))\right) e^{\rho t} \\ \nonumber
S_1(t) &=& M f(B(T))e^{\left(\mu - \sigma^2 /2 \right)t + \sigma B(t)}.
\end{eqnarray}
\end{subequations}
Our goal is to find a strategy $f$ such that $\mathbb{E}\left[M^{\text{(RV)}}(T)\right]$,
with $M^{\text{(RV)}}(T):=S_1(T)+S_0(T)$, is maximized. We compute
\begin{eqnarray}\nonumber
\mathbb{E}\left[M^{\text{(RV)}}(T)\right] &=& \mathbb{E}\left[S_0(T)\right] + \mathbb{E}\left[S_1(T)\right] \\ \nonumber
&=& M \left( 1- \mathbb{E}\left[f(B(T))\right] \right) e^{\rho T} + M \mathbb{E}\left[f(B(T))\right]e^{\sigma B(T)} e^{\left(\mu - \sigma^2/2\right)T} \\ \nonumber
&=& M e^{\rho T} \left( 1 - \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi T}} f(x) e^{-\frac{x^2}{2T}}dx \right) \\ \nonumber
&& + Me^{\left(\mu - \sigma^2 /2 \right)T} \left( \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi T}} f(x) e^{-\frac{x^2}{2T}} e^{\sigma x} dx \right),
\end{eqnarray}
where we have used that $B(T) \sim \mathcal{N}(0,T)$. Now consider
\begin{eqnarray}\nonumber
\Bar{M}(f(x)) &:=& \frac{\mathbb{E}\left[M^{\text{(RV)}}(T)\right]}{M}.
\end{eqnarray}
Hence, we get
\begin{eqnarray}\nonumber
\Bar{M}(f(x)) &=& e^{\rho T} \left( 1 - \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi T}} f(x) e^{-\frac{x^2}{2T}}dx \right) \\ \nonumber
&& + e^{\left(\mu - \sigma^2 /2 \right)T} \left( \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi T}} f(x) e^{-\frac{x^2}{2T}} e^{\sigma x} dx \right) \\ \nonumber
&=& e^{\rho T} + \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi T}} f(x) e^{-\frac{x^2}{2T}} \left( -e^{\rho T} + e^{\left(\mu -\sigma^2/2\right)T} e^{\sigma x} \right) dx.
\end{eqnarray}
The sign of this integrand is determined by the factor $-e^{\rho T} + e^{\left(\mu -\sigma^2/2\right)T} e^{\sigma x}$, which is strictly increasing in $x$ and possesses a unique root $x_c$ at
\begin{eqnarray}\nonumber
x_c &=& \frac{T}{\sigma} \left( \rho - \mu + \frac{\sigma^2}{2} \right ).
\end{eqnarray}
The inequality $x > x_c$ corresponds to
\begin{eqnarray}\nonumber
B(T) &>& \frac{T}{\sigma} \left( \rho - \mu + \frac{\sigma^2}{2} \right ),
\end{eqnarray}
and to the positivity of the integrand. Clearly, on this interval we should take $f$ as large as possible in order to maximize $\mathbb{E}\left[M^{\text{(RV)}}(T)\right]$. On the other hand, if $x < x_c$ then the integrand is negative and we should take $f$ as small as possible for the same reason. This, together with the assumption $0 \leq f \leq 1$, yields the optimal investment strategy
\begin{eqnarray}\nonumber
f(B(T)) &=& \mathlarger{\mathlarger{\mathbbm{1}}}_{\big\lbrace B(T) > \frac{T}{\sigma} \left( \rho - \mu + \frac{\sigma^2}{2} \right ) \big\rbrace}.
\end{eqnarray}
\end{proof}
\begin{remark}
The function $f$ from Theorem \ref{mainthrv} implies that the trader should invest the whole amount $M$ either in the bank account or in the stock according to the value of $B(T)$; specifically, in the asset whose value is larger at the maturity time $t=T$ (something that is known to the insider). Hence, this investment strategy not only maximizes the expected value $\mathbb{E}\left[M^{\text{(RV)}}(T)\right]$, but also takes advantage of the anticipating condition in an intuitive way. Thus, the Russo-Vallois integral works as one would expect from the financial point of view, at least for this formulation of the insider trading problem.
\end{remark}
\begin{remark}
The expected wealth of the Russo-Vallois insider under this optimal strategy was computed in~\cite{escudero} and reads
\begin{eqnarray}\nonumber
\mathbb{E}\left[M^{\text{(RV)}}(T)\right] &=& M \, \Phi \left[ \frac{(\sigma^2 + 2 \rho - 2 \mu)\sqrt{T}}{2 \, \sigma} \right] e^{\rho T}
\\ \nonumber & & + \,
M \, \Phi \left[ \frac{(\sigma^2 - 2 \rho + 2 \mu)\sqrt{T}}{2 \, \sigma} \right] e^{\mu T},
\end{eqnarray}
where
$$
\Phi\,(\cdot)= \frac{1}{\sqrt{2 \pi}} \int_{- \infty}^{\, \cdot} \! e^{-s^2/2} \, ds
$$
is the cumulative distribution function of the standard normal distribution.
In the same reference it was proven that
$$
\mathbb{E}\left[S^{\text{(I)}}(T)\right] < \mathbb{E}\left[M^{\text{(RV)}}(T)\right],
$$
a fact that matches well with what one expects from the financial viewpoint.
\end{remark}
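As a sanity check (ours, not part of~\cite{escudero}), the closed form above can be compared with a direct numerical quadrature of $\mathbb{E}\left[M^{\text{(RV)}}(T)\right]$ under the optimal indicator strategy; the parameter values below are arbitrary illustrative choices satisfying $\mu > \rho$:

```python
import math

# Cross-check of the closed-form expected wealth under the optimal
# Russo-Vallois strategy against direct Gaussian quadrature.
M, T, mu, rho, sigma = 1.0, 1.0, 0.08, 0.03, 0.2

def Phi(z):                      # standard normal cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

closed = M * (Phi((sigma**2 + 2*rho - 2*mu) * math.sqrt(T) / (2*sigma)) * math.exp(rho*T)
              + Phi((sigma**2 - 2*rho + 2*mu) * math.sqrt(T) / (2*sigma)) * math.exp(mu*T))

# Direct quadrature of E[M^(RV)(T)] with f = indicator of {B(T) > x_c}
xc = (T / sigma) * (rho - mu + 0.5 * sigma**2)
n, L = 200_000, 10.0             # midpoint grid and truncation (illustrative)
dx = 2 * L * math.sqrt(T) / n
quad = 0.0
for i in range(n):
    x = -L * math.sqrt(T) + (i + 0.5) * dx
    pdf = math.exp(-x*x / (2*T)) / math.sqrt(2*math.pi*T)
    payoff = (math.exp((mu - 0.5*sigma**2)*T + sigma*x) if x > xc
              else math.exp(rho*T))
    quad += M * payoff * pdf * dx

print(closed, quad)              # agree to several decimal places
```

With $M=1$, both numbers exceed $e^{\mu T}$, consistent with the quoted inequality $\mathbb{E}\left[S^{\text{(I)}}(T)\right] < \mathbb{E}\left[M^{\text{(RV)}}(T)\right]$.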
\section{The Ayed-Kuo integral}\label{aki}
The Ayed-Kuo integral was introduced in~\cite{akuo1,akuo2}. Like the Russo-Vallois one, it generalizes the It\^o integral to anticipating integrands.
Let us now consider a Brownian motion $\{B(t), t \geq 0\}$ and a filtration $\{{\mathcal F}_t, t \geq 0\}$ such that:
\begin{itemize}
\item[(i)] For all $t \geq 0$, $B(t)$ is ${\mathcal F}_t$-measurable,
\item[(ii)] for all $0 \leq s < t$, $B(t)-B(s)$ is independent of ${\mathcal F}_s$.
\end{itemize}
We also recall the notion of instantly independent stochastic process introduced in~\cite{akuo1}.
\begin{definition}\label{ayedkuo}
A stochastic process $\{\varphi(t)$, $t \in [a,b]\}$ is said to be an \textit{instantly independent stochastic process} with respect to the filtration $\{{\mathcal F}_t, t \in [a,b]\}$ if and only if $\varphi(t)$ is independent of ${\mathcal F}_t$ for each $t \in [a,b]$.
\end{definition}
For the Ayed-Kuo integral, the integrand is assumed to be the product of a stochastic process adapted to the filtration $\{{\mathcal F}_t\}$ and an instantly independent stochastic process. Next, we recall its definition as given in~\cite{akuo1}.
\begin{definition}
Let $\{f(t), t\in [a,b]\}$ be a $\{{\mathcal F}_t\}$-adapted stochastic process and let $\{\varphi(t), t\in [a,b]\}$ be an instantly independent stochastic process with respect to $\{{\mathcal F}_t, t \in [a,b]\}$. The \textit{Ayed-Kuo stochastic integral} of $f(t)\varphi(t)$ is defined by
\begin{eqnarray}\nonumber
\int_a^b f(t) \varphi(t) \, d^*B(t) &: =& \lim_{\left| \Pi_n \right| \to 0} \sum_{i=1}^n f(t_{i-1}) \varphi(t_i) \left(B(t_i)-B(t_{i-1}) \right),
\end{eqnarray}
provided that the limit in probability exists, where $\Pi_n = \lbrace a = t_0, t_1, t_2, \ldots, t_n = b \rbrace$ is a partition of the interval $[a,b]$ and $\left| \Pi_n \right| = \max _{1 \leq i \leq n} \left(t_i - t_{i-1}\right)$.
\end{definition}
This definition was extended in~\cite{hksz}, and this extension is the one we are going to use herein.
It reads:
\begin{definition}\label{extak}
Consider a sequence $\lbrace \Phi_n(t) \rbrace _{n=1}^{\infty}$ of stochastic processes of the form
\begin{eqnarray}\label{defkuo}
\Phi_n(t) &: =& \sum_{i=1}^k f_{i}(t) \varphi_{i}(t), \ \ \ \ a \leq t \leq b,
\end{eqnarray}
where $f_i(t)$ are $\{{\mathcal F}_t\}$-adapted continuous stochastic processes, $\varphi_i(t)$ are continuous stochastic processes instantly independent of $\{{\mathcal F}_t\}$, and $k \in \mathbb{N}$; all of these may depend on $n$.
Suppose $\Phi(t)$ is a stochastic process satisfying the conditions:
\begin{itemize}
\item[(a)] $\int_a^b \left| \Phi(t) - \Phi_n(t) \right|^2 dt \ \to \ 0 \ \mbox{almost surely,}$
\item[(b)] $\int_a^b \Phi_n(t) d^{*}B(t) \ \ \mbox{converges in probability,}$
\end{itemize}
as $n \to \infty$. Then the \textit{stochastic integral} of $\Phi(t)$ is defined by
\begin{eqnarray}\nonumber
\int_a^b \Phi(t) d^{*}B(t) &: =& \lim_{n \to \infty} \int_a^b \Phi_n(t) d^{*}B(t), \ \ \ \ \text{in probability}.
\end{eqnarray}
\end{definition}
\begin{remark}
The integral
$$
\int_a^b \Phi_n(t) d^{*}B(t)
$$
is defined by linearity from Definition~\ref{ayedkuo}; its consistency was proven in~\cite{hksz}.
\end{remark}
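To make the evaluation rule concrete, consider the standard example $\int_0^T B(T)\, d^*B(t)$. Decomposing $B(T) = B(t) + \left(B(T) - B(t)\right)$ into an adapted part and an instantly independent part, the Ayed-Kuo sum is $\sum_{i} \left[B(t_{i-1}) + B(T) - B(t_i)\right]\left(B(t_i)-B(t_{i-1})\right)$, which telescopes path-wise to $B(T)^2 - \sum_i \left(B(t_i)-B(t_{i-1})\right)^2$ and hence converges to $B(T)^2 - T$. A numerical sketch of ours, with an arbitrary grid:

```python
import numpy as np

# Ayed-Kuo evaluation rule for the standard example
#   int_0^T B(T) d*B(t) = B(T)^2 - T.
# The adapted factor B(t) is evaluated at the LEFT endpoint and the
# instantly independent factor B(T) - B(t) at the RIGHT endpoint.
rng = np.random.default_rng(1)
T, n = 1.0, 100_000
dB = rng.normal(0.0, np.sqrt(T / n), size=n)
B = np.concatenate(([0.0], np.cumsum(dB)))
BT = B[-1]
left = B[:-1]                       # B(t_{i-1}), adapted part
right = BT - B[1:]                  # B(T) - B(t_i), instantly independent part
ak_sum = np.sum((left + right) * dB)
# Path-wise the sum telescopes exactly:
#   ak_sum = B(T)^2 - sum(dB_i^2)  ->  B(T)^2 - T
print(ak_sum, BT**2 - T)
```

Note the contrast with the Russo-Vallois forward integral of the same integrand, which equals $B(T)^2$: the Ayed-Kuo rule subtracts the quadratic variation.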
For the Ayed-Kuo integral we denote the initial value problem as
\begin{subequations}
\begin{eqnarray} \label{ak1}
d^* S_1 &=& \mu \, S_1 \, dt + \sigma \, S_1 \, d^* B(t) \\ \label{ak2}
S_1(0) &=& M f(B(T)),
\end{eqnarray}
\end{subequations}
where $d^*$ denotes the Ayed-Kuo stochastic differential, and $f$ is as before.
This anticipating stochastic differential equation, as in the previous case, is known to have a unique and explicitly computable solution, at least when $f \in \mathcal{C}(\mathbb{R})$~\cite[Theorem 4.8]{hksz}.
Now we need to extend this result to $f \in L^\infty(\mathbb{R})$, and to this end we introduce the following
auxiliary result.
\begin{lemma}\label{lemmappr}
Let $\Upsilon(t)$ be a stochastic process that satisfies the conditions:
\begin{itemize}
\item[(a)] $\int_a^b \left| \Upsilon(t) - \Upsilon_n(t) \right|^2 dt \ \to \ 0 \ \mbox{almost surely,}$
\item[(b)] $\int_a^b \Upsilon_n(t) d^{*}B(t) \ \ \mbox{converges in probability,}$
\end{itemize}
as $n \to \infty$, where $\lbrace \Upsilon_n(t) \rbrace _{n=1}^{\infty}$ is a sequence of
Ayed-Kuo integrable (in the sense of Definition~\ref{extak}) stochastic processes.
Then $\Upsilon(t)$ is an Ayed-Kuo integrable process and
\begin{eqnarray}\nonumber
\int_a^b \Upsilon(t) d^{*}B(t) &=& \lim_{n \to \infty} \int_a^b \Upsilon_n(t) d^{*}B(t), \ \ \ \ \text{in probability}.
\end{eqnarray}
\end{lemma}
\begin{proof}
Let $\lbrace \Phi_{m}^{(n)}(t) \rbrace _{m=1}^{\infty}$ be an approximating sequence
of $\Upsilon_n(t)$ as in Definition~\ref{extak}.
Consider for $\varepsilon>0$ the inclusion of events
\begin{eqnarray}\nonumber
&& \left\{ \left| \int_a^b \Upsilon(t) d^{*}B(t) - \int_a^b \Phi_{m_n'}^{(n)}(t) d^{*}B(t) \right| > \varepsilon \right\} \\ \nonumber &\subset&
\left\{ \left| \int_a^b \Upsilon_n(t) d^{*}B(t) - \int_a^b \Phi_{m_n'}^{(n)}(t) d^{*}B(t) \right| > \frac{\varepsilon}{2} \right\} \\ \nonumber
&& \cup \left\{ \left| \int_a^b \Upsilon(t) d^{*}B(t) - \int_a^b \Upsilon_n(t) d^{*}B(t) \right| > \frac{\varepsilon}{2} \right\},
\end{eqnarray}
where we have chosen, for each $n \in \mathbb{N}$, an $m_n' \in \mathbb{N}$ large enough such that
$$
\mathbb{P} \left\{ \left| \int_a^b \Upsilon_n(t) d^{*}B(t) - \int_a^b \Phi_{m_n'}^{(n)}(t) d^{*}B(t) \right| > \frac{\varepsilon}{2} \right\} \le \frac{1}{n}
$$
and the same holds for all $m \ge m_n'$;
note that this is possible as a consequence of the Ayed-Kuo integrability of the constituents of the sequence $\lbrace \Upsilon_n(t) \rbrace _{n=1}^{\infty}$. This in turn implies
\begin{eqnarray}\nonumber
&& \mathbb{P} \left\{ \left| \int_a^b \Upsilon(t) d^{*}B(t) - \int_a^b \Phi_{m_n'}^{(n)}(t) d^{*}B(t) \right| > \varepsilon \right\} \\ \nonumber &\le&
\mathbb{P} \left\{ \left| \int_a^b \Upsilon(t) d^{*}B(t) - \int_a^b \Upsilon_n(t) d^{*}B(t) \right| > \frac{\varepsilon}{2} \right\} \\ \nonumber
&& + \mathbb{P} \left\{ \left| \int_a^b \Upsilon_n(t) d^{*}B(t) - \int_a^b \Phi_{m_n'}^{(n)}(t) d^{*}B(t) \right| > \frac{\varepsilon}{2} \right\} \\ \nonumber &\le&
\mathbb{P} \left\{ \left| \int_a^b \Upsilon(t) d^{*}B(t) - \int_a^b \Upsilon_n(t) d^{*}B(t) \right| > \frac{\varepsilon}{2} \right\} + \frac{1}{n},
\end{eqnarray}
and consequently
\begin{eqnarray}\nonumber
&& \lim_{n \to \infty} \mathbb{P} \left\{ \left| \int_a^b \Upsilon(t) d^{*}B(t) - \int_a^b \Phi_{m_n'}^{(n)}(t) d^{*}B(t) \right| > \varepsilon \right\} \\ \nonumber &\le& \lim_{n \to \infty} \mathbb{P} \left\{ \left| \int_a^b \Upsilon(t) d^{*}B(t) - \int_a^b \Upsilon_{n}(t) d^{*}B(t) \right| > \frac{\varepsilon}{2} \right\}
\\ \nonumber &=& 0,
\end{eqnarray}
for each $\varepsilon>0$
by assumption (b), which in fact implies condition (b) in Definition~\ref{extak}, and where we have defined
\begin{eqnarray}\nonumber
\int_a^b \Upsilon(t) d^{*}B(t) &:=& \lim_{n \to \infty} \int_a^b \Upsilon_n(t) d^{*}B(t), \ \ \ \ \text{in probability},
\end{eqnarray}
which is a well defined random variable again by assumption (b).
Since almost sure convergence implies convergence in probability,
for each $n \in \mathbb{N}$ we may choose an $m_n \in \mathbb{N}$ large enough such that
\begin{eqnarray}\nonumber
\mathbb{P} \left\{ \int_a^b \left| \Upsilon_n(t) - \Phi_{m_n}^{(n)}(t) \right|^2 dt
> \frac{\delta}{4} \right\} \le \frac{1}{n}, \qquad \delta>0,
\end{eqnarray}
and the same holds for all $m \ge m_n$.
This is possible on account of $\lbrace \Upsilon_n(t) \rbrace _{n=1}^{\infty}$ being a family of Ayed-Kuo integrable processes,
and where $\lbrace \Phi_{m}^{(n)}(t) \rbrace _{m=1}^{\infty}$ is the same approximating sequence
as before.
Arguing similarly as before we find
\begin{eqnarray}\nonumber
&& \left\{ \int_a^b \left| \Upsilon(t) - \Phi_{m_n}^{(n)}(t) \right|^2 dt
> \delta \right\} \\ \nonumber &\subset&
\left\{ \int_a^b \left| \Upsilon(t) - \Upsilon_n(t) \right|^2 dt
> \frac{\delta}{4} \right\}
\cup \left\{\int_a^b \left| \Upsilon_n(t) - \Phi_{m_n}^{(n)}(t) \right|^2 dt > \frac{\delta}{4} \right\},
\end{eqnarray}
and therefore
\begin{eqnarray}\nonumber
&& \mathbb{P} \left\{ \int_a^b \left| \Upsilon(t) - \Phi_{m_n}^{(n)}(t) \right|^2 dt > \delta \right\} \\ \nonumber &\le&
\mathbb{P} \left\{ \int_a^b \left| \Upsilon(t) - \Upsilon_n(t) \right|^2 dt > \frac{\delta}{4} \right\}
+ \mathbb{P} \left\{ \int_a^b \left| \Upsilon_n(t) - \Phi_{m_n}^{(n)}(t) \right|^2 dt > \frac{\delta}{4} \right\} \\ \nonumber &\le&
\mathbb{P} \left\{ \int_a^b \left| \Upsilon(t) - \Upsilon_n(t) \right|^2 dt > \frac{\delta}{4} \right\} + \frac{1}{n}.
\end{eqnarray}
In consequence
\begin{eqnarray}\nonumber
&& \lim_{n \to \infty} \mathbb{P} \left\{ \int_a^b \left| \Upsilon(t) - \Phi_{m_n}^{(n)}(t) \right|^2 dt > \delta \right\} \\ \nonumber &\le& \lim_{n \to \infty} \mathbb{P} \left\{ \int_a^b \left| \Upsilon(t) - \Upsilon_n(t) \right|^2 dt > \frac{\delta}{4} \right\}
\\ \nonumber &=& 0,
\end{eqnarray}
for each $\delta>0$ by assumption (a). Now, by taking a subsequence
$\left\lbrace \Phi_{m_{n_j}}^{(n_j)}(t) \right\rbrace _{j=1}^{\infty} \subset
\left\lbrace \Phi_{m_{n}}^{(n)}(t) \right\rbrace _{n=1}^{\infty}$
if necessary, we conclude
$$
\lim_{j \to \infty}
\int_a^b \left| \Upsilon(t) - \Phi_{m_{n_j}}^{(n_j)}(t) \right|^2 dt =0, \qquad \text{almost surely};
$$
therefore condition (a) in Definition~\ref{extak} follows.
Altogether these results imply
\begin{eqnarray}\nonumber
\int_a^b \Upsilon(t) d^{*}B(t) &=& \lim_{j \to \infty}
\int_a^b \Phi_{m_{n_j} \vee m_{n_j}'}^{(n_j)}(t) d^{*}B(t), \ \ \ \ \text{in probability},
\end{eqnarray}
and
$$
\int_a^b \left| \Upsilon(t) - \Phi_{m_{n_j} \vee m_{n_j}'}^{(n_j)}(t) \right|^2 dt \ \to \ 0
\ \ \ \ \mbox{almost surely}
$$
as $j \to \infty$,
so the statement follows.
\end{proof}
Now we are ready to prove existence and uniqueness of the solution to equation~\eqref{ak1}, along with an explicit representation formula.
\begin{theorem}\label{exunak}
The unique solution of~\eqref{ak1} and~\eqref{ak2} is
$$
S_1(t) = M f(B(T) - \sigma t) e^{\left(\mu - \sigma^2 /2 \right)t + \sigma B(t)}.
$$
\end{theorem}
\begin{proof}
Using the It\^o formula for the Ayed-Kuo integral~\cite{hksz} it is possible to solve problem~\eqref{ak1}-\eqref{ak2} to find
\begin{equation}\label{solfic}
S_1(t) = M \tilde{f}(B(T) - \sigma t) e^{\left(\mu - \sigma^2 /2 \right)t + \sigma B(t)},
\end{equation}
for $\tilde{f} \in \mathcal{C}(\mathbb{R})$; in particular, it is an Ayed-Kuo integrable process.
If we fix a realization of Brownian motion then, by \textit{Lusin's theorem}~\cite{lusin}, there exists a sequence of continuous functions $\{f_n\}_{n \in \mathbb{N}}$ such that
\begin{eqnarray}\label{lusin}
\int_{0}^{T} |f(B(T) - \sigma t) - f_n(B(T) - \sigma t)| dt \to 0,
\end{eqnarray}
with $||f_n||_\infty \le ||f||_\infty$, as $n \to \infty$. Then it holds that
\begin{eqnarray}\nonumber
&& \int_{0}^{T} |f(B(T) - \sigma t) - f_n(B(T) - \sigma t)|^2 dt \\ \nonumber
&\le& ||f(B(T) - \sigma t) - f_n(B(T) - \sigma t)||_\infty \\ \nonumber
&& \times \int_{0}^{T} |f(B(T) - \sigma t) - f_n(B(T) - \sigma t)| dt \\ \nonumber
&\le& \left[||f(B(T) - \sigma t)||_\infty + ||f_n(B(T) - \sigma t)||_\infty \right] \\ \nonumber
&& \times \int_{0}^{T} |f(B(T) - \sigma t) - f_n(B(T) - \sigma t)| dt \\ \nonumber
&\le& 2 ||f(B(T) - \sigma t)||_\infty
\int_{0}^{T} |f(B(T) - \sigma t) - f_n(B(T) - \sigma t)| dt \\ \nonumber
&\to& 0, \qquad \text{as} \quad n \to \infty;
\end{eqnarray}
therefore we conclude
\begin{equation}\label{convl2}
\int_{0}^{T} |f(B(T) - \sigma t) - f_n(B(T) - \sigma t)|^2 dt \to 0 \qquad \text{almost surely}
\end{equation}
as $n \to \infty$.
Note that equation~\eqref{ak1} means
\begin{equation}\label{eqmeans}
S_1(t) = S_1(0) + \mu \int_0^t S_1(s) \, ds + \sigma \int_0^t S_1(s) \, d^* B(s).
\end{equation}
Since
$$
S_1^{(n)}(t) = M f_n(B(T) - \sigma t) e^{\left(\mu - \sigma^2 /2 \right)t + \sigma B(t)}
$$
is a solution to~\eqref{ak1} by~\eqref{solfic}, we have
$$
S_1^{(n)}(t) = S_1^{(n)}(0) + \mu \int_0^t S_1^{(n)}(s) \, ds + \sigma \int_0^t S_1^{(n)}(s) \, d^* B(s)
$$
by~\eqref{eqmeans}. Note that
\begin{eqnarray}\nonumber
\int_0^t |S_1^{(n)}(s) - S_1(s)| \, ds &=& M \int_0^t |f_n(B(T) - \sigma s) - f(B(T) - \sigma s)|
\\ \nonumber
&& \times e^{\left(\mu - \sigma^2 /2 \right)s + \sigma B(s)} \, ds \\ \nonumber
&\le& M \max_{0 \le s \le t} \left\{
e^{\left(\mu - \sigma^2 /2 \right)s + \sigma B(s)} \right\}
\\ \nonumber
&& \times \int_0^t |f_n(B(T) - \sigma s) - f(B(T) - \sigma s)| \, ds \\ \label{convl1}
&\to& 0 \qquad \text{almost surely as} \quad n \to \infty
\end{eqnarray}
by~\eqref{lusin}. In consequence
\begin{equation} \label{subseq}
S_1^{(n_j)}(t) \to S_1(t)
\qquad \text{for almost every} \quad t \in [0,T]
\end{equation}
as $j \to \infty$ by passing, if necessary, to a suitable subsequence
$\left\{S_1^{(n_j)}\right\}_{j \in \mathbb{N}} \subset \left\{S_1^{(n)}\right\}_{n \in \mathbb{N}}$.
Now, by taking the limit $j \to \infty$, we find
\begin{equation}\label{fin1}
\sigma \lim_{j \to \infty} \int_0^t S_1^{(n_j)}(s) \, d^* B(s) = S_1(t) - S_1(0) - \mu \int_0^t S_1(s) \, ds
\end{equation}
for almost every $t \in [0,T]$ almost surely, and hence in probability, by~\eqref{convl1} and~\eqref{subseq}; note that
$$
S_1(0)= M f(B(T))
$$
is well defined almost surely. Therefore
condition (b) in Lemma~\ref{lemmappr} is met.
For condition (a) compute
\begin{eqnarray}\nonumber
\int_0^t |S_1^{(n)}(s) - S_1(s)|^2 \, ds &=& M^2 \int_0^t |f_n(B(T) - \sigma s) - f(B(T) - \sigma s)|^2
\\ \nonumber
&& \times e^{2\left(\mu - \sigma^2 /2 \right)s + 2\sigma B(s)} \, ds \\ \nonumber
&\le& M^2 \max_{0 \le s \le t} \left\{
e^{2\left(\mu - \sigma^2 /2 \right)s + 2\sigma B(s)} \right\}
\\ \nonumber
&& \times \int_0^t |f_n(B(T) - \sigma s) - f(B(T) - \sigma s)|^2 \, ds \\ \label{fin2}
&\to& 0 \qquad \text{almost surely as} \quad n \to \infty
\end{eqnarray}
by~\eqref{convl2}.
Now by Lemma~\ref{lemmappr}, \eqref{fin1}, and~\eqref{fin2} we conclude
$$
S_1(t) = S_1(0) + \mu \int_0^t S_1(s) \, ds + \sigma \int_0^t S_1(s) \, d^* B(s)
$$
for almost every $t \in [0,T]$ almost surely, and the existence and explicit representation part of the proof is finished.
For the uniqueness, assume to the contrary that there exist two solutions $S_1'(t)$ and $S_1''(t)$; subtracting the corresponding integral equations yields
$$
S_1^{(-)}(t) = \mu \int_0^t S_1^{(-)}(s) \, ds + \sigma \int_0^t S_1^{(-)}(s) \, d^* B(s),
$$
where $S_1^{(-)}(t) := S_1'(t) - S_1''(t)$. Since this equation has the unique solution
$S_1^{(-)}(t)=0$~\cite{hksz}, we have reached a contradiction.
\end{proof}
Finally, we can establish the optimal investment strategy for the insider under Ayed-Kuo integration. For this we assume as before the no-shorting condition $0 \leq f \leq 1$ and we employ the notation
$$
M^{\text{(AK)}}(T) := S_0(T) + S_1(T).
$$
\begin{theorem}\label{mainthak}
Let $f$ be a function of $B(T)$ such that $f \in L^\infty(\mathbb{R})$ and $0 \leq f \leq 1$. The optimal investment strategy under Ayed-Kuo integration is
\begin{eqnarray}\nonumber
f(B(T)) &=& 1,
\end{eqnarray}
for model~\eqref{rode1}-\eqref{rode2} and~\eqref{ak1}-\eqref{ak2}.
\end{theorem}
\begin{proof}
By Theorem~\ref{exunak} we can solve problem~\eqref{ak1}-\eqref{ak2} explicitly to find
\begin{subequations}
\begin{eqnarray}\nonumber
S_0(t) &=& M \left(1-f(B(T))\right) e^{\rho t} \\ \nonumber
S_1(t) &=& M f(B(T) - \sigma t) e^{\left(\mu - \sigma^2 /2 \right)t + \sigma B(t)}.
\end{eqnarray}
\end{subequations}
Our aim is to find the strategy $f$ such that $\mathbb{E}\left[M^{\text{(AK)}}(T)\right]$ is maximized. Hence, we have
\begin{eqnarray}\nonumber
\mathbb{E}\left[M^{\text{(AK)}}(T)\right] &=& \mathbb{E}\left[S_0(T)\right] + \mathbb{E}\left[S_1(T)\right] \\ \nonumber
&=& M \left( 1- \mathbb{E}\left[f(B(T))\right]\right) e^{\rho T} \\ \nonumber
& & + M \mathbb{E}\left[f(B(T) - \sigma T)e^{\sigma B(T)}\right] e^{\left(\mu - \sigma^2/2\right)T} \\ \nonumber
&=& M e^{\rho T} \left( 1 - \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi T}} f(x) e^{-\frac{x^2}{2T}}dx \right) \\ \nonumber
& & + Me^{\left(\mu - \sigma^2 /2 \right)T} \left( \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi T}} f(x-\sigma T) e^{-\frac{x^2}{2T}} e^{\sigma x} dx \right),
\end{eqnarray}
where we have used that $B(T) \sim \mathcal{N}(0,T)$. By changing variables
\begin{eqnarray}\nonumber
y &=& x - \sigma T,
\end{eqnarray}
and defining
\begin{eqnarray}\nonumber
\Bar{M}(T) &=& \frac{\mathbb{E}\left[M^{\text{(AK)}}(T)\right]}{M},
\end{eqnarray}
where the change of variables is applied to the second integral only,
we find
\begin{eqnarray}\nonumber
\Bar{M}(T) &=& e^{\rho T} \left( 1 - \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi T}} f(x) e^{-\frac{x^2}{2T}}dx \right) \\ \nonumber
&& +e^{\left(\mu - \sigma^2 /2 \right)T} \left( \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi T}} f(x-\sigma T) e^{-\frac{x^2}{2T}} e^{\sigma x} dx \right) \\ \nonumber
&=& e^{\rho T} - e^{\rho T} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi T}} f(x) e^{-\frac{x^2}{2T}} dx \\ \nonumber
&& +\int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi T}} f(y) e^{-\frac{\left(y + \sigma T \right)^2}{2T}} e^{\left(\mu - \sigma ^2 /2 \right)T} e^{\sigma \left(y+\sigma T\right)}dy \\ \nonumber
&=& e^{\rho T} - e^{\rho T} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi T}} f(x) e^{-\frac{x^2}{2T}} dx \\ \nonumber
&& +\int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi T}} f(y) e^{-\frac{\left(y^2 + \sigma^2 T^2 +2y\sigma T \right)}{2T}} e^{\mu T - \sigma ^2 T /2} e^{\sigma y+\sigma^2 T}dy \\ \nonumber
&=& e^{\rho T} - e^{\rho T} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi T}} f(x) e^{-\frac{x^2}{2T}} dx + e^{\mu T} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2 \pi T}} f(y) e^{-\frac{y^2}{2T}} dy.
\end{eqnarray}
Therefore, we may summarize this as
\begin{eqnarray}\nonumber
\Bar{M}(T) &=& e^{\rho T} - e^{\rho T} \mathbb{E}\left[f(B(T))\right] + e^{\mu T} \mathbb{E}\left[f(B(T))\right].
\end{eqnarray}
By assumption, $f$ is a function of $B(T)$ such that $0 \leq f \leq 1$, and $\mu > \rho$; so we have a convex linear combination and, since the exponential function is strictly monotone, we conclude $\mathbb{E}\left[M^{\text{(AK)}}(T)\right] \in [Me^{\rho T}, Me^{\mu T}]$. Then clearly $\mathbb{E}\left[M^{\text{(AK)}}(T)\right]$ is maximized whenever
\begin{eqnarray}\nonumber
\mathbb{E}\left[f(B(T))\right] &=& 1.
\end{eqnarray}
Equivalently we have
\begin{eqnarray}\nonumber
\frac{1}{\sqrt{2 \pi T}} \int_{-\infty}^{\infty} f(x) e^{-\frac{x^2}{2T}} dx &=& 1,
\end{eqnarray}
and consequently $f = 1$ almost everywhere, that is, $f(B(T)) = 1$ almost surely.
\end{proof}
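The key step in the computation above is the Gaussian shift identity $e^{\left(\mu - \sigma^2/2\right)T}\, \mathbb{E}\left[f(B(T)-\sigma T)\, e^{\sigma B(T)}\right] = e^{\mu T}\, \mathbb{E}\left[f(B(T))\right]$, valid for any bounded $f$: the shifted argument exactly cancels the exponential tilt. A quick numerical verification of ours, with arbitrary parameters and an arbitrary indicator in the role of $f$:

```python
import math

# Check of the Gaussian shift identity used in the proof:
#   exp((mu - s^2/2) T) * E[f(B(T) - s T) exp(s B(T))]
#     = exp(mu T) * E[f(B(T))],      B(T) ~ N(0, T),
# for an arbitrary bounded f with 0 <= f <= 1.
T, mu, s = 2.0, 0.05, 0.3

def f(x):                       # illustrative strategy function
    return 1.0 if x > 0.5 else 0.0

n, L = 200_000, 10.0            # midpoint grid and truncation (illustrative)
dx = 2 * L * math.sqrt(T) / n
lhs = rhs = 0.0
for i in range(n):
    x = -L * math.sqrt(T) + (i + 0.5) * dx
    pdf = math.exp(-x*x / (2*T)) / math.sqrt(2*math.pi*T)
    lhs += math.exp((mu - 0.5*s*s)*T) * f(x - s*T) * math.exp(s*x) * pdf * dx
    rhs += math.exp(mu*T) * f(x) * pdf * dx
print(lhs, rhs)                 # agree up to discretization error
```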
\begin{remark}\label{maincolak}
The same conclusion found in Theorem~\ref{mainthak} can be reached by approximating $\mathbb{E}\left[f(B(T))\right]$
instead of $f$ (as done in Theorem~\ref{exunak}); let us show how.
We start by considering a sequence $\{f_n\}_{n=1}^\infty$ of functions such that $f_n \in \mathcal{C}(\mathbb{R})$ for all $n \in \mathbb{N}$. Then, we have
\begin{eqnarray}\nonumber
| \mathbb{E}\left[f(B(T))\right] - \mathbb{E}\left[f_n(B(T))\right] | &=& | \mathbb{E}\left[f(B(T)) - f_n(B(T)) \right] | \\ \nonumber
&\leq& \mathbb{E}\left[| f(B(T)) - f_n (B(T)) | \right] \\ \nonumber
&=& \frac{1}{\sqrt{2 \pi T}} \int_{-\infty}^{\infty} |f(x) - f_n(x)| e^{-\frac{x^2}{2T}} dx.
\end{eqnarray}
By \textit{Lusin's theorem}~\cite{lusin}, the sequence of continuous functions $\{f_n\}_{n \in \mathbb{N}}$ can be chosen so that
$||f_n||_\infty \le ||f||_\infty$ and
\begin{eqnarray}\nonumber
\frac{1}{\sqrt{2 \pi T}} \int_{-\infty}^{\infty} |f(x) - f_n(x) | e^{-\frac{x^2}{2T}} dx \to 0
\end{eqnarray}
as $n \to \infty$, where $0 \le f_n \le 1$ after redefining their negative parts, if any, to be zero,
so that they constitute a well defined family of investment allocations under the no-shorting condition. In consequence
\begin{eqnarray}\nonumber
\mathbb{E}\left[f_n(B(T))\right] &\to& \mathbb{E}\left[f(B(T))\right]
\end{eqnarray}
as $n \to \infty$. Since we found in the proof of Theorem~\ref{mainthak} that
\begin{eqnarray}\nonumber
\mathbb{E}\left[M^{\text{(AK)}}(T)\right] &=& e^{\rho T} - e^{\rho T} \mathbb{E}\left[f(B(T))\right] + e^{\mu T} \mathbb{E}\left[f(B(T))\right],
\end{eqnarray}
we may conclude
\begin{eqnarray}\nonumber
\mathbb{E}\left[M^{\text{(AK)}}(T)\right] &=& \lim_{n \to \infty}
\left\{ e^{\rho T} - e^{\rho T} \mathbb{E}\left[f_n(B(T))\right] + e^{\mu T} \mathbb{E}\left[f_n(B(T))\right]
\right\}.
\end{eqnarray}
So the same optimal investment is found by considering $f \in \mathcal{C}(\mathbb{R})$ in~\eqref{ak2} (so that the existence, uniqueness, and explicit representation theory of~\cite{hksz} can be employed for the solution of~\eqref{ak1}) and then arguing by approximation as explained in this remark. Note, however, that this fact merely illustrates the overall consistency of our approach; the only complete proof is the one that goes through Theorem~\ref{exunak}.
\end{remark}
\begin{remark}
The function $f$ from Theorem~\ref{mainthak} suggests that the formalization of the problem based on the Ayed-Kuo integral does not take advantage of the anticipating initial condition, as this optimal investment strategy is the same as that of the honest trader.
Indeed, it amounts to investing the whole amount $M$ in the stock, in such a way that
\begin{eqnarray}\nonumber
\mathbb{E}\left[M^{\text{(AK)}}(T)\right] &=& M e^{\mu T}.
\end{eqnarray}
Thus, the result of the use of the Ayed-Kuo integral seems to be counterintuitive in the financial sense, at least for this formulation of the insider trading problem.
\end{remark}
\section{The Hitsuda-Skorokhod integral}\label{hsi}
The Hitsuda-Skorokhod integral is an anticipating stochastic integral that was introduced by Hitsuda~\cite{hitsuda} and Skorokhod~\cite{skorokhod}
by means of different methods.
The following definition makes use of the Wiener-It\^o chaos expansion; background on this topic can be found for instance in~\cite{noep,hoeuz}.
\begin{definition}
Let $X \in L^2([0,T]\times \Omega)$ be a square integrable stochastic process. By the Wiener-It\^o chaos expansion, $X$ can be decomposed into an orthogonal series
$$X(t,\omega) = \sum_{n=0}^{\infty} I_n(f_{n,t}),$$
in $L^2(\Omega)$, where $f_{n,t}\in L^2([0,T]^n)$ are symmetric functions for all non-negative integers $n$. Thus, we write
$$ f_{n,t}(t_1,\ldots,t_n)=f_n(t_1,\ldots,t_n,t),$$
which is a function defined on $[0,T]^{n+1}$ and symmetric with respect to the first $n$ variables.
The symmetrization of $f_n(t_1,\ldots,t_n,t_{n+1})$ is given by
\begin{align*}
&\hat{f}_n(t_1,\ldots,t_{n+1})= \\
&\frac{1}{n+1}\left[f_n(t_1,\ldots,t_{n+1})+ f_n(t_{n+1},t_2,\ldots,t_1)+\ldots+f_n(t_1,\ldots,t_{n+1},t_{n})\right],
\end{align*}
because $f_n$ is already symmetric with respect to its first $n$ variables, so we only need to take into account the permutations which exchange the last variable with any other one.
Then, the Hitsuda-Skorokhod integral of $X$ is defined by
\begin{equation*}
\int_0^T X(t,\omega) \, \delta B(t) := \sum_{n=0}^{\infty} I_{n+1}(\hat{f}_n),
\end{equation*}
provided that the series converges in $L^2(\Omega)$.
\end{definition}
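As a simple illustration of this definition, which we include for comparison with the previous sections, take $X(t,\omega) = B(T)$ for $t \in [0,T]$. Its chaos expansion reduces to the single term $I_1\left(\mathbbm{1}_{[0,T]}\right)$, so $f_{1,t} = \mathbbm{1}_{[0,T]}$ does not depend on $t$ and $\hat{f}_1 \equiv 1$ on $[0,T]^2$. Therefore
$$
\int_0^T B(T) \, \delta B(t) = I_2\left(\mathbbm{1}_{[0,T]}^{\otimes 2}\right) = B(T)^2 - T,
$$
where the last equality follows from the classical identity $I_2(g \otimes g) = I_1(g)^2 - \|g\|_{L^2}^2$ with $g = \mathbbm{1}_{[0,T]}$. This value coincides with the Ayed-Kuo integral of the same integrand, while the Russo-Vallois forward integral gives $B(T)^2$: the Wick-type integrals subtract the trace term $T$.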
For the Hitsuda-Skorokhod integral we arrive at the initial value problem
\begin{subequations}
\begin{eqnarray}\label{sko1}
\delta S_1 &=& \mu \, S_1 \, d t + \sigma \, S_1 \, \delta B_t \\ \label{sko2}
S_1(0) &=& M f(B(T)),
\end{eqnarray}
\end{subequations}
where $\delta$ denotes the Hitsuda-Skorokhod stochastic differential.
The existence and uniqueness theory for linear stochastic differential equations of Hitsuda-Skorokhod type, which covers the present case,
can be found in~\cite{lssdes}; as before, $f$ is a function of $B(T)$, such that $f \in L^{\infty}(\mathbb{R})$ and $0 \leq f \leq 1$. In this case we have the following result.
\begin{theorem}\label{mainthhs}
The unique solution of~\eqref{sko1} and~\eqref{sko2} is
$$
S_1(t) = M f(B(T) - \sigma t) e^{\left(\mu - \sigma^2 /2 \right)t + \sigma B(t)}.
$$
\end{theorem}
\begin{proof}
The statement is a particular case of the existence and uniqueness result in~\cite{buckdahn}, which states that the solution to stochastic differential equations of the form
\begin{equation*}\label{buckdahn1}
\delta Y_t = \mu_t \, Y_t \, dt + \sigma_t \, Y_t \, \delta B_t, \quad Y_0=\eta, \qquad 0 \leq t \leq T,
\end{equation*}
with $\sigma_t \in L^{\infty}([0,T])$, $\mu_t \in L^{\infty}([0,T]\times \Omega)$, and $ \eta \in L^p(\Omega)$, $p > 2$, is given by
\begin{equation}\label{buckdahn2}
Y_t = \eta(U_{0,t}) \, \exp \left[ \int_0^t \mu_s(U_{s,t}) \, ds \right] X_t, \ \ \ \
\mbox{almost surely}, \ \ \ \ 0 \leq t \leq T,
\end{equation}
where
$$
X_t = \exp \left( \int_0^t \sigma_s \, \delta B_s - \frac{1}{2} \int_0^t \sigma_s^2 \, ds \right), \qquad 0 \leq t \leq T,
$$
and, for $0\leq s \leq t \leq T$, the Girsanov transformation is
$$
U_{s,t} = B(u) - \int_0^u \mathlarger{\mathlarger{\mathbbm{1}}}_{[s,t]}(r) \, \sigma_r \, dr, \ \ \ \ 0 \leq u \leq T.
$$
Now taking $\sigma_s=\sigma$ and $\mu_s=\mu$ as two constant processes, and $\eta=M f(B(T))$,
the different factors in~\eqref{buckdahn2} become
\begin{eqnarray}\nonumber
e^{\mu t} &=& \exp\left[ \int_0^t \mu_s(U_{s,t}) ds \right], \\ \nonumber
X_t &=& \exp\left[\sigma B(t) - \sigma^2 t /2 \right], \\ \nonumber
\eta(U_{0,t}) &=& M f(B(T)-\sigma t).
\end{eqnarray}
Since
$$
\mathbb{E} \left( \left|\eta\right|^p \right) \le \mathbb{E} (M^p) = M^p < \infty,
$$
the result follows.
\end{proof}
\begin{corollary}
Let $f$ be a function of $B(T)$ such that $f \in L^{\infty}(\mathbb{R})$ and $0 \leq f \leq 1$. The optimal investment strategy under Hitsuda-Skorokhod integration is
\begin{eqnarray}\nonumber
f(B(T)) &=& 1,
\end{eqnarray}
for model~\eqref{rode1}-\eqref{rode2} and~\eqref{sko1}-\eqref{sko2}.
\end{corollary}
\begin{proof}
The statement is a direct consequence of Theorem~\ref{mainthhs} and the proof of Theorem~\ref{mainthak}.
\end{proof}
\begin{remark}
Note that this result is the same as the one found in the previous section for the Ayed-Kuo integral,
that is
\begin{eqnarray}\nonumber
\mathbb{E}\left[M^{\text{(HS)}}(T)\right] &=& M e^{\mu T}.
\end{eqnarray}
Therefore the same financial conclusions hold in this case as well.
\end{remark}
\section{Further results and comments}\label{frc}
In this section we address some questions that complement our previous developments.
First, let us consider the maximization of $\mathbb{E}\left[M(T) | B(T) \right]$ rather than the maximization of $\mathbb{E}\left[M(T)\right]$. This quantity is no longer a real number but a random variable, so the
optimization problem can only be posed after introducing a notion of stochastic ordering.
Herein we will assume stochastic orderings given by different orders of stochastic dominance~\cite{levy}; although
other notions of stochastic order are possible~\cite{ss}, we base our choice on the fact that first order stochastic dominance
is the usual stochastic order.
The simplest situation is the one corresponding to the Russo-Vallois forward integral, that is,
to the problem analyzed in section~\ref{rvi};
we recall that $f$ is a function of $B(T)$ such that $f \in L^{\infty}(\mathbb{R})$ and $0 \leq f \leq 1$.
In this case, the conditional expectation of the insider wealth is
\begin{eqnarray}\nonumber
\mathbb{E}\left[M^{\text{(RV)}}(T) | B(T)\right]
&=& M \left[ \left( 1- f(B(T)) \right) e^{\rho T} + f(B(T)) e^{\left(\mu - \sigma^2/2\right)T + \sigma B(T)} \right]
\end{eqnarray}
almost surely, since $M^{\text{(RV)}}(T)$ is measurable with respect to the sigma field generated by $B(T)$.
The last expression is nothing but a convex linear combination and, since the exponential function is strictly monotone,
we conclude that the optimal investment strategy is
\begin{eqnarray}\nonumber
f(B(T)) &=& \mathlarger{\mathlarger{\mathbbm{1}}}_{\big\lbrace B(T) > \frac{T}{\sigma} \left( \rho - \mu + \frac{\sigma^2}{2} \right ) \big\rbrace}
\end{eqnarray}
almost surely. This maximizer coincides with the one provided by Theorem~\ref{mainthrv}.
Note that the notion of stochastic order employed is zeroth order stochastic dominance, which is the strongest notion
of stochastic dominance and implies all the higher orders~\cite{levy}.
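The pointwise character of this maximization can also be illustrated numerically. The following Python sketch (parameter values are hypothetical) confirms that, over a grid of values of $B(T)$, choosing the better of the two pure strategies at each point reproduces exactly the indicator on the threshold $\frac{T}{\sigma}\left(\rho-\mu+\frac{\sigma^2}{2}\right)$.

```python
# Pointwise check (hypothetical parameters) that the conditional-expectation
# maximizer is the indicator strategy: for each value b of B(T), the convex
# combination (1-f) e^{rho T} + f e^{(mu - sigma^2/2) T + sigma b} is
# maximized by f = 1 exactly when b > (T/sigma)(rho - mu + sigma^2/2).
import numpy as np

rho, mu, sigma, T = 0.03, 0.08, 0.2, 1.0
threshold = (T / sigma) * (rho - mu + 0.5 * sigma**2)

b = np.linspace(-2.0, 2.0, 2000)         # grid of B(T) values
bank = np.full_like(b, np.exp(rho * T))  # payoff of f = 0
stock = np.exp((mu - 0.5 * sigma**2) * T + sigma * b)  # payoff of f = 1

pointwise_best = (stock > bank).astype(float)  # best pure strategy at each b
indicator = (b > threshold).astype(float)      # indicator strategy above

print(np.array_equal(pointwise_best, indicator))  # True
```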
In the case of either the Ayed-Kuo or the Hitsuda-Skorokhod integral,
see sections~\ref{aki} and~\ref{hsi} respectively, we find
\begin{eqnarray}\nonumber
\mathbb{E}\left[M^{\text{(AK/HS)}}(T) | B(T)\right] &=&
M \left[ \left( 1- f(B(T)) \right) e^{\rho T} \right. \\ \nonumber
&& \left. + f(B(T)-\sigma T) e^{\left(\mu - \sigma^2/2\right)T + \sigma B(T)} \right]
\end{eqnarray}
almost surely, since $M^{\text{(AK/HS)}}(T)$ is measurable with respect to the sigma field generated by $B(T)$.
Since $\mathbb{E}\left[M^{\text{(AK/HS)}}(T)\right]$ is maximized for $f \equiv 1$, this should be the maximizer
in this case too, of course provided a maximizer exists~\cite{levy}. Note however that
\begin{eqnarray}\nonumber
\mathbb{E} \left[ \left. M^{\text{(AK/HS)}}(T) \right|_{f \equiv 1} \right] &=& M e^{\mu T} \\ \nonumber
&>& M e^{\rho T} \\ \nonumber
&=& \mathbb{E} \left[ \left. M^{\text{(AK/HS)}}(T) \right|_{f \equiv 0} \right],
\end{eqnarray}
by the assumption $\mu > \rho$ and the monotonicity of the exponential; but nevertheless
\begin{eqnarray}\nonumber
\inf \left\{ \left. M^{\text{(AK/HS)}}(T) \right|_{f \equiv 1} \right\} &=& 0 \\ \nonumber
&<& M e^{\rho T} \\ \nonumber
&=& \min \left\{ \left. M^{\text{(AK/HS)}}(T) \right|_{f \equiv 0} \right\}.
\end{eqnarray}
These two results combined imply that $\left. M^{\text{(AK/HS)}}(T) \right|_{f \equiv 1}$
does not stochastically dominate $\left. M^{\text{(AK/HS)}}(T) \right|_{f \equiv 0}$
in the $m^{\text{th}}$-order for any $m=0,1,2,3,\cdots$~\cite{levy}. In other words, assuming a stochastic ordering
given by the $m^{\text{th}}$-order stochastic dominance, there exists no optimal strategy for any $m=0,1,2,3,\cdots$.
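The failure of stochastic dominance can be made concrete with a small simulation. In the following Python sketch (illustrative parameters, $M=1$), the wealth for $f \equiv 1$ has the larger mean, yet it drops below the riskless value $e^{\rho T}$ with sizable probability, so no order of stochastic dominance can hold.

```python
# Illustration (sample parameters, M = 1) that f = 1 does not
# stochastically dominate f = 0 for the Ayed-Kuo/Hitsuda-Skorokhod
# wealth: its mean exceeds e^{rho T}, yet it falls below the
# deterministic bank account e^{rho T} with positive probability.
import numpy as np

rng = np.random.default_rng(1)
mu, rho, sigma, T = 0.08, 0.03, 0.2, 1.0
n = 10**6

wealth_stock = np.exp((mu - 0.5 * sigma**2) * T
                      + sigma * rng.normal(0.0, np.sqrt(T), n))  # f = 1
wealth_bank = np.exp(rho * T)                                    # f = 0

print(wealth_stock.mean() > wealth_bank)          # True: larger mean
print((wealth_stock < wealth_bank).mean() > 0.0)  # True: dominance fails
```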
On the one hand, this should not be regarded as a bizarre outcome since stochastic orders are in general partial orders.
On the other hand, it seems possible to turn $f \equiv 1$ into the maximizer of the problem by means of the introduction
of a suitable risk-seeking stochastic order. In any case, the obvious difference between the results presented in this section,
along with the fact that
$$
\mathbb{E} \left[ \left. M^{\text{(AK/HS)}}(T) \right|_{f \equiv 1} \right] <
\mathbb{E} \left[ \left. M^{\text{(RV)}}(T) \right|_{f=
\mathlarger{\mathlarger{\mathbbm{1}}}_{\big\lbrace B(T) > \frac{T}{\sigma} \left( \rho - \mu + \frac{\sigma^2}{2} \right )
\big\rbrace}} \right],
$$
both point in the same direction of support of the Russo-Vallois forward integral,
at least under the particular conditions we are considering herein.
It is also important to clarify that the insider problem we have addressed can be solved by much simpler methods that
do not require the introduction of anticipating stochastic integration, a fact that
we anticipated in section~\ref{insider}.
To see this consider our basic model subject to unit initial conditions, that is
\begin{subequations}
\begin{eqnarray}\label{fr0}
d S_0 &=& \rho \, S_0 \, d t \\ \label{fr1}
S_0(0) &=& 1,
\end{eqnarray}
\end{subequations}
and
\begin{subequations}
\begin{eqnarray}\label{fs0}
d S_1 &=& \mu \, S_1 \, d t + \sigma \, S_1 \, d B(t) \\ \label{fs1}
S_1(0) &=& 1.
\end{eqnarray}
\end{subequations}
Clearly~\eqref{fs0}-\eqref{fs1} is an It\^o stochastic differential equation and~\eqref{fr0}-\eqref{fr1} is an ordinary differential equation. With these two assets one can build the portfolio
\begin{eqnarray}\nonumber
M(t) &=& M_0 S_0(t) + M_1 S_1(t)
\\ \nonumber
&=& M \left(1-f\right) S_0(t) + M f S_1(t),
\end{eqnarray}
so that at maturity
\begin{eqnarray}\label{solito}
M(T) &=& M \left[ \left( 1- f \right) e^{\rho T} + f e^{\left(\mu - \sigma^2/2\right)T + \sigma B(T)} \right],
\end{eqnarray}
where we have assumed that the total initial wealth $M=M_0 + M_1$ is constant and $f$ denotes the fraction of the total wealth
invested in the stock. The computation of the expected final wealth depends on whether $f$ is a constant or
we allow $f=f(B(T))$. In the first case we find
\begin{eqnarray}\nonumber
\mathbb{E}\left[M(T)\right]
&=& M\left(1-f\right) \, e^{\rho T} + M \, f \, e^{\mu T},
\end{eqnarray}
that is, our classical convex linear combination, which gets maximized for
\begin{eqnarray}\nonumber
f &=& 1.
\end{eqnarray}
Of course, we find agreement with the corresponding result in section~\ref{introduction}.
In the latter case we get
\begin{eqnarray}\nonumber
\mathbb{E}\left[M(T)\right] &=& M \left( 1- \mathbb{E}\left[f(B(T))\right] \right) e^{\rho T} +
M \mathbb{E}\left[ f(B(T)) e^{\sigma B(T)} \right] e^{\left(\mu - \sigma^2/2\right)T},
\end{eqnarray}
in agreement with our development based on the Russo-Vallois integral, see section~\ref{rvi};
as there we conclude the optimizer is
\begin{eqnarray}\label{maxito}
f(B(T)) &=& \mathlarger{\mathlarger{\mathbbm{1}}}_{\big\lbrace B(T) > \frac{T}{\sigma} \left( \rho - \mu + \frac{\sigma^2}{2} \right ) \big\rbrace}.
\end{eqnarray}
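For this It\^o formulation the expected wealth under the optimizer~\eqref{maxito} admits a closed form via a standard Gaussian computation (this expression is our own addition, not stated in the text): $M\left[e^{\rho T}\Phi(b^*/\sqrt{T}) + e^{\mu T}\Phi\left((\sigma T - b^*)/\sqrt{T}\right)\right]$, with $b^* = \frac{T}{\sigma}\left(\rho-\mu+\frac{\sigma^2}{2}\right)$ and $\Phi$ the standard normal distribution function. The following hedged Python sketch (illustrative parameters) verifies it against a Monte Carlo estimate and shows that the optimizer beats both pure strategies.

```python
# Closed form vs. Monte Carlo (illustrative parameters, M = 1) for the
# expected wealth of the Ito-model insider strategy f = 1_{B(T) > b*},
# with b* = (T/sigma)(rho - mu + sigma^2/2). The closed form is a
# standard Gaussian computation, assumed here, not taken from the paper.
import math
import numpy as np

rng = np.random.default_rng(2)
M, mu, rho, sigma, T = 1.0, 0.08, 0.03, 0.2, 1.0
b_star = (T / sigma) * (rho - mu + 0.5 * sigma**2)

Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # normal CDF
closed = M * (math.exp(rho * T) * Phi(b_star / math.sqrt(T))
              + math.exp(mu * T) * Phi((sigma * T - b_star) / math.sqrt(T)))

B_T = rng.normal(0.0, math.sqrt(T), 10**6)
f = (B_T > b_star).astype(float)
mc = np.mean(M * ((1 - f) * math.exp(rho * T)
                  + f * np.exp((mu - 0.5 * sigma**2) * T + sigma * B_T)))

print(abs(closed - mc) < 5e-3)                        # True: they agree
print(closed > math.exp(mu * T) > math.exp(rho * T))  # True: beats f=1, f=0
```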
This computation allows us to highlight two things. First, we find yet another argument in favor of the forward integral (as always, in our limited setting). Second, it is a reminder of the fact that our goal is to establish a comparison, as clear as possible, between the
use of different anticipating stochastic integrals in a financial context. To this end we need a simple enough problem, whose solution
per se is not the objective of this work.
Finally, let us remark that our selection of the possible interpretations of noise is not exhaustive.
One could consider, for instance, pathwise integration theories such as the one by F\"ollmer~\cite{follmer}
or Rough Path Theory (with It\^o enhanced Brownian motion in our case)~\cite{fh}. Since these theories replicate
It\^o calculus, their use will presumably
lead to the same result as the Russo-Vallois forward integral. Moreover,
these theories are able to replicate neither Ayed-Kuo nor Hitsuda-Skorokhod calculus~\cite{fh}.
On the other hand, expressions such as~\eqref{solito} make the use of a pathwise theory appealing in the
context of the present problem since, as we have discussed at the beginning of this section, the strategy
given by~\eqref{maxito} maximizes the wealth of the insider not just in mean but almost surely. At this point,
however, it is convenient to realize again that the problem under consideration is a simple model which permits
a full comparison between the three anticipating integrals. In more complex problems, for instance with partial
insider information, it seems to be too optimistic to hope for almost surely optimal strategies.
Note that an akin situation arises in the case treated in the second paragraph of this section. In such general cases,
in which some type of average or suitable stochastic ordering seems to be needed, it is perhaps too restrictive
to ask for a purely analytical solution to the optimization problem.
\section{Conclusions}
In this work we have considered a version of insider trading that allows us to compare the optimal investment strategies provided by three different anticipating stochastic integrals: the Russo-Vallois forward, the Ayed-Kuo and the Hitsuda-Skorokhod integrals.
Specifically, we have considered the following formulation of insider trading for the bank account
\begin{subequations}
\begin{eqnarray*}\label{rode61}
d S_0 &=& \rho \, S_0 \, d t \\ \label{rode62}
S_0(0) &=& M \left(1-f(B(T))\right),
\end{eqnarray*}
\end{subequations}
and for the stock
\begin{subequations}
\begin{eqnarray*}\label{s61}
\dj S_1 &=& \mu \, S_1 \, d t + \sigma \, S_1 \, \dj B(t) \\ \label{s610}
S_1(0) &=& M f(B(T)),
\end{eqnarray*}
\end{subequations}
where the anticipating initial condition $f$ is a function of $B(T)$, such that $f \in L^{\infty}(\mathbb{R})$ and $0 \leq f \leq 1$,
and $\dj \in \{d^-,d^*,\delta\}$ is a stochastic differential of one of the types under consideration. That is, we have only considered
buy-and-hold strategies for which shorting is not allowed.
Our main task has been to establish the optimal investment strategy for each of the anticipating integrals. When the choice is the Russo-Vallois integral, the investment strategy that maximizes the expected wealth is to invest the whole amount $M$ in the asset whose actual value is larger at maturity time. Thus,
\begin{eqnarray}\nonumber
f(B(T)) &=& \mathlarger{\mathlarger{\mathbbm{1}}}_{\big \lbrace B(T) > \frac{T}{\sigma} \left( \rho - \mu + \frac{\sigma^2}{2} \right) \big \rbrace }.
\end{eqnarray}
For the Ayed-Kuo and the Hitsuda-Skorokhod integrals, the optimal investment strategy is the same as the one in the It\^o case (that is, the uninformed case), which means investing the whole amount $M$ in the stock. Hence,
\begin{eqnarray}\nonumber
f(B(T)) &=& 1.
\end{eqnarray}
These results suggest that the Russo-Vallois integral works as one would expect from a financial perspective, while the Ayed-Kuo and Hitsuda-Skorokhod integrals provide a solution that seems to be counterintuitive in the financial sense. Indeed, from our present results along with those in~\cite{escudero} it follows that
\begin{equation}\nonumber
\mathbb{E}\left[S^{\text{(I)}}(T)\right] = \mathbb{E}\left[M^{\text{(AK)}}(T)\right] = \mathbb{E}\left[M^{\text{(HS)}}(T)\right] < \mathbb{E}\left[M^{\text{(RV)}}(T)\right],
\end{equation}
where we have chosen the optimal investment allocation in each case. If we selected the optimal Russo-Vallois strategy for all three anticipating cases, the situation would become even worse, see~\cite{escudero}; and it could be even more critical if we allowed different strategies, see~\cite{bastonsescudero}. All in all, our results suggest that while the Russo-Vallois forward integral allows the insider to make full use of the privileged information, both the Ayed-Kuo and Hitsuda-Skorokhod integrals effectively transform the insider into an uninformed trader.
Let us finish by mentioning that the Ayed-Kuo and Hitsuda-Skorokhod integrals also give
simpler mathematical results. This is an indication, at least to us, of their potential use in different applications, be they financial or otherwise. As the more classical versions of the interpretation of
noise problem show, it is not easy to anticipate what these applications could be, since the right interpretation
should in general be chosen on a problem-by-problem basis~\cite{escudero2}.
\section*{Acknowledgments}
We are grateful to two anonymous referees for their insightful comments and suggestions.
\bibliographystyle{amsplain}
\section{Introduction}
For some years now the cosmological standard model (or $\Lambda$CDM-model) has been widely successful in explaining the vast majority of cosmological observations. Still the nature of two of its key components, dark matter and dark energy, remains a mystery. So far both of these substances have eluded any direct detection and can be seen only through their gravitational effects. This leaves plenty of room for speculation, in particular a comparatively strong coupling between the two dark sectors is possible \cite{Amendola:1999er,Pettorino:2012ts,Amendola:2011ie}.
Furthermore the $\Lambda$CDM model faces two severe theoretical problems. In order to explain the current accelerated expansion of the universe, the cosmological constant $\Lambda$ needs to have a tiny value, contradicting expectations from quantum field theory. This is known as the {\it cosmological constant problem}. The other issue concerns the question why the energy density associated with $\Lambda$ is of the same order of magnitude as the energy density of the universe just recently (the {\it why now} or {\it coincidence problem}).
It is possible to alleviate both of these problems in alternative scenarios where dark energy is dynamical and evolves in time. The first examples of such models, thought up long before the observational discovery of cosmic acceleration, were closely related to the concept of anomalous dilatation symmetry \cite{Wetterich:1987fk,Wetterich:1987fm,Peebles:1987ek,Ratra:1987rm}. Recently, studies of dilatation symmetric theories of gravity in higher dimensions have brought renewed attention to the subject \cite{Wetterich:2008bf,Wetterich:2009az,Wetterich:2010kd}. Amongst such models, those which allow for a dimensional reduction to an effective four-dimensional theory are of particular interest. It was found that for this class, all stable quasistatic solutions of the field equations lead to a vanishing effective four-dimensional cosmological constant. If such a theory is realized as a fixed point of the renormalization group flow of the effective action, it is natural to assume that the cosmological vacuum solution approaches this fixed point for $t \rightarrow \infty$. At the fixed point dilatation symmetry implies the existence of a massless goldstone boson, the dilaton, but for finite times the vacuum solution breaks this symmetry. As a result, the dilaton becomes a pseudo-goldstone boson with a small, time-dependent mass.
In a cosmological setting the dilaton can play the role of a scalar ''cosmon'' field, rolling down its anomalous potential with a slowly decreasing mass and acting as dark energy. The anomalous potentials generated in such models typically give rise to scaling solutions, in which the energy density of the cosmon tracks that of the dominant background fluid, thereby alleviating the coincidence problem \cite{Wetterich:2009az,Copeland:1997et,Liddle:1998xm,Steinhardt:1999nw,Zlatev:1998tr}.
Besides the cosmon, dimensional reduction of higher-dimensional theories of gravity usually gives rise to additional scalar degrees of freedom. In a recent paper \cite{Beyer:2010mt} we discussed a class of simple two-scalar-field models motivated by this scenario and showed that one can obtain a realistic model of coupled dark energy and dark matter already for rather simple anomalous potentials. We named the second field ``bolon'' (from the Greek \textit{bolos} meaning ``lump''), since it is responsible for the formation of structure in the universe in this picture. In this work we will refine our previous analysis to include linear perturbations as well as predictions of abundances of cosmic structure using the extended Press-Schechter (ePS) formalism. We will at several points refer to an accompanying paper \cite{EarlyScalings}, which analyzes the evolution of the cosmon and bolon dynamics during the early universe.
This paper is organized as follows:
In Sec. \ref{sec:Model} we will introduce the coupled cosmon-bolon model and elaborate on the classes of scalar potentials we are investigating. We continue to study the background evolution of these models in Sec. \ref{sec:Background}. This was already discussed in ref. \cite{Beyer:2010mt}, but here we present a much more detailed analysis, the results of which will be essential in our treatment of the linear perturbations. Those will be dealt with in Sec. \ref{sec:LinearPerturbations}, where we deduce an effective description of linear perturbation growth valid for late enough times and provide numerical simulations, building on the results of ref. \cite{EarlyScalings}. Finally, we calculate some testable predictions of our model in Sec. \ref{sec:PressSchechter}, where we apply the ePS mechanism to predict structure and substructure abundances in the universe and compare our results to warm dark matter models as well as the cosmological standard model.
We present our conclusions in Sec. \ref{sec:Conclusion}.
\section{The model}
\label{sec:Model}
The breaking of dilatation symmetry by the cosmological vacuum solution introduces an anomalous potential $V(\varphi,\chi)$ for the cosmon $\varphi$ and the bolon $\chi$. Dilatation symmetry at the fixed point ensures that the potential vanishes for $\varphi \rightarrow \infty$ (we assume, without loss of generality, that the cosmon is rolling towards higher field values throughout its evolution). It is this requirement which makes a dark sector coupling an integral part of our model, since asymptotic dilatation symmetry could not be guaranteed otherwise.
Since the potential arises from anomalies in the quantum effective action, the field equations which can be derived from the principle of stationary action are exact, no additional quantum corrections are present.
We will perform our analysis in the effective four-dimensional theory in the Einstein-frame where the Planck mass is fixed and restrict our attention to a simple class of models for which the scalar potential can be split up as follows:
\begin{equation}
V(\varphi,\chi) = V_1(\varphi) + V_2(\varphi,\chi) \, .
\end{equation}
The quintessence potential we are employing in this work is the often used exponential potential, which arises naturally as an anomalous potential in the context of higher dimensional dilatation symmetric theories \cite{Wetterich:2008bf,Wetterich:2009az,Wetterich:2010kd}, but in other theories of particle physics as well \cite{Wetterich:2008sx}:
\begin{align}
\label{CosmonPotential}
V_1(\varphi) = M^4 {\rm e}^{-\alpha \varphi/M} \, .
\end{align}
The bolon-potential is assumed to have a minimum with a non-vanishing second derivative around which it will stabilize during the later stages of its evolution. Such a behaviour ensures that it will behave like a dark-matter candidate. The specific form of the potential we use is similar to one originally introduced in \cite{Matos:2000ng,Matos:2000ss}, but with an additional coupling to the cosmon field:
\begin{align}
\label{BolonPotential}
V_2(\varphi,\chi) = M^4 c^2 {\rm e}^{-2 \beta \varphi/M} \left( {\rm cosh} \left( \frac{\lambda \chi}{M} \right) -1\right) \, .
\end{align}
The dimensionless constant $c$ in this potential is closely related to the scale of anomalous symmetry breaking of dilatation symmetry, the only mass scale intrinsic to our model other than the Planck scale, as is discussed in more detail in ref. \cite{Beyer:2010mt}.
Asymptotically the bolon-dependence of this potential can be decomposed as follows
\begin{equation}
\label{AsymptoticCoshPotential}
{\rm cosh}\left( \lambda \chi / M \right) - 1 \approx \left\{
\begin{array}{l l}
\frac{1}{2} e^{ \left| \lambda \chi / M \right|} \quad \left| \lambda \chi / M \right| \gg 1 \\
\frac{1}{2} \frac{\lambda^2 \chi^2}{M^2} \quad \; \; \, \left| \lambda \chi / M \right| \ll 1 \\
\end{array}
\right. \, ,
\end{equation}
which ensures the required quadratic $\chi$-dependence for small field values, with a mass given by
\begin{equation}
m_\chi(\varphi)^2 = c^2 M^2 \lambda^2 {\rm e}^{-2 \beta \varphi / M} \, .
\end{equation}
For larger field values the potential gets much steeper, which is a necessary feature for a model exhibiting scaling solutions that lead to an insensitivity of the cosmological evolution to initial conditions for a wide range of initial field values.
Furthermore, at the level of linear perturbations, it ensures the existence of a dominant adiabatic mode in the early-universe evolution, i.e. it suppresses potentially strongly growing isocurvature modes which can be present in the case of a ``frozen'' field or power-law potentials (see the accompanying paper ref. \cite{EarlyScalings} for more details on this).
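The asymptotic decomposition of the ${\rm cosh}$ term is elementary but easy to verify numerically. The following Python snippet (the sample values of $y = \lambda\chi/M$ are chosen purely for illustration) checks both limits.

```python
# Numerical check of the asymptotics of cosh(y) - 1 with y = lambda*chi/M:
# quadratic y^2/2 for |y| << 1 and exponential e^{|y|}/2 for |y| >> 1.
import numpy as np

y_small, y_large = 1e-3, 20.0  # illustrative field values

quad = 0.5 * y_small**2
exact_small = np.cosh(y_small) - 1.0
print(abs(exact_small / quad - 1.0) < 1e-6)   # True: quadratic limit

expo = 0.5 * np.exp(abs(y_large))
exact_large = np.cosh(y_large) - 1.0
print(abs(exact_large / expo - 1.0) < 1e-8)   # True: exponential limit
```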
We choose $\alpha > 0$ and $\lambda > 0$ throughout this paper, which can always be achieved by a suitable redefinition of $\varphi$ and $\chi$. Our motivation then requires $\beta > 0$ also. We will however at some stages refrain from this constraint and consider $\beta < 0$ as well.
Interesting generalizations of our model which can also arise from dilatation anomalies include a $\varphi$-dependent exponential $\alpha(\varphi)$ in the quintessence potential, a $\varphi$-dependent coupling $\beta(\varphi)$ or a non-constant minimum of the bolon-potential $\chi_0(\varphi)$. We will discuss some of these possibilities and their connection to an accelerated late-time expansion of the universe below.
The common part of the potential introduces a coupling between the two scalar fields. The form of this coupling can be easily derived from (non-)conservation of the energy-momentum tensor, which reads for a generic component of the cosmic fluid (denoted by subscript $\alpha$)
\begin{equation}
\mathcal{D}_\mu T^{\mu \nu}_{\alpha} = Q^\nu_{\alpha} \, .
\end{equation}
The total energy-momentum tensor is of course conserved and therefore the sum of all couplings is subject to the constraint
\begin{equation}
\sum_{\alpha} Q_{\alpha}^\nu = 0 \, .
\end{equation}
Furthermore, in an FLRW-background, each coupling is constrained by the usual symmetry assumptions of spatial isotropy and homogeneity, implying
\begin{equation}
\label{CouplingDefinition}
Q^{0 \, \mu}_{\alpha} = (-a Q_{\alpha},0,0,0) \, ,
\end{equation}
where the superscript $0$ denotes a background quantity. We introduce a commonly used dimensionless coupling via
\begin{equation}
\label{DimensionlessCouplingDefinition}
q_{\alpha} \equiv \frac{aQ_{\alpha}}{3h(1+\omega_{\alpha})\rho_{\alpha}} \, ,
\end{equation}
where the background energy-densities and pressure for the scalar fields are defined as follows:
\begin{align}
\label{EnergyDefinition}
\rho_{\varphi} =& \frac{1}{2 a^2} \varphi'^2 + V_1(\varphi) \, , \quad
\rho_\chi =& \frac{1}{2a^2} \chi'^2 + V_2(\varphi,\chi) \, , \\
\label{PressureDefinition}
p_{\varphi} =& \frac{1}{2 a^2} \varphi'^2 - V_1(\varphi) \, , \quad
p_\chi =& \frac{1}{2a^2} \chi'^2 - V_2(\varphi,\chi) \, .
\end{align}
Here and below a prime denotes a derivative with respect to conformal time. We will always assume standard kinetic terms for both scalar fields throughout this work. Non-standard kinetic terms can arise in the process of dimensional reduction \cite{Wetterich:2009az,Wetterich:2010kd} and can have interesting cosmological consequences, for example in k-essence models \cite{Garriga:1999vw,ArmendarizPicon:1999rj,ArmendarizPicon:2000dh,Chiba:1999ka,ArmendarizPicon:2000ah}, but we will not consider them further here.
The equation of energy conservation can now be written as
\begin{equation}
\label{GenericEnergyConservation}
\rho_{\alpha}' = -3h(1-q_{\alpha})(1+\omega_{\alpha})\rho_{\alpha} \, ,
\end{equation}
where (as usual) $\omega_{\alpha} = p_{\alpha}/\rho_{\alpha}$ is the equation of state of the component denoted by $\alpha$ and $h=a'/a$ is the conformal Hubble rate. For our model the dimensionless couplings read
\begin{align}
\label{CosmonCoupling}
q_\varphi & = - \frac{\varphi' V_{2,\varphi}}{3h\rho_\varphi (1+\omega_\varphi)} \, , \\
\label{BolonCoupling}
q_\chi & = - \frac{(1+\omega_\varphi) \rho_\varphi}{(1+\omega_\chi) \rho_\chi} q_\varphi = \frac{\varphi' V_{2,\varphi}}{3h\rho_\chi (1+\omega_\chi)} \, .
\end{align}
We note that a coupling to the trace of the energy momentum tensor, as is often assumed in models of coupled quintessence \cite{Amendola:1999er}, is not inherent to our model. It will however arise as a consequence of the dynamical evolution of both fields during the later stages of the evolution of the universe, as we will see in the next section \ref{sec:Background}. The same holds true for perturbations at the linear level, which we will discuss in section \ref{sec:LinearPerturbations}.
\section{Background evolution}
\label{sec:Background}
\subsection{Basic equations}
\label{sec:BackgroundEquations}
In a standard Friedmann-Lemaitre-Robertson-Walker (FLRW) universe, the field equations for two canonical scalar fields which couple through their potential read
\begin{align}
\label{CosmonFieldEquation}
\varphi'' + 2h\varphi' + V_{,\varphi} &= 0 \, , \\
\label{BolonFieldEquation}
\chi'' + 2h\chi' + V_{,\chi} &=0 \, ,
\end{align}
where $V_{,\varphi}$ and $V_{,\chi}$ are derivatives of the potential with respect to the scalar fields. Einstein's equations give the usual Friedmann equations:
\begin{align}
\label{FriedmannEquation}
h^2 &= \frac{a^2}{3M^2} \sum_\alpha \rho_\alpha \, , \\
\label{FriedmannEquation2}
h' &= -\frac{a^2}{6 M^2} \sum_\alpha \left( \rho_\alpha+ 3 p_\alpha \right) \, ,
\end{align}
where $M$ is the reduced Planck mass. The index $\alpha$ runs over all particle species present in the universe. In this work we use the common convention that the scale factor today should equal 1.
\subsection{Tracking in the early universe}
\label{sec:TheEarlyUniverse}
The dynamics of the evolution of the cosmon-bolon system in the early universe have been investigated in an accompanying paper \cite{EarlyScalings}, using the approximation of the common potential given by equation (\ref{AsymptoticCoshPotential}) for large field values. A dynamical system analysis revealed that the existing stable scaling solutions split the parameter space for this model into disjoint sections. The scaling solution relevant for our scenario is the one denoted by R4 in ref. \cite{EarlyScalings}, which is the only one allowing for a radiation-like expansion with a continuous range of couplings extending from the negative to the positive realm. It is also the unique stable scaling solution for the range of parameters which are phenomenologically interesting, as we will see in section \ref{sec:LinearPerturbations}. All other stable fixed points either do not provide a realistic early cosmology (i.e. a radiation-like expansion) or require large couplings, which can be observationally excluded, as we discuss below. The constraints on the model-parameters arising from the demands of existence and stability of this fixed point can be read off Table II in ref. \cite{EarlyScalings} and are given by
\begin{equation}
\label{EarlyUniverseParameterConstraints}
\beta/\alpha < \frac{1}{2} \, , \; \frac{\alpha \beta}{\lambda^2 + 4 \beta^2} \leq \frac{1}{2} \, , \; \alpha>2 \, , \; \lambda^2 > \frac{4(2\beta - \alpha)^2}{\alpha^2-4} \, .
\end{equation}
For the positive range $0 < \beta / \alpha < 1/2$ the additional condition
\begin{equation}
\lambda^2 > 2 \beta (\alpha-2 \beta)
\end{equation}
is required.
We will therefore restrict ourselves to this parameter range from now on.
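The conditions above are straightforward to encode. The following Python helper is a sketch (the sample parameter triples are hypothetical, not fitted to any data) that checks whether a triple $(\alpha,\beta,\lambda)$ lies in the allowed region.

```python
# Sketch of a checker for the existence/stability conditions of the early
# radiation-like scaling solution; sample parameter values are hypothetical.
def early_scaling_allowed(alpha, beta, lam):
    # Conditions from eq. (EarlyUniverseParameterConstraints); the order
    # matters: alpha > 2 is tested before dividing by alpha**2 - 4.
    ok = (beta / alpha < 0.5
          and alpha * beta / (lam**2 + 4 * beta**2) <= 0.5
          and alpha > 2
          and lam**2 > 4 * (2 * beta - alpha)**2 / (alpha**2 - 4))
    if 0 < beta / alpha < 0.5:  # additional condition for positive beta
        ok = ok and lam**2 > 2 * beta * (alpha - 2 * beta)
    return ok

print(early_scaling_allowed(10.0, 0.5, 8.0))  # True: sample allowed point
print(early_scaling_allowed(1.5, 0.5, 8.0))   # False: violates alpha > 2
```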
The evolution of the scalar fields during this phase of the cosmic evolution can be obtained analytically from Table I in ref. \cite{EarlyScalings} and reads
\begin{align}
\label{EarlyPhiEvolution}
\varphi(h) &= - \frac{M}{\alpha} {\rm ln}\left[ \text{f}(\alpha, \beta, \lambda) \frac{h^2}{a^2 M^2} \right] \, , \\
\label{EarlyChiEvolution}
\chi(h) &= \frac{M}{\lambda} {\rm ln} \left[ 8 (1-2\beta/\alpha) \frac{h^2}{a^2 m_\chi(\varphi)^2} \right] \, ,
\end{align}
where
\begin{equation}
\text{f}(\alpha,\beta,\lambda) = \frac{4 (\lambda^2 + 4 \beta^2 -2 \alpha \beta)}{\alpha^2 \lambda^2} \, .
\end{equation}
The energy densities of the cosmon and the bolon contribute only a fraction of the total early energy density of the universe for this solution and the scalar density parameters are given by
\begin{align}
\label{omegaPhiES}
\Omega_{\varphi,{\rm es}} = \frac{4}{\alpha^2} + \frac{8}{3} \frac{\beta/\alpha (-1+2 \beta/\alpha)}{\lambda^2} \, , \\
\label{omegaChiES}
\Omega_{\chi,{\rm es}} = \frac{4}{\lambda^2} + \frac{8}{3} \frac{\beta/\alpha (-5+4 \beta/\alpha)}{\lambda^2} \, ,
\end{align}
where the subscript ``es'' stands for ``early scaling''.
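For orientation, the following snippet evaluates these density parameters at a hypothetical parameter point (chosen to satisfy the constraints quoted above, not fitted to observations); both come out at the level of a few percent, consistent with the scalars contributing only a fraction of the early energy density.

```python
# Illustrative evaluation of the early-scaling density parameters
# Omega_phi,es and Omega_chi,es; (alpha, beta, lam) are hypothetical.
alpha, beta, lam = 10.0, 0.5, 8.0
r = beta / alpha

omega_phi = 4.0 / alpha**2 + (8.0 / 3.0) * r * (-1.0 + 2.0 * r) / lam**2
omega_chi = 4.0 / lam**2 + (8.0 / 3.0) * r * (-5.0 + 4.0 * r) / lam**2

print(omega_phi, omega_chi)                         # a few percent each
print(0 < omega_phi < 0.1 and 0 < omega_chi < 0.1)  # True
```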
The stability of these solutions is of course only guaranteed as long as baryons can be neglected and the exponential approximation for the bolon-potential is valid. Both of these assumptions will eventually be violated and the early scaling solution will be broken by a transition to a matter-dominated era, where the bolon quickly oscillates around the minimum of its potential and acts like dark matter. To estimate when this happens, we simply extrapolate the solution of the bolon evolution during the early scaling epoch until it reaches a value of $M/\lambda$. According to equation (\ref{EarlyChiEvolution}) this happens when $h^2/a^2 m_\chi(\varphi)^2 \approx {\rm e}/8(1-2\beta/\alpha)$, which is of order $1$.
\subsection{The late universe}
\label{sec:LateUniverse}
For sufficiently late times the cosmic evolution drives the bolon towards the minimum of its potential, so that it eventually acquires very small field values $\chi/M$. For such small deviations from the minimum we can approximate the bolon potential by
\begin{equation}
V_2(\varphi,\chi) = \frac{m_\chi(\varphi)^2}{2} \chi^2 \, ,
\end{equation}
with $m_\chi(\varphi) = m_0 \, {\rm exp}(-\beta \varphi / M)$ and $m_0 \equiv Mc\lambda$. The dynamics of the bolon in this regime depend on the ratio $\mu \equiv h/am_\chi(\varphi)$. For $\mu > 1$ the Hubble friction is strong enough to keep the bolon field frozen at some (almost) constant value, whereas for $\mu < 1$ we expect rapid oscillations to occur. As we have seen in section \ref{sec:TheEarlyUniverse}, a scaling scenario in the early universe will deliver the bolon to field values of the order $M/\lambda$ when $\mu \approx 1$. We can track the subsequent behavior of this parameter by employing the following formulae:
\begin{align}
\label{MuHEquation}
\mu =& \frac{h}{a m_0} {\rm e}^{\beta \varphi/M} = \frac{h}{a m_0} \left( 3 \frac{h^2}{a^2 M^2} \Omega_{\varphi,{\rm pot}} \right)^{-\beta/\alpha} \, , \\
\label{dMuEquation}
\mu' =& \mu h \left(-\frac{3}{2} (1+\omega_{\rm eff}) + \beta \sqrt{6 \Omega_{\varphi,{\rm kin}}} \right) \, .
\end{align}
If the quintessence field exhibits a scaling or tracking behavior, i.e. the density parameter $\Omega_\varphi$ is (almost) constant and not bigger than a few percent, equation (\ref{dMuEquation}) directly tells us that $\mu$ is decreasing (as long as $H=h/a$ is getting smaller, which is a very generic requirement, and $\beta$ is not excessively large). The bound on the coupling $\beta$ we get from equation (\ref{MuHEquation}) in conjunction with the requirement of a decreasing $\mu$ is:
\begin{equation}
\beta/\alpha < 1/2 \, .
\end{equation}
This is a restriction we already found in section \ref{sec:TheEarlyUniverse} by requiring the existence of a radiation-dominated scaling solution in the early universe (see equation (\ref{EarlyUniverseParameterConstraints})), and it is also not in conflict with the bound coming from the existence of a bolon-dominated scaling solution in the late universe (see below) for our specific model. Furthermore, equation (\ref{MuHEquation}) depends crucially on the exponential shape of the quintessence potential, while the conclusion we have drawn from equation (\ref{dMuEquation}) is much more general and is valid for all quintessence fields with a canonical kinetic term and a potential exhibiting scaling or tracking behavior. We have therefore shown that a decreasing $\mu$ is a quite generic feature in realistic quintessence scenarios and we will now consider the dynamics in the regime $\mu \ll 1$.
The evolution of this system has already been described in ref. \cite{Beyer:2010mt}, but we will present a much more detailed analysis here, which we will need later when we treat linear perturbations in section \ref{sec:LinearPerturbations}. The method we apply is derived from one used for the case of a single scalar field in a harmonic potential (see e.g. ref. \cite{Gorbunov:2011zzc}), which we will now generalize to our coupled scenario.
The basic idea is to first expand all dynamical quantities in a Taylor-series in $\tilde{\mu}=h_0/m_0$. Since $\tilde{\mu}=\mu \, {\rm e}^{\beta \varphi/M} \, a h_0/h$, this quantity is always smaller than $\mu$ for $a<1$ as long as the coupling is not too large, but has the advantage of being time-independent. To give an example, the bolon-field $\chi$ can be expanded as follows:
\begin{equation}
\label{MuExpansion}
\chi = \sum_j \tilde{\mu}^j \chi_j \, .
\end{equation}
We then expand each of the Taylor coefficients $\chi_j$ in a Fourier-type sum, given by
\begin{equation}
\label{FourierTypeExpansion}
\chi_j = \sum_{n \in \mathbb{N}} \chi_{j1,n} {\rm cos}(nx) + \chi_{j2,n} {\rm sin}(nx) \, .
\end{equation}
Here the coefficients $\chi_{j1,n}$ and $\chi_{j2,n}$ are of course time-dependent, but evolve slowly, i.e. they remain almost constant on a time scale $1/m_\chi(\varphi)$. The oscillation frequencies are also time-dependent and given by multiples of the base frequency
\begin{equation}
\label{baseFrequency}
x(\eta)=\int_{t_0}^{t(\eta)} m_\chi(\varphi(t')) dt' \, ,
\end{equation}
with $t$ being the cosmic time and $t_0$ some suitable early time, chosen such that any phase potentially appearing in the trigonometric functions gets cancelled. The whole expression should of course be read as a function of conformal time.
Which frequencies appear at which order in $\tilde{\mu}$ can now be seen simply by plugging the most general ansatz into the field equations (\ref{CosmonFieldEquation}) - (\ref{FriedmannEquation}) and comparing coefficients for each trigonometric function at each order in $\tilde{\mu}$. The whole set of resulting differential equations for the Taylor-coefficients can be found in appendix \ref{app:bgExpansion}. Here we simply give the results.
The scale factor $a$, the Hubble parameter $h$, the cosmon field $\varphi$ and all additional energy densities (i.e. photons, neutrinos and baryons) evolve slowly to leading order. We denote the first term in the Taylor expansion (\ref{MuExpansion}) of such adiabatic quantities with a bar, e.g. $\bar{a}$. The only quantity which is oscillatory at leading order is the bolon field $\chi$; its leading order coefficient is $\chi_{11,1}$, which we denote by $\chi_0$ for simplicity.
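Spelled out, the expansions (\ref{MuExpansion}) and (\ref{FourierTypeExpansion}) thus give for the bolon the leading-order form
\begin{equation}
\chi = \tilde{\mu} \, \chi_0 \, {\rm cos}(x) + \mathcal{O}(\tilde{\mu}^2) \, ,
\end{equation}
a single oscillation mode with a slowly varying amplitude.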
Evaluating the Friedmann equation to leading order gives
\begin{equation}
\label{AveragedFriedmannEquation}
\bar{h}^2 = \frac{\bar{a}^2}{3M^2} \left( \bar{\rho}_\chi + \bar{\rho}_\varphi + \bar{\rho}_{\rm ext} \right) \, ,
\end{equation}
where $\bar{\rho}_\chi$ and $\bar{\rho}_\varphi$ denote the (non-oscillatory) leading order contributions to the bolon- and cosmon energy-densities, respectively. These are given by
\begin{align}
\bar{\rho}_\chi &= \frac{1}{2} \bar{m}_\chi^2(\bar{\varphi}) \chi_0^2 \, , \\
\bar{\rho}_\varphi &= V_1(\bar{\varphi}) + \frac{1}{2 \bar{a}^2} \bar{\varphi}'^2 \, ,
\end{align}
where $\bar{m}_\chi(\bar{\varphi})$ simply denotes the leading order term in the $\tilde{\mu}$-expansion for the mass, which is of course only $\bar{\varphi}$-dependent:
\begin{equation}
\bar{m}_\chi(\bar{\varphi}) = m_0 {\rm e}^{-\beta \bar{\varphi}/M} \, .
\end{equation}
We will drop the explicit $\bar{\varphi}$-dependence of $\bar{m}_\chi$ in all the formulas below. The additional quantity $\bar{\rho}_{\rm ext}$ labels the sum of all energy densities which are present in addition to the scalar fields, in particular photons, neutrinos and baryons.
From the bolon field equation (see appendix \ref{app:bgExpansion}) we can directly see that $\chi_0 \propto \bar{a}^{-3/2} {\rm exp}(\beta \bar{\varphi} / 2M) $ and therefore
\begin{equation}
\label{AveragedBolonSolution}
\bar{\rho}_\chi \propto \bar{a}^{-3} {\rm exp} (-\beta \bar{\varphi}/M) \, .
\end{equation}
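The intermediate step is elementary: since $\bar{\rho}_\chi = \frac{1}{2} \bar{m}_\chi^2 \chi_0^2$,
\begin{equation}
\bar{m}_\chi^2 \chi_0^2 \propto {\rm e}^{-2\beta \bar{\varphi}/M} \cdot \bar{a}^{-3} \, {\rm e}^{\beta \bar{\varphi}/M} = \bar{a}^{-3} \, {\rm e}^{-\beta \bar{\varphi}/M} \, .
\end{equation}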
The cosmon evolution is governed by the following equation
\begin{align}
\label{LeadingOrderCosmonEquation}
\bar{\varphi}'' &= -2 \bar{h} \bar{\varphi}' - \bar{a}^2 V_{1,\bar{\varphi}} + \frac{\beta}{M} \bar{a}^2 \bar{\rho}_\chi \, .
\end{align}
Using our expansion method we have recovered the expected result that to leading order the cosmon and bolon form a system of coupled quintessence governed by the equations (\ref{AveragedFriedmannEquation}), (\ref{LeadingOrderCosmonEquation}) and the equation of energy conservation derived from equation (\ref{AveragedBolonSolution}):
\begin{equation}
\bar{\rho}_\chi' + 3 \bar{h} \bar{\rho}_\chi = -\beta \frac{\bar{\varphi}'}{M} \bar{\rho}_\chi \, .
\end{equation}
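Explicitly, this conservation law is obtained by taking the logarithmic derivative of equation (\ref{AveragedBolonSolution}):
\begin{equation}
\frac{\bar{\rho}_\chi'}{\bar{\rho}_\chi} = -3 \frac{\bar{a}'}{\bar{a}} - \beta \frac{\bar{\varphi}'}{M} = -3 \bar{h} - \beta \frac{\bar{\varphi}'}{M} \, .
\end{equation}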
If we include additional components into our cosmic fluid, we will of course have to add the corresponding equations of energy conservation for those as well.
This system was already analyzed in ref. \cite{Beyer:2010mt}. One should note that for the range of parameters we are investigating it allows for accelerated expansion only in the case of large negative $\beta$, specifically $\alpha < -2 \beta$. This is not only inconsistent with our original motivation of an asymptotically vanishing bolon mass, but also excludes the possibility of a prolonged matter-dominated epoch in our scenario, as we have checked numerically.\footnote{This is of course connected to the fact that the coupled quintessence scenario does not describe our model in the early universe. Within a pure coupled quintessence model realistic accelerated cosmologies are of course possible and were discussed in ref. \cite{Amendola:1999er}.} The universe simply transitions quickly from radiation domination to accelerated expansion in this case.
Furthermore current bounds on coupled quintessence models already rule out such strong couplings (see e.g. ref. \cite{Pettorino:2012ts}), as we will discuss further below.
We therefore exclude this possibility as unrealistic and focus on smaller couplings.
For these, our cosmology will quickly adjust itself to a matter-dominated scaling solution (see ref. \cite{Beyer:2010mt}), with the bolon energy density scaling slightly differently from baryons due to the coupling. The parameter bounds resulting from the conditions of existence and stability of this solution can be found in ref. \cite{Amendola:1999er} and read:
\begin{align}
\left| \alpha - \beta \right| >&\sqrt{\frac{3}{2}} \, , \quad \alpha < \left( \beta + 3/\beta \right) \, , \quad \alpha > \sqrt{\frac{2}{3}} 4 \beta \nonumber \\
& {\rm and } \quad \alpha > \frac{1}{2} \left( \beta + \sqrt{12+\beta^2} \right) \, .
\end{align}
None of these constraints are in conflict with the ones we found above from considerations of an attractive radiation dominated era in the early universe or the condition of a decreasing $\mu$. All parameter choices used below will respect all the bounds mentioned in this section.
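As a quick numerical sanity check, the bounds collected so far can be evaluated for a given parameter pair. The following sketch (the function name is ours, not from any published code) tests the stability conditions above together with the constraint $\beta/\alpha < 1/2$:

```python
import math

def satisfies_bounds(alpha, beta):
    """Check the scaling-solution bounds quoted above for (alpha, beta):
      |alpha - beta| > sqrt(3/2),
      alpha < beta + 3/beta   (trivially satisfied for beta = 0),
      alpha > sqrt(2/3) * 4 * beta,
      alpha > (beta + sqrt(12 + beta**2)) / 2,
    together with beta/alpha < 1/2 from the decreasing-mu requirement."""
    conds = [
        abs(alpha - beta) > math.sqrt(3 / 2),
        beta == 0 or alpha < beta + 3 / beta,
        alpha > math.sqrt(2 / 3) * 4 * beta,
        alpha > (beta + math.sqrt(12 + beta**2)) / 2,
        beta / alpha < 1 / 2,
    ]
    return all(conds)

# A steep cosmon potential with a weak coupling passes all bounds,
# while a strong coupling violates alpha < beta + 3/beta:
print(satisfies_bounds(10.0, 0.1))   # True
print(satisfies_bounds(10.0, 5.0))   # False
```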
\subsection{Accelerated expansion}
As was already pointed out, the specific model treated here, with the restrictions on parameters from sections \ref{sec:TheEarlyUniverse} and \ref{sec:LateUniverse}, does not result in an accelerated expansion. We are not too concerned about this issue, since there are several ways out of it, as we already discussed in ref. \cite{Beyer:2010mt}. A slightly different form of the quintessence potential with a $\varphi$-dependent $\alpha$ \cite{Wetterich:1994bg}, a non-standard kinetic term \cite{Hebecker:2000zb}, a $\varphi$-dependent minimum in the bolon potential \cite{Beyer:2010mt}, a $\varphi$-dependent coupling $\beta$ \cite{TocchiniValentini:2001ty} or an additional coupling to other components of the cosmic fluid (e.g. neutrinos, see \cite{Amendola:2007yx}) can all break the scaling behaviour and effectively stop the evolution of the cosmon, leading to an accelerated expansion. We will choose one of these possibilities in our numerics below in order to get realistic results.
\subsection{Observational parameter bounds}
Current experimental bounds on the model parameters go far beyond the theoretical limits cited above, which merely originate from the desire to obtain a realistic cosmology independent of model parameters. To our knowledge the most stringent observational bounds on a coupled theory such as ours come from big bang nucleosynthesis (BBN) \cite{Ferreira:1997au,Ferreira:1997hj,Bean:2001wt} and cosmic microwave background (CMB) observations \cite{Calabrese:2011hg,Reichardt:2011fv,Xia:2013dea,Pettorino:2013ia}.
Let us start with the BBN bounds. Adding a tracking quintessence field to the early radiation-dominated era modifies the expansion of the universe and thus standard big bang nucleosynthesis. Observations of the abundances of the lightest elements in the cosmos can therefore put an upper bound on the quintessence density parameter; Bean, Hansen and Melchiorri set it at $0.045$, the tightest constraint known to us \cite{Bean:2001wt}. In our model this should be seen as a bound on the combined scalar density parameter $\Omega_{sc,es}=\Omega_{\varphi,es}+\Omega_{\chi,es}$ as given in equations (\ref{omegaPhiES}) and (\ref{omegaChiES}).
Constraints from the CMB on a tracking scalar quintessence component are strong whenever the scalar field makes up a non-negligible fraction of the energy density of the universe at decoupling, which is the case in our model. Recently bounds from this era have been improved to give an upper limit of about $0.02$ \cite{Pettorino:2013ia}, which in our model has to be interpreted as a bound on the quintessence density parameter alone, since the bolon has already started to oscillate at decoupling and does not follow its scaling solution anymore. From ref. \cite{Amendola:1999er} one directly obtains
\begin{equation}
\Omega_{\varphi,cq} = \frac{3-\alpha \beta + \beta^2}{(\alpha-\beta)^2} \lesssim 0.02 \, .
\end{equation}
The subscript ``cq'' stands for coupled quintessence.
Furthermore CMB analyses of coupled quintessence models also put a bound on the coupling \cite{Pettorino:2012ts}, currently at the order of $\beta^2 \lesssim 0.01$.
These CMB bounds were derived for standard cold dark matter coupled to quintessence, but we expect similar constraints to hold in our scenario. As we will see below, the evolution of perturbations in our model differs from that in coupled cold dark matter models, but we expect these differences to be largely irrelevant for the CMB constraints, as they only become important on scales much smaller than the ones corresponding to the multipole moments where the CMB has the most constraining power.
Future constraints using Planck and Euclid data sets are expected to improve these bounds by about two orders of magnitude \cite{Amendola:2011ie}.
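To illustrate how restrictive these limits are, the following sketch (our own illustration, not taken from any published analysis) evaluates the coupled-quintessence density parameter from the formula above for two parameter choices:

```python
def omega_phi_cq(alpha, beta):
    """Quintessence density parameter on the coupled-quintessence scaling
    solution: Omega = (3 - alpha*beta + beta**2) / (alpha - beta)**2."""
    return (3 - alpha * beta + beta**2) / (alpha - beta) ** 2

# The CMB limit Omega <~ 0.02 pushes alpha to rather large values:
print(round(omega_phi_cq(10.0, 0.1), 4))   # ~0.0205, marginally excluded
print(round(omega_phi_cq(13.0, 0.1), 4))   # ~0.0103, allowed
```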
\subsection{Parameter adjustment}
At the background level our model has four parameters determining the behavior of the two scalar fields: the exponents $\alpha$ and $\lambda$, the coupling $\beta$ and the mass parameter $c^2$. To fully determine the background evolution (after some suitable very early time, in particular after neutrino decoupling and electron-positron annihilation) we also have to fix the current radiation density $\rho_{r,0}$ and the baryon density $\rho_{b,0}$. Adopting a procedure introduced in ref. \cite{Matos:2000ss}, we can predict the current density parameters for both the bolon and the cosmon from the fundamental model parameters.
First, we define $a^*$ as the scale factor at which oscillations start, i.e. when $\chi=M/\lambda$. As an approximation, we then extrapolate the analytically known bolon evolution for the early scaling solution (eq. (\ref{EarlyChiEvolution})) up to that point, also assuming that $h \propto 1/a$ still holds. Furthermore we can estimate the cosmon value $\varphi^* = \varphi(a^*)$ by extending the analytic formula for the early scaling solution given by equation (\ref{EarlyPhiEvolution}). This gives the following estimate for $a^*$:
\begin{equation}
\label{aStarEq1}
a^* = \left( \frac{\rho_{r,0}}{3 M^4} \right)^\frac{1}{4} \left[ \frac{8 (1-2\beta/\alpha)\, {\rm f}^{-2\beta/\alpha} }{{\rm e} c^2 \lambda^2 (1-\Omega_{sc,es})^{1-2\beta/\alpha}} \right]^\frac{1}{4(1-2 \beta/\alpha)}
\end{equation}
where $\Omega_{sc,es}=\Omega_{\varphi,es} + \Omega_{\chi,es}$ and $\text{f}=\text{f}(\alpha,\beta,\lambda)$ as defined in section \ref{sec:TheEarlyUniverse}.
To make contact with current energy densities, we can assume that from $a^*$ onwards the bolon follows the evolution determined by eq. (\ref{AveragedBolonSolution}). Since the coupling causes the bolon density to scale slightly differently from the baryons, we cannot simply fix the ratio of the two energy densities, but have to specify $\rho_{\chi}$ at some redshift, say at $z=0$. Once this is set, all that remains to consider is the cosmon energy density, which should dominate the cosmic evolution at late times. In particular we need to stop the evolution of the cosmon at some low redshift $\tilde{z}$ (approximately $5$). The value of the cosmon $\varphi_0 = \varphi(z=0) \approx \varphi(\tilde{z})$ can then simply be obtained by extending the late-time cosmon scaling solution for coupled quintessence to $\tilde{z}$. Extrapolating this evolution back to $a^*$ and estimating the bolon energy density at $a^*$ by the one given by the scaling solution then gives
\begin{equation}
\label{aStarEq2}
a^*=\frac{\rho_{r,0}}{\rho_{\chi,0}} \frac{\Omega_{\chi,es}}{1-\Omega_{sc,es}} {\rm e}^{\beta (\varphi^*-\varphi_0)/M} \, ,
\end{equation}
where we have not yet inserted the cosmon evolution in order to keep the equation simple.
Plugging in this information using the approximations described above and equating the right-hand sides of (\ref{aStarEq1}) and (\ref{aStarEq2}) gives us an approximate expression for the current bolon energy density $\rho_{\chi,0}$ given a mass, or vice versa. We can turn this into an exact expression by including a numerical adjustment factor $N$:
\begin{align}
\label{c2Guess}
\left(\frac{\rho_{\chi,0}}{\rho_{r,0}}\right)^{1-\beta/\alpha}=& N \, \Omega_{\chi,es} (1-\Omega_{sc,es})^{-3/4} \left( \frac{\rho_{r,0}}{M^4} \right)^{-\frac{1}{4}+\beta/\alpha} \nonumber \\
&\left(\frac{8 (1-2\beta/\alpha)}{3{\rm e} \lambda^2 c^2} \right)^{-\frac{1-4\beta/\alpha}{4(1-2\beta/\alpha)}} \left( \frac{{\rm f}}{3} \right)^\frac{-\beta/\alpha}{2(1-2\beta/\alpha)} \nonumber \\
& \left( 6^3 (1+\tilde{z}) \frac{\rho_{\chi,0}}{\rho_{r,0}} \frac{{\rm g}}{2-{\rm g}} \tilde{\rm g} \right)^{\beta/\alpha} \, ,
\end{align}
where
\begin{align}
\text{g}=&\text{g}(\alpha,\beta,\lambda)= 1 + \frac{6 \beta^2 - 6 \alpha \beta + 18}{6 (\alpha - \beta)^2} \quad {\rm and} \\
\tilde{\text{g}} =& \tilde{\text{g}} (\alpha,\beta,\lambda) = \text{g}(\alpha, \beta, \lambda) - \frac{3}{2 (\alpha - \beta)^2} \, .
\end{align}
The adjustment factor $N$ is of order $1$, but always smaller, roughly $0.4$. This is due to the fact that estimating the bolon energy density at $a^*$ by the scaling solution is quite accurate, but using the averaged evolution from that point on leads to an overestimate of $\rho_{\chi,0}$, since the bolon energy dilutes faster than non-relativistic matter during the early oscillatory phase. Furthermore $N$ is a function of the model parameters $\alpha$, $\beta$ and $\lambda$ as well as the energy densities $\rho_{r,0}$, $\rho_{\chi,0}$ and the redshift $\tilde{z}$. It will of course also take (slightly) different values depending on which scenario is chosen to achieve late-time cosmic acceleration. For practical purposes, we will choose one such scenario in section \ref{sec:Numerics}, where we present numerical results and determine $N$ (for fixed $\tilde{z}=5$, $\rho_{r,0}$ and $\rho_{\chi,0}$) by running through a grid on $c^2$ and the remaining model parameters.
\section{Linear Perturbations}
\label{sec:LinearPerturbations}
\begin{comment}
In this section we will turn our attention to the evolution of perturbations in the early universe in models containing two non-minimally coupled scalar fields. We will eventually come back to the specific Cosmon-bolon potential given in equation (\ref{CosmonBolonPotential}), but in a first analysis we will deal with a more generic setup, consisting of two canonical scalar fields with a common potential which can be decomposed as follows:
\begin{equation}
\label{GenericTwoScalarPotential}
V(\varphi,\chi) = V_1(\varphi) + V_2(\chi) \,
\end{equation}
with
\begin{equation}
V_2(\varphi,\chi) = {\rm e}^{-2 \beta \varphi/M} \tilde{V}_2(\chi) \, .
\end{equation}
By "early universe" we mean an era during which the cosmic expansion proceeds as if dominated by radiation, i.e. $h(\eta) =\eta^{-1}$. Always keeping in mind that the bolon is supposed to be a dark matter candidate, we restrict its potential $\tilde{V}_2$ to be (nearly) harmonic for sufficiently small field values. For large field values however, there is no such constraint and we can envision a very different functional $\chi$-dependence. In order to avoid fine-tuning problems, we favor models with a steeper potential for larger field values, since those typically give rise to tracker-solutions at the background level, leading to a cosmic evolution determined by model parameters alone for a very large range of initial conditions \cite{...}. Furthermore, steep (in particular exponential) potentials arise naturally in many contexts in particle physics (see e.g. \cite{...}). We will therefore consider a background-evolution during which both fields follow a tracking-type behavior, in particular the equations of state are assumed to be (almost) constant. Note that this does not imply a strongly subdominant role of the scalar fields or equations of state of exactly $1/3$. The coupling between the two fields enables a wide range of possible $\omega_\varphi$'s and $\omega_\chi$'s while still allowing for a radiation-like expansion ($a(\eta) \propto \eta$) and non-negligible scalar field contributions to the energy-densities already in very simple models, as we will see below.
Former studies of perturbations in the early universe have usually been limited to cosmologies containing either no scalar fields \cite{...}, only scalar fields (usually in the context of inflation) \cite{...} or a single quintessence field. While the results of these works, in particular \cite{...}, are helpful when analyzing our system in some respects, they do differ considerably from our model in others and are not suited to draw the specific initial conditions we need for our numerical simulations below. The main difference is that all these models consider only a single scalar field in the presence of a background fluid, which typically contains a cold dark matter component. In our model the cold dark matter component gets replaced by a second scalar field. It is of course possible to match the field evolution onto a fluid description, but its properties are quite different from a CDM component, even a coupled one. Furthermore we are aware of only one study which systematically analyzes the evolution of superhorizon-perturbations in a coupled quintessence model \cite{...}. This analysis is however limited to a very specific form of the coupling given by $Q_\varphi = - \beta \rho_{\rm cdm}$, while our simple potential (\ref{GenericTwoScalarPotential}) already results in a more complicated scenario (see equations (\ref{CosmonCoupling}) and (\ref{GenericTwoScalarPotential})):
\begin{equation}
Q_\varphi = \frac{\beta}{aM} (1-\omega_\chi) \rho_\chi \varphi' \, .
\end{equation}
Before moving on to our analysis, let us recap some relevant results from the previous studies. As was shown in \cite{...}, even in generic non-minimally coupled models (which of course includes ours), purely adiabatic perturbations remain purely adiabatic on superhorizon scales, no matter what the interaction is. Here a mode is called adiabatic if all entropy perturbations vanish. This includes relative entropy perturbations between different components of the cosmic fluid as well as internal entropy perturbations for a single component - these can exist for non-perfect fluids. However, as was noted in \cite{...}, the existence of such an adiabatic mode is non-trivial, since demanding all entropies to vanish often introduces more constraints than we have degrees of freedom in the equations. Furthermore this result is not sufficient to explain a suppression of non-adiabatic modes, since any (initially small) admixture of such modes could in principle eventually outgrow the adiabatic perturbations. Such a growth can happen for example during the ``adjustment'' phases in minimally coupled tracking quintessence models when the quintessence field does not yet follow the tracker trajectory. However, a sufficiently long tracking regime typically (though not necessarily) erases these modes \cite{...}. We will therefore not be concerned with the evolution before tracking, i.e. we assume that the tracking regime lasts long enough to erase all modes which, during this era, decay faster (or grow less fast) than the fastest growing modes.
In our analysis we will follow the same procedure used in \cite{...}. We will therefore recast the equations of energy- and momentum (non-)conservation in terms of dimensionless variables (see appendix \ref{app:LinearPerturbations} for our conventions concerning linear perturbation theory):
\begin{widetext}
\begin{align}
\label{dDeltaEquation}
\frac{{\rm d} \Delta_\alpha}{{\rm d} \, {\rm ln}(x)} = \frac{2}{(1+3\omega_{\rm eff})} & \left[ 3(1+\omega_\alpha) q_\alpha \Phi -3 \omega_\alpha \Gamma_\alpha -x^2 (1+\omega_\alpha) V_\alpha + 3(1+\omega_\alpha) q_\alpha\tau_\alpha -3 \left( (1+\omega_\alpha)q_\alpha + (c_{s,\alpha}^2-\omega_\alpha) \right) \Delta_\alpha \right. \nonumber \\
& \left. +3 (1+\omega_\alpha) q_\alpha \Psi'/h +3 (1+\omega_\alpha) \left(q_\alpha'/h - 3(1-q_\alpha) q_\alpha (1+c_{s,\alpha}^2)\right) \Psi \right] \, , \\
\label{dVEquation}
\frac{{\rm d} V_\alpha}{{\rm d} \, {\rm ln}(x)} = \frac{2}{(1+3\omega_{\rm eff})} & \left[ \frac{c_{s,\alpha}^2}{1+\omega_\alpha} \Delta_\alpha + \left( -\frac{3}{2} (1+\omega_{\rm eff}) - 3 q_\alpha + 3 c_{s,\alpha}^2 (1-q_\alpha) \right) V_\alpha - \frac{2}{3} \frac{\omega_\alpha}{(1+\omega_\alpha)} x^2 \tilde{\Pi}_\alpha \right. \nonumber \\
& \left. + \frac{\omega_\alpha}{1+\omega_\alpha} \Gamma_\alpha - \frac{a f_\alpha}{(\rho_\alpha + p_\alpha)}+ \left( 1+\frac{2 q_\alpha}{1+\omega_{\rm eff}} \right) \Phi + 3c_{s,\alpha}^2 (1-q_{\alpha}) \Psi + \frac{2 q_\alpha}{1+\omega_{\rm eff}} \Psi'/h \right] \, .
\end{align}
\end{widetext}
Here we adopted the (slightly non-standard) definitions of the density-contrast and velocity potential given by
\begin{align}
\Delta_\alpha \equiv & \frac{\delta \rho_\alpha}{\rho_\alpha} + \frac{\rho_\alpha'}{h \rho_\alpha} \Psi = \frac{\delta \rho_\alpha}{\rho_\alpha} - 3 (1-q_\alpha) (1+\omega_\alpha) \Psi \, , \\
V_\alpha \equiv & -h \frac{\left[ (\rho+p)v \right]_\alpha}{\rho_\alpha + p_\alpha} \, .
\end{align}
Note that our $V_\alpha$ corresponds to the quantity $\tilde{V}_\alpha$ in \cite{...}. We have also introduced the adiabatic sound-speed and the non-adiabatic pressure perturbation given by
\begin{align}
\label{SoundSpeedDefinition}
c_{s,\alpha}^2 \equiv \, & \omega_\alpha + \frac{\rho_\alpha}{\rho_\alpha'} \omega_\alpha' = \omega_\alpha - \frac{\omega_\alpha'}{3h(1-q_\alpha)(1+\omega_\alpha)} \, , \\
\Gamma_\alpha \equiv& \frac{1}{p_\alpha} \left( \delta p_\alpha - c_{s,\alpha}^2 \delta \rho_\alpha \right) \, ,
\end{align}
and a dimensionless quantity for the anisotropic stress:
\begin{equation}
\tilde{\Pi}_\alpha \equiv \frac{h^2 \Pi_\alpha}{p_\alpha} \, .
\end{equation}
Furthermore all derivatives with respect to conformal time have been replaced by derivatives with respect to ${\rm ln}(x)$, where $x=k/h$.
The content of our early universe consists of neutrinos, photons, baryons and the two scalar fields, the cosmon and the bolon. Strictly speaking, the evolution of neutrinos and photons is described in terms of a multipole expansion of the respective phase-space distribution functions. Following \cite{...} we truncate these expansions after the dipole for photons, leaving only the density contrast $\Delta_\gamma$ and velocity potential $V_\gamma$ as dynamical quantities. In the case of neutrinos, we keep the quadrupole, which gives us an additional anisotropic stress contribution $\tilde{\Pi}_\nu$. Furthermore we assume that baryons and photons form a tightly coupled plasma through momentum exchange long before CMB emission; this means we can set $V_{\gamma b} \equiv V_\gamma = V_b$. These approximations are valid for early enough times in the universe \cite{...}. Since scalar fields, even when coupled, do not develop any anisotropic stress, we have $\tilde{\Pi}_{\rm tot} = \tilde{\Pi}_\nu$, and the evolution of this quantity is governed by the following equation \cite{...}:
\begin{equation}
\label{dPinuEquation}
\frac{{\rm d} \tilde{\Pi}_\nu}{{\rm d} \, {\rm ln}(x)} = \frac{2}{1+3\omega_{\rm eff}} \left( \frac{8}{5} V_\nu - 2 \tilde{\Pi}_\nu \right) \, .
\end{equation}
By now we are ready to set up the equations for neutrinos, photons and baryons, but we still need to evaluate the non-adiabatic pressure perturbation $\Gamma_\alpha$ and the coupling terms $f_\alpha$ and $\tau_\alpha$ for the scalar fields. This can be done in a straightforward (if a little tedious) way from the energy-momentum tensor to linear order.
Let us start with the non-adiabatic pressure terms. These read (for our choice of potentials):
\begin{align}
\label{GammavarphiSub}
\omega_\varphi \Gamma_\varphi= &
- \frac{2 V_{1,\varphi} \varphi'}{h \rho_\varphi} V_\varphi
+ \left(1 - c_{s,\varphi}^2 \right) \Delta_\varphi \nonumber \\
&+ \left( 3(1+\omega_\varphi) (1-q_\varphi ) (1-c_{s,\varphi}^2)\right) \Psi \, , \\
\label{GammachiSub}
\omega_\chi \Gamma_\chi =&
- \frac{2 V_{2,\chi} \chi'}{h \rho_\chi} V_\chi
+ \left(1 - c_{s,\chi}^2 \right) \Delta_\chi \nonumber \\
& + 6 \frac{(1+\omega_\varphi) \rho_\varphi}{\rho_\chi} q_\varphi V_\varphi \nonumber \\
& + 3 (1+\omega_\chi) (1-q_\chi)(1-c_{s,\chi}^2) \Psi \, .
\end{align}
Evaluating the coupling perturbations gives
\begin{align}
\label{fvarphiSub}
a f_\varphi =& 3 q_\varphi (1+ \omega_\varphi) \rho_\varphi \left( \frac{2}{3} \frac{(\Psi'/h + \Phi)}{(1+\omega_{\rm eff})} - V_\varphi \right) \\
\label{tauvarphiSub}
\tau_\varphi =&\frac{1}{1+\omega_\varphi} \Delta_\varphi - \left( \frac{V_{1,\varphi} \varphi'}{(1+\omega_\varphi) h \rho_\varphi} + 6 \frac{(1+\omega_\varphi) \rho_\varphi}{(1-\omega_\chi) \rho_\chi} \right) V_\varphi \nonumber \\
&+ \frac{2 \chi' V_{2,\chi}}{(1-\omega_\chi) \rho_\chi h} V_\chi + 3 (1-q_\varphi) \Psi
\end{align}
while of course $f_\chi = -f_\varphi$ and $\tau_\chi = \tau_\varphi$.
At this point we are almost done, but still need to replace the potential derivatives. This can be done by relating them to the time evolution of the equations of state. For our choice of potential we obtain for the cosmon
\begin{align}
\omega_\varphi' =& -3h(1+\omega_\varphi) (1-q_\varphi) (1-\omega_\varphi) - \frac{2 V_{1,\varphi} \varphi'}{\rho_\varphi}
\end{align}
and we can employ equation (\ref{SoundSpeedDefinition}) to get
\begin{align}
\label{dV1Sub}
V_{1,\varphi} =& - \frac{3}{2} \frac{h}{a^2} (1-q_\varphi) (1-c_{s,\varphi}^2) \varphi' \, .
\end{align}
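The intermediate step is the following: solving (\ref{SoundSpeedDefinition}) for $\omega_\varphi'$ and equating the two expressions yields
\begin{equation}
\frac{2 V_{1,\varphi} \varphi'}{\rho_\varphi} = -3h (1-q_\varphi) (1+\omega_\varphi) \left( 1-c_{s,\varphi}^2 \right) \, ,
\end{equation}
which, together with $(1+\omega_\varphi) \rho_\varphi = \varphi'^2/a^2$, gives the stated result.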
The same calculation for the bolon yields
\begin{align}
\label{dV2Sub}
V_{2,\chi} =& - \frac{3}{2} \frac{h}{a^2} (1-c_{s,\chi}^2) \chi' - \frac{3}{2} \frac{h}{a^2} q_\chi (1-c_{s,\chi}^2) \chi' \, .
\end{align}
Finally we have to eliminate the field derivatives $\varphi'$ and $\chi'$. We exclude the possibility of oscillating solutions in the early universe and assume without loss of generality that both fields roll towards higher field values. This means we can set
\begin{equation}
\label{dvarphiSub}
\varphi' = + \, a \sqrt{(1+\omega_\varphi) \rho_\varphi}
\end{equation}
and accordingly
\begin{equation}
\label{dchiSub}
\chi' = + \, a \sqrt{(1+\omega_\chi) \rho_\chi} \, .
\end{equation}
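These identifications simply express the scalar kinetic terms through $\rho_\alpha + p_\alpha$; for the cosmon, e.g.,
\begin{equation}
\frac{\varphi'^2}{a^2} = \rho_\varphi + p_\varphi = (1+\omega_\varphi) \rho_\varphi \, ,
\end{equation}
so only the sign of the square root is fixed by the assumed direction of the roll.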
At this point we have all the ingredients to set up the differential equations governing the evolution of perturbations in the early universe. We combine the dynamical variables into a single perturbation vector given by
\begin{align}
U = \left\{ \Delta_\gamma, \Delta_\nu, \Delta_b, \Delta_\varphi, \Delta_\chi, V_{\gamma b}, V_\nu, V_\varphi, V_\chi, \tilde{\Pi}_\nu \right\}^{\rm T} \, .
\end{align}
Together these quantities evolve according to a closed system of first order differential equations. Before spelling out this system, we note that in order to keep the equations simpler we keep the gravitational potentials explicit. They can of course easily be removed using the relations (\ref{PoissonEquation}) - (\ref{PhiPsiEquation}) expressed in terms of our new variables:
\begin{align}
\label{PsiSub}
\Psi =& -\frac{3}{2} \frac{\sum_\alpha \Omega_\alpha \left( \Delta_\alpha + 3 (1+\omega_\alpha) V_\alpha \right)}{x^2 + \frac{9}{2} (1+\omega_{\rm eff})} \, , \\
\label{dPsiSub}
\Psi'/h =& -\Phi + \frac{3}{2} \sum_\alpha \Omega_\alpha (1+\omega_\alpha) V_\alpha \, , \\
\label{PhiSub}
\Phi =& \Psi - 3 \sum_\alpha \Omega_\alpha \omega_\alpha \tilde{\Pi}_\alpha = \Psi - \Omega_\nu \tilde{\Pi}_\nu\, .
\end{align}
Let us start with the equations for photons, neutrinos and baryons. These can be directly derived from equations (\ref{dDeltaEquation}), (\ref{dVEquation}) and (for the anisotropic stress) (\ref{dPinuEquation}). One simply has to put all couplings and non-adiabatic pressure perturbations to zero and also set
\begin{align}
\omega_\gamma = \omega_\nu = c_{s,\gamma}^2 = c_{s,\nu}^2 = 1/3 \, , \\
\omega_b = c_{s,b}^2 = \tilde{\Pi}_\gamma = \tilde{\Pi}_b = 0 \, .
\end{align}
It is important to note that the equation for the combined baryon-photon velocity potential requires using the properties of the combined baryon-photon fluid, i.e.
\begin{align}
\Delta_{\gamma b} = \frac{\Omega_\gamma \Delta_\gamma + \Omega_b \Delta_b}{\Omega_\gamma + \Omega_b} \, , \\
\omega_{\gamma b} = \frac{1}{3} \frac{\Omega_\gamma}{\Omega_\gamma + \Omega_b} \, , \\
c_{s,\gamma b}^2 = \frac{1}{3} \frac{\Omega_\gamma}{\Omega_\gamma + \frac{3}{4} \Omega_b} \, , \\
\Gamma_{\gamma b} = \frac{\Omega_b (3 \Delta_\gamma - 4 \Delta_b)}{4 \Omega_\gamma + 3 \Omega_b} \, .
\end{align}
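At sufficiently early times the baryon fraction is negligible, and in the limit $\Omega_b \to 0$ the quantities above indeed reduce to the pure-radiation values; a small numerical sketch (function and variable names are ours):

```python
def gamma_b_fluid(omega_gamma, omega_b, delta_gamma, delta_b):
    """Tight-coupling baryon-photon fluid quantities as quoted above.
    omega_gamma, omega_b are the density parameters Omega_gamma, Omega_b."""
    w = omega_gamma / (3 * (omega_gamma + omega_b))            # equation of state
    cs2 = omega_gamma / (3 * (omega_gamma + 0.75 * omega_b))   # sound speed squared
    gamma = omega_b * (3 * delta_gamma - 4 * delta_b) / (4 * omega_gamma + 3 * omega_b)
    return w, cs2, gamma

# Vanishing baryon fraction: w = cs2 = 1/3 and Gamma = 0, i.e. pure radiation.
w, cs2, gamma = gamma_b_fluid(0.6, 0.0, 1.0, 0.75)
print(w, cs2, gamma)
```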
We will however see below that all these quantities reduce to the standard values for pure radiation in the superhorizon limit.
This leaves us with the following set of equations (where we already set $\omega_{\rm eff} = 1/3$ since we assume a radiation-like expansion):
\begin{align}
\label{dDeltagamma}
\frac{{\rm d} \Delta_\gamma}{{\rm d} \, {\rm ln}(x)} = & - \frac{4}{3} x^2 V_{\gamma b} \, , \\
\frac{{\rm d} \Delta_\nu}{{\rm d} \, {\rm ln}(x)} = & - \frac{4}{3} x^2 V_\nu \, , \\
\frac{{\rm d} \Delta_b}{{\rm d} \, {\rm ln}(x)} = & -x^2 V_{\gamma b} \, , \nonumber \\
\frac{{\rm d} V_\nu}{{\rm d} \, {\rm ln}(x)} = & \frac{1}{4} \Delta_\nu + \left( -\frac{3}{2} (1+\omega_{\rm eff}) + 1 \right) V_\nu \nonumber \\
& - \frac{1}{6} x^2 \tilde{\Pi}_\nu + 2 \Psi - \Omega_\nu \tilde{\Pi}_\nu \, , \\
\label{dVgammab}
\frac{{\rm d} V_{\gamma b}}{{\rm d} \, {\rm ln}(x)} = & \frac{\Omega_\gamma}{(4 \Omega_\gamma + 3 \Omega_b)} \Delta_\gamma - \Omega_\nu \tilde{\Pi}_\nu + \frac{8 \Omega_\gamma + 3 \Omega_b}{4 \Omega_\gamma + 3 \Omega_b} \Psi \nonumber \\
& + \left( -\frac{3}{2} (1+\omega_{\rm eff}) + \frac{4 \Omega_\gamma}{(4 \Omega_\gamma + 3 \Omega_b)} \right) V_{\gamma b} \, ,
\end{align}
and equation (\ref{dPinuEquation}) for the neutrino anisotropic stress.
Let us now turn to the equations for the scalar fields. Using the derived expressions for the non-adiabatic pressure perturbations (equations (\ref{GammavarphiSub}) and (\ref{GammachiSub})) and the couplings (equations (\ref{fvarphiSub}) and (\ref{tauvarphiSub})) we can evaluate equations (\ref{dDeltaEquation}) and (\ref{dVEquation}) for the scalar fields. Furthermore we can use formulae (\ref{dV1Sub}) - (\ref{dchiSub}) to express all background quantities in terms of $\omega_\alpha$, $c_{s,\alpha}^2$ and $\Omega_\alpha$. The equations this yields are quite complicated and we will not display them here in their entirety. Instead we will already make the tracker-approximation here, i.e. set $\omega_\chi' = \omega_\varphi' = 0$ which implies $c_{s,\varphi}^2 = \omega_\varphi$ and $c_{s,\chi}^2 = \omega_\chi$. Also assuming a radiation-like expansion we set $\omega_{\rm eff} = 1/3$. This yields the following set of equations:
\begin{widetext}
\begin{align}
\label{dDeltavarphi}
\frac{{\rm d} \Delta_\varphi}{{\rm d} \, {\rm ln}(x)} = & -3 \left( 1-\omega _{\varphi } + q_{\varphi } \omega _{\varphi } \right) \Delta_\varphi
- 9 q_{\varphi } \left(1+\omega _{\varphi }\right) \left(1+\omega _{\chi }\right) \left(1-q_{\varphi } \frac{\left(1+\omega _{\varphi }\right) \Omega _{\varphi }}{\left(1-\omega _{\chi }\right) \Omega _{\chi }}\right) V_\chi + 3 q_{\varphi } \left(1+\omega _{\varphi }\right) \left( \Psi'/h + \Phi \right)\nonumber \\
& -(1+\omega_\varphi) \left[ x^2 + 9 (1-\omega_\varphi) - \frac{27}{2} q_{\varphi } \left(1-\omega _{\varphi }\right)+\frac{1}{2} q_{\varphi }^2 \left(9 (1-\omega_\varphi) + 36 \frac{\left(1+\omega _{\varphi }\right) \Omega _{\varphi }}{\left(1- \omega _{\chi }\right) \Omega _{\chi }}\right) \right] V_\varphi \nonumber \\
& - 3 (1+\omega_\varphi) \left[ 3 \left(1-\omega _{\varphi }\right) + \frac{1}{2} q_{\varphi } \left(-7+9 \omega _{\varphi }+6 \omega _{\chi }\right) + \frac{3}{2} q_{\varphi }^2 \left( \left(1-\omega _{\varphi }\right) + 2 \left(1+\omega _{\varphi }\right) \frac{\Omega _{\varphi }}{\Omega_\chi} \right) \right] \Psi \, , \\
\frac{{\rm d} \Delta_\chi}{{\rm d} \, {\rm ln}(x)} = & - 3 \left(1-\omega _{\chi } - 3 q_{\varphi } \left(1+\omega _{\varphi }\right) \frac{\Omega _{\varphi }}{\Omega _{\chi }} \right) \Delta_\chi - 3 q_{\varphi } \frac{\Omega _{\varphi }}{\Omega _{\chi }} \Delta_\varphi - 3 q_{\varphi } \left(1+\omega _{\varphi }\right) \frac{\Omega _{\varphi }}{\Omega _{\chi }} \left( \Psi'/h + \Phi \right) \nonumber \\
& -(1+\omega_\chi) \left[ x^2 +9 (1-\omega_\chi) - 9 q_{\varphi }^2 \frac{\left(1+\omega _{\varphi }\right){}^2}{\left(1-\omega _{\chi }\right) } \frac{ \Omega _{\varphi }^2}{\Omega _{\chi }^2} + 18 q_{\varphi } \left(1+\omega _{\varphi }\right) \frac{\Omega _{\varphi }}{\Omega _{\chi }} \right] V_\chi \nonumber \\
& + \frac{9}{2} q_{\varphi } \left(1+\omega _{\varphi }\right) \frac{\Omega _{\varphi }}{\Omega_\chi} \left[ \omega _{\varphi } - 5 + q_{\varphi } \left(1-\omega_\varphi + 4 \frac{\left(1+\omega _{\varphi }\right) \Omega _{\varphi }}{(1-\omega_\chi) \Omega_\chi} \right)\right] V_\varphi \nonumber \\
& - 3 (1+\omega_\varphi) \left[ 3 \frac{\left(1-\omega _{\chi }^2\right)}{(1+\omega_\varphi)} - \frac{1}{2} q_{\varphi } \left(-7+3 \omega _{\varphi }+12 \omega _{\chi }\right) \frac{\Omega _{\varphi }}{\Omega _{\chi }} - \frac{3}{2} q_{\varphi }^2 \frac{\Omega _{\varphi }}{\Omega_\chi} \left(2 \left(1+\omega _{\varphi }\right) \frac{\Omega _{\varphi }}{\Omega_\chi} + \left(1- \omega _{\varphi }\right) \right) \right] \Psi \, ,\\
\frac{{\rm d} V_\varphi}{{\rm d} \; {\rm ln}(x)} = & \left(1-3 q_{\varphi }\right) V_{\varphi }+\frac{1}{1+\omega _{\varphi }}\Delta _{\varphi } + \Phi +3(1 - q_{\varphi }) \Psi \, ,\\
\label{dVchi}
\frac{{\rm d} V_\chi}{{\rm d} \; {\rm ln}(x)} = &V_{\chi }+\frac{1}{1+\omega _{\chi }}\Delta _{\chi } + 3 q_{\varphi } \frac{\left(1+\omega _{\varphi }\right) \Omega _{\varphi }}{\left(1+\omega _{\chi }\right) \Omega _{\chi }} V_{\varphi } + \Phi + 3 \left(1 + q_{\varphi } \frac{\left(1+\omega _{\varphi }\right) \Omega _{\varphi }}{\left(1+\omega _{\chi }\right) \Omega _{\chi }}\right) \Psi \, .
\end{align}
\end{widetext}
In all of these equations we still need to replace the couplings using
\begin{align}
q_\varphi = \frac{\beta}{\sqrt{3}} \frac{(1-\omega_\chi) \Omega_\chi}{(1+\omega_\varphi) \Omega_\varphi} \sqrt{1+\omega_\varphi} \sqrt{\Omega_\varphi}
\end{align}
and equation (\ref{BolonCoupling}).
At this point we continue by employing the basic idea of \cite{...} and write the differential equations governing the evolution of linear perturbations in a convenient matrix form. This procedure yields
\begin{equation}
\label{MatrixEquation}
\frac{{\rm d} U(x)}{{\rm d} \, {\rm ln}(x)} = A(x) U(x)
\end{equation}
where the coefficients of the matrix $A(x)$ can be read off from equations (\ref{dPinuEquation}), (\ref{dDeltagamma}) - (\ref{dVgammab}) and (\ref{dDeltavarphi}) - (\ref{dVchi}) after replacing the gravitational potentials.
Before analyzing the solutions for the perturbations we have to investigate the background quantities appearing in $A(x)$. These are generally $x$-dependent, but in this analysis we are interested in the superhorizon limit, defined by $x \ll 1$, and we can therefore work with Taylor expansions in $x$. First we note that during the early universe we assume a radiation-like expansion, which implies that the density parameters for photons and neutrinos remain (almost) constant, while the one for baryons grows linearly in $x$:
\begin{equation}
\Omega_b = \Omega_{b,{\rm in}} a = \Omega_{b,{\rm in}} \frac{a_{\rm in}}{k} x \, .
\end{equation}
For the scalar fields we note that their combined energy density $\rho_{\rm sc} \equiv \rho_\varphi + \rho_\chi$ has to decay according to a combined equation of state $\omega_{\rm sc} = (\omega_\varphi \rho_\varphi + \omega_\chi \rho_\chi)/(\rho_\varphi + \rho_\chi)$ with a value of $1/3$, or be subdominant, in order to be consistent with our assumption of a radiation-like expansion. The rather complicated form of our coupling makes an analysis similar to \cite{...} impossible. However, we can make progress by using a simple assumption, namely that the density parameters for the scalar fields evolve like
\begin{align}
\Omega_\varphi = \Omega_{\varphi,{\rm in}} \left( \frac{x}{x_{\rm in}} \right)^{n_\varphi} \, , \\
\Omega_\chi = \Omega_{\chi,{\rm in}} \left( \frac{x}{x_{\rm in}} \right)^{n_\chi} \, ,
\end{align}
with constant and non-negative $n_\varphi$ and $n_\chi$. Such a simple scaling is not guaranteed in any way, but we will see below that a rather simple cosmon-bolon model defined by the potential (\ref{CosmonBolonPotential}) already naturally leads to a situation with $n_\varphi = n_\chi = 0$, and we believe that most two-scalar-field tracking models will follow such a behavior, at least to good approximation.
Using this, it becomes obvious that we can expand the matrix $A(x)$ as a type of Taylor series in $x$ and recover at leading order a constant matrix:
\begin{align}
A(x) = A_0 & + A_1x + A_2x^2 + \sum_{i=0}^2 \sum_{m=1}^N A_{\varphi,i} x^{m n_\varphi + i} \nonumber \\
& + \sum_{i=0}^2 \sum_{m=1}^N A_{\chi,i} x^{m n_\chi + i} + \mathcal{O}(x^3) \, ,
\end{align}
where $N$ is chosen such that all orders below $\mathcal{O}(x^3)$ are included. At leading order in $x$ we can approximate $A(x)$ by the constant matrix $A_0$. With this approximation we can immediately write down the general solution of equation (\ref{MatrixEquation}), which reads
\begin{equation}
U(x) = \sum_i c_i x^{\lambda_i} U_0^{(i)} \, ,
\end{equation}
where $U_0^{(i)}$ are eigenvectors of the matrix $A_0$ and $\lambda_i$ the corresponding eigenvalues. Both can of course be determined by solving the eigensystem
\begin{equation}
(\lambda \mathbb{I} - A_0)U_0 = 0 \, .
\end{equation}
With this as a starting point, one can now easily determine higher order corrections to the found solution by expanding the eigenvectors as follows:
\begin{align}
U^{(i)}(x) = U_0^{(i)} &+ U_1^{(i)} x + U_2^{(i)} x^2 + \sum_{j=0}^2 \sum_{m=1}^N U_{\varphi,j}^{(i)} x^{m n_\varphi + j} \nonumber \\
& +\sum_{j=0}^2 \sum_{m=1}^N U_{\chi,j}^{(i)} x^{m n_\chi + j} + \mathcal{O}(x^3) \, ,
\end{align}
where the first order correction is then given by
\begin{equation}
U_1^{(i)} = \left( (\lambda_i + 1) \, \mathbb{I}-A_0 \right)^{-1} A_1 U_0^{(i)}
\end{equation}
and the other corrections can be calculated in a similar fashion, but we will not need them here. Of course, the general solution is given by
\begin{equation}
U(x) = \sum_i c_i x^{\lambda_i} U^{(i)}(x) \, .
\end{equation}
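This construction can be checked numerically. Matching powers of $x$ in ${\rm d}U/{\rm d}\,{\rm ln}(x) = (A_0 + A_1 x)U$ with $U = x^\lambda(U_0 + U_1 x)$ fixes the first-order correction as $U_1 = ((\lambda+1)\mathbb{I} - A_0)^{-1} A_1 U_0$. The sketch below verifies this on an illustrative $2\times 2$ toy system (the numerical matrices stand in for the physical $A_0$, $A_1$ and carry no meaning):

```python
import numpy as np

# Toy 2x2 stand-in for the full perturbation matrix (illustrative only;
# the real A0, A1 are read off from the evolution equations).
A0 = np.array([[2.0, 1.0],
               [0.0, 1.0]])
A1 = np.array([[0.5, -0.3],
               [0.2,  0.1]])

# Leading order: eigenmodes of the constant matrix A0
lams, vecs = np.linalg.eig(A0)
lam, U0 = lams[0], vecs[:, 0]

# First-order correction from matching powers of x in
# dU/dln(x) = (A0 + A1 x) U with U = x^lam (U0 + U1 x)
U1 = np.linalg.solve((lam + 1.0) * np.eye(2) - A0, A1 @ U0)

# The residual of the mode equation is then exactly -x^(lam+2) A1 U1,
# i.e. O(x^2) relative to the leading behaviour x^lam:
x = 1e-3
dU_dlnx = x**lam * (lam * U0 + (lam + 1.0) * U1 * x)
residual = dU_dlnx - (A0 + A1 * x) @ (x**lam * (U0 + U1 * x))
print(np.allclose(residual, -x**(lam + 2) * (A1 @ U1)))  # True
```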
\end{comment}
\begin{comment}
\section{Standard SFDM with gauge invariant quantities}
Here we analyse the standard SFDM model, where dark matter consists of a single canonical scalar field, minimally coupled to gravity with a potential given by
\begin{equation}
V(\chi) = \frac{m^2}{2} \chi^2 .
\end{equation}
\subsection{Background equations}
The scalar field equation reads at the background level
\begin{equation}
\chi'' + 2h\chi' + a^2 m^2 \chi = 0 .
\end{equation}
We are now only interested in the case $h \ll a m$, in which the scalar field undergoes rapid damped oscillations. In all of the following analysis we will expand our system in terms of the small quantity $\mu = h/(am)$. At the background level we can therefore make the following ansatz:
\begin{align}
\chi(\eta) &= \chi_0 {\rm cos}(mt) + \chi_1 {\rm sin} (mt) + \chi_2 {\rm cos}(3mt) + \chi_3 {\rm sin}(3mt) + \chi_{\rm ig} \\
a(\eta) &= \bar{a} + a_{\rm osc} + a_{\rm ig}
\end{align}
where $\chi_0$, $\chi_1$, $\chi_2$, $\chi_3$ and $\bar{a}$ are slowly evolving quantities (compared to the rapid oscillations in the trigonometric functions) and $\chi_0$ is of the order $\mathcal{O}(\mu M)$, $\chi_1$ is of the order $\mathcal{O}(\mu^2 M)$, $\chi_2$ and $\chi_3$ are of the order $\mathcal{O}(\mu^3 M)$ (or smaller), $\chi_{\rm ig}$ is of the order $\mathcal{O}(\mu^4M)$ (or smaller), $a_{\rm osc}$ is of the order $\mathcal{O}(\mu^2 \bar{a})$ and $a_{\rm ig}$ is of order $\mathcal{O}(\mu^3 \bar{a})$ (or smaller). Here $t$ denotes the cosmic time $t=\int a(\eta)\, d\eta$.
\subsubsection{Motivating this}
This ansatz can be motivated quite simply. We first note that (to a first approximation) we expect $\chi$ to undergo damped oscillations with frequency $m$. Since we can choose the phase of these oscillations freely (e.g. by shifting the zero-point of cosmic time by a tiny amount) we can set $\chi_1 = 0$ in the above ansatz for $\chi$, ignore oscillations induced in the scale factor as well as higher harmonics in the $\chi$-expansion, and obtain to the lowest order in $\mu$:
\begin{equation}
\chi_0' + \frac{3}{2} \bar{h} \chi_0 = 0 \rightarrow \chi_0 \propto \bar{a}^{-\frac{3}{2}} ,
\end{equation}
where $\bar{h}$ is the slowly evolving leading order contribution to the (conformal) Hubble-rate (see below).
Therefore the energy density scales like $\bar{a}^{-3}$ in this case, just like CDM. This is consistent with a slowly evolving background at this order. Extending this approximation to subleading orders we take the most general ansatz for a damped oscillation of the $\chi$ field, but suppress the sine terms by a factor $\mu$. In order to remain consistent with Friedmann's equation the oscillating terms in $a$ have to be of order $\mathcal{O}(\mu^2 \bar{a})$; otherwise we get quick oscillations in the Hubble rate $h$ at leading order, which are inconsistent with the (to leading order) slowly evolving energy density $\rho_\chi$ we just found. This means we have
\begin{equation}
h = \bar{h} + h_{\rm osc} \, , \quad {\rm where \ to \ lowest \ order} \quad h_{\rm osc} = \frac{a_{\rm osc}'}{\bar{a}} \, .
\end{equation}
One important subtlety when comparing orders of magnitude in these calculations is that a derivative of an oscillating term (like $a_{\rm osc}$ or the trigonometric terms in the $\chi$-ansatz) always gives a factor $am \approx \bar{a} \bar{h}/ \mu$ (to lowest order). With this in mind it is now also obvious why $a_{\rm osc}$ needs to be of order $\mathcal{O}(\mu^2 \bar{a})$. If we also take into account the backreaction of the background oscillations onto the scalar field, we have to include higher harmonics (with frequencies $2mt$ and higher) in the $\chi$-expansion, but at higher orders. The specific frequencies included in the above ansatz at the given orders are chosen such that the ansatz fulfills the field equations to subleading orders (see below).
\subsubsection{Equations to subleading order}
Now we can evaluate Friedmann's equations
and the scalar field equation in principle to arbitrary order in $\mu$. Here we assume that the cosmic fluid consists of the scalar field plus many other components, which we combine into one energy density $\rho_{\rm ext} = \sum_i \rho_i$. We assume here that all $\rho_i$ are quantities with constant or slowly evolving equations of state, at least to leading order. (We will continue to use Greek indices in all sums to indicate all components of the cosmic fluid including the scalar field, and Latin indices for all sums over the ``extra'' components throughout this section.) In particular there are no additional quickly oscillating quantities, so that all oscillations in $a$ and $h$ (on this timescale) are caused by the scalar field. Since we assume that the ``extra'' sources are coupled to the scalar field only gravitationally, they obey the equation of energy conservation given by
\begin{equation}
\rho_{\rm ext}' = -3h(1+\omega_{\rm ext})\rho_{\rm ext}
\end{equation}
and since $h$ and $\omega_{\rm ext}$ are slowly evolving to first order, $\rho_{\rm ext}$ has to be slowly evolving to second order (at least). Writing
\begin{equation}
\rho_{\rm ext} = \bar{\rho}_{\rm ext} + \rho_{\rm ext, osc}
\end{equation}
with $\rho_{\rm ext,osc}$ of the order $\mathcal{O}(\mu^2 \bar{\rho}_{\rm ext})$ (or less) we get at leading order:
\begin{equation}
\bar{h}^2 = \frac{\bar{a}^2}{3 M^2} \left( \bar{\rho}_{\rm ext} + \frac{1}{2} m^2 \chi_0^2\right)
\end{equation}
and therefore
\begin{equation}
2 \bar{h} \bar{h}' = \frac{\bar{a}^2}{3 M^2} \left( \bar{\rho}_{\rm ext}' + 2 \bar{h} \bar{\rho}_{\rm ext} - \frac{1}{2} \bar{h} m^2 \chi_0^2 \right) \, .
\end{equation}
To subleading order we obtain:
\begin{align}
h_{\rm osc} &= \frac{1}{8} \bar{a} m \frac{\chi_0^2}{M^2} {\rm sin}(2mt) \\
a_{\rm osc} &= -\frac{1}{16} \bar{a} \frac{\chi_0^2}{M^2} {\rm cos}(2mt)
\end{align}
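These two expressions are related by $h_{\rm osc} = a_{\rm osc}'/\bar{a}$ to leading order, where the conformal-time derivative reduces to $\bar{a}\,{\rm d}/{\rm d}t$ with $\chi_0$ and $\bar{a}$ frozen over one oscillation. A minimal sympy check (variable names are ours):

```python
import sympy as sp

t, m, abar, chi0, M = sp.symbols('t m abar chi0 M', positive=True)

# Subleading oscillatory pieces quoted above (chi0, abar frozen
# over one oscillation, i.e. to leading order in mu)
a_osc = -sp.Rational(1, 16)*abar*chi0**2/M**2 * sp.cos(2*m*t)
h_osc =  sp.Rational(1, 8)*abar*m*chi0**2/M**2 * sp.sin(2*m*t)

# Conformal-time derivative at leading order: d/d(eta) ~ abar*d/dt,
# so h_osc = a_osc'/abar reduces to d(a_osc)/dt
print(sp.simplify(sp.diff(a_osc, t) - h_osc))  # -> 0
```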
and an equation for $\chi_1'$, which is given by
\begin{align}
\chi_1' {\rm cos}(mt) = &- \left[ \frac{3}{2} \bar{h} \chi_1+ \frac{1}{2m\bar{a}} \left( \chi_0'' + 2 \bar{h} \chi_0' \right) \right] {\rm cos}(mt) \nonumber \\
& + \frac{3}{16 M^2} m \bar{a} \chi_0^3 {\rm sin}(mt) {\rm sin}(2mt) + 4 m \bar{a} \left( \chi_2 {\rm cos}(3mt) + \chi_3 {\rm sin}(3mt) \right) \, .
\end{align}
Using ${\rm sin}(mt) {\rm sin}(2mt) = \frac{1}{2} \left( {\rm cos}(mt) - {\rm cos} (3mt)\right)$ we see that in order to fulfill this equation we need
\begin{equation}
\chi_2 = \frac{3}{128} \frac{\chi_0^3}{M^2} \, , \quad \chi_3 = 0 \, .
\end{equation}
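One can check symbolically that with these values all $3mt$-harmonics cancel and only a ${\rm cos}(mt)$ term survives, which is exactly the last source term appearing in the $\chi_1$ equation below. A sympy sketch (variable names are ours):

```python
import sympy as sp

t, m, abar, chi0, M = sp.symbols('t m abar chi0 M', positive=True)

chi2 = sp.Rational(3, 128)*chi0**3/M**2   # value quoted above
chi3 = 0

# Oscillatory source terms of the chi1' equation
expr = (sp.Rational(3, 16)*m*abar*chi0**3/M**2 * sp.sin(m*t)*sp.sin(2*m*t)
        + 4*m*abar*(chi2*sp.cos(3*m*t) + chi3*sp.sin(3*m*t)))

# With these chi2, chi3 all 3mt-harmonics cancel; only the cos(mt)
# term that sources the chi1 equation survives
expected = sp.Rational(3, 32)*m*abar*chi0**3/M**2 * sp.cos(m*t)
print(sp.simplify(sp.expand_trig(expr - expected)))  # -> 0
```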
Using this yields a first order equation for $\chi_1$ which reads
\begin{equation}
\chi_1' = - \frac{3}{2} \bar{h} \chi_1- \frac{1}{2m\bar{a}} \left( \chi_0'' + 2 \bar{h} \chi_0' \right) + \frac{3}{32 M^2} m \bar{a} \chi_0^3 \, .
\end{equation}
\subsection{Linear perturbations}
We will now look at the linear perturbations of the scalar field $\chi$. For the uncoupled case with a simple quadratic potential the linearized field equation reads
\begin{equation}
Y'' + 2hY' + k^2 Y + a^2 m^2 Y + 2 a^2 m^2 \chi \Phi - \chi' \Phi' - 3\chi' \Psi' = 0 \, .
\end{equation}
We will now assume that the anisotropic inertia vanishes, i.e. $\Phi = \Psi$, which gives
\begin{equation}
Y'' + 2hY' +( k^2+ a^2 m^2 ) Y + 2 a^2 m^2 \chi \Phi - 4 \chi' \Phi' = 0 \, .
\end{equation}
We will restrict our attention here to modes with $am \gg k$, so that we can as a first approximation read the above equation as a damped harmonic oscillator with frequency $m$ (in cosmic time) and with an external force given by the gravitational potential. We therefore expect damped oscillations of $Y$ and make the ansatz
\begin{equation}
Y = B {\rm sin}(mt) + A {\rm cos}(mt) + Y_{\rm ig}
\end{equation}
where $A$ and $B$ are slowly evolving functions of conformal time and $Y_{\rm ig}$ again holds terms which oscillate with higher frequencies ($2m$, $3m$ etc.), and we assume this quantity to be suppressed by at least one power of $\mu$ compared to $A$ and $B$.
With this ansatz, we expect the gauge-invariant density contrast $\delta_\chi$ to evolve slowly (to first order in $\mu$) and therefore the gravitational potential should do the same, i.e.
\begin{equation}
\Phi = \bar{\Phi} + \Phi_{\rm osc}
\end{equation}
where $\Phi_{\rm osc}$ is of the order $\mathcal{O}(\mu \bar{\Phi})$.
At this stage we have to differentiate between two different regimes of the wavenumber $k$, namely those for which $\mu k^2/h^2 = k^2/(ahm) \ll 1 $ during the entire period of interest (i.e. the time for which $m \gg h/a_{\rm onset}$) and those for which $\mu k^2/ h^2 \gtrsim 1$ for at least some time. Here $a_{\rm onset}$ refers to the scale factor at $t=t_{\rm onset}$, the time at which $h \ll am $ holds ``for the first time'' and oscillations are quick.
\vspace{20pt}
{\it Just as a quick remark: The quantity $k^2/(ahm)$ is constant during radiation domination, decreasing $\propto t^{-1/3}$ during matter domination and $\propto {\rm exp}(-2Ht)$ during inflationary expansion. So once the condition holds for a given $k$-mode, it will continue to hold up to today.}
\vspace{20pt}
{\bf Sidenote: It is sufficient to find a consistent approximation (to the desired order in $\mu$) for the scalar field equations and eq. (\ref{EinsteinEQ1}). This is equivalent to finding an approximation for the equations of energy and momentum (non-)conservation and eq. (\ref{EinsteinEQ1}). The other Einstein-equations can then simply be used as a trivial check of the correctness of the calculations.}
\subsubsection{Case 1: $\mu k^2/h^2 \ll 1$}
First we need to work out the correct orders of magnitude for $A$ and $B$. To do this, we simply take eq. (\ref{EinsteinEQ1}), which reads
\begin{equation}
\frac{2 k^2}{3 h^2} \Phi = - \sum_{\alpha} \Omega_{(\alpha)} (\delta_{(\alpha)} +3 (1+\omega_{(\alpha)}) V_{(\alpha)}) \, ,
\end{equation}
define
\begin{equation}
S_{\rm ext} = \sum_i \Omega_i \left( \delta_i + 3 (1+\omega_i)V_i \right)
\end{equation}
where again the index $i$ runs over all components of the cosmic fluid, except for the scalar field, and use
\begin{equation}
\Omega_\chi (\delta_\chi + 3 (1+\omega_\chi)V_\chi) = \frac{1}{3 M^2 h^2} \left(\chi' Y' - \Phi \chi'^2 + a^2 m^2 \chi Y + 3 h \chi' Y \right)
\end{equation}
to obtain
\begin{equation}
\left( \frac{2k^2}{3h^2} + \frac{\chi'^2}{3 M^2 h^2} \right) \Phi = \frac{\chi' Y' + a^2 m^2 \chi Y + 3 h \chi' Y}{3M^2 h^2} - S_{\rm ext} \, .
\end{equation}
We assume here that the density contrasts and velocity potentials are evolving slowly to leading order for all components of the cosmic fluid except the scalar field, in which case $S_{\rm ext}$ is slowly evolving to that order (since the $\Omega_i$ are).
Evaluating this expression to leading order (assuming that $A$ and $B$ are of the same order), we get
\begin{equation}
\left( \frac{2k^2}{3\bar{h}^2} + \frac{\bar{a}^2 m^2 \chi_0^2 {\rm sin}^2 (mt)}{3 M^2 \bar{h}^2} \right) \bar{\Phi} = \frac{\bar{a}^2 m^2 \chi_0 A}{3M^2 \bar{h}^2} - \bar{S}_{\rm ext}
\end{equation}
where again $S_{\rm ext}$ is assumed to be evolving slowly to first order (which is to be expected when $h$ and $\Phi$ do so), i.e.
\begin{equation}
S_{\rm ext} = \bar{S}_{\rm ext} + S_{\rm ext, osc}
\end{equation}
with $S_{\rm ext,osc}$ of the order $\mathcal{O}(\mu \bar{S}_{\rm ext})$.
This however is clearly inconsistent, since the ${\rm sin}^2$-term on the lhs is of order $\mathcal{O}(\mu^0\Phi)$, whereas the first term on the rhs is of order $\mathcal{O}(\mu^{-1} A/M)$. For the $\mu k^2/h^2 \ll 1$ regime of wavenumbers the $k^2/h^2$-term on the lhs cannot make up for this difference, i.e. this is only consistent if the amplitude of the perturbations $A$ is so small that the scalar field does not make any considerable contribution to the evolution of the gravitational potential. This is clearly inconsistent with our attempt to model a dark-matter candidate with (at least almost) adiabatic perturbations. We will therefore discard this scenario and set $A$ to be of the order $\mathcal{O}(\mu B)$. The idea now is to take this ansatz and average the equations governing the scalar field perturbations to leading order, to see how they behave on timescales much larger than an oscillation period. We start by evaluating the linearized field equation to lowest order to obtain
\begin{equation}
B' = -\frac{3}{2} \bar{h} B - \bar{a} m \chi_0 \bar{\Phi} \, .
\end{equation}
For the gravitational potential we obtain (to the lowest order):
\begin{align}
& \frac{2k^2}{3\bar{h}^2} \bar{\Phi} = \frac{1}{3M^2 \bar{h}^2} \left(\frac{3}{2} \bar{h} \chi_0 \bar{a}mB - \bar{a}^2 m^2 (\chi_0 A + \chi_1 B) \right) - \bar{S}_{\rm ext}
\end{align}
In order to obtain $\Phi_{\rm osc}$ to leading order we evaluate the equation for $\Phi'$, which reads
\begin{equation}
\Phi' = \frac{3}{2} h (1+\omega_{\rm eff}) V_{\rm tot} -h \Phi \, .
\end{equation}
Going to next to leading order (and using the leading order results from above) we get from the scalar field equation the following equation for $A$:
\begin{equation}
A' = -\frac{3}{2} \bar{h} A - \frac{3}{32} \bar{a} m \frac{\chi_0^2}{M^2} B + \bar{a} m \chi_1 \bar{\Psi} + \frac{\bar{h}}{\bar{a} m} B' + \frac{1}{2 m \bar{a}} B'' + \frac{k^2}{2 \bar{a} m} B
\end{equation}
Instead of evaluating the Poisson-equation to next to leading order {\bf DO THIS AT SOME POINT!!!} we choose to take the $\Phi'$-equation to leading order, using the above results for $A'$ and $B'$ and employing
\begin{equation}
(1+\omega_{\rm eff}) V_{\rm tot} = \sum_\alpha \Omega_\alpha (1+\omega_\alpha) V_\alpha
\end{equation}
and defining
\begin{equation}
U_{\rm ext} = \frac{3}{2} \sum_i \Omega_i (1+\omega_i) V_i
\end{equation}
where again the index $i$ runs over all components of the cosmic fluid except the scalar field $\chi$. $U_{\rm ext}$ again is evolving slowly to leading order if the velocity potentials $V_i$ are. We obtain
\begin{equation}
\Phi' =-h \Phi + \frac{3}{2} h \Omega_\chi (1+\omega_\chi) V_\chi + h U_{\rm ext} \, .
\end{equation}
Evaluating this to leading order gives
\begin{equation}
\label{dPhiEQN1storder}
\bar{\Phi}' + \Phi_{\rm osc}' = -\bar{h} \bar{\Phi} - \frac{1}{2M^2} \bar{a} m \chi_0 B {\rm sin}^2(mt) + \bar{h} \bar{U}_{\rm ext}
\end{equation}
Splitting this up into an oscillatory part and one that is slowly evolving, using ${\rm sin}^2(mt) = \frac{1}{2}-\frac{1}{2} {\rm cos}(2mt)$, we see that we necessarily have to fulfill
\begin{equation}
\label{dPsioscSol}
\Phi_{\rm osc}' = \frac{1}{4M^2} \bar{a} m B \chi_0 {\rm cos}(2mt) \, .
\end{equation}
To show that this is consistent we verify that the non-oscillatory part of the equation indeed vanishes for the given solution of $\bar{\Phi}$, i.e. we calculate $\bar{\Phi}'$, for which we need $\bar{S}_{\rm ext}'$. We will now quickly derive an expression for this:
In the following calculations we will make the assumption that all components of the cosmic fluid other than the scalar field are uncoupled and have vanishing $\Gamma_i$, $\Pi_i$ and $\omega_i'$ (the case of a slowly evolving $\omega_i$ can easily be included). In this case we can split up the density contrasts and velocity potentials into two parts, one slowly evolving and one oscillatory:
\begin{equation}
\delta_i = \bar{\delta}_i + \delta_{\rm osc,i} \, , \quad V_i = \bar{V}_i + V_{\rm osc,i} \, .
\end{equation}
Now the evolution equations read:
\begin{align}
\bar{\delta}_i' + \delta_{\rm osc,i}' &= 3 (1+\omega_i) \left(\bar{\Psi}' + \Psi_{\rm osc}' \right) -\frac{k^2}{h} (1+\omega_i) \left( \bar{V}_i + V_{\rm osc,i} \right) \, , \\
\bar{V}'_i +V_{\rm osc,i}'&= h \left( \bar{\Phi} + \Phi_{\rm osc} \right) + \frac{h \omega _i}{(1+\omega _i)} \left( \bar{\delta}_i + \delta_{\rm osc,i} \right) -\frac{3 h}{2} (1+\bar{\omega}_{\rm eff}+\omega_{\rm eff,osc}-2 \omega_i) \left(\bar{V}_i + V_{\rm osc,i} \right) \, ,
\end{align}
where $\omega_{\rm eff} = \sum_{\alpha} \Omega_{\alpha} \omega_\alpha$ is oscillatory to first order. This is consistent only if the oscillating terms in the density contrasts and velocity potentials are subdominant, in particular we need $\delta_{\rm osc,i}$ to be of the order $\mathcal{O}(\mu \bar{\delta}_i)$ and $V_{\rm osc,i}$ to be of the order $\mathcal{O}(\mu \bar{V}_i)$. Evaluating these expressions to leading order and identifying oscillatory and non-oscillatory terms, we obtain:
\begin{align}
\delta_{\rm osc,i}' &= 3(1+\omega_i)\Psi_{\rm osc}' \\
\bar{\delta}_i' &= 3 (1+\omega_i) \bar{\Psi}' - \frac{k^2}{\bar{h}} (1+\omega_i)\bar{V}_i \\
V_{\rm osc,i}' &= -\frac{3}{2} \bar{h} \omega_{\rm eff,osc}\bar{V}_i = -\frac{3}{2} \bar{h} \Omega_\chi\omega_{\chi}\bar{V}_i = \frac{3}{2} \bar{h} \bar{\Omega}_\chi {\rm cos}(2mt) \bar{V}_i \\
\bar{V}_i' &= \bar{h} \bar{\Phi} + \frac{\omega_i}{1+\omega_i} \bar{h} \bar{\delta}_i - \frac{3}{2} \bar{h} \left( 1+\bar{\omega_{\rm eff}} - 2\omega_i \right) \bar{V}_i
\end{align}
In this case we see that the oscillations in the density contrasts are only driven by the oscillations in the gravitational potential (as expected), whereas the oscillations in the velocity field come (to leading order) from the oscillations in $h'$ (i.e. the pressure oscillations in $p_\chi$), since we put a factor $h$ in front of $v$ to make the quantity dimensionless. Now we can evaluate $\bar{S}_{\rm ext}'$ to obtain
\begin{equation}
\bar{S}_{\rm ext}' = -(\bar{h}+2\frac{\bar{h}'}{\bar{h}}) \bar{S}_{\rm ext} - \frac{2}{3\bar{h}}\left( k^2 + 3 \bar{h}^2 + \bar{h}' \right) \bar{U}_{\rm ext} + \frac{\bar{a}^2}{M^2 \bar{h}^2} (1+\omega_{\rm ext}) \rho_{\rm ext} (\bar{\Psi}' + \bar{h} \bar{\Psi})
\end{equation}
Now we can calculate $\bar{\Phi}'$ and insert the result into eq. (\ref{dPhiEQN1storder}), from which we recover eq. (\ref{dPsioscSol}).
\vspace{20pt}
\vspace{20pt}
Now we have sufficient information to average the equations of energy and momentum conservation over one oscillation period. Using standard trigonometric formulae (like $<{\rm sin}(mt) \, {\rm sin}(2mt)> = 0$ etc.) we first obtain for the relevant quantities to leading order:
\begin{align}
\delta_\chi = \frac{2(\chi_0 A + \chi_1 B)}{\chi_0^2} - 3 \frac{\bar{h} B}{\bar{a}m\chi_0}{\rm cos}(2mt) \quad &\rightarrow \quad <\delta_\chi> = \frac{2(\chi_0 A + \chi_1 B)}{\chi_0^2} \\
V_\chi = - \frac{\bar{h} B}{\bar{a}m \chi_0} \quad &\rightarrow \quad <V_\chi> = - \frac{\bar{h} B}{\bar{a}m \chi_0} & \\
\omega_\chi = -{\rm cos}(2mt) \quad & \rightarrow \quad <\omega_\chi> = 0
\end{align}
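The averaging step itself is easily sketched symbolically: with the slowly evolving amplitudes frozen over one period $T = \pi/m$ of the ${\rm cos}(2mt)$ oscillation, only the constant part of $\delta_\chi$ survives. A sympy check (variable names are ours):

```python
import sympy as sp

t, m, abar, chi0, M = sp.symbols('t m abar chi0 M', positive=True)
hbar, chi1, A, B = sp.symbols('hbar chi1 A B')

# delta_chi to leading order, amplitudes frozen over one oscillation
delta_chi = (2*(chi0*A + chi1*B)/chi0**2
             - 3*hbar*B/(abar*m*chi0) * sp.cos(2*m*t))

# Average over one period T = pi/m of the cos(2mt) oscillation
T = sp.pi/m
avg = sp.integrate(delta_chi, (t, 0, T))/T
print(sp.simplify(avg - 2*(chi0*A + chi1*B)/chi0**2))  # -> 0
```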
Now we use the field equations to average the full equation for the density contrast and express the result in terms of the averaged quantities $<\delta_\chi>$ and $<V_\chi>$ as well as an ``averaged internal entropy perturbation'', which we set to $<\Gamma_\chi>=0$.
We obtain:
\begin{align}
<\delta_\chi>' &= 3 <\Psi>' - \frac{k^2}{\bar{h}} <V_\chi> \, , \\
<V_\chi>' &= \bar{h} \bar{\Phi} - \frac{3}{2} \bar{h} <V_\chi> \, .
\end{align}
So in this wavelength-regime, the averaged equations for the scalar field behave just like CDM if $h\ll am$.
\subsubsection{Case 2: $\mu k^2/h^2 \gtrsim 1$}
In this case our argument for setting $A$ to be of the order $\mathcal{O}(\mu B)$ is no longer valid, since we formally have to set $k^2$ to be of the order $\mathcal{O}(\mu^{-1} h^2)$. The equation for the gravitational potential gives
\begin{equation}
\frac{2k^2}{3\bar{h}^2} \bar{\Phi} = -\frac{\bar{a}^2 m^2 \chi_0 A}{3M^2 \bar{h}^2} - \bar{S}_{\rm ext} \, .
\end{equation}
We again obtain the oscillatory part of the gravitational potential by identifying the oscillatory terms in the $\Phi'$ equation at leading order, which reads in this case:
\begin{equation}
\bar{\Phi}' + \Phi_{\rm osc}' = -\bar{h} \bar{\Phi} - \frac{1}{4M^2} \bar{a} m \chi_0 B + \frac{1}{4M^2} \bar{a} m B \chi_0 {\rm cos}(2mt) - \frac{1}{4M^2} \bar{a} m A \chi_0 {\rm sin}(2mt) + \bar{h} \bar{U}_{\rm ext}
\end{equation}
We can therefore conclude:
\begin{equation}
\Phi_{\rm osc}' = \frac{1}{4M^2}\bar{a} m \chi_0 \left( B {\rm cos}(2mt) - A {\rm sin}(2mt) \right)
\end{equation}
i.e.
\begin{equation}
\Phi_{\rm osc} = \frac{1}{8M^2}\chi_0 \left( B {\rm sin}(2mt) + A {\rm cos}(2mt) \right) \, .
\end{equation}
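One can quickly verify that this $\Phi_{\rm osc}$, differentiated with respect to conformal time at leading order (${\rm d}/{\rm d}\eta \approx \bar{a}\,{\rm d}/{\rm d}t$ with $A$, $B$, $\chi_0$ and $\bar{a}$ frozen over one oscillation), reproduces the expression for $\Phi_{\rm osc}'$ above. A sympy sketch (variable names are ours):

```python
import sympy as sp

t, m, abar, chi0, M = sp.symbols('t m abar chi0 M', positive=True)
A, B = sp.symbols('A B')

Phi_osc = chi0/(8*M**2) * (B*sp.sin(2*m*t) + A*sp.cos(2*m*t))

# Conformal-time derivative at leading order (d/d(eta) ~ abar*d/dt,
# amplitudes A, B and chi0, abar frozen over one oscillation)
dPhi_osc = abar*sp.diff(Phi_osc, t)

expected = abar*m*chi0/(4*M**2) * (B*sp.cos(2*m*t) - A*sp.sin(2*m*t))
print(sp.simplify(dPhi_osc - expected))  # -> 0
```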
This time the field equations yield (to leading order)
\begin{equation}
\left( k^2 A + 3 \bar{a} m \bar{h} B +2 \bar{a}^2 m^2 \chi_0 \bar{\Psi} + 2 \bar{a} m B' \right){\rm cos}(mt) + \left( k^2 B - 3 \bar{a} m \bar{h} A -2 \bar{a} m A' \right){\rm sin}(mt) = 0
\end{equation}
From this we directly obtain
\begin{align}
B' &= \frac{-1}{2\bar{a}m} \left( k^2 A + 3 \bar{a} m \bar{h} B +2 \bar{a}^2 m^2 \chi_0 \bar{\Psi} \right) \\
A' &= \frac{1}{2\bar{a}m}\left( k^2 B - 3 \bar{a} m \bar{h}A \right)
\end{align}
What remains to be shown is the consistency of our approximation, i.e. that
\begin{equation}
\bar{\Phi}' = -\bar{h} \bar{\Phi} - \frac{1}{4M^2} \bar{a} m \chi_0 B + \bar{h} \bar{U}_{\rm ext} \, .
\end{equation}
This goes exactly as in the ``small $k$'' case above, and turns out to be true, so our approximation is consistent.
\bigskip
Now we can move on to averaging the equations of energy and momentum conservation in this case. This turns out to be more straightforward if we go back to the original energy, momentum and pressure perturbations. The usual argument is the following:
Let us take the equation of energy conservation. It reads
\begin{equation}
\delta \rho_\chi' = -3 h \delta \rho_\chi -3 h \delta p_\chi +k^2 \left[(\rho + p)v\right]_\chi + 3 (\rho_\chi + p_\chi) \Psi'
\end{equation}
and is fulfilled exactly by virtue of the field equations. Therefore it is also solved by our ansatz to all orders in $\mu$ for which the field equations themselves hold.
It is now easy to see that $<\delta p_\chi> = \frac{k^2}{4\bar{a}^2m^2}<\delta \rho_\chi>$. Therefore the averaged perturbations evolve like CDM, but with a very small sound speed $c_{s,\chi}^2=\frac{k^2}{4\bar{a}^2m^2}$.
\bigskip
However, if we average the entire equation (to next to leading order) we have to take into account all oscillatory quantities, in particular $h_{\rm osc}$ and $p_\chi$. This gives
\begin{equation}
<3h\delta p_\chi> = 2 \bar{h} \frac{k^2}{4\bar{a}^2m^2}<\delta \rho_\chi> - \frac{3}{16} \frac{\chi_0^2}{M^2} \bar{a} m^3 \chi_0 B \, .
\end{equation}
Luckily this term is cancelled by a corresponding term resulting from
\begin{equation}
<3(\rho_\chi + p_\chi) \Psi'> = 3 <\rho_\chi> \bar{\Psi}' - \frac{3}{16}\frac{\chi_0^2}{M^2} \bar{a} m^3 \chi_0 B \, .
\end{equation}
Furthermore it is obvious that
\begin{equation}
<k^2 \left[ (\rho + p)v \right]_\chi> = k^2 <\left[ (\rho + p)v \right]_\chi>
\end{equation}
and
\begin{equation}
<\delta \rho_\chi'> = <\delta \rho_\chi>'
\end{equation}
to all orders, and it is also easy to check that (since $\delta \rho_\chi$ is slowly evolving to leading order)
\begin{equation}
<-3 h \delta \rho_\chi> = -3 \bar{h} <\delta \rho_\chi>
\end{equation}
to subleading order. This means that the naive argument presented above indeed yields the correct result.
Averaging the equation of momentum conservation is consistent with this result, so overall the averaged evolution of perturbations in this regime can be described by the above sound speed alone. (For more details see the coupled averaging below.)
{\bf Outdated comment, only kept as a reminder: Averaging term by term is in fact the ONLY way to go, because if we go to higher orders in the equations we also have to take these higher orders into account in the definition of the averaged quantities! This means we have to work with well-defined quantities only!!!}
\bigskip
{\bf Our argumentation becomes flawed for very large comoving momenta, i.e. $k^2$ of order $m^2$. In this case we can expect (for some components of the cosmic fluid) perturbative quantities oscillating with frequency $k=m$ inside the horizon (like baryons and photons before decoupling) and therefore our assumption of a slow evolution to leading order is invalid. We will not be concerned with perturbations of such extremely small wavelength since they are suppressed in the power spectrum anyway.}
\end{comment}
\begin{comment}
In this section we want to generalize the procedure presented above to the coupled cosmon-bolon system, i.e. a cosmology where dark energy and dark matter are modeled by two canonical scalar fields coupled through their potential. The common potential is given by
\begin{equation}
V(\varphi,\chi) = M^4 \left[ {\rm exp}(-\alpha \varphi/M) + c^2 {\rm exp}(-2 \beta \varphi/M) \left( {\rm cosh} (\lambda \chi/M)-1\right) \right]
\end{equation}
and we split up the potential according to
\begin{equation}
V_1(\varphi) = M^4 {\rm exp}(-\alpha \varphi / M) \, , \quad
V_2(\varphi, \chi) = M^4c^2 {\rm exp}(-2 \beta \varphi/M) \left( {\rm cosh} (\lambda \chi/M)-1\right) \, .
\end{equation}
\subsection{Background}
The evolution of the background can be split up into two parts, an early evolution for relatively large field values (i.e. $\lambda \chi/M \gg 1$) for which the cosh-potential is essentially exponential, and a late evolution for relatively small field values $\lambda \chi / M \ll 1$ for which the cosh-potential becomes effectively quadratic:
\begin{equation}
V_2(\varphi,\chi) = \frac{m(\varphi)^2}{2} \chi^2
\end{equation}
with $m(\varphi)^2 = M^2 c^2 \lambda^2 {\rm exp}(-2\beta \varphi / M)$. Here we will deal only with the small-field regime and again solve the field equations in the approximation $\mu = h/am(\varphi) \ll 1$. While the mass of the bolon is of course time-dependent, it is easy to see that for a large number of parameterizations $m(\varphi)$ is only evolving slowly in time and the quantity $\mu$ is always decreasing (to leading order). The deciding criterion is the smallness of the coupling $\beta$, or to put this in more precise terms: In scenarios for which the cosmon field evolves according to a scaling solution during the matter-dominated era (and late time cosmic acceleration is achieved by some mechanism effectively stopping the scalar field evolution, see e.g. (...)), the ratio $\beta / \alpha$ should not be too big.
We generalize the ansatz used in the uncoupled case in the following way:
\begin{align}
\chi &= \chi_0 {\rm cos}(x) + \chi_1 {\rm sin}(x) + \chi_2 {\rm cos} (3x) + \chi_{\rm ig} \\
\varphi &= \bar{\varphi} + \varphi_1 {\rm cos}(2x) + \varphi_2 {\rm sin}(2x) + \varphi_{\rm ig}
\end{align}
where $x=\int_{t_0}^t m(\varphi(t'))\, dt'$, with $t$ again being the cosmic time and $t_0$ some early time; $x$ should be read as a function of conformal time here. The quantities $\chi_0$, $\chi_1$, $\chi_2$, $\bar{\varphi}$ and $\varphi_1$ are again slowly evolving (i.e. almost constant on a time scale $1/m(\varphi)$ during the era considered here) and are the leading order terms in a $\mu$-expansion of the exact field evolution, i.e. $\chi_1$ is of the order $\mathcal{O}(\mu \chi_0)$, $\chi_2$ is of the order $\mathcal{O}(\mu^2 \chi_0)$, $\varphi_1$ is of the order $\mathcal{O}(\mu^2 \bar{\varphi})$ and $\varphi_2$ is of the order $\mathcal{O}(\mu^3 \bar{\varphi})$. The leading order contributions are formally related by setting $\chi_0$ to be of the order $\mathcal{O}(\mu \bar{\varphi})$, so that the energy densities are formally of the same order, as is appropriate for a scaling scenario. The terms $\chi_{\rm ig}$ and $\varphi_{\rm ig}$ represent higher order contributions and are oscillatory, but they will not play a role in our analysis.
The form of this ansatz may seem slightly ad hoc, but it can in fact be easily motivated. First we note that the ansatz that $\chi$ is evolving with ${\rm cos}(x)$ and $\varphi$ is evolving slowly to leading order follows directly from the field equations (see the argumentation in the SFDM case, just with a (slowly) time-dependent mass). Now one can treat the corrections to both fields as a generic oscillatory term of some subleading order, evaluate the field equations order by order and conclude which frequencies have to be present at which order. We chose however to present the result of this procedure as an ansatz for simplicity; it is easy to check that it indeed solves the field equations to the order presented below and that adding any additional terms at the order explicitly given here would lead to inconsistencies. (Also, we know from numerical simulations that the cosmon energy density has to be non-oscillatory, and therefore $\varphi$ and $\varphi'$ both have to be slowly evolving to leading order.)
Evaluating the field equations to leading order with this ansatz (and assuming that $h$ and $a$ are evolving slowly to leading order just as in the SFDM case (and with the same notation) - a justified assumption as we will see below) yields:
\begin{align}
\chi_0' &= -\frac{3}{2} \bar{h} \chi_0 + \frac{\beta}{2} \frac{\chi_0}{M} \bar{\varphi}' \\
\varphi_1 &= - \frac{\beta}{2} \frac{\chi_0^2}{M}\\
\bar{\varphi}'' &= -2 \bar{h} \bar{\varphi}' + \alpha \bar{a}^2 M^3 {\rm exp}(-\alpha \bar{\varphi} / M) + \frac{\beta}{2} \bar{a}^2 \bar{m}(\bar{\varphi})^2 \frac{\chi_0^2}{M}
\end{align}
where $\bar{m}(\bar{\varphi})$ simply denotes the leading order term in the $\mu$-expansion for the mass, which is of course only $\bar{\varphi}$-dependent and given by
\begin{equation}
\bar{m}(\bar{\varphi}) = Mc\lambda {\rm e}^{-\beta \bar{\varphi}/M} \, .
\end{equation}
Using this we find at next to leading order from Einstein's equations
\begin{align}
h_{\rm osc} &= \frac{1}{8} \bar{a} \bar{m}(\bar{\varphi}) \frac{\chi_0^2}{M^2} {\rm sin}(2x) \\
a_{\rm osc} &= -\frac{1}{16} \bar{a} \frac{\chi_0^2}{M^2} {\rm cos}(2x)
\end{align}
while the field equation for the bolon yields at next to leading order
\begin{align}
\chi_2 &= \frac{3}{128} \frac{\chi_0^3}{M^2} \left( 1 - \frac{2}{3} \beta^2 \right) \\
\chi_1' &= - \left( \frac{3}{2} \bar{h} - \beta \frac{\bar{\varphi}'}{M}\right)\chi_1- \frac{1}{2\bar{m}(\bar{\varphi})\bar{a}} \left( \chi_0'' + 2 \bar{h} \chi_0' \right) + \frac{3-2\beta^2}{32 M^2} \bar{m}(\bar{\varphi}) \bar{a} \chi_0^3
\end{align}
while the cosmon field equation gives
\begin{equation}
\varphi_2 = - \frac{\chi_0}{16 \bar{m} \bar{a} M } \left( 4 \beta \bar{m} \bar{a} \chi_1 - 2 \beta \bar{h} \chi_0 - \chi_0 \bar{\varphi}' /M \right) \, .
\end{equation}
From the leading-order equation for $\chi_0'$ above we can directly see that ${\rm ln}(\chi_0) = -\frac{3}{2}{\rm ln}(\bar{a}) + \frac{\beta}{2} \frac{\bar{\varphi}}{M} + {\rm const}$, i.e. $\chi_0 \propto \bar{a}^{-3/2} {\rm exp}(\beta \bar{\varphi} / 2M) $ and therefore
\begin{equation}
\bar{\rho}_\chi = \frac{1}{2} \bar{m}(\bar{\varphi})^2 \chi_0^2 \propto \bar{a}^{-3} {\rm exp} (-\beta \bar{\varphi}/M) \, .
\end{equation}
Moving on to the equation of state for the cosmon we first note that to leading order
\begin{align}
\bar{\omega}_\varphi &= \frac{\bar{\varphi}'^2/\bar{a}^2 - 2 M^4 {\rm e}^{-\alpha \bar{\varphi}/M}}{\bar{\varphi}'^2/\bar{a}^2 + 2 M^4 {\rm e}^{-\alpha \bar{\varphi}/M}} \\
\omega_\varphi' &= \alpha (1-\bar{\omega}_\varphi) \frac{\varphi_0'}{M}
+\beta \frac{\bar{\rho}_{\chi}}{\bar{\rho}_{\varphi}}(1-\bar{\omega}_\varphi)\frac{\varphi_0'}{M} \left( 1+{\rm cos}(2x) \right)
- 3 \bar{h} (1-\bar{\omega}_\varphi) (1+\bar{\omega}_\varphi) \\
\rightarrow \; <\omega_\varphi'> &= \alpha (1-\bar{\omega}_\varphi) \frac{\varphi_0'}{M}
+\beta \frac{\bar{\rho}_{\chi}}{\bar{\rho}_{\varphi}}(1-\bar{\omega}_\varphi)\frac{\varphi_0'}{M}
- 3 \bar{h} (1-\bar{\omega}_\varphi) (1+\bar{\omega}_\varphi)
\end{align}
Since $\rho_\varphi$ and $\omega_\varphi$ are slowly evolving to leading order we have adopted (as for the bolon) the notation $\bar{\rho}_\varphi$ and $\bar{\omega}_\varphi$ for these quantities.
Evaluating the equations of energy conservation for both fields gives
\begin{align}
(1+\omega_\chi) q_\chi = -\beta \frac{\bar{\varphi}'}{3 \bar{h} M} \left( 1 + {\rm cos}(2x) \right) \quad & \rightarrow \quad <(1+\omega_\chi) q_\chi> = -\beta \frac{\bar{\varphi}'}{3 \bar{h} M} \\
(1+\omega_\varphi) q_\varphi = \beta \frac{\bar{\rho}_\chi}{\bar{\rho}_\varphi} \frac{ \bar{\varphi}' }{3\bar{h}M } \left(1+ {\rm cos}(2x) \right)\quad &\rightarrow \quad <(1+\omega_\varphi) q_\varphi> = \beta \frac{\bar{\rho}_\chi}{\bar{\rho}_\varphi} \frac{ \bar{\varphi}' }{3\bar{h}M }
\end{align}
\end{comment}
In this section we analyze the behaviour of linear perturbations in the late universe, i.e. after the bolon has started to perform rapid oscillations around the minimum of its potential. The behaviour of perturbations in the early universe was already analyzed in ref. \cite{EarlyScalings}. As the perturbations also perform rapid oscillations, a numerical evolution of the full field equations is not feasible. We will therefore first deduce a set of effective perturbation equations, averaged over one oscillation period, and then use the results in our numerical simulations below.
In the interest of brevity we will not discuss our conventions concerning linear perturbation theory here; we merely mention that they precisely coincide with the ones defined in the appendix of the accompanying work, ref. \cite{EarlyScalings}. In particular, we are only interested in the scalar sector of linear perturbation theory and will employ the following quantities: the gauge-invariant field perturbations $X$ and $Y$, the Bardeen potentials $\Psi$ and $\Phi$, the gauge-invariant energy density and momentum perturbations $\delta \rho_\alpha$ and $\left[ (\rho + p) v \right]_\alpha$, their dimensionless versions, i.e. the density contrasts $\delta_\alpha$ and velocity potentials $\Theta_\alpha$, and the total anisotropic stress $\Pi_{\rm tot}$.
\subsection{Averaged evolution}
We start by considering the exact gauge-invariant linearly perturbed scalar field equations
\begin{align}
\label{CosmonPEquation}
X'' + 2hX' + k^2 X + a^2 V_{,\varphi \varphi} X + a^2 V_{,\varphi \chi} Y \nonumber \\
+ 2 a^2 V_{,\varphi} \Phi - \varphi' \Phi' - 3 \varphi' \Psi' & = 0 \, , \\
\label{BolonPEquation}
Y'' + 2hY' + k^2 Y + a^2 V_{,\varphi \chi} X + a^2 V_{,\chi \chi} Y \nonumber \\
+ 2 a^2 V_{,\chi} \Phi - \chi' \Phi' - 3\chi' \Psi' & = 0 \, ,
\end{align}
and the Poisson-equation
\begin{align}
\label{PsiEquation}
k^2 \Psi &= -\frac{a^2}{2M^2} \sum_\alpha \left( \delta \rho_\alpha - 3 h \left[(\rho + p)v\right]_\alpha \right) \, .
\end{align}
Furthermore we relate the two Bardeen potentials via
\begin{equation}
\label{phiPsiEQ}
\Phi = \Psi - a^2 \Pi_{\rm tot}/M^2 \, .
\end{equation}
If we add a suitable set of equations describing additional components of the cosmic fluid, these equations form a closed set and uniquely determine the evolution of the scalar linear perturbations. These additional equations would be the equations of energy and momentum conservation in the case of a fluid description (e.g. for baryons), or the equations for the higher moments in the multipole expansion of the phase-space distribution function resulting from the corresponding Boltzmann equations for more complex descriptions (typically for photons and neutrinos, see e.g. ref. \cite{Ma:1995ey}). In what follows we will ignore such additional equations, but note that the entire averaging procedure presented below can easily be extended to include them, and they emerge unchanged. We explain this in more detail in appendix \ref{app:bgExpansion}.
In order to average the perturbation equations we use the same idea we applied to the background evolution in section \ref{sec:LateUniverse}, i.e. we expand all dynamical perturbative quantities first in a Taylor-series in $\tilde{\mu}$ and then each coefficient in a Fourier-type sum of harmonic functions with time-dependent frequencies, where all occurring frequencies are multiples of a base-frequency (see equations (\ref{MuExpansion}) - (\ref{baseFrequency})). By plugging the results into the linearized field equations (\ref{CosmonPEquation}) - (\ref{PsiEquation}) and comparing coefficients we see which frequencies are present at which order and can then use the results to calculate the evolution equations for the energy density and momentum perturbations for the bolon averaged over one oscillation period.
The precise details of the calculation can be found in appendix \ref{app:bgExpansion}.
The procedure reveals that the decompositions of the scalar-field perturbations and the gravitational potential change for different wavenumbers, and we have to split up our analysis into two regimes: large-wavelength perturbations, for which $\mu k^2/h^2 \ll 1$, and small-wavelength perturbations, which have $\mu k^2/h^2 \gtrsim 1$.
\subsubsection{$\mu k^2/h^2 \ll 1$}
For large wavelengths the equations resulting from the averaging procedure (to subleading order in $\tilde{\mu}$) are
\begin{align}
\label{dChiEQAV}
<\delta_\chi>' =& -\bar{\Theta}_\chi + 3 \bar{\Psi}' - \beta P'/M + \beta \frac{\bar{\varphi}'}{M} \left( \bar{\Psi} - \bar{\Phi} \right) \, , \\
\bar{\Theta}_\chi' =& - \bar{h} \bar{\Theta}_\chi + k^2 \bar{\Phi} + \beta \frac{\bar{\varphi}'}{M} \bar{\Theta}_\chi - \beta k^2 P/M \, , \\
\label{pEQAV}
P'' =& - 2 \bar{h} P' - \left( k^2 + \bar{a}^2 V_{1,\bar{\varphi} \bar{\varphi}} \right) P - 2 \bar{a}^2 V_{1,\bar{\varphi}} \bar{\Phi} \nonumber \\
& + \bar{\varphi}' \left( \bar{\Phi}' + 3 \bar{\Psi}' \right)
+ \frac{\beta \bar{a}^2}{M} \bar{\rho}_\chi \left( <\delta_\chi> +2 \bar{\Phi} \right) \, .
\end{align}
Here the dynamical quantities describing the bolon are its density contrast, defined as $\delta_\chi = \delta \rho_\chi/\rho_\chi$, and its averaged velocity potential, given by
\begin{equation}
\bar{\Theta}_\chi = -k^2 \frac{<\left[ (\rho + p)v \right]_\chi>}{\bar{\rho}_\chi + <p_\chi>} = -k^2 \frac{<\left[ (\rho + p)v \right]_\chi>}{\bar{\rho}_\chi} \, .
\end{equation}
Triangular brackets denote an oscillatory quantity averaged over one oscillation period, whereas, as before, a bar denotes a quantity which is evolving adiabatically at leading order.
Note that the velocity potential $\bar{\Theta}_\chi$ is a well-defined variable, despite the fact that trying to define the same velocity potential with the non-averaged quantities would yield an ill-defined (periodically divergent) $\Theta_\chi$.
The quintessence field is evolving slowly to leading order, as are the gravitational potentials, and we denote the leading order quantities by $P$, $\bar{\Psi}$ and $\bar{\Phi}$ respectively.
To complete our equations we have to evaluate the Poisson equation, which gives
\begin{align}
\label{poissonEQAV}
\bar{\Psi} =\frac{-\bar{a}^2}{2 M^2 k^2 - \bar{\varphi}'^2} & \left[\bar{\rho}_{\rm ext} \left( \bar{\delta }_{\rm ext} + \frac{3 \bar{h}}{k^2} (1+\bar{\omega}_{\rm ext}) \bar{\Theta}_{\rm ext} \right) \right. \nonumber \\
&+ \bar{\rho}_\chi \left( <\delta_\chi> + \frac{3 \bar{h}}{k^2} \bar{\Theta}_\chi \right) + \bar{\varphi}'^2 \frac{\bar{\Pi}_{\rm tot}}{M^2}\nonumber \\
&\left. + V_{1,\bar{\varphi}} P + 3 \bar{h} \frac{\bar{\varphi}' P}{\bar{a}^2} + \frac{\bar{\varphi}' P'}{\bar{a}^2} \right] \, .
\end{align}
Here the subscript ``ext'' labels all additional components of the cosmic fluid, i.e. neutrinos, photons and baryons. As mentioned above, the corresponding energy densities evolve adiabatically to leading order, the velocity potentials and higher moments of the phase-space distribution function even to subleading order.
Equations (\ref{dChiEQAV})-(\ref{pEQAV}) together with eq. (\ref{poissonEQAV}) and eq. (\ref{phiPsiEQ}) are equivalent to the set of equations (25) - (31) in ref. \cite{Amendola:2003wa} for a constant coupling and $\omega=0$. The apparent differences arise from the fact that the author of ref. \cite{Amendola:2003wa} employed slightly different definitions of the velocity potential $\Theta$ and the coupling $\beta$, ignored anisotropic stress contributions and used derivatives taken with respect to ${\rm ln}(a)$, not with respect to conformal time. The averaged cosmon and bolon perturbations therefore behave exactly as standard cold dark matter coupled to quintessence with a constant coupling in this wavelength regime.
\subsubsection{$\mu k^2/h^2 \gtrsim 1$}
For smaller wavelengths the results from the averaging procedure are almost the same, except that pressure perturbations cannot be neglected at subleading order. The equations of energy and momentum conservation for the bolon now give:
\begin{align}
<\delta_\chi>' =&- \bar{\Theta}_\chi + 3 \bar{\Psi}' - \beta \frac{P'}{M} + \beta \frac{\bar{\varphi}'}{M} \left( \bar{\Psi} - \bar{\Phi} \right) \nonumber \\
& - \left( 3 \bar{h} - \beta \frac{\bar{\varphi}'}{M} \right) c_{s,\chi}^2 <\delta_\chi> \, , \\
\bar{\Theta}_\chi' =& - \bar{h} \bar{\Theta}_\chi + k^2 \bar{\Phi} + \beta \frac{\bar{\varphi}'}{M} \bar{\Theta}_\chi - \beta k^2 \frac{P}{M} \nonumber \\
& + k^2 c_{s,\chi}^2 <\delta_\chi> \, ,
\end{align}
where the bolon sound speed is given by
\begin{equation}
\label{soundSpeed}
c_{s,\chi}^2 = \frac{k^2}{4 \bar{m}_\chi^2 \bar{a}^2} \, .
\end{equation}
Such a sound speed modifies the growth of perturbations in the dark matter sector considerably, as we will show below. This generalizes the previous results found in refs. \cite{Hu:2000ke,Matos:2000ss}, where an oscillating scalar field without a coupling to the cosmon was considered. Furthermore, we have provided a rigorous way to handle the quick oscillations present in both the background and the full set of gauge-invariant perturbation equations, something not provided in refs. \cite{Hu:2000ke,Matos:2000ss}. The field equation for the cosmon and the Poisson equation are exactly the same as in the large-wavelength regime.
A comparison of these equations with equations (25) to (31) in ref. \cite{Amendola:2003wa} yields obvious differences. These go beyond the simple fact that the definitions of the velocity potential, the coupling and the time variable differ; they originate from the fact that the fluids considered in ref. \cite{Amendola:2003wa} are assumed to be barotropic, i.e. the equation of state is assumed to be a function of the energy density alone, and no non-adiabatic pressure perturbations are present. Our results however correspond to a non-barotropic fluid, and the pressure perturbation is indeed non-adiabatic.
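To illustrate the effect of this sound speed, consider the following minimal toy integration (a sketch under strong simplifications, not the full coupled system of this section): the Einstein-de-Sitter growth equation in conformal time $\tau$, with $a \propto \tau^2$ and the sound-speed term of eq. (\ref{soundSpeed}) added as a pressure term, $\delta'' + (2/\tau)\,\delta' + (c_{s,\chi}^2 k^2 - 6/\tau^2)\,\delta = 0$. All parameter values are illustrative.

```python
# Toy illustration of sound-speed suppression (simplified assumed setup,
# not the full coupled cosmon-bolon system): EdS growth equation in
# conformal time tau, a ~ tau^2, with cs^2 = k^2 / (4 m^2 a^2).

def bolon_growth(k, m=1.0e4, tau0=1.0, tau1=10.0, steps=40000):
    """Integrate delta'' + (2/tau) delta' + (cs^2 k^2 - 6/tau^2) delta = 0
    with fixed-step RK4, starting on the growing mode delta = tau^2."""
    def rhs(tau, d, dp):
        a = tau * tau
        cs2 = k * k / (4.0 * m * m * a * a)
        return dp, -(2.0 / tau) * dp - (cs2 * k * k - 6.0 / (tau * tau)) * d

    h = (tau1 - tau0) / steps
    tau, d, dp = tau0, 1.0, 2.0      # growing mode: delta = tau^2, delta' = 2 tau
    for _ in range(steps):
        k1 = rhs(tau, d, dp)
        k2 = rhs(tau + h / 2, d + h / 2 * k1[0], dp + h / 2 * k1[1])
        k3 = rhs(tau + h / 2, d + h / 2 * k2[0], dp + h / 2 * k2[1])
        k4 = rhs(tau + h, d + h * k3[0], dp + h * k3[1])
        d += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        dp += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        tau += h
    return d

# small k: the pressure term is negligible and delta grows like a ~ tau^2;
# large k: the pressure term dominates and the mode merely oscillates
```

For $k$ well below the Jeans scale the mode reproduces the CDM growth $\delta \propto a$ (a factor $100$ over this interval), while for large $k$ the density contrast oscillates instead of growing, mirroring the suppression discussed in the next subsection.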
\subsection{Damping of the power spectrum}
The basic features of the evolution of the bolon density contrast can be understood fairly easily. Perturbations with wavenumbers in the regime $\mu k^2/h^2 \ll 1$ follow the evolution of a coupled cold dark matter component, whereas perturbations in the regime $\mu k^2/h^2 \gtrsim 1$ are expected to be strongly suppressed, due to the small sound speed. Since this is a time-dependent condition, let us see how the quantity $\mu k^2/h^2$ evolves during the main stages of the cosmic evolution. We can generally assume that $\bar{h} \propto \bar{a}^\eta$, where $\eta = -1, -1/2$ and $1$ during radiation domination, matter domination and de Sitter expansion, respectively. The coupling between the bolon and the cosmon will of course change this behavior slightly, but that is not crucial for our argument. We then have
\begin{equation}
\mu k^2/\bar{h}^2 \propto \bar{a}^{-1-\eta} {\rm e}^{\beta \bar{\varphi}/M} \, ,
\end{equation}
which shows that during the eras of bolon- or cosmon-domination (and for not too large couplings), this quantity is decreasing. Therefore modes which have been suppressed in their evolution during some early era will eventually enter the regime where they behave just like coupled cold dark matter. Due to this delayed onset of growth for these modes, the power spectrum exhibits a sharp cutoff at a scale which can be approximated by
\begin{equation}
\label{jeansWavenumber}
k_{\rm J}^2 = m_0 {\rm e}^{-\beta \bar{\varphi}^*/M} \sqrt{\frac{\rho_{r,0}}{3 M^2 (1-\Omega_{sc,es})}} \, ,
\end{equation}
where the value $\varphi^*$ of the cosmon at $a^*$ can once again be approximated by extrapolating the early scaling solution given in eq. (\ref{EarlyPhiEvolution}). This formula generalizes eq. (63) in ref. \cite{Matos:2000ss} and is of course only an estimate, but it is the best definition of a Jeans length available in our model as it describes the smallest wavenumber for which the pressure balances out the gravitational attraction during some stage of the cosmic evolution. We can assign a corresponding Jeans mass in the usual way:
\begin{equation}
\label{jeansMass}
M_{\rm J} = \frac{4 \pi}{3} k_{\rm J}^{-3} \bar{\rho}_\chi \, .
\end{equation}
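As a rough numerical orientation, eqs. (\ref{jeansWavenumber}) and (\ref{jeansMass}) can be evaluated directly. The parameter values in the following sketch are purely illustrative placeholders (hypothetical, not fits from this work), in units with $H_0 = M = 1$:

```python
import math

# Illustrative placeholder values (hypothetical, units H0 = M = 1):
M = 1.0                        # reduced Planck mass
m0 = 1.0e3                     # bolon mass scale today
beta, phi_star = 0.05, 14.0    # coupling and extrapolated cosmon value
Omega_r0 = 8.0e-5              # radiation density parameter today
Omega_sc_es = 0.1              # early scaling scalar density parameter

# eq. (jeansWavenumber): kJ^2 = m0 e^{-beta phi*/M} sqrt(rho_r0 / (3 M^2 (1 - Omega_sc_es)))
rho_r0 = 3.0 * M**2 * Omega_r0          # rho = 3 M^2 H^2 with H0 = 1
kJ2 = m0 * math.exp(-beta * phi_star / M) \
      * math.sqrt(rho_r0 / (3.0 * M**2 * (1.0 - Omega_sc_es)))
kJ = math.sqrt(kJ2)

# eq. (jeansMass): MJ = (4 pi / 3) kJ^{-3} rho_chi
rho_chi0 = 3.0 * M**2 * 0.222           # bolon density today (Omega_chi = 0.222)
MJ = 4.0 * math.pi / 3.0 * kJ**-3 * rho_chi0
```

Note that $M_{\rm J} \propto k_{\rm J}^{-3}$, so even a modest shift of the cutoff scale changes the associated Jeans mass by a large factor.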
\subsection{Numerical evolution}
\label{sec:Numerics}
Now we move on to numerically evolve the linear perturbations in our model. We draw the initial conditions for our simulations from the results of an accompanying paper \cite{EarlyScalings}. There it was shown that for the coupled cosmon-bolon system there exists an adiabatic mode which will evolve to be dominant if a sufficiently long era of tracking is present; such an era is a quite generic feature in our model. To avoid it, initial conditions would have to be very skewed (see ref. \cite{Beyer:2010mt}).
We therefore work with purely adiabatic initial conditions (see equation (47) in ref. \cite{EarlyScalings}) and ignore possible isocurvature contributions. Our numerical simulation then integrates the evolution equations for all components of the cosmic fluid, i.e. photons, neutrinos, baryons and the two scalar fields. As is common in Boltzmann codes, we use several approximations in order to speed up the calculation, which we now briefly describe.
For the neutrino and baryon-photon sectors we use a set of approximations that are also implemented in the recent CLASS code \cite{Lesgourgues:2011re,Blas:2011rf} and discussed in detail in ref. \cite{Blas:2011rf}. For the early universe we use the exact first-order version of the tight-coupling approximation, corresponding to the setting \textit{first\_order\_CLASS} in the CLASS code. In order to deal with the quick oscillations in the relativistic species (neutrinos and photons) in the late universe we also employ the ultra-relativistic fluid approximation, corresponding to the setting \textit{ufa\_class}, and the relativistic streaming approximation in its simplest version, corresponding to the setting \textit{rsa\_MD}.
The thermodynamic history of the universe was calculated using a modified version of the latest Fortran release of recfast \cite{Seager:1999bc,Seager:1999km,Wong:2007ym,Scott:2009sz}.
Since we do not use synchronous gauge but a manifestly gauge-invariant approach (see the appendix of ref. \cite{EarlyScalings}) instead, we had to rederive some of the approximations mentioned above. To check the correctness of our equations as well as their implementation and the validity of our choices for the triggers determining the switches between the approximation schemes, we have compared the results our code gives for the standard $\Lambda$CDM model with the CLASS results. As it turns out, the matter power spectra (which are the quantities we are after) agree to excellent accuracy (see appendix \ref{app:ClassComp}).
In our treatment of the scalar fields, we start by evolving the exact scalar field equations and switch to the effective averaged fluid description for the bolon at some suitably late time, i.e. after the oscillations have started. The initial values for the effective fluid description are obtained by an explicit numerical integration over one oscillation period.
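The numerical period average used to initialize the fluid variables can be sketched as follows; the oscillatory signal below is a hypothetical stand-in for the bolon energy density, not output of our code:

```python
import math

def period_average(f, t0, period, n=2000):
    """Trapezoidal average of f over one oscillation period [t0, t0 + period]."""
    h = period / n
    s = 0.5 * (f(t0) + f(t0 + period))
    for i in range(1, n):
        s += f(t0 + i * h)
    return s * h / period

# toy stand-in: slowly drifting envelope times a fast oscillation cos^2(m t)
m = 200.0                                     # fast frequency, >> expansion rate
rho = lambda t: (1.0 + 0.01 * t) * math.cos(m * t) ** 2
avg = period_average(rho, t0=0.0, period=math.pi / m)
# averaging removes the fast oscillation and returns roughly half the envelope
```

The average depends only on the slowly evolving envelope, which is precisely what makes the effective fluid description well defined.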
For large wavenumbers and late enough times (i.e. in the subhorizon regime), the cosmon field will exhibit quick oscillations, similar to those present in the photons or neutrinos. We therefore extended the RSA approximation to include the cosmon by employing equation (43) in ref. \cite{Amendola:2003wa}, which in our conventions reads
\begin{equation}
P = 3 \beta M \frac{\bar{h}^2}{k^2} \bar{\Omega}_\chi <\delta_\chi> \, .
\end{equation}
We have checked that this is a good approximation by varying the RSA-onset trigger.
To address the issue of accelerated expansion we simply change the exponent of the quintessence potential for large field values by making a smooth transition from $\alpha$ to $0.1$ at a suitable point:
\begin{equation}
\alpha (\varphi) = \left\{
\begin{array}{l l}
\alpha \quad & \varphi<\varphi_0 \\
\alpha - t(\varphi) \quad & \varphi_0 < \varphi < \varphi_1 \\
0.1 \quad & \varphi_1 < \varphi \\
\end{array}
\right. \, ,
\end{equation}
with
\begin{equation}
t(\varphi) = \left( 3 \frac{(\varphi-\varphi_0)^2}{(\varphi_1-\varphi_0)^2} - 2 \frac{(\varphi-\varphi_0)^3}{(\varphi_1-\varphi_0)^3}\right) (\alpha-0.1) \, .
\end{equation}
This is of course slightly ad hoc and should be taken as only one of many ways to slow down the evolution of the cosmon; several other possibilities have been discussed in section \ref{sec:Background}. We do this merely to obtain a realistic cosmology for the numerics.
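A sketch of this transition function follows. We use the standard smoothstep combination $3u^2 - 2u^3$, which makes $\alpha(\varphi)$ continuous with vanishing slope at $\varphi_0$ and $\varphi_1$ and lets it run from $\alpha$ to $0.1$, matching the transition described above; the sample values of $\varphi_0$ and $\varphi_1$ are the uncoupled-model numbers quoted below.

```python
def alpha_of_phi(phi, alpha=20.0, phi0=13.806, phi1=15.493, alpha_end=0.1):
    """Quintessence exponent with a smooth transition from alpha to alpha_end.

    The smoothstep 3u^2 - 2u^3 interpolates between the two constant
    branches; phi0 and phi1 are the transition field values."""
    if phi <= phi0:
        return alpha
    if phi >= phi1:
        return alpha_end
    u = (phi - phi0) / (phi1 - phi0)
    return alpha - (3.0 * u**2 - 2.0 * u**3) * (alpha - alpha_end)
```

At the midpoint of the transition the smoothstep equals $1/2$, so $\alpha(\varphi)$ passes exactly through the mean of the two branch values.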
In this section we will always choose the value $\alpha = 20$. The values for $\lambda$ and $\beta$ will vary, and for each choice the values for $c^2$, $\varphi_0$ and $\varphi_1$ will be adjusted to give the correct current Hubble rate and density parameters. For each set of parameters used we will quote the values for $\varphi_0$ and $\varphi_1$ as well as the normalization factor $N$ (see eq. (\ref{c2Guess})), from which $c^2$ can be deduced.
We adjust the cosmological parameters to the WMAP7-values \cite{Jarosik:2010iu}, i.e.
\begin{table}[h]
\label{densityParameters}
\begin{tabular}{ccc}
$h=0.71 \, ,$ & $\Omega_{\chi,0}=0.222 \, ,$ & $\Omega_{b,0}=0.0449 \, ,$ \\
$T_{CMB}=2.728 \, {\rm K} \, ,$ & $n_s=0.961 \, ,$ & $\sigma_8=0.801 \, .$
\end{tabular}
\end{table}
The evolution of the density parameters can be seen in Figure \ref{fig:omegas} for different couplings. We choose initial conditions corresponding to the scaling solution given in equations (\ref{EarlyPhiEvolution}) and (\ref{EarlyChiEvolution}). Different initial conditions do not change the late-time cosmology unless they are very skewed (see ref. \cite{Beyer:2010mt}).
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figure1}
\caption{Density parameters for photons (dark green), neutrinos (light green), bolon (dark blue), baryons (light blue) and cosmon (grey). The solid lines represent the uncoupled model (with $N=0.38$, $\varphi_0=13.806$, $\varphi_1=15.493$), the dashed lines $\beta=0.05$ (with $N=0.3727$, $\varphi_0=13.786$, $\varphi_1=16.248$) and the dotted lines $\beta=-0.05$ (with $N=0.378$, $\varphi_0=13.789$, $\varphi_1=16.168$ ).}
\label{fig:omegas}
\end{figure}
The connection between the evolution of the bolon density contrast and the quantity $\mu k^2/h^2$ is shown in Figure \ref{fig:kmodes1}. We display numerical results for two different wavenumbers, $k=0.96$ h/Mpc (green) and $k=10.7$ h/Mpc (blue). The solid lines represent the bolon evolution, the dashed lines standard cold dark matter modes with the same wavenumbers evolved in the same background, shown for comparison. The normalization for both modes is arbitrary. The first difference to note is the slightly elevated initial density contrast in the adiabatic mode for the bolon compared to standard CDM. We also clearly see the onset of oscillations in both modes, but while the $k=0.96$ h/Mpc mode, when averaged, follows the evolution of the cold dark matter mode exactly, the growth of the $k=10.7$ h/Mpc mode is suppressed during a prolonged stage of its evolution. This was to be expected from the evolution of $\mu k^2/h^2$, shown in the lower panel of the figure. For the $k=0.96$ h/Mpc mode we have $\mu k^2/h^2 \ll 1$ throughout the entire evolution, whereas for $k=10.7$ h/Mpc we initially have $\mu k^2/h^2>1$, but the value decreases at later times and growth sets in.
\begin{figure}[t]
\centering
\includegraphics[trim=0cm 8.7cm 0cm 5.4cm, clip=true, width=1.0\linewidth]{figure2}
\caption{Bolon density contrast evolution. The upper panel shows the evolution of the linear density contrast for the bolon (solid lines) and a standard cold dark matter evolution (dashed lines) in the same background. The green lines show a $k=0.96$ h/Mpc mode, whereas the blue ones represent $k=10.7$ h/Mpc. In the lower panel we show the corresponding evolution of the quantity $\mu k^2/h^2$. The bolon exponential is $\lambda=30$ for this plot, the coupling is $\beta=0$.}
\label{fig:kmodes1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figure3}
\caption{Power spectra for standard cold dark matter (black), warm dark matter with $m_{\rm wdm} = 2.284$ keV (grey, dotted) and the bolon (dashed) for different masses. The corresponding values are from left to right $\lambda=30$ (green line, $N=0.3793$), $\lambda=40$ (orange line, $N=0.3804$), $\lambda=50$ (brown line, $N=0.3792$), $\lambda = 60$ (red line, $N=0.3791$) and $\lambda=70$ (blue line, $N=0.3784$). The coupling is $\beta=0$, the scalar field transition values are $\varphi_0=13.806$ and $\varphi_1=15.493$ for all curves.}
\label{fig:psUncoupled}
\end{figure}
The synchronous-gauge power spectrum resulting from this evolution, which we reconstructed from our gauge-invariant quantities, is shown in Figure \ref{fig:psUncoupled} (dashed lines). For comparison we also show a number of additional power spectra. The solid black line represents the power spectrum obtained for a pure cold dark matter component, evolved in the same background cosmology. The dashed lines represent bolon power spectra for different choices of the bolon exponent, from $\lambda = 30$ on the left to $\lambda = 70$ on the right, but with $c^2$ adjusted to give the same late background cosmology. Furthermore we show a warm dark matter power spectrum, obtained by modifying the transfer function for the cold dark matter component (grey line) in a manner suggested in refs. \cite{Barkana:2001gr,Bode:2000gq}, i.e. for thermally produced warm dark matter. The mass of the wdm particle used here is
\begin{equation}
m_{\rm wdm} = 2.284 \; {\rm keV} \, .
\end{equation}
All power spectra have been normalized to the value of $\sigma_8=0.801$ given above.
The cutoff in the bolon power spectrum is initially much steeper than in warm dark matter models, which is typical for scalar field dark matter. Furthermore, the shift of the cutoff to smaller wavenumbers for smaller $\lambda$ can be easily understood by noting that decreasing the bolon exponent $\lambda$ and adjusting the mass parameter $c^2$ using equation (\ref{c2Guess}) leads to an effective decrease of the bolon mass. To see this, simply evaluate equation (\ref{c2Guess}) for $\beta=0$: one finds $c^2 \propto \lambda^6$, i.e. $m_0 \propto \lambda^4$ in this case. The resulting effect is a larger sound speed and, as a consequence, a suppression of growth extending to smaller wavenumbers. The opposite effect for larger values of $\lambda$ is also clear.
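The scaling $m_0 \propto \lambda^4$ follows directly from the quadratic expansion of the cosh potential, for which $m(\varphi)^2 = M^2 c^2 \lambda^2 \, {\rm exp}(-2\beta\varphi/M)$; at fixed $\varphi$ and $\beta = 0$,

```latex
m_0^2 \propto c^2 \lambda^2 \,, \qquad c^2 \propto \lambda^6
\quad \Longrightarrow \quad m_0^2 \propto \lambda^8 \,,
\quad {\rm i.e.} \quad m_0 \propto \lambda^4 \, .
```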
We now move on to study the influence the coupling $\beta$ has on the evolution of the bolon density contrast. It can be summarized in three effects:
\begin{enumerate}
\item The coupling causes the energy density of the bolon to scale slightly differently from that of standard cold dark matter. For a positive coupling it scales away faster than $a^{-3}$, and therefore the bolon energy density exceeds the one obtained for the $\beta=0$ case for $z>0$ after adjusting $\rho_{\chi,0}$, leading to a shift of matter-radiation equality to earlier times. This results in a shift of the maximum in the matter power spectrum to larger wavenumbers, since the horizon size at matter-radiation equality is reduced, an effect further enhanced by the influence of the coupled evolution on the Hubble parameter. Similarly, a negative coupling has the opposite effect, shifting the maximum to smaller wavenumbers.
\item The growth of bolon perturbations during bolon domination also gets changed by the coupling. The evolution of the growth of linear perturbations in a coupled quintessence scenario was analyzed in ref. \cite{Amendola:1999dr} and the results can be applied to our model in the $\mu k^2/h^2 \ll 1$ regime. For not too large couplings the effect can be summarized as follows: A positive coupling increases the growth of linear perturbations, whereas a negative coupling decreases the growth compared to the uncoupled case, but the growth is always suppressed when compared to the standard cold dark matter evolution in an Einstein-de-Sitter universe, where $\delta \propto a$. This effect leads to an increase of power for large $k$ modes in the power spectrum for positive coupling, and a suppression for a negative one.
For larger couplings this no longer holds true. The growth rate now starts to increase even for negative couplings and a growth faster than $\delta \propto a$ also becomes possible.
Note that this is not in conflict with the results found in ref. \cite{Amendola:1999er}, where a suppression of growth for all couplings was found. The difference arises from the background evolution, since in our model the quintessence field adjusts itself to a different fixed point than the one analyzed in ref. \cite{Amendola:1999er}.
\item The coupling also changes the evolution of the bolon mass and thus of the sound speed present in the $\mu k^2/h^2 \gtrsim 1$ regime.
Adjusting the background cosmology according to equation (\ref{c2Guess}) leads to an increase in the value of the bolon mass at $a=1$ (i.e. today) for a positive coupling, in contrast to what one might expect naively from the ${\rm e}^{-\beta \varphi/M}$-dependence. Similarly, the mass decreases for a negative coupling. A positive coupling thus decreases the sound speed and shifts the cutoff in the power spectrum to larger wavenumbers. The opposite of course holds for a negative coupling.
\item As a last effect, the growth of perturbations during the radiation dominated era and during the transition from radiation domination to matter domination also gets affected by the coupling. This is difficult to assess analytically, but the important modifications of the power spectrum can be understood in terms of the previously discussed points alone. We will therefore not go into this any further.
\end{enumerate}
All these effects can be seen in Figures \ref{dcModes2} and \ref{fig:psCoupled}. In Figure \ref{dcModes2} we show the evolution of the bolon density contrast with wavenumber $k=10.7$ h/Mpc for a model with $\lambda=40$ and three different choices of the coupling plus a standard CDM evolution evolved in the same background (black solid line).
The matter power spectra for the same models are shown in Figure \ref{fig:psCoupled}, where we have also added the WDM-modification (gray dotted line). Otherwise the coloring is the same as in Figure \ref{dcModes2}. One can clearly see the small shift of the maximum of the power spectrum depending on the coupling, as well as the growth modification, epitomized by the different slopes in the power spectra for large $k$. The differences in the spectra for smaller wavenumbers are an effect resulting from the normalization in conjunction with the different high-$k$ slopes and positions of the maximum. The position of the cutoff (as long as it is at large enough $k$) has very little influence on the normalization.
Finally we have fitted the form of the cutoff by
\begin{equation}
\label{goodFit}
P_{\chi} (k) = P_{\rm cdm} (k) \left\{ \frac{{\rm cos} \left[ (b \, x)^a \right]}{1+\sqrt{c} \, \left( x^{d_1} + x^{d_2} \right)} \right\}^2 \, ,
\end{equation}
where $x=k/k_J$ and the parameters $a,b,c,d_1$ and $d_2$ are functions of the model parameters $\lambda$ and $\beta$. We assumed a linear dependence for all functions here, optimized the parameters numerically and eliminated all terms whose inclusion did not yield a significant improvement of the fit. The resulting best estimate is given by
\begin{align}
a(\beta) & = 4.105 + 0.428 \beta \, , \\
b(\lambda,\beta) & = 0.827 + 0.098 \beta + 0.006 \lambda + 0.0025 \lambda \beta \, , \\
c(\lambda,\beta) & = -0.46 -1.9 \beta + 0.053 \lambda + 0.152 \lambda \beta \, , \\
d_1(\beta) & = 4.31-1.2\beta \, , \\
d_2(\beta) &= 7.66 + 1.49 \beta \, .
\end{align}
This fit is accurate to better than 18\% down to a suppression of $1/500$. A simpler fitting function was proposed for similar models in refs. \cite{Hu:2000ke,Matos:2000ss}, but as we have checked, such a simple fitting function performs considerably worse. More details on the fitting procedure can be found in appendix \ref{app:CutoffFitting}.
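As an illustration, the fit of equation (\ref{goodFit}) with the best-estimate coefficients above can be sketched in code as follows. This is a hedged, illustrative implementation: the CDM spectrum value \texttt{P\_cdm} and the Jeans wavenumber \texttt{k\_J} are assumed external inputs rather than quantities computed here.

```python
import numpy as np

def fit_coefficients(lam, beta):
    """Best-estimate fit coefficients quoted in the text, as functions
    of the model parameters lambda and beta."""
    a = 4.105 + 0.428 * beta
    b = 0.827 + 0.098 * beta + 0.006 * lam + 0.0025 * lam * beta
    c = -0.46 - 1.9 * beta + 0.053 * lam + 0.152 * lam * beta
    d1 = 4.31 - 1.2 * beta
    d2 = 7.66 + 1.49 * beta
    return a, b, c, d1, d2

def bolon_power(k, P_cdm, k_J, lam, beta):
    """P_chi(k) = P_cdm(k) * {cos[(b x)^a] / (1 + sqrt(c)(x^d1 + x^d2))}^2,
    with x = k / k_J, as in eq. (goodFit)."""
    a, b, c, d1, d2 = fit_coefficients(lam, beta)
    x = np.asarray(k, dtype=float) / k_J
    T = np.cos((b * x) ** a) / (1.0 + np.sqrt(c) * (x ** d1 + x ** d2))
    return P_cdm * T ** 2
```

For $k \ll k_J$ the bracket tends to unity and the CDM spectrum is recovered; well beyond the Jeans scale the denominator suppresses the spectrum rapidly.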
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figure4}
\caption{Evolution of the bolon density contrast with $k=10.7$ h/Mpc for varying couplings. The black line represents standard uncoupled cold dark matter, the dashed green line an uncoupled bolon model, the red line a model with $\beta=0.05$ ($N=0.3716$, $\varphi_0=13.786$ and $\varphi_1=16.248$) and the blue line $\beta=-0.1$ ($N=0.3777$, $\varphi_0=13.789$ and $\varphi_1=16.168$). The bolon exponent is $\lambda=40$.}
\label{dcModes2}
\end{figure}
\section{Halo abundances}
\label{sec:PressSchechter}
In this section we go beyond the theory of linear perturbations and study the distribution of halos in our model. The approach we employ is known as the extended Press-Schechter excursion set formalism, a theory originally developed in ref. \cite{Press:1973iz} and later refined and extended in several works \cite{Bond:1990iw,Bower:1991kf,Lacey:1993iv,Sheth:1999su,Sheth:2001dp}. It allows one to predict, from the linear power spectrum alone, halo mass functions and merger histories, quantities which require knowledge about the highly non-linear regime of cosmic perturbations.
The basic idea of Press and Schechter was to identify regions of space with an averaged density contrast above a certain threshold with collapsed objects. To put this in more precise terms: One averages the linearly evolved matter density contrast field over a radius $R$ and identifies the fraction of space which lies above a given threshold $\omega$ with the fraction of mass of the universe which is bound in objects with a mass greater than the mass associated with the size of the region, denoted by $M(R)$.
The basic problem with this approach is the so-called ``cloud-in-cloud problem'', a term which describes the following effect: Since large mass halos consist of a number of smaller mass halos, a region might switch back and forth between being considered collapsed (i.e. above the threshold) and non-collapsed (below the threshold) depending on the filtering radius $R$, and it becomes unclear which mass it should be assigned to. While this problem cannot be resolved in a unique fashion, the most commonly used approach is the prescription described in ref. \cite{Bond:1990iw} and now known as the excursion set formalism.\footnote{Probably the best-known alternative attempt to address this problem is peak theory, originally developed by Bardeen et al. in ref. \cite{Bardeen:1985tr}. We will not investigate this approach further here.} In this approach one starts to filter the density contrast field with very large radii, leading to an effectively vanishing averaged density contrast everywhere, and then decreases the filter radius step by step. This creates a random walk in $R$-space for each spatial point. Now the fraction of mass in the universe which is bound in collapsed objects of mass bigger than $M(R)$, denoted by $\Omega(\omega;R)$, is simply given by the fraction of trajectories which have crossed the threshold at some radius greater than $R$. This amounts to solving a so-called \textit{absorbing barrier problem}, as the random walk trajectories get absorbed by the threshold $\omega$ when they cross it for the first time.
\begin{figure}[t]
\centering
\includegraphics[height=0.65 \linewidth]{figure5}
\includegraphics[height=0.65 \linewidth]{figure6}
\caption{Power spectrum of the cosmon-bolon cosmology for different couplings. The black solid and gray dotted lines represent cold and warm dark matter spectra respectively, just as in Figure \ref{fig:psUncoupled}. The dotted lines represent cosmon-bolon models with $\lambda=40$ and $\beta=0, 0.05$ and $-0.1$ for the green, red and blue lines respectively. The three gridlines in the left hand figure show the positions of the maxima of the power spectra.}
\label{fig:psCoupled}
\end{figure}
As is common practice, we always work with the density contrast field at $z=0$ and put the entire time evolution into the barrier $\omega(z)=\omega(z=0)/D(z)$, where $D(z)$ is the linear growth function.
Furthermore, instead of using the radius $R$, we rewrite everything in terms of the variance at a given filtering radius, which can be calculated from the power spectrum as follows:
\begin{equation}
\label{sigmaR}
S(R) = \int \frac{{\rm d} k }{2\pi^2} \, k^2 \, P(k,t) \left| \widehat{W}_R(k) \right|^2 \, ,
\end{equation}
where $\widehat{W}_R(k)$ is the Fourier transform of the filtering function. The function $S(R)$ is of course always invertible, with large radii corresponding to small variances. We will therefore from now on work with a mass assignment $M(S) \equiv M(R(S))$ and a mass fraction $\Omega(\omega;S) \equiv \Omega(\omega;R(S))$. The number density of objects of mass $M$ is then given by
\begin{equation}
\label{totalNumberDensity}
n(\omega; M) {\rm d} \, {\rm ln} M = \bar{\rho}_m f(\omega; S)\frac{{\rm d} \, S}{{\rm d} M} \, {\rm d} \, {\rm ln} M
\end{equation}
where $f(\omega;S)$ is the first crossing rate at variance $S$, which is given by
\begin{equation}
\label{totalFCR}
f(\omega; S) \equiv \frac{{\rm d} \, \Omega (\omega; S)}{{\rm d} S} \, .
\end{equation}
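As an illustrative sketch, equations (\ref{totalNumberDensity}) and (\ref{totalFCR}) can be combined numerically as follows. The variance function $S(M)$, the first crossing rate $f$ and the mean matter density $\bar{\rho}_m$ are assumed to be supplied externally; we take the absolute value of ${\rm d}S/{\rm d}M$ since $S$ decreases with $M$.

```python
import numpy as np

def number_density_per_lnM(M, rho_m, S_of_M, f, omega, dlnM=1e-3):
    """n(omega; M) = rho_m * f(omega; S(M)) * |dS/dM| per ln M interval,
    with dS/dM evaluated by a centered finite difference on a log grid."""
    S = S_of_M(M)
    dSdM = (S_of_M(M * np.exp(dlnM)) - S_of_M(M * np.exp(-dlnM))) \
        / (M * (np.exp(dlnM) - np.exp(-dlnM)))
    # |dS/dM| because the variance is a decreasing function of mass
    return rho_m * f(omega, S) * abs(dSdM)
```

Any barrier dependence enters only through the first crossing rate $f$, which is why the barrier choices discussed below matter so much.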
The details of this calculation depend on the choice of the filtering function, the mass assignment and the threshold $\omega$, which we will now discuss.
\subsection{Filter choices and mass assignments}
As a first step we calculate the variance of the matter density fluctuations
for some choice of filtering function $W_R(r)$ with Fourier transform $\widehat{W}_R(k)$. The most common choices are a real space tophat window function with Fourier transform given by
\begin{equation}
\label{tophatForm}
\widehat{W}^{th}_R(k) = \frac{3 ({\rm sin}(k R) - k R \, {\rm cos}(k R))}{k^3 R^3}
\end{equation}
and a sharp filter in k-space given by
\begin{equation}
\widehat{W}^{sh}_R(k) = \Theta (1-k R) \, .
\end{equation}
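The variance integral of equation (\ref{sigmaR}) for the two window functions above can be sketched as follows. This is an illustrative implementation; the toy spectrum used in the test stands in for the actual linear power spectrum and is an assumption of this example.

```python
import numpy as np

def W_tophat(kR):
    """Fourier transform of the real-space tophat, eq. (tophatForm)."""
    return 3.0 * (np.sin(kR) - kR * np.cos(kR)) / kR ** 3

def W_sharpk(kR):
    """Sharp filter in k-space: Theta(1 - k R)."""
    return (kR <= 1.0).astype(float)

def variance(R, P, filt, kmin=1e-4, kmax=1e3, n=20000):
    """S(R) = int dk/(2 pi^2) k^2 P(k) |W_R(k)|^2, integrated with the
    trapezoidal rule on a logarithmic k grid."""
    k = np.logspace(np.log10(kmin), np.log10(kmax), n)
    integrand = k ** 2 * P(k) * filt(k * R) ** 2 / (2.0 * np.pi ** 2)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(k)))
```

For either filter the variance decreases monotonically with $R$, which is what makes the inversion $R(S)$ used in the text well-defined.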
In a second step we need to assign a mass to each filtering radius $R$. For the tophat filter the obvious choice is to simply take the mass enclosed within the filter, but for the sharp-k filter there is no such canonical option. In fact, the integral over the spatial filter-function is not even well-defined in this case (see e.g. \cite{Maggiore:2009rv}). The only reasonable assumption one can make is that the mass should scale like $R^3$, i.e.
\begin{equation}
\label{massAssignment}
M(R) = \frac{4 \pi}{3} \rho_m (A R)^3 \, .
\end{equation}
The normalization $A=1$ corresponds to the tophat-filter choice; we will describe below how we adjust $A$ for the sharp-k filter.
Finally, the calculation of the first crossing rate $f(\omega; S)$ also depends critically on the choice of the filter. For a sharp k-filter one can easily see that the random walks in $S$-space consist of uncorrelated steps, which makes the problem considerably easier to tackle. For very simple thresholds one can calculate the first crossing rate analytically (see ref. \cite{Bond:1990iw}), and for the most straightforward case of a constant barrier one obtains
\begin{equation}
f(\omega; S) = \frac{\omega}{\sqrt{2 \pi} S^{3/2}} {\rm exp}\left( -\omega^2/2S \right) \; .
\end{equation}
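This constant-barrier result can be checked with a simple Monte Carlo sketch of the absorbing barrier problem. For a sharp-k filter the trajectories $\delta(S)$ are uncorrelated Brownian walks in $S$, absorbed at the barrier $\omega$; the fraction absorbed by variance $S$ approaches ${\rm erfc}(\omega/\sqrt{2S})$, whose derivative with respect to $S$ is exactly the first crossing rate above. The walk parameters below are illustrative choices.

```python
import numpy as np

def crossed_fraction(omega, S_max, n_walks=100_000, n_steps=200, seed=1):
    """Fraction of uncorrelated random walks delta(S) that have crossed the
    constant barrier omega at some variance below S_max."""
    rng = np.random.default_rng(seed)
    dS = S_max / n_steps
    delta = np.zeros(n_walks)
    crossed = np.zeros(n_walks, dtype=bool)
    for _ in range(n_steps):
        delta += rng.normal(0.0, np.sqrt(dS), n_walks)
        crossed |= delta >= omega  # once absorbed, a walk stays absorbed
    return crossed.mean()
```

The finite step size slightly underestimates crossings (excursions between steps are missed), so the agreement with the analytic result is only approximate.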
When using a tophat filter the random walk becomes correlated, which complicates the problem considerably. A numerical procedure to calculate correlated random walks has been proposed in ref. \cite{Farahi:2013fca}, based on a set of earlier papers by Maggiore and Riotto \cite{Maggiore:2009rv,Maggiore:2009rw,Maggiore:2009rx}. In what follows below we will ignore this issue. While this is strictly speaking incorrect, there are two arguments in favor of this approach: First, the corrections are expected to be small for the spectra we employ here, and second, the elliptical barrier modification widely employed today has been matched to the results of CDM N-body simulations in precisely this fashion, and changing the calculation of the first crossing rate would require a re-adjustment of this barrier as well.
In the following sections we will present results for both a sharp-k filter and a tophat filter.
While the differences for CDM spectra are minute once the mass assignment and the barrier for the sharp-k filter have been adjusted correctly, the results for spectra exhibiting a sharp cutoff in the power spectrum (i.e. saturation in the variance function $S(R)$) are very different. At first glance, the sharp-k filter seems to have some advantages.
First, it allows for a relatively simple and yet rigorous calculation of the first crossing rate. Second, it is known that models exhibiting a cutoff in the linear power spectrum do not always show a cutoff in the number densities calculated with the Press-Schechter excursion set approach when a tophat filter is used \cite{Benson:2012su} (see e.g. Figure \ref{fig:totalNumbers}). This is due to the oscillatory form of equation (\ref{tophatForm}), which leads to a much milder decrease in ${\rm d} S/{\rm d} M$ compared to the sharp-k filter results (see Figure \ref{fig:dsdm}). One should however note that this shortcoming can at least in some cases be cured by using a suitable barrier shape (e.g. in WDM models, see ref. \cite{Benson:2012su}).
The downside (or upside, depending on your point of view) of using a sharp-k filter, in addition to the ambiguity when assigning masses, is that we cannot readily generalize the barriers deduced for a tophat filter to this approach. On the one hand this makes the formalism less deterministic, on the other hand it gives us some additional freedom.
As we will see from the results below, when we extend the ePS formalism to the prediction of substructure abundances within a galaxy-sized dark halo, the advantages of the sharp-k filter cannot make up for its one huge problem, which is the mass assignment.
\subsection{Spherical collapse}
Usually, barriers used in the Press-Schechter formalism are derived from the spherical collapse model or generalizations thereof \cite{Abramo:2007iu,Pace:2010sn,Basse:2010qp}. This originally very simple formalism has been extended to include CDM models with a coupling to a quintessence field \cite{Wintergerst:2010ui}, but it runs into serious problems for a model such as ours, where the dynamics of the collapsing component are strongly scale-dependent already in the linear regime. Let us quickly outline why this is the case.
First we should note that spherical collapse itself is of course an approximation. One assumes an initial overdensity which is spherically symmetric and homogeneous (i.e. has a tophat-profile) and evolves this overdensity. Depending on the model investigated, at this point one either invokes Birkhoff's theorem or some incarnation of Newtonian cosmology in order to simplify the equations of motion. In the first case one effectively treats the overdensity as an independent FLRW-universe, the only difference to the background is a different curvature constant arising from the overdensity. In the Newtonian approach one has to take care of gradient terms appearing in the equations, which are generally ignored, usually with reference to either the lack of gradients in the initial density profile or to the size of the overall perturbation. Whichever approach one employs, it has to fulfill (at least) two basic criteria:
\begin{enumerate}
\item The perturbation should retain its (tophat-)profile, at least to very good approximation.
\item The evolution of the perturbation should agree with linear cosmological perturbation theory at early times.
\end{enumerate}
Neither formalism can meet both of these demands throughout the mass range which interests us. To see why, let us investigate the first condition.
The evolution of linear perturbations in a model such as ours depends strongly on the wavenumber of the perturbation, even in a simplified cosmology which is bolon-dominated throughout its evolution. This is simply due to the scale-dependent sound speed present in the averaged equations and constitutes a fundamental difference to CDM models, where we have universal growth proportional to the scale factor during matter domination. Thus it is immediately clear that an initially tophat-shaped perturbation will not retain its shape even in the linear regime, not even approximately, if modes for which the sound speed is relevant make up a sizable contribution to the Fourier decomposition of the perturbation. This is clearly the case if the size of the perturbation is close to the Jeans mass, which is the most interesting regime. For much larger masses however, the suppressed modes play (almost) no role for the evolution of the profile, as they are almost irrelevant in the Fourier decomposition. Thus for large masses the spherical profile is stable to very good approximation and gives, in the uncoupled case, the same result as standard CDM spherical collapse.
One might think that a useful way to get around this problem is to employ Birkhoff's theorem, as it treats the overdensity as an independent universe and thus forces it to stay spherically symmetric. But in doing so, one quickly runs into conflicts with the second demand. As the soundspeed present in the averaged equations is non-adiabatic, it is a purely perturbative quantity (in contrast to the adiabatic sound-speed, which can be calculated from the background evolution only). Thus the background equations cannot reflect the suppression of growth present in large-k modes in our model. Quite the opposite, we have verified numerically that one recovers a collapse model very similar to standard CDM collapse when evolving an overdensity as an independent universe in our model, as was to be expected from the fact that the averaged equations give rise to an $\omega=0$ evolution in the background. One can obtain a slightly delayed collapse time compared to a standard CDM collapse if one adjusts the initial conditions to fit the linear evolution at some early initial time, but not by much, and the delay depends on the initial time chosen. Simply put, the reason for this is that the spherical collapse model in its simplest form (i.e. with only one component collapsing) is determined completely by two parameters, the initial overdensity and its initial time derivative. When trying to adjust the time derivative to an initially suppressed evolution for small masses, one finds numerically that the evolution of the density contrast quickly adjusts itself to the standard CDM evolution, thus resulting in only a small delay in the collapse time. This is of course very different from the linear evolution recovered in cosmological perturbation theory and thus unrealistic.
The approach from Newtonian cosmology essentially suffers from the same shortcomings. One can easily find a version of Newtonian cosmology that recovers the cosmological linear perturbation equations in the subhorizon regime, but this just puts one back to the problem of stability of the tophat shape.
The problems just outlined indicate that finding the correct barrier for the bolon model could be a highly non-trivial issue. The one thing we can say for sure is that for large masses (corresponding to large radii), the modes which are suppressed in the linear regime play (almost) no role in the evolution of a tophat-perturbation, and we can thus employ the usual spherical collapse model and recover the standard barrier. For smaller masses close to the Jeans mass however, things seem to be much more complicated, and one might have to resort to a comparison with N-body or fluid simulations to tackle this issue. To our knowledge, no suitable data from such simulations of scalar field dark matter exists at this point.
\subsection{Barrier for a sharp-k filter}
The CDM spherical collapse model gives rise to a constant collapse barrier $\omega = \delta_{sc} \approx 1.686$, which was used in the first works employing the Press-Schechter formalism. While this very simple assumption already gives very useful results, comparisons with very accurate N-body simulations led to modifications motivated by the elliptical collapse model. As was shown in ref. \cite{Sheth:2001dp}, a remapping of the barrier according to
\begin{equation}
\label{ellipticalBarrier}
\omega(S,z)=\sqrt{A} \delta_{sc}(z) \left[ 1+b\left( \frac{S}{A \delta_{sc}^2(z)} \right)^c \right]
\end{equation}
with $A=0.707$, $b=0.5$ and $c=0.6$ gives a more accurate fit. The parameters $b$ and $c$ are a result of the elliptical collapse model, whereas the parameter $A$ has to be put in by hand. It can however be argued that $A$ is related to the way in which structures in N-body simulations are identified, a procedure which allows for some variability that influences the halo abundances. The barrier was derived using a spatial tophat filter, but the corresponding first crossing rate was calculated using uncorrelated random walks. This is strictly speaking inconsistent, but the barrier has been adjusted to fit N-body simulations in ref. \cite{Sheth:2001dp} in precisely this fashion and therefore gives correct results.
The number densities obtained when using a sharp-k filter however have a different shape when the same barrier is used, due to a different mass-dependence of the variance $S(M)$. This discrepancy cannot be fixed by adjusting the mass assignment alone. In order to get correct results, we also have to modify the barrier associated with a sharp-k filter. This is not inconsistent, as the barrier motivated by spherical (or elliptical) collapse can only be expected to give good results when a tophat filter is used. The possibilities for doing so are of course plentiful, but one can already get very good agreement with a very simple modification. Following \cite{Benson:2012su}, we simply shift the barrier upwards by multiplying it with a constant factor
\begin{equation}
\label{barrierScaling}
\omega \rightarrow B \omega \, .
\end{equation}
This rescaling is applied after other barrier modifications such as the elliptical modification in equation (\ref{ellipticalBarrier}). As it turns out, by adjusting the parameters $A$ in eq. (\ref{massAssignment}) and $B$ in the above rescaling, we can get a good fit to the total number densities obtained for a CDM spectrum.\footnote{As mentioned above, for spectra exhibiting a cutoff in the power spectrum the tophat and sharp-k filter results differ considerably for masses close to and below the cutoff scale. This discrepancy cannot be erased by a simple scaling of the barrier, but this might not even be desirable. In the WDM case the turnover seen in the halo mass function in N-body simulations cannot be reproduced with a tophat filter and the elliptical barrier \cite{Benson:2012su}.} For the choices $A=2.27$ and $B=1.1$, the relative discrepancy is below $10\%$ everywhere in the mass range from $10^6 M_\odot $ to $10^{16} M_\odot $. \footnote{Other mass assignments suggested for the sharp-k filter correspond to $A=2.5$ in \cite{Benson:2012su} or $A=2.42$ in \cite{Lacey:1993iv}, which are not too different from our fit. These assignments are however motivated by different considerations.}
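The barrier choices just described can be sketched as follows. The parameter values are those quoted above; $\delta_{sc}(z)$ is assumed to be supplied externally, e.g. as $1.686/D(z)$ with the linear growth function $D(z)$.

```python
import numpy as np

A_ELL, B_ELL, C_ELL = 0.707, 0.5, 0.6  # elliptical-barrier parameters (text)
B_SHIFT = 1.1                          # sharp-k rescaling fitted in the text

def elliptical_barrier(S, delta_sc):
    """omega(S) = sqrt(A) delta_sc [1 + b (S / (A delta_sc^2))^c],
    as in eq. (ellipticalBarrier)."""
    return np.sqrt(A_ELL) * delta_sc * (
        1.0 + B_ELL * (S / (A_ELL * delta_sc ** 2)) ** C_ELL)

def sharpk_barrier(S, delta_sc):
    """The same barrier shifted upward by the constant factor B,
    as in eq. (barrierScaling)."""
    return B_SHIFT * elliptical_barrier(S, delta_sc)
```

At $S \to 0$ (large masses) the barrier reduces to $\sqrt{A}\,\delta_{sc}$, and it grows monotonically toward small masses.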
\subsubsection{Modifications near the Jeans mass}
In a further extension of the formalism, several authors have studied how to apply the Press-Schechter excursion set formalism to WDM models \cite{Barkana:2001gr,Benson:2012su}. The differences here are twofold.
First, the power spectrum exhibits a cutoff at some wavenumber, leading to suppression of number densities for the corresponding mass through the $dS/dM$ term in equation (\ref{totalNumberDensity}). Second, the velocity dispersion of WDM particles leads to modifications in the spherical and elliptical collapse model. As was shown in ref. \cite{Barkana:2001gr}, these effects lead to later virialization times and larger virialization radii, which need to be taken into account in the Press Schechter formalism by modifying the collapse barrier. A fit for this modification (which is accurate for masses not too far below the Jeans mass) was given in ref. \cite{Benson:2012su} and reads
\begin{align}
\label{wdmBarrier}
\omega_{WDM} (M,z) = & \omega_{sc}(z) \left[ h(x)\, \frac{0.04}{{\rm exp}(2.3 x)} \right. \nonumber \\
& \left. + (1-h(x)) \, {\rm exp} \left( \frac{0.31687}{{\rm exp}(0.809 x)} \right) \right]
\end{align}
where $x={\rm ln} (M/M_J)$, with $M$ the halo mass in question and $M_J$ the Jeans mass, defined in ref. \cite{Benson:2012su} as the halo mass for which pressure and gravity balance initially (in the linear regime). The function $h(x)$ is given by
\begin{equation}
h(x) = \left[ 1+{\rm exp}[(x+2.4)/0.1] \right]^{-1} \, .
\end{equation}
This modification effectively increases the barrier dramatically for masses below the Jeans mass and thus suppresses the first crossing rate $f(\omega;S(M))$ in that regime, which has a non-negligible effect on the predicted number densities, as was shown in ref. \cite{Benson:2012su}.
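For concreteness, the modification of equation (\ref{wdmBarrier}) can be sketched as a multiplicative factor on the unmodified barrier. As quoted above, the fit is only accurate for masses not too far below the Jeans mass, so the sketch should not be evaluated at very negative $x$.

```python
import numpy as np

def h(x):
    """Smooth switching function between the two regimes."""
    return 1.0 / (1.0 + np.exp((x + 2.4) / 0.1))

def wdm_barrier_factor(x):
    """omega_WDM / omega_sc as a function of x = ln(M / M_J),
    following eq. (wdmBarrier)."""
    return (h(x) * 0.04 / np.exp(2.3 * x)
            + (1.0 - h(x)) * np.exp(0.31687 / np.exp(0.809 * x)))
```

Far above the Jeans mass the factor tends to unity (the barrier is unmodified), while below the Jeans mass it grows rapidly, suppressing the first crossing rate as described in the text.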
One should note that the modification given in equation (\ref{wdmBarrier}) was obtained from a 1-dimensional baryonic simulation, where the temperature was adjusted to fit the WDM velocity dispersion, and not from some spherical collapse model of WDM in the stricter sense, which would suffer from the same fundamental difficulties we mentioned above.
It is unclear to us what the correct shape of the barrier should be in our model, in particular whether it exhibits an increase around the Jeans mass or not. In order to show the full range of possibilities and the effect of the barrier choice, we will present all calculations for both cases, once for the modified elliptical collapse barrier described above and once for a barrier where the WDM-modification given in equation (\ref{wdmBarrier}) is included. In the latter case we of course insert the Jeans mass appropriate for our model, which is given by equations (\ref{jeansWavenumber}) and (\ref{jeansMass}). The different barriers are shown in Figure \ref{fig:barriers}.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figure7}
\caption{Different barriers used in this work. We show the spherical collapse barrier (black), the elliptical collapse barrier for the tophat filter given in equation (\ref{ellipticalBarrier}) (blue dashed) and the elliptical collapse barrier shifted by a factor of 1.1 (blue solid) for the sharp-k filter. Finally we show the barriers modified through equation (\ref{wdmBarrier}) to get a sharp upturn near the Jeans mass for both the tophat elliptical barrier (green, dashed) and its raised version for the sharp-k filter (green, solid). All modifications have been calculated from a bolon power spectrum with $\lambda=65$, $\alpha=20$ and $\beta=0$.}
\label{fig:barriers}
\end{figure}
\subsection{Total number counts}
The total number densities resulting from the Press-Schechter excursion set formalism are shown in Figure \ref{fig:totalNumbers}. The $\Lambda$CDM model is shown in black, with the dashed curve representing the tophat filter with the elliptical collapse barrier adjusted to fit N-body results. The solid curve shows the sharp-k filter results, with the mass assignment and the barrier shift adjusted to fit the dashed curve. Both lines are almost indistinguishable. The green lines are results obtained for the bolon model with $\lambda=65$, $\alpha=20$ and $\beta=0$, where a tophat filter was used for the dashed line and a sharp-k filter for the solid one. For the dotted and dashed-dotted lines the additional modification of the barrier due to a possible upturn at the Jeans mass was put in, for a sharp-k filter and a tophat filter respectively. Finally, the red dotted line represents the WDM model from the previous section; here we used the fully modified barrier, including the modification from equation (\ref{wdmBarrier}) and a sharp-k filter. All first-crossing rates were calculated numerically using the method presented in the appendix of ref. \cite{Benson:2012su}.
One can clearly see the effects the choices of filter, mass assignment and barrier have on the predicted number densities in the bolon model. First, when using a tophat filter, a strong suppression is only present if an upturn of the barrier near the Jeans mass is included; for the standard elliptical collapse barrier the suppression is much weaker. Second, for a sharp-k filter this is not the case: both curves are almost identical. The only difference is a slightly quicker suppression of the oscillatory part, resulting from the oscillatory shape of the power spectrum cutoff, if the barrier is raised near the Jeans mass. This can be easily understood and nicely visualizes the main issue for the sharp-k filter: For this filter choice, the cutoff in the number densities is determined by the very sharp cutoff in the function ${\rm d} S/ {\rm d} M$ alone. For our mass assignment, this happens at masses above the Jeans mass, which makes a possible barrier modification irrelevant. A much lower choice of the parameter $A$ would of course again lead to a bigger difference between the two barrier choices, and at very low $A$ the first crossing rate would once again determine the cutoff position, similar to the tophat filter. From these simple observations it is clear that the mass assignment for the sharp-k filter introduces huge uncertainties for models such as ours, which will translate from the total number densities to the substructure abundances (which we treat below) as well. We currently know of no way to resolve this issue and stick to the tophat filter from now on.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figure8}
\caption{${\rm d} S/{\rm d}\,{\rm log}\, M$ for a tophat filter (dashed) and a sharp-k filter (solid) with the mass assignment given in eq. (\ref{massAssignment}) for a standard CDM power spectrum (black) and the bolon model (green) with $\lambda=65$, $\alpha=20$ and $\beta=0$. The red line represents a thermal WDM model with a $2.284$ keV mass for a sharp-k filter.}
\label{fig:dsdm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figure9}
\caption{Total number densities predicted by the Press-Schechter excursion set formalism for different models and different filters. The black lines show the results for a CDM power spectrum with a tophat (dashed) and a sharp-k filter (solid) respectively. The elliptical barrier and mass assignment for the sharp-k filter has been successfully adjusted to fit the tophat results, both lines are (almost) indistinguishable. The green lines show the results for a bolon model with $\lambda=65$, $\alpha=20$ and $\beta=0$. A tophat filter was used for the dashed line, and an additional upturn of the barrier near the Jeans mass inserted for the dashed-dotted one. The solid line shows the sharp-k filter results with an elliptical barrier, whereas the dotted line shows what happens if an upturn in the barrier is included. Finally the red line shows the WDM model with a sharp-k filter and Jeans mass modification already displayed in Figure \ref{fig:dsdm} for comparison.}
\label{fig:totalNumbers}
\end{figure}
\subsection{Progenitor mass functions}
Finally we move on to what is probably the most interesting aspect of small scale power suppression, namely the abundance of substructure within a typical galaxy such as ours. The extended Press-Schechter formalism allows for the calculation of the so-called conditional mass function, denoted by
\begin{equation}
g (M_1,z_1 | M_2,z_2) = - f(\omega_1;S_1 | \omega_2;S_2) \frac{{\rm d} S_1}{{\rm d} {\rm log} M} \Big|_{M=M_1} \, .
\end{equation}
This describes the fraction of mass from halos of mass $M_2=M(S_2)$ at redshift $z_2$, corresponding to the barrier $\omega_2$, which is contained in progenitor halos of mass $M_1=M(S_1)$ at redshift $z_1$, corresponding to the barrier $\omega_1$, per $\log M$-interval. It is determined by the variance as a function of mass and by $f(\omega_1;S_1 | \omega_2;S_2)$, which corresponds to the first crossing rate of Langevin trajectories through the barrier $\omega_1$ at variance $S_1$, for trajectories which originated at $\omega_2(S_2)$ at variance $S_2$.
This function can again be obtained analytically in the case of a constant barrier \cite{Bond:1990iw}, but for the complicated barrier shapes needed here we have to resort to numerics again. Luckily one can employ the same strategy as in the case of total number densities, because this conditional first crossing rate is the same as the unconditional first crossing rate with an adapted barrier shape \cite{Benson:2012su}:
\begin{equation}
f(\omega_1;S_1 | \omega_2;S_2) = f(\tilde{\omega};S_1-S_2) \, ,
\end{equation}
where
\begin{equation}
\tilde{\omega}(S) = \omega_1(S+S_2) - \omega_2(S_2) \, .
\end{equation}
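This remapping is straightforward to implement numerically. A minimal sketch (with hypothetical barrier callables standing in for the actual $\omega_1$ and $\omega_2$ used in our computation):

```python
def adapted_barrier(omega1, omega2, S2):
    """Return tilde_omega(S) = omega1(S + S2) - omega2(S2), so that the
    conditional first-crossing problem becomes an unconditional one
    evaluated on the variance interval [0, S1 - S2]."""
    offset = omega2(S2)  # barrier value at the parent halo's variance
    return lambda S: omega1(S + S2) - offset
```

The unconditional first-crossing machinery can then be run unchanged on the returned barrier over $S \in [0, S_1 - S_2]$.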
The change of the barrier with redshift is of course determined by the linear growth function $D(z)$, which we obtain numerically from our Boltzmann-code. It can be somewhat approximated by $D(z)=(1+z)$, but this fails for low $z$ due to dark energy and for medium $z$ due to the modified structure growth if $\beta$ is non-zero. Furthermore, as mentioned before, the growth in our model is non-universal for all k-modes, which gives a $k$-dependent linear growth function as well. As this is difficult to incorporate into the extended Press-Schechter formalism, we simply indicate its possible effect, as we did when discussing total number densities, by a barrier modification near the Jeans mass as given in equation (\ref{wdmBarrier}).
The elliptical barrier modification given in equation (\ref{ellipticalBarrier}) has been shown to also give better results than the constant spherical collapse barrier for progenitor mass functions \cite{Sheth:2001dp}. One should note that the time-evolution of the elliptical barrier is not obtained by applying
\begin{equation}
\omega(z) = \omega(z=0) / D(z) \, ,
\end{equation}
as one might expect from the Press-Schechter logic, but by replacing the critical overdensity $\delta_{sc} \rightarrow \delta_{sc}(z)$ and applying the elliptical barrier modification afterwards, as already denoted in eq. (\ref{ellipticalBarrier}).\footnote{Exchanging these two modifications would lead to a strong underprediction of subhalo abundances for higher redshifts, i.e. a strongly accelerated version of hierarchical structure growth, which is not realistic.}
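Since the ordering of these two modifications matters, a short numerical illustration may help. This is only a sketch: we assume the standard Sheth-Mo-Tormen form of the elliptical barrier with parameters $a=0.707$, $b=0.485$, $c=0.615$, which may differ in detail from the exact form used in eq. (\ref{ellipticalBarrier}):

```python
import numpy as np

A_EL, B_EL, C_EL = 0.707, 0.485, 0.615  # assumed Sheth-Mo-Tormen parameters

def elliptical_barrier(S, delta_sc):
    """Elliptical collapse barrier for a given critical overdensity delta_sc."""
    return np.sqrt(A_EL) * delta_sc * (1.0 + B_EL * (S / (A_EL * delta_sc**2))**C_EL)

def barrier_correct(S, delta_sc0, D):
    # correct order: rescale delta_sc first, then apply the elliptical modification
    return elliptical_barrier(S, delta_sc0 / D)

def barrier_naive(S, delta_sc0, D):
    # naive order: rescaling the full elliptical barrier over-raises it at high z
    return elliptical_barrier(S, delta_sc0) / D
```

For $D(z) < 1$ the naive ordering yields a systematically higher barrier, and therefore fewer predicted progenitors, in line with the footnote.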
The conditional mass function can be seen for a number of redshifts in Fig. \ref{fig:fullCondFCR}.
The reference mass in this figure is of the order of a typical galaxy mass; we chose
\begin{equation}
M_2=1.8 \times 10^{12} M_\odot \, ,
\end{equation}
and the reference redshift is $z_2=0$. We use these parameters as reference for all plots in this section.
\begin{figure*}[t]
\centering
\includegraphics[width=.44\linewidth]{figure10}
\includegraphics[width=.44\linewidth]{figure11}
\includegraphics[width=.44\linewidth]{figure12}
\includegraphics[width=.44\linewidth]{figure13}
\caption{Conditional mass functions for different power spectra. The ePS-results for a tophat filter with the shifted barrier are shown in solid black for CDM and in dash-dotted gray for a WDM model with a particle mass of 2.284\,keV. The dashed lines represent the bolon model with $\lambda=65$, $\alpha=20$ and $\beta=-0.1,0.,0.05$ for the red, green and blue lines respectively, where the elliptical barrier was used. The dotted lines stand for the same models, but this time with an additional upturn of the barrier near the Jeans mass included.}
\label{fig:fullCondFCR}
\end{figure*}
We show the standard CDM prediction (black solid line), together with the WDM-model already used above (gray dash-dotted line) and three bolon models with modified barrier (dotted) and without (dashed). The three colors represent different couplings of $\beta=-0.1$ (red), $\beta=0$ (green) and $\beta=0.05$ (blue). The bolon exponent $\lambda$ is 65 for all curves, while $\alpha=20$. We used a tophat filter for all the calculations.
\subsection{Current number of subhalos}
As a last step, we now calculate the number of subhalos we expect in a typical galaxy. This is not a quantity directly accessible through the ePS-approach and we have to do some additional work. Following ref. \cite{Giocoli:2007gf}, we calculate the current number of subhalos by a simple integration in barrier space, i.e.
\begin{equation}
\frac{{\rm d} n}{{\rm d} M_1} = \int_{\delta_0}^{\infty} \frac{M_2}{M_1} f(\omega(\delta_{sc,1});S_1|\omega(\delta_{sc,2});S_2) {\rm d} \delta_{sc,1} \, ,
\end{equation}
where $\delta_0$ denotes the spherical collapse barrier at redshift 0.
Afterwards we calculate the cumulative number of subhalos through a simple integration in $M_1$:
\begin{equation}
n(>M_1) = \int_{M_1}^{M_2} \frac{{\rm d} n}{{\rm d} M_1} {\rm d} M_1 \, .
\end{equation}
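A numerical sketch of these two integrations (with a hypothetical conditional first-crossing rate `f_cond` standing in for $f(\omega_1;S_1|\omega_2;S_2)$, and a finite toy upper integration limit in place of infinity):

```python
import numpy as np

def dn_dM1(f_cond, M1, M2, S1, S2, delta0, delta_max=20.0, n=400):
    """Subhalo mass function: integrate the conditional first-crossing
    rate over the progenitor barrier delta_sc,1 from delta0 upwards."""
    deltas = np.linspace(delta0, delta_max, n)
    integrand = (M2 / M1) * np.array([f_cond(d, S1, delta0, S2) for d in deltas])
    return np.trapz(integrand, deltas)

def n_above(M1, M2, dndm, n=200):
    """Cumulative subhalo count: integrate dn/dM1 from M1 up to M2."""
    masses = np.linspace(M1, M2, n)
    return np.trapz([dndm(m) for m in masses], masses)
```

The normalization of the result is then matched to N-body simulations, as described below.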
A few comments are in order at this point. First, the integration over barrier-space necessarily overcounts halos, as one halo might retain its mass over a prolonged period of time. Therefore we lose the overall normalization of the cumulative number density. We solve this issue by adjusting the CDM result to N-body simulations and use the same normalization for WDM and the bolon model. Second, one should note that an additional assumption going into this ansatz is that the current distribution of subhalos corresponds directly to the distribution of progenitor halos when averaged over redshift. This is far from obvious, but seems to hold, as we recover the CDM N-body results to good accuracy (see below).
Results for cumulative number counts of subhalos can be seen in Figure \ref{fig:cumNums}. A comparison with the N-body results in Figure 11 in ref. \cite{Lovell:2013ola} shows that the CDM result agrees to excellent accuracy (as expected, since we have adjusted our normalization accordingly), whereas the WDM curve looks somewhat different. It agrees rather well with the N-body results down to masses slightly above the Jeans mass, but then the ePS results stagnate more quickly, whereas the N-body results continue to rise a while longer. We underestimate the asymptotic value by a factor of about 2. Two reasons for this come to mind. First, structure formation is not strictly hierarchical: there are violent mergers and disruptions which can form halos even below the Jeans mass. Such effects cannot be properly included in the purely hierarchical ePS approach. Second, this is also the regime where spurious halos start to play a role, and issues with the identification of such halos could introduce additional uncertainties.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figure14}
\caption{Cumulative number of Milky Way subhaloes as a function of halo mass $M_h$. The CDM power spectrum is represented by a solid black line, the gray lines show a WDM modification for a thermally produced WDM particle of mass $m_{\rm wdm} = 2.284$\,keV. The green lines stand for a bolon power spectrum with $\beta=0$ and $\lambda=69$; the blue and red lines correspond to the same $\lambda$ but with $\beta=0.05$ and $\beta=-0.1$ respectively. For all WDM and bolon models, the dashed lines are results calculated without an upturn of the barrier near the Jeans mass, whereas such a modification is included for the solid lines. As additional orientation we added the dash-dotted gridline at the WDM Jeans mass.}
\label{fig:cumNums}
\end{figure}
In the light of recent observations of ultra-faint dwarf galaxies \cite{2009ApJS..182..543A}, these results can be used to put constraints on the current bolon mass $m_\chi (t_0)$. We simply demand that the number of subhalos should not fall below the number of dwarf galaxies estimated from observations. These estimates are still the subject of ongoing debate; current numbers range from 66 \cite{Lovell:2013ola} to several hundred \cite{Tollerud:2008ze}. Here we choose the lower value of 66 in order to remain cautious. One should point out that additional uncertainties are introduced by baryonic physics. Galaxy formation on such small scales appears to be a highly stochastic process \cite{Strigari:2008ib}, potentially leaving a large number of halos void of stars. This effect may raise the number of dark halos required to explain current observations, and the bounds we set here are therefore very conservative.
The masses of the dark matter halos containing ultra-faint dwarf galaxies appear to have a common lower mass bound at around $ 10^7 M_\odot$, which is the benchmark mass we employ. As we underestimate the WDM N-body results already by a factor of roughly $1.5$ at this mass, we artificially remedy this by raising our obtained number counts by this factor when we use the modified spherical collapse barrier, in order to remain extra cautious. The results can be seen in Figure \ref{fig:lbplot}.
Clearly the smallest current bolon masses $m_\chi (t_0)$ are possible for the largest couplings $\beta$. However, we expect current coupling constraints from CMB observations to apply to our model as well, as the wavenumbers for which the CMB has the most constraining power are much lower than the ones where linear structure formation is modified compared to CDM. In refs. \cite{Pettorino:2012ts,Pettorino:2013oxa} the coupling bounds are roughly $|\beta| < 0.1$. From this constraint we derive a lower bound on the current bolon mass, which we estimate by evaluating the boundaries presented in Figure \ref{fig:lbplot} for large couplings; we choose $\beta=0.05$. The resulting bound is
\begin{equation}
m_{\chi}(t_0) \gtrsim 9.2 \, (4.1) \times 10^{-22} {\rm eV}
\end{equation}
for the modified (elliptical) barrier.
This bound lies at the larger end of typical ultra-light scalar field dark matter masses.
\begin{figure}[t]
\centering
\includegraphics[width=0.92\linewidth]{figure15}
\raisebox{.5\height}{\includegraphics[width=0.06\linewidth]{figure15_2}}
\caption{Allowed parameter range for the cosmon-bolon model. The colored contours show different bolon exponents $\lambda$. The solid line displays the lower boundary of parameters which yield more than 66 subhalos in the Milky way if the modified barrier is used, the dashed line shows the same exclusion curve for the standard elliptical barrier.}
\label{fig:lbplot}
\end{figure}
\section{Conclusion and Outlook}
\label{sec:Conclusion}
In this work we have studied the evolution of linear perturbations in the coupled cosmon-bolon model in some detail. We built our analysis on the study of linear perturbations in the very early universe in coupled two-scalar-field models, which we provide in an accompanying paper \cite{EarlyScalings}. We have given a detailed procedure to average out the quick oscillations at both the background and the perturbative level, arriving at an effective theory for the evolution of linear perturbations, which we treated numerically. As a result we provide a reasonably accurate fitting formula for the power-spectrum modification compared to standard CDM coupled to quintessence.
We then moved on to investigate phenomenologically interesting predictions of our model by employing the Press-Schechter excursion set approach. We discussed in some detail the different approaches one can take here, in particular with respect to the choice of the filter and the barrier, both of which can have a significant influence on the results. We have shown how the coupling influences the abundance of progenitor halos in a typical galaxy such as ours, and how to translate this into a prediction for the number of virialized dark matter subhalos expected to be present today.
Our analysis does, however, have one shortcoming: as we have discussed, the spherical collapse model, which serves as a basis for the entire Press-Schechter excursion approach, cannot easily be generalized to our model. This problem can only be addressed by studying non-linear perturbations in our model, a work which is currently in progress. We hope to generalize the averaging mechanism presented here for linear perturbations to the non-linear regime and arrive at an effective theory which could be used not only to study spherical collapse in our model, but also serve as a basis for large numerical simulations of cosmological structure formation in coupled scalar field dark matter models.
Pending the results of this work, we still want to make one last comment concerning the small scale problem of the cosmological standard model. Traditionally this problem has been divided into two parts, the \textit{missing satellite problem} and the \textit{cusp-core problem}. While the missing satellite problem may well have a solution in the baryonic sector alone (see e.g. ref. \cite{Weinberg:2013aya} and references therein), the cusp-core problem (and the possibly strongly related \textit{too big to fail problem}) might still require some modification of the dark matter sector. As has been recently pointed out, warm dark matter, at least in the simplest version of a single thermally produced component, cannot resolve these issues consistently \cite{Schneider:2013wwa}. The problem is essentially that existing constraints on WDM particle masses from Lyman-$\alpha$ data \cite{Viel:2013fqw} and ultra-faint dwarf galaxies \cite{Polisensky:2010rw,Polisensky:2013ppa} lead to an allowed range in which the core structure of both host galaxies (the cusp-core problem) and of massive subgalaxies (the too big to fail problem) can no longer be explained. As the WDM model appears to be somewhat similar to our model in the linear regime, one might be tempted to draw similar conclusions here. This would however be premature. All the constraints derived for WDM models rely on simulations of non-linear structure growth, and, as mentioned above, the non-linear dynamics of our model are still under investigation and might turn out to be somewhat different from WDM. Furthermore, the presence of a coupling enlarges the parameter space and could therefore point towards a way out of this possible dilemma.
Furthermore, despite the similarities between our model and WDM models, there are observable differences. In particular, the oscillations present in the scalar field sector are expected to translate to the gravitational potential in virialized structures, similar to what happens in oscillatons or boson stars \cite{Lee:1995af,UrenaLopez:2001tw}. These oscillations are in principle observable. In a recent paper, Khmelnitsky and Rubakov investigated the effects scalar field dark matter would have on variations in pulsar timing signals and whether the resulting signatures are detectable in the near future \cite{Khmelnitsky:2013lxt}. The bounds we set on the current bolon mass exclude a possible near-future detection by more than one order of magnitude, as can be seen by a comparison with Figure 1 in ref. \cite{Khmelnitsky:2013lxt}.
Finally, even if the small scale problems of the cosmological standard model should find a full solution in the baryonic sector, it certainly does not invalidate the scalar field dark matter model. It has a strong theoretical motivation in its connection to a possible solution of the cosmological constant problem and therefore remains an interesting alternative.
\acknowledgments{The author would like to thank Valery Rubakov for useful discussions. This work is supported by the grant ERC-AdG-290623.}
\section{Acknowledgements}
We would like to thank the LCC generator working group and the ILD software working group for providing the simulation and reconstruction tools and producing the Monte Carlo samples used in this study.
This work has benefited from computing services provided by the ILC Virtual Organization, supported by the national resource providers of the EGI Federation and the Open Science GRID. In this study we made extensive use of the National Analysis Facility (NAF)~\cite{Haupt_2010} and would like to thank Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany, for the Grid computational resources operated there.
We thankfully acknowledge the support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2121 ``Quantum Universe'' 390833306.
\section{The basic principle of the TOF particle ID}
Measurements of the momentum $p$ and the velocity $\beta$ (in natural units) of a charged particle determine its mass via the relativistic relation in Equation~\ref{eq:1}:
\begin{equation}
\label{eq:1}
m = \frac{p}{\beta}\sqrt{1 -\beta^2}
\end{equation}
The momentum is measured in the tracking system. The track reconstruction of ILD is described in~\cite{track}. It is based on a Kalman filter which provides a helix-based parametrisation of the particle's trajectory at every hit, as well as at the IP and the calorimeter front face. Of particular importance for the momentum reconstruction are the curvature $\Omega$ and the dip angle $\lambda$ of the helix. With these and the strength of the solenoidal magnetic field $B_z$, the momentum is determined via Equation~\ref{eq:2}:
\begin{equation}
\label{eq:2}
p = e \abs{\frac{B_z}{\Omega}}\sqrt{1 + \tan^2\lambda}
\end{equation}
The velocity $\beta$ of the particle is calculated from the ratio of the track length $\ell_{\mathrm{track}}$ to the time-of-flight $\tau$:
\begin{equation}
\label{eq:3}
\beta = \frac{\upsilon}{c} = \frac{\ell_{\mathrm{track}}}{\tau \cdot c}
\end{equation}
Particles are identified by their corresponding $p$ and $\beta$, which lie in separate bands corresponding to the different types of hadrons~\cite{uli_talk}. An example of these bands corresponding to $\pi^{\pm}$, $K^{\pm}$ and $p$ is shown in Figure~\ref{fig:beta_bands}. The time-of-flight is estimated based on the timing information from the ECal hits. We will discuss the procedures involved in more detail in the section below.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.75\textwidth]{images/BetaCuts_TOF0_NiceH20.png}
\caption{$\beta$-versus-momentum plane. The three separate bands correspond to $\pi^{\pm}$, $K^{\pm}$ and $p$ particles. The bands are easily separable up to momenta of 3-5\,GeV. The plot is made with $\Omega_{\mathrm{IP}}$, $\lambda_{\mathrm{IP}}$, $\tau_{\mathrm{avg}}$ and assuming perfect time resolution. See Section\,2 for the details.}
\label{fig:beta_bands}
\end{figure}
The track parameters also serve to estimate the track length $\ell_{\mathrm{track}}$ with Equation~\ref{eq:4}. Therein, $\varphi_{\mathrm{ECal}}$ and $\varphi_{\mathrm{IP}}$ are the angles of the helix direction at the entry point to the ECal and the point of the closest approach to the IP, respectively.
\begin{equation}
\label{eq:4}
\ell_{\mathrm{track}} = \abs{\frac{\varphi_{\mathrm{ECal}} - \varphi_{\mathrm{IP}}}{\Omega}} \sqrt{1 + \tan^2\lambda}
\end{equation}
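Putting Equations~\ref{eq:1}-\ref{eq:4} together, a sketch of the reconstruction chain may be written as follows. The unit conventions here are assumptions for illustration: $\Omega$ in 1/mm, $B_z$ in T, lengths in mm, times in ns, momenta and masses in GeV.

```python
import numpy as np

C_MM_PER_NS = 299.792458  # speed of light in mm/ns

def momentum(omega, tan_lambda, Bz):
    """Eq. (2): p = e |Bz / Omega| sqrt(1 + tan^2 lambda); the prefactor
    3e-4 converts T / (1/mm) to GeV (assumed unit convention)."""
    return 3e-4 * abs(Bz / omega) * np.sqrt(1.0 + tan_lambda**2)

def track_length(phi_ecal, phi_ip, omega, tan_lambda):
    """Eq. (4): helix arc length between the IP and the ECal entry point."""
    return abs((phi_ecal - phi_ip) / omega) * np.sqrt(1.0 + tan_lambda**2)

def mass(p, l_track, tof):
    """Eqs. (1) and (3): beta from track length and TOF, then the mass."""
    beta = l_track / (tof * C_MM_PER_NS)
    return (p / beta) * np.sqrt(1.0 - beta**2)
```

Any bias in $p$, $\ell_{\mathrm{track}}$ or $\tau$ propagates directly into the reconstructed mass, which motivates the comparisons in Section\,4.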
\subsection{The choice of the track parameters}
In the formulae above, the track parameters $\Omega$ and $\lambda$ that are used for the momentum and track length calculation are assumed to be constant. However, due to energy loss they change as the track propagates through the tracking system. In Section\,4, we will compare the final mass spectrum obtained via Equation~\ref{eq:1} when calculating $\ell_{\mathrm{track}}$ and $p$ from the track parameters at the IP ($\Omega_{\mathrm{IP}}$, $\lambda_{\mathrm{IP}}$) and at the entry point to the calorimeter ($\Omega_{\mathrm{calo}}$, $\lambda_{\mathrm{calo}}$). In principle, the track length can be calculated with better precision directly from the Kalman filter that we use for track reconstruction. We plan to study this approach in the future. Also, vertex information is not yet taken into account in this study and it is always assumed that tracks start at the point of the closest approach to the IP.
\subsection{The choice of the TOF estimator}
We test four methods to estimate the particle-level TOF from the time information given by the ECal hits. The ECal hit time is considered to be the time of the earliest MC contribution to the energy deposition in the hit. To simulate a finite hit time resolution we apply Gaussian smearing with a standard deviation equal to the assumed time resolution. For conceptual studies, this smearing is omitted in some cases, labeled as ``perfect time resolution''. Further digitization effects from the electronics were not taken into account in the simulation and will be addressed in future studies.
The two easiest approaches are to take either the time of the ECal hit closest to the point where the extrapolated track enters the calorimeter ($\tau_{\mathrm{closest}}$) or the time of the earliest hit in the cluster ($\tau_{\mathrm{earliest}}$). To extrapolate the time-of-flight to the ECal surface, instead of the ECal hit position, one needs to correct for the distance between the track's entry point and the center of the ECal hit ($d_{\mathrm{hit,entry}}$). Then, the corrected TOF ($\tau_{\mathrm{corr}}$) for each hit is given by Equation~\ref{eq:5}:
\begin{equation}
\label{eq:5}
\tau_{\mathrm{corr}} = t_{\mathrm{hit}} - \frac{d_{\mathrm{hit,entry}}}{c}\mathrm{,}
\end{equation}
where $t_{\mathrm{hit}}$ is the time measured in the ECal hit. This requires an assumption on the speed by which the particle travels inside the calorimeter and/or the shower propagates. For practical purposes, the speed of light is assumed here, which can
lead to biases.
As relying only on a single hit suffers from fluctuations in the shower development and the time measurement, we also study TOF estimators that rely on multiple hits in the shower.
The multi-hit estimators combine information from the first 10 layers of the ECal, thereby selecting in each layer the hit which is closest to the extrapolated track, as illustrated in Fig.~\ref{fig:02a}. For charged hadrons, this approximates the MIP part of the cluster, before the hadron actually starts to shower. In the future, this could be replaced by a dedicated shower start finder. The time information of the selected hits can then be combined either by averaging the corrected times at the calorimeter entrance ($\tau_{\mathrm{avg}}$), or by fitting the velocity of the particle's propagation ($\tau_{\mathrm{fit}}$).
In case of the fit, a linear function is used to describe the time of the hits as a function of the distance to the entry point. An example of such a fit is shown in Fig.~\ref{fig:02b}.
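The two multi-hit estimators can be sketched as follows, assuming the per-layer closest hits have already been selected (times in ns, distances to the track's entry point in mm; variable names are illustrative):

```python
import numpy as np

C_MM_PER_NS = 299.792458  # speed of light in mm/ns

def tof_average(hit_times, hit_dists):
    """tau_avg: mean of the per-hit corrected times of Eq. (5),
    assuming speed-of-light propagation inside the calorimeter."""
    t = np.asarray(hit_times)
    d = np.asarray(hit_dists)
    return np.mean(t - d / C_MM_PER_NS)

def tof_fit(hit_times, hit_dists):
    """tau_fit: straight-line fit of hit time versus distance to the
    entry point; the intercept at zero distance is the TOF estimate."""
    slope, intercept = np.polyfit(hit_dists, hit_times, 1)
    return intercept
```

Note that the fit does not assume speed-of-light propagation: the slope is left free, which is one reason the two estimators can behave differently.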
\begin{figure}[!htbp]
\centering
\begin{subfigure}[t]{0.37\textwidth}
\centering
\includegraphics[width=\textwidth]{images/tof_frank.png}
\caption{Sketch of the ECal hit selection for the TOF calculation with multi-hit methods.}
\label{fig:02a}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.53\textwidth}
\centering
\includegraphics[width=\textwidth]{images/fit_example.png}
\caption{Linear fit of time versus distance to the entry point for the selected ECal hits.}
\label{fig:02b}
\end{subfigure}
\caption{Illustration of multi-hit TOF estimators.}
\end{figure}
\section{Conclusions and outlook}
In this contribution, we presented how the reconstructed mass of charged hadrons depends on the choice of track parameters and the chosen TOF estimator. We conclude that using the track parameters at the calorimeter surface, $\Omega_{\mathrm{calo}}$, $\lambda_{\mathrm{calo}}$, and a TOF estimator based on fitting the time-of-arrival from hits in the first 10 layers of the ECal shows the smallest mass bias, which is also of similar size for all studied charged hadrons. We present the idea to calibrate the TOF estimators based on photon clusters, for which the expected true time of arrival is simpler to predict than for hadrons. After a first attempt of such a calibration, a bias of about 1\,ps remains, which translates into a mass bias of about 3-4\,MeV for typical kaons. This is two orders of magnitude larger than the precision required to contribute to our knowledge of the kaon mass, which is $O$(10\,keV).
In this study we focused on the assumption of a perfect time resolution.
However, the time resolution will affect the performance of each of the TOF estimators which may change the preferred one.
One needs to check the behavior of the TOF estimators with increasing finite time resolutions.
In addition, the implementation of a realistic digitizer is required to test TOF estimators in a more realistic environment, as the energy threshold of the electronics may cut off low-energy hit contributions which also impacts the performance.
As for now, taking the earliest MC contribution to the calorimeter hit does not show this effect.
An alternative option for the ECal TOF estimation would be to use the outermost Si tracker layer(s). This method has the advantage of being independent of the shower development and the corresponding material effects, but the disadvantage of providing only $O(1)$ hit time measurements. This study has been performed with PFOs detected in the barrel section of the detector; a large fraction of PFOs going into the endcap region remains unstudied due to the track length calculation, which cannot account for multiple curls in one track. We plan to improve the track length calculation with the Kalman filter to solve this problem and extend the coverage region for PID.
\section{Introduction}
The performance requirements for detectors at a future Higgs factory have been known for a long time in terms of track momentum resolution, jet energy resolution, impact parameter resolution and hermeticity. More recently it has been realised that the particle identification (PID) ability to distinguish different kinds of charged hadrons provides important additional information~\cite{tof_ref1, tof_ref2,tof_ref3}. Some proposed detector concepts like ILD~\cite{TDR, IDR} at the ILC~\cite{ilc} or the CEPC~\cite{cepc2} detector offer PID via the specific energy loss ($dE/dx$) in their gaseous main tracking devices, which could be complemented by time-of-flight (TOF) measurements. For the other detector concepts which rely on silicon tracking only, TOF would be the only possibility for charged hadron PID. TOF measurements could be implemented e.g. with fast timing silicon sensors placed in the innermost layers of the electromagnetic calorimeter (ECal) or in the outermost tracker layer.
We present here a study based on the ECal, assuming single-hit time resolutions between 10 and 100\,ps, as they can be reached by modern silicon sensor technologies~\cite{LGAD}. With this kind of resolution, TOF-based PID would allow the identification of charged hadrons with momenta up to about 5\,GeV, exactly in the region where the Bethe-Bloch bands overlap, prohibiting identification via $dE/dx$, see Fig.\,8.6 in~\cite{IDR}. The TOF-based PID approach is relevant for any Higgs factory; however, for this study we use the ILD concept and its detailed and well-established full simulation as a showcase.
In addition to the PID, TOF could potentially also be employed to improve our knowledge of the kaon mass. The two most precise measurements of the kaon mass, both from spectroscopy of kaonic atoms, are discrepant, and the PDG quotes their average as $493.677 \pm 0.013\,\mathrm{MeV}$~\cite{pdg}. If a mass measurement from TOF could reach a precision of about 10\,keV, future Higgs factories could contribute to clarifying this discrepancy.
The results presented here were obtained using an $e^+e^- \rightarrow Z \rightarrow q\bar{q}$ sample from the IDR MC production of ILD~\cite{Production_2018} ($\sqrt{s}=500$\,GeV, iLCSoft v02-00-02, ILD\_l5\_o1\_v02). Details of this production and the employed standard reconstruction can be found in~\cite{IDR}; we only point out that the clustering of the calorimeter hits and the matching of tracks and clusters to particle flow objects (PFOs) were performed with the Pandora particle flow algorithm~\cite{Pandora}. As a simple test ground for our methods, only PFOs with exactly one track and one cluster in the barrel part of the ECal were considered. Furthermore, the tracks were required to have not fewer than 200 of the maximum possible 220 TPC hits and track parameters at the interaction point (IP) of $\abs{d_{0}} < 10$\,mm, $\abs{z_{0}} < 20$\,mm to ensure the quality of the tracks. For the studies with photons we required a cluster longitudinal position of $\abs{z_{\mathrm{cluster}}} < 2200$\,mm and a true vertex position within 500\,$\mu$m of the IP.
\section{Results}
In this section we present results of the study of different track parameter options and TOF estimators in terms of precise mass reconstruction for charged hadrons.
\subsection{Mass reconstruction}
Reconstructed mass peaks of the $\pi^{\pm}$, $K^{\pm}$, $p$ particles are shown in Fig.~\ref{fig:03} using different track parameters and TOF estimators evaluated on true hit times, without smearing for the time resolution.
Different methods show different peak positions of the mass distribution which differ from the PDG values at the level of $O$(10\,MeV).
We define the mass bias as the difference between the peak position and the PDG value. The peak positions are determined with a Gaussian fit in a local region around the observed maximum. The mass biases for all particles and all methods are combined in one summarizing plot in Fig.~\ref{fig:04}.
Using the track parameters at the calorimeter surface leads to a consistent bias for all particles, while taking the track parameters at the IP leads to a lower reconstructed mass for $\pi^{\pm}$ and a higher reconstructed mass for $p$ than expected.
In the case of using track parameters at the calorimeter surface, $\tau_{\mathrm{avg}}$ shows the largest bias for all particles. The explanation for this seems to be the assumption of the speed-of-light propagation of the shower, which results in a larger bias compared to the other estimators.
The estimator $\tau_{\mathrm{fit}}$ in combination with the track parameters at the calorimeter impact point shows the smallest bias, but still a remaining bias of 3-4\,MeV, which is two orders of magnitude larger than the precision we want to achieve, $O$(10\,keV).
\begin{figure}[!htbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{images/pion_mass_ip.png}
\caption{$\Omega_{\mathrm{IP}}$, $\lambda_{\mathrm{IP}}$ track parameters for $\pi^{\pm}$.}
\label{fig:03a}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{images/pion_mass_calo.png}
\caption{$\Omega_{\mathrm{calo}}$, $\lambda_{\mathrm{calo}}$ track parameters for $\pi^{\pm}$.}
\label{fig:03b}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{images/kaon_mass_ip.png}
\caption{$\Omega_{\mathrm{IP}}$, $\lambda_{\mathrm{IP}}$ track parameters for $K^{\pm}$.}
\label{fig:03c}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{images/kaon_mass_calo.png}
\caption{$\Omega_{\mathrm{calo}}$, $\lambda_{\mathrm{calo}}$ track parameters for $K^{\pm}$.}
\label{fig:03d}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{images/proton_mass_ip.png}
\caption{$\Omega_{\mathrm{IP}}$, $\lambda_{\mathrm{IP}}$ track parameters for $p$.}
\label{fig:03e}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{images/proton_mass_calo.png}
\caption{$\Omega_{\mathrm{calo}}$, $\lambda_{\mathrm{calo}}$ track parameters for $p$.}
\label{fig:03f}
\end{subfigure}
\caption{Mass spectrum for $\pi^{\pm}$, $K^{\pm}$, $p$ using different track parameters and TOF estimators for the mass calculation.}
\label{fig:03}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.75\textwidth]{images/mass_biases.png}
\caption{The bias in the reconstructed mass for the different methods that combine hit-level time information into a cluster-level time-of-flight. The $\tau_{\mathrm{fit}}$ estimator with track parameters at the calorimeter surface shows the smallest bias, which also varies least between particles.}
\label{fig:04}
\end{figure}
For further improvement, a deeper understanding of the cause of the bias is needed, investigating each of the possible major contributors, i.e.\ the momentum, track length and TOF estimation, separately.
\subsection{Study of TOF estimators with photons}
The bias of the reconstructed mass is caused either by the momentum measurement, the track length measurement or the TOF estimation. It is not a trivial task to study them separately with charged hadrons. However, one can use photons to make track length and momentum calculations trivial and consider only TOF effects.
Photons travel in a straight line with no energy loss in the tracking system and at the constant velocity $c$. This allows us to calculate the true track length easily and then the true TOF $\tau_{\mathrm{true}}$ with Equation~\ref{eq:6}:
\begin{equation}
\tau_{\mathrm{true}} = \frac{\ell_{\mathrm{track, true}}}{c}
\label{eq:6}
\end{equation}
The timing bias results for the different TOF estimators are shown in Figs.~\ref{fig:05} and~\ref{fig:06} for perfect and 10\,ps time resolution, respectively.
The least biased and the most precise methods are $\tau_{\mathrm{closest}}$ and $\tau_{\mathrm{earliest}}$, respectively, but they degrade rapidly once a finite time resolution is applied, as seen in Fig.~\ref{fig:06}. At 10\,ps time resolution they show a level of precision comparable to $\tau_{\mathrm{fit}}$, while $\tau_{\mathrm{avg}}$ performs significantly better than the others.
Among the multi-hit methods, $\tau_{\mathrm{fit}}$ shows the smallest bias, albeit at some cost in precision compared to $\tau_{\mathrm{avg}}$, especially for larger time resolutions. In future studies, we want to seek ways to improve the precision of $\tau_{\mathrm{fit}}$ under an applied hit time resolution.
These results can be used to further study and improve weak points of all TOF estimators or to calibrate them to reduce the bias of the reconstructed mass of charged hadrons.
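To make the comparison concrete, the following snippet sketches plausible implementations of the four estimators. The exact definitions used in this study are given earlier in the paper; here we assume, for illustration only, that each hit time is corrected back to the cluster front under the speed-of-light propagation hypothesis mentioned above, and that $\tau_{\mathrm{fit}}$ extrapolates a linear fit of hit time versus distance to zero distance:

```python
import numpy as np

C = 299.792458  # speed of light in mm/ns

def tof_estimators(hit_times, hit_distances):
    """Sketch of cluster-level TOF estimators from ECal hits.
    hit_times: hit times in ns; hit_distances: distance (mm) of each
    hit to the track impact point at the calorimeter surface."""
    t = np.asarray(hit_times, dtype=float)
    d = np.asarray(hit_distances, dtype=float)
    corrected = t - d / C                    # speed-of-light shower propagation
    tau_avg = corrected.mean()               # average over all corrected hits
    tau_closest = corrected[np.argmin(d)]    # hit closest to the impact point
    tau_earliest = corrected[np.argmin(t)]   # earliest recorded hit
    slope, intercept = np.polyfit(d, t, 1)   # linear fit of t versus d
    tau_fit = intercept                      # extrapolated to d = 0
    return tau_avg, tau_closest, tau_earliest, tau_fit
```

On noiseless synthetic hits that follow $t=\tau+d/c$ exactly, all four estimators agree; their different averaging strategies are what produces the different sensitivities to hit time smearing discussed above.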
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.75\textwidth]{images/biases0.png}
\caption{The timing biases of different TOF estimators for photon clusters with perfect time resolution.}
\label{fig:05}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.75\textwidth]{images/biases10.png}
\caption{The timing biases of different TOF estimators for photon clusters with 10\,ps time resolution.}
\label{fig:06}
\end{figure}
Here we present an example of the calibration procedure and the resulting reduction of the timing bias for the $\tau_{\mathrm{fit}}$ estimator. We use the ratio $\tau_{\mathrm{fit}}/\tau_{\mathrm{true}}$ as a function of the number of hits in the ECal cluster to derive a calibration curve. We fit the distribution shown in Fig.~\ref{fig:07} with two second-order polynomials, one for the region of 0--45 hits and the other for the remaining region of 45--200 hits. The fit functions were chosen for practical reasons, to obtain a reasonable match with the distribution. We can then correct for the bias using this fit. In Fig.~\ref{fig:08} we present the timing bias distribution of the $\tau_{\mathrm{fit}}$ estimator before and after calibration and observe a clear improvement in the mean of the bias distribution.
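The calibration step described above can be sketched as follows. The split at 45 hits and the two quadratic fits follow the text, while the function names and data layout are our own illustrative choices:

```python
import numpy as np

def build_calibration(n_hits, ratio, split=45):
    """Fit tau_fit / tau_true versus the number of cluster hits with
    two second-order polynomials (below and above `split` hits) and
    return a function that corrects a raw tau_fit value."""
    n = np.asarray(n_hits, dtype=float)
    r = np.asarray(ratio, dtype=float)
    low = n <= split
    p_low = np.polyfit(n[low], r[low], 2)     # 0 - 45 hits
    p_high = np.polyfit(n[~low], r[~low], 2)  # 45 - 200 hits

    def correct(tau_fit, nh):
        p = p_low if nh <= split else p_high
        return tau_fit / np.polyval(p, nh)    # divide out the fitted bias

    return correct
```

In practice the profile of Fig.~\ref{fig:07} would supply \texttt{n\_hits} and \texttt{ratio}; any smooth synthetic profile behaves the same way.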
In this paper we only introduce the idea.
The full effect of a TOF calibration procedure on the mass reconstruction of charged hadrons remains to be investigated and will be addressed in further studies.
One also needs to verify that a calibration derived from photons holds for charged hadrons, as their showers in the ECal have different properties that can affect the final TOF estimate. The effects of the hit time resolution likewise require a more detailed examination.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.75\textwidth]{images/prof.png}
\caption{Correlation of the ratio of the reconstructed $\tau_{\mathrm{fit}}$ to the $\tau_{\mathrm{true}}$ with the number of hits in the ECal cluster. The magenta line is used as a calibration curve.}
\label{fig:07}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.75\textwidth]{images/photon_bias.png}
\caption{The timing bias distribution of $\tau_{\mathrm{fit}}$ before and after calibration. The mean of the bias is reduced by a factor of 3.}
\label{fig:08}
\end{figure}
\section{Introduction}
\label{sec:intro}
Modern face classifiers based on deep learning leverage large datasets to learn the essential features that make each face unique and achieve high performance on the recognition task, even outperforming human capabilities \cite{Taigman2014}.
However, in the presence of occlusions, like masks, glasses, hair, or even food, face classifiers struggle \cite{Ekenel2009, Zeng2020}.
Face detection is the first step towards recognition; general-purpose approaches include rigid templates, \emph{e.g.}, the Viola-Jones face detector \cite{Viola2001}; deformable part models (DPM) \cite{Yan2014}; and deep learning models, such as the Single-Shot MultiBox Detector (SSD) \cite{Liu2016}.
When extremely large datasets \cite{Yang2016} are used to train deep learning methods, these are able to detect occluded faces \cite{Zhang2020}; nevertheless, other approaches specialize in detecting faces under occlusion.
Such methods are commonly tested on the masked faces in the wild (MAFA) dataset \cite{Ge2017} and employ strategies such as locating visible facial segments to estimate the face \cite{Yang2018}, fusing detection results obtained from face sub-regions \cite{Mahbub2016}, or treating occluded faces as a single-class object detection problem \cite{Opitz2016}.
The approaches to occluded face recognition (OFR) are usually divided into three categories \cite{Zeng2020}:
\begin{enumerate}
\item In occlusion-robust feature extraction approaches, the aim is to extract features that are less affected by occlusions, while preserving the discriminative capability.
Feature extraction can be performed by engineered methods, such as local binary patterns (LBP) \cite{Ahonen2004} and scale-invariant feature transform (SIFT) descriptors \cite{Lowe1999}, or by learning-based methods, among which deep convolutional neural network (CNN) approaches stand out, training with augmented occluded faces and reducing the spatial support for a more local feature extraction \cite{Osherov2017}.
\item In occlusion-aware face recognition, the idea is to exclude the occlusion area, such that only part of the face is used for the classification task \cite{Wan2017,Song2019}.
\item Finally, there are occlusion-recovery-based approaches that try to solve the occlusion problem in the image space by completing the occluded area.
Again, deep learning techniques stand out either when explicitly addressing de-occlusion of faces \cite{Zhao2018,Li20}, or when used for blind inpainting through generative models employing variational autoencoders (VAE) \cite{Kingma2013}, generative adversarial networks (GAN) \cite{Goodfellow2014, He2019, Ge2020}, and partial convolutions \cite{Liu2018}.
\end{enumerate}
We used a dataset formed by images containing only aligned faces, so that face detection was not needed.
Our focus is thus on the classification part of an OFR system.
We employed data-driven approaches to present an occlusion-robust classifier leveraging robust feature extraction, and a recovery-based strategy using image completion for occluded face classification.
The tested occlusions were artificially added to the images using binary masks, filling in the masked areas with a mid-grey color.
Besides the benchmarks reported in this paper, our contribution also comprises the resulting publicly available datasets, with different kinds of synthetic occlusions and their inpainted counterparts.
To the best of our knowledge, this group of datasets is unique in the literature and can help facilitate and promote research in the field.
\section{Datasets}
\label{sec:datasets}
\subsection{CelebA-HQ}
\label{sec:celeba-hq}
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{Figs/celeba_hq.pdf}
\caption{Examples of the CelebA-HQ dataset.}
\label{fig:celeba_hq}
\end{figure*}
To avoid the need for face detection and to concentrate our efforts on the classification task, we chose the CelebFaces Attributes Dataset (CelebA) \cite{Liu2015}.
More specifically, we utilized the aligned and cropped version pre-processed for super-resolution at 1024-by-1024 pixels, also known as the CelebA-HQ dataset, comprising 30,000 images \cite{Karras2018, CelebA-HQ}.
In the CelebA-HQ dataset, the number of images is not equally divided among classes, so we worked only with a subset of it, in which each class (or person) appears in at least 15 images.
Thus, the dataset was reduced from 30,000 to 5,478 images of center-aligned faces from 307 celebrities.
The dataset was randomly divided into three fixed sets: a training set (3,943 images), with at least 10 images of each class; a validation set (921 images), with 3 images of each class; and a test set (614 images), with 2 images of each class.
Figure \ref{fig:celeba_hq} depicts examples of the dataset.
Looking through the images we noticed some characteristics worth mentioning:
\begin{enumerate}[i)]
\item Although most of the pictures are front-facing, there is some pose variation;
\item There are some faces with natural occlusions (hair, hats, glasses, hands, microphone);
\item There are some images with distorted background;
\item The dataset is not well-balanced, blond white women are the majority;
\item Celebrities tend to change their appearance considerably, making the classification task harder.
\end{enumerate}
\subsection{Synthetic occlusions}
\label{sec:synth_occlusions}
To simulate occlusions we defined 8 binary masks to account for different parts of the face, as depicted in Figure \ref{fig:masks} and discussed below.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{Figs/masks.pdf}
\caption{Binary masks used to simulate occlusions (white pixels).}
\label{fig:masks}
\end{figure}
\begin{itemize}
\item Masks 1 and 2 represent minor occlusions, removing only one eye (left or right eyes are occluded randomly) or the nose, respectively.
\item Mask 3 simulates occlusions caused by surgical masks, which nowadays have become a quite common occlusion artifact.
\item Mask 4 occludes both eyes and most of the hair, which is a typical occlusion used to anonymize identity.
\item Mask 5 occludes one entire side of the face to simulate occlusion by pose variation (the side of the occlusion is randomly chosen).
\item Mask 6 simulates occlusion by watermarking or by some degradation in the picture.
\item Masks 7 and 8 are the control group, with the former isolating the face and occluding only the background, where minimal interference in detection is expected, while the latter completely occludes the face and correct matches are only expected by chance.
\end{itemize}
Each occlusion was applied only to the test dataset, thus resulting in eight new test sets.
The occluded areas were filled in with a mid-grey color.
Figure \ref{fig:datasets_w_occlusions} depicts one example from each of these datasets.
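Applying such an occlusion amounts to a simple masked fill. The snippet below is an illustrative sketch (the array shapes and the grey value of 128 are our assumptions; the text only specifies a mid-grey fill):

```python
import numpy as np

def apply_occlusion(image, mask, fill=128):
    """Return a copy of `image` with the pixels where `mask` is
    nonzero replaced by a mid-grey value."""
    out = image.copy()
    out[mask.astype(bool)] = fill  # boolean mask broadcasts over the channel axis
    return out
```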
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{Figs/datasets_w_occlusions.pdf}
\caption{Examples of the eight resulting test datasets with occlusions.}
\label{fig:datasets_w_occlusions}
\end{figure}
\subsection{Inpainting}
\label{sec:inpainting}
Inpainting, or image completion, is the process of reconstructing the missing parts of an image.
The inpainted region should have semantic cohesion with the overall scene, continuity around the edges, and visually realistic content.
Inpainting is quite subjective, since there is no prior knowledge about the missing parts: many completions may appear correct while modifying cues that are highly relevant to a person's identity.
There are many approaches for solving inpainting problems, as mentioned in Section \ref{sec:intro}.
For this paper we chose to use pluralistic image completion (PICNet) \cite{Zheng2019}.
It is a probabilistically principled framework which, during training, uses the original image $I_{gt}=\{I_m,~I_c\}$, its degraded version $I_m$ (the masked partial image), and its complement partial image $I_c$ to generate inpainted versions of $I_m$ by sampling from $p(I_c|I_m)$.
The PICNet is implemented with two parallel paths, as illustrated in Figure \ref{fig:picnet}.
The upper path is the reconstructive pipeline (red lines), which utilizes $I_c$ to infer the importance function $q_{\psi}(\textbf{z}_c|I_c)$ during training.
Using the sampled latent vector $\textbf{z}_c$ and the conditional feature $\textbf{f}_m$, which encodes the information of the visible regions of $I_m$, there is enough information to train the decoder to reconstruct the original image $I_{gt}$.
\begin{figure*}[hbt]
\centering
\includegraphics[width=.7\textwidth]{Figs/picnet.pdf}
\caption{Simplified diagram of the architecture of the PICNet \cite{Zheng2019}.}
\label{fig:picnet}
\end{figure*}
The lower path is the generative pipeline (blue lines), which only uses $I_m$ to infer the conditional prior $p_{\phi}(\textbf{z}_c|I_m)$ of the holes.
In this case the sampled latent vector $\tilde{\textbf{z}}_c$ is not accurate enough to reconstruct the whole image, so the decoder aims to reconstruct only the visible regions $I_m$, using $\textbf{f}_m$.
Both paths are supported by GANs, to ensure that the synthesized data fit in with the training set distribution, leading to higher-quality images.
Additionally, there is a short+long term attention layer that exploits distant relations among decoder and encoder features, improving appearance consistency.
The representation and generation networks share exactly the same weights, and testing is performed only along the generative path, with $\tilde{I}_{\emph{gen}}$ being the inpainted output.
In our evaluations, we used a model trained on the CelebA-HQ dataset using random occlusion masks \cite{Zheng2019}.
Figure \ref{fig:inpainting_datasets} shows some examples of the inpaintings produced by PICNet.
\begin{figure}[bt]
\centering
\includegraphics[width=\columnwidth]{Figs/inpainting_datasets.pdf}
\caption{Examples of inpainting using PICNet (original-masked-inpainted). The reconstruction of the images in the first row is quite faithful to the originals. The images in the second row have a high-quality reconstruction, although quite different from their original counterparts. The last two rows show cases that can be considered failures, as PICNet introduced unexpected artefacts and distorted faces.
}
\label{fig:inpainting_datasets}
\end{figure}
Although the PICNet is able to generate multiple and diverse plausible solutions for image completion, the inpainted datasets were created using only one solution for each image.
\section{Face classification}
\label{sec:face_classification}
\subsection{Regular Classifier}
\label{sec:regular_classifier}
For the face classification task we chose to use a ResNet \cite{He2016} architecture.
Since the chosen dataset does not provide many images of each class (15 to 28 images per person), data augmentation was performed.
We performed data augmentation by applying random variations to the training set images, such that the images change but their meaning does not. These include flipping, rotation, zooming, warping, and lighting transformations.
We started training our classifier with pre-trained ResNets from \texttt{fastai} \cite{Howard2020}, trained on ImageNet \cite{ImageNet}.
Pre-trained weights make for a better initialization than random weights, since the layers are already suited to extract meaningful features for image classification.
A cross entropy loss function with flattened input was used to compare predictions and targets.
To define the size of the network, we trained ResNets with 18, 34, 50, and 101 layers.
The training for all those networks was performed using 1 frozen epoch and 25 unfrozen ones.
The learning rate was scheduled using cosine annealing and momentum \cite{Smith2019}, with its maximum at $\lambda=5\cdot10^{-3}$, found using the technique from Leslie Smith \cite{Smith2017}.
The results for the validation and test sets are shown in Table \ref{tab:resnets}.
For the regular classifier and for the experiments in the remainder of this paper, we chose to use the ResNet101, since it provided the lowest error rate on the test set. Since the purpose of the experiments in this section was to establish the baseline method using non-occluded faces and to decide the architecture to be used for comparison, it was fair to use the test set in this decision. In the remainder of this paper, any hyper-parameter tuning was done based on the validation set.
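For concreteness, a simplified version of the learning-rate schedule used above (cosine annealing with a maximum of $\lambda=5\cdot10^{-3}$) can be sketched as below. The warm-up fraction and the initial divisor are our illustrative assumptions; fastai's actual one-cycle implementation differs in details:

```python
import math

def cosine_one_cycle(step, total, lr_max=5e-3, pct_start=0.25, div=25.0):
    """Cosine warm-up from lr_max/div to lr_max over the first
    pct_start of training, then cosine annealing back towards 0."""
    warm = pct_start * total
    if step < warm:
        t = step / warm  # warm-up phase
        return lr_max / div + (lr_max - lr_max / div) * (1 - math.cos(math.pi * t)) / 2
    t = (step - warm) / (total - warm)  # annealing phase
    return lr_max * (1 + math.cos(math.pi * t)) / 2
```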
\begin{table}[htb]
\centering
\caption{Baselines error rates for different ResNet depths considering the test set without occlusions.}
\label{tab:resnets}
\begin{tabular}{@{}lccc@{}}
\toprule
\multirow{2}{*}{Architecture} & \multirow{2}{*}{Validation error} & \multicolumn{2}{c}{Test error} \\ \cmidrule(l){3-4}
& & Top-1 & Top-5 \\ \midrule
ResNet18 & 25.52\% & 23.94\% & 11.07\% \\
ResNet34 & 22.37\% & 23.62\% & 10.10\% \\
ResNet50 & \textbf{19.00\%} & 20.52\% & 8.47\% \\
ResNet101 & 19.98\% & \textbf{17.91\%} & \textbf{7.00\%} \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Robust Classifier}
\label{sec:robust_classifier}
To train the robust classifier we added the Cutmix \cite{Yun2019} approach as a regularization strategy.
The idea of this method is to cut and paste rectangle patches among training images, where the labels are also mixed proportionally to the area of the patches.
This approach prevents the CNN from focusing too much on a small set of intermediate activations or on a small region of input images, which improves generalization and localization by forcing the model to attend to the entire object.
Also, since the patches are formed by other images from the dataset and the ground truth labels are also mixed, this approach further enhances localization ability by requiring the model to identify the object from a partial view.
Therefore, Cutmix adds exactly the kind of robustness desired for classifying images under occlusion.
Figure \ref{fig:data_aug} illustrates the data augmentation strategies applied in the training images.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{Figs/data_aug.pdf}
\caption{Examples of data augmentation. (a) Original image. (b) Output of data augmentation transforms. (c) Cutmix, notice the label in this case is a mixture of the two images. }
\label{fig:data_aug}
\end{figure}
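The cut-and-paste mixing described above can be sketched as follows. This is our minimal illustration of the CutMix idea with one-hot labels, not necessarily the exact implementation used for training:

```python
import numpy as np

def cutmix(img_a, lab_a, img_b, lab_b, rng=None):
    """Paste a random rectangle of img_b into img_a and mix the
    (one-hot) labels proportionally to the patch area."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = img_a.shape[:2]
    lam = rng.beta(1.0, 1.0)                      # sampled mixing ratio
    cut_h = int(h * np.sqrt(1.0 - lam))
    cut_w = int(w * np.sqrt(1.0 - lam))
    cy, cx = int(rng.integers(0, h)), int(rng.integers(0, w))
    y1, y2 = np.clip([cy - cut_h // 2, cy + cut_h // 2], 0, h)
    x1, x2 = np.clip([cx - cut_w // 2, cx + cut_w // 2], 0, w)
    out = img_a.copy()
    out[y1:y2, x1:x2] = img_b[y1:y2, x1:x2]
    area = (y2 - y1) * (x2 - x1) / (h * w)        # actual pasted fraction
    mixed = (1.0 - area) * np.asarray(lab_a) + area * np.asarray(lab_b)
    return out, mixed
```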
Training with Cutmix is considerably slower, since it becomes harder for the model to learn from the mixed examples. This is the case for any method that deals with noisy labels~\cite{cordeiro2020tutorial}.
The training was performed the same way as before, using the transfer learning procedure and learning rate schedule, but this time we had to train the model for 70 unfrozen epochs to reach a validation error of \textbf{19.00\%}. The obtained model reaches top-1 and top-5 test errors of \textbf{17.42\%} and \textbf{7.98\%}, respectively. As expected, this is a little bit better than the original method, but the obtained network should have higher generalization power.
\section{Performance assessment and analysis}
\label{seq:analysis}
\subsection{Experiment setup}
\label{sec:setup}
In order to check how the occlusions affect the classification task we use as baselines the results obtained in Section \ref{sec:face_classification} for the top-1 error over the test set without occlusions: 17.91\% and 17.42\%, respectively for the regular and for the robust classifiers.
Then, we check, for each of the test sets with occlusions, how the classifiers perform.
Finally, we compare these results with the classification error rates for the inpainted datasets.
\subsection{Results}
\label{sec:results}
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{Figs/results.pdf}
\caption{Performance of the two classifiers over different occlusions and inpaintings. Top: ResNet101 regular training. Bottom: ResNet101 with Cutmix.}
\label{fig:results}
\end{figure}
After adding the synthetic occlusions, we noticed that the error rate increased for both classifiers in all tested occlusion masks, the dark blue bars in Figure \ref{fig:results}.
As for the occlusion-recovery-based approach with the inpainted images (green bars), we can see that it was not always helpful and its behavior is heavily dependent on the classifier's robustness.
From Figure \ref{fig:results}-top, we observe that the regular classifier was severely affected by the occluded images, with its error rate doubling in all cases except for mask 7.
The inpainted images did improve this classifier's performance, especially in the presence of small occlusions and for the watermark case (mask 6).
The robust classifier in Figure \ref{fig:results}-bottom was indeed better at ignoring the occlusions, having lower error rates in all cases.
Interestingly, inpainting worsened the robust classifier's performance for some masks (1, 4, and 5); this makes sense when we look at some of the inpainted images and see that, in some cases, the completion completely changes the subject's appearance.
\begin{figure*}[t]
\centering
\includegraphics[trim={0 0 0 0},clip,width=.96\textwidth]{Figs/inpaintings.pdf}
\caption{Example of inpainting results for all the masks.}
\label{fig:inpaintings}
\end{figure*}
The inpainting results for masks 1 and 5 were a little disappointing since the short+long term attention layer of the PICNet should supposedly ``copy'' the symmetric information, such that, in those cases, a lower error rate was expected.
Results for mask 4 are expectedly high since this kind of occlusion is indeed made to hide identity (for anonymization purposes), and inpainting in this case could not recover the true features of the faces.
The best performance gains for the inpainting technique happened with the heavy watermark mask (6). This kind of occlusion was the worst for both classifiers and inpainting brought the error rate near the non-occluded baseline.
A surprising result was that of mask 7, which we believed would not harm either classifier, because it only masks out the areas of the images near the borders. However, that mask degraded both classifiers, indicating that the ResNet (even when trained with CutMix) clearly uses non-facial cues for person recognition. Furthermore, the faces in this dataset are not perfectly aligned, and relevant identification information lies near the border of some images.
Results in mask 8 are as expected, both classifiers failed to deliver any sensible classification with or without inpainting, since most of the subjects' faces are not visible with that mask.
In Figure \ref{fig:inpaintings} we illustrate some examples of the inpainting results for all the masks.
\section{Conclusions}
In this paper we studied two ways to deal with the problem of face recognition under occlusions (OFR) and
created 16 modified test sets, which can be used for OFR problems as well as for the detection of deep fakes.
In our results, training a robust classifier was the strategy that produced the best results, while inpainting was helpful for improving the accuracy of the regular classifier in almost every case.
For future works, we propose to investigate how much of the techniques presented here could be transferred for situations in which occlusion masks are not available and have to be inferred by semantic segmentation.
Furthermore, it is well known that the robustness of face recognition systems is improved by registering faces, i.e.\ by aligning their features (e.g.\ eyes and nose tip). However, our results with mask 7 (which only occludes pixels near the image border) may indicate that some peripheral features (such as hair and neck) actually have a relatively high importance in face recognition. It will be interesting to investigate this systematically, using facial feature detection and explainable AI techniques.
Finally, it will be interesting to make use of more than one output of the pluralistic image completion approach to (i) check whether there is considerable variability in the results and (ii) use an ensemble strategy to combine the results of multiple inpainted versions of each image.
\bibliographystyle{IEEEtran}
\section{Introduction}
Let $(M,g)$ be a compact Riemannian manifold without boundary, denote
by $\Delta_{g}$ its Laplace-Beltrami operator, and consider the
nonlinear Klein-Gordon equation
\be \label{KG}
(\partial_{t}^2-\Delta_{g}+V+m^2)v=-\partial_{2}f(x,v)
\ee
where $m$ is a strictly positive constant, $V$ is a smooth nonnegative
potential on $M$ and $f\in C^\infty(M\times \R)$ vanishes at least at
order 3 in $v$, $\partial_{2}f$ being the derivative with respect to
the second variable. In this work we prove that, for a special class
of manifolds and for almost every value of $m>0$, this
\textit{Hamiltonian} partial differential equation admits a Birkhoff
normal form at any order. The principal dynamical consequence is the
almost global existence of small amplitude solutions for such a
nonlinear Klein-Gordon equation.
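Let us note in passing that \eqref{KG} is indeed Hamiltonian: a standard computation shows that it is, at least formally, the equation of motion associated with the conserved energy
\[
H(v,\partial_{t}v)=\int_{M}\Big(\frac{1}{2}(\partial_{t}v)^2
+\frac{1}{2}|\nabla_{g} v|^2+\frac{1}{2}(V+m^2)v^2+f(x,v)\Big)\,dx,
\]
since the variational derivative in $v$ of the last three terms reproduces $-\Delta_{g}v+(V+m^2)v+\partial_{2}f(x,v)$.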
More precisely, if $M$ is a Zoll manifold (i.e. a compact manifold
whose geodesic flow is periodic, e.g. a sphere), for almost every
value of $m>0$ and for any $N\in \Nn$, we prove that there is $s\gg 1$
such that, if the initial data
$(v\arrowvert_{t=0},\partial_{t}v\arrowvert_{t=0})$ are of size $\epsilon
\ll 1$ in $H^s \times H^{s-1}$, \eqref{KG} has a solution defined on a
time interval of length $C_{N}\ \epsilon^{-N}$. As far as we know,
this is the first result of this type when the dimension of the
manifold is greater than or equal to 2.
\medskip
Let us recall some known results for the similar problem on $\R^d$,
when the Cauchy data are smooth, compactly supported, of size
$\epsilon\ll 1$. In this case, linear solutions decay in $L^\infty$
like $t^{-d/2}$ when $t\to\infty$. This allows one to get global
solutions including quasi-linear versions of (\ref{KG}), when
$d\geq2$ (see Klainerman \cite{K1} and Shatah \cite{Sh} if $d\geq3$
and Ozawa, Tsutaya and Tsutsumi \cite{OTT} if $d=2$). When $d=1$
Moriyama, Tonegawa and Tsutsumi \cite{MTT} proved that
solutions exist over intervals of time of exponential length
$e^{c/\epsilon^2}$. This result is in general optimal (see references
in \cite{D}), but global existence for small $\epsilon>0$ was proved
in \cite{D} when the nonlinearity satisfies a special condition (a ``null
condition'' in the terminology introduced by Klainerman in the case
of the wave equation in 3--space dimensions \cite{K}).
For the problem we are studying here, since we have no dispersion on a
compact manifold, we cannot hope to exploit any time decay of the
solutions of the linear equation. Instead we shall use a normal form
method. Remark that if in (\ref{KG}) the nonlinearity vanishes at
order $p\geq2$ at $v=0$, local existence theory gives a solution
defined on an interval of length $c\epsilon^{-p+1}$. Recently, in
\cite{DS1}, \cite{DS2} Delort and Szeftel proved that the solution of
the same equation exists, for almost all $m>0$, over a time interval
of length $c\epsilon^{-q+1}$, where $q$ is an explicit number strictly
larger than $p$ (typically $q=2p-1$). Actually these papers concern
more general nonlinearities than the one in \eqref{KG}, namely a
suitable class of non Hamiltonian nonlinearities depending on time and
space derivatives of $v$.
One of the ideas developed by Delort-Szeftel consists in reducing, by
normal form procedure, \eqref{KG} to a new system in which the
nonlinearity vanishes at order $q>p$ at the origin. In \cite{DS2} an
explicit computation showed that the first order normal form (which
leads to a nonlinearity of degree $q$) conserves also the $H^s$ norm
for any large $s$, whence the result cited above.
\medskip
On the other hand in \cite{BG} Bambusi and Gr{\'e}bert proved an
abstract Birkhoff normal form theorem for Hamiltonian PDEs. Although
that theorem remains valid in all dimensions, it supposes that the
nonlinearity satisfies a ``tame modulus'' property. In \cite{BG} this
property was only verified for a quite general class of $1-d$ PDEs and
for a particular NLS equation on the torus $\T^d$ with arbitrary $d$.
Actually in that paper, the tame modulus property was verified by the
use of the property of ``well localization with respect to the
exponentials'' established by Craig and Wayne \cite{CW}, a property which
has no equivalent in higher dimensions.
It turns out that in \cite{DS2} Delort and Szeftel proved an estimate
concerning multilinear forms defined on $M$ that implies a weaker form
of the tame modulus property assumed in \cite{BG}.
The present paper is the result of the combination of the arguments
of \cite{DS1}, \cite{DS2} and of \cite{BG}.
\medskip
We recall that some other partial normal form results for PDEs have been
previously obtained by Kuksin and P{\"o}schel \cite{KP96}, by Bourgain
\cite{Bo96,Bo04} and, for perturbations of completely resonant systems, by
Bambusi and Nekhoroshev \cite{BN98}. For a more precise discussion we
refer to the introduction of \cite{BG}.
\medskip
Let us conclude this introduction by mentioning several open questions.
The first concerns the possibility of proving almost global existence
for more general nonlinearities than the Hamiltonian ones we consider
here. Of course, one cannot expect to be able to do so for any
nonlinearity depending on $v$ and its first order derivatives: in
\cite{D1} an example is given on the circle ${\mathbb{S}}^1$ of a
nonlinearity for which the solution does not exist over a time
interval of length larger than the one given by local existence theory
(Remark that this example holds true for any value of $m>0$). On the
other hand, Delort and Szeftel constructed in \cite{DS3} almost global
solutions of equations of type \eqref{KG} on manifolds of revolution,
for radial data, with a nonlinearity $f$ depending on
$(v,\partial_{t}v)$ and even in $\partial_{t}v$. We thus ask the
question of finding a ``null condition" (in the spirit of Klainerman
\cite{K}) for semi-linear nonlinearities $f(v,\partial_{t}v, \nabla
v)$, which would allow almost global existence of small $H^s$
solutions for almost every $m>0$.
The second question we would like to mention concerns the
exceptional values of $m$ which are excluded of our result. The
conservation of the Hamiltonian of equation \eqref{KG} allows one to
control the $H^1$-norm of small solutions. This implies global
existence of small $H^1$ solutions in one or two space dimensions. The
results we establish in the present paper show that for almost every
$m>0$, the $H^s$-norms of these solutions remain small over long time
intervals if they are so at $t=0$. What happens when $m$ is in the
exceptional set? In \cite{Bo96b} Bourgain constructed, in one space
dimension and for a convenient perturbation of $-\Delta$, an example
of a solution whose $H^s$-norm grows with time. Nothing seems to be
known in larger dimensions. In particular, if $d\geq 3$, one does not
even know whether for all $m>0$ a solution exists almost globally,
possibly without staying small in $H^s$ ($s\gg 1$).
\section{Statement of main results}
We begin, in section \ref{subsec1.1}, with a precise statement of our
result concerning almost global existence. The Birkhoff normal form
theorem for equation \eqref{KG} that implies this almost global
existence result will be presented in section \ref{subsec1.3}, after the
introduction of the Hamiltonian formalism in section \ref{subsec1.2}.
\subsection{Almost global solution}\label{subsec1.1}
Let $(M,g)$ be a compact Riemannian manifold without boundary of
dimension $d\geq 1$. Denote by $\Delta_{g}$ its Laplace-Beltrami
operator. Let $V$ be a smooth nonnegative potential on $M$ and $m\in
(0,\infty)$. Let $f\in C^\infty(M\times \R)$ be such that $f$ vanishes at
least at order 3 at $v=0$. We consider the following Cauchy problem for
the nonlinear Klein-Gordon equation
\begin{align}
\begin{split} \label{KGCauchy}
(\partial_{t}^2-\Delta_{g}+V+m^2)v&=-\partial_{2}f(x,v)
\\
v\arrowvert_{t=0}&=\epsilon v_{0}
\\
\partial_{t}v\arrowvert_{t=0}&=\epsilon v_{1}
\end{split}
\end{align}
where $v_{0}\in H^s(M,\R)$, $v_{1}\in H^{s-1}(M,\R)$ are given real
valued data and $\epsilon>0$. We shall prove
that the above problem has almost global solutions for almost every
$m$ when $\epsilon>0$ is small enough and $s$ is large enough, under
the following geometric assumption on $M$:
\definition One says that $(M,g)$ is a Zoll manifold if and only if
the geodesic flow is periodic on the cosphere bundle of $M$.
\medskip
Our main dynamical result is the following:
\begin{theorem} \label{thm1}
Let $(M,g)$ be a Zoll manifold and let $V:M\to \R$ be a smooth
nonnegative potential. Let $r\in \Nn$ be an arbitrary integer. There
is a zero measure subset $\mathcal N$ of $(0,+\infty)$, and for any
$m\in (0,+\infty) \setminus \mathcal N$, there is $s_{0}\in\Nn$ such that
for any $s\geq s_{0}$, for any real valued $f\in C^\infty(M\times \R)$
vanishing at least at order 3 at $v=0$, there are $\epsilon_{0}>0$,
$c>0$, such that for any pair $(v_{0},v_{1})$ of real valued functions
belonging to the unit ball of $H^s(M,\R)\times H^{s-1}(M, \R)$, any
$\epsilon \in (0,\epsilon_{0})$, the Cauchy problem \eqref{KGCauchy}
has a unique solution
$$ v\in C^0((-T_{\epsilon},T_{\epsilon}),H^s(M,\R))\cap
C^1((-T_{\epsilon},T_{\epsilon}),H^{s-1}(M,\R))$$ with
$T_{\epsilon}\geq c\epsilon^{-r}$. Moreover there is $C>0$ such that,
for any $t\in (-T_{\epsilon},T_{\epsilon})$, one has
\be
\label{estimHs} \Vert{v(t,\cdot )}\Vert_{H^s}+\Vert\partial_{t}
v(t,\cdot )\Vert_{H^{s-1}}\leq C\epsilon \ .
\ee
\end{theorem}
\noindent
{\bf Comments}
The above theorem provides Sobolev bounded almost global solutions for
equation \eqref{KGCauchy} with small smooth Cauchy data on a
convenient class of compact manifolds. To our knowledge this is the
first result of this kind on compact manifolds of dimension larger than or equal
to 2. In the case of one dimensional compact manifolds, similar
statements have been obtained by Bourgain \cite{Bo96,Bo04} (with a loss in
the number of derivatives of the solution with respect to that of the
data), by Bambusi \cite{Bam03} and by Bambusi-Gr{\'e}bert
\cite{BG}. Remark that in this case, because of the conservation of
the Hamiltonian of the equation, one controls uniformly the $H^1$-norm
of small solutions, which implies global existence of such
solutions. The results of the preceding authors allow one to control
$H^s$-norms of these solutions for very long times. In the case of
compact manifolds of revolution and for convenient radial data,
Delort and Szeftel obtained in \cite{DS3} Sobolev bounded almost global
solutions (remark that this result is morally one-dimensional).
\medskip
The assumption that $M$ is a Zoll manifold will be used in the proof
through distribution properties of the eigenvalues of the Laplacian
of $M$. Actually we shall prove theorem \ref{thm1} for any
compact manifold without boundary $(M,g)$ such that if
\be \label{P}
P=\sqrt{-\Delta_{g} +V},
\ee
the spectrum $\sigma(P)$ of $P$ satisfies the following condition:
there are constants $\tau >0$, $\alpha \in \R$, $c_0 >0$, $\delta
>0$, $C_0 >0$, $D\geq 0$, and a family of disjoint compact
intervals $(K_{n})_{n\geq 1}$, with $K_{1}$ to the left of $K_{2}$
and for $n\geq 2$
\be \label{Kn}
K_{n}=\left[ \frac{2\pi}{\tau}n+\alpha -\frac{c_{0}}{n^\delta},
\frac{2\pi}{\tau}n+\alpha +\frac{c_{0}}{n^\delta}\right],
\ee
such that
\begin{align}\begin{split}\label{115}
\sigma(P)&\subset \bigcup_{n\geq 1}K_{n}\\
\#(\sigma(P)\cap K_{n})&\leq C_{0}n^D\ .
\end{split}\end{align}
If $M$ is a Zoll manifold, and if $\tau >0$ is the minimal period of
the geodesic flow on $M$, the results of Colin de Verdi{\`e}re
\cite{CV} (see also Guillemin \cite{G} and Weinstein \cite{W}) show
that the large eigenvalues of $P$ are contained inside the union of
the intervals
$$
\left[ \frac{2\pi}{\tau}n+\alpha -\frac{C}{n},
\frac{2\pi}{\tau}n+\alpha +\frac{C}{n}\right]
$$
for $n$ large enough and for some constant $C>0$. Making a translation
in $n$ and $\alpha$, and changing the definition of the constants, one
sees that this implies conditions \eqref{Kn}, \eqref{115} for any
$\delta \in (0,1)$ (remark that the second condition in \eqref{115}
holds true with $D=d-1$ because of the Weyl law).
On the other hand conditions \eqref{Kn}, \eqref{115} are not more
general than the assumption that $M$ is a Zoll manifold, since by
theorem 3.2 in Duistermaat and Guillemin~\cite{DG}, they imply
that the geodesic flow is periodic.
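\medskip
The simplest example is the standard sphere ${\mathbb{S}}^d$ with the
round metric and $V=0$: the eigenvalues of $-\Delta_g$ are
$n(n+d-1)$, $n\geq 0$, with multiplicities $O(n^{d-1})$, so that the
eigenvalues of $P$ are
$$
\sqrt{n(n+d-1)}=n+\frac{d-1}{2}+O\Bigl(\frac{1}{n}\Bigr),\quad n\to
+\infty,
$$
and conditions \eqref{Kn}, \eqref{115} hold with $\tau=2\pi$,
$\alpha=\frac{d-1}{2}$, any $\delta\in(0,1)$ and $D=d-1$.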
\subsection{Hamiltonian formalism}\label{subsec1.2}
We introduce here (see e.g. \cite{CheMa}) the Hamiltonian
formalism we shall use to solve the equation. We denote by
\begin{equation}
\label{1.2.1}
\langle f_1,f_2\rangle
\end{equation}
the bilinear pairing between complex valued distributions and test
functions on $M$. We shall use the same notation for vector valued
$f_1,f_2$.
If $F$ is a $C^\infty$ function on an open subset $\U$ of the Sobolev
space of real valued functions $\hsi$, $\sig\geq 0$, we define for
$p\in\U$, the $L^2$ gradient $\nabla F(p)$ by
\begin{equation}
\label{1.2.2}
\partial F(p)h= \langle\nabla F(p), h \rangle\ ,\quad \forall h \in \hsi,
\end{equation}
$\partial F$ denoting the differential. In that way $\nabla F(p) $ is
an element of $H^{-\sig}(M,\R)$. When we consider real valued
$C^\infty$ functions defined on an open subset of
$\hsi\times \hsi\equiv \hsi^2$, $(p,q)\mapsto F(p,q)$ we write
\begin{eqnarray*}
\partial F(p,q)&=& (\partial_p F(p,q),\partial_q F(p,q) )
\\
\nabla F(p,q)&=& (\nabla_p F(p,q),\nabla_q F(p,q) )\in
H^{-\sig}(M,\R)\times H^{-\sig}(M,\R).
\end{eqnarray*}
Endow $\hsi^2$ with the weak symplectic structure
\begin{equation}
\label{1.2.4}
\Omega\left((p,q),(p',q')\right):= \langle q,p'
\rangle - \langle q',p\rangle = \langle J^{-1}(p,q),(p',q') \rangle
\end{equation}
where $J$ is given by
\begin{equation}
\label{1.2.5}
J=
\left[
\begin{matrix}
0 & -\uno
\\
\uno &0
\end{matrix}
\right].
\end{equation}
If $\U$ is an open subset of $\hsi^2$ and $F\in C^\infty (\U,\R)$, then,
for $(p,q)\in\U$, we define its Hamiltonian vector field by
\begin{equation}
\label{1.2.6}
X_F(p,q)=J\nabla F(p,q)=(-\nabla_qF(p,q),\nabla_pF(p,q))
\end{equation}
which is characterized by
\begin{equation}
\label{1.2.7}
\Omega\left( X_F,(h_p,h_q)\right)=\partial F (h_p,h_q)= \partial_p F
h_p+\partial_q F h_q
\end{equation}
for any $(h_p,h_q)\in\hsi^2$.
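Indeed, \eqref{1.2.7} follows directly from \eqref{1.2.4} and
\eqref{1.2.6}:
$$
\Omega\left(X_F,(h_p,h_q)\right)=\langle
J^{-1}J\nabla F(p,q),(h_p,h_q)\rangle = \langle \nabla
F(p,q),(h_p,h_q)\rangle = \partial F(p,q)(h_p,h_q)\ .
$$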
A special role is played by the functions whose Hamiltonian vector
field is an $\hsi^2$ valued function. Thus we give the following
\begin{definition}
\label{d.1.2.1}
If $\U$ is an open subset of $\hsi^2$, we denote by $\csi$
(resp. $C^\infty_\sig(\U,\C)$) the space of real (resp. complex)
valued $C^\infty$ functions defined on $\U$ such that
\begin{equation}
\label{1.2.8}
X_F\in C^\infty(\U,\hsi^2 )\quad (\text{or}\quad \nabla F\in
C^\infty(\U,\hsi^2 ) ),
\end{equation}
resp.
\begin{equation}
\label{1.2.8bis}
X_F\in C^\infty(\U,\hsi^2 \otimes\C)\quad (\text{or}\quad \nabla F\in
C^\infty(\U,\hsi^2 \otimes\C) ).
\end{equation}
\end{definition}
We shall use complex coordinates in $\hsi^2$ identifying this space
with $\hsic$, through $(p,q)\mapsto u=(p+\im q)/\sqrt2$. We set
\begin{eqnarray}
\label{1.2.9}
&\partial_u=\frac{1}{\sqrt2}(\partial_p-\im\partial_q)
&\partial_{\bar u}=\frac{1}{\sqrt2}(\partial_p+\im\partial_q)
\\
\label{1.2.9.1}
&\nabla_u=\frac{1}{\sqrt2}(\nabla_p-\im\nabla_q)
&\nabla_{\bar u}=\frac{1}{\sqrt2}(\nabla_p+\im\nabla_q)
\end{eqnarray}
so that, if $F$ is a $C^1$ real valued function, we have an identification
\begin{equation}
\label{1.2.10}
X_F(u,\bar u)=\im \nabla_{\bar u}F(u,\bar u)\ .
\end{equation}
If $F\in\csi$, then clearly $X_F\in C^\infty(\U,\hsic)$.
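Let us verify the identification \eqref{1.2.10}: under $(p,q)\mapsto
u=(p+\im q)/\sqrt2$, a vector $(h_p,h_q)$ corresponds to $(h_p+\im
h_q)/\sqrt2$, so that by \eqref{1.2.6} and \eqref{1.2.9.1}
$$
X_F=(-\nabla_qF,\nabla_pF)\longmapsto
\frac{-\nabla_qF+\im\nabla_pF}{\sqrt2}
=\im\,\frac{\nabla_pF+\im\nabla_qF}{\sqrt2}
=\im\nabla_{\bar u}F(u,\bar u)\ .
$$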
For $m \in (0,+\infty)$ let us define
\begin{equation}\label{1.2.3}
\Lambda_m = \sqrt{-\Delta_g+V+m^2}.
\end{equation}
Let $\sig>(d-1)/2$. We shall write equation \eqref{KG} as a
Hamiltonian system for $p=\lm^{-1/2}\partial_t v $ and $q=\lm^{1/2}v$
on $\hsi^2$. Define
\begin{equation}
\label{1.2.11}
G_2(p,q)=\frac{1}{2}\int_M
\bigl(\bigl|\lm^{1/2}p\bigr|^2+\bigl|\lm^{1/2}q\bigr|^2\bigr) dx\
,\quad \tilde G(p,q)=\int_M f(x,\lm^{-1/2}q) dx
\end{equation}
where $dx$ is the Riemannian volume on $M$, and set
\begin{equation}
\label{1.2.12}
G=G_2+\tilde G.
\end{equation}
Then by (\ref{1.2.6})
\begin{eqnarray}
\label{1.2.13}
X_{G_2}(p,q)=(-\lm q,\lm p)\ ,\quad X_{\tilde G}(p,q)= (-\lm^{-1/2}
\partial_2f(x,\lm^{-1/2} q),0)
\end{eqnarray}
where $\partial_2f$ is the derivative with respect to the second
argument. One checks that $\tilde G\in \csi$ with $\U=\hsi^2$ (actually
$X_{\tilde G}$ takes values in $H^{\sig+1}(M,\R)^2$).
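Let us check, for instance, the first formula in \eqref{1.2.13}: since
$\lm$ is self-adjoint, one has for any $h\in\hsi$
$$
\partial_p G_2(p,q)h=\int_M \lm^{1/2}p\ \lm^{1/2}h\,dx=\langle \lm
p,h\rangle\ ,
$$
so that $\nabla_pG_2(p,q)=\lm p$ and, in the same way,
$\nabla_qG_2(p,q)=\lm q$, whence $X_{G_2}(p,q)=(-\lm q,\lm p)$ by
\eqref{1.2.6}.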
It follows also that equation (\ref{KG}) can be written as
\begin{equation}
\label{1.2.15}
(\dot p,\dot q)=X_G(p,q)
\end{equation}
or, using (\ref{1.2.10})
\begin{equation}
\label{1.2.16}
\dot u=\im \nabla_{\bar u}G(u,\bar u).
\end{equation}
In the rest of this section we shall give a few technical results that
we shall need for the proofs of theorems \ref{thm1}, \ref{thm2}.
\begin{definition}
\label{d.1.2.2}
Let $\U$ be an open subset of $\hsi^2$ and $F_j\in\csi$,
$j=1,2$. Then their Poisson bracket is defined by
\begin{equation}
\label{1.2.17}
\left\{F_1,F_2\right\}=\partial F_2
\cdot X_{F_1}=\Omega(X_{F_2},X_{F_1})
\end{equation}
and one has $\{F_1,F_2\}\in\csi $.
One extends the definition to complex valued functions by linearity of
the bracket with respect to each of its arguments.
\end{definition}
The fact that the function defined by (\ref{1.2.17}) has a smooth
vector field follows from the well known formula
\begin{equation}
\label{1.2.17.1}
X_{\{ F_1,F_2\}}=[X_{F_1},X_{F_2}]=\partial X_{F_2}\cdot X_{F_1}-\partial
X_{F_1}\cdot X_{F_2}\ ,
\end{equation}
with the square bracket denoting the Lie bracket of vector fields (for
a proof of this formula in the case of weak symplectic manifolds see
\cite{Bam93}). In case either $F_1$ or $F_2$ does not have a smooth
vector field, one can still define their Poisson bracket by formula
(\ref{1.2.17}), but one has to check that it is a well defined
function, using the fact that we may write
\begin{equation}
\begin{split}
\label{1.2.18}
\left\{F_1,F_2\right\} &= - (\partial_p F_2)(\nabla_q F_1) +
(\partial_q F_2)(\nabla_p F_1)\\ &= - \langle\nabla_p F_2,\nabla_q F_1
\rangle + \langle\nabla_q F_2,\nabla_p F_1 \rangle \\ &= \im
(\partial_u F_2)(\nabla_{\bar u} F_1) -\im (\partial_{\bar u}
F_2)(\nabla_u F_1).
\end{split}
\end{equation}
Let us also recall the rule of transformation of vector fields and
Poisson brackets under a symplectomorphism. Let $\U$ and $\V$ be open
subsets of $\hsi^2$, and $\chi:\U\to\V$ be a smooth symplectic
diffeomorphism. We have by definition for any $u \in \U$
\begin{equation}\label{1.2.19}
(\partial\chi(u))^{-1} = J{}^t(\partial\chi(u))J^{-1}.
\end{equation}
For $F\in C^{\infty}_\sig(\V,\R)$ one has
\begin{equation}
\label{1.2.20}
X_{F\circ \chi}(u)=(\partial \chi(u))^{-1}X_F(\chi(u))
\end{equation}
and therefore $F\circ\chi\in C^{\infty}_\sig(\U,\R) $ (actually
(\ref{1.2.20}) holds in the more general context where $\nabla F$ has
a domain which is left invariant by $\chi$). We also remark that for
any $C^1$ real-valued function $F_1$ on $\V$ and for any $F_2$ in
$C^{\infty}_\sig(\V,\R)$ one has
\begin{equation}
\label{1.2.21}
\left\{F_1\circ\chi,F_2\circ\chi\right\}=
\left\{F_1,F_2\right\}\circ\chi .
\end{equation}
To conclude this subsection let us state as a lemma the well known
formula that is at the root of the Birkhoff normal form method as
developed using Lie transforms.
\begin{lemma}
\label{l.1.2.3}
Let $F,G$ be two real valued functions defined on
$\U\subset\hsi^2$. Assume that $F\in C^\infty_\sig(\U,\R)$ and
$G\in C^\infty(\U,\R)$. Set
$(Ad\, F)\, h=\left\{F,h\right\}$. Then $(Ad\,F)G$ is well
defined, and if we assume that for some $n\geq 1$
\begin{equation}
\label{1.2.22}
F_n:= (Ad\,F)^n G
\end{equation}
is well defined and belongs to $C^\infty_\sig(\U,\R)$, then $F_{n+1}$
is also well defined.
Let $\V$ be such that $\overline{\V}\subset \U$. There
exists a positive $T$ such that the flow $\V\ni(p,q)\mapsto
\Phi^t(p,q)\in\U$ of $X_F$ is well defined and smooth for
$|t|<T$. Moreover, for $|t|<T$ and $(p,q)\in\V$, one has for any $r\in
\Nn$ the formula
\begin{equation}
\label{1.2.23}
G(\Phi^t(p,q))=\sum_{n=0}^{r}\frac{t^n}{n!}F_n(p,q)+\frac{1}{r!}\int
_0^t (t-s)^{r}F_{r+1}(\Phi^s(p,q))ds.
\end{equation}
\end{lemma}
\proof Remark first that $(Ad\,F)G$ is well defined by \eqref{1.2.18},
and that under our assumptions, for $n\geq 2$, $F_n$ is well defined
by definition~\ref{d.1.2.2}. Since $X_F$ is smooth on $\U$ the flow
$\Phi^t(.)$ is a smooth symplectic diffeomorphism on $\V$. For fixed
$(p,q)$ put $\phi(t)=G(\Phi^t(p,q))$. Since $\phi(t)$ is $C^\infty$,
formula (\ref{1.2.23}) will follow from the Taylor formula once the
derivatives of $\phi$ are computed. One has
$\phi'(t)=[(Ad\, F)\, G ](\Phi^t(p,q))=F_1(\Phi^t(p,q))$, and using
(\ref{1.2.22}) one proves by induction that
$\phi^{(n)}(t)=F_n(\Phi^t(p,q))$; the conclusion follows. \qed
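\medskip
For instance, for $r=1$, formula \eqref{1.2.23} reads
$$
G(\Phi^t(p,q))=G(p,q)+t\left\{F,G\right\}(p,q)+\int_0^t(t-s)
\left\{F,\left\{F,G\right\}\right\}(\Phi^s(p,q))\,ds\ ,
$$
the remainder being quadratic in $t$ and involving one more Poisson
bracket than the last term of the expansion.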
\subsection{Birkhoff Normal Form}
\label{subsec1.3}
Using the notation of section \ref{subsec1.1}, we define for $n\geq 1$
spectral projectors
\be \label{pin}
\Pi_{n}=\mathbf{1}_{K_{n}}(P)\ .
\ee
Then, for $(p,q)\in H^{s}(M,\R)^2$ we introduce the quantities
\be \label{Jn}
J_{n}(p,q)=\frac{1}{2}\left( \Vert\Pi_{n}p\Vert^2_{L^2}
+\Vert\Pi_{n}q\Vert^2_{L^2}\right)\ .
\ee
For $(p,q)\in H^{s}(M,\R)^2$ we denote
$$
\norma{(p,q)}_{s}^2:= \Vert p\Vert^2_{H^s}+\Vert q\Vert^2_{H^s}\ .
$$
We can now state our Birkhoff normal form result for the nonlinear
Klein-Gordon equation on Zoll manifolds:
\begin{theorem}\label{thm2}
Let $G$ be the Hamiltonian given by \eqref{1.2.11},
\eqref{1.2.12}. Then for any $r\geq 1$, there exists a zero
measure subset $\mathcal N$ of $(0,+\infty)$, and for any $m\in
(0,+\infty) \setminus \mathcal N$, there exists a large $s_0$ with
the following properties: For any $s\geq s_0$, there exist two
neighborhoods of the origin $\U$, $\V$, and a bijective canonical
transformation $\Tr:\V\to\U$
which puts the Hamiltonian in the form \begin{equation}
\label{for.nor} G\circ\Tr= G_{2}+\Ze+\resto \end{equation} where
$\Ze$ is a real valued continuous polynomial of degree at most
$r+2$ satisfying
\begin{equation}
\label{stime1}
\left\{J_n,\Ze \right\}=0\ ,\quad \forall n\geq 1
\end{equation}
and $\resto\in C^\infty_s(\V,\R)$ has a zero of order $r+3$ at
the origin. Precisely its vector field fulfills the estimate
\begin{equation}
\label{stime2}
\norma{X_{\resto}(p,q)}_s\leq
{C_s} \norma{(p,q)}_s^{r+2}\ ,\quad (p,q)\in\V.
\end{equation}
Finally the canonical transformation satisfies
\begin{equation}
\label{stime}
\norma{(p,q)-\Tr(p,q)}_s\leq C_s
\norma{(p,q)}_s^2\ ,\quad (p,q)\in\V.
\end{equation}
Exactly the same estimate is fulfilled on $\U$ by the inverse canonical
transformation.
\end{theorem}
From \eqref{stime} it follows that $\Tr(0)=0$ and $\partial\Tr(0)=\uno $.
Theorem \ref{thm2} implies theorem \ref{thm1} (see the proof of theorem
\ref{thm1} in section \ref{sec:proofthm1}) but it says more:
namely, the $J_{n}$ are almost conserved
quantities for the equation \eqref{KG}. More
precisely, with the notation of theorems \ref{thm1} and \ref{thm2},
for any $n\geq 1$
\be \label{estimJn} \vert J_{n}(p(t),q(t))-
J_{n}(p(0),q(0))\vert\leq \frac C{n^{2s}}\epsilon^3
\quad \mbox{ for } |t|\leq \epsilon^{-r}
\ee
where $p(t)=\lm^{-1/2}\partial_t v(t) $ and $q(t)=\lm^{1/2}v(t)$ (for
the proof see the end of section \ref{sec:proofthm1}).
Roughly speaking, the last property means that energy transfers are
allowed only between modes corresponding to frequencies in the same
spectral interval $K_{n}$.
\section{Proof of the main results}
\label{proof}
In this section we prove theorem \ref{thm2} and then deduce theorem
\ref{thm1}. The proof uses a Birkhoff procedure described in subsection
\ref{birk}.
Formally this procedure is very close to the classical Birkhoff scheme
in finite dimension. Nevertheless, in infinite dimension, we need to
define a convenient framework in order to justify the formal
constructions. This framework, first introduced in \cite{DS2}, is
presented, and adapted to our context, in the next subsection.
\subsection{Multilinear Forms}
\label{multi}
Let us introduce some notation. If $n_1,\ldots,n_{k+1}$ are in $\Nn^*$,
we denote the second and third largest elements of this family by
\begin{equation}
\begin{split}
\label{2.1.1}
\max\!{}_2(n_1,\ldots,n_{k+1})&=\max\left(\left\{n_1,\ldots,n_{k+1}\right\}
-\left\{n_{i_0}
\right\}\right)
\\
\mu (n_1,\ldots,n_{k+1})&=\max\left(\left\{n_1,\ldots,n_{k+1}\right\}
-\left\{n_{i_0},n_{i_1}
\right\}\right)
\end{split}
\end{equation}
where $i_0$ and $i_1$ are the indices such that
$$ n_{i_0}=\max\!{}(n_1,\ldots,n_{k+1})\ ,\quad
n_{i_1}=\max\!{}_2(n_1,\ldots,n_{k+1})
$$
and where by convention, when $k=1$,
$\mu (n_1,n_2)=1$.
We define then
\begin{equation}
\begin{split}
\label{2.1.2}
S(n_1,\ldots,n_{k+1})&=\sum_{\ell=1}^{k+1}[n_\ell-\sum_{j\not=\ell}n_j]_++
\mu (n_1,\ldots,n_{k+1})
\end{split}
\end{equation}
where $[a]_+=\max(a,0)$. If $n_k$ and $n_{k+1}$ are the largest two among
$n_1,\ldots,n_{k+1}$, we have
\begin{equation}
\begin{split}
\label{2.1.3}
\mu (n_1,\ldots,n_{k+1})&\sim n_1+\cdots+n_{k-1} +1
\\
S(n_1,\ldots,n_{k+1})&\sim \left|n_k-n_{k+1}\right|+n_1+\cdots+n_{k-1}+1\ .
\end{split}
\end{equation}
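As an example, take $k=2$. For $(n_1,n_2,n_3)=(2,5,7)$ one gets
$\max\!{}_2(n_1,n_2,n_3)=5$, $\mu(n_1,n_2,n_3)=2$ and, since every
bracket $[n_\ell-\sum_{j\not=\ell}n_j]_+$ vanishes,
$S(n_1,n_2,n_3)=2$, in agreement with the equivalences \eqref{2.1.3},
whose right hand sides equal $3$ and $5$ respectively. For
$(n_1,n_2,n_3)=(1,1,10)$ one gets instead
$S(n_1,n_2,n_3)=[10-2]_++1=9$.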
We shall denote by $\E$ the algebraic direct sum of the ranges of the
$\Pi_n$'s defined by (\ref{pin}).
\begin{definition}
\label{d.2.1.1}
Let $k\in\Nn^*$, $\nu\in[0,+\infty)$, $N\in\Nn$.
\begin{itemize}
\item[i)] We denote by $\EL\nu N{k+1}$ the space of $(k+1)$--linear
forms $L:\E\times \cdots\times \E\to\C$ for which there exists $C>0$ such
that for any $u_1,\ldots,u_{k+1}\in\E$, any $n_1,\ldots,n_{k+1}$ in $\Nn^*$
\begin{equation}
\label{2.1.4}
\left|L(\Pi_{n_1}u_1,\ldots,\Pi_{n_{k+1}}u_{k+1}) \right|\leq
C\frac{\mu (n_1,\ldots,n_{k+1})^{\nu+N}}
{S(n_1,\ldots,n_{k+1})^N}\prod_{j=1}^{k+1} \norma{u_j}_{L^2}.
\end{equation}
\item[ii)] We denote by $\EM\nu N{k}$ the space of $k$--linear maps
$M:\E\times \cdots \times \E\to L^2(M,\C)$ for which there exists
$C>0$ such that for any $u_1,\ldots,u_{k}\in\E$, any
$n_1,\ldots,n_{k+1}$ in $\Nn^*$
\begin{equation}
\label{2.1.5}
\left\Vert \Pi_{n_{k+1}} M(\Pi_{n_1}u_1,\ldots,\Pi_{n_{k}}u_{k})
\right\Vert_{L^2}
\leq C\frac{\mu (n_1,\ldots,n_{k+1})^{\nu+N}}
{S(n_1,\ldots,n_{k+1})^N}\prod_{j=1}^{k} \norma{u_j}_{L^2}.
\end{equation}
\end{itemize}
The best constant $C$ in (\ref{2.1.4}), (\ref{2.1.5}) defines a norm
on the above spaces. We set also $\EL\nu {+\infty}{k+1} =
\bigcap_{N\in\Nn} \EL\nu N{k+1}$.
\end{definition}
Consider $L\in \EL \nu N{k+1}$ with $N>1$ and fix an integer
$j\in\{1,\ldots,k+1\}$ and elements $u_\ell\in\E$ for
$\ell\in\left\{1,\ldots,k+1\right\} -\left\{j\right\}$. Then by
\eqref{2.1.3}, \eqref{2.1.4}
$$
\sum_{n_j}L(u_1,\ldots,u_{j-1},\Pi_{n_j}u_j,
u_{j+1},\ldots,u_{k+1})
$$
converges for any $u_j\in L^2(M,\C)$, so $u_j\mapsto
L(u_1,\ldots,u_{k+1})$ extends as a continuous linear form on
$L^2(M,\C)$. Consequently, there is a unique element
$M_{L,j}(u_1,\ldots,\widehat {u_j},\ldots,u_{k+1})$ of $L^2(M,\C)$ with
\begin{equation}
\label{2.1.6}
L(u_1,\ldots,u_{k+1})=\left\langle u_j, M_{L,j}(u_1,\ldots,\widehat
{u_j},\ldots,u_{k+1}) \right\rangle
\end{equation}
for all $u_1,\ldots, u_{k+1}\in\E$. By \eqref{2.1.4}, $M_{L,j}$ satisfies
\eqref{2.1.5}, i.e. defines an element of $\EM\nu Nk$. Conversely, if
we are given an element of $\EM\nu Nk$, we define a multilinear form
belonging to $\EL\nu N{k+1}$ by a formula of type \eqref{2.1.6}.
The
basic example satisfying definition \ref{d.2.1.1} is provided by the
following result proved in \cite{DS2} (proposition 1.2.1).
\begin{proposition}
\label{p.2.1.2}
Let $k\in \Nn^*$. Denote by $dx$ any measure on $M$ with a $C^\infty$
density with respect to the Riemannian volume. There is
$\nu\in(0,+\infty)$ such that the map
\begin{equation}
\label{2.1.7}
(u_1,\ldots,u_{k+1})\mapsto \int_M u_1\cdots u_{k+1} dx
\end{equation}
defines an element of $\EL\nu{+\infty}{k+1}$.
\end{proposition}
\begin{remark}
Up to now we have not used the spectral assumption
\eqref{115} on the manifold $M$. Actually proposition 1.2.1 of
\cite{DS2} is proved on any compact manifold without boundary,
replacing in \eqref{2.1.4} the spectral projectors $\Pi_n$ defined in
\eqref{pin} by spectral projectors $\Pi_\lambda$ associated to
arbitrary intervals of center $\lambda$ and length $O(1)$.
\end{remark}
We now use the fundamental example given by the previous proposition
to verify that the nonlinearity $\tilde G$ defined in \eqref{1.2.11}
is in a good class of Hamiltonian functions. If $L$ is a
$(k+1)$-linear map, and if $a\in \Nn$ satisfies $0\leq a\leq k+1$, we
set for $u,\bar u \in \mathcal E$
\be \label{2.1.10}
\underline L^a(u,\bar u)= L(u,\ldots,u,\bar u,\ldots,\bar u)
\ee
where in the right hand side one has $a$ times $u$ and $(k+1-a)$--times
$\bar u$. We then define the following class of Hamiltonian functions:
\begin{definition}
\label{multi.d}
For $k \in \Nn$ and $s, \nu \in\R$ with $s>\nu + \frac{3}{2}$, we define
$\mathcal{H}_{k+1}^s(\nu)$ as the
space of all real valued smooth functions defined on $H^s(M,\C)$,
$(u,\bar{u}) \to Q(u,\bar{u})$, such that there are for $\ell = 0,\ldots,k+1$
multilinear forms $L_\ell \in \mathcal{L}_{k+1}^{\nu,+\infty}$ with
\[Q(u,\bar{u}) = \sum_{\ell =0}^{k+1} \underline{L}_\ell^\ell(u,\bar{u}).\]
\end{definition}
This definition is obtained by adapting to our context the usual
definition of polynomial used for example in the theory of analytic
functions on Banach spaces (see for example \cite{Muj} or
\cite{Nik86}).
As a consequence of proposition \ref{p.2.1.2} one gets:
\begin{lemma}\label{lem:G}
Let $P$ be the Taylor polynomial of degree $k$ of $\tilde G$ at the origin.
Then there exists
$\nu\in(0,+\infty)$ such that $P$ can be decomposed as
$$P=\sum_{j=3}^{k} P_{j}$$
where $P_{j} \in \mathcal{H}_{j}^s(\nu)$.
\end{lemma}
Let us recall the main properties of $\EM\nu N{k}$ established
in proposition 2.1.3 and theorem 2.1.4 of \cite{DS2}.
\begin{proposition}
\label{p.2.1.3.bis}
\begin{itemize}
\item[i)] Let $\nu\in[0,+\infty)$, $s\in\R$, $s>\nu+3/2$, $N\in\Nn$,
$N>s+1$. Then, any element
$M\in\EM\nu N{k}$ extends as a bounded operator from
${\hcs s}^{k}$ to $\hcs s$.
Moreover, for any $s_0\in(\nu+3/2,s]$, there is $C>0$ such that for any
$u_\ell\in\hcs s$, $\ell\in\{1,\ldots,k\}$
\begin{equation}
\label{2.1.8.bis}
\norma{M(u_1,\ldots,u_{k})}_{H^s}\leq C \norma{M}_{\EM\nu
N{k}}\Big(\sum_{{1\leq \ell\leq
k}}\norma{u_\ell}_{H^s}\prod_{{\ell'\not=\ell}}
\norma{u_{\ell'}}_{H^{s_0}}\Big).
\end{equation}
\item[ii)] Let $k_1,k_2\in\Nn^*$, $\nu_1,\nu_2\in[0,+\infty)$, $1\leq
\ell\leq k_2$. For $M_1\in\EM{\nu_1}N{k_1}$, $M_2\in\EM{\nu_2}N{k_2}$
with $N> 1+\max(\nu_1,\nu_2)$, define a $(k_1+k_2-1)$--linear operator on
$\E^{k_1+k_2-1}$
$$
(u_1,\ldots,u_{k_1+k_2-1})\to M(u_1,\ldots,u_{k_1+k_2-1})
$$
by
\begin{equation}
\label{2.1.9.bis}
\begin{split}
M(u_1,\ldots,u_{k_1+k_2-1})= \makebox[7cm]{}\\
M_2(u_1,\ldots,u_{\ell-1},M_1(u_\ell,\ldots,u_{\ell+k_1-1}),
u_{\ell+k_1},\ldots,u_{k_1+k_2-1})\ .
\end{split}
\end{equation}
Then $M$ belongs to $\EM{\nu_1+\nu_2+1}{N-\max(\nu_1,\nu_2)-1}{k_1+k_2-1}$
and the map $(M_1,M_2)\mapsto M$ is bounded from
$\EM{\nu_1}N{k_1}\times \EM{\nu_2}N{k_2}$ to the preceding space.
\end{itemize}
\end{proposition}
Using the duality formula \eqref{2.1.6}, proposition \ref{p.2.1.3.bis}
immediately implies the corresponding properties for the multilinear forms
of $\EL\nu N{k+1}$.
\begin{proposition}
\label{p.2.1.3}
\begin{itemize}
\item[i)] Let $\nu\in[0,+\infty)$, $s\in\R$, $s>\nu+3/2$, $N\in\Nn$,
$N>s+1$. Then for any $j\in\{1,\ldots,k+1\}$, any multilinear form
$L\in\EL\nu N{k+1}$ extends as a continuous multilinear form
$(u_1,\ldots,u_{j},\ldots,u_{k+1})\mapsto L
(u_1,\ldots,u_{j},\ldots,u_{k+1}) $ on
$$ \hcs s\times \cdots\times \hcs s\times \hcs{-s}\times \hcs s\times
\cdots\times \hcs s.
$$
Moreover for any $s_0\in(\nu+3/2,s]$, there is $C>0$ such that for any
$u_\ell\in\hcs s$, $\ell\in\{1,\ldots,k+1\}-\{j\}$, any $u_j\in\hcs{-s}$
\begin{equation}
\label{2.1.8}
\left|L(u_1,\ldots,u_{k+1})\right|\leq C \norma{L}_{\EL\nu
N{k+1}}\norma{u_j}_{H^{-s}} \Big(\sum_{{1\leq\ell\leq k+1\atop \ell\not=j
}}\norma{u_\ell}_{H^s}\prod_{{\ell'\not=\ell\atop \ell'\not=j}}
\norma{u_{\ell'}}_{H^{s_0}}\Big).
\end{equation}
\item[ii)] Let $k_1,k_2\in\Nn^*$, $\nu_1,\nu_2\in[0,+\infty)$, $1\leq
\ell\leq k_2+1$. For $M\in\EM{\nu_1}N{k_1}$, $L\in\EL{\nu_2}N{k_2+1}$
with $N> 1+\max(\nu_1,\nu_2)$ define a $(k_1+k_2)$--linear form on
$\E^{k_1+k_2}$
$$
(u_1,\ldots,u_{k_1+k_2})\to\tilde L(u_1,\ldots,u_{k_1+k_2})
$$
by
\begin{equation}
\label{2.1.9}
\tilde L(u_1,\ldots,u_{k_1+k_2})=
L(u_1,\ldots,u_{\ell-1},M(u_\ell,\ldots,u_{\ell+k_1-1}),
u_{\ell+k_1},\ldots,u_{k_1+k_2}).
\end{equation}
Then $\tilde L\in\EL{\nu_1+\nu_2+1}{N-\max(\nu_1,\nu_2)-1}{k_1+k_2}$
and the map $(M,L)\mapsto \tilde L$ is bounded from
$\EM{\nu_1}N{k_1}\times \EL{\nu_2}N{k_2+1}$ to the preceding space.
\end{itemize}
\end{proposition}
We shall denote, for any $N,\nu$, by
\begin{equation}
\label{2.1.14}
\Sigma:\EL\nu N{k+1}\to \EM\nu N k
\end{equation}
the map given, using notation (\ref{2.1.6}), by
$\Sigma(L)=M_{L,k+1}$. This is an isomorphism.
In order to apply a Birkhoff procedure, it is necessary to verify that
our framework is stable by Poisson brackets.
\begin{proposition}\label{prop2.1.4}
Let $k_1,k_2\in\Nn^*$, $\nu_1,\nu_2\in[0,+\infty)$,
$N>\frac{5}{2}+\max(\nu_1,\nu_2)$. Let $L_1\in\EL{\nu_1}N{k_1+1}$,
$L_2\in\EL{\nu_2}N{k_2+1}$, $\ell_1\in \{0,\ldots,k_1+1\}$, $\ell_2\in
\{0,\ldots,k_2+1\}$. Then
$\bigl\{\underline{L}\null_1^{\ell_1},\underline{L}\null_2^{\ell_2}
\bigr\}$ may be written
\begin{equation}
\label{2.1.11}
\bigl\{\underline{L}\null_1^{\ell_1},\underline{L}\null_2^{\ell_2}
\bigr\}(u,\bar u) = \underline{L}\null_3^{\ell_1+\ell_2-1}(u,\bar
u)
\end{equation}
for a multilinear form $L_3\in
\EL{\nu_1+\nu_2+1}{N-\max(\nu_1,\nu_2)-1}{k_1+k_2}$.
\end{proposition}
\proof We can choose $s$ with
$N-1>s>\frac{3}{2}+\max(\nu_1,\nu_2)$. By i) of proposition
\ref{p.2.1.3}, $\underline{L}\null_i^{\ell_i}(u,\bar u)$, $i=1,2$, is then a
smooth function on $\hcs s $. Using \eqref{2.1.6} we may write for
any $h\in\E$, $i=1,2$
\begin{equation*}
\partial_u \underline{L}\null_i^{\ell_i}\cdot h
=
\sum_{j=1}^{\ell_i}L_i(u,\ldots,h,\ldots,u,\bar u,\ldots,\bar u)
=\sum_{j=1}^{\ell_i} \bigl\langle h,
\underline{M}_{L_i,j}^{\ell_i-1}(u,\bar u)\bigr\rangle
\end{equation*}
where in the first sum $h$ stands at the $j$-th place. We have a
similar formula for $\partial_{\bar u} \underline{L}\null_i^{\ell_i}\cdot
h$. In other words, we may write
\begin{equation}
\begin{split}
\label{2.1.12}
\nabla_u \underline{L}\null_i^{\ell_i}(u,\bar u)&= \sum_{j=1}^{\ell_i}
\underline{M}_{L_i,j}^{\ell_i-1}(u,\bar u)
\\
\nabla_{\bar u} \underline{L}\null_i^{\ell_i}(u,\bar u)&=
\sum_{j=\ell_i+1}^{k_i+1}
\underline{M}_{L_i,j}^{\ell_i}(u,\bar u).
\end{split}
\end{equation}
By i) of proposition \ref{p.2.1.3.bis} these quantities are smooth
functions of $u$ with values in $\hcs s $,
i.e. $\underline{L}\null_i^{\ell_i}\in C_{s}^\infty(\hcs s,\C)$. We may thus
apply definition \ref{d.1.2.2} and \eqref{1.2.18} to write
\begin{equation}
\begin{split}
\label{2.1.12'}
\bigl\{\underline{L}\null_1^{\ell_1},\underline{L}\null_2^{\ell_2}
\bigr\}(u,\bar u) = \im &\Big[
\sum_{j_2=1}^{\ell_2}\sum_{j_1=\ell_1+1}^{k_1+1}
L_2(u,\ldots,\underline{M}_{L_1,j_1}^{\ell_1}(u,\bar u),\ldots, u,\bar
u,\ldots,\bar u ) \\ &-
\sum_{j_2=\ell_2+1}^{k_2+1}\sum_{j_1=1}^{\ell_1} L_2(u,\ldots,u,\bar
u,\ldots,\underline{M}_{L_1,j_1}^{\ell_1-1}(u,\bar u),\ldots,\bar u )
\Big]
\end{split}
\end{equation}
where the $M$--term in the argument of $L_2$ stands at the $j_2$-th
place. Since $M_{L_1,j_1}$ belongs to $\EM{\nu_1}{N}{k_1}$ we just
have to apply (ii) of proposition \ref{p.2.1.3} to write this last
expression in terms of a new multilinear form $L_3$.
\qed
\medskip
In order to prove our main theorem we have to decompose the
multilinear forms of $\EL\nu N{k+1}$ in the sum of a resonant
and of a non-resonant part.
\begin{definition}
\label{d.2.1.4}
(Non-resonant multilinear form) Fix $k\in \Nn$ and let $1\leq \ell\leq
k+1$ be a fixed integer.
\begin{itemize}
\item If $2\ell\not =k+1$ we set $ \ELt\nu N{k+1}= \EL\nu N{k+1}$,
$\EMt\nu N{k}= \EM\nu N{k}$ .
\item If $2\ell=k+1$ we define $\ELt\nu N{k+1}$ (resp. $\EMt\nu N{k}$)
as the subspace of those $L\in\EL\nu N{k+1} $ (resp. $M\in\EM\nu
N{k}$) such that respectively
\begin{equation}
\label{2.1.13}
L(\Pi_{n_1}u_1,\ldots,\Pi_{n_{k+1}} u_{k+1})\equiv 0\ ,\quad
\Pi_{n_{k+1}}M(\Pi_{n_1}u_1,\ldots,\Pi_{n_{k}} u_{k})\equiv 0
\end{equation}
for any $u_1,\ldots,u_{k+1}\in\E$ and any
$(n_1,\ldots,n_{k+1})\in(\Nn^*)^{k+1}$ such that
$$
\left\{n_1,\ldots,n_{\ell}\right\}=\left\{n_{\ell+1},\ldots,n_{k+1}\right\}.
$$
\end{itemize}
\end{definition}
Remark that the map $\Sigma$ given by (\ref{2.1.14}) induces an isomorphism
between $\ELt\nu N{k+1}$ and $\EMt \nu N{k}$.
\begin{definition}
\label{d.2.1.4.1} (Resonant multilinear form)
Fix $k\in \Nn$ and let $1\leq\ell\leq k+1$. We define the space of
$\ell$--resonant multilinear forms
$\widehat{\mathcal{L}}_{k+1,\ell}^{\nu,N}$ as the subspace of those
$L\in\EL \nu N{k+1}$ verifying
\begin{equation}
\label{2.1.13.1}
L(\Pi_{n_1}u_1,\ldots,\Pi_{n_{k+1}} u_{k+1})\equiv 0\ ,
\end{equation}
for any $u_1,\ldots,u_{k+1}\in\E $ and any
$(n_1,\ldots,n_{k+1})\in(\Nn^*)^{k+1}$ such that
$$ \left\{n_1,\ldots,n_{\ell}\right\}\not=\left\{n_{\ell+1},\ldots,
n_{k+1}\right\}.
$$
\end{definition}
Remark that $\widehat{\mathcal{L}}_{k+1,\ell}^{\nu,N} = 0$ if $k$ is even,
or if $k$ is odd and $\ell \neq \frac{k+1}{2}$. If $k$ is odd and
$\ell = \frac{k+1}{2}$, one gets a direct sum decomposition
\begin{equation}
\label{decomp}
\EL \nu N{k+1} = \widehat{\mathcal{L}}_{k+1,\ell}^{\nu,N} \oplus
\tilde{\mathcal{L}}_{k+1,\ell}^{\nu,N}.
\end{equation}
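For instance, when $k=1$ and $\ell=1$, the decomposition \eqref{decomp}
of $L\in\EL\nu N2$ is given by
$$
\widehat L(u_1,u_2)=\sum_{n\geq 1}L(\Pi_nu_1,\Pi_nu_2)\ ,\quad \tilde
L=L-\widehat L\ ,
$$
so that the resonant part $\widehat L$ retains only the diagonal
interactions $n_1=n_2$, on which the non-resonant part $\tilde L$
vanishes.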
The main feature of the above definitions is captured by the following
proposition:
\begin{proposition}
\label{p.jein}
Assume that $L\in\EL\nu N{k+1} $ is $\ell$-resonant. Then for any
$a\in\Nn$, $a\geq 1$ one has
$$
\big\{ \underline L^\ell,J_a \big\}\equiv 0.$$
\end{proposition}
\proof Remark first that one has $2\ell=k+1$ and that $J_a(u,\bar
u)=\langle \Pi_a u,\Pi_a\bar u\rangle$, from which, using
\eqref{2.1.12'}, one gets
$$
\big\{ \underline L^\ell,J_a \big\}=-\im\underline{\tilde L}^\ell
$$
with
\begin{displaymath}
\begin{array}{l}
\displaystyle\tilde L(u_1,\ldots,u_{k+1})=\\
\displaystyle\Big[\sum_{j=1}^{\ell}L(u_1,\ldots,{\Pi_a} u_j
,\ldots,u_{k+1}) -\sum_{j=\ell+1}^{k+1}
L(u_1,\ldots,{\Pi_a} u_j,\ldots,u_{k+1}) \Big].
\end{array}
\end{displaymath}
Then the above expression is equal to
\begin{equation*}
\begin{split}
\sum_{{n_1,\ldots,n_{k+1}}}\Big[ \sum_{j=1}^{\ell}
L(\Pi_{n_1}u_1,\ldots,\Pi_a\Pi_{n_j}u_j,\ldots,\Pi_{n_{k+1}}u_{k+1})
\makebox[2.5cm]{} \\ -\sum_{j=\ell+1}^{2\ell}
L(\Pi_{n_1}u_1,\ldots,\Pi_a\Pi_{n_j}u_j,\ldots,\Pi_{n_{k+1}}u_{k+1})\Big]
\\ =\sum_{n_1,\ldots,n_{k+1}} \Big[\sum_{j=1}^{\ell} \delta_{n_j,a}-
\sum_{j=\ell+1}^{2\ell}\delta_{n_j,a}\Big] L(\Pi_{n_1}u_1,\dots,
\Pi_{n_{2\ell}}u_{2\ell}).
\end{split}
\end{equation*}
Since $L$ is $\ell$-resonant, the only nonvanishing terms in the sum are those with
$$
\left\{n_1,\ldots,n_{\ell}\right\} =\left\{n_{\ell+1},
\ldots,n_{2\ell}\right\} ,
$$
and for such indices the quantity $\sum_{j=1}^{\ell}
\delta_{n_j,a}- \sum_{j=\ell+1}^{2\ell}\delta_{n_j,a}$ vanishes.\qed
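The cancellation used at the end of the proof simply says that if the two halves of $(n_1,\ldots,n_{2\ell})$ coincide as multisets, then any value $a$ occurs the same number of times in each half. A quick Python check (illustrative only, hypothetical names):

```python
import random

def delta_sum(indices, ell, a):
    """Computes sum_{j<=ell} delta_{n_j,a} - sum_{j>ell} delta_{n_j,a}."""
    return sum(n == a for n in indices[:ell]) - sum(n == a for n in indices[ell:])

def random_resonant_indices(ell, pool=tuple(range(1, 6))):
    """Random (n_1,...,n_{2*ell}) whose two halves coincide as multisets."""
    half = [random.choice(pool) for _ in range(ell)]
    tail = half[:]
    random.shuffle(tail)
    return tuple(half + tail)
```

For every such resonant index tuple, `delta_sum` vanishes for all values of $a$, which is exactly the cancellation in the proof.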
\begin{definition}
\label{2.1.d}
For given integers $\ell, k$ satisfying $0\leq \ell\leq k+1$,
we define an operator $\psi_\ell$ acting on $\EL\nu
N{k+1}$ by
\begin{equation}
\begin{split}
\label{2.1.10.a}
\psi_\ell(L)(u_1,\ldots,u_{k+1}) \makebox[9cm]{}
\\
=
\Bigl[\sum_{j=1}^{\ell}L(u_1,\ldots,{\Lambda_m u}_j
,\ldots,u_{k+1}) -\sum_{j=\ell+1}^{k+1}L(u_1,\ldots,{\Lambda_m
u}_j,\ldots,u_{k+1}) \Bigr].
\end{split}
\end{equation}
\end{definition}
Remark that writing $G_{2}(u,\bar u) = \langle \Lambda_{m}u, \bar
u\rangle$, and using \eqref{1.2.18} one gets
\be \label{Gpsi}
\big\{ \underline L^\ell,G_2\big\}(u,\bar u)=
-\im \psi_\ell(L)(u,\ldots,u,\bar u,\ldots,\bar u)
\ee
where in the right hand side one has $\ell$ times $u$ and $k+1-\ell$
times $\bar u$.
\begin{proposition}
\label{prop3.5}
There is a zero measure subset $\N$ of $(0,+\infty)$ such that for any
$k\in\Nn^*$, any $m\in (0,+\infty)-\N$, any $0\leq \ell\leq k+1$, there is
a $\bar \nu\in\R_+$, and for any $(\nu,N)\in\R_+\times \Nn, N>2$, there is
an operator
\begin{equation}
\label{2.1.18}
\psi_\ell^{-1}:\ELt\nu N{k+1}\to \ELt{\nu+\bar\nu} N{k+1}
\end{equation}
such that for any $L\in \ELt\nu N{k+1}$, $\psi_\ell(\psi_\ell^{-1}(L))=L$.
Moreover there exists $C>0$ such that
\begin{equation}
\label{2.1.18.1}
\norma{\psi_\ell^{-1}(L)}_{\EL{\nu+\bar\nu}N{k+1}} \leq C
\norma{L}_{\EL{\nu}N{k+1}}.
\end{equation}
\end{proposition}
\proof We reduce the proof to proposition 2.2.4 of
\cite{DS2}. Let
$\rho:\left\{1,\ldots,k+1\right\}\to\left\{-1,1\right\}$ be the map given
by $\rho(j)=1$ if $j=1,\ldots,\ell$ and $\rho(j)=-1$ if
$j=\ell+1,\ldots,k+1$, and for $M\in\EM \nu N k$ define
\begin{equation}
\label{2.1.18.b}
\begin{array}{l}
\displaystyle\tilde\psi_\ell(M)(u_1,\ldots,u_{k})=\\
\displaystyle\sum_{j=1}^{k}\rho(j)
M(u_1,\ldots,\Lambda_mu_j,\ldots,u_{k})+\rho(k+1)\Lambda_mM
(u_1,\ldots,u_{k}).
\end{array}
\end{equation}
One has, if $\Sigma$ is the map defined in \eqref{2.1.14},
\begin{equation}
\label{2.1.18.a}
\Sigma^{-1}\circ\tilde\psi_\ell(M)=\psi_\ell\circ\Sigma^{-1}(M)
\end{equation}
for any $M \in \mathcal{M}^{\nu,N}_k$ such that $\tilde\psi_\ell(M)$
belongs to $\mathcal{M}^{\nu',N}_k$ for some $\nu'\geq 0$.
By proposition 2.2.4 in \cite{DS2}, there are $\bar\nu\in\R_+$ and an
operator $\tilde \psi_\ell^{-1}:\EMt\nu Nk\to \EMt{\nu+\bar\nu} Nk$
such that for any $M\in \EMt\nu Nk$,
$\tilde\psi_\ell(\tilde\psi_\ell^{-1}(M))=M$ and such that the
analogue for $M$ of the estimate (\ref{2.1.18.1}) holds true. We
just set $\psi_\ell^{-1}=\Sigma^{-1}\circ \tilde \psi_\ell^{-1}
\circ\Sigma$, and the conclusion follows from equation
(\ref{2.1.18.a}). \qed
\medskip
The construction of the operator $\tilde \psi_\ell^{-1}$ in \cite{DS2}
relies in an essential way on the spectral assumptions (\ref{Kn}) and
(\ref{115}), i.e.\ on the fact that $M$ is a Zoll manifold. For the
reader's convenience, we give a direct proof of
proposition \ref{prop3.5} in the case where $M={\mathbb{S}}^d$ and $V=0$.
In this case, the eigenvalues $\lambda_n$ of $P$ and
$\omega_n$ of $\Lambda_m$ are respectively given by
\begin{equation}
\label{A.3}
\lambda_n=\sqrt{n(n+d-1)}\ ,\quad \omega_n=\sqrt{\lambda_n^2+m^2}\ ,
\end{equation}
and moreover $P\Pi_n=\lambda_n\Pi_n$,
$\Lambda_m\Pi_n=\omega_n\Pi_n$. Thus, from equation (\ref{2.1.10.a}) one
has
\begin{equation}
\begin{split}
\label{A.1}
&\psi_\ell(L)(u_1,\ldots,u_{k+1}) \\ &= \sum_{n_1,\ldots,n_{k+1}}
(\omega_{n_1}+\cdots+\omega_{n_\ell}-\omega_{n_{\ell+1}}-\cdots
-\omega_{n_{k+1}} ) L(\Pi_{n_1}u_1,\ldots,\Pi_{n_{k+1}}u_{k+1}).
\end{split}
\end{equation}
Remark also that, if $L\in\ELt \nu N{k+1}$, then the sum is restricted
to those $(n_1,\ldots,n_{k+1})$ such that
$$ \left\{n_1,\ldots,n_{\ell}\right\}\not=
\left\{n_{\ell+1},\ldots,n_{k+1}\right\}\ .
$$
The following proposition was proved in \cite{DS1} (see Proposition
4.8) and is also a minor variant of theorem 3.12 of \cite{BG}.
\begin{proposition}
\label{A.4}
There is a zero measure subset $\N$ of $(0,+\infty)$ such that for any
$m\in (0,+\infty)-\N$ and any $k\in\Nn^*$, there are $ c>0$ and
$\bar \nu\in\R_+$ such that for any $0\leq \ell\leq k+1$, one has
\begin{equation}
\label{A.2}
\left|\omega_{n_1}+\cdots+\omega_{n_\ell}-\omega_{n_{\ell+1}}-\cdots
-\omega_{n_{k+1}} \right|\geq c\mu (n_1,\ldots,n_{k+1})^{-\bar\nu}
\end{equation}
for any choice of $(n_1,\ldots,n_{k+1})$ such that
$$ \left\{n_1,\ldots,n_{\ell}\right\}\not=
\left\{n_{\ell+1},\ldots,n_{k+1}\right\}\ .
$$
\end{proposition}
It is now immediate to obtain the
\noindent
{\bf Proof of Proposition \ref{prop3.5} in the case $M={\mathbb{S}}^d$,
$V\equiv 0$.} Given $L\in\ELt \nu N{k+1}$ define
\begin{equation}
\label{A.12}
\tilde L(u_1,\ldots,u_{k+1}) = \sum_{n_1,\ldots,n_{k+1}}
\frac{L(\Pi_{n_1}u_1,\ldots,\Pi_{n_{k+1}}u_{k+1})}{
(\omega_{n_1}+\cdots+\omega_{n_\ell}-\omega_{n_{\ell+1}}-\cdots
-\omega_{n_{k+1}} ) }.
\end{equation}
Then by (\ref{A.2}) one has $\tilde L\in\ELt {\nu+\bar\nu} N{k+1}$,
and by (\ref{A.12}) $\psi_\ell(\tilde L)=L$; finally also the estimate
(\ref{2.1.18.1}) immediately follows. On a general Zoll manifold, the
construction of the map $L \to \tilde{L}$ is made in~\cite{DS1} through an
approximation argument and a suitable use of Neumann series. \qed
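On the sphere, the inverse constructed above is a coefficientwise division by the small divisor bounded below in \eqref{A.2}. The following Python sketch (hypothetical names; $d$ and $m$ chosen for illustration, with the divisors nonzero for the indices tested) shows that this division inverts $\psi_\ell$ on non-resonant coefficients:

```python
import math

def omega(n, d=2, m=1.0):
    """Eigenvalues on S^d with V = 0: lambda_n^2 = n(n+d-1), omega_n = sqrt(lambda_n^2 + m^2)."""
    return math.sqrt(n * (n + d - 1) + m * m)

def divisor(idx, ell, d=2, m=1.0):
    """Small divisor omega_{n_1}+...+omega_{n_ell} - omega_{n_{ell+1}} - ... - omega_{n_{k+1}}."""
    return (sum(omega(n, d, m) for n in idx[:ell])
            - sum(omega(n, d, m) for n in idx[ell:]))

def psi(coeffs, ell, d=2, m=1.0):
    """Coefficientwise action of psi_ell, cf. (A.1)."""
    return {idx: divisor(idx, ell, d, m) * c for idx, c in coeffs.items()}

def psi_inv(coeffs, ell, d=2, m=1.0):
    """Coefficientwise division, cf. (A.12); only applied to non-resonant indices."""
    return {idx: c / divisor(idx, ell, d, m) for idx, c in coeffs.items()}
```

Composing the two maps recovers the original coefficients, which is the identity $\psi_\ell(\psi_\ell^{-1}(L))=L$ of proposition \ref{prop3.5} in this toy model.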
\medskip
Finally we end this subsection with two lemmas that will be useful to
verify that certain Hamiltonian functions are real valued.
\begin{lemma}
\label{lemma3.13}
Assume $m\in (0,+\infty)-\N$ and let $L\in \ELt\nu N{k+1}$.\\ i)
Assume that for any $u\in\E$, $\psi_\ell(L)(u,\ldots,u,\bar
u,\ldots,\bar u)=0$ (where one has $\ell$ times $u$ and $k+1-\ell$
times $\bar u$). Then $\underline L^\ell(u,\bar u)=0$.\\ ii) Assume
$\bigl\{\Im \underline{L}^\ell,G_2\bigr\} \equiv 0$. Then
$\Im\underline{L}^\ell(u,\bar{u}) \equiv 0$.
\end{lemma}
\proof i) Let $\mathfrak{S}_{\ell,k}$ be the product of the group of
permutations of $\{1,\ldots,\ell\}$ by the group of permutations of
$\{\ell+1,\ldots,k+1\}$. For
$(\sigma,\sigma')\in\mathfrak{S}_{\ell,k}$ define
\[((\sigma,\sigma')\cdot L)(u_1,\ldots,u_{k+1})=L(u_{\sigma(1)},
\ldots,u_{\sigma(\ell)},u_{\sigma'(\ell+1)},
\ldots,u_{\sigma'(k+1)}).\]
Replacing $L$ by
\[\frac{1}{\ell!(k+1-\ell)!} \sum_{a\in\mathfrak{S}_{\ell,k}}(a\cdot L)
(u_1,\ldots,u_{k+1})\]
affects neither the hypotheses nor the conclusion (since $\psi_\ell$
commutes with the $\mathfrak{S}_{\ell,k}$-action), so we can assume that
$L$ -- and thus $\psi_\ell(L)$ -- is
$\mathfrak{S}_{\ell,k}$-invariant. Write the assumption
$\psi_\ell(L)(u,\ldots,u,\bar u,\ldots,\bar u)=0$ with
\[u=u_1+\cdots+u_\ell+\overline{u_{\ell+1}}+\cdots+\overline{u_{k+1}}\]
for arbitrary $u_j$'s belonging
to $\mathcal{E}$. If one expands this expression by multilinearity,
sorts the different contributions according to their homogeneity
degree in $u_j,\bar{u}_j$, and uses the
$\mathfrak{S}_{\ell,k}$-invariance, one gets
\begin{equation}\label{inj1}
\psi_\ell(L)(u_1,\ldots,u_{k+1})=0
\end{equation}
for any $u_1,\ldots,u_{k+1}$ in $\mathcal{E}$. Take a family of
positive integers $(n_1,\ldots,n_{k+1})$ such that
$\{n_1,\ldots,n_\ell\}\neq\{n_{\ell+1},\ldots,n_{k+1}\}$ if
$2\ell=k+1$. We apply \eqref{inj1}, taking for each $u_j$ an
eigenfunction associated to an eigenvalue $\lambda_{n_j}\in K_{n_j}$,
$j=1,\ldots,k+1$, so that $\Lambda_mu_j=\omega_{n_j}u_j$,
$\omega_{n_j}= \sqrt{m^2+\lambda_{n_j}^2}$.
By \eqref{2.1.10.a} we obtain
\[\Big(\sum_{j=1}^\ell
\omega_{n_j}-
\sum_{j=\ell+1}^{k+1}\omega_{n_j}\Big)L(u_1,\ldots,
u_{k+1})=0.\] By proposition 2.2.1 and formula (2.2.3) of \cite{DS2}
(see also proposition \ref{A.4} of the present paper in the case of
the sphere), the first factor is nonzero for $m\in (0,+\infty)-\N$, so
$L(u_1,\ldots,u_{k+1})=0$ for any family $(u_1,\ldots,u_{k+1})$ of the
preceding form. The definition of $\ELt\nu N{k+1}$ implies that
$\underline L^\ell(u,\bar u)=0$.
ii) When $\ell \neq \frac{k+1}{2}$, we may write
$\Im\underline{L}^\ell(u,\bar{u}) =
\underline{\Gamma}_1^\ell(u,\bar{u}) +
\underline{\Gamma}_2^{k+1-\ell}(u,\bar{u})$ for $\Gamma_1 \in \ELt\nu
N{k+1}$, $\Gamma_2 \in \tilde{\mathcal{L}}^{\nu,N}_{k+1,k+1-\ell}$. By
homogeneity, $\bigl\{G_2,\underline{\Gamma}_1^\ell +
\underline{\Gamma}_2^{k+1-\ell}\bigr\} \equiv 0$ implies that
$\bigl\{G_2,\underline{\Gamma}_1^\ell\bigr\} =$
\mbox{$\bigl\{G_2,\underline{\Gamma}_2^{k+1-\ell}\bigr\} \equiv 0$},
whence $\underline{\Gamma}_1^\ell = \underline{\Gamma}_2^{k+1-\ell}
=0$ by \eqref{Gpsi} and assertion i). If $\ell = \frac{k+1}{2}$, we
have $\Im\underline{L}^\ell(u,\bar{u}) =
$\underline{\Gamma}^\ell(u,\bar{u})$ for a $\Gamma \in
\widetilde{\mathcal{L}}^{\nu,N}_{k+1,\ell}$, and the result follows again
from i). \qed
\begin{lemma}
\label{lemma3.13bis}
Assume $m\in (0,+\infty)-\N$ and $k$ odd. Set $\ell = \frac{k+1}{2}$
and consider $L_1\in \ELt\nu N{k+1}$ and $L_2 \in
\widehat{\mathcal{L}}_{k+1,\ell}^{\nu,N}$. Set $L = L_1+L_2$ and
assume that for any $u\in\E$, $\underline L^\ell(u,\bar u)$ is real
valued. Then $\underline {L_1}^\ell(u,\bar u)$ and $\underline
{L_2}^\ell(u,\bar u)$ are real valued.
\end{lemma}
\proof Since $\underline L^\ell(u,\bar u)$ is real valued,
$\{\Im\underline L^\ell,G_2\}(u,\bar u)=0$. As \[\{\underline
L^\ell,G_2\}(u,\bar u) =\{\underline {L_1}^\ell,G_2\}(u,\bar u)\] by
proposition \ref{p.jein}, this yields $\{\Im\underline
{L_1}^\ell,G_2\}(u,\bar u)=0$. Now, ii) of lemma \ref{lemma3.13}
implies $\Im\underline {L_1}^\ell(u,\bar u)=0$. Therefore,
$\underline{L_1}^\ell(u,\bar u)$ and $\underline {L_2}^\ell(u,\bar u)$
are real valued. \qed
\subsection{Proof of theorem \ref{thm2}.}\label{birk}
We use a Birkhoff scheme to put the Hamiltonian system with the
Hamiltonian $G$ of \eqref{1.2.12} in normal form. Having fixed some
$r_0\geq 1$, the idea is to construct iteratively for $r =
0,\ldots,r_0$, $\U_r$ a neighborhood of $0$ in $H^s(M,\C)$ for $s\gg
1$, a canonical transformation $\Tr_r$, defined on $\U_r$, an
increasing sequence $(\nu_r)_{r=1,\ldots,r_0}$ of positive numbers,
and functions $\Ze^{(r)}, P^{(r)}, \resto^{(r)}$ such that
\begin{equation}
\label{b.1}
G^{(r)}:=G\circ\Tr_r=G_2+\Ze^{(r)}+P^{(r)}+\resto^{(r)}.
\end{equation}
Moreover, these functions will decompose as
\begin{eqnarray}
\label{b.2}
\Ze^{(r)}&=&\sum_{j=1}^r \Ye_j
\\
\label{b.3}
P^{(r)}&=&\sum_{j=r+1}^{r_0} Q^{(r)}_j
\end{eqnarray}
where $\Ye_j$ is in $\mathcal{H}_{j+2}^s(\nu_j)$ and Poisson commutes
with $J_n$ for any $n$, $Q^{(r)}_j$ is in
$\mathcal{H}_{j+2}^s(\nu_r)$, by convention $P^{(r_{0})}=0$, and
$\resto^{(r)}\in C_s^\infty(\U_r,\R)$ has a zero of order $r_0+3$ at the
origin.
First remark that the Hamiltonian (\ref{1.2.12}) has the form
(\ref{b.1}), (\ref{b.2}), (\ref{b.3}) with $r=0$ and $\Tr_r=I$,
$P^{(0)}$ being the Taylor polynomial of $\tilde{G}$ up to degree
$r_{0}$ (see lemma \ref{lem:G}). We
show now how to pass from $r$ to $r+1$ provided one is able to solve
the homological equation below.
\begin{lemma}
\label{lemma3.14}
Assume we are given $0<\nu_r$ and functions $\Ze^{(r)}, P^{(r)},
\resto^{(r)}$ satisfying the above conditions. Assume that there are
$\nu'_r>\nu_r$ and a function $F^{(r+1)}$ of $(u,\bar u)$
with the properties that
\begin{eqnarray}
\label{b.4}
F^{(r+1)}&\in& \mathcal{H}_{r+3}^s(\nu'_r)
\\
\label{b.5}
\{F^{(r+1)},G_2\}&\in& \mathcal{H}_{r+3}^s(\nu'_r).
\end{eqnarray}
Assume moreover one is able to choose $F^{(r+1)}$ with the further
property that $\Ye_{r+1}$ defined by
\begin{equation}
\label{b.10}
\Ye_{r+1}=\bigl\{F^{(r+1)},G_2\bigr\}+Q^{(r)}_{r+1}
\end{equation}
Poisson commutes with $J_n$ for any $n$. Denote by $\Phi^t_{r+1}$ the
flow generated by $X_{F^{(r+1)}}$. Then, there are $\nu_{r+1}>\nu'_r$
and, for large enough $s$, a sufficiently small neighborhood
$\U_{r+1}$ of the origin of $H^s(M,\C)$, such that $G^{(r+1)}=
G^{(r)}\circ\Phi^1_{r+1}$ has the same structure as $G^{(r)}$ but with
$r$ replaced by $r+1$ and $\U_r$ replaced by $\U_{r+1}$.
\end{lemma}
\proof If
$\U_{r+1}$ is a sufficiently small neighborhood of the origin of
$H^s(M,\C)$, then $\Phi_{r+1}^1:\U_{r+1}\to\U_{r}$ is well defined.
We decompose $G^{(r)}\circ \Phi^1_{r+1}$ as follows
\begin{eqnarray}
\label{b.6}
G^{(r)}\circ \Phi^1_{r+1}&=& G_2+\bigl\{F^{(r+1)},G_2\bigr\}+Q^{(r)}_{r+1}+
\Ze^{(r)} + \resto^{(r)}\circ\Phi^1_{r+1}
\\
\label{b.7}
&+& P^{(r)}\circ\Phi^1_{r+1}-Q^{(r)}_{r+1}
\\
\label{b.8}
&+& \Ze^{(r)}\circ\Phi^1_{r+1}-\Ze^{(r)}
\\
\label{b.9}
&+& G_2\circ\Phi^1_{r+1}-G_2-\bigl\{F^{(r+1)},G_2\bigr\}.
\end{eqnarray}
Using the fact that $\Ye_{r+1}$ Poisson commutes with $J_n$ for any
$n$ and belongs to $\mathcal{H}_{r+3}^s(\nu'_r) \subset
\mathcal{H}_{r+3}^s(\nu_{r+1})$ by \eqref{b.5}, we may define
$\Ze^{(r+1)}:=\Ze^{(r)}+\Ye_{r+1}$.
If $s$ is large enough, \eqref{b.4} implies that $F^{(r+1)} \in
C^\infty_s(\U_r,\R)$, and we may apply lemma \ref{l.1.2.3} with
$F=F^{(r+1)}$ to $P^{(r)}\circ\Phi^1_{r+1}$ and
$\Ze^{(r)}\circ\Phi^1_{r+1}$. Using proposition~\ref{prop2.1.4} to
write the iterated Poisson brackets of the right hand side
of~\eqref{1.2.23} in terms of multilinear forms, we thus see that
\eqref{b.7}, \eqref{b.8} may be decomposed in a sum of elements of
$\mathcal{H}^s_{j+2}(\nu_{r+1})$ for $s, \nu_{r+1}$ large enough and
$j= r+2,\ldots,r_0$. Consequently these two terms will contribute to
$P^{(r+1)}, \resto^{(r+1)}$ in \eqref{b.1} written with $r$ replaced
by $r+1$. In the same way, lemma~\ref{l.1.2.3} applied to
$G_2\circ\Phi_{r+1}^1$ shows that, for large enough $s$ and
$\nu_{r+1}$, \eqref{b.9} gives a contribution to $P^{(r+1)}+
\resto^{(r+1)}$ in \eqref{b.1} at step $r+1$. The conclusion follows.
\qed
\medskip
Let us remark that the above lemma implies theorem \ref{thm2}. Actually, if
we are able to apply lemma \ref{lemma3.14} up to step $r_0 -1$, we get \eqref{b.1}
with $r=r_0$, which is the conclusion of the theorem. Our remaining
task is thus to solve the homological equation \eqref{b.10}. This will be
achieved in the following lemma.
\begin{lemma}
\label{hom.true}
Let $2\leq r\leq r_0+2$, $\nu\in \R_+^*$, and assume $m\in
(0,+\infty)-\N$. For any $Q\in \Hs{r+1}(\nu)$ there are $\nu'>\nu$,
$F\in\Hs{r+1}(\nu')$ and $\Ye\in\Hs{r+1} (\nu)$, where $\Ye$
Poisson commutes with $J_n$ for any $n\geq1$, such that
\begin{equation}
\label{hom.1}
\left\{F,G_2\right\}+Q=\Ye\, .
\end{equation}
As a consequence one also has $\{F,G_2\}\in\Hs{r+1}(\nu)$.
\end{lemma}
\proof If $r+1$ is odd then we define $\Ye=0$. As $Q$ is
in $\mathcal{H}_{r+1}^s(\nu)$, it decomposes in the form
\begin{equation}
\label{F.11}
Q=\sum_{\ell=0}^{r+1} \underline{L_{\ell}}\null^\ell
\end{equation}
where $L_{\ell}$ are multilinear forms in $\EL{\nu} {+\infty}{r+1}$.
We remark that, since $r+1$ is odd, the $L_{\ell}$ are all
non-resonant, i.e. $L_{\ell} \in \ELt{\nu} {+\infty}{r+1}$. Therefore
by proposition \ref{prop3.5}, we can define
$F_{\ell}\in \ELt{\nu+\bar\nu} {+\infty}{r+1}$ by
\be\label{F1}
F_{\ell}=-\im \psi_{\ell}^{-1}(L_{\ell})
\ee
and in view of \eqref{Gpsi}, the Hamiltonian function
\be\label{F2}
F=\sum_{\ell=0}^{r+1} \underline{F_{\ell}}\null^\ell
\ee
satisfies the homological equation \eqref{hom.1}.
If $r+1$ is even, set $\tilde{L}_{\ell} = L_{\ell}
$ if $\ell \neq \frac{r+1}{2}$. When $\ell=\frac{r+1}{2}$, write
\[ L_{\frac{r+1}{2}} = Y +
\tilde{L}_{\frac{r+1}{2}} \in
\widehat{\mathcal{L}}^{\nu,+\infty}_{r+1,\frac{r+1}{2}} \oplus
\widetilde{\mathcal{L}}^{\nu,+\infty}_{r+1,\frac{r+1}{2}}\]
using decomposition \eqref{decomp}. Then if
$\Ye:=\underline{Y}^{\frac{r+1}{2}}$,
\[Q-\Ye=\sum_{\ell=0}^{r+1}
\underline{(\tilde{L}_{\ell})}\null^\ell \]
and if we define $F$ by
\eqref{F2} with $F_{\ell} =
-\im\psi_\ell^{-1}(\tilde{L}_{\ell})$, we still obtain
that equation \eqref{hom.1} is satisfied.
It remains to show that $F$ is real valued.
As $Q$ is real, using \eqref{F.11} yields for any
$\ell\in\{0,\ldots,r+1\}$
\begin{equation}
\label{F.13}
\overline{\underline{L_{\ell}}\null^\ell}
=\underline{L_{r+1-\ell}}\null^{r+1-\ell}
\end{equation}
by homogeneity. If $r+1$ is even, \eqref{F.13} implies that
$\underline{L_{\frac{r+1}{2}}}\null^\frac{r+1}{2}$ is real
valued. Using lemma \ref{lemma3.13bis}, we obtain that $\Ye$ is real
valued (remark that if $r+1$ is odd, $\Ye=0$ is also real
valued). Therefore, $\left\{F,G_2\right\}$ is real valued by
\eqref{hom.1}. So $\left\{\Im F,G_2\right\}=0$ which implies by
homogeneity that $\bigl\{\Im
\underline{F_{\ell}}\null^\ell,G_2\bigr\}=0$ for any $\ell$. We may now use
lemma
\ref{lemma3.13} to obtain that $\Im
\underline{F_{\ell}}\null^\ell=0$ for any
$\ell\in\{0,\ldots,r+1\}$. Therefore, $F$ is real valued.\qed
\subsection{Proof of theorem \ref{thm1}.}\label{sec:proofthm1}
Let $\Tr$ be the canonical transformation defined in theorem \ref{thm2}.
Define on $\U=\Tr(\V)$ the function
$$
E(u,\bar u):=\sum_{n\geq 1}n^{2s}J_n\circ\Tr^{-1}(u,\bar u).
$$ We shall control $E(u,\bar{u})$ along long time intervals. To take
into account the loss of derivatives coming from the linear part of
the equation, we proceed by regularization. Fix $\sigma=s+1$ and take
the Cauchy data such that $u_0=\epsilon (\Lambda_m^{-1/2}v_{1} + \im
\Lambda_m^{1/2}v_{0})/\sqrt{2}$ is in $H^\sigma(M,\C)\cap\U$. Let
$u(t)\equiv u(t,.)$ be the corresponding solution of $\dot
u=X_G(u)\equiv \im \nabla_{\bar{u}} G(u,\bar{u})$. Since $X_G$ is semilinear and
$H^\sigma$ is its domain, as long as $\norma{u(t)}_{H^s}<\infty$ one
has $u(t)\in H^\sigma$. Thus, as long as $u(t)\in\U$,
\begin{equation}\label{F.17}
\frac{dE}{dt}=\partial E\cdot X_G=\left\{G,E\right\}
\end{equation}
which is well defined since $E\in C^\infty(\U,\R)$, with $\U\subset
H^s$ and $X_G(u)\in H^s$ for $u\in H^\sigma$. So we may write
\[\left\{G,E\right\}(u,\bar u) =\displaystyle\sum_{n\geq
1}n^{2s}\left\{G,J_n\circ\Tr^{-1}\right\}.\]
If we use \eqref{1.2.21}, \eqref{for.nor} and \eqref{stime1} we get then
\begin{equation}\label{F.18}
\begin{array}{ll}
\left\{G,E\right\} &
=\displaystyle\sum_{n\geq
1}n^{2s}\left\{G\circ\Tr,J_n\right\}\circ\Tr^{-1}\\ &
=\displaystyle\sum_{n\geq
1}n^{2s}\left\{G_2+\Ze+\resto,J_n\right\}\circ\Tr^{-1}\\ &
=\left\{\resto\circ\Tr^{-1},E\right\}.
\end{array}
\end{equation}
Thus
\[\frac{dE}{dt}=\left\{\resto\circ\Tr^{-1},E\right\}\]
which, by taking an approximating sequence, is seen to hold also for
initial data which are not in $H^\sigma$, but only in $\U$.
Using \eqref{stime2} one has
\begin{equation}\label{F.14}
\left|\frac{dE(t)}{dt}\right|\leq C\Vert{u(t,\cdot )}\Vert_{H^s}^{r+3}.
\end{equation}
Remark that by definition of $E(u,\bar{u})$ and because $\Tr(0) = 0$,
as long as $u$ stays in a small enough neighborhood of 0, we have
\begin{equation}
\label{la.e}
\frac{1}{2}E(u,\bar u)\leq \norma{u}^2_{H^s}\leq 2E(u,\bar u).
\end{equation}
We deduce then by integration of \eqref{F.14} the estimate
\begin{equation}\label{F.16}
\Vert{u(t,\cdot )}\Vert_{H^s}^2 \leq C'\left(\Vert{u_0}\Vert_{H^s}^2
+\left|\int_0^t\Vert{u(\tau,\cdot )}\Vert_{H^s}^{r+3}d\tau\right|\right)
\end{equation}
which holds true as long as $u$ remains in a small enough neighborhood
of 0. It is classical to deduce from this inequality that there are
$C>0, c>0, \epsilon_0>0$ such that, if the
Cauchy data $u_0$ is in the $H^s$ ball of center 0 and radius $\epsilon
<\epsilon_0$, the solution exists over an interval of length at least
$c\epsilon^{-r-1}$, and for any $t$ in that interval $\Vert
u(t,\cdot )\Vert_{H^s} \leq C\epsilon$. This concludes the proof.
\qed
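The classical continuity argument alluded to above can be modeled on the comparison ODE $y'=Cy^{(r+3)/2}$ with $y(0)=\epsilon^2$ (where $y$ stands for $\Vert u(t,\cdot)\Vert_{H^s}^2$), whose explicit blow-up time scales like $\epsilon^{-r-1}$. A Python sketch of this scaling (illustrative only, names hypothetical):

```python
def blowup_time(eps, r, C=1.0):
    """Explicit blow-up time of the comparison ODE y' = C*y**p, y(0) = eps**2,
    with p = (r+3)/2: T = y(0)**(1-p) / (C*(p-1)) = eps**(-(r+1)) * 2/(C*(r+1))."""
    p = (r + 3) / 2.0
    return eps ** (2 * (1 - p)) / (C * (p - 1))
```

Halving $\epsilon$ multiplies the guaranteed lifespan by $2^{r+1}$, which is the $c\epsilon^{-r-1}$ lower bound of the statement.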
\begin{remark}
The proof of \eqref{estimJn} is similar. As in \eqref{F.17} and
\eqref{F.18}, we see that
\[\frac{dJ_n\circ \Tr^{-1}(u,\bar u)}{dt}=\left\{\resto\circ\Tr^{-1},
J_n\right\}(u,\bar u)\]
which together with the bound $\Vert{u(t,\cdot
)}\Vert_{H^s}\leq C_1\epsilon$ yields
\begin{equation}\label{F.19}
|J_n\circ\Tr^{-1}(u(t),\bar{u}(t))-J_n\circ\Tr^{-1}(u_0,\bar{u}_0)|\leq
\frac{C\epsilon^3}{n^{2s}}
\end{equation}
for times $|t|\leq\epsilon^{-r}$. Finally, using \eqref{F.19},
\eqref{stime} and the inequality
\begin{displaymath}
\begin{array}{l}
|J_n(u(t),\bar{u}(t))-J_n(u_0,\bar{u}_0)|\leq
|J_n(u(t),\bar{u}(t))-J_n\circ\Tr^{-1}(u(t),\bar{u}(t))|\\
\hspace{0.4cm}+|J_n\circ\Tr^{-1}(u(t),\bar{u}(t))-J_n \circ\Tr^{-1}(u_0,
\bar{u}_0)|+|J_n(u_0,\bar{u}_0)-J_n\circ\Tr^{-1}(u_0,\bar{u}_0)|
\end{array}
\end{displaymath}
implies \eqref{estimJn}.
\end{remark}
\section{Introduction}
Motivated by the results of Burch \cite{Bu68}, the notions of Burch ideals and Burch submodules were introduced (and studied) by Dao-Kobayashi-Takahashi and Dey-Kobayashi in \cite{DKT20} and \cite{DK22} respectively. These include large well-studied classes of ideals and modules. In this article, our aim is to study homological dimensions of Burch ideals, submodules and their quotients, and to characterize various local rings in terms of these dimensions.
{\it Throughout, $(R,\mathfrak{m},k)$ is a commutative Noetherian local ring. All $R$-modules are assumed to be finitely generated.} Let $M$ be a Burch submodule of some $R$-module $L$, i.e., $\mathfrak{m} (M:_L \mathfrak{m}) \neq \mathfrak{m} M$. It follows from a result of Avramov \cite[Thm.~4]{Avr96} that $M$ has maximal projective $($resp., injective$)$ complexity and curvature. From this fact and the other existing results in the literature, in Theorem~\ref{thm:H-dim-Burch-submodules}, we observe that $R$ is H \iff $\hdim_R(M)<\infty$ \iff $\hdim_R(L/M)<\infty$, where H stands for projective, injective (or regular in case of rings), complete intersection and Gorenstein respectively. In short, these are written here as proj, inj, CI and G respectively. We prove the counterpart of these results for CM (Cohen-Macaulay) dimension, and strengthen the result on G-dimension as follows.
\begin{theorem}[See Theorems~\ref{thm:CM-dim-Burch-submodules} and \ref{thm:Gor-char-vanishing-Ext} for more detailed results]\label{thm:CM-Gor-Main-results}
Let $M$ be a Burch submodule of some $R$-module $L$.
\begin{enumerate}[\rm (1)]
\item When $\depth(M)\ge 1$, the ring $R$ is {\rm CM} $\Longleftrightarrow$ $\cmdim_R(M)<\infty$.
\item When $L$ is free $($e.g., when $M=I$ is a Burch ideal of $R=L$$)$, the ring
\begin{enumerate}[\rm (i)]
\item $R$ is Gorenstein \iff $\Ext_R^n(M,R)=0$ for any three consecutive values of $n \ge \max\{\depth(R)-1,0\}$.
\item $R$ is {\rm CM} $\Longleftrightarrow$ $\cmdim_R(M)<\infty$ $\Longleftrightarrow$ $\cmdim_R(L/M)<\infty$.
\end{enumerate}
\end{enumerate}
\end{theorem}
The technique used in the proof of the results on CM-dimension in Theorem~\ref{thm:CM-Gor-Main-results} can be applied to get simple and elementary proofs of the analogous results on other homological dimensions, see Theorem~\ref{thm:CM-dim-Burch-submodules} for a combined proof of all these results.
The main motivation for Theorems~\ref{thm:H-dim-Burch-submodules}, \ref{thm:CM-dim-Burch-submodules} and \ref{thm:Gor-char-vanishing-Ext} came from the characterizations stated below. A remarkable result due to
Burch \cite[p.~947, Cor.~3]{Bu68} states that $R$ is regular \iff the projective dimension $\pd_R(I)$ is finite, where $I$ is an $\mathfrak{m}$-primary integrally closed ideal of $R$. The analogous results for injective and CI dimensions are shown in \cite[Cor.~6.12]{CGSZ18} and \cite[Cor.~2.7]{GP22} respectively. Moreover, in \cite[Thm.~2.6]{GP22}, it is shown that such an ideal $I$ has maximal complexity and curvature. For G-dimension, in \cite[Thm.~1.1]{CS16}, Celikbas--Sather-Wagstaff proved that $R$ is Gorenstein \iff $\gdim_R(I)$ is finite for some integrally closed ideal $I$ of $R$ such that $\depth(R/I)=0$ (a much weaker condition than being $\mathfrak{m}$-primary).
We show that under some mild conditions, an integrally closed ideal $I$ with $\depth(R/I)=0$ is a Burch ideal, see Proposition~\ref{prop:int-closed-ideal-Burch} and Remark~\ref{rmk:DKT20-example-not-true}. Hence, as applications of Theorems~\ref{thm:H-dim-Burch-submodules}, \ref{thm:CM-dim-Burch-submodules} and \ref{thm:Gor-char-vanishing-Ext}, in Corollary~\ref{cor:characterizations-via-int-closed-ideal},
we considerably strengthen all these results. Furthermore, we obtain the analogous result for CM-dimension.
\begin{corollary}[See Corollary~\ref{cor:characterizations-via-int-closed-ideal} for more detailed results]
Let $I$ be an integrally closed ideal of $R$ such that $\depth(R/I)=0$. Then $I$ has maximal projective $($resp., injective$)$ complexity and curvature.
Furthermore, $R$ is Gorenstein \iff $\Ext_R^n(I,R)=0$ for any three consecutive values of $n\ge \max\{\depth(R)-1,0\}$. Moreover, $R$ is {\rm CM} \iff $\cmdim_R(I)<\infty$.
Particularly, it follows that
\begin{center}
$R$ is {\rm H} $\Longleftrightarrow$ $\hdim_R(I)<\infty$ \;$($equivalently, $\hdim_R(R/I)<\infty$$)$,
\end{center}
where {\rm H} denotes {\rm proj}, {\rm inj} $($regular in case of rings$)$, {\rm CI}, {\rm G} and {\rm CM} respectively.
\end{corollary}
Levin-Vasconcelos in \cite[Thm.~1.1 and the remark afterward]{LV68} showed that $R$ is regular \iff $\pd_R(\mathfrak{m} N)<\infty$, which is equivalent to $\id_R(\mathfrak{m} N)<\infty$, where $N$ is an $R$-module such that $\mathfrak{m} N\neq 0$. It follows from \cite[Cor.~5]{Avr96} that $R$ is CI \iff $\cidim_R(\mathfrak{m} N)<\infty$.
The analogous result for G-dimension can be derived from \cite[p.~316, Lem.]{LV68} and \cite[Thm.~4.4]{CS16}.
Motivated by the result of Levin-Vasconcelos, in \cite[Thm.~1]{AP05}, Asadollahi-Puthenpurakal proved that if $N$ is an $R$-module of positive depth, then $R$ is H \iff $\hdim_R(\mathfrak{m}^n N)<\infty$ for some $n\ge \rho(N)$ (cf.~\cite[1.6]{AP05} for the invariant $\rho(N)$), where H can be proj, CI, G and CM.
As other applications of Theorems~\ref{thm:H-dim-Burch-submodules}, \ref{thm:CM-dim-Burch-submodules} and \ref{thm:Gor-char-vanishing-Ext}, we combine all these results, and considerably strengthen some of them. Moreover, we obtain a few variations.
\begin{corollary}[See Corollary~\ref{cor:characterizations-via-mN}]
Let $N$ be a submodule of an $R$-module $L$ such that $\mathfrak{m} N \neq 0$. Then the following statements hold true.
\begin{enumerate}[\rm (1)]
\item $R$ is {\rm H} \iff $\hdim_R(\mathfrak{m} N)<\infty$ \iff $\hdim_R(L/\mathfrak{m} N)<\infty$, where {\rm H} denotes {\rm proj}, {\rm inj} $($regular in case of rings$)$, {\rm CI} and {\rm G} respectively.
\item If either $\depth(N)\ge 1$, or $L$ is free $($e.g., $N=J$ is an ideal of $R=L$$)$, then
\begin{enumerate}[\rm (i)]
\item $R$ is {\rm CM} \iff $\cmdim_R(\mathfrak{m} N)<\infty$.
\item $R$ is Gorenstein \iff $\Ext_R^n(\mathfrak{m} N,R)=0$ for any three consecutive values of $n\ge \max\{\depth(R)-1,0\}$.
\end{enumerate}
\end{enumerate}
\end{corollary}
Now we describe in brief the contents of this article. In Section~\ref{sec:hom-inv}, we recall various homological invariants that are used in the paper. In order to prove the results on CI and CM dimensions, we need a number of lemmas, which are proved in Section~\ref{sec:G-C-dim-CM-dim}. The preliminaries on Burch submodules are discussed in Section~\ref{sec:Burch-ideals}. Particularly, Lemma~\ref{lem:Extension-of-Burch-submodule} is proved, which is crucial to obtain the results on Burch submodules of depth zero. Finally, our main results along with their applications are shown in Section~\ref{sec:Main-results-applications}.
\section{Homological invariants}\label{sec:hom-inv}
In this section, we recall the terminologies that are used in the subsequent sections. Let $M$ be an $R$-module. Let $ \beta_n^R(M) $ denote the $n$th Betti number of $M$, i.e., $ \beta_n^R(M) := \rank_k\left( \Ext_R^n(M,k) \right) $. The $i$th Bass number of $M$ is denoted by $\mu_i^R(M)$, i.e., $\mu_i^R(M):=\rank_{k}(\Ext_{R}^i(k,M))$.
\begin{para}\label{proj-inj-complexity}
\cite[Sec.~1]{Avr96} The projective complexity $\projcx_R(M)$ (resp., injective complexity $\injcx_R(M)$) is equal to a non-negative integer $b$ if $b-1$ is the smallest possible value such that there exists $\alpha>0$ satisfying $\beta_n^R(M) \le \alpha n^{b-1}$ (resp., $\mu_n^R(M) \le \alpha n^{b-1}$) for all $n\ge 0$. The projective complexity of $M$ is also denoted by $\cx_R(M)$.
\end{para}
\begin{para}\label{proj-inj-curv}
\cite[Sec.~1]{Avr96} The projective and injective curvatures of $M$ are defined by $$\projcurv_R(M)=\limsup_{n \to \infty} \sqrt[n]{\beta_n^R(M)} \quad \mbox{and} \quad \injcurv_R(M)=\limsup_{n \to \infty} \sqrt[n]{\mu_n^R(M)}.$$
The projective curvature of $M$ is also denoted by $\curv_R(M)$.
\end{para}
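As a toy illustration of these definitions (not taken from the cited references): polynomial growth of the Betti numbers, e.g.\ $\beta_n = n^2$, gives curvature $1$ and finite complexity, while exponential growth $\beta_n = 2^n$ gives curvature $2$ and infinite complexity. A small Python sketch approximating $\limsup_n \sqrt[n]{\beta_n}$ by a single large-$n$ root (hypothetical names):

```python
def curv_estimate(betti, n=500):
    """Single large-n root beta_n**(1/n), approximating curv = limsup beta_n^{1/n}."""
    return betti(n) ** (1.0 / n)
```

For $\beta_n=n^2$ the estimate is close to $1$, and for $\beta_n=2^n$ it equals $2$ up to rounding, matching the two growth regimes.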
\begin{para}\label{para:global-cx-curv}
\cite[Prop.~2]{Avr96} Note that $\max\{ \projcx_R(M) : \mbox{$M$ is an $R$-module} \} = \projcx_R(k)$. Moreover, in the above equality, $\projcx$ can be replaced by $\injcx$, $\projcurv$ and $\injcurv$ respectively. An $R$-module $M$ is said to have maximal projective (resp., injective) complexity and curvature if these invariants are same for both $M$ and $k$.
\end{para}
The notion of CI-dimension is due to Avramov-Gasharov-Peeva \cite{AGP97}.
\begin{para}\label{defn:CI-dim}
\cite[(1.1) and (1.2)]{AGP97} A (codimension $c$) quasi-deformation of $R$ is a diagram of local ring homomorphisms $R \to R' \leftarrow Q$, where $R \to R'$ is flat, and $R' \leftarrow Q$ is a (codimension $c$) deformation, i.e., $R' \leftarrow Q$ is surjective with kernel generated by a (length $c$) regular sequence. Set $M' := M \otimes_R R'$. Define $\cid_R(0) := 0$. When $M\neq 0$, the CI-dimension of $M$ is defined to be $\cid_R(M) = $
\[
\inf\{ \pd_Q(M') - \pd_Q(R') : R \to R' \leftarrow Q \mbox{ is a quasi-deformation}\}.
\]
\end{para}
The notion of semidualizing modules was introduced by Golod \cite{Go84} by the name of suitable modules. In the same paper, he also introduced the notions of $G_C$-projective modules and $G_C$-dimension.
\begin{para}\cite{Go84}
An $R$-module $C$ is said to be semidualizing (or suitable) if
\begin{enumerate}
\item the natural homomorphism $R \to \Hom_R(C,C)$ is an isomorphism, and
\item $\Ext_{R}^i(C,C)=0$ for every $i>0$.
\end{enumerate}
\end{para}
\begin{para}\cite{Go84}
Let $C$ be a semidualizing $R$-module. Set $(-)^{\dagger} := \Hom_R(-,C)$. An $R$-module $X$ is called $G_C$-projective if
\begin{enumerate}
\item the natural homomorphism $X \to X^{\dagger\dagger}$ is an isomorphism, and
\item $ \Ext_R^i(X,C) = \Ext_R^i(X^\dagger,C) = 0 $ for every $ i > 0 $.
\end{enumerate}
The $G_C$-dimension of $M$, denoted by $\gcdim_R(M)$, is the least non-negative integer $n$ such that there exists an exact sequence $0 \to X_n \to \cdots \to X_1 \to X_0\to M \to 0$
of $R$-modules such that each $X_i$ is $G_C$-projective. If such an $n$ does not exist, then $\gcdim_R(M) := \infty$. The G-dimension of $M$, denoted by $\grdim_R(M)$, is $\gcdim_R(M)$ when $C=R$; this notion is due to Auslander-Bridger \cite{AB69}.
\end{para}
In \cite{Ge01}, Gerko introduced the notion of CM-dimension.
\begin{para}\cite[3.1 and 3.2]{Ge01}\label{para:CM-defn-1}
A G-quasi-deformation of $R$ is a diagram of local homomorphisms $R \to R' \leftarrow Q$, where $R \to R'$ is flat, and $R' \leftarrow Q$ is a G-deformation, i.e., a surjective homomorphism whose kernel $I$ is a G-perfect ideal of $Q$ (which means that $\gdim_Q(Q/I) = \grade(Q/I)$). Set $M' := M \otimes_R R'$. Define $\cmdim_R(0) := 0$. When $ M \neq 0 $, the CM-dimension of $M$ is defined to be $\cmdim_R(M) = $
\[
\inf\{ \gdim_Q(M') - \gdim_Q(R') : R \to R' \leftarrow Q \mbox{ is a G-quasi-deformation}\}.
\]
\end{para}
The following definition of CM-dimension is equivalent to that in \ref{para:CM-defn-1}.
\begin{para}\label{para:CM-defn-2}\cite[pp.~1177, $3.2'$]{Ge01}
If $M\neq0$, then $\cmdim_R(M)$ is
$\inf\{\gcdim_{R'} (M \otimes_{R} R'): R \to R' $ is a local flat extension and $C$ is a semidualizing $R'$-module\}.
\end{para}
\begin{para}\label{para:H-dim-inequalities}
For an $R$-module $M$, by \cite[3.2]{Ge01} and \cite[(1.4)]{AGP97}, there are inequalities:
\[
\cmdim_R(M) \le \gdim_R(M) \le \cid_R(M) \le \pd_R(M).
\]
If one of these is finite, then it equals all those to its left, and the common value is $\depth(R) - \depth(M)$; see \cite[3.8]{Ge01}, \cite[(4.13.b)]{AB69}, \cite[(1.4)]{AGP97} and \cite[1.3.3]{BH93} respectively.
\end{para}
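The leftmost inequality can be strict; here is a minimal illustration (our example, relying only on the standard characterizations recalled in \ref{para:char-local-rings-via-H-dim}).

```latex
% Our example: R = k[x,y]/(x^2,xy,y^2) is Artinian, hence CM, but it is not
% Gorenstein (its socle is 2-dimensional). Consequently,
\[
\cmdim_R(k) \;=\; \depth(R)-\depth(k) \;=\; 0,
\qquad\text{whereas}\qquad
\gdim_R(k) \;=\; \infty .
\]
```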
\begin{para}\label{para:char-local-rings-via-H-dim}
The ring $R$ is H $\Longleftrightarrow$ $\hdim_R(k)<\infty$ $\Longleftrightarrow$ $\hdim_R(M)<\infty$ for every $R$-module $M$, where H can be proj, inj (regular in case of rings), CI, G and CM, see \cite[Thm.~19.2]{Mat86}, \cite[3.1.26]{BH93}, \cite[(1.3)]{AGP97}, \cite[(4.20)]{AB69} and \cite[Thm.~3.9]{Ge01} respectively.
\end{para}
\begin{para}\label{para:H-dim-syz-relations}
For every $n\ge 0$, $\hdim_R(\Omega_n^R(M)) = \max\{ \hdim_R(M)-n, 0 \}$, where $\Omega_n^R(M)$ denotes the $n$th syzygy module of $M$. These equalities are well known when H is {\rm proj}, CI and G. See \cite[(1.9.1)]{AGP97} for CI. Similar arguments work for CM as well. The projective complexity (resp., curvature) of $\Omega_1^R(M)$ is the same as that of $M$.
\end{para}
\begin{para}\label{para:H-dim-direct-sum}
Let $M$ and $N$ be $R$-modules. Then
\[
\hdim_R(M \oplus N) \ge \max\{ \hdim_R(M), \hdim_R(N) \}.
\]
It is shown in \cite[Prop.~3.3]{AP05} for H = CM. A similar proof works for CI as well. The inequality becomes equality when H is {\rm proj}, inj and {\rm G}, see, e.g., \cite[1.3.2]{BH93}, \cite[3.1.14]{BH93} and \cite[1.2.7, 1.2.9]{Ch00} respectively. When it is equality, $\hdim$ can also be replaced by $\cx$ and $\curv$ respectively, see \cite[4.2.4.(3)]{Avr98}.
\end{para}
\section{CI and CM dimensions}\label{sec:G-C-dim-CM-dim}
Here we prove a number of lemmas on CI-dimension and CM-dimension.
\begin{lemma}\label{lem:flat-extension}
Let $\varphi : R \to R'$ be a flat homomorphism, and $x$ be an $R$-regular element. Then the induced map $\overline{\varphi} : R/xR \to R'/xR'$ is also flat.
\end{lemma}
\begin{proof}
Consider a short exact sequence $0 \to M_1 \to M_2 \to M_3 \to 0 $ of $R/xR$-modules. Considering this as an exact sequence of $R$-modules, since $\varphi$ is flat, $0 \to M'_1 \to M'_2 \to M'_3 \to 0 $ is also exact, where $M'_i := M_i \otimes_R R'$. Note that
\[
M_i \otimes_R R' \cong (M_i \otimes_{R/xR} R/xR) \otimes_R R' \cong M_i \otimes_{R/xR} R'/xR'.
\]
It follows that $\overline{\varphi} : R/xR \to R'/xR'$ is flat.
\end{proof}
\begin{lemma}\label{lem:H-dim-M-M//xM}
Let $x \in R$ be an element that is regular on both $R$ and $M$. Then the following $($in$)$equalities hold true.
\begin{enumerate}[\rm (1)]
\item $\cx_R(M) = \cx_{R/xR}(M/xM)$ and $\curv_R(M) = \curv_{R/xR}(M/xM)$.
\item $\hdim_{R/xR}(M/xM) \le \hdim_R(M) $, where {\rm H} can be {\rm proj}, {\rm inj}, {\rm CI}, {\rm G} and {\rm CM}. The inequality becomes equality when {\rm H} is proj and {\rm G} respectively.
\end{enumerate}
\end{lemma}
\begin{proof}
The equalities in (1) are obvious as $\beta_n^R(M) = \beta_n^{R/xR}(M/xM)$ for every $n\ge 0$, see, e.g., \cite[p.~140, Lem.~2]{Mat86}. The (in)equalities in (2) are well known when H is proj, inj, {\rm CI} and {\rm G}, see \cite[1.3.5]{BH93}, \cite[3.1.15]{BH93}, \cite[(1.12.2)]{AGP97} and \cite[(4.31)]{AB69} respectively. Thus it remains to show the inequality for H = CM.
If $\cmdim_R M = \infty$, then the inequality is trivial. So assume that $\cmdim_R(M)$ is finite, say equal to $n$. Then there exists a local flat extension $ R \to R' $ and a semidualizing $R'$-module $C$ such that $\gcdim_{R'}(M') = n $, where $M' := M \otimes_R R'$. Since $x$ is regular on both $R$ and $M$, and $ R \to R' $ is flat, $x$ is also regular on $R'$ and $M'$. Hence $C/xC$ is a semidualizing module over $R'/xR'$. Moreover, considering the natural surjection $R' \to R'/xR'$, by \cite[Prop.~6.3.1]{SSW01},
\begin{equation}\label{eqn:GC-dim}
{\rm G}_{C/xC}\mbox{-}{\rm dim}_{R'/xR'}(M' \otimes_{R'} R'/xR') = \gcdim_{R'}(M') = n.
\end{equation}
By Lemma~\ref{lem:flat-extension}, $R/xR \to R'/xR'$ is flat. Note that
\begin{align*}
M/xM \otimes_{R/xR} R'/xR' &\cong M \otimes_R R/xR \otimes_{R/xR} R'/xR' \cong M \otimes_R R'/xR' \\
&\cong M \otimes_R R' \otimes_{R'} R'/xR' \cong M' \otimes_{R'} R'/xR'.
\end{align*}
So the equality \eqref{eqn:GC-dim} yields that $\cmdim_{R/xR}(M/xM) \le n = \cmdim_R(M)$.
\end{proof}
\begin{lemma}\label{lem:s.e.s-CM-dim}
Let $0 \to M_1 \to M_2 \to M_3 \to 0$ be a short exact sequence of $R$-modules. If one of the $M_i$ has finite projective dimension, and another one has finite CM-dimension, then the third one also has finite CM-dimension.
In the above statement, {\rm CM} can be replaced by {\rm CI} as well.
\end{lemma}
\begin{proof}
Consider distinct $l,m,n \in \{1,2,3\}$. Suppose $\pd_R(M_l)$ and $\cmdim_R(M_m)$ are finite. We need to show that $\cmdim_R(M_n)<\infty$. Since $\cmdim_R(M_m)<\infty$, there exists a G-quasi-deformation $R \to R' \leftarrow Q$ such that $\gdim_Q(M_m')< \infty$, where $(-)':=(-)\otimes_R R'$. Since $R\to R'$ is flat, $\pd_{R'}(M_l')< \infty$. Since $R' \leftarrow Q$ is surjective, and its kernel is a G-perfect ideal of $Q$ (cf.~\ref{para:CM-defn-1}), one obtains that $\gdim_Q(R')< \infty$. From $\pd_{R'}(M_l')< \infty$ and $\gdim_Q(R')< \infty$, by \cite[1.2.9]{Ch00}, one derives that $\gdim_Q(M_l')< \infty$. Therefore, again by \cite[1.2.9]{Ch00}, in view of the short exact sequence $0\to M_1' \to M_2'\to M_3' \to 0$, we get that $\gdim_Q(M_n')< \infty$. Thus $\cmdim_R(M_n)<\infty$. Considering quasi-deformation (in place of G-quasi-deformation), a similar argument works for the result on CI-dimension.
\end{proof}
\begin{lemma}\label{lem:CM-direct-sum}
Let $M$ be a submodule of a free $R$-module $F$. Let {\rm H} denote {\rm CI} and {\rm CM} respectively. Then the following are equivalent: {\rm (1)} $\hdim_R(M \oplus F/M) <\infty$, {\rm (2)} $\hdim_R(M)<\infty$, and {\rm (3)} $\hdim_R(F/M)<\infty$. In particular,
\[
\hdim_R(M \oplus F/M) = \hdim_R(F/M).
\]
\end{lemma}
\begin{proof}
We prove the lemma for H = CM. The case H = CI can be shown in a similar way. The implications (1) $\Rightarrow$ (2) and (1) $\Rightarrow$ (3) can be obtained from \ref{para:H-dim-direct-sum}. The equivalence of (2) and (3) follows from the short exact sequence $0\to M \to F \to F/M\to 0$ and Lemma~\ref{lem:s.e.s-CM-dim}. So it is enough to prove that (2) $\Rightarrow$ (1). Let $\cmdim_R(M)<\infty$. Then there exists a G-quasi-deformation $R \to R' \leftarrow Q$ such that $\gdim_Q(M')<\infty$, where $(-)':=(-)\otimes_R R'$. Since $R\to R'$ is flat, the sequence $0\to M' \to F' \to (F/M)' \to 0$ is also exact. From the G-quasi-deformation, one has that $\gdim_Q(R')< \infty$. Therefore, by \cite[1.2.7 and 1.2.9]{Ch00}, $\gdim_Q((F/M)')< \infty$, and hence $\gdim_Q((M \oplus F/M)')< \infty$. Thus $\cmdim_R(M \oplus F/M)<\infty$. This completes the proof of the first part. For the second part, we may assume that both $\hdim_R(M \oplus F/M)$ and $ \hdim_R(F/M)$ are finite. In view of \ref{para:H-dim-inequalities}, since $\depth(M \oplus F/M) = \min\{\depth(M),\depth(F/M)\}$, one obtains that $\hdim_R(M \oplus F/M) = \max\{\hdim_R(M),\hdim_R(F/M)\} = \hdim_R(F/M)$. The last equality follows from \ref{para:H-dim-syz-relations} as $M=\Omega_1^R(F/M)$.
\end{proof}
\begin{lemma}\label{lem:local-flat}
Let $(R',\mathfrak{m}')$ be a Noetherian local ring, and $\varphi: R \to R'$ be a local homomorphism. Let $ \Phi : R[X]_{\langle \mathfrak{m}, X \rangle} \to R'[X]_{\langle \mathfrak{m}',X \rangle}$ be the map induced by $\varphi$.
\begin{enumerate}[\rm (1)]
\item If $\varphi$ is flat $($resp., local, surjective$)$, then so is $\Phi$.
\item $\Ker(\Phi) = \langle \Ker(\varphi) \rangle$.
\item Every quasi-deformation $R \to R' \leftarrow Q$ induces another quasi-deformation $ R[X]_{\langle \mathfrak{m}, X \rangle} \to R'[X]_{\langle \mathfrak{m}',X \rangle} \leftarrow Q[X]_{\langle \mathfrak{n}, X \rangle}$, where $\mathfrak{n}$ is the maximal ideal of $Q$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) If $\varphi$ is local (resp., surjective), then by the construction, $\Phi$ is so. Suppose $\varphi$ is flat. Then, since $R'[X]\otimes_{R[X]}(-)\cong R'\otimes_R R[X]\otimes_{R[X]}(-)$, the induced map $R[X] \to R'[X] $ is also flat. Since localization is flat, and the composition of two flat homomorphisms is flat, the homomorphism $R[X] \to R'[X]_{\langle \mathfrak{m}',X\rangle}$ is flat. Therefore, for any module $L$ over $R[X]_{\langle \mathfrak{m},X\rangle}$, in view of the isomorphisms
\begin{align*}
&R'[X]_{\langle \mathfrak{m}',X \rangle} \otimes_{R[X]_{\langle \mathfrak{m},X\rangle}} L \\
& \cong \left( R'[X]_{\langle \mathfrak{m}',X \rangle} \otimes_{R[X]} R[X]_{\langle \mathfrak{m},X\rangle} \right) \otimes_{R[X]_{\langle \mathfrak{m},X\rangle}} \left( R[X]_{\langle \mathfrak{m},X\rangle} \otimes_{R[X]} L \right) \\
& \cong R'[X]_{\langle \mathfrak{m}',X \rangle} \otimes_{R[X]} R[X]_{\langle\mathfrak{m},X\rangle} \otimes_{R[X]} L,
\end{align*}
one obtains that $\Phi$ is flat.
(2) The containment $\langle \Ker(\varphi) \rangle \subseteq \Ker(\Phi)$ is trivial. For the other containment, consider an element $y= (a_0+a_1X+a_2 X^2+ \cdots +a_m X^m)/1$ of $\Ker(\Phi)$. Then $\varphi(a_0)+\varphi(a_1)X+\cdots +\varphi(a_m)X^m =0$ in $R'[X]_{\langle \mathfrak{m}',X\rangle}$. Therefore, by forward induction on $i=0,1,\dots,m$, one concludes that $t\varphi(a_i)=0$ for some $t \in R'\smallsetminus \mathfrak{m}'$; since $R'$ is local, $t$ is a unit, and hence $\varphi(a_i)=0$ for every $i=0,1,\dots,m$. Thus $y \in \langle \Ker(\varphi) \rangle$.
(3) Note that $Q\to Q[X]_{\langle \mathfrak{n}, X \rangle}$ is local and flat. Hence every $Q$-regular sequence is also $Q[X]_{\langle \mathfrak{n}, X \rangle}$-regular. Therefore, in view of the definition of a quasi-deformation \ref{defn:CI-dim}, statement (3) follows from (1) and (2).
\end{proof}
\begin{lemma}\label{lem:CM-dim-relation-mod-x-R-S}
Let $S=R[X]_{\langle \mathfrak{m}, X \rangle}$. Let $M$ be an $S$-module such that $X$ is $M$-regular. Then $\hdim_S(M) \le \hdim_R(M/XM)$, where $\rm H$ is $\rm CM$ and $\rm CI$ respectively.
\end{lemma}
\begin{proof}
Consider the case when $\rm H=CM$. We may assume that $\cmdim_R(M/XM)$ is finite, say $ n $. Then there exists a local flat homomorphism $R \to R'$ and a semidualizing $R'$-module $C$ such that $\gcdim_{R'}(M/XM \otimes_{R}R')=n$. Let $\mathfrak{m}'$ be the maximal ideal of $R'$. Set $S' := R'[X]_{\langle \mathfrak{m}',X \rangle}$. Being the composition of the two flat extensions $R' \to R'[X]$ and $R'[X] \to S'$, the extension $R' \to S'$ is flat. Therefore, for any two $R'$-modules $U$ and $V$,
\[
\Ext_{R'}^i(U,V) \otimes_{R'}S' \cong \Ext_{S'}^i( U \otimes_{R'}S',V \otimes_{R'}S') \quad \mbox{for every } i \ge 0.
\]
It follows that $C \otimes_{R'}S'$ is a semidualizing module over $S'$. Moreover, for every $G_C$-projective $R'$-module $L$, the module $L\otimes_{R'}S'$ is $G_{C \otimes_{R'}S'}$-projective over $S'$. Note that $R' \to S'$ is flat and local, hence faithfully flat. Since $\gcdim_{R'}(M/XM \otimes_{R}R') = n $, there exists a $G_C$-projective resolution of $(M/XM \otimes_{R}R')$ over $R'$ of length $n$. Applying $(-)\otimes_{R'}S'$ to this resolution, one obtains a $G_{C \otimes_{R'}S'}$-projective resolution of $ M/XM \otimes_{R}R'\otimes_{R'}S'$ over $S'$ of length $n$.
Thus, since
\begin{align*}
M/XM \otimes_{R} R'\otimes_{R'}S' &\cong (M\otimes_{S} S/XS)\otimes_{R}R' \otimes_{R'}S' \\ &\cong M\otimes_{S}(R \otimes_{R}R')\otimes_{R'}S' \\ &\cong M \otimes_{S} (R' \otimes_{R'}S') \cong M\otimes_{S} S',
\end{align*}
it follows that ${\rm G}_{(C \otimes_{R'}S')}\mbox{-}{\rm dim}_{S'} ( M\otimes_{S} S') \le n$. Therefore, since $S \to S'$ is a local flat homomorphism (by Lemma~\ref{lem:local-flat}), one gets that $\cmdim_S(M) \le n$.
It remains to prove the statement when $\rm H=CI$. Let $\cid_R(M/XM)=n < \infty$. Then there exists a quasi-deformation $R \stackrel{\varphi}\longrightarrow R' \stackrel{\psi}\longleftarrow Q$ of $R$ such that $\pd_Q(M/XM \otimes_{R}R')=n+\pd_Q(R')$. By Lemma~\ref{lem:local-flat}, $S \stackrel{\Phi}\longrightarrow S' \stackrel{\Psi}\longleftarrow T$ is a quasi-deformation of $S$, and $(\Ker(\psi))T = \Ker(\Psi)$, where $S'=R'[X]_{\langle \mathfrak{m}', X \rangle}$ and $T=Q[X]_{\langle \mathfrak{n}, X \rangle}$, and $\mathfrak{m}'$ and $\mathfrak{n}$ are the maximal ideals of $R'$ and $Q$ respectively. Since
\begin{align*}
M/XM \otimes_{R} R' \cong M \otimes_{S} S/XS \otimes_{R} R' \cong M \otimes_{S} R \otimes_{R} R' \cong M \otimes_{S} R',
\end{align*}
it follows that $\pd_Q(M \otimes_{S}R')=n+\pd_Q(R')$. Therefore, in view of
\begin{align*}
(M \otimes_{S} R') \otimes_{Q} T &\cong M \otimes_{S} (R' \otimes_{Q} T) \cong M \otimes_{S}(Q/\Ker(\psi) \otimes_{Q}T)\\
& \cong M \otimes_{S}T/(\Ker(\psi))T \cong M \otimes_{S}T/\Ker(\Psi) \cong M \otimes_{S}S'
\end{align*}
and since $Q \to T$ is flat and local, one obtains that $\pd_{T}(M \otimes_{S}S')=n+\pd_{T}(S')$. Hence $\cid_{S}(M) \le n=\cid_R(M/XM)$.
\end{proof}
\section{Burch submodules and their quotients}\label{sec:Burch-ideals}
The notion of $\mathfrak{m}$-full ideals was introduced by D.~Rees (unpublished), while weakly $\mathfrak{m}$-full and Burch ideals were introduced in \cite[3.7]{CIST18} and \cite[2.1]{DKT20} respectively. These notions were extended to submodules in \cite[2.1]{AP05}, \cite[4.1]{DK22} and \cite[3.1]{DK22} respectively.
\begin{definition}\cite[Def.~3.1]{DK22}\label{defn:Burch-submodule}
Let $L$ be an $R$-module. A submodule $M$ of $L$ is said to be Burch if $\mathfrak{m}(M:_L \mathfrak{m}) \neq \mathfrak{m} M$, i.e., $\mathfrak{m}(M:_L \mathfrak{m}) \not\subseteq \mathfrak{m} M$ (note that the containment $\mathfrak{m} M \subseteq \mathfrak{m}(M:_L \mathfrak{m})$ always holds). A Burch submodule of $R$ is called a Burch ideal of $R$, which is due to \cite[Def.~2.1]{DKT20}.
\end{definition}
\begin{remark}\label{rmk:Burch-submodules}
Let $M$ be a Burch submodule of some $R$-module $L$. Then, by the definition, $M\neq 0$ and $L/M\neq 0$. Moreover, by \cite[Lem.~3.3]{DK22}, $\depth(L/M)=0$.
\end{remark}
\begin{definition}(\cite[2.1]{AP05} and \cite[4.1]{DK22})
A submodule $M$ of $L$ is called:
\begin{enumerate}[\rm (1)]
\item $\mathfrak{m}$-full if $(\mathfrak{m} M:_L x)=M$ for some $x\in \mathfrak{m}$.
\item weakly $\mathfrak{m}$-full if $(\mathfrak{m} M:_L \mathfrak{m})=M$.
\end{enumerate}
\end{definition}
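As a quick sanity check (our toy computation, not taken from the cited sources), every nonzero proper ideal of a discrete valuation ring is $\mathfrak{m}$-full, and hence weakly $\mathfrak{m}$-full by Remark~\ref{rmk:strata}.

```latex
% Toy computation (ours): R = k[[t]], \mathfrak{m} = (t), and I = (t^n), n \ge 1.
% Taking x = t in the definition of \mathfrak{m}-fullness:
\[
(\mathfrak{m} I :_R t) \;=\; (t^{n+1} :_R t) \;=\; (t^n) \;=\; I .
\]
```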
\begin{remark}\label{rmk:strata}
\cite[p.~13]{DK22}
A submodule $M$ of an $R$-module $L$ is
\[
\mbox{$\mathfrak{m}$-full} \;\; \Longrightarrow \;\; \mbox{weakly $\mathfrak{m}$-full} \;\; \stackrel{(*)}{\Longrightarrow} \;\; \mbox{Burch},
\]
where $\depth(L/M)$ is assumed to be zero for $(*)$.
\end{remark}
\begin{example}\label{exam:Burch-submodules-quotient}
\cite[3.5]{DK22} If $N$ is a submodule of some $R$-module $L$ such that $\mathfrak{m} N \neq 0$, then $M:=\mathfrak{m} N$ is a Burch submodule of $L$.
\end{example}
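A minimal worked instance of this example (our computation): take $R=k[[x,y]]$, $L=R$ and $N=\mathfrak{m}=(x,y)$, so that $M=\mathfrak{m}N=\mathfrak{m}^2$.

```latex
% Our computation in R = k[[x,y]]: any unit u satisfies
% u\mathfrak{m} = \mathfrak{m} \not\subseteq \mathfrak{m}^2, so (\mathfrak{m}^2 :_R \mathfrak{m}) = \mathfrak{m}, and hence
\[
\mathfrak{m}(M :_R \mathfrak{m}) \;=\; \mathfrak{m}\cdot\mathfrak{m} \;=\; \mathfrak{m}^2
\;\neq\; \mathfrak{m}^3 \;=\; \mathfrak{m}M,
\]
% so M = \mathfrak{m}^2 is a Burch ideal of R.
```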
We show that a large class of integrally closed ideals are Burch ideals under some mild conditions on the ring.
\begin{proposition}\label{prop:int-closed-ideal-Burch}
Suppose that $k$ is infinite, and $R$ is not a field. Let $I$ be an integrally closed ideal of $R$ such that $\depth(R/I)=0$. Then the ideal $I$ is Burch.
\end{proposition}
\begin{proof}
By virtue of \cite[Thm.~2.4]{Go87}, either $I$ is $\mathfrak{m}$-full or $I=\sqrt{0}$. If $I$ is $\mathfrak{m}$-full, since $\depth(R/I)=0$, by \cite[Cor.~2.4]{DKT20}, $I$ is Burch. So we may assume that $I = \sqrt{0}$. Then $I^n = 0 $ for some $n\ge 1$. Since $\depth(R/I)=0$, there exists $x \notin I$ such that $\mathfrak{m} x \subseteq I$. If $x\in \mathfrak{m}$, then $x^2\in I$, hence $x^{2n}=0$, which implies that $x \in \overline{I}=I$, a contradiction. Thus $x$ is a unit. It follows from $\mathfrak{m} x \subseteq I$ that $I=\mathfrak{m}$, which is also a Burch ideal of $R$ (by \cite[2.2.(2)]{DKT20}).
\end{proof}
\begin{remark}\label{rmk:DKT20-example-not-true}
The statement given in \cite[Ex.~2.2.(4)]{DKT20} is not true in general. Indeed, the condition $\depth(R/I)=0$ in Proposition~\ref{prop:int-closed-ideal-Burch} cannot be omitted (cf.~Remark~\ref{rmk:Burch-submodules}), even when $\depth(R)\ge 1$. Consider $R=k[[x,y,z,w]]/(xy,yz,zx,w^2)$ over an (infinite) field $k$. Then $R$ is a CM local ring of dimension $1$. Set $I:=(w)$. Then $I$ is an integrally closed ideal of $R$ and $\depth(R/I)=1$. Since $\mathfrak{m}(I :_R \mathfrak{m}) = \mathfrak{m} I$, the ideal $I$ is not Burch.
\end{remark}
The following lemma is crucial in order to prove our results on Burch submodules of depth zero.
\begin{lemma}\label{lem:Extension-of-Burch-submodule}
Let $S=R[X]_{\langle \mathfrak{m}, X \rangle}$. Let $M$ be a Burch submodule of an $R$-module $L$. Set $L':=L[X]_{\langle \mathfrak{m}, X \rangle}$ and $M':=(M+XL[X])_{\langle \mathfrak{m}, X \rangle}$. Then
\begin{enumerate} [\rm (1)]
\item $R$ is regular $($resp., {\rm CI}, Gorenstein and {\rm CM}$)$ $\Longleftrightarrow$ $S$ is so.
\item $M'$ is a Burch submodule of the $S$-module $L'$.
\item $M'/XM' \cong M \oplus L/M$ as $R$-modules.
\item When $L$ is free, the following equalities hold true.
\begin{enumerate}[\rm (i)]
\item $\hdim_R(L/M)=\hdim_S(M')$, where {\rm H} can be {\rm proj}, {\rm CI}, {\rm G} and {\rm CM}.
\item $\cx_R(M)=\cx_S(M')$ and $\curv_R(M)=\curv_S(M')$.
\end{enumerate}
\item $\cx_R(k)=\cx_S(k)$ and $\curv_R(k)=\curv_S(k)$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) Note that the maximal ideal of $S$ is given by $\mathfrak{n}= \mathfrak{m} S + XS$. The assertions in (1) follow from the fact that $X\in\mathfrak{n}\smallsetminus\mathfrak{n}^2$ is $S$-regular.
(2) Since $\mathfrak{m}(M:_L \mathfrak{m}) \neq \mathfrak{m} M$, there exist $a \in \mathfrak{m}$ and $y \in (M:_L \mathfrak{m})$ such that $ay \notin \mathfrak{m} M $. Then $ \frac{a}{1} \in \mathfrak{n}$ and $\frac{y}{1} \in (M':_{L'}\mathfrak{n})$. Since $a \in \mathfrak{m}$, $y \in L$ and $ay \notin \mathfrak{m} M$, it follows that $\frac{a}{1}\cdot \frac{y}{1} \notin \mathfrak{n} M'$. (Indeed, if $\frac{ay}{1} \in \mathfrak{n} M'$, then $say\in \langle \mathfrak{m}, X \rangle (M+XL[X])$ for some $s\in R[X]\smallsetminus \langle \mathfrak{m}, X \rangle$, and hence comparing the constant terms, one obtains that $ay\in \mathfrak{m} M$, a contradiction.) Therefore $\mathfrak{n}(M':_{L'} \mathfrak{n})\neq \mathfrak{n} M'$.
(3) As $R$-modules, we have the following isomorphisms:
\begin{equation}\label{M'-isomorphism-with-direct-sum}
\dfrac{M+XL[X]}{X(M+XL[X])} \cong \dfrac{M \oplus XL \oplus X^2L[X]}{0\oplus XM \oplus X^2 L[X]} \cong M \oplus \frac{L}{M}.
\end{equation}
Localizing \eqref{M'-isomorphism-with-direct-sum} with respect to $T:= R[X] \smallsetminus {\langle \mathfrak{m}, X \rangle}$, we get the desired isomorphism $M'/XM' \cong M \oplus L/M$ as $R$-modules.
(4) Note that $X$ is regular on both $S$ and $M'$. Therefore
\begin{align*}
\hdim_S(M') &=\hdim_{S/XS}(M'/XM') \quad\mbox{[by Lemmas~\ref{lem:H-dim-M-M//xM} and \ref{lem:CM-dim-relation-mod-x-R-S}]} \\
&=\hdim_R(M \oplus L/M) \quad\mbox{[by (3)]}\\
&= \max\{\hdim_R(M), \hdim_R(L/M)\} \quad \mbox{[by \ref{para:H-dim-direct-sum}]}\\
&=\hdim_R(L/M) \quad\mbox{[by \ref{para:H-dim-syz-relations} as $L$ is free]}
\end{align*}
when H is proj and G respectively. The same equalities hold true for complexity and curvature as well. When {\rm H} is {\rm CI} and {\rm CM}, for the last two equalities, by Lemma~\ref{lem:CM-direct-sum}, one directly has $\hdim_R(M \oplus L/M) = \hdim_R(L/M)$.
(5) These equalities are shown in \cite[Lem.~3.3]{DG22}.
\end{proof}
\begin{para}\cite{DK22}\label{Dey-Kobayashi-results}
Let $M$ be a Burch submodule of some $R$-module $L$. Let $n\ge 1$ be an integer. For an $R$-module $N$, the following statements hold true.
\begin{enumerate}[\rm (1)]
\item \cite[Prop.~3.16, Cor.~3.14 and Prop.~3.15]{DK22} $\pd_R(N)<n$ if at least one of the following conditions holds true.
\begin{enumerate}[\rm (i)]
\item $\Tor^R_n(N,M) = \Tor^R_{n-1}(N,M) = 0$.
\item $\Ext_R^n(N,M) = \Ext_R^{n+1}(N,M) = 0$.
\item $\Tor^R_n(N,L/M) = \Tor^R_{n+1}(N,L/M) = 0$.
\item $\Ext_R^{n}(N,L/M) = \Ext_R^{n-1}(N,L/M) = 0$.
\end{enumerate}
\item \cite[Prop.~3.18]{DK22} $\id_R(N)<\infty$ if $\depth(M)\ge 1$ and $\Ext_R^i(M,N)=0$ for any two consecutive values of $i\ge \depth(N)-1$.
\end{enumerate}
\end{para}
In the next section, we provide a variation of \cite[Prop.~3.18]{DK22}, see Theorem~\ref{thm:Gor-char-vanishing-Ext}.
\section{Main results and applications}\label{sec:Main-results-applications}
We start by observing the following characterizations of various local rings in terms of Burch submodules and their quotients from the existing results in the literature. The statement \ref{thm:H-dim-Burch-submodules}.(1) is noted in \cite[2.5]{DKT20} and \cite[3.11]{DK22} for Burch ideals and submodules respectively only for projective complexity and curvature.
\begin{theorem}\label{thm:H-dim-Burch-submodules}
Let $M$ be a Burch submodule of some $R$-module $L$. Then
\begin{enumerate}[\rm (1)]
\item $M$ has maximal projective $($resp., injective$)$ complexity and curvature.
\item $R$ is regular
$\Longleftrightarrow$ $\Ext_R^n(M,N) = \Ext_R^{n+1}(M,N)=0$ for some $n\ge 1$, where $N$ is a Burch submodule of some $R$-module. Under these conditions, $\pd_R(M)<n$.
\item $\pd_R(L/M)<\infty$ $\Longleftrightarrow$ $R$ is regular $\Longleftrightarrow$ $\id_R(L/M)<\infty$.
\item The following are equivalent: {\rm (i)} $R$ is complete intersection, {\rm (ii)} $\cidim_R(M)<\infty$, {\rm (iii)} $\cidim_R(L/M)<\infty$, {\rm (iv)} $\projcx_R(M)<\infty$, {\rm (v)} $\injcx_R(M)<\infty$, {\rm (vi)} $\projcurv_R(M)\le 1$ and {\rm (vii)} $\injcurv_R(M)\le 1$.
\item $R$ is Gorenstein $\Longleftrightarrow$ $\Ext_R^n(M,R)=0$ for all $n\gg 0$ $\Longleftrightarrow$ $\Ext_R^n(L/M,R)=0$ for all $n\gg 0$.
\item All together, $R$ is {\rm H} $\Longleftrightarrow$ $\hdim_R(M)<\infty$ $\Longleftrightarrow$ $\hdim_R(L/M)<\infty$, where {\rm H} denotes {\rm proj}, {\rm inj} $($regular in case of rings$)$, {\rm CI} and {\rm G} respectively.
\end{enumerate}
\end{theorem}
\begin{proof}
Since $M$ is a submodule of $(M:_L \mathfrak{m})$ satisfying $M \supseteq \mathfrak{m} (M:_L \mathfrak{m}) \neq \mathfrak{m} M$, by virtue of \cite[Thm.~4]{Avr96}, $M$ has maximal projective $($resp., injective$)$ complexity and curvature. For (2), if $\Ext_R^n(M,N) = \Ext_R^{n+1}(M,N)=0$, then by \ref{Dey-Kobayashi-results}.(1).(ii), $\pd_R(M)<n$, which implies that $\cx_R(k)=\cx_R(M)=0$, i.e., $\pd_R(k)<\infty$, hence $R$ is regular. The equivalences in (3) can be deduced from \ref{Dey-Kobayashi-results}.(1).(iii) and (iv) respectively. The statements in (4) (except (4).(i) $\Leftrightarrow$ (iii)) are consequences of (1), see, e.g., \cite[p.~321 and Thm.~3]{Avr96}. In view of \ref{Dey-Kobayashi-results}.(1).(i) and (iii), $M$ and $L/M$ are test $R$-modules. Hence the equivalences in (5) and (4).(i) $\Leftrightarrow$ (iii) can be obtained from \cite[Thm.~4.4]{CS16} and \cite[Cor.~3.4.4]{Ta19} respectively. The equivalences in (6) are consequences of (2), (3), (4) and (5).
\end{proof}
The result on CM-dimension in the theorem below is totally new, while the results on other homological invariants can be observed from Theorem~\ref{thm:H-dim-Burch-submodules}. However, we give a simple, elementary and combined proof of the results. Note that Theorem~\ref{thm:H-dim-Burch-submodules}.(1) is shown by applying \cite[Thm.~4]{Avr96}, where cohomology representation of the homotopy Lie algebra is used in the proof of \cite[Thm.~4]{Avr96}.
\begin{theorem}\label{thm:CM-dim-Burch-submodules}
Let $M$ be a Burch submodule of some $R$-module $L$. Suppose either $\depth(M)\ge 1$, or $L$ is free $($e.g., $M=I$ is a Burch ideal of $R=L$$)$. Then the following statements hold true.
\begin{enumerate}[\rm (1)]
\item The ring $R$ is {\rm H} $\Longleftrightarrow$ $\hdim_R(M)<\infty$, where {\rm H} denotes {\rm proj} $($regular in case of rings$)$, {\rm CI}, {\rm G} and {\rm CM} respectively.
\item $\cx_R(M) = \cx_R(k)$ and $\curv_R(M) = \curv_R(k)$.
\end{enumerate}
\end{theorem}
\begin{proof}
The `only if' parts in (1) are well known, see \ref{para:char-local-rings-via-H-dim}. So we need to prove (2) and the `if' parts in (1). By virtue of Lemma~\ref{lem:Extension-of-Burch-submodule}, in the case when $L$ is free, replacing $R$ and $M$ by $S:=R[X]_{\langle \mathfrak{m}, X \rangle}$ and $M':=(M+XL[X])_{\langle \mathfrak{m}, X \rangle}$ respectively if necessary, we may assume that $\depth(M)\ge 1$. Then, by \cite[Lem.~3.7]{DK22}, there exists an element $x \in \mathfrak{m}\smallsetminus\mathfrak{m}^2$ which is regular on both $R$ and $M$ such that $k$ is a direct summand of $M/xM$ as an $R/xR$-module. Hence
\begin{align*}
\hdim_{R/xR}(k) & \le \hdim_{R/xR}(M/xM) \quad \mbox{[by \ref{para:H-dim-direct-sum}]} \\ & \le \hdim_R(M) \quad \mbox{[by Lemma~\ref{lem:H-dim-M-M//xM}].}
\end{align*}
Thus, if $\hdim_R(M) < \infty$, then $\hdim_{R/xR}(k) < \infty$, which (in view of \ref{para:char-local-rings-via-H-dim}) implies that $R/xR$ is H, and hence $R$ is H. For (2), note that
\[
\cx_R(k) = \cx_{R/xR}(k) \le \cx_{R/xR}(M/xM) = \cx_R(M) \le \cx_R(k),
\]
where the respective (in)equalities are obtained from \cite[Lem.~3.3]{DG22}, \ref{para:H-dim-direct-sum}, Lemma~\ref{lem:H-dim-M-M//xM} and \ref{para:global-cx-curv}. Hence $\cx_R(M) = \cx_R(k)$. Similarly, $\curv_R(M) = \curv_R(k)$.
\end{proof}
The next result shows that, by virtue of Lemma~\ref{lem:Extension-of-Burch-submodule}, the depth condition on the Burch submodule $M$ in \cite[Prop.~3.18]{DK22} can be removed under some extra vanishing of certain Ext modules.
\begin{theorem}\label{thm:Gor-char-vanishing-Ext}
Let $M$ be a Burch submodule of some $R$-module $L$. Let $N$ be an $R$-module. Let $n\ge \depth(N)$ be such that
\[
\Ext_R^i(M,N) = \Ext_R^j(L,N) = 0 \quad \mbox{for all }i = n-1, n, n+1\mbox{ and } j = n, n+1.
\]
Then $\id_R(N)<\infty$.
\end{theorem}
\begin{proof}
Considering the long exact sequence
\[
\cdots \to \Ext_R^{l-1}(M,N) \to \Ext_R^{l}(L/M,N) \to \Ext_R^{l}(L,N) \to \Ext_R^{l}(M,N) \to \cdots,
\]
from the given hypotheses, one obtains that $\Ext_R^j(L/M,N)=0$ for both $j=n,n+1$. Therefore, by virtue of Lemma~\ref{lem:Extension-of-Burch-submodule}.(3), $\Ext_R^i(M'/XM',N)=0$ for both $i=n,n+1$, where $M':=(M+XL[X])_{\langle \mathfrak{m}, X \rangle} $, a Burch submodule of $L':=L[X]_{\langle \mathfrak{m}, X \rangle}$ over $S:=R[X]_{\langle \mathfrak{m}, X \rangle}$. Set $N':=N[X]_{\langle \mathfrak{m}, X \rangle}$. Then, by \cite[3.1.16]{BH93},
\begin{align*}
\Ext_S^{i+1}(M'/XM',N') \cong \Ext_{S/XS}^{i}(M'/XM',N'/XN')=\Ext_R^{i}(M'/XM',N)=0
\end{align*}
for $i=n,n+1$. Hence, in view of the exact sequences
\[
\Ext_S^{i}(M',N') \stackrel{X}{\longrightarrow} \Ext_S^{i}(M',N') \longrightarrow \Ext_S^{i+1}(M'/XM',N'),
\]
the Nakayama Lemma yields that $\Ext_S^{i}(M',N')=0$ for $i=n,n+1$. Note that $\depth_S(M')\ge 1$ and $n\ge \depth_R(N)=\depth_S(N')-1$. Therefore, by \ref{Dey-Kobayashi-results}.(2), $\id_S(N')<\infty$, which implies that $\id_{S/XS}(N'/XN')< \infty$ (cf.~\cite[3.1.15]{BH93}), i.e., $\id_R(N)<\infty$.
\end{proof}
As an immediate consequence of Theorem~\ref{thm:Gor-char-vanishing-Ext}, one deduces the following.
\begin{corollary}\label{cor:char-Gor-Burch-ideals}
Let $M$ be a Burch submodule of some free $R$-module $($e.g., $M=I$ is a Burch ideal of $R$$)$. If $\Ext_R^m(M,R)=0$ for any three consecutive values of $m \ge \max\{\depth(R)-1,0\}$, then $R$ is Gorenstein.
\end{corollary}
\begin{proof}
In Theorem~\ref{thm:Gor-char-vanishing-Ext}, consider $L$ as a free $R$-module, and $N=R$. Let $m \ge \max\{\depth(R)-1,0\}$. Then $m \ge \depth(R)-1$ and $m+1\ge 1$. Hence $\Ext_R^j(L,R) = 0$ for all $j\ge m+1$. Therefore, if $\Ext_R^i(M,R)=0$ for $i = m, m+1, m+2$, by Theorem~\ref{thm:Gor-char-vanishing-Ext}, $\id_R(R)<\infty$, i.e., $R$ is Gorenstein.
\end{proof}
In view of Proposition~\ref{prop:int-closed-ideal-Burch} and Example~\ref{exam:Burch-submodules-quotient}, Theorems~\ref{thm:H-dim-Burch-submodules}, \ref{thm:CM-dim-Burch-submodules} and \ref{thm:Gor-char-vanishing-Ext} have many applications. For example, an integrally closed ideal $I$ of $R$ with $\depth(R/I)=0$ can be used to characterize various local rings.
\begin{corollary}\label{cor:characterizations-via-int-closed-ideal}
Let $I$ be a nonzero ideal of $R$ such that $\depth(R/I)=0$. Assume that $I$ is integrally closed or weakly $\mathfrak{m}$-full. Then the following statements hold true.
\begin{enumerate}[\rm (1)]
\item $I$ has maximal projective $($resp., injective$)$ complexity and curvature.
\item $R$ is regular
$\Longleftrightarrow$ $\Ext_R^n(I,J) = \Ext_R^{n+1}(I,J)=0$ for some $n\ge 1$, where $J\neq 0$ is an ideal satisfying $\depth(R/J)=0$, and $J$ is integrally closed or weakly $\mathfrak{m}$-full. Under these conditions, $\pd_R(I)<n$.
\item $R$ is complete intersection $\Longleftrightarrow$ $\cidim_R(I)<\infty$ $\Longleftrightarrow$ $\projcx_R(I)<\infty$ $\Longleftrightarrow$ $\injcx_R(I)<\infty$ $\Longleftrightarrow$ $\projcurv_R(I)\le 1$ $\Longleftrightarrow$ $\injcurv_R(I)\le 1$.
\item $R$ is Gorenstein $\Longleftrightarrow$ $\Ext_R^n(I,R)=0$ for any three consecutive values of $n\ge \max\{\depth(R)-1,0\}$. Two consecutive vanishings suffice when $\depth(R)\ge 1$.
\item $R$ is {\rm CM} $\Longleftrightarrow$ $\cmdim_R(I)<\infty$.
\end{enumerate}
\end{corollary}
\begin{proof}
If the residue field is finite, then passing to $R[X]_{\mathfrak{m} R[X]}$, we may assume that $R$ has infinite residue field $k$. If $R$ is a field, then there is nothing to prove. So we may assume that $\mathfrak{m}\neq 0$. Therefore the proof follows from Proposition~\ref{prop:int-closed-ideal-Burch} and Remark~\ref{rmk:strata}, by applying Theorem~\ref{thm:H-dim-Burch-submodules}.(1), (2) and (4), Corollary~\ref{cor:char-Gor-Burch-ideals} and Theorem~\ref{thm:CM-dim-Burch-submodules} respectively. For the second part of (4), one uses \ref{Dey-Kobayashi-results}.(2) together with the fact that $\depth(I)\ge 1$ whenever $\depth(R)\ge 1$.
\end{proof}
\begin{remark}
\begin{enumerate}[(1)]
\item
The statements (1), (2) and (3) in Corollary~\ref{cor:characterizations-via-int-closed-ideal} considerably strengthen the results \cite[Thm.~2.6]{GP22}, \cite[p.~947, Cor.~3]{Bu68}, \cite[6.12]{CGSZ18} and \cite[Cor.~2.7]{GP22}. Moreover, Corollary~\ref{cor:characterizations-via-int-closed-ideal}.(2) notably improves \cite[Cor.~3.14]{CIST18}, the main outcome of \cite{CIST18}.
\item
When $R$ is CM and $I$ is as in Corollary~\ref{cor:characterizations-via-int-closed-ideal}, it is shown in \cite[Cor.~3.16]{CIST18} that $R$ is Gorenstein if $\Ext_R^i(I,R)=0$ for any $\dim(R)+2$ consecutive values of $i \ge 1$. Corollary~\ref{cor:characterizations-via-int-closed-ideal}.(4) considerably strengthens this result when $\dim(R) \ge 1$.
\end{enumerate}
\end{remark}
\begin{corollary}\label{cor:characterizations-via-mN}
Let $N$ be a submodule of an $R$-module $L$ such that $\mathfrak{m} N \neq 0$.
\begin{enumerate}[\rm (1)]
\item Let {\rm H} denote {\rm proj}, {\rm inj} $($regular in case of rings$)$, {\rm CI} and {\rm G} respectively. Then
\begin{center}
$R$ is {\rm H} $\Longleftrightarrow$ $\hdim_R(\mathfrak{m} N)<\infty$ $\Longleftrightarrow$ $\hdim_R(L/\mathfrak{m} N)<\infty$.
\end{center}
\item If either $\depth(N)\ge 1$, or $L$ is free $($e.g., $N=J$ is an ideal of $R=L$$)$, then
\begin{enumerate}[\rm (i)]
\item $R$ is {\rm CM} $\Longleftrightarrow$ $\cmdim_R(\mathfrak{m} N)<\infty$.
\item $R$ is Gorenstein $\Longleftrightarrow$ $\Ext_R^n(\mathfrak{m} N,R)=0$ for any three consecutive values of $n\ge \max\{\depth(R)-1,0\}$. Two consecutive vanishings suffice if $\depth(N)\ge 1$.
\end{enumerate}
\end{enumerate}
\end{corollary}
\begin{proof}
In view of Example~\ref{exam:Burch-submodules-quotient}, $\mathfrak{m} N$ is a Burch submodule of $L$. Hence the equivalences in (1) and (2).(i) can be obtained from Theorems~\ref{thm:H-dim-Burch-submodules}.(6) and \ref{thm:CM-dim-Burch-submodules}.(1) respectively, while (2).(ii) follows from \ref{Dey-Kobayashi-results}.(2) and Corollary~\ref{cor:char-Gor-Burch-ideals}.
\end{proof}
In view of Theorems~\ref{thm:H-dim-Burch-submodules} and \ref{thm:CM-dim-Burch-submodules}, the following questions arise naturally.
\begin{question}
Let $M$ be a Burch submodule of some $R$-module $L$.
\begin{enumerate}[\rm (1)]
\item Does $L/M$ have maximal projective $($resp., injective$)$ complexity and curvature?
\item If $\cmdim_R(L/M)<\infty$, then is $R$ {\rm CM}?
\end{enumerate}
\end{question}
Keeping the results in Theorems~\ref{thm:H-dim-Burch-submodules}.(6) and \ref{thm:CM-dim-Burch-submodules}.(1), \ref{Dey-Kobayashi-results}.(2) and Corollary~\ref{cor:char-Gor-Burch-ideals} in mind, we expect positive answers to the following questions.
\begin{question}\label{ques:Burch-sub-quo-CM-dim}
Let $M$ be a Burch submodule of depth $0$ of some $R$-module $L$.
\begin{enumerate}[(1)]
\item If $\cmdim_R(M)<\infty$, then is $R$ {\rm CM}?
\item If $\Ext_R^n(M,R)=0$ for any three consecutive values of $n\ge \max\{\depth(R)-1,0\}$, then is $R$ Gorenstein?
\end{enumerate}
\end{question}
The following question is a particular case of Question~\ref{ques:Burch-sub-quo-CM-dim}.
\begin{question}\label{ques:depth-mN-zero-CM-dim}
Let $N$ be an $R$-module such that $\depth(N)=0$ and $\mathfrak{m} N \neq 0$.
\begin{enumerate}[(1)]
\item If $\cmdim_R(\mathfrak{m} N)<\infty$, then is $R$ {\rm CM}?
\item If $\Ext_R^n(\mathfrak{m} N,R)=0$ for any three consecutive values of $n\ge \max\{\depth(R)-1,0\}$, then is $R$ Gorenstein?
\end{enumerate}
\end{question}
Theorem~\ref{thm:CM-dim-Burch-submodules}.(1) and Corollaries~\ref{cor:char-Gor-Burch-ideals} and \ref{cor:characterizations-via-mN}.(2) provide positive answers to Questions~\ref{ques:Burch-sub-quo-CM-dim} and \ref{ques:depth-mN-zero-CM-dim} under the conditions `$L$ is free' and `$N$ is a submodule of a free $R$-module' respectively.
\section*{Acknowledgments}
Ghosh was supported by Start-up Research Grant (SRG) from SERB, DST, Govt.~of India with the Grant No SRG/2020/000597. Saha was supported by Junior Research Fellowship (JRF) from UGC, MHRD, Govt.~of India.
Since the classical works of Euler and Riemann it is known that the distribution of prime numbers is intimately related to the Riemann zeta-function $\zeta(s)$. In general, many topics in analytic number theory are tied to associated Dirichlet series. With his path-breaking articles Ivan Matveevich Vinogradov made outstanding contributions in this direction. For example, he proved the widest known zero-free region for $\zeta(s)$ and therefore the smallest known error term in the prime number theorem \cite{vino2} (independently obtained by Nikolai Michailovitch Korobov); moreover, with his treatment of exponential sums Vinogradov proved that all sufficiently large odd integers satisfy the ternary Goldbach conjecture \cite{vino1} (which was recently extended by Harald Helfgott to all odd integers $\geq 7$).
\par
In this article we consider generalizations of the Riemann zeta-function and their value-distribution. Given an arithmetical function $f\,:\,\hbox{\font\dubl=msbm10 scaled 1100 {\dubl N}}\to\hbox{\font\dubl=msbm10 scaled 1100 {\dubl C}}$, the associated Dirichlet series is a complex-valued function of a complex variable $s:=\sigma+it$, given by
$$
L(s;f):=\sum_{n=1}^\infty \dfrac{f(n)}{n^{s}}.
$$
Let $q$ be a positive integer.
If $f$ is $q$-periodic, meaning that $f(n+q)=f(n)$ for all positive integers $n$, then the Dirichlet series defining $L(s;f)$ converges absolutely in the right half-plane $\sigma>1$. Moreover, $L(s;f)$ can be analytically continued to the whole complex plane except for at most a simple pole at $s=1$; and $L(s;f)$ is regular at $s=1$ if, and only if, the residue
$$
{\dfrac{1}{q}}\sum_{a\bmod\,q}f(a)
$$
vanishes.
For technical reasons it is useful to extend $f$ to be defined on $\hbox{\font\dubl=msbm10 scaled 1100 {\dubl Z}}$ by periodicity. In addition, $L(s;f)$ satisfies a functional equation (or {\it identity}), namely,
\begin{eqnarray}\label{feq1}
L(1-s;f)&=&\left(\dfrac{q}{2\pi}\right)^s{\dfrac{\Gamma(s)}{\sqrt{q}}}\left(\mathrm{e}\left(\dfrac{s}{4}\right)L\left(s;f^-\right)+\mathrm{e}\left(-\dfrac{s}{4}\right)L\left(s;f^+\right)\right),
\end{eqnarray}
where $\mathrm{e}(z):=\exp(2\pi iz)$ and $f^{\pm}$ are $q$-periodic arithmetical functions defined by
\begin{equation}\label{dft}
f^{\pm}(n)={\dfrac{1}{\sqrt{q}}}\sum_{a\bmod\,q}f(a)\mathrm{e}\left(\pm \dfrac{an}{q}\right)
\end{equation}
and may be interpreted as discrete Fourier transforms of $f_{\pm}$ defined by $f_+=f$ and $f_-(n)=f(-n)$.
All these properties follow from similar properties of the Hurwitz zeta-function $\zeta(s;\alpha)=\sum_{m\geq 0}(m+\alpha)^{-s}$ (with a real parameter $\alpha\in(0,1]$) and a straightforward representation of such $L(s;f)$ as a sum of Hurwitz zeta-functions with rational parameters $\alpha=a/q$.
This and more were first discovered by Walter Schnee \cite{schnee} ninety years ago.
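For the reader's convenience we make this representation explicit: sorting the terms of the defining Dirichlet series according to their residue class modulo $q$ yields, for $\sigma>1$,
$$
L(s;f)=\sum_{a=1}^{q}f(a)\sum_{m\geq 0}{\dfrac{1}{(mq+a)^{s}}}=q^{-s}\sum_{a=1}^{q}f(a)\zeta\left(s;{\dfrac{a}{q}}\right);
$$
since each $\zeta(s;\alpha)$ admits a meromorphic continuation to the whole complex plane with a single simple pole at $s=1$ of residue $1$, the residue of $L(s;f)$ at $s=1$ is indeed ${\frac{1}{q}}\sum_{a\bmod\,q}f(a)$.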
\par
We are concerned with {\it even} and {\it odd} $q$-periodic functions $f$, i.e., $f=\delta f_-$ with $\delta=+1$ for {\it even} $f$ and $\delta=-1$ for {\it odd} $f$. In this case (\ref{feq1}) can be rewritten as
\begin{eqnarray}\label{feq2}
L(s;f)&=&\Delta(s;f)L\left(1-s;f^+\right),
\end{eqnarray}
where
\begin{eqnarray}\label{delta}
\Delta(s;f):=-i\left({\dfrac{q}{2\pi}}\right)^{1-s}{\dfrac{\Gamma(1-s)}{\sqrt{q}}}\left(\mathrm{e}\left(\dfrac{s}{4}\right)-\delta \mathrm{e}\left(-\dfrac{s}{4}\right)\right).
\end{eqnarray}
This setting includes the case of Dirichlet $L$-functions associated with (not necessarily primitive) residue class characters $\chi\bmod\,q$ (see \cite{apostol} for details). Some of our results generalize previous ones for the Riemann zeta-function \cite{Kalp,korolev,steusur}; however, the findings concerning uniform distribution and universality are new (although the latter property has been considered in this context earlier by Maxim Korolev and Antanas Laurin\v cikas \cite{kola} for a special case). Our approach is inspired by rather general though deep theorems from the theory of functions due to \'Emile Picard, Gaston Julia and Rolf Nevanlinna, and our analysis relies mainly on the functional equation \eqref{feq2}.
\par
In 1879, Picard \cite{picard} proved that if an analytic function $f$ has an essential singularity at a point $\omega$, then $f(s)$ takes all possible complex values with at most a single exception (infinitely often) in every neighbourhood of $\omega$. And in 1919, a little more than one hundred years ago, Julia \cite{julia} refined Picard's great theorem by showing that one can even add a restriction on the angle at $\omega$ to lie in an arbitrarily small cone (see also \cite[\S 12.4]{burckel}).
For Dirichlet series appearing in number theory, however, it is more natural to consider so-called {\it Julia lines} rather than Julia directions.
For this and further references we refer to \cite{christ,steudi,steusur}.
\par
Given a complex number $a$, the solutions of the equation
$$
\Delta\left(s;f\right)=a
$$
are called the {\it $a$-points} of $\Delta\left(s;f\right)$, and we denote them as $\delta_a:=\beta_a+i\gamma_a$.
We shall show that for any fixed $a\neq 0$ {\it most} of the $a$-points are clustered around the critical line $1/2+i\hbox{\font\dubl=msbm10 scaled 1100 {\dubl R}}$ which, therefore, is a Julia line for $\Delta\left(s;f\right)$. Moreover, we prove that the mean of the values $L(\delta_a;f)$ exists and that every complex number appears as such a limit. This indicates an interesting link between the distribution of $a$-points of $\Delta(s;f)$ and the values taken by $L(s;f)$. For the case of the Riemann zeta-function and values $a$ from the unit circle these $a$-points have been studied by Justas Kalpokas and the second author \cite{Kalp} as well as by Kalpokas, Maxim Korolev and the second author \cite{korolev}, respectively. In this situation the $\exp(2i\phi)$-points correspond to intersections of the curve $t\mapsto \zeta\left({\frac{1}{ 2}}+it\right)$ with straight lines $\exp(i\phi)\hbox{\font\dubl=msbm10 scaled 1100 {\dubl R}}$ through the origin.
\begin{Theorem}\label{julia}
Let $f$ be an even or odd $q$-periodic function. Then, the function $s\mapsto \Delta\left(s;f\right)$ is a meromorphic function with two exceptional values $0$ and $\infty$ in the sense of Nevanlinna theory and the critical line $1/2+i\hbox{\font\dubl=msbm10 scaled 1100 {\dubl R}}$ is a Julia line for $\Delta(s;f)$. Moreover, given a complex number $a\neq 0$, the number $N_a(T;f)$ of $a$-points $\delta_a= \beta_a+ i\gamma_a$ of $\Delta\left(s;f\right)$ satisfying $0<\beta_a<1, 0<\gamma_a<T$ is asymptotically equal to
\begin{equation}\label{na}
N_a(T;f)=\dfrac{T}{2\pi}\log{\dfrac{qT}{2\pi e}}+O(\log T).
\end{equation}
\end{Theorem}
\noindent The error term here and all error terms in the sequel depend on $a$ and $f$. The condition $0<\beta_a<1$ in the above theorem can be replaced by the condition that $\beta_a$ lies in an arbitrary open interval centered at $1/2$. This can be easily explained using an asymptotic equation for $\Delta(s;f)$ (appearing in the next section).
\begin{Theorem}\label{mean}
Let $f$ be an even or odd $q$-periodic function and let $\delta_a=\beta_a+i\gamma_a$ denote the $a$-points of $\Delta(s;f)$. Then, for every complex number $a\neq 0$ and any $\epsilon>0$,
\begin{eqnarray*}
\mathop{\sum_{0<\gamma_a<T}}_{ 0<\beta_a<1}L(\delta_a;f)&=&\left(f(1)+af^+(1)\right)\dfrac{T}{2\pi}\log{\dfrac{qT}{2\pi e}}+O_\epsilon\left(T^{1/2+\epsilon}\right).
\end{eqnarray*}
\end{Theorem}
The horizontal distribution of the $a$-points of $\Delta(s;f)$ is quite regular as we will see in the next section. We want to provide some additional information regarding their vertical distribution, since such questions arise quite often in zeta-function theory. For example, in the case of $\zeta(s)$, Edmund Landau \cite{land} proved the explicit formula
\begin{equation}\label{Lan}
\sum\limits_{0<\gamma\leq T}x^\rho=-\dfrac{T}{2\pi}\Lambda(x)+O_x(\log T),
\end{equation}
valid for any $T>1$ and any positive number $x>1$, where the $\rho:=\beta+i\gamma$ denote the non-trivial zeros of $\zeta(s)$ and $\Lambda(x)$ is the von Mangoldt function extended to the whole real line by setting it to be zero if $x$ is not a positive integer. The second author \cite{steu1} proved a similar result in the case of $a$-points of $\zeta(s)$, namely, for $a\neq1$,
\begin{equation}\label{St}\sum\limits_{0<\gamma_a\leq T}x^{\rho_a}=\dfrac{T}{2\pi}\left(a(x)-x\Lambda\left(\frac{1}{x}\right)\right)+O_{x,\epsilon}\left(T^{1/2+\epsilon}\right),
\end{equation}
where $\rho_a:=\beta_a+i\gamma_a$ denotes the so-called {\it non-trivial} $a$-points of $\zeta(s)$ and $a(x)$ is a certain computable arithmetical function extended to the whole real line. The authors did not succeed in proving a comparable asymptotic formula in the case of the $\delta_a$, but only an upper bound for the corresponding sum over the $a$-points of $\Delta$; this bound, however, is remarkably small compared with \eqref{Lan} or \eqref{St}.
In order to formulate this result we recall some standard notation. The number $\lfloor x\rfloor$ denotes the greatest integer which is smaller than or equal to the real number $x$ and $\mathbbm{1}_S$ denotes the characteristic function of a set $S$.
\begin{Theorem}\label{uniform}
Let $a\neq0$ be a fixed complex number.
Then there exists a constant $c>0$, depending on $a$, such that, for any $0<x\neq1$ and any $T,T'$ satisfying $\max\left\{1,4\pi/q\right\}\leq T<T+1\leq T'\leq2T$, we have that
\begin{eqnarray*}
\sum_{\substack{T<\gamma_a<T'}} x^{\delta_a} &\ll& x^{1/2}\left(x^{c/\log \frac{qT}{2\pi}}+x^{-c/\log \frac{qT}{2\pi}}\right)\left(|\log x|+\log T+\dfrac{\log T}{|\log x|}\right) \\
&&+~ \mathbbm{1}_{(1,+\infty)}(x) E_1(x,a,T)-\mathbbm{1}_{(0,1)}(x)E_2(x,a,T),
\end{eqnarray*}
where $E_1(x,a,T)$ and $E_2(x,a,T)$ are always zero, unless there is a positive integer $$j\leq\left\lfloor\dfrac{2\log\frac{qT}{4\pi}}{\log30}\right\rfloor$$
such that $x^{1/j}\in\left[qT/(4\pi),5qT/(4\pi)\right]$ or $x^{-1/j}\in\left[qT/(4\pi),5qT/(4\pi)\right]$, in which case we have that
\begin{eqnarray*}
E_1(x,a,T)&:=&\dfrac{x^{1/2}\log x}{j^{3/2}}\left(\dfrac{x^{1/(2j)}}{|a|^{j}}+\dfrac{x^{c/\log \frac{qT}{2\pi}}T^{1/2}}{30^{j}}\right),\\
E_2(x,a,T)&:=&\dfrac{x^{1/2}\log x}{j^{3/2}}\left({x^{-1/(2j)}}|a|^j+\dfrac{x^{-c/\log \frac{qT}{2\pi}}T^{1/2}}{30^{j}}\right),
\end{eqnarray*}
respectively.
In particular, if $x\neq1$ is such that $4\pi/(qT)\leq x\leq qT/(4\pi)$, then
$$ \sum_{\substack{T<\gamma_a<T'}} x^{\delta_a}\ll x^{1/2}\log T\left(1+\dfrac{1}{|\log x|}\right).
$$
\end{Theorem}
A sequence of real numbers $(x_n)_{n\in\mathbb{N}}$ is called {\it uniformly distributed modulo 1} if
$$
\lim\limits_{N\to\infty}\dfrac{1}{N}\sharp\left\{1\leq n\leq N:\lbrace x_n\rbrace\in[a,b]\right\}=b-a
$$
for any real numbers $0\leq a\leq b\leq1$, where $\lbrace x\rbrace:=x-\lfloor x\rfloor$ for $x\geq0$.
Hans Rademacher \cite{rader} employed Landau's formula \eqref{Lan} to prove, under the Riemann Hypothesis, that the sequence $(\alpha\gamma)_{\gamma>0}$ is uniformly distributed modulo $1$, where $\alpha\neq0$ is a real number and $(\gamma)_{\gamma>0}$ is the sequence of ordinates of the non-trivial zeros of $\zeta(s)$ in ascending order (and counted with multiplicities).
Peter Elliott \cite{elliot} and (independently) Edmund Hlawka \cite{hlawka} proved the uniform distribution of $(\alpha\gamma)_{\gamma>0}$ unconditionally.
Similarly, the second author \cite{steu1} proved the uniform distribution of the sequence $(\alpha\gamma_a)_{\gamma_a>0}$, where $(\gamma_a)_{\gamma_a>0}$ is the sequence of ordinates of the non-trivial $a$-points of $\zeta(s)$ in ascending order (counting multiplicities).
The common feature of both statements is that they follow from \eqref{Lan} and \eqref{St} and known results on the {\it clustering} of the non-trivial zeros (or $a$-points) of $\zeta(s)$ around the critical line. This finally leads to
\begin{equation}\label{ig}
\sum\limits_{0<\gamma<T}x^{i\gamma}=o_x(T\log T)=\sum\limits_{0<\gamma_a<T}x^{i\gamma_a}
\end{equation}
and uniform distribution follows from a well-known criterion of Hermann Weyl. It is noteworthy that, for fixed $x$, the best known bounds for the sums in \eqref{ig} are of order $T$.
Compared to these results, the bounds given in the following theorem for the case of $a$-points of $\Delta(s;f)$ are very different. This is probably not unexpected since $\Delta(s;f)$ is not a zeta-function to begin with and the clustering of its $a$-points around the critical line has a simple explanation as we shall see in Sections \ref{juliaa} and \ref{uniform}.
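For orientation we recall Weyl's criterion in the form used here: a sequence $(x_n)_{n\in\mathbb{N}}$ is uniformly distributed modulo $1$ if, and only if,
$$
\sum\limits_{n\leq N}\mathrm{e}\left(mx_n\right)=o(N)
$$
for every integer $m\neq0$. For $x_n=\alpha\gamma_a$ (with the ordinates $\gamma_a$ taken in ascending order) these exponential sums take the shape $\sum x^{i\gamma_a}$ with $x=\exp(2\pi m\alpha)$, which explains why estimates as in \eqref{ig} imply uniform distribution.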
\begin{Theorem}\label{u.d.}
Let $0<\gamma_a^{(1)}\leq\gamma_a^{{(2)}}\leq\gamma_a^{{(3)}}\leq\dots$ be the sequence of ordinates of all $a$-points of $\Delta(s;f)$ (counted with multiplicities) and having real part in $(0,1)$, where $f$ is an odd or even $q$-periodic function. Then, for any integer $N\geq0$ and any number $x\neq1$ satisfying $4\pi/(qN)\leq x\leq qN/(4\pi)$, we have
\begin{equation*}
\sum\limits_{N< n\leq 2N}x^{i\gamma_a^{{(n)}}}\ll\left(\dfrac{1}{|\log x|}+|\log x|\right)\log N.
\end{equation*}
In particular, the sequence of numbers $\alpha\gamma_a^{{(n)}}$, $n\in\mathbb{N}$, is uniformly distributed modulo $1$ for any real number $\alpha\neq0$.
\end{Theorem}
Robert Spira \cite{spira} showed that the $\Delta$-function appearing in the functional equation for the Riemann zeta-function $\zeta(s)=L(s;\mathbf{1})$ (with $\mathbf{1}$ denoting the constant function $1$) satisfies $\vert \Delta\left(s;\mathbf{1}\right)\vert <1$ for $1/2<\sigma<1$ and $t\geq10$. Together with the identity $\Delta\left(s;\mathbf{1}\right)\Delta\left(1-s;\mathbf{1}\right)=1$, this implies that, if $a$ is a complex number from the unit circle, then all but finitely many $a$-points of $\Delta\left(s;\mathbf{1}\right)$ have real part equal to $1/2$. Therefore, our theorems generalize some results from \cite{Kalp},
as well as recent results by Korolev \& Laurin\v cikas \cite{kola} on the uniform distribution of Gram points (the $1$-points of $\Delta\left(s;\mathbf{1}\right)$). This last article motivates us to prove the following {\it joint universality theorem}:
\begin{Theorem}\label{universal}
Let $f$ be an even or odd $q$-periodic function and denote the ordinates of the $a$-points of $\Delta(s;f)$ in ascending order by $\gamma_a^{(n)}$.
Let also $\chi_1,\chi_2,\dots,\chi_J$ be non-equivalent Dirichlet characters and $\psi\neq\mathbf{0}$ be an $r$-periodic arithmetical function with $r\neq2$.
Then, for any compact set $K$ with connected complement inside the strip $1/2<\sigma<1$, any continuous non-vanishing functions $g_1,g_2,\dots,g_J,h$ on $K$ which are analytic in its interior, any real numbers $z>0$ and $\xi_p$, indexed by the primes $p\leq z$, and any $\varepsilon>0$,
\begin{equation}\label{joint}
\liminf\limits_{N\to\infty}\dfrac{1}{N}\sharp\left\{1\leq n\leq N:\begin{array}{c}\max\limits_{1\leq j\leq J}\max\limits_{s\in K}\left|L\left(s+i\gamma_a^{{(n)}};\chi_j\right)-g_j(s)\right|<\varepsilon,\\ \& \ \max\limits_{p\leq z}\left\|\gamma_a^{{(n)}}\dfrac{\log p}{2\pi}-\xi_p\right\|<\varepsilon
\end{array}\right\}>0
\end{equation}
and
\begin{equation}\label{periodic}
\liminf\limits_{N\to\infty}\dfrac{1}{N}{\sharp\left\{1\leq n\leq N:\max\limits_{s\in K}\left|L\left(s+i\gamma_a^{{(n)}};\psi\right)-h(s)\right|<\varepsilon\right\}}>0,
\end{equation}
where $\sharp A$ denotes the cardinality of a set $A\subseteq\mathbb{R}$ and $\|x\|:=\min_{m\in\mathbb{Z}}|m-x|$.
In addition, if $\psi$ has period $r\geq3$ and is not a multiple of a Dirichlet character mod $r$, then $h$ is allowed to have zeros in $K$.
In the case when $r=2$, \eqref{periodic} holds for any compact set $K$ with connected complement inside the open set
$$D_0:=\left\{s\in\mathbb{C}:\dfrac{1}{2}<\sigma<1\right\}\setminus\left\{\log\left(1-\dfrac{\psi(2)}{\psi(1)}\right)+2k\pi i:k\in\mathbb{Z}\right\},$$
where $\log$ is the principal logarithm.
\end{Theorem}
\noindent This is a discrete version of Sergei Voronin's classical universality theorem for the Riemann zeta-function \cite{voronin}:
\begin{eqnarray}\label{univer}
\liminf\limits_{T\to\infty}\dfrac{1}{T}\,{\rm meas}\left\{\tau\in[0,T]:\max\limits_{s\in K}\left|\zeta(s+i\tau)-h(s)\right|<\varepsilon\right\}>0
\end{eqnarray}
(resp. its simultaneous version \cite{voro2} for a family of Dirichlet $L$-functions, which is often called {\it joint universality}). As a matter of fact, Voronin proved that there exists some real $\tau$ such that $\zeta(s+i\tau)$ is $\varepsilon$-close to $h(s)$ when $s$ ranges in a disc with center $3/4$ and radius $r<1/4$ (see for example \cite[Chapter VII]{karat}). A few years later Axel Reich \cite{reich} and (independently) Bhaskar Bagchi \cite{bagchi} obtained \eqref{univer} (which is implicit in Voronin's work) and also provided a discrete version with respect to arithmetic progressions
\begin{eqnarray*}\label{duniver}
\liminf\limits_{N\to\infty}\dfrac{1}{N}{\sharp\left\{1\leq n\leq N:\max\limits_{s\in K}\left|\zeta\left(s+id n\right)-h(s)\right|<\varepsilon\right\}}>0,
\end{eqnarray*}
where $d$ is a fixed non-zero real number. Theorem \ref{universal} is of similar nature, where the shifts are ordinates of $a$-points of $\Delta(s;f)$.
\section{Proof of Theorem \ref{julia}}\label{juliaa}
We first remark that $\Delta(s;f)$ depends only on the period $q$ of $f$ and $\delta=\pm1$ which determines whether $f$ is an even or odd function. We also observe that if $f$ is an even or odd $q$-periodic arithmetical function, then so are $f^\pm$ and $\overline{f}$ (the complex conjugate of $f$). In the rest of this paper, we will use the simplified notation
\begin{eqnarray}\label{delta1}
\Delta(s):=\Delta(s;f)=\Delta\left(s;\overline{f}\right)=\Delta\left(s;f^+\right).
\end{eqnarray}
This function $\Delta(s)$ is the product of an exponential function, the Gamma-function, and a trigonometric function.
It is well-known that $\Gamma(z)^{-1}$ is an entire function with only simple zeros at $z=-n$ for $n\in\hbox{\font\dubl=msbm10 scaled 1100 {\dubl N}}_0:=\hbox{\font\dubl=msbm10 scaled 1100 {\dubl N}}\cup\{0\}$.
Hence, by (\ref{delta}), $\Delta(s)$ is regular except for simple poles at the positive odd integers (if $\delta=+1$), respectively for the positive even integers (if $\delta=-1$); moreover, $\Delta(s)$ vanishes exactly for the non-positive even integers (if $\delta=+1$), respectively for the negative odd integers. One can show by an application of Rouch\'e's theorem (as Levinson did in \cite{levi} for $\Delta(s;\mathbf{1})$, the case of the Riemann zeta-function) that there are a few $a$-points of $\Delta(s)$ in a neighbourhood of the real line; their count inside a strip $\left\{x+iy:|x|\leq r\text{ and }|y|\leq1\right\}$ is $O(r)$ as $r\to\infty$. In comparison with Formula \eqref{na} it follows (along the lines of \cite[Section 7.3]{steudi}) that $0$ and $\infty$ are exceptional values of $\Delta(s)$ in the sense of Nevanlinna theory. It remains to prove the Riemann-von Mangoldt-type formula \eqref{na}.
It is an easy consequence of Stirling's formula,
\begin{equation} \label{stirling}
\Gamma(\sigma+it)=\sqrt{2\pi}t^{\sigma+it-1/2}\exp\left(-{\dfrac{\pi t}{ 2}}-it+{\dfrac{\pi i}{ 2}}\left(\sigma-\dfrac{1}{2}\right)\right)\left(1+O\left(\dfrac{1}{t}\right)\right),
\end{equation}
and the reflection principle
\begin{equation}\label{reflection}\Gamma(\sigma-it)=\overline{\Gamma(\sigma+it)}
\end{equation}
both valid uniformly for $t\geq 1$ and $\sigma$ from any strip of bounded width, and (\ref{delta}) that
\begin{equation}\label{pow}
\Delta(\sigma+it)=\delta\left({\dfrac{qt}{ 2\pi}}\right)^{1/2-(\sigma+it)}\exp\left(i\left(t+\dfrac{\pi}{4}\right)\right)\left(1+O\left(\dfrac{1}{t}\right)\right)
\end{equation}
as $t\to+\infty$.
Hence, as $t\to+\infty$, $\Delta(\sigma+it)$ tends to infinity for $\sigma<1/2$ and to zero for $\sigma>1/2$.
Thus, the critical line $1/2+i\hbox{\font\dubl=msbm10 scaled 1100 {\dubl R}}$ divides the upper half-plane into two domains where the limit $\lim_{t\to+\infty}\Delta(\sigma+it)$ exists in the compactified plane $\hbox{\font\dubl=msbm10 scaled 1100 {\dubl C}}\cup\{\infty\}$; on the critical line, however, the limit does not exist. The behaviour in the lower half-plane is ruled by conjugation,
\begin{eqnarray}\label{refle}
\Delta(\sigma-it) = \delta\,\overline{\Delta(\sigma+it)} &\text{ and }&
\Delta'(\sigma-it) = \delta\,\overline{\Delta'(\sigma+it)},
\end{eqnarray}
as follows from \eqref{delta}, \eqref{reflection} and Cauchy's integral formula. Near the boundary (critical) line $1/2+i\hbox{\font\dubl=msbm10 scaled 1100 {\dubl R}}$, however, the distribution of values is rather different. As a matter of fact, $\Delta(s)$ takes every complex value $a\neq 0$ infinitely often there. Writing $a=\Delta(\delta_a)=\vert a\vert \exp(i\phi)$ with an $a$-point $\delta_a=\beta_a+i\gamma_a$ of $\Delta$ and comparing with (\ref{pow}) implies that
\begin{eqnarray}\label{amod}
\vert a\vert &=& \left({\dfrac{q\gamma_a}{ 2\pi}}\right)^{1/2-\beta_a} \left(1+O\left(\dfrac{1}{\gamma_a}\right)\right),\\
\phi &\equiv& \gamma_a\log {\dfrac{2\pi e}{ q\gamma_a}}+{\dfrac{\pi}{ 4}}+\dfrac{(1-\delta)\pi}{2}+O\left(\dfrac{1}{\gamma_a}\right)\ \bmod\, 2\pi.\nonumber
\end{eqnarray}
This shows that $\beta_a\to 1/2$ as $\gamma_a\to+\infty$ (and explains a remark from the introduction). In particular, there exists a real number $t_a>0$, depending only on $a,\delta$ and $q$, such that $t_a$ is not an ordinate of any $a$-point of $\Delta$ and $\beta_a\in(0,1)$ if, and only if, $\gamma_a\geq t_a$; we can actually choose $t_a$ here such that $\beta_a$ is included in any open interval centered at $1/2$, but the way we define it here yields, for instance, $N_a(T;f)=\sharp\lbrace \gamma_a:t_a<\gamma_a<T\rbrace$.
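Although not needed in the sequel, the clustering around the critical line can easily be made quantitative: taking logarithms in the first formula of \eqref{amod} gives
$$
\beta_a=\dfrac{1}{2}-\dfrac{\log\vert a\vert}{\log\frac{q\gamma_a}{2\pi}}+O\left(\dfrac{1}{\gamma_a\log\gamma_a}\right)
$$
as $\gamma_a\to+\infty$; in particular, if $\vert a\vert=1$, then $\beta_a=1/2+O\left(1/(\gamma_a\log\gamma_a)\right)$.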
Before showing \eqref{na}, we use an argument similar to \cite[\S 9.2]{tit} to prove
\begin{align}\label{app1}
N_a(T+1;f)-N_a(T;f)\ll\log T
\end{align}
for any $T\geq t_a+3$. Indeed, if $n(r)$ denotes the number of $a$-points of $\Delta(s)$ in the disc with center $2+iT$ and radius $r$, then
$$
N_a(T+1;f)-N_a(T;f)\leq n\left(\sqrt{5}\right)\ll\int_0^3\dfrac{n(r)}{r}\,{\rm{d}} r.
$$
It follows from Jensen's formula (see for example \cite[\S 3.61]{tit1}) and \eqref{pow} that
\begin{eqnarray*}
\int_0^3\dfrac{n(r)}{r}\,{\rm{d}} r&=&\dfrac{1}{2\pi}\int_0^{2\pi}\log\left|\Delta\left(2+iT+3e^{i\theta}\right)-a\right|\,{\rm{d}}\theta-\log\left|\Delta\left(2+iT\right)-a\right|\\&\ll&\log T
\end{eqnarray*}
and thus \eqref{app1} holds.
To prove \eqref{na} we apply the argument principle to the function $\Delta(s)-a$ and integrate over the counterclockwise oriented rectangle ${\mathcal C}$ with vertices $-1+it_a$, $2+it_a$, $2+iT$ and $-1+iT$. This gives
\begin{equation}\label{arg_prin}
2\pi i N_a\left(T;f\right) =\int_{\mathcal{C}} {\dfrac{\Delta'(s)}{ \Delta(s)-a}} \,{\rm{d}} s;
\end{equation}
since all poles of $\Delta(s)$ lie on the real line, they have no effect here. In addition, all zeros lie outside ${\mathcal C}$; hence, we may rewrite the integrand as
\begin{equation}\label{bibo}
{\dfrac{\Delta'(s)}{\Delta(s)-a}}={\dfrac{\Delta'}{ \Delta}}(s)\cdot \frac{1}{1 - a/\Delta(s)}
\end{equation}
or
\begin{equation}\label{bibo2}
{\dfrac{\Delta'(s)}{\Delta(s)-a}}={\dfrac{\Delta'(s)}{-a}}\cdot \frac{1}{1 - \Delta(s)/a}.
\end{equation}
Now, taking into account (\ref{pow}) in combination with another form of Stirling's formula,
\begin{equation} \label{log_der}
{\dfrac{\Delta' }{ \Delta}}(\sigma+it) = -\log{\dfrac{qt}{ 2\pi}} + O\left(\dfrac{1}{t}\right),
\end{equation}
which is also valid for $t\geq t_a>0$ and $\sigma$ from any strip of bounded width, we obtain, for $\epsilon>0$ and $\sigma\geq1/2+\epsilon$,
\begin{equation}\label{one}
{\dfrac{\Delta'(s)}{\Delta(s)-a}}\ll_\epsilon t^{1/2-\sigma}\log t.
\end{equation}
Similarly, we have, for $\epsilon>0$ and $\sigma\leq1/2-\epsilon$, that
\begin{eqnarray}\label{two}
{\dfrac{\Delta'(s) }{ \Delta(s)-a}}&=&{\dfrac{\Delta'}{\Delta}}(s)\cdot \left(1+\sum_{j\geq 1}\left({\dfrac{a}{ \Delta(s)}}\right)^j\right)\\
&=& -\log{\dfrac{qt}{ 2\pi}} + O_\epsilon\left(t^{-1}+t^{\sigma-1/2}\log t\right)\nonumber;
\end{eqnarray}
here (\ref{pow}), obviously, allows us to expand the second factor into a geometric series.
Next, it follows from \eqref{one} that the contribution of the integral over the right vertical segment is negligible:
$$
\int_{2+it_a}^{2+iT}{\dfrac{\Delta'(s)}{ \Delta(s)-a}} \,{\rm{d}} s \ll 1.
$$
Using \eqref{two}, however, the integral over the left vertical segment contributes
\begin{eqnarray*}
\int_{-1+iT}^{-1+it_a}{\dfrac{\Delta'(s)}{\Delta(s)-a}} \,{\rm{d}} s
&=& -i\int_{t_a}^T\left(-\log{\dfrac{qt}{ 2\pi}}+O\left(\dfrac{1}{t}\right)\right)\,{\rm{d}} t+O(\log T)\\
&=&i T\log{\dfrac{qT}{ 2\pi e}}+O(\log T).
\end{eqnarray*}
It remains to estimate the integrals on the horizontal segments of $\mathcal{C}$. The lower one is trivially bounded
$$
\int_{-1+it_a}^{2+it_a}{\dfrac{\Delta'(s)}{\Delta(s)-a}} \,{\rm{d}} s\ll1,
$$
while for the upper one we may use the following truncated partial fraction decomposition
\begin{align}\label{aprox}
{\dfrac{\Delta'(s)}{ \Delta(s)-a}}=\sum_{\vert t-\gamma_a\vert\leq 1}\frac1{s-\delta_a}+O(\log t),
\end{align}
which is valid for $\sigma\in[-1,2]$ and $t\geq t_a>0$. We will show this after finishing the proof of Theorem \ref{julia}.
In view of (\ref{aprox})
$$
\int_{2+iT}^{-1+iT}{\dfrac{\Delta'(s)}{\Delta(s)-a}} \,{\rm{d}} s=\sum_{\vert T-\gamma_a\vert\leq 1}\int_{2+iT}^{-1+iT}\frac{\,{\rm{d}} s}{s-\delta_a}+O(\log T).
$$
By the calculus of residues we obtain that
$$
\int_{2+iT}^{-1+iT}\frac{\,{\rm{d}} s}{s-\delta_a}=\left\{\int_{2+iT}^{2+i(T+2)}+\int_{2+i(T+2)}^{-1+i(T+2)}+\int_{-1+i(T+2)}^{-1+iT}\right\}\frac{\,{\rm{d}} s}{s-\delta_a}-2\pi i R(\delta_a),
$$
where $R(\delta_a)$ is $1$ or $0$ depending on whether $\delta_a$ lies inside the rectangle described above or not. Recall that $\beta_a\in(0,1)$. Hence, for every $a$-point with $\left|T-\gamma_a\right|\leq1$,
\begin{eqnarray*}
\int_{2+iT}^{-1+iT}\frac{\,{\rm{d}} s}{s-\delta_a}&\ll&\int_T^{T+2}\dfrac{\,{\rm{d}} t}{\left|2-\beta_a+i\left(t-\gamma_a\right)\right|}+\int_{-1}^{2}\dfrac{\,{\rm{d}} \sigma}{\left|\sigma-\beta_a+i\left(T+2-\gamma_a\right)\right|}\\
&&+ \int_T^{T+2}\dfrac{\,{\rm{d}} t}{\left|-1-\beta_a+i\left(t-\gamma_a\right)\right|}+1\\
&\ll&1.
\end{eqnarray*}
In combination with inequality \eqref{app1}, we obtain
\begin{align*}
\int_{2+iT}^{-1+iT}{\dfrac{\Delta'(s)}{\Delta(s)-a}} \,{\rm{d}} s\ll\sum_{\vert T-\gamma_a\vert\leq 1}1+O(\log T)\ll\log T.
\end{align*}
Finally, we arrive at
$$
\int_{\mathcal{C}} {\dfrac{\Delta'(s)}{\Delta(s)-a}} \,{\rm{d}} s=i T\log{\dfrac{qT}{ 2\pi e}}+O(\log T).
$$
Substituting this into \eqref{arg_prin} finishes the proof of \eqref{na}.
It remains to show \eqref{aprox}. For this purpose, we apply Jacques Hadamard's theory of functions of finite order (see \cite[\S 5.3]{palka}); our reasoning is similar to the case of the Riemann zeta-function (see \cite[\S 9.6]{tit}).
As already mentioned, $\Delta(s)$ is analytic except for simple poles at some positive integers.
Thus,
$$
F(s):=(\Delta(s)-a)\cdot \Gamma(1-s)^{-1}
$$
defines an entire function. By Stirling's formula it follows that $F$ is of order one. Hence, Hadamard's factorization theorem implies the product representation
$$
F(s)=\exp(A+Bs)\prod_{\delta_a}\left(1-\dfrac{s}{ \delta_a}\right)\exp\left(\dfrac{s}{ \delta_a}\right),
$$
where $A$ and $B$ are certain complex constants and the product is taken over {\it all} zeros $\delta_a$ of $F(s)$. Taking the logarithmic derivative, we deduce
$$
\dfrac{F'}{ F}(s)=B+\sum_{\delta_a}\left(\frac1{s-\delta_a}+\frac1{\delta_a}\right).
$$
Since
$$
\dfrac{F'}{ F}(s)=\dfrac{\Delta'(s)}{ \Delta(s)-a}+\dfrac{\Gamma'} {\Gamma}(1-s)
$$
and $\dfrac{\Gamma'}{ \Gamma}(1-s)\ll \log t$ (also a consequence of Stirling's formula), we have
$$
\dfrac{\Delta'(s)}{\Delta(s)-a}=\sum_{\delta_a}\left(\frac1{s-\delta_a}+\frac1{\delta_a}\right)+O(\log t).
$$
Setting $s=2+it$ and subtracting from the latter expression
$$
\dfrac{\Delta'(2+it)}{ \Delta(2+it)-a},
$$
which is $O(1)$ by \eqref{one}, we obtain
$$
\dfrac{\Delta'(s)}{ \Delta(s)-a}=\sum_{\delta_a}\left(\frac1{s-\delta_a}-\dfrac{1}{ 2+it-\delta_a}\right)+O(\log t).
$$
In view of \eqref{app1}, it follows that
$$
\sum_{\vert t-\gamma_a\vert\leq 1}\frac1{2+it-\delta_a}\ll \sum_{\vert t-\gamma_a\vert\leq 1}1=N_a(t+1;f)-N_a(t-1;f)\ll \log t.
$$
A short computation shows that, for any positive integer $n$,
$$
\sum_{t+n<\gamma_a\leq t+n+1}\left(\frac1{s-\delta_a}-\dfrac{1}{ 2+it-\delta_a}\right)\ll \sum_{t+n<\gamma_a\leq t+n+1}\dfrac {1}{ n^2}\ll\dfrac {\log (t+n)}{ n^2}.
$$
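Indeed, for $s=\sigma+it$ with $\sigma\in[-1,2]$ and $t+n<\gamma_a\leq t+n+1$ we have
$$
\frac1{s-\delta_a}-\dfrac{1}{2+it-\delta_a}=\dfrac{2-\sigma}{(s-\delta_a)(2+it-\delta_a)}\ll\dfrac{1}{n^2},
$$
since both factors in the denominator are of modulus at least $\vert t-\gamma_a\vert>n$, while the number of such $a$-points is $\ll\log(t+n)$ by \eqref{app1}.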
And since $\sum_{n\geq 1}\log(t+n)/n^2\ll \log t$, it follows that
$$
\sum_{\gamma_a>t+1}\left(\frac1{s-\delta_a}-\dfrac{1}{ 2+it-\delta_a}\right)\ll \log t;
$$
obviously, we can bound the sum over the $a$-points $\delta_a$ satisfying $\gamma_a<t-1$ similarly by the same bound. Consequently, the contribution of the $a$-points distant from $s$ is negligible. This implies (\ref{aprox}) and concludes the proof of Theorem \ref{julia}.
\section{Proof of Theorem \ref{mean}}
In view of \eqref{na}, it follows that for any given $T_0>0$, there exists $T\in[T_0,T_0+1)$ such that
\begin{equation}\label{condi}
\min_{\delta_a}\vert T-\gamma_a\vert\gg\dfrac{1}{\log T},
\end{equation}
where the minimum is taken over all $a$-points $\delta_a$. In the proof of Theorem \ref{julia} we have observed that there exists a real number $t_a>0$ such that $\beta_a\in(0,1)$ if and only if $\gamma_a>t_a$. Thus integrating over the counterclockwise oriented rectangle ${\mathcal C}$ with vertices $2+it_a, 2+iT, -\epsilon+iT, -\epsilon+it_a$, where $\epsilon>0$, we obtain
\begin{eqnarray}\label{i1234}
\lefteqn{\mathop{\sum_{0<\gamma_a<T}}_{ 0<\beta_a<1}L(\delta_a;f)}\nonumber\\
&=&\frac{1}{2\pi i}\left\{\int_{2+it_a}^{2+iT}\hspace*{-1pt}+\hspace*{-1pt}\int_{2+iT}^{-\epsilon+iT}\hspace*{-1pt}+\hspace*{-1pt}\int_{-\epsilon+iT}^{-\epsilon+it_a}\hspace*{-1pt}+\hspace*{-1pt}\int_{-\epsilon+it_a}^{2+it_a}\right\}\hspace*{-1pt} \frac{\Delta'(s)}{\Delta(s)-a}L(s;f)\,{\rm{d}} s\nonumber \\
&=:&\sum_{1\leq j\leq 4}\mathcal{I}_j.
\end{eqnarray}
Since the Dirichlet series coefficients $f(n)$ are bounded, we have $L(s;f)\ll_\epsilon 1$ for $\sigma\geq 1+\epsilon$. In view of the functional equation \eqref{feq2} and the asymptotic formula (\ref{pow}) it follows from the Phragm\'en-Lindel\"of principle (which is a kind of maximum principle for unbounded domains) that
\begin{equation}\label{ioi}
\begin{array}{cccc}
L(\sigma+it;f)&\ll_\epsilon& 1+t^{(1-\sigma)/2+\epsilon}&\quad\mbox{for}\quad \sigma\in[0,2],\ t\geq x_0>0,
\end{array}
\end{equation}
and
\begin{equation}\label{oio}
\begin{array}{cccc}
L(\sigma+it;f)&\ll_\epsilon &t^{1/2-\sigma+\epsilon}&\quad\mbox{for}\quad \sigma\in [-1,0],\ t\geq x_0>0
\end{array}
\end{equation}
(see \cite[\S 5.1]{tit} and \cite[\S 9.41]{tit1} for the case of $\zeta(s)$; the generalization to $L(s;f)$ is straightforward).
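For instance, the bound \eqref{oio} is immediate from the functional equation: for $\sigma\in[-1,0]$ we have $1-\sigma\geq1$, so that \eqref{feq2} and \eqref{pow} give
$$
L(\sigma+it;f)=\Delta(\sigma+it)L\left(1-\sigma-it;f^+\right)\ll_\epsilon \left(\dfrac{qt}{2\pi}\right)^{1/2-\sigma}t^{\epsilon}\ll_\epsilon t^{1/2-\sigma+\epsilon},
$$
while \eqref{ioi} follows by applying the Phragm\'en-Lindel\"of principle in the strip $0\leq\sigma\leq2$ to interpolate between the bounds on the lines $\sigma=0$ and $\sigma=2$.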
We begin with the vertical integrals in (\ref{i1234}). Since the range of integration of $\mathcal{I}_1$ lies in the half-plane $\sigma>1/2$, it follows from (\ref{one}) in combination with \eqref{ioi} that
\begin{eqnarray}\label{i1}
{\mathcal I}_1 \ll \int_{t_a}^{T} t^{-3/2}\log t \,{\rm{d}} t\ll 1.
\end{eqnarray}
The integral $\mathcal{I}_3$ is taken over a line segment in the half-plane $\sigma<1/2$. It follows from the functional equation \eqref{feq2} and relation \eqref{delta1} that
\begin{equation}\label{neu}
L(1-s;f)=\Delta(1-s)L\left(s;f^+\right)=\Delta(1-s)\Delta(s)L\left(1-s;\left(f^{+}\right)^+\right).
\end{equation}
Here we observe an instance of the Fourier inversion formula, namely $\left(f^+\right)^+=\delta f$. In order to see this, we compute via (\ref{dft}) that
\begin{eqnarray*}
\left(f^{\pm}\right)^+(n)&=&{\dfrac{1}{\sqrt{q}}}\sum_{a\bmod\,q}f^{\pm}(a)\mathrm{e}\left(\dfrac{an}{q}\right)\\
&=&{\dfrac{1}{ q}}\sum_{a,b\bmod\,q}f(b)\mathrm{e}\left(\dfrac{\pm ba}{q}\right)\mathrm{e}\left(\dfrac{an}{q}\right)\\
&=&{\dfrac{1}{ q}}\sum_{b\bmod\,q}f(b)\sum_{a\bmod\,q}\mathrm{e}\left(\dfrac{a(n\pm b)}{q}\right)\\
&=&f(\mp n),
\end{eqnarray*}
by the orthogonality relation for additive characters (or simply using geometric series). This proves
\begin{equation}\label{ftf}
\left(f^{\pm}\right)^+=f_\mp=\delta f.
\end{equation}
Inserting this into \eqref{neu} leads to
\begin{equation}\label{ldel}
\Delta(1-s)\Delta(s)=\delta.
\end{equation}
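Indeed, \eqref{neu} in combination with \eqref{ftf} shows that
$$
L(1-s;f)=\Delta(1-s)\Delta(s)L\left(1-s;\delta f\right)=\delta\,\Delta(1-s)\Delta(s)L(1-s;f),
$$
and since $\delta=\pm1$, the identity \eqref{ldel} follows.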
Thus using \eqref{two}, we get
\begin{eqnarray}\label{i3}
{\mathcal I}_3 &=& -\dfrac{1}{ 2\pi i}\int_{-\epsilon+it_a}^{-\epsilon+iT} {\dfrac{\Delta'}{\Delta}}(s)\left(1+\dfrac{a}{\Delta(s)}+\sum_{j\geq 2}\left(\dfrac{a}{\Delta(s)}\right)^j\right)L(s;f)\,{\rm{d}} s \nonumber\\
&=&\dfrac{1}{ 2\pi i}\int_{1+\epsilon-it_a}^{1+\epsilon-iT} \dfrac{\Delta'}{\Delta}(1-s) \nonumber \\
&&\qquad \times \left(1+\delta a\Delta(s)+\sum_{j\geq 2}\left(\delta a\Delta(s)\right)^j\right)L(1-s;f)\,{\rm{d}} s\nonumber \\
&=:&\sum_{1\leq \ell\leq 3}\mathcal{J}_\ell.
\end{eqnarray}
We first evaluate $\mathcal{J}_1$ by computing its conjugate $\overline{\mathcal{J}_1}$. The functional equation \eqref{feq2}, relations \eqref{delta1} and \eqref{refle}, estimate \eqref{log_der}, and the Dirichlet series representation of $L\left(s;\overline{f^+}\right)$ in the half-plane $\sigma>1$ yield
\begin{eqnarray}\label{grobi}
\overline{\mathcal J}_1 &=& -\dfrac{1}{ 2\pi i}\overline{\int_{t_a}^{T}{\dfrac{\Delta'}{\Delta}(-\epsilon+it)}{\Delta(-\epsilon+it)}{L\left(1+\epsilon-it;{f}^+\right)}(-i)\,{\rm{d}} t}\nonumber\\
&=&\dfrac{1}{2\pi i}{\int_{t_a}^{T}-\overline{\dfrac{\Delta'}{\Delta}(-\epsilon+it)}\,\delta{\Delta(-\epsilon-it)}{L\left(1+\epsilon+it;\overline{{f}^+}\right)}i\,{\rm{d}} t}\nonumber\\
&=& \delta\int_{t_a}^{T}\left(\log\dfrac{q\tau}{2\pi}+O\left(\dfrac{1}{\tau}\right)\right)\,{\rm{d}}\left(\frac{1}{2\pi i}\int_{1+\epsilon+i}^{1+\epsilon+i\tau}\Delta(1-s)L\left(s;\overline{f^+}\right)\,{\rm{d}} s\right).
\end{eqnarray}
We evaluate the inner integral by applying Gonek's lemma: {\it Suppose that $\sum_{n=1}^{\infty}a(n)n^{-s}$ converges for $\sigma>1$ where $a(n)\ll n^\epsilon$ for any $\epsilon >0$.
Let $\omega=\pm 1$ and $\epsilon>0$.
Then we have}
\begin{eqnarray*}
&&\frac{1}{2\pi i}\int_{1+\epsilon+i}^{1+\epsilon+i\tau}\left(\frac{q}{2\pi}\right)^s\Gamma(s)\exp\left(\omega \frac{\pi i s}{2}\right)\sum_{n=1}^{\infty}\frac{a(n)}{n^s}\,{\rm{d}} s \\
&&\qquad= \begin{cases}
\displaystyle \sum_{n\leq \frac{\tau q}{2\pi}}a(n)\exp(-2\pi i\frac{n}{q})+O_\epsilon\left(\tau^{1/2+\epsilon}\right),&\mbox{ if } \quad \omega = -1, \\[5mm]
O_\epsilon(1), & \mbox{ if }\quad \omega = +1.
\end{cases}
\end{eqnarray*}
A proof of this lemma follows along the lines of \cite[Lemma 3]{steudi1}. A slightly weaker statement was originally proved by Gonek \cite[Lemma 5]{gonek} (and a precursor of it can be found in \cite[\S 7.4]{tit}).
Since
$$
\Delta(1-s)=\left(\dfrac{q}{2\pi}\right)^s\dfrac{\Gamma(s)}{\sqrt{q}}\left(\delta \mathrm{e}\left(\frac{s}{4}\right)+\mathrm{e}\left(-\frac{s}{4}\right)\right)
$$
by the definition (\ref{delta}) of $\Delta$, this gives here
$$
\frac{1}{2\pi i}\int_{1+\epsilon+i}^{1+\epsilon+i\tau}\Delta(1-s)L\left(s;\overline{f^+}\right)\,{\rm{d}} s
=\dfrac{1}{ \sqrt{q}}\sum_{n\leq \frac{\tau q}{ 2\pi}}\overline{f^+}(n)\,\mathrm{e}\left(-\dfrac{n}{q}\right)+O\left(\tau^{1/2+\epsilon}\right);
$$
we can simplify the right-hand side: taking into account the $q$-periodicity, we find
\begin{eqnarray*}
\sum_{n\leq \frac{\tau q}{ 2\pi}}\overline{f^+}(n)\,\mathrm{e}\left(-\dfrac{n}{q}\right)&=&\sum_{a\bmod\,q}\overline{f^+}(a)\,\mathrm{e}\left(-\dfrac{a}{q}\right)\mathop{\sum_{n\leq \frac{\tau q}{ 2\pi}}}_{ n\equiv a\bmod\,q}1\\
&=&\overline{\sum_{a\bmod\,q}{f}^+(a)\,\mathrm{e}\left(\dfrac{a}{q}\right)}\left(\dfrac{\tau}{ 2\pi}+O(1)\right)\\
&=&\sqrt{q}\,\overline{(f^+)^+}(1)\dfrac{\tau}{ 2\pi}+O(1) \\
&=& \sqrt{q}\delta\,\overline{f}(1)\dfrac{\tau}{ 2\pi}+O(1).
\end{eqnarray*}
This leads via (\ref{grobi}) to
\begin{equation}\label{j1}
\mathcal{J}_1 = f(1)\frac{T}{2\pi}\log\frac{qT}{2\pi e}+O\left(T^{1/2+\epsilon}\right).
\end{equation}
Next we consider
$$
{\mathcal J}_2 = \dfrac{a}{ 2\pi\delta i}\int_{1+\epsilon-it_a}^{1+\epsilon-iT}\frac{\Delta'}{ \Delta}(1-s)\Delta(s)\Delta(1-s)L\left(s;f^+\right)\,{\rm{d}} s.
$$
In view of (\ref{ldel}) this equals $a/(2\pi)$ times the conjugate of
\begin{eqnarray*}
\lefteqn{-\int_{t_a}^{T}\overline{\dfrac{\Delta'}{\Delta}(-\epsilon+it)L(1+\epsilon-it;{f^+})}\,{\rm{d}} t}\\&=&\int_{t_a}^T\left(\log\dfrac{qt}{ 2\pi}+O\left(\dfrac{1}{t}\right)\right)\sum_{n\geq 1}\dfrac{\overline{f^+}(n)}{n^{1+\epsilon+it}}\,{\rm{d}} t\\
&=&\overline{f^+}(1)\int_{t_a}^{T}\log\frac{qt}{2\pi}\,{\rm{d}} t+\sum_{n\geq2}\dfrac{\overline{f^+}(n)}{n^{1+\epsilon}} J_n(T)+O(1),
\end{eqnarray*}
where we have used the absolute convergence in the half-plane $\sigma>1$, and where $J_n(T)$ is given for $n\geq 2$ by
$$
J_n(T):=\int_{t_a}^T\log\dfrac{qt}{ 2\pi}\exp(-it\log n)\,{\rm{d}} t.
$$
To bound this integral we shall use the first derivative test: {\it Given real functions $F$ and $G$ on an interval $[u,v]$ such that $G(t)/F'(t)$ is monotonic and $F'(t)/G(t)\geq M>0$ or $F'(t)/G(t)\leq -M<0$, then}
$$
\int_u^vG(t)\exp(iF(t))\,{\rm{d}} t\ll \dfrac{1}{ M}.
$$
This is essentially a classical lemma from \cite[\S 4.3]{tit}.
This leads to the estimate $J_n(T)\ll \frac{\log T}{\log n}$ for $n\geq 2$.
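Indeed, taking $F(t)=-t\log n$ and $G(t)=\log\frac{qt}{2\pi}$, the quotient $G(t)/F'(t)$ is monotonic in $t$ and
$$
\dfrac{F'(t)}{G(t)}=-\dfrac{\log n}{\log\frac{qt}{2\pi}}\leq-\dfrac{\log n}{\log\frac{qT}{2\pi}}<0\quad\mbox{for}\quad t_a\leq t\leq T,
$$
so the first derivative test yields $J_n(T)\ll\log\frac{qT}{2\pi}/\log n\ll\log T/\log n$.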
Hence, the contribution of the tail of the Dirichlet series is negligible and we get
\begin{equation}\label{j2}
{\mathcal J}_2=af^+(1)\dfrac{T}{ 2\pi}\log\dfrac{qT}{ 2\pi e}+O(\log T).
\end{equation}
Finally, we have to consider the third integral, which involves the tail of the geometric series.
To this end we apply \eqref{pow}, \eqref{log_der} and \eqref{oio} and get
$$
\mathcal{J}_3 \ll \int_{t_a}^{T} \log t\sum_{j\geq 2}t^{-j(1/2+\epsilon)}t^{1/2+\epsilon}\,{\rm{d}} t \ll T^{1/2+\epsilon}.
$$
This together with \eqref{j1} and \eqref{j2} substituted into \eqref{i3}, in combination with \eqref{i1} shows that the vertical integrals contribute
\begin{eqnarray}\label{verti}
\mathcal{I}_1+\mathcal{I}_3&=&\left(f(1)+af^+(1)\right)\dfrac{T}{ 2\pi}\log\dfrac{qT}{ 2\pi e}+O(T^{1/2+\epsilon}),\end{eqnarray}
which is already the main term.
It remains to consider the horizontal integrals in (\ref{i1234}).
Although they can be treated the same way as in \cite{steusur} we sketch the details.
The integral $\mathcal{I}_2$ in \eqref{i1234} can be rewritten with the aid of the truncated partial fraction decomposition \eqref{aprox} as
$$
{\mathcal I}_2= \dfrac{1}{ 2\pi i}\int_{2+iT}^{-\epsilon+iT}\left(\sum_{\vert T-\gamma_a\vert\leq 1}\frac1{s-\delta_a}+O(\log T)\right)L(s;f)\,{\rm{d}} s.
$$
Taking into account (\ref{condi}) we notice that $1/\vert s-\delta_a\vert\ll \log T$.
Hence, utilizing \eqref{app1}, \eqref{ioi} and \eqref{oio}, we can show
\begin{eqnarray*}
{\mathcal I}_2&\ll &(\log T)^2 \left\{\int_{-\epsilon}^{0}+\int_{0}^{1}+\int_1^{2}\right\}\vert L\left(\sigma+iT;f\right)\vert\,{\rm{d}} \sigma\\
&\ll_\epsilon & (\log T)^2\left\{T^{1/2+\epsilon}+T^{1/2+\epsilon}+1\right\}\\
&\ll_\epsilon& T^{1/2+\epsilon},
\end{eqnarray*}
where $\epsilon$ may take different values at different places. This is the desired bound for the horizontal integrals. In combination with \eqref{verti} we arrive via \eqref{i1234} at the asymptotic formula of the theorem. Finally, taking into account \eqref{ioi} and \eqref{oio}, we may replace the chosen $T$ (satisfying \eqref{condi}) with a general $T\geq t_a$ at the expense of an error of order $T^{1/2+\epsilon}$. This concludes the proof of Theorem \ref{mean}.
\section{Proof of Theorem \ref{uniform}}
We first note that \eqref{pow} implies
\begin{equation} \label{delta_asymp}
\vert\Delta(\sigma+it)\vert \asymp \left(\dfrac{qt}{2\pi}\right)^{1/2-\sigma},\quad t\geq t_0>0,
\end{equation}
while \eqref{amod} yields
\begin{equation}\label{rp}
\beta_a=\dfrac{1}{2}-\dfrac{\log|a|}{\log\frac{q\gamma_a}{2\pi}}+O\left(\dfrac{1}{\gamma_a\log\frac{q\gamma_a}{2\pi}}\right)
\end{equation}
for any $\gamma_a\geq T_0:=\max\left\{1,4\pi/q\right\}$.
Let $T\geq T_0$, $x>1$ and
$$
\alpha=\alpha(T) := \dfrac12+\dfrac{c}{\log{\frac{qT}{2\pi}}}
$$
where $c>0$ is a sufficiently large constant depending on $a$ satisfying
\begin{equation}\label{dist}
\left|\beta_a-\dfrac{1}{2}\right|\leq\dfrac{c}{2\log \frac{qT}{2\pi}}
\end{equation}
for any $a$-point with $\gamma_a\geq T\geq T_0$ as follows from \eqref{rp}, and
\begin{equation}\label{ineq}
\begin{array}{ccccccccc}
&|\Delta(\alpha+it)|&\leq &K\left(\dfrac{qt}{2\pi}\right)^{-c/\log \frac{qT}{2\pi}}&\leq &Ke^{-c}&<&\dfrac{|a|}{30}\\
\\
|\Delta(\alpha+it)|^{-1}= &|\Delta(1-\alpha+it)|&\geq& L\left(\dfrac{qt}{2\pi}\right)^{c/\log \frac{qT}{2\pi}}&\geq &Le^{c}&>&30|a|
\end{array}
\end{equation}
for any $t\geq T\geq T_0$, where $K,L$ are absolute constants coming from \eqref{delta_asymp} and we can assume without loss of generality that $K>1$ is sufficiently large and $L=1/K$.
After choosing a suitable $K$, we then take $c$ sufficiently large. These inequalities show that $\Delta(\alpha+it) \neq a$ and $\Delta(1-\alpha+it) \neq a$ when $t\ge T$.
We know from \eqref{na} that for any $T_0\leq T<T+1\leq T'\leq2T$, we can find $T_1\in\left[T,T+1/2\right)$ and $T_2\in\left(T'-1/2,T'\right]$ such that
\begin{align}\label{T_i}
\min_{\ell=1,2}\min_{\delta_a}\left|T_\ell-\gamma_a\right|\gg\dfrac{1}{\log T}.
\end{align}
If $\mathcal{C}$ is the positively oriented rectangular contour with vertices $\alpha+iT_1$, $\alpha+iT_2$, $1-\alpha+iT_2$, $1-\alpha+iT_1$, then \eqref{na}, \eqref{dist} and the calculus of residues yield
\begin{eqnarray}\label{anf}
\sum_{\substack{T<\gamma_a<T'}} x^{\delta_a}&=& \sum_{\substack{T_1<\gamma_a<T_2}} x^{\delta_a}+\sum_{\substack{T<\gamma_a<T_1}}x^{\delta_a}+\sum_{\substack{T_2<\gamma_a<T'}} x^{\delta_a}\nonumber\\
&=&\frac{1}{ 2\pi i} \int_{\mathcal{C}} \frac{\Delta'(s)}{\Delta(s)-a} x^s\, \,{\rm{d}} s+O\left(x^\alpha\log T\right).
\end{eqnarray}
We break the integral
$$ \frac{1}{ 2\pi i} \int_{\mathcal{C}} \frac{\Delta'(s)}{\Delta(s)-a} x^s\, \,{\rm{d}} s $$
down into
$$
\frac{1}{2\pi i} \left(\int_{\alpha+iT_1}^{\alpha+iT_2} + \int_{\alpha+iT_2}^{1-\alpha+iT_2} + \int_{1-\alpha+iT_2}^{1-\alpha+iT_1} + \int_{1-\alpha+iT_1}^{\alpha+iT_1} \right) \frac{\Delta'(s)}{\Delta(s)-a} x^s \,\,{\rm{d}}{s} =: \sum_{1\leq j\leq 4}I_j.
$$
In view of \eqref{aprox} and \eqref{T_i}, for $\ell= 1, 2$ we have
\begin{eqnarray*}
\int_{1-\alpha+iT_\ell}^{\alpha+iT_\ell} \frac{\Delta'(s)}{\Delta(s)-a} x^s \,\,{\rm{d}}{s}
&=& \int_{1-\alpha+iT_\ell}^{\alpha+iT_\ell} \left(\sum_{\vert t-\gamma_a\vert\leq 1}\frac1{s-\delta_a} + O(\log{t})\right) x^s \,\,{\rm{d}}{s} \\
&=& \int_{1-\alpha+iT_\ell}^{\alpha+iT_\ell} \sum_{\vert t-\gamma_a\vert\leq 1} \frac1{s-\delta_a} x^s \,\,{\rm{d}}{s}
+ O\left(x^\alpha\right),
\end{eqnarray*}
since
$$
\int_{1-\alpha+iT_\ell}^{\alpha+iT_\ell} O(\log{t}) x^s \,\,{\rm{d}}{s}
\ll \log T_\ell \int_{1-\alpha}^\alpha x^\sigma\, \,{\rm{d}}\sigma
\ll x^\alpha (2\alpha-1) \log T \ll x^\alpha.
$$
Meanwhile,
\begin{eqnarray*}
\int_{1-\alpha+iT_\ell}^{\alpha+iT_\ell} \sum_{\vert t-\gamma_a\vert\leq 1} \frac1{s-\delta_a} x^s \,\,{\rm{d}}{s}
&\ll& \sum_{\vert T_\ell-\gamma_a\vert\leq 1} \int_{1-\alpha}^{\alpha} \frac{x^\sigma}{\left|\alpha-\beta_a+i(T_\ell-\gamma_a)\right|} \,\,{\rm{d}}{\sigma} \\
&\ll& x^\alpha (2\alpha-1) \log T \sum_{\vert T_1-\gamma_a\vert\leq 1} 1 \\
&\ll& x^\alpha\log T.
\end{eqnarray*}
Therefore,
\begin{equation} \label{hor}
I_2,I_4 \ll x^\alpha\log{T}.
\end{equation}
We now estimate the vertical integrals $I_1$ and $I_3$. For $\sigma=\alpha$, we use \eqref{bibo2} to write
\begin{equation} \label{delta_i1_rewrite}
\frac{\Delta'(s)}{\Delta(s)-a} = \frac{\Delta'(s)}{ -a} \cdot \frac{1}{1 - \Delta(s)/a}
= \frac{\Delta'(s)}{ -a} \left( 1 + \sum_{j\ge1} \left(\frac{\Delta(s)}{a}\right)^j \right);
\end{equation}
here the first inequality in (\ref{ineq}) allows us to expand the second factor into a geometric series. Applying this, we find
\begin{equation}\label{I_1ref}
I_1 = -\frac{x^{\alpha}}{2\pi a} \int_{T_1}^{T_2} \Delta'(\alpha+it) \left( \sum_{0\leq j< m} \left(\frac{\Delta(\alpha+it)}{a}\right)^j + \sum_{j\ge m} \left(\frac{\Delta(\alpha+it)}{a}\right)^j \right) x^{it} \,\,{\rm{d}}{t}.
\end{equation}
Here we pick $m$ depending on $T$, large enough to bound the last term in the integral trivially. By \eqref{delta_asymp} and again the first inequality in \eqref{ineq},
\begin{equation} \label{delta_i1_asymp}
\sum_{j\ge m} \left(\frac{\Delta(\alpha+it)}{a}\right)^j
= \frac{ \left(\frac{\Delta(\alpha+it)}{a} \right)^m}{1-\frac{\Delta(\alpha+it)}{a}}
\ll\dfrac{30}{29}\left(\dfrac{1}{30}\right)^m.
\end{equation}
Take
\begin{equation} \label{m_of_T}
m := \left\lfloor\dfrac{2\log{\frac{qT}{4\pi}}}{\log 30}\right\rfloor.
\end{equation}
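With this choice, the tail estimate \eqref{delta_i1_asymp} becomes
$$
\sum_{j\ge m} \left(\frac{\Delta(\alpha+it)}{a}\right)^j\ll\left(\dfrac{1}{30}\right)^{m}\leq 30\left(\dfrac{1}{30}\right)^{\frac{2\log\frac{qT}{4\pi}}{\log 30}}=30\left(\dfrac{4\pi}{qT}\right)^{2},
$$
which accounts for the error term in the following decomposition of $I_1$.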
It follows then by \eqref{delta_i1_asymp} that
\begin{eqnarray*}
I_1 &=& -\frac{x^{\alpha}}{2\pi a} \int_{T_1}^{T_2} \Delta'(\alpha+it) \sum_{0\leq j< m} \left(\frac{\Delta(\alpha+it)}{a}\right)^jx^{it} \,\,{\rm{d}}{t} \\
&&+~ O\left(x^\alpha\int_{T_1}^{T_2}\left|\Delta'(\alpha+it)\right|\left(\dfrac{4\pi}{qT}\right)^{2} \,{\rm{d}}{t}\right).
\end{eqnarray*}
The factor $\Delta'(\alpha+it)$ in the integrand can be estimated by means of \eqref{pow}; we obtain
\begin{eqnarray*}
\Delta'(\alpha+it) &=& \delta\left(\frac{qt}{ 2\pi}\right)^{1/2-(\alpha+it)}\exp\left({i\left(t+\frac{\pi}{ 4}\right)}\right) \left( -\log\frac{qt}{ 2\pi} + O\left(\frac{\log{qt}}{t}\right) \right)\\
&\ll& \log\frac{qt}{2\pi}.
\end{eqnarray*}
Here we substituted the value of $\alpha$ and used the first inequality in \eqref{ineq}.
It then follows that the last term in $I_1$ may be discarded:
$$
I_1 = -\frac{x^{\alpha}}{2\pi a} \int_{T_1}^{T_2} \Delta'(\alpha+it) \sum_{0\le j< m} \left(\frac{\Delta(\alpha+it)}{a}\right)^j x^{it} \,\,{\rm{d}}{t} + O\left(x^{\alpha}\right).
$$
Since
$$
\frac{{\rm{d}}}{{\rm{d}} t} \Delta(\alpha+it)^{j+1} = i(j+1)\Delta'(\alpha+it)\Delta(\alpha+it)^j,
$$
we can rewrite $I_1$ as
\begin{eqnarray}\label{I_1}
I_1 &=& -\frac{x^{\alpha}}{2\pi i}\sum_{1\le j\le m} \dfrac{1}{ja^j} \int_{T_1}^{T_2} \left(\Delta(\alpha+it)^{j}\right)' x^{it} \,\,{\rm{d}}{t} + O\left(x^{\alpha}\right)\nonumber\\
&=:& -\frac{x^{\alpha}}{2\pi i}\sum_{1\le j\le m} \dfrac{1}{ja^j} I_{1j}+ O\left(x^{\alpha}\right).
\end{eqnarray}
We estimate $I_{1j}$ for $1\le j\le m$. Integrating by parts, we obtain with the aid of \eqref{pow} and the first inequality in \eqref{ineq}
\begin{eqnarray}
I_{1j}&=&\Delta(\alpha+it)^{j} x^{it}\Big|_{T_1}^{T_2}-i\log x\int_{T_1}^{T_2}\Delta(\alpha+it)^{j}x^{it}\,{\rm{d}}{t}\nonumber\\
&\ll&\log x\left|\int_{T_1}^{T_2}\delta^j\left(\frac{qt}{ 2\pi}\right)^{(1/2-\alpha-it)j}\exp\left(ij\left(t+\dfrac{\pi}{ 4}\right)\right)\left(1+O\left(t^{-1}\right)\right)^jx^{it}\,{\rm{d}}{t}\right| \nonumber\\
&&+ \left(\dfrac{|a|}{30}\right)^j \nonumber.
\end{eqnarray}
We see that if $|O\left(t^{-1}\right)|<D/t$ for some $D>1$, then
$$
\left| \left(1+O\left(t^{-1}\right)\right)^j-1 \right|
= \left| \sum\limits_{k=1}^j\binom{j}{k}\left(O\left(t^{-1}\right)\right)^k \right|
\leq \frac{D^j}t \sum\limits_{k=1}^j\binom{j}{k} \leq \frac{(2D)^j}t
$$
or
$$
\left(1+O\left(t^{-1}\right)\right)^j=1+O\left((2D)^jt^{-1}\right).
$$
In view of \eqref{ineq}, we then have for a sufficiently large constant $K$ that
\begin{eqnarray}\label{I_1j}
I_{1j} &\ll& \log x\left|\int_{T_1}^{T_2}\left(\frac{qt}{ 2\pi}\right)^{\left(-c/\log \frac{qT}{2\pi}-it\right)j} \exp\left(ijt\right) \left(x^{1/j}\right)^{ijt} \,{\rm{d}}{t} \right| \nonumber\\
&&+\, \log x\int_{T_1}^{T_2}\left(2D\left(\frac{qt}{ 2\pi}\right)^{-c/\log \frac{qT}{2\pi}}\right)^jt^{-1}\,{\rm{d}}{t}
+ \left(\dfrac{|a|}{30}\right)^j \nonumber\\
&\ll& q^{-jc/\log \frac{qT}{2\pi}} \log{x} \left| \int_{T_1}^{T_2} \left(\frac{t}{ 2\pi}\right)^{-jc/\log \frac{qT}{2\pi}}
\exp\left[-ijt\log\left(\dfrac{qt}{2\pi x^{1/j}e}\right) \right]\,{\rm{d}}{t}\right| \nonumber\\
&&+ \left(2D\dfrac{|a|}{30K}\right)^j \log x \int_{T_1}^{T_2}t^{-1}\,{\rm{d}}{t}+\left(\dfrac{|a|}{30}\right)^j\nonumber\\
&\ll&\dfrac{\log x}{j}\left(\dfrac{q}{j}\right)^{-jc/\log \frac{qT}{2\pi}}\left|\mathcal{J}\right| + (1+\log x)\left(\dfrac{|a|}{30}\right)^j,
\end{eqnarray}
where
$$
\mathcal{J}:=\int_{jT_1}^{jT_2}\exp\left[-it\log\left(\dfrac{qt}{2\pi jx^{1/j}e} \right)\right]\left(\frac{t}{ 2\pi}\right)^{1/2-jc/\log \frac{qT}{2\pi}-1/2}\,{\rm{d}}{t}.
$$
To estimate $\mathcal{J}$ we employ another form of Gonek's lemma (see \cite[Lemma 2]{gonek}):
{\it For large $A$ and $A<r\leq B\leq 2A$
\begin{eqnarray*}
\lefteqn{\int_A^B\exp\left[it\log\left(\dfrac{t}{re}\right)\right]\left(\dfrac{t}{2\pi}\right)^{\mathfrak{a}-1/2}\,{\rm{d}}{t}}\\
&=& (2\pi)^{1-\mathfrak{a}}r^\mathfrak{a}\exp\left(-i\Big(r-{\pi\over 4}\Big)\right)\mathbbm{1}_{[A,B)}(r)+E(r,A,B),
\end{eqnarray*}
where $\mathfrak{a}$ is a fixed real number and
$$
E(r,A,B) \ll A^{\mathfrak{a}-1/2}+\dfrac{A^{\mathfrak{a}+1/2}}{|A-r|+A^{1/2}}+\dfrac{B^{\mathfrak{a}+1/2}}{|B-r|+B^{1/2}}.
$$}
It is easily seen that this holds uniformly in $\mathfrak{a}$ from a bounded interval (the implicit constants depend, of course, on the endpoints of this interval). We then apply this, by taking complex conjugates, to $\mathcal{J}$ with $\mathfrak{a}=1/2-jc/\log \frac{qT}{2\pi}$, $r=2\pi jx^{1/j}/q$, $A=jT_1$ and $B=jT_2$:
\begin{eqnarray*}
\mathcal{J}&\ll&(2\pi)^{1/2+jc/\log \frac{qT}{2\pi}}\left(\dfrac{2\pi jx^{1/j}}{q}\right)^{1/2-jc/\log \frac{qT}{2\pi}}\mathbbm{1}_{\left[jT_1,jT_2\right)}\left(\dfrac{2\pi jx^{1/j}}{q}\right) \nonumber\\
&&+\, \left(jT_1\right)^{-jc/\log \frac{qT}{2\pi}}
+ \dfrac{\left(jT_1\right)^{1-jc/\log \frac{qT}{2\pi}}}{\left|jT_1-\frac{2\pi jx^{1/j}}{q}\right|+\sqrt{jT_1}}
+ \dfrac{\left(jT_2\right)^{1-jc/\log \frac{qT}{2\pi}}}{\left|jT_2-\frac{2\pi jx^{1/j}}{q}\right|+\sqrt{jT_2}} \nonumber\\
&\ll& \left(\dfrac{j}{q}\right)^{1/2-jc/\log \frac{qT}{2\pi}}x^{1/(2j)-c/\log \frac{qT}{2\pi}}
\mathbbm{1}_{\left(T/2,{5}T/2\right)}\left(\dfrac{2\pi x^{1/j}}{q}\right) \\
&&+~ (jT)^{-jc/\log \frac{qT}{2\pi}}E_0(x,j,T), \nonumber
\end{eqnarray*}
where
\begin{eqnarray}\label{error}
E_0(x,j,T)
&=&\left\{\begin{array}{ll}
O\left(\sqrt{jT}\right),
&\text{if } \dfrac{2\pi x^{1/j}}{q}\in\left(\dfrac{T}{2},\dfrac{{5}T}{2}\right), \\[3mm]
O(1), &\text{otherwise}.
\end{array}\right.
\end{eqnarray}
Recall that $T\asymp T_1\asymp T_2$ and $j\ll\log T$. Therefore, the precise dependence of $E_0$ on $T_1$ and $T_2$ is immaterial.
Applying this to \eqref{I_1j} we have
\begin{eqnarray*}
I_{1j} &\ll& \dfrac{\log x}{\sqrt{jq}}x^{1/(2j)-c/\log \frac{qT}{2\pi}}\mathbbm{1}_{\left(T/2,{5}T/2\right)}\left(\dfrac{2\pi x^{1/j}}{q}\right) \\
&&+~ \dfrac{\log x}{j}\left(\left(qT\right)^{-c/\log \frac{qT}{2\pi}}\right)^jE_0(x,j,T)
+ (1+\log x) \left(\dfrac{|a|}{30}\right)^j.
\end{eqnarray*}
Hence, in view of \eqref{I_1} and \eqref{ineq}, we obtain
\begin{eqnarray}\label{sum1}
I_1&\ll& x^{1/2}\log x\sum\limits_{1\leq j\le m} \dfrac{x^{1/(2j)}}{j^{3/2}|a|^j}\mathbbm{1}_{\left(T/2,{5}T/2\right)}\left(\dfrac{2\pi x^{1/j}}{q}\right) \nonumber\\
&&+~ x^\alpha\log x\sum\limits_{1\leq j\le m}\dfrac{1}{j^2|a|^j}\left(\left(2\pi\right)^{-c/\log \frac{qT}{2\pi}}\dfrac{1}{K}\dfrac{|a|}{30}\right)^jE_0(x,j,T) \nonumber\\
&&+~ x^\alpha(1+\log x)\sum\limits_{1\leq j\le m}\dfrac{1}{j30^j} + x^\alpha.
\end{eqnarray}
Observe that the intervals
\begin{eqnarray*}
\left(\left(\dfrac{qT}{4\pi}\right)^j,\left(\dfrac{{5}qT}{4\pi}\right)^j\right), &1\le j\le m,
\end{eqnarray*} are pairwise disjoint whenever
$$ j<\dfrac{1}{\log{5}}\log\left(\dfrac{qT}{4\pi}\right). $$
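Indeed, two consecutive intervals are disjoint as soon as the right endpoint of the $j$-th interval does not exceed the left endpoint of the $(j+1)$-st, that is,
$$
\left(\dfrac{{5}qT}{4\pi}\right)^j\leq\left(\dfrac{qT}{4\pi}\right)^{j+1}
\quad\Longleftrightarrow\quad
5^j\leq\dfrac{qT}{4\pi}\quad\Longleftrightarrow\quad j\leq\dfrac{1}{\log 5}\log\left(\dfrac{qT}{4\pi}\right).
$$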
Comparing this with \eqref{m_of_T}, we see that
$$ m \leq \dfrac{2}{\log{30}}\log\left(\dfrac{qT}{4\pi}\right)< \dfrac{1}{\log{5}}\log\left(\dfrac{qT}{4\pi}\right) $$
for all $T\geq T_0$.
By this construction, the first sum in \eqref{sum1} contains at most one term, namely
$$\dfrac{x^{1/(2j_x)}}{j_x^{3/2}|a|^{j_x}}$$
for the unique $j_x\le m$ (if it exists) such that $2\pi x^{1/j_x}/q\in(T/2,{5}T/2)$.
Similar reasoning also applies to $E_0$ of the second sum in \eqref{sum1} and it follows in view of \eqref{error} that
$$ \sum\limits_{1\leq j\le m} \dfrac{1}{j^2 30^j} E_0(x,j,T)
\ll \mathop{\sum\limits_{1\leq j\le m}}_{j\neq j_x} \dfrac{1}{j^230^j}
+ \dfrac{T^{1/2}}{j_x^{3/2}30^{j_x}}\ll1+\dfrac{T^{1/2}}{j_x^{3/2}30^{j_x}}. $$
The last sum in \eqref{sum1} is trivially $O(1)$.
Collecting these estimates, we conclude that
\begin{equation}\label{I_1f}
I_1\ll x^{\alpha}(1+\log x)+E_1(x,a,T),
\end{equation}
where $E_1(x,a,T)$ is defined to be equal to
\begin{equation}\label{E''}
\dfrac{\log x}{j_x^{3/2}}\left(\dfrac{x^{1/2+1/(2j_x)}}{|a|^{j_x}}+\dfrac{x^{\alpha}T^{1/2}}{30^{j_x}}\right)
\end{equation}
if such $j_x$ exists, and zero otherwise.
Observe that if $T\geq T_0\geq4\pi/q$, then $E_1(x,a,T)\geq0$ for any $x>1$, while $E_1(x,a,T)=0$ for any $0<x<1$.
For $\sigma=1-\alpha$, we use \eqref{bibo} and the corresponding \eqref{two} to write
\begin{equation} \label{delta_i3_rewrite}
\frac{\Delta'(s)}{\Delta(s)-a} = \frac{\Delta'}{\Delta}(s) \cdot \frac{1}{1 - a/\Delta(s)}
=\frac {\Delta'}{\Delta}(s) \left( 1 + \sum_{j\ge1} \left(\frac{a}{\Delta(s)}\right)^j \right);
\end{equation}
here the second inequality in (\ref{ineq}) allows us to expand the second factor into a geometric series.
Thus the left vertical integral $I_3$ can be decomposed as follows
\begin{eqnarray*}
I_3 &= &\frac{1 }{ 2\pi i} \int_{1-\alpha+iT_2}^{1-\alpha+iT_1} \frac{\Delta'(s)}{\Delta(s)-a} x^s \,\,{\rm{d}}{s} \\
&= &\dfrac{1}{2\pi i}\int_{1-\alpha+iT_2}^{1-\alpha+iT_1}\dfrac{\Delta'}{\Delta}(s)x^s \,{\rm{d}}{s}
+ \dfrac{1}{2\pi i}\int_{1-\alpha+iT_2}^{1-\alpha+iT_1}\dfrac{\Delta'}{\Delta}(s)x^s\sum_{j\ge1} \left(\frac{a}{\Delta(s)}\right)^j \,{\rm{d}}{s} \\
&=:& I_{31}+I_{32}.
\end{eqnarray*}
Integrating by parts, we obtain in view of \eqref{log_der}
\begin{eqnarray}\label{I_31f}
I_{31} &=& \dfrac{x^{1-\alpha}}{2\pi} \int_{T_2}^{T_1} \left(-\log\dfrac{qt}{2\pi} + O\left(\dfrac{1}{t}\right)\right)x^{it}\,{\rm{d}}{t} \nonumber\\
&=& \left.\dfrac{x^{1-\alpha+it}}{2\pi i\log x} \left(\log\dfrac{qt}{2\pi} + O\left(\dfrac{1}{t}\right)\right) \right|_{T_1}^{T_2}
- \dfrac{x^{1-\alpha}}{2\pi i\log x}\int_{T_1}^{T_2}O\left(\dfrac{1}{t}\right)\,{\rm{d}}{t} \nonumber\\
&\ll&\dfrac{x^{{1-}\alpha}\log T}{\log x}.
\end{eqnarray}
In the case of $I_{32}$, it follows from \eqref{ldel} that
\begin{eqnarray*}
I_{32}&=&-\dfrac{1}{2\pi i}\int_{\alpha-iT_2}^{\alpha-iT_1}\dfrac{\Delta'}{\Delta}(1-s)x^{1-s}\sum_{j\ge1} \left(\frac{a}{\Delta(1-s)}\right)^j \,{\rm{d}}{s}\nonumber\\
&=&\dfrac{1}{2\pi i}\int_{\alpha-iT_1}^{\alpha-iT_2}\dfrac{\Delta'}{\Delta}(s)x^{1-s}\sum_{j\ge1} \left(\frac{a\Delta(s)}{\delta}\right)^j \,{\rm{d}}{s}\nonumber\\
&=&-\dfrac{x^{1-\alpha}}{2\pi}\int_{T_1}^{T_2}\dfrac{\Delta'}{\Delta}(\alpha-it)\sum_{j\ge1} \left(\frac{a\Delta(\alpha-it)}{\delta}\right)^j x^{it}\,{\rm{d}}{t}\nonumber\\
\end{eqnarray*}
or from \eqref{refle}
\begin{eqnarray}
\overline{I_{32}}&=&-\dfrac{x^{1-\alpha}}{2\pi}\int_{T_1}^{T_2}\dfrac{\overline{\Delta'(\alpha-it)}}{\overline{\Delta(\alpha-it)}}\sum_{j\ge1} \left(\frac{\overline{a}\overline{\Delta(\alpha-it)}}{\delta}\right)^jx^{-it} \,{\rm{d}}{t}\nonumber\\
&=&-\dfrac{\overline{a}x^{1-\alpha}}{2\pi}\int_{T_1}^{T_2}\Delta'(\alpha+it)\sum\limits_{j\geq0}\left(\overline{a}\Delta(\alpha+it)\right)^j\left(\dfrac{1}{x}\right)^{it}\,{\rm{d}}{t}.
\end{eqnarray}
To estimate $\overline{I_{32}}$ we now proceed exactly as in the estimation of $I_1$ in \eqref{I_1ref}, with $1/\overline{a}$ in place of $a$, $x^{1-\alpha}$ in place of $x^{\alpha}$, and $y:=1/x$ in place of $x$.
We can then derive as in \eqref{I_1ref}--\eqref{I_1f} that
\begin{equation}\label{I_3f}
I_{32}\ll x^{1-\alpha}(1+\log x) + E_2(x,a,T),
\end{equation}
where $E_2(x,a,T)$ is defined to be equal to
\begin{equation}\label{E'''}
\dfrac{\log x}{j_{y}^{3/2}} \left( x^{1/2-1/(2j_{y})}|a|^{j_{y}} + \dfrac{x^{1-\alpha}T^{1/2}}{30^{j_{y}}} \right)
\end{equation}
if such $j_y$ exists, and zero otherwise.
Observe that if $T\geq T_0\geq4\pi/q$, then $E_2(x,a,T)=0$ for any $x>1$, while $E_2(x,a,T)\leq0$ for any $0<x<1$.
Collecting the estimates in \eqref{anf}, \eqref{hor}, \eqref{I_1f}, \eqref{I_31f} and \eqref{I_3f} we obtain that
$$
\sum_{\substack{T<\gamma_a< T'}} x^{\delta_a}\ll x^\alpha\left(\log x+\log T+\dfrac{\log T}{\log x}\right) + E_1(x,a,T)
$$
for any $x>1$ and any $T_0\leq T<T+1\leq T'\leq2T$.
The case of $0<x<1$ follows from \eqref{refle}, \eqref{ldel} and the above estimate. Indeed, relations \eqref{refle} and \eqref{ldel} imply that a complex number $z$ is an $a$-point of $\Delta(s)$ (where $a\neq0$) if and only if the complex number $1-\overline{z}$ is a $b:=1/\overline{a}$-point of $\Delta$.
Thus if $0<x<1$, then
$$\dfrac{1}{x}\sum_{\substack{T<\gamma_a<T'}} x^{\delta_a}
= \sum_{\substack{T<\gamma_a<T'}}\left(\dfrac{1}{ x}\right)^{1-\delta_a}
= \overline{\sum_{\substack{T<\gamma_a<T'}}\left(\dfrac{1}{ x}\right)^{1-\overline{\delta_a}}}
= \overline{\sum_{\substack{T<\gamma_b<T'}}\left(\dfrac{1}{ x}\right)^{\delta_b}},$$
or
$$
\sum_{\substack{T<\gamma_a<T'}} x^{\delta_a}
\ll x^{1-\alpha}\left(-\log x+\log T-\dfrac{\log T}{\log x}\right)
+\, x\,E_1\!\left(\frac1x,b,T\right).
$$
By symmetry, it follows easily from \eqref{E''} and \eqref{E'''} that
\begin{eqnarray*}
xE_1(1/x,b,T) = -E_2(x,a,T)\geq0.
\end{eqnarray*}
Hence,
\begin{eqnarray*}
\sum_{\substack{T<\gamma_a<T'}} x^{\delta_a} &\ll& \left(x^\alpha+x^{1-\alpha}\right)\left(|\log x|+\log T+\dfrac{\log T}{|\log x|}\right) \\
&&+\, \mathbbm{1}_{(1,+\infty)}(x) E_1(x,a,T)-\mathbbm{1}_{(0,1)}(x)E_2(x,a,T)
\end{eqnarray*}
for any $0<x\neq1$ and any $T_0\leq T<T+1\leq T'\leq2T$.
The last statement of the theorem follows by the above construction. If $x\neq1$ is such that $4\pi/(qT)\leq x\leq qT/4\pi$, then $E_1(x,a,T)=E_2(x,a,T)=0$ and
\begin{eqnarray*}
\sum_{\substack{T<\gamma_a<T'}} x^{\delta_a} &\ll& x^{1/2}\left(x^{c/\log\frac{qT}{2\pi}}+x^{-c/\log\frac{qT}{2\pi}}\right)\left(|\log x|+\log T+\dfrac{\log T}{|\log x|}\right)\\
&\ll& x^{1/2}\left(1+\dfrac{1}{|\log x|}\right)\log T.
\end{eqnarray*}
\section{Proof of Theorem \ref{u.d.}}
The Riemann-von Mangoldt type formula \eqref{na} implies that
\begin{equation}\label{RvM}
\begin{array}{ccc}N_a(T;f)\sim\dfrac{T\log T}{2\pi}&\text{ and }&\gamma_{a}^{(n)}\sim\dfrac{2\pi n}{\log n}.\end{array}
\end{equation}
Therefore, there is a $K\in\mathbb{N}$ such that $2^{K-1}\leq\gamma_a^{{(2n)}}/\gamma_a^{{(n)}}\leq 2^K$ for all $n\in\mathbb{N}$.
If we now set $\delta_{a}^{{(n)}}:=\beta_{a}^{{(n)}}+i\gamma_{a}^{{(n)}}$ to be the $a$-point of $\Delta$ with ordinate $\gamma_{a}^{{(n)}}$, then for any integers $1\leq N< M\leq 2N$ and any positive number $x\neq1$, we have
\begin{equation}\label{fs}
\sum\limits_{N< n\leq M}x^{\delta_{a}^{{(n)}}}
=\sum\limits_{k\leq K}\sum\limits_{2^{k-1}\gamma_{a}^{{(N)}}<\gamma_a\leq\min\left\{ \gamma_{a}^{{(M)}},2^{k}\gamma_{a}^{{(N)}}\right\}}x^{\delta_a},
\end{equation}
where the inner sum on the right-hand side is zero when the set of indices of its summands is empty.
Since $4\pi/(qN)\leq x\leq qN/(4\pi)$, \eqref{RvM}, \eqref{fs} and Theorem \ref{uniform} yield
\begin{align*}\sum\limits_{N< n\leq M}x^{\delta_{a}^{{(n)}}}&\ll x^{1/2}\left(1+\dfrac{1}{|\log x|}\right)\log N.
\end{align*}
We also have
$$
x^{-\beta_a^{{(M)}}}\ll x^{-1/2\pm c/\log\frac{qN}{2\pi}}\ll x^{-1/2}
$$
for any $M>N$. Using this and Abel's summation formula (summation by parts), we obtain
\begin{eqnarray}\label{rim}
\sum\limits_{N< n\leq 2N}x^{i\gamma_{a}^{{(n)}}}
&=&\sum\limits_{N< n\leq 2N}x^{-\beta_{a}^{{(n)}}}x^{\delta_{a}^{{(n)}}}\nonumber\\
&\ll &x^{-\beta_{a}^{{(2N)}}}\left|\sum\limits_{N< n\leq 2N}x^{\delta_{a}^{{(n)}}}\right|\nonumber\\
&&+\max\limits_{N< M<2N}\left|\sum\limits_{N< n\leq M}x^{\delta_{a}^{{(n)}}}\right|\sum\limits_{N< M<2N}\left|x^{-\beta_{a}^{{(M+1)}}}-x^{-\beta_{a}^{{(M)}}}\right|\nonumber\\
&\ll & \left(1+\dfrac{1}{|\log x|}\right)\log N\left(1+\sum\limits_{N<M< 2N}\left|x^{\beta_{a}^{{(M+1)}}-\beta_{a}^{{(M)}}}-1\right|\right).
\end{eqnarray}
By (\ref{amod}) and (\ref{rp}),
$$\dfrac{\beta_{a}^{{(M+1)}}-\beta_{a}^{{(M)}}}{\log|a|}=\left[\left({\log\frac{q\gamma_a^{{(M)}}}{2\pi}}\right)^{-1}-\left({\log\frac{q\gamma_a^{{(M+1)}}}{2\pi}}\right)^{-1}+O\left(\dfrac{1}{\gamma_a^{{(M)}}\log{\gamma_a^{{(M)}}}}\right)\right]$$
and $4\pi/(qN)\leq x\leq qN/(4\pi)$. Hence, with the aid of \eqref{RvM} we can show
\begin{eqnarray*}
\lefteqn{\sum\limits_{N<M<2N}\left|x^{\beta_{a}^{{(M+1)}}-\beta_{a}^{{(M)}}}-1\right|}\nonumber\\
&\ll &|\log x|\sum\limits_{N<M<2N}\left[\left({\log\frac{q\gamma_a^{{(M)}}}{2\pi}}\right)^{-1}-\left({\log\frac{q\gamma_a^{{(M+1)}}}{2\pi}}\right)^{-1}+\dfrac{1}{\gamma_a^{{(M)}}\log\frac{q\gamma_a^{{(M)}}}{2\pi}}\right]\\
&\ll &|\log x|\left[\left({\log\frac{\gamma_a^{{(N+1)}}}{2\pi}}\right)^{-1}-\left({\log\frac{\gamma_a^{{(2N-1)}}}{2\pi}}\right)^{-1}+\dfrac{N}{\gamma_a^{{(N)}}\log\gamma_a^{{(N)}}}\right]\\
&\ll &|\log x|\left[\left(\log \gamma_a^{{(N)}}\right)^{-2}+1\right]\\
&\ll & |\log x|.
\end{eqnarray*}
This and inequality \eqref{rim} lead for fixed $x$ to
\begin{equation}\label{la1}
\sum\limits_{N< n\leq 2N}x^{i\gamma_{a}^{{(n)}}}\ll\left(\dfrac{1}{|\log x|}+|\log x|\right)\log N=o_x\left(\left(\log N\right)^2\right),
\end{equation}
which is the first statement of the theorem.
Let now $\alpha\neq0$ be a real number and $k\neq0$ be an integer. If $x=\exp(2\pi\alpha k)$, then for every $N\geq N_x:=4\pi x/q$ and $K_x:=\left\lfloor\frac{\log (N/N_x)}{\log 2}\right\rfloor$ we have that
\begin{eqnarray*}
\sum\limits_{n\leq N}\mathrm{e}\left(k\alpha\gamma_a^{{{(n)}}}\right)&=&\sum\limits_{n\leq\frac{N}{2^{K_x}}}\mathrm{e}\left(k\alpha\gamma_a^{{{(n)}}}\right)+\sum\limits_{\ell\leq K_x}\sum\limits_{\frac{N}{2^{\ell}}< n\leq\frac{N}{2^{\ell-1}}}x^{i\gamma_{a}^{{(n)}}}\\
&=&O(N_x)+\sum\limits_{\ell\leq K_x}o_x\left(\left(\log N\right)^2\right)\\
&=&o_x\left(\left(\log N\right)^3\right),
\end{eqnarray*}
as follows from \eqref{la1}.
Since $k\in\mathbb{Z}\setminus\lbrace0\rbrace$ can be chosen arbitrarily, the sequence $\alpha\gamma^{{(n)}}_a$, $n\in\mathbb{N}$, satisfies Weyl's Criterion (see \cite{weyl}):
{\it a sequence $(x_n)_{n\in\mathbb{N}}$ is uniformly distributed modulo 1 if
\begin{equation*}\sum\limits_{n\leq N}\mathrm{e}(kx_n)=o(N)
\end{equation*}
for any integer $k\neq0$}. This concludes the proof of the theorem.
\section{Proof of Theorem \ref{universal}}
If $\chi$ is a Dirichlet character mod $r$ and $Q>0$, we define the truncated and twisted Euler product
\begin{equation}\label{trE}
s\longmapsto L_Q(s;\chi):=\prod\limits_{p\leq Q}\left(1-\dfrac{\chi(p)}{p^{s}}\right)^{-1}
\end{equation}
for every $s\in\mathbb{C}$ with $\sigma>0$; from here on, $p$ denotes a prime number.
The proof of \eqref{joint} is similar to Reich's proof \cite{reich} for the discrete universality of $\zeta(s)$ and we will not repeat it here. Instead we employ the following theorem which highlights the necessary conditions a sequence $(x_n)_{n\in\mathbb{N}}$ has to meet in order to derive universality: {\it Let $\chi_1,\dots , \chi_J$ be pairwise non-equivalent Dirichlet characters. Let also $(x_n)_{n\in\mathbb{N}}$ be a sequence of real numbers such that the sequence of vectors
\begin{equation}\label{con1}
\begin{array}{cc}\left(x_n\dfrac{\log p}{2\pi}\right)_{p\in\mathcal{M}},&n\in\mathbb{N},
\end{array}
\end{equation}
is uniformly distributed modulo 1 for any finite set of primes $\mathcal{M}$, and
\begin{equation}\label{con2}\lim\limits_{Q\to\infty}\limsup\limits_{N\to\infty}\dfrac{1}{N+1}\sum\limits_{N\leq n\leq2N}\left|L\left(s+ix_n;\chi_j\right)-L_Q\left(s+ix_n;\chi_j\right)\right|^2=0,
\end{equation}
$j=1,\dots,J$, uniformly in compact subsets of the strip $1/2<\sigma<1$. Then for any compact subset with connected complement $K$ of this strip, any $g_1,\dots,g_J$ continuous non-vanishing functions on $K$ and analytic in its interior, any $z>0$ and $\xi_p$, $p\leq z$, real numbers, and any $\varepsilon>0$,}
$$
\liminf\limits_{N\to\infty}\dfrac{1}{N}\sharp\left\{1\leq n\leq N:\begin{array}{c}\max\limits_{1\leq j\leq J}\max\limits_{s\in K}\left|L\left(s+ix_n;\chi_j\right)-g_j(s)\right|<\varepsilon\\ \& \
\max\limits_{p\leq z}\left\|\gamma_a^{{(n)}}\dfrac{\log p}{2\pi}-\xi_p\right\|<\varepsilon
\end{array}\right\}>0.
$$
For the proof see \cite[Theorem 3.1]{sourm}.
The definition of uniform distribution of a multidimensional sequence (sequence of vectors) is analogous to the one-dimensional case (sequence of numbers) and so we omit the details here.
However, we will use an equivalent statement of it (see \cite[Chapter I, Theorem 6.3]{kuinie}): {\it A sequence $\left(\underline{x}_n\right)_{n\in\mathbb{N}}$ of vectors from $\mathbb{R}^{\ell}$ (for some $\ell\in\mathbb{N}$) is uniformly distributed modulo 1 if, and only if, for every $\underline{h}\in\mathbb{Z}^\ell\setminus\lbrace\underline{0}\rbrace$, the sequence $\left\langle\underline{h},\underline{x}_n\right\rangle$, $n\in\mathbb{N}$, is uniformly distributed modulo $1$, where $\langle\cdot,\cdot\rangle$ is the standard inner product.}
\noindent Therefore, if $\mathcal{M}$ is a finite set of primes, then the sequence
\begin{eqnarray*}
\left(\gamma_a^{{(n)}}\dfrac{\log p}{2\pi}\right)_{p\in \mathcal{M}},&n\in\mathbb{N},
\end{eqnarray*}
is uniformly distributed modulo $1$ if and only if the sequence
\begin{eqnarray*}
\gamma_a^{{(n)}}\sum\limits_{p\in \mathcal{M}}h_p\dfrac{\log p}{2\pi},&n\in\mathbb{N},
\end{eqnarray*}
is uniformly distributed modulo $1$ for any
$(h_p)_{p\in\mathcal{M}}\in\mathbb{Z}^{\sharp\mathcal{M}}\setminus\lbrace\underline{0}\rbrace$.
But this follows immediately from Theorem \ref{u.d.} and the unique factorization of integers into primes.
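To spell out this step: for any $(h_p)_{p\in\mathcal{M}}\neq\underline{0}$,

```latex
\begin{equation*}
\sum\limits_{p\in\mathcal{M}}h_p\dfrac{\log p}{2\pi}
=\dfrac{1}{2\pi}\log\prod\limits_{p\in\mathcal{M}}p^{h_p}\neq0,
\end{equation*}
```

since, by the unique factorization of integers, $\prod_{p\in\mathcal{M}}p^{h_p}=1$ forces $h_p=0$ for every $p\in\mathcal{M}$; hence Theorem \ref{u.d.} applies with $\alpha=\sum_{p\in\mathcal{M}}h_p\log p/(2\pi)$.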
Thus, $\gamma_{a}^{{(n)}}$, $n\in\mathbb{N}$, satisfies condition \eqref{con1}.
To prove that $\gamma_{a}^{{(n)}}$, $n\in\mathbb{N}$, satisfies also condition \eqref{con2}, we will use the following approximate functional equation
\begin{eqnarray}\label{aprfe}
L(s;\chi_j) &=& \sum\limits_{n\leq X}\chi_j(n)n^{-s}+\Delta(s;\chi_j)\sum\limits_{n\leq y}\chi_j^-(n){n^{s-1}}+\nonumber\\
&&+~ O\left(X^{-\sigma}\log(y+2)+\tau^{1/2-\sigma}y^{\sigma-1}\right)
\end{eqnarray}
valid for $s=\sigma+i\tau$ with $0<\sigma<1$, $\tau\geq \tau_0>0$, and $X,y>0$ such that $2\pi Xy=q_j \tau$, where $q_j$ is the modulus of $\chi_j$.
This formula follows from a result due to Vivek Rane \cite{ran}.
Observe also that $L_Q(s;\chi_j)$ can be written as an absolutely convergent Dirichlet series
\begin{equation}\label{trEd}
L_Q(s;\chi_j)=\mathop{\sum\limits_{n=1}^{\infty}}_{p\mid n\Rightarrow p\leq Q}\dfrac{\chi_j(n)}{n^s}
\end{equation}
for any $\sigma>0$ and $Q>0$, as follows from \eqref{trE} by expanding each factor of the truncated Euler product into a geometric series.
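Explicitly, using the complete multiplicativity of $\chi_j$ and the absolute convergence of each geometric series for $\sigma>0$,

```latex
\begin{equation*}
L_Q(s;\chi_j)
=\prod\limits_{p\leq Q}\sum\limits_{k=0}^{\infty}\dfrac{\chi_j(p)^{k}}{p^{ks}}
=\prod\limits_{p\leq Q}\sum\limits_{k=0}^{\infty}\dfrac{\chi_j\left(p^{k}\right)}{\left(p^{k}\right)^{s}}
=\mathop{\sum\limits_{n=1}^{\infty}}_{p\mid n\Rightarrow p\leq Q}\dfrac{\chi_j(n)}{n^s},
\end{equation*}
```

where multiplying out the finitely many factors is justified by absolute convergence.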
Now let $K$ be a compact subset of the strip $\sigma_1\leq\sigma<1$ for some $\sigma_1\in(1/2,1)$. Then equations \eqref{aprfe} and \eqref{trEd} imply, for every $s=\sigma+it\in K$, any $Q>0$, any sufficiently large $N\in\mathbb{N}$ and for $X=q_jN/(4\pi)$, that
\begin{eqnarray}\label{long}
\lefteqn{\sum\limits_{N\leq n\leq2N}\left|L\left(s+i\gamma_{a}^{{(n)}};\chi_j\right)-L_Q\left(s+i\gamma_{a}^{{(n)}};\chi_j\right)\right|^2}\nonumber\\
&\ll& \sum\limits_{N\leq n\leq2N}\left|\mathop{\sum\nolimits_1}_{m\leq X}\dfrac{\chi_j(m)}{m^{s+i\gamma_{a}^{{(n)}}}}\right|^2+\sum\limits_{N\leq n\leq2N}\left|\mathop{\sum\nolimits_2}_{m> X}\dfrac{\chi_j(m)}{m^{s+i\gamma_{a}^{{(n)}}}}\right|^2\nonumber+\\
&&+ \sum\limits_{N\leq n\leq 2N}\left|\Delta(s+i\gamma_{a}^{{(n)}};\chi_j)\right|^2\left|\sum\limits_{m\leq y}\dfrac{\chi_j^-(m)}{{m^{1-s-i\gamma_{a}^{{(n)}}}}}\right|^2+\nonumber\\
&&+ \sum\limits_{N\leq n\leq 2N}O\left(X^{-2\sigma}\left(\log(y+2)\right)^2+\left(t+\gamma_a^{{(n)}}\right)^{1-2\sigma}y^{2(\sigma-1)}\right)
\end{eqnarray}
where $\sum\nolimits_1$ denotes the sum over integers $m$ that are divisible by at least one prime $p> Q$, and $\sum\nolimits_2$ denotes the sum over integers $m$ which are divisible only by primes $p\leq Q$.
We denote the terms on the right-hand side of \eqref{long} by $S_1$, $S_2$, $S_3$, $S_4$, respectively, and we prove that each of them is $O\left(NQ^{1-2\sigma_1}\right)$ as $N$ tends to infinity.
In what follows, asymptotics and limits are not taken with respect to the parameter $t$, since $t$ is the imaginary part of complex numbers $s$ ranging over the compact set $K$.
The implicit constants also depend on $K$ and on the finitely many moduli $q_1,\dots,q_J$, but this dependence is harmless for our proof.
Recall that $\gamma_a^{{(n)}} \sim n/\log n$. Then, we have that
\begin{eqnarray*}
S_4&\ll&\sum\limits_{N\leq n\leq 2N}\left[N^{-2\sigma}\left(\log\left(\dfrac{\gamma_{a}^{{(n)}}}{N}+2\right)\right)^2+\left(\gamma_{a}^{{(n)}}\right)^{1-2\sigma}\left(\dfrac{\gamma_{a}^{{(n)}}}{N}\right)^{2(\sigma-1)}\right]\\
&\ll& N\left[N^{-2\sigma_1}+{N}^{1-2\sigma_1}\log N\right]\\
&=&o(N),
\end{eqnarray*}
while \eqref{pow} implies that
\begin{eqnarray*}
S_3&\ll&\sum\limits_{N\leq n\leq2N}\left(\gamma_{a}^{{(n)}}\right)^{1-2\sigma}\left(\sum\limits_{m\leq y}\dfrac{1}{m^{1-\sigma}}\right)^2\\
&\ll& N\left(\gamma_{a}^{{(N)}}\right)^{1-2\sigma_1}\left(\dfrac{\gamma_{a}^{{(2N)}}}{N}\right)^{2}\\
&\ll& N\left(\dfrac{N}{\log N}\right)^{1-2\sigma_1}\left(\log N\right)^{-2}\\
&=&o(N).
\end{eqnarray*}
To estimate $S_2$, we first observe that its inner sum is a tail of an absolutely convergent series. Therefore,
\begin{eqnarray*}
S_2\ll\sum\limits_{N\leq n\leq2N}\left(\mathop{\sum\nolimits_2}_{m>X}\dfrac{1}{m^{\sigma}}\right)^2\ll N\left(\mathop{\sum\nolimits_2}_{m>N/\log N}\dfrac{1}{m^{\sigma}}\right)^2=o(N).
\end{eqnarray*}
Lastly,
\begin{eqnarray}\label{S_1}
S_1&=&\mathop{\sum\nolimits_1}_{m_1,m_2\leq X}\dfrac{1}{(m_1m_2)^{\sigma}}\left(\dfrac{m_2}{m_1}\right)^{it}\sum\limits_{N\leq n\leq2N}\left(\dfrac{m_2}{m_1}\right)^{i\gamma_{a}^{{(n)}}}\nonumber\\
&\ll&N\mathop{\sum\nolimits_1}_{m\leq X}\dfrac{1}{m^{2\sigma_1}}+\mathop{\sum\nolimits_1}_{1\leq m_1<m_2\leq X}\dfrac{1}{(m_1m_2)^{\sigma_1}}\left|\sum\limits_{N\leq n\leq2N}\left(\dfrac{m_2}{m_1}\right)^{i\gamma_{a}^{{(n)}}}\right|.
\end{eqnarray}
The first term on the right-hand side of \eqref{S_1} is $O\left(NQ^{1-2\sigma_1}\right)$.
For the second term observe that for any $1\leq m_1<m_2\leq X$, we have $1<m_2/m_1\leq q_jN/(4\pi)$.
Therefore, Theorem \ref{u.d.} implies that this term is bounded from above by
\begin{eqnarray*}
\mathop{\sum\nolimits_1}_{1\leq m_1<m_2\leq X}\dfrac{\log N}{(m_1m_2)^{\sigma_1}}\left(\left(\log\dfrac{m_2}{m_1}\right)^{-1}+\log\dfrac{m_2}{m_1}\right)&\ll& X^{2-2\sigma_1}\log X\log N\\
&\ll&N^{2-2\sigma_1}(\log N)^2\\
&=&o(N).
\end{eqnarray*}
Collecting the above estimates we finally arrive at
$$
\limsup\limits_{N\to\infty}\dfrac{1}{N+1}\sum\limits_{N\leq n\leq2N}\left|L\left(s+i\gamma_{a}^{{(n)}};\chi_j\right)-L_Q\left(s+i\gamma_{a}^{{(n)}};\chi_j\right)\right|^2\ll Q^{1-2\sigma_1}
$$
uniformly in $K$ and arbitrary $Q>0$. Taking $Q$ to infinity shows that the sequence $\gamma_{a}^{{(n)}}$, $n\in\mathbb{N}$, also satisfies \eqref{con2} and thus \eqref{joint} holds.
We prove \eqref{periodic} from \eqref{joint}. We employ techniques introduced by Bagchi \cite{bagchi}, Gonek \cite{gonek0} as well as J\"urgen Sander and the second author \cite{sanste}. Let $\psi\neq\mathbf{0}$ be an $r$-periodic arithmetical function. If $r=1$, then
$$
L(s;\psi)=\psi(1)\zeta(s)
$$
and \eqref{periodic} holds, since $\zeta(s)$ is $L(s;\chi_0)$, where $\chi_0$ is the Dirichlet character mod $1$ and we apply \eqref{joint} only to this character.
The case $r=2$ is rather special; here some cases need a restriction on the range of approximation. This observation is due to Jerzy Kaczorowski \cite{jerzy} and has recently been discussed by Philipp Muth and the second author \cite{phil}. If $\psi(1)\neq\psi(2)$ (otherwise $L(s;\psi)=\psi(1)\zeta(s)$ and we argue as in the case $r=1$), then for any $\sigma>1$ we have
$$
L(s;\psi)=\sum\limits_{n=1}^\infty\dfrac{\psi(1)}{(2n-1)^s}+\sum\limits_{n=1}^\infty\dfrac{\psi(2)}{(2n)^s}=\left(\psi(1)+\dfrac{\psi(2)-\psi(1)}{2^s}\right)\zeta(s)=:P(s)\zeta(s).
$$
The latter holds for all $s\in\mathbb{C}$ by analytic continuation. Observe here that $P(s)$ is analytic and bounded in any half-plane $\sigma\geq\sigma_0$ and
\begin{equation}\label{Pol}
P(s+i\tau)-P(s)\ll\left\|\tau\dfrac{\log2}{2\pi}\right\|
\end{equation}
uniformly in $\sigma\geq\sigma_0$ and $\tau\in\mathbb{R}$.
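Estimate \eqref{Pol} follows from a one-line computation: for $\sigma\geq\sigma_0$,

```latex
\begin{equation*}
P(s+i\tau)-P(s)=\dfrac{\psi(2)-\psi(1)}{2^{s}}\left(2^{-i\tau}-1\right),
\qquad
\left|2^{-i\tau}-1\right|=2\left|\sin\dfrac{\tau\log2}{2}\right|
\leq2\pi\left\|\tau\dfrac{\log2}{2\pi}\right\|,
\end{equation*}
```

together with $|2^{-s}|=2^{-\sigma}\leq2^{-\sigma_0}$.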
Additionally, $P(s)$ is zero-free in the open set
$$D_0:=\left\{s\in\mathbb{C}:\dfrac{1}{2}<\sigma<1\right\}\setminus\left\{\dfrac{\log\left(1-\dfrac{\psi(2)}{\psi(1)}\right)+2k\pi i}{\log2}:k\in\mathbb{Z}\right\}.$$
Therefore, if we assume that $K\subseteq D_0$ and set
$$
g(s):=\frac{h(s)}{P(s)},
$$
then, from \eqref{joint} we have
\begin{equation*}
\sharp\left\{1\leq n\leq N:\begin{array}{c}\max\limits_{s\in K}\left|\zeta\left(s+i\gamma_a^{{(n)}}\right)-g(s)\right|<\eta,\\ \&\ \left\|\gamma_a^{{(n)}}\dfrac{\log 2}{2\pi}\right\|<\eta
\end{array}\right\}>c(\eta)N
\end{equation*}
for any $\eta>0$ and any sufficiently large $N\gg_{\eta}1$, where $c(\eta)>0$ is a constant. For those $n$ from the set described on the left-hand side above, in combination with \eqref{Pol}, it also follows that
\begin{eqnarray*}
\max\limits_{s\in K}\left|L\left(s+i\gamma_a^{{(n)}};\psi\right)-h(s)\right|&<&\max\limits_{s\in K}\hspace{-0,5pt}\left|P\hspace{-0,5pt}\left(s\hspace{-0,5pt}+\hspace{-0,5pt}i\gamma_a^{{(n)}}\right)\hspace{-0,5pt}\right|\hspace{-0,5pt}\max\limits_{s\in K}\hspace{-0,5pt}\left|\zeta\hspace{-0,5pt}\left(s\hspace{-0,5pt}+\hspace{-0,5pt}i\gamma_a^{{(n)}}\right)\hspace{-0,5pt}-g(s)\right|\\
&&+~ \max\limits_{s\in K}|g(s)|\max\limits_{s\in K}\left|P\left(s+i\gamma_a^{{(n)}}\right)-P(s)\right|\\
&\ll&\eta.
\end{eqnarray*}
Taking $0<\eta\ll\varepsilon$ sufficiently small, we obtain \eqref{periodic} with the restriction we imposed on $K$.
If $r\geq3$, then $\phi(r)\geq2$, where $\phi$ is the Euler totient function.
Assuming that $\psi$ is a multiple of a Dirichlet character mod $r$, we can work as in the case of $r=1$. If $\psi$ is not such a multiple, then for every $s\in\mathbb{C}$
\begin{eqnarray}\label{licom1}
L(s;\psi)&=&\dfrac{1}{r^s}\sum\limits_{n=1}^r\psi(n)\zeta\left(s;\dfrac{n}{r}\right)\nonumber\\
&=&\dfrac{1}{r^s}\sum\limits_{n=1}^r\psi(n)\dfrac{r^s}{\phi(r)}\sum\limits_{i=1}^{\phi(r)}\overline{\chi_i}(n)L(s;\chi_i)\nonumber\\
&=&\sum\limits_{i=1}^{\phi(r)}\left(\dfrac{1}{\phi(r)}\sum\limits_{n=1}^r\psi(n)\overline{\chi_i}(n) \right)L(s;\chi_i),
\end{eqnarray}
where $\chi_i$, $i=1,2,\dots,\phi(r)$, are the Dirichlet characters mod $r$.
The expression of $L(s;\psi)$ as a linear combination of Hurwitz zeta-functions with rational parameters follows first for $\sigma>1$, where there are absolutely convergent Dirichlet series representations of them, and then by analytic continuation to the whole complex plane.
The expression of a Hurwitz zeta-function with rational parameter as a linear combination of Dirichlet $L$-functions follows from the orthogonality relation of the characters.
Now if we set
\begin{eqnarray*}
c_i:=\dfrac{1}{\phi(r)}\sum\limits_{n=1}^r\psi(n)\overline{\chi_i}(n),&i=1,2,\dots,\phi(r),
\end{eqnarray*}
then at least two of them, say $c_1$ and $c_2$, are non-zero by assumption.
If we define $M_h:=1+\max_{s\in K}|h(s)|$ and the functions
\begin{eqnarray}\label{licom2}
g_1(s):=\dfrac{h(s)+M_h}{c_1},&g_2(s):=-\dfrac{M_h}{c_2},
\end{eqnarray}
\begin{eqnarray}\label{licom3}
g_i(s):=\eta,&i=3,\dots,\phi(r)
\end{eqnarray}
for a given $\eta>0$, then $g_i$, $i=1,2,\dots,\phi(r)$, are non-zero continuous functions on $K$ which are analytic in its interior and
\begin{equation}\label{h(s)}
h(s)=\sum\limits_{i=1}^{\phi(r)}c_ig_i(s)-\sum\limits_{i=3}^{\phi(r)}c_ig_i(s)=\sum\limits_{i=1}^{\phi(r)}c_ig_i(s)-\eta\sum\limits_{i=3}^{\phi(r)}c_i.
\end{equation}
Since $\chi_i$, $i=1,2,\dots,\phi(r)$, are non-equivalent Dirichlet characters, by \eqref{joint} we obtain
\begin{equation*}
\sharp\left\{1\leq n\leq N:\max\limits_{1\leq i\leq\phi(r)}\max\limits_{s\in K}\left|L\left(s+i\gamma_a^{{(n)}};\chi_i\right)-g_i(s)\right|<\eta\right\}>c(\eta)N
\end{equation*}
for any sufficiently large $N\gg_{\eta}1$.
In this case, for those $n$ from the set described on the left-hand side above, in combination with \eqref{licom1}--\eqref{licom3}, it also follows that
\begin{eqnarray*}
\max\limits_{s\in K}\left|L\left(s+i\gamma_a^{{(n)}};\psi\right)-h(s)\right| &\leq& \sum\limits_{i=1}^{\phi(r)}|c_i|\max\limits_{s\in K}\left|L\left(s+i\gamma_a^{{(n)}};\chi_i\right)-g_i(s)\right| \\
&&+~ \eta\sum\limits_{i=3}^{\phi(r)}|c_i|\\
&\ll& \eta.
\end{eqnarray*}
Taking $0<\eta\ll\varepsilon$ sufficiently small, we obtain \eqref{periodic}. Observe that in this case, $h(s)$ is allowed to have zeros.
\bigskip
\small
\section{Introduction} \label{sec:introduction}
\sun{Random walk (RW) is an effective tool for extracting relationships between entities in a graph, and is widely used in
many applications such as \emph{Personalized PageRank} (PPR) \cite{page1999pagerank}, \emph{SimRank} \cite{jeh2002simrank},
\emph{Random Walk Domination} \cite{li2014random}, \emph{Graphlet Concentration} (GC) \cite{prvzulj2007biological},
\emph{Network Community Profiling} (NCP) \cite{fortunato2016community}, \emph{DeepWalk} \cite{perozzi2014deepwalk}
and \emph{Node2Vec} \cite{grover2016node2vec}. For graph analysis tasks such as GC and NCP, RW queries generally dominate the
cost \cite{prvzulj2007biological,fortunato2016community}. Even for graph representation learning, the cost of sampling RW
is non-trivial, for example, a naive implementation of Node2Vec takes more than eight hours on the \emph{twitter} graph in our
experiments. Moreover, increasing the number of RW queries can improve the effectiveness of RW algorithms \cite{grover2016node2vec,prvzulj2007biological}.
Therefore, accelerating RW queries is an important problem.}
RW algorithms generally follow the execution paradigm illustrated
in Algorithm \ref{algo:common_paradigm}, which consists of massive RW queries.
Each query $Q$ starts from a given source vertex. At each step, $Q$ moves to a neighbor of the
current residing vertex at random, and repeats this process until satisfying a specific termination condition,
e.g., a target length is reached (Lines 2-5). Although RW algorithms follow a similar execution paradigm, there are
quite a few variants, which can differ significantly in neighbor selection (see Section~\ref{sec:random_walk}).
Encouraged by the success of in-memory graph processing engines~\cite{shun2013ligra,nguyen2013lightweight,zhang2018graphit,sundaram2015graphmat},
there have been some recent systems designed specifically for RW algorithms, including C-SAW \cite{pandey2020c}, GraphWalker \cite{wang2020graphwalker}
and KnightKing \cite{yang2019knightking}. They focus on accelerators, disk-based or distributed settings,
without specially optimizing in-memory execution of RW queries. \sun{However, with the rapid development of hardware,
modern servers are equipped with hundreds of gigabytes, or even several terabytes, of memory, which empowers in-memory processing of
graphs with hundreds of billions of edges. This covers many real-world graphs in applications \cite{dhulipalaprovably}. As such,
this paper studies the design and implementation of an in-memory graph engine for RW algorithms.}
\setlength{\textfloatsep}{0pt}
\begin{algorithm}[t]
\caption{Execution Paradigm of RW algorithms}
\label{algo:common_paradigm}
\footnotesize
\SetKwRepeat{Do}{do}{while}
\KwIn{a graph $G$ and a set $\mathbb{Q}$ of random walk queries\;}
\KwOut{the walk sequences of each query in $\mathbb{Q}$\;}
\ForEach{$Q \in \mathbb{Q}$}{
\Do{Terminate($Q$) is false}{
Select a neighbor of the current residing vertex $Q.cur$ at random\;
Add the selected vertex to $Q$\;
}
}
\KwRet $\mathbb{Q}$\;
\end{algorithm}
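As a concrete (hypothetical) illustration, the unbiased version of this paradigm can be sketched in a few lines of Python, with the graph stored as adjacency lists and a fixed target length as the termination condition; the function and variable names below are ours, not part of any framework:

```python
import random

def run_queries(adj, sources, target_len, rng=random):
    """Sketch of Algorithm 1: each query starts at a source vertex and
    repeatedly moves to a uniformly random neighbor until it holds
    target_len vertices (one possible termination condition)."""
    walks = []
    for src in sources:
        walk = [src]
        while len(walk) < target_len:          # Terminate(Q) check
            cur = walk[-1]
            if not adj[cur]:                   # dead end: stop this query early
                break
            walk.append(rng.choice(adj[cur]))  # uniform neighbor selection
        walks.append(walk)
    return walks
```

Real RW algorithms differ from this sketch only in how the neighbor is selected and when a query terminates.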
To crystallize the performance factors for in-memory RW executions, we conduct profiling studies on RW algorithms in comparison with conventional workloads of a single graph operation like BFS and SSSP (see Section \ref{sec:workload_profiling}). Our profiling results show that
common RW algorithms have as much as 73.1\% of CPU pipeline slots stalled due to irregular memory access, which is significantly more memory stalls than the conventional workloads incur. Consequently, the CPUs frequently wait on the high-latency access to the main memory, which becomes the major performance bottleneck. Besides, we observe that sampling methods such as \emph{inverse transformation sampling} \cite{marsaglia1963generating}, \emph{alias sampling} \cite{walker1977efficient} and \emph{rejection sampling} \cite{robert2013monte} perform very differently across RW algorithms
\HBS{(with differences of as much as 6 times)}. Thus, developing efficient RW algorithms requires non-trivial and significant engineering effort, considering the cache stall bottleneck as well as parallelization and the choice of sampling methods.
\sun{In this paper, we propose \textbf{ThunderRW}, a generic and efficient in-memory RW framework.
We employ a \emph{step-centric} programming model abstracting the computation from the local view of moving one step of a walker.
Users implement their RW algorithms by "thinking like a walker" in user-defined functions (UDF). The framework applies UDFs to each query
and parallelizes the execution by regarding a step of a query as a task unit. Furthermore, ThunderRW provides a variety of sampling methods
so that users can select an appropriate one based on the characteristics of their workloads.
Built upon the step-centric programming model, we propose the \emph{step interleaving} technique to resolve the cache stalls
caused by irregular memory access with \emph{software prefetching} \cite{lee2012prefetching}.
As modern CPUs can process multiple memory access requests simultaneously \cite{williams2009roofline},
the core idea of step interleaving is to hide memory access latency by issuing multiple outstanding memory
accesses, which exploits \emph{memory level parallelism} \cite{beamer2015locality} among different RW queries.}
We demonstrate the generality and programming flexibility of ThunderRW by showcasing four representative
RW algorithms including \HBS{PPR~\cite{page1999pagerank}, DeepWalk \cite{perozzi2014deepwalk}, Node2Vec \cite{grover2016node2vec} and MetaPath~\cite{sun2013mining}}. \sun{We conduct extensive experiments with
twelve real-world graphs. Experiment results show that (1) ThunderRW runs 8.6-3333.1X faster
than the naive implementation in popular open-source packages; (2) ThunderRW provides speedups of \sun{1.7-14.6X} over the state-of-the-art
frameworks including GraphWalker \cite{wang2020graphwalker} and KnightKing \cite{yang2019knightking} running on the same machine;
and (3) the step interleaving technique significantly reduces the memory stalls from \sun{73.1\%} to \sun{15.0\%}.}
\section{Background and Related Work} \label{sec:background}
\subsection{Preliminary} \label{sec:preliminaries}
This paper focuses on the directed graph $G = (V, E)$ where $V$ is a set of vertices and $E$ is a set of edges.
\HBS{An undirected graph can be supported by representing each undirected edge with two directed edges between the same two vertices in our system.}
Given a vertex $v \in V$, $N_v$ denotes the neighbors of $v$, i.e., $\{v'| e(v, v') \in E\}$ where $e(v, v')$
represents the edge between $v$ and $v'$. The degree $d_v$ denotes the number of neighbors of $v$. $E_v$ is the
set of edges adjacent to $v$, i.e., $\{e(v, v')| v' \in N_v\}$. Given
$e \in E$ (resp. $v \in V$), $w_e$ and $l_e$ (resp. $w_v$ and $l_v$) represent its weight and label, respectively.
Given $G$, a RW $Q$ is a stochastic process on $G$, which consists of a sequence of adjacent vertices.
$Q[i]$ is the $i$th vertex in the sequence where $i$ starts from 0. $Q.cur$ is the current residing vertex of $Q$.
$|Q|$ is the number of vertices in $Q$.
Suppose that $Q.cur$ is $v$. Given $e \in E_v$, we call the probability of $e$ being selected
the \emph{transition probability}, which is represented by $p(e)$. Then, the neighbor selection is equivalent
to sampling from the discrete probability distribution $P = \{p(e)\}$ where $e \in E_v$. Specifically,
it is to pick an element $h$ from $E_v$ based on the distribution of $P$, i.e., $P[h = e] = p(e)$.
For example, if the relative chance of $e$ being selected is proportional to the edge weight $w_e$, then
$p(e) = \hat{w}_e$ is the normalized probability where $\hat{w}_e = \frac{w_e}{\sum_{e' \in E_v}w_{e'}}$.
\subsection{Random Walk based Algorithms} \label{sec:random_walk}
RW algorithms generally follow the execution paradigm in Algorithm \ref{algo:common_paradigm}.
They mainly differ in the neighbor selection step. We first categorize them into \emph{unbiased}
and \emph{biased} based on the transition probability properties. Unbiased RW selects each edge $e \in E_v$ with the same probability where $v = Q.cur$,
while \HBS{the transition probability is nonuniform for biased RWs}, e.g., depending on the edge weight. We further classify the biased
RWs into \emph{static} and \emph{dynamic}. If the transition probability is determined before execution, then RW is static. Otherwise, it is dynamic, which is affected by states of RW queries.
In the following, we introduce four representative RW algorithms \HBS{that have been used in many applications}.
\textbf{PPR (Personalized PageRank)} \cite{page1999pagerank} assigns a score to each vertex $v'$ in the graph from the personalized view of a given
source $v$, which describes how much $v$ is interested in (or similar to) $v'$. A common solution for this problem is to start a number of
RW queries from $v$, each of which terminates with a fixed probability at each step, and to approximate the scores based on the
distribution of the end vertices of the random walk queries \cite{liu2016powerwalk,fogaras2005towards}. The algorithms generally set RW queries as unbiased \cite{lofgren2015efficient}.
\textbf{DeepWalk} \cite{perozzi2014deepwalk} is a graph embedding technique widely used in machine learning.
It is developed based on the SkipGram model \cite{mikolov2013efficient}. For each vertex,
it starts a specified number of RW queries with a target length to generate embeddings. The original DeepWalk is unbiased,
while the recent work \cite{cochez2017biased} extends it to consider the edge weight, which yields a biased (static) random walk.
\textbf{Node2Vec} \cite{grover2016node2vec} is a popular graph embedding technique based on the second-order random walk.
Different from DeepWalk, its transition probability depends on the last vertex visited. Suppose that $Q.cur$ is $v$.
Equation \ref{eq:n2v} describes the transition probability of selecting the edge $e(v, v')$ where $u$ is
the last vertex visited, $dist(v', u)$ is the distance between $v'$ and $u$, and $a$ and $b$ are two hyperparameters
controlling the random walk behavior. Node2Vec is dynamic because the transition probability relies on the states of queries.
Moreover, it can take the edge weight into consideration by multiplying $p(e)$ by $w_e$.
\begingroup
\setlength\abovedisplayskip{1pt}
\setlength\belowdisplayskip{1pt}
\begin{equation} \label{eq:n2v}
p(e(v, v')) =
\begin{cases}
\frac{1}{a} & \text{if $dist(v', u) = 0$},\\
1 & \text{if $dist(v', u) = 1$}, \\
\frac{1}{b} & \text{if $dist(v', u) = 2$}.
\end{cases}
\end{equation}
\endgroup
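To make Equation \eqref{eq:n2v} concrete, a minimal Python sketch of the (unnormalized, unweighted) transition probability follows; the adjacency structure \texttt{adj} and the function name are our own illustrative choices. Note that $dist(v',u)\leq 2$ always holds, since $v'$ and $u$ are both adjacent to $v$:

```python
def node2vec_prob(v_prime, u, adj, a, b):
    """Unnormalized Node2Vec transition probability for moving to v',
    given that u was the vertex visited before the current vertex v."""
    if v_prime == u:           # dist(v', u) = 0: the walk returns to u
        return 1.0 / a
    if v_prime in adj[u]:      # dist(v', u) = 1: v' is a neighbor of u
        return 1.0
    return 1.0 / b             # dist(v', u) = 2: v' is two hops from u
```

The weighted variant would multiply the returned value by the edge weight $w_{e(v,v')}$.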
\textbf{MetaPath}~\cite{sun2013mining} is a powerful tool to extract semantics information from heterogeneous information networks,
and is widely used in machine learning tasks such as natural language processing \cite{lao2011random,lv2019adapting}.
The RW queries are associated with a \emph{meta-path schema} $H$, which defines the pattern of the walk paths
based on the edge type, e.g., "write->publish->mention". Let $H[i]$ be the $i$th label in $H$. At each step, the RW query only considers the
edges $e \in E_v$ where $v = Q.cur$ such that $l_e$ is equal to $H[|Q|]$. In other words, if $l_e \neq H[|Q|]$,
then $p(e) = 0$. Thus, the transition probability depends on the states of the RW, and MetaPath is dynamic.
\subsection{Sampling Methods} \label{sec:sampling_methods}
Sampling from a discrete probability distribution $P = \{p_0, p_1,...,p_{n - 1}\}$ is to select an element $h$
from $\{0,1,...,n-1\}$ based on $P$ (i.e., $P[h = i] = p_i$). In this paper, we focus on five sampling techniques,
including \emph{naive sampling}, \emph{inverse transformation sampling} \cite{marsaglia1963generating},
\emph{alias sampling} \cite{walker1977efficient}, \emph{rejection sampling} \cite{robert2013monte} and a special case of \emph{rejection sampling} \cite{yang2019knightking} because they are efficient and widely used \cite{schwarz2011darts,wang2020graphwalker,pandey2020c,yang2019knightking,shao2020memory}.
Naive sampling only works on the uniform discrete distribution, while the other four can handle non-uniform distributions and select the element $h$ in two phases:
\emph{initialization}, which preprocesses the distribution $P$, and \emph{generation}, which picks an element on the basis of the initialization result.
Please refer to \cite{schwarz2011darts} for the details. In the following, we briefly introduce the sampling methods in the context of this paper, i.e.,
selecting an edge from $E_v$ based on the transition probability distribution $P$ where $v = Q.cur$.
\textbf{Naive sampling (\texttt{NAIVE}).} This method generates a uniform random integer number $x$ in the range $[0, d_v)$
and picks $E_v[x]$, which is the $x$th element in $E_v$.
It only works on the uniform discrete distribution. The time and space complexities are both $O(1)$.
\textbf{Inverse transformation sampling (\texttt{ITS}).} The initialization phase of \texttt{ITS}
computes the \emph{cumulative distribution function} of $P$ as follows: $P' = \{p'_i = \sum_{j = 0} ^ {i}p_{j}\}$ where $0 \leqslant i < d_v$.
After that, the generation phase first generates a uniform real number $x$ in $[0, p'_{d_v - 1})$, then uses a binary
search to find the smallest index $i$ such that $x < p_{i}'$, and finally selects $E_v[i]$. The time complexity
of the initialization is $O(d_v)$, and that of the generation is $O(\log d_v)$. As \texttt{ITS} needs to store $P'$,
the space complexity is $O(d_v)$.
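A minimal Python sketch of \texttt{ITS} (the function names and use of unnormalized weights are our own choices): the initialization accumulates running sums, and the generation binary-searches them.

```python
import bisect
import random

def its_init(weights):
    """Initialization, O(d_v): cumulative sums of the (unnormalized) weights."""
    cdf, total = [], 0.0
    for w in weights:
        total += w
        cdf.append(total)
    return cdf

def its_generate(cdf, rng=random):
    """Generation, O(log d_v): smallest index i with x < cdf[i]."""
    x = rng.uniform(0.0, cdf[-1])
    return min(bisect.bisect_right(cdf, x), len(cdf) - 1)
```

The `min` guard covers the (measure-zero, but floating-point-possible) case $x=p'_{d_v-1}$.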
\textbf{Alias sampling (\texttt{ALIAS}).} The initialization phase
builds two tables: the \emph{probability table} $H$, and the \emph{alias table} $A$. Both of them have
$d_v$ values. $H[i]$ and $A[i]$ represent the $i$th value of $H$ and $A$, respectively.
Given $0 \leqslant i < d_v$, $A[i]$ is a bucket containing one or two elements from $\{0, 1,...,d_v - 1\}$,
which are denoted by $A[i].first$ and $A[i].second$, respectively. $H[i]$ is the probability of selecting $A[i].first$.
If $A[i]$ has only one element, then $A[i].second$ is $null$ and $H[i]$ is equal to 1. The generation
phase first generates a uniform integer number $x$ in $[0, d_v)$ and then retrieves $H[x]$ and $A[x]$. Next,
it generates a uniform real number $y$ in $[0, 1)$. If $y < H[x]$, then picks $e(v, A[x].first)$.
Otherwise, the edge selected is $e(v,A[x].second)$. The time complexity of initialization is $O(d_v)$
and that of generation is $O(1)$. The space complexity is $O(d_v)$.
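The table construction admits a compact Python sketch (this is Vose's variant of the alias method, not necessarily the exact construction any particular system uses), assuming the input probabilities sum to 1:

```python
import random

def alias_init(probs):
    """Build the probability table H and alias table A in O(d_v) time.
    Bucket i holds A[i] = (first, second); H[i] is the probability of
    picking `first` inside bucket i."""
    n = len(probs)
    scaled = [p * n for p in probs]
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    H = [1.0] * n
    A = [(i, None) for i in range(n)]
    while small and large:
        s, l = small.pop(), large.pop()
        H[s] = scaled[s]                       # underfull bucket s is topped up
        A[s] = (s, l)                          # ... with mass from element l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    return H, A

def alias_generate(H, A, rng=random):
    """O(1) generation: pick a bucket, then a biased coin flip inside it."""
    x = rng.randrange(len(H))
    first, second = A[x]
    return first if rng.random() < H[x] or second is None else second
```

Leftover buckets after the loop keep $H[i]=1$, which makes the construction robust to floating-point drift.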
\textbf{Rejection sampling (\texttt{REJ}).} The initialization phase of \texttt{REJ} gets $p ^ * = \max_{p \in P} p$.
The generation phase can be viewed as throwing darts on a rectangular dartboard until hitting the target area.
Specifically, it has two steps: (1) generate a uniform integer number $x$ in $[0, d_v)$ and a uniform real number
$y$ in $[0, p^*)$ (i.e., the dart is thrown at the position $(x, y)$); and (2) if $y < p_x$,
then select $E_v[x]$ (i.e., hit the target area); otherwise, repeat Step (1). The time complexity of initialization
is $O(d_v)$, and that of generation is $O(\mathbb{E})$ where $\mathbb{E} = \frac{d_v \times p^*}{\sum_{p \in P}p}$
(i.e., the area of the rectangular board divided by the area of the target region). Based on the computation method of $\mathbb{E}$,
we can get that $1 \leqslant \mathbb{E} \leqslant d_v$. The space complexity is $O(1)$.
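The dartboard analogy translates directly into Python (our naming; the input is the list of unnormalized transition probabilities):

```python
import random

def rej_init(probs):
    """Initialization, O(d_v): the board height p* = max(P)."""
    return max(probs)

def rej_generate(probs, p_star, rng=random):
    """Generation: throw darts at the d_v x p* board until one lands
    below the probability bar of its column."""
    while True:
        x = rng.randrange(len(probs))    # column of the dart
        y = rng.uniform(0.0, p_star)     # height of the dart
        if y < probs[x]:                 # hit the target area
            return x
```

The expected number of loop iterations is exactly the $\mathbb{E}$ defined above.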
\textbf{A special case of \texttt{REJ} (\texttt{O-REJ}).} A special case of \texttt{REJ} is that we
can set a value $p ^ * \geqslant \max_{p \in P} p$ without the initialization phase, while keeping
$\mathbb{E} = \frac{d_v \times p^*}{\sum_{p \in P}p}$ close to $\frac{d_v \times \max_{p \in P} p}{\sum_{p \in P}p}$.
For example, set $p ^ *$ to $\max \{1, \frac{1}{a}, \frac{1}{b}\}$ for Node2Vec \cite{yang2019knightking}.
The generation phase is the same as \texttt{REJ}. Therefore, the time complexity is $O(\mathbb{E})$ where $\mathbb{E} = \frac{d_v \times p^*}{\sum_{p \in P}p}$
and $p ^ *$ is specified by users. The space complexity is $O(1)$.
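Continuing the Node2Vec example, a sketch of \texttt{O-REJ} in Python: the bound $p^*=\max\{1,1/a,1/b\}$ needs no scan of $E_v$, and each candidate's probability is evaluated lazily only when that candidate is drawn (function name and adjacency layout are our own illustrative choices):

```python
import random

def o_rej_node2vec_step(v, u, adj, a, b, rng=random):
    """Move one Node2Vec step from v (previous vertex u) using O-REJ:
    p* = max{1, 1/a, 1/b} upper-bounds the transition probability
    without looping over the neighbors of v."""
    p_star = max(1.0, 1.0 / a, 1.0 / b)
    neighbors = adj[v]
    while True:
        cand = neighbors[rng.randrange(len(neighbors))]
        if cand == u:            # dist(cand, u) = 0
            p = 1.0 / a
        elif cand in adj[u]:     # dist(cand, u) = 1
            p = 1.0
        else:                    # dist(cand, u) = 2
            p = 1.0 / b
        if rng.uniform(0.0, p_star) < p:
            return cand
```

This is the reason \texttt{O-REJ} suits distributed settings: no per-step pass over $E_v$ is required before sampling.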
\sun{In existing works, unbiased random walks (e.g., PPR \cite{page1999pagerank} and unweighted DeepWalk \cite{perozzi2014deepwalk}) adopt
\texttt{NAIVE} sampling. In contrast, biased random walks (e.g., weighted DeepWalk \cite{ye2019improved,dai2018adversarial},
Node2Vec \cite{grover2016node2vec} and MetaPath \cite{fu2017hin2vec,hu2018leveraging})
use \texttt{ALIAS} sampling because the time complexity of the generation phase is $O(1)$.
C-SAW \cite{pandey2020c} adopts \texttt{ITS} to utilize the parallel computation capability of GPUs to calculate the prefix sum.
KnightKing \cite{yang2019knightking} uses \texttt{O-REJ} to avoid scanning neighbors of $Q.cur$ to reduce the network communication cost.}
\subsection{Related Work} \label{sec:related_work}
\textbf{Graph computing frameworks.} There are a number of generic graph computing frameworks working on different computation environments,
for example, (1) Single Machine (CPUs): GraphChi \cite{kyrola2012graphchi}, Ligra \cite{shun2013ligra}, Graphene \cite{liu2017graphene}, and
GraphSoft \cite{jun2018grafboost}; (2) GPUs: Medusa \cite{zhong2013medusa}, CuSha \cite{khorasani2014cusha} and Gunrock \cite{wang2016gunrock};
and (3) Distributed Environment: Pregel \cite{malewicz2010pregel}, GraphLab \cite{low2012distributed},
PowerGraph \cite{gonzalez2012powergraph}, GraphX \cite{gonzalez2014graphx}, Blogel \cite{yan2014blogel}, Gemini \cite{zhu2016gemini},
and Grapes \cite{fan2018parallelizing}. They usually adopt a vertex- or edge-centric model, and are highly optimized for a single graph operation.
In contrast, ForkGraph \cite{lu2021cache} targets graph algorithms consisting of concurrent graph queries, for example, betweenness centrality.
However, all of them focus on traditional graph query operations such as BFS and SSSP without considering RW workloads.
\HBS{That motivates the development of engines specially optimized for RW~\cite{yang2019knightking, pandey2020c, wang2020graphwalker}.}
\sun{\textbf{Random walk frameworks.} In contrast to graph computing frameworks abstracting the computation from
the view of the graph data, existing RW frameworks adopt the \emph{walker-centric} model, which regards
each query as a parallel task. KnightKing \cite{yang2019knightking} is a distributed
framework. It adopts the BSP model that moves a step for all queries
at each iteration until all queries complete. To reduce data transfers in network, it utilizes \texttt{O-REJ} sampling to
avoid scanning $E_v$ where $v = Q.cur$. It exposes an API for users to set a suitable upper
bound for the edge transition probability for each edge adjacent to $Q.cur$. Unfortunately, we find that this design introduces
an implicit constraint on RW algorithms: a suitable upper bound must be determined without
looping over $E_v$. This works well for Node2Vec by setting the upper bound as $\max{\{1.0/a, 1.0, 1.0/b\}}$
according to Equation \ref{eq:n2v}. However, it cannot
handle MetaPath because the transition probability of each $e \in E_v$ can be zero due to
the label filter. Another limitation is that KnightKing can
suffer from the tail problem since it moves a step for all queries at each iteration, whereas queries can have varying lengths.}
\sun{C-SAW \cite{pandey2020c} is a framework on GPUs. It adopts the BSP model as well. To utilize the parallel computation capability in
the many-core architecture, C-SAW uses \texttt{ITS} sampling in computation. Particularly, for all random walk types including
unbiased, static and dynamic, C-SAW first conducts a prefix sum on the
transition probability of edges adjacent to $Q.cur$, and then selects an edge. Consequently, it incurs high overhead for unbiased and
static random walks. Moreover, C-SAW cannot support random walks with varying lengths (e.g., PPR) since such RW queries can degrade the
utilization of GPUs. Additionally, Node2Vec is not supported by C-SAW, because C-SAW does not support
the distance verification on GPUs.}
\sun{GraphWalker \cite{wang2020graphwalker} is an I/O efficient framework on a single machine.
For a graph that cannot reside in memory, GraphWalker divides it into a set of partitions, and focuses on optimizing the scheduling
of loading partitions into memory to reduce the number of I/Os. Specifically, for each partition, GraphWalker records
the number of queries residing in it, and the scheduler prioritizes partitions with
more queries. Given a partition $G'$ in memory, GraphWalker adopts the ASP model to execute queries in it. It assigns a query $Q$ to
each worker (i.e., a thread), and executes it independently until $Q$ completes or jumps out of $G'$.
Once all queries in $G'$ complete or leave $G'$, GraphWalker swaps it out, and reads the partition on disk with the most queries.
It repeats this process till all queries complete. GraphWalker supports unbiased RW only.}
\sun{This paper focuses on accelerating the in-memory execution of RW queries. ThunderRW abstracts the computation of RW algorithms
from the perspective of queries as well to exploit the parallelism in RW algorithms, but takes the \emph{step-centric} model,
which regards one step of a query as the task unit and factors one step into the gather-move-update operations to empower the
step interleaving technique. Moreover, ThunderRW supports all five sampling methods in Section \ref{sec:sampling_methods} so
that users can adopt an appropriate sampling method for a specific workload. ThunderRW
supports all four RW algorithms in Section \ref{sec:random_walk}, which demonstrates its programming flexibility over other RW frameworks.}
\textbf{RW algorithm optimization.} Due to the importance of RW-based
applications, a variety of algorithm-specific optimizations have been proposed for
different RW applications, e.g., PPR \cite{wang2017fora,shi2019realtime,lofgren2014fast,wei2018topppr,guo2017parallel},
Node2Vec \cite{zhou2018efficient} and second-order random walks \cite{shao2020memory}. In contrast,
we aim to design a generic and efficient random walk framework on which users can easily implement different
kinds of random walk applications. Thus, the algorithm-specific optimizations are beyond the scope of this paper.
\textbf{Prefetching in databases.} Our step-interleaving technique is inspired by the prefetching techniques in query processing of databases. As the performance gap between main memory and CPUs widens, prefetching has become an effective means to improve database performance. There have been studies applying prefetching to B-tree indexes~\cite{10.1145/376284.375688} and hash joins~\cite{chen2007improving, 6544839, 10.14778/1687553.1687564, 10.14778/2735703.2735704}. Hash joins are probably the most widely studied operator for prefetching. Group prefetching (GP) and software
pipelined prefetching (SPP) \cite{chen2007improving} are the classic prefetching techniques for hash joins, which rearrange a sequence of operations in a loop into several
stages and execute all queries stage by stage in batches. However, GP and SPP cannot efficiently handle queries with
irregular access patterns; for example, one binary search may need three probes to find the target value,
while another needs four. To resolve the problem, AMAC \cite{kocberber2015asynchronous} proposes
to execute the stages of each query asynchronously by explicitly maintaining the states of each stage. However,
AMAC incurs more overhead than GP and SPP, especially when there are many stages,
because it needs to maintain the state of each stage. In the context of random walks, there is a lack of a model to abstract stages from a
sequence of operations and model their dependency relationships to guide the implementation.
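As a rough illustration of the GP idea (a sketch under the assumption of a GCC/Clang-style \texttt{\_\_builtin\_prefetch}; this is our own example, not the code of \cite{chen2007improving}), a loop of data-dependent array lookups can be split into a prefetch stage and a consume stage over a small group:

```cpp
#include <cstddef>
#include <vector>

// Group prefetching (GP) sketch: split "sum += value[idx[i]]" into two
// stages and run each stage over a group of G iterations, issuing the
// prefetches in stage 1 so that stage 2 mostly hits the cache.
long SumWithGroupPrefetch(const std::vector<long>& value,
                          const std::vector<std::size_t>& idx) {
    const std::size_t G = 8;  // group size chosen to cover memory latency
    long sum = 0;
    std::size_t i = 0;
    for (; i + G <= idx.size(); i += G) {
        for (std::size_t j = 0; j < G; ++j)  // stage 1: issue prefetches
            __builtin_prefetch(&value[idx[i + j]]);
        for (std::size_t j = 0; j < G; ++j)  // stage 2: consume the data
            sum += value[idx[i + j]];
    }
    for (; i < idx.size(); ++i)              // handle the remaining tail
        sum += value[idx[i]];
    return sum;
}
```

The two stages here have a fixed, identical length per iteration, which is exactly the regularity assumption that breaks down for the irregular access patterns discussed above.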
\section{Motivations} \label{sec:workload_profiling}
In this section, we present profiling results to assess the performance bottlenecks of the in-memory computation
of RW algorithms. Specifically, we execute RW queries with different sampling methods and
examine the hardware utilization with the \emph{top-down microarchitecture
analysis method} (TMAM). In the following, we first introduce TMAM and then present the profiling results.
\textbf{Top-down analysis method (TMAM) \cite{coorporation2016intel}.} TMAM is a simplified and intuitive model for
identifying the performance bottlenecks in out-of-order CPUs. It uses
the \emph{pipeline slot} to represent the hardware resources required to process the micro-operations (uOps). In
a cycle, a pipeline slot is either empty (\emph{stalled}) or filled with a uOp. The execution stall is caused by the \emph{front-end}
or the \emph{back-end} part of the pipeline. \HBS{Specifically, the back-end cannot accept new operations due to the lack of required resources. It can be
further split into \emph{memory bound}, which represents the stall caused by the memory subsystem, and
\emph{core bound}, which reflects the stall incurred by the unavailable execution units.} When the slot is filled with a uOp, it will be classified as \emph{retiring} if
the uOp eventually retires (otherwise, the slot is categorized as \emph{bad speculation}).
We use Intel Vtune Profiler to measure the percentage of pipeline slots in each category (retiring, bad speculation, front-end bound, memory bound and core bound) in our experiments.
\subsection{Observations}
\begin{table}[t]
\footnotesize
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{Comparison of pipeline slot breakdown and memory bandwidth (the total value of read and write) between traditional graph algorithms and RW algorithms.}
\label{tab:comparison_breakdown}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{c|c|c|c|c|c|c}
\hline
\textbf{Method} & \textbf{\begin{tabular}[c]{@{}c@{}}Front\\ End\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Bad\\ Spec\end{tabular}} & \textbf{Core} & \textbf{Memory} & \textbf{Retiring} & \textbf{\begin{tabular}[c]{@{}c@{}}Memory\\ Bandwidth\end{tabular}} \\ \hline\hline
BFS & 11.6\% & 9.1\% & 20.8\% & 40.6\% & 18.0\% & 51.7 GB/s \\ \hline
SSSP & 9.1\% & 12.5\% & 24.9\% & 36.9\% & 16.6\% & 38.2 GB/s \\ \hline
\sun{PPR} & \sun{0.6\%} & \sun{0.7\%} & \sun{15.8\%} & \sun{\textbf{73.1\%}} & \sun{9.7\%} & \sun{\textbf{1.4 GB/s}} \\ \hline
DeepWalk & 1.0\% & 3.9\% & 16.7\% & \textbf{69.7\%} & 8.7\% & \textbf{5.6 GB/s} \\ \hline
\sun{Node2Vec} & \sun{11.5\%} & \sun{22.1\%} & \sun{24.3\%} & \sun{28.1\%} & \sun{14.1\%} & \sun{17.1 GB/s} \\ \hline
\sun{MetaPath} & \sun{6.2\%} & \sun{7.5\%} & \sun{29.7\%} & \sun{33.9\%} & \sun{22.7\%} & \sun{9.9 GB/s} \\ \hline
\end{tabular}
}
\end{table}
\sun{\textbf{Varying random walk workloads.} We first evaluate the four RW algorithms in Section \ref{sec:random_walk}.
Specifically, we set PPR as unbiased, and configure the termination probability as 0.2.
For DeepWalk and Node2Vec, we set the target length as 80. The transition probability of DeepWalk
is the edge weight, and that of Node2Vec is calculated based on Equation \ref{eq:n2v} where $a=2$ and $b=0.5$.
The schema length of MetaPath is 5, and we generate it by randomly choosing five labels from the edge label set.
PPR starts $|V|$ queries from a given vertex, and the others start a query from each vertex in $V$.
Following existing studies \cite{page1999pagerank,cochez2017biased,grover2016node2vec,sun2013mining} (as well as popular open-source packages
\footnote{\url{https://github.com/aditya-grover/node2vec}, Last accessed on 2021/03/20} \footnote{\url{https://github.com/GraphSAINT/GraphSAINT}, Last accessed on 2021/03/20.}), we use \texttt{NAIVE} sampling for PPR,
and \texttt{ALIAS} sampling for the others. Moreover, we build alias tables for DeepWalk in a preprocessing phase to accelerate the execution
of queries. However, this method is prohibitively expensive for high order RW due to the exponential memory consumption
\cite{yang2019knightking,shao2020memory}. For example, the space complexity of such an index for Node2Vec, which is second-order, is
$O(\sum_{v\in V}d_{v} ^ 2)$, and it can consume more than 1000 TB of space for \emph{twitter}. As such, we compute the transition probability
and perform the initialization of \texttt{ALIAS} at runtime. To compare the performance characteristics with RW algorithms,
we evaluate BFS and SSSP, which are two conventional graph algorithms. We implement the RW algorithms without any framework, whereas
we implement BFS and SSSP with Ligra \cite{shun2013ligra}.}
Table \ref{tab:comparison_breakdown} presents the experiment results on \emph{livejournal}, the details of which are listed
in Table \ref{tab:datasets}. RW queries randomly visit vertices on the graph, which leads to a massive number of random memory accesses.
Consequently, as high as 73.1\% pipeline slots of PPR and DeepWalk
are stalled due to memory access. In contrast, the memory bound of BFS and SSSP
is less than 45\%, which demonstrates much better cache locality than that of PPR and DeepWalk.
Due to the large proportion of memory stalls, the retiring of PPR and DeepWalk is less than 10\%.
Furthermore, we measure the memory bandwidth utilization of these algorithms. Our benchmark shows
that the max memory bandwidth of our test bed is 60 GB/s. As shown in the table, \HBS{
the bandwidth utilization of BFS and SSSP are rather high (86.2\% and 63.6\%, respectively), while that of PPR and DeepWalk
is very low (2.3\% and 9.3\%, respectively).}
\sun{Compared with PPR and DeepWalk, Node2Vec and MetaPath exhibit different characteristics. The memory bound is lower than
PPR and DeepWalk, whereas the retirement and bandwidth are much higher. To gain more insight, we first examine the execution time breakdown
on computing the transition probability (denoted by \emph{compute $p(e)$}), and the initialization
and generation phases of sampling an edge (denoted by \emph{Init} and \emph{Gen}, respectively), and then analyze the complexity of these operations
at a step.}
\begin{table}[t]
\footnotesize
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{\sun{Comparison of execution time breakdown and the time complexity per step among RW algorithms where $v = Q.cur$
and $u$ is the last vertex of $Q$.}}
\label{tab:time_breakdown}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{c|c|c|c|c|c|c}
\hline
\multirow{3}{*}{\textbf{Method}} & \multicolumn{3}{c|}{\textbf{Time Breakdown}} & \multicolumn{3}{c}{\textbf{Complexity per Step}} \\ \cline{2-7}
& \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Compute\\ $p(e)$\end{tabular}}} & \multicolumn{2}{c|}{\textbf{Sampling}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Compute\\ $p(e)$\end{tabular}}} & \multicolumn{2}{c}{\textbf{Sampling}} \\ \cline{3-4} \cline{6-7}
& & \textbf{Init} & \textbf{Gen} & & \textbf{Init} & \textbf{Gen} \\ \hline\hline
PPR & \textit{N/A} & \textit{N/A} & 100\% & \textit{N/A} & \textit{N/A} & $O(1)$ \\ \hline
DeepWalk & \textit{N/A} & \textit{N/A} & 100\% & \textit{N/A} & \textit{N/A} & $O(1)$ \\ \hline
Node2Vec & 89.9\% & 9.9\% & 0.2\% & $O(d_v \times \log d_{u})$ & $O(d_v)$ & $O(1)$ \\ \hline
MetaPath & 29.0\% & 69.9\% & 1.1\% & $O(d_v)$ & $O(d_v)$ & $O(1)$ \\ \hline
\end{tabular}
}
\end{table}
\sun{Table \ref{tab:time_breakdown} lists the results. PPR and DeepWalk are static, and they only need to sample an edge and move
$Q$ along it in run time. In contrast, Node2Vec and MetaPath are dynamic, and they first compute the transition probability for each $e \in E_v$
where $v = Q.cur$, and then sample an edge. Consequently, the cost of \emph{Gen} is negligible as shown in Table \ref{tab:time_breakdown}.
Moreover, the memory bound is much lower than that of static RWs in Table \ref{tab:comparison_breakdown} since the computation scans $E_v$ in
a sequential manner. Given $e \in E_v$, where $u$ is the last vertex of $Q$, the complexity of computing $p(e)$ in Node2Vec is $O(\log d_u)$ because the
distance check in Equation \ref{eq:n2v} is implemented by a binary search. However, MetaPath computes $p(e)$ with a simple label filter.
As a result, computing $p(e)$ accounts for around 90\% of the execution time in Node2Vec, whereas \emph{Init} dominates the cost in MetaPath.}
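For illustration, the Node2Vec distance check can be sketched as follows (a simplified stand-in for the actual implementation; \texttt{nbrs\_u} is assumed to be $u$'s adjacency list sorted in ascending order, and the function name is our own):

```cpp
#include <algorithm>
#include <vector>

// Sketch of computing p(e) for Node2Vec on edge e(v, dst), where u is the
// last vertex of Q (the vertex visited before v = Q.cur). The distance
// check is a binary search over u's sorted neighbor list, which yields the
// O(log d_u) term per edge. a and b are the Node2Vec parameters.
double Node2VecWeight(const std::vector<int>& nbrs_u, int u, int dst,
                      double a, double b) {
    if (dst == u) return 1.0 / a;  // dst is u itself: distance 0
    if (std::binary_search(nbrs_u.begin(), nbrs_u.end(), dst))
        return 1.0;                // dst is a neighbor of u: distance 1
    return 1.0 / b;                // otherwise: distance 2
}
```

Calling this for every $e \in E_v$ gives the $O(d_v \times \log d_u)$ per-step cost of computing $p(e)$ listed in Table \ref{tab:time_breakdown}.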
\sun{\textbf{Observation 1.} \emph{The in-memory computation of common RW algorithms suffers severe performance
issues due to memory stalls caused by cache misses and under-utilizes the memory bandwidth. For high order RW algorithms,
computing $p(e)$ and initializing the auxiliary data structure for sampling dominate the in-memory computation cost, and their complexities
are determined by the RW algorithm and the selected sampling method, respectively.}}
\textbf{Varying sampling methods and RW types.} We further examine the performance of sampling methods.
We \HBS{develop a micro-benchmark that executes} $10^7$ RW queries, each of which starts from a vertex randomly selected from the graph. The target length is 80.
We evaluate three types of RW queries as discussed in Section~\ref{sec:random_walk}: \textbf{unbiased}, \textbf{static} and \textbf{dynamic}.
For unbiased RW, we first perform the initialization
phase of sampling methods for the neighbor set of each vertex in a \emph{preprocessing} step. We then use the generation phase of a
sampling method to select a neighbor of $Q.cur$ in execution. For static RW, we evaluate queries with the same process as that of unbiased.
The only difference is that the edge weight is used as the transition probability for static RW, whereas the transition probability in unbiased RW is uniform by default.
For dynamic RW, we set the edge weight as the transition probability, while performing the initialization phase for the neighbor set of $Q.cur$
in execution because the transition probability of dynamic RW varies during the computation.
\begin{figure}[t]\small
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\captionsetup[subfigure]{aboveskip=0pt,belowskip=0pt}
\centering
\begin{subfigure}[t]{0.155\textwidth}
\centering
\includegraphics[scale=0.23]{experiment_figures/profiling_results/livejournal_unbiased_execution_time.pdf}
\caption{Unbiased.}
\label{fig:unbiased}
\end{subfigure}
\begin{subfigure}[t]{0.155\textwidth}
\centering
\includegraphics[scale=0.23]{experiment_figures/profiling_results/livejournal_static_execution_time.pdf}
\caption{Static.}
\label{fig:static}
\end{subfigure}
\begin{subfigure}[t]{0.155\textwidth}
\centering
\includegraphics[scale=0.23]{experiment_figures/profiling_results/livejournal_dynamic_execution_time.pdf}
\caption{Dynamic.}
\label{fig:dynamic}
\end{subfigure}
\caption{Effectiveness of sampling methods.}
\label{fig:sampling_method_effectiveness}
\end{figure}
Figure \ref{fig:sampling_method_effectiveness} presents the experiment results of the sequential execution with
various sampling methods on different RW types. \HBS{We have the following findings. First, the \texttt{NAIVE} sampling method performs the best on unbiased RW as
it has no initialization phase. Second, on static RW, the \texttt{ALIAS} sampling method outperforms the others because its generation phase has a lower time complexity. However, \texttt{ALIAS} runs much slower than the other
methods on dynamic RW since its initialization cost is high in practice. Third, \texttt{O-REJ} performs well
on dynamic RW since it has no initialization phase. Fourth, we observe that
the cost of evaluating dynamic RW is significantly higher than that of unbiased and static RW
because of the initialization phase (if any) at each step.}
\textbf{Observation 2.} \emph{Sampling methods have an important impact on the performance and
no sampling method dominates in all cases. Generally, dynamic RW is more expensive than unbiased and static RW.}
\subsection{System Implications}
\sun{Based on the profiling results, we can categorize the cost of evaluating RW queries into two classes, that of computing
$p(e)$ and that of sampling an edge. As the former is determined by the RW algorithms (i.e., algorithm-specific), our framework
targets accelerating the latter operation.} Moreover, we have the following implications for the design and implementation of ThunderRW.
First, we need to develop mechanisms to reduce cache stalls. Our profiling
results show that the in-memory computation of common RW algorithms suffers severe performance issues due to
irregular memory accesses. None of the previous random walk frameworks
\cite{pandey2020c,wang2020graphwalker,yang2019knightking}
addresses the problem. On the other hand, there are massive numbers of queries in random walk workloads, but
the memory bandwidth is under-utilized.
Inspired by previous work on accelerating multiple index lookups in database systems with prefetching
\cite{chen2007improving,kocberber2015asynchronous}, there are opportunities for prefetching and
interleaving executions among different queries.
Second, there is a need to support multiple sampling methods. However, existing frameworks support
one sampling method only and generally regard all RW as dynamic (e.g., C-SAW), while (1) the sampling method has an
important impact on the performance and none of them dominates in all cases; and (2) the cost of evaluating dynamic
RW is generally much more expensive than that of unbiased and static RW.
\section{ThunderRW Abstraction} \label{sec:computation_model}
In this section, we present the abstraction of the computation in ThunderRW.
\subsection{Step-centric Model} \label{sec:step_centric_model}
To abstract the computation of RW algorithms, we propose the \emph{step-centric} model in this paper.
We observe that RW algorithms are built upon a number of RW queries rather than a single query.
In spite of limited intra-query parallelism, there is abundant inter-query parallelism in RW algorithms
as each RW query can be executed independently. Therefore, our step-centric
model abstracts the computation of RW algorithms from the perspective of queries to exploit the inter-query parallelism.
Specifically, we model the computation from the local view of moving one step of a query $Q$.
Then, we abstract a step of $Q$ into the \texttt{Gather}-\texttt{Move}-\texttt{Update} (GMU) operations
to characterize the common structure of RW algorithms. With the step-centric model,
users develop RW algorithms by ``thinking like a walker''. They focus on defining functions that set
the transition probability of $e \in E_v$ and update the states of $Q$ at each
step, while the framework facilitates applying \HBS{user-defined} step-oriented functions to RW
queries.
\subsection{Step-centric Programming} \label{sec:api}
\setlength{\textfloatsep}{0pt}
\begin{algorithm}[t]
\caption{ThunderRW Framework}
\label{algo:ThunderRW_framework}
\footnotesize
\SetKwProg{func}{Function}{}{}
\SetKwFunction{Gather}{Gather}
\SetKwFunction{Move}{Move}
\SetKwRepeat{Do}{do}{while}
\KwIn{a graph $G$ and a set $\mathbb{Q}$ of random walk queries\;}
\KwOut{the walk sequences of each query in $\mathbb{Q}$\;}
\ForEach{$Q \in \mathbb{Q}$}{
\Do{$stop$ is false}{
$C \leftarrow$ \Gather{$G$, $Q$, \textbf{Weight}}\;
$e \leftarrow$ \Move{$G$, $Q$, $C$}\;
$stop \leftarrow$ \emph{\textbf{Update}}($Q$, $e$)\;
}
}
\KwRet $\mathbb{Q}$\;
\func{\Gather{$G, Q$, \textbf{Weight}}}{
$C\leftarrow \{\}$\;
\ForEach{$e \in E_{Q.cur}$}{
Add \emph{\textbf{Weight}}($Q$, $e$) to $C$;
}
$C\leftarrow$ execute initialization phase of a given sampling method on $C$\;
\KwRet $C$\;
}
\func{\Move{$G, Q, C$}}{
Select an edge $e(Q.cur, v) \in E_{Q.cur}$ based on $C$ and add $v$ to $Q$\;
\KwRet $e(Q.cur, v)$\;
}
\end{algorithm}
\textbf{Framework.} Algorithm \ref{algo:ThunderRW_framework} gives
an overview of ThunderRW.
Lines 1-6 execute each query one-by-one. Lines 3-5 factor one step
into three functions based on the step-centric model. \texttt{Gather}
collects the transition probabilities of edges adjacent to $Q.cur$.
It loops over $E_{Q.cur}$, applies \texttt{Weight},
a user-defined function, to each edge $e$, and adds the transition
probability of $e$ to $C$ (Lines 10-11). Then, Line 12 executes
the initialization phase of a given sampling method to update $C$.
\texttt{Move} picks an edge based on $C$ and moves $Q$ along the
selected edge (Lines 14-16). \sun{As random memory accesses in the \emph{system space}
(i.e., the framework excluding user-defined functions) are mainly in \texttt{Move},
we apply step-interleaving techniques to optimize its performance (see Section~\ref{sec:step_interleaving}).}
Finally, Line 5 invokes \texttt{Update}, a user-defined function, to update states of $Q$ based on the
movement. The return value of \texttt{Update} decides whether $Q$ should be
terminated.
The framework described in Algorithm \ref{algo:ThunderRW_framework}
can \HBS{support} unbiased, static and dynamic RW with different
sampling methods. Furthermore, we optimize the execution flow of ThunderRW
based on the RW type and the selected sampling method.
The transition probability of static RW is fixed during
the execution. In that case, ThunderRW omits the \texttt{Gather}
operation and instead introduces a preprocessing step to reduce the runtime cost, which obtains the transition
probabilities \HBS{during system initialization}. Algorithm
\ref{algo:preprocessing} presents the preprocessing for static RW.
Given a vertex $v$, Lines 3-4 loop over each edge $e$ in $E_v$ and apply the
\texttt{Weight} function to $e$ to obtain the transition probability. As
the probability does not rely on a query, we set $Q$ as \emph{null}. After
that, Lines 5-6 perform the initialization phase of a given sampling method
on $C_v$ and store $C_v$ for the usage in the query execution. As such,
we can load $C_{Q.cur}$ directly without \texttt{Gather} for static RW in Algorithm \ref{algo:ThunderRW_framework}.
\setlength{\textfloatsep}{0pt}
\begin{algorithm}[t]
\caption{Preprocessing for Static Random Walk}
\label{algo:preprocessing}
\footnotesize
\KwIn{a graph $G$\;}
\KwOut{the transition probabilities $C_v$ on $E_v$ for each vertex $v$\;}
\ForEach{$v \in V(G)$}{
$C_v \leftarrow \{\}$\;
\ForEach{$e \in E_{v}$}{
Add \emph{\textbf{Weight}}($null$, $e$) to $C_v$;
}
$C_v\leftarrow$ execute initialization phase of a given sampling method on $C_v$\;
Store $C_v$ for the usage in query execution.
}
\end{algorithm}
Moreover, the \texttt{NAIVE} and \texttt{O-REJ} sampling methods have no
initialization phase as discussed in Section \ref{sec:sampling_methods}.
Hence, we do not need to collect the transition probability for initialization. As such, ThunderRW skips both the preprocessing
step and the \texttt{Gather} operation in the execution if
\texttt{NAIVE} or \texttt{O-REJ} is used.
\textbf{Application Programming Interfaces (APIs).} \sun{ThunderRW provides two
kinds of APIs, which include hyperparameters and user-defined functions.
Users develop their RW algorithms in two steps. First, they set the RW type and
the sampling method via the hyperparameters \texttt{walker\_type} and \texttt{sampling\_method}, respectively.
Second, they define the \texttt{Weight} and \texttt{Update} functions.} The
\texttt{Weight} function specifies the relative chance of an edge being
selected. The \texttt{Update} function modifies states of $Q$
given the selected edge. If its return value is \emph{true}, then
the framework terminates $Q$. Otherwise, $Q$ continues walking on $G$.
When using \texttt{O-REJ}, users need to implement the
\texttt{MaxWeight} function to set the maximum value of the transition probability.
We present an example in the following.
\begin{lstlisting} [language=C++,label={list:node2vec},numbers=none,mathescape=true,caption=Node2Vec sample code.]
WalkerType walker_type = WalkerType::Dynamic;
SamplingMethod sampling_method = SamplingMethod::O_REJ;
double Weight(Walker Q, Edge e) {
if (Q.length == 0) return max({1.0 / a, 1.0, 1.0 / b});
else if (e.dst == Q.prev) return 1.0 / a;
else if (IsNeighbor(e.dst, Q.prev)) return 1.0;
else return 1.0 / b;
}
bool Update(Walker Q, Edge e) {
return Q.length == target_length;
}
double MaxWeight() {
return max({1.0 / a, 1.0, 1.0 / b});
}
\end{lstlisting}
\begingroup
\setlength\abovedisplayskip{1pt}
\setlength\belowdisplayskip{1pt}
\begin{example} \label{exmp:n2v}
Listing \ref{list:node2vec} shows the sample code of Node2Vec,
which is dynamic. As the maximum value can be easily
determined by the parameters $a$ and $b$, we use \texttt{O-REJ} to
avoid scanning each edge adjacent to $Q.cur$ at each step. Thus, we
set \texttt{sampling\_method} to \texttt{O-REJ} and implement
\texttt{MaxWeight}. The \texttt{Weight} function is configured
based on Equation \ref{eq:n2v}. Once the length of
$Q$ meets the target length, we terminate it.
\end{example}
\endgroup
ThunderRW applies user-defined functions to RW queries,
and evaluates the queries based on RW type and selected
sampling method in parallel. Thus, users can easily implement customized RW algorithms with ThunderRW, which significantly reduces
the engineering effort. For example, users write only around ten lines of code to implement Node2Vec as shown in Example \ref{exmp:n2v}.
\textbf{Parallelization.} RW algorithms contain massive random walk queries
each of which can be completed independently and rapidly. Therefore, ThunderRW adopts
a static scheduling method to balance the load among workers. Specifically,
we regard each thread as a worker and evenly assign $\mathbb{Q}$ to the workers.
A worker independently executes the assigned queries with
Algorithm \ref{algo:ThunderRW_framework}. Our experiment results show
that the simple scheduling method achieves good performance.
\subsection{Analysis} \label{sec:analysis}
In this subsection, we analyze the space and time cost of
Algorithm \ref{algo:ThunderRW_framework} on different RW types with various sampling methods. As the
cost of \texttt{Weight} and \texttt{Update} is determined by users' implementation, we assume their cost
is a constant value for the ease of analysis.
\begin{table}[t]
\footnotesize
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{The time complexity of ThunderRW on different random walk types with various sampling methods.}
\label{tab:summary_time_complexity}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{c|c|c|c}
\hline
\textbf{Method} & \textbf{Unbiased} & \textbf{Static} & \textbf{Dynamic} \\ \hline\hline
NAIVE & $O(T)$ & \textit{N/A} & \textit{N/A} \\ \hline
ITS & $O(|E| + T \times \log d_{avg})$ & \textit{Same as unbiased} & $O(T \times (d_{avg} + \log d_{avg}))$ \\ \hline
ALIAS & $O(|E| + T)$ & \textit{Same as unbiased} & $O(T \times (d_{avg} + 1))$ \\ \hline
REJ & $O(|E| + T \times \mathbb{E})$ & \textit{Same as unbiased} & $O(T \times (d_{avg} + \mathbb{E}))$ \\ \hline
O-REJ & $O(T \times \mathbb{E})$ & \emph{Same as unbiased} & \textit{Same as unbiased} \\ \hline
\end{tabular}
}
\end{table}
\textbf{Space.} \sun{The space for storing the graph is $O(|E| + |V|)$,
and that for maintaining the output is $O(\sum_{Q \in \mathbb{Q}}|Q|)$.}
\texttt{Gather} in Algorithm \ref{algo:ThunderRW_framework}
requires $O(d_{max})$ space to store $C$, where $d_{max}$ is the maximum
degree of $G$. Suppose that ThunderRW has $n$ threads. Then, the memory
cost is $O(n \times d_{max})$. When there is a preprocessing step,
the memory cost of \texttt{ITS} and \texttt{ALIAS} is $O(|E|)$,
while that of \texttt{REJ} is $O(|V|)$ based on the analysis in Section
\ref{sec:sampling_methods}.
\textbf{Time.} Given a sampling method, let $\alpha$ and $\beta$ denote
the cost of its initialization phase and generation phase, respectively.
Let $d_{avg}$ represent the average degree of $G$. Thus, the cost
of \texttt{Gather} in Algorithm \ref{algo:ThunderRW_framework} is
$O(d_{avg} + \alpha)$, and that of \texttt{Move} is $O(\beta)$. For static RW, the preprocessing cost is $O(\sum_{v \in V}(d_v + \alpha))$,
while the cost of processing one step is $O(\beta)$ as it does not conduct
\texttt{Gather} during execution. From Section \ref{sec:sampling_methods},
we can get the values of $\alpha$ and $\beta$ for the sampling methods.
Suppose that $T = \sum_{Q \in \mathbb{Q}} |Q|$, which is the total
number of steps of all queries. Table \ref{tab:summary_time_complexity}
summarizes the time complexity on different RW types with
various sampling methods.
As shown in the table, \texttt{NAIVE} supports unbiased RW only.
For \texttt{ITS}, \texttt{ALIAS} and \texttt{REJ}, the cost on unbiased
and static RW consists of the preprocessing cost and the execution
cost. Because RW algorithms can involve a massive number of RW queries with long lengths, the execution cost is generally much higher than
the preprocessing cost. As \texttt{O-REJ} has no initialization phase,
it neither performs the preprocessing for unbiased and static RW
nor executes \texttt{Gather} for dynamic RW. Thus, the time complexity is
the same for the three RW types.
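As a concrete instance of how the table entries follow from $\alpha$ and $\beta$, consider \texttt{ITS} on dynamic RW: at each step,
\[
\underbrace{O(d_{avg})}_{\texttt{Gather}\text{: scan } E_v}
+ \underbrace{O(d_{avg})}_{\alpha\text{: prefix sum}}
+ \underbrace{O(\log d_{avg})}_{\beta\text{: binary search}}
= O(d_{avg} + \log d_{avg}),
\]
and summing over the $T$ steps of all queries yields the $O(T \times (d_{avg} + \log d_{avg}))$ entry in Table \ref{tab:summary_time_complexity}.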
\textbf{Recommendation.} From the analysis, we have the following guidelines for setting sampling methods for users:
(1) \texttt{NAIVE} is the best sampling method for unbiased RW;
(2) \texttt{ALIAS} is a good choice for static RW since
the execution time is generally longer than the preprocessing time; and (3)
if we can set a reasonable max value for the transition probability,
then use \texttt{O-REJ} for dynamic RW. Users can easily tell
the RW type based on the properties of transition probability.
To further ease the programming efforts, we set the default sampling method
of unbiased, static and dynamic RW to \texttt{NAIVE},
\texttt{ALIAS} and \texttt{ITS}, respectively. We use \texttt{ITS}
instead of \texttt{ALIAS} for dynamic RW because the initialization
cost of \texttt{ALIAS} at each step is much more than that of \texttt{ITS}
in practice. If users can set a good max value for the transition probability,
then they can select \texttt{O-REJ} for dynamic RW.
\section{Step-Interleaving} \label{sec:step_interleaving}
In this section, we present the step interleaving technique,
which reduces the pipeline stall caused by random memory accesses.
\subsection{General Idea} \label{sec:step_interleaving_general_idea}
Based on the step-centric model, ThunderRW processes a step of a query
$Q$ with the GMU operations. \sun{According to the profiling results in Section \ref{sec:workload_profiling},
there can be two main sources for random memory accesses under the model. First, the \texttt{Move} operation
picks an edge randomly and moves $Q$ along the selected edge. Second, the operations in user-defined functions
can introduce cache misses, for example, the distance check operation in Node2Vec. As operations in the user space (i.e., user-defined functions)
are determined by RW algorithms and can be very flexible, we target the memory issues incurred by the system (i.e., the \texttt{Move} operation).}
Motivated by the profiling result, we propose to use the
software prefetching technique \cite{lee2012prefetching} to accelerate
in-memory computation of ThunderRW. However, a step of a query $Q$ does not
have enough computation workload to hide the memory access latency
because consecutive steps of $Q$ depend on each other.
Therefore, we propose to hide memory access latency via
executing steps of different queries alternately.
Specifically, given a sequence of operations in \texttt{Move},
we decompose them into multiple stages such that the computation of a stage
consumes the data generated by previous stages and
it retrieves the data for the subsequent stages if necessary. We execute
a group of queries simultaneously. Once a stage of a query $Q$
completes, we switch to stages of other queries in the group.
We resume the execution of $Q$ \HBS{when stages of other queries complete.}
In such a way we hide the memory access latency in a single query and
keep CPUs busy. We call this approach \emph{step interleaving}.
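The idea can be sketched with Python generators, where each `yield` marks a stage boundary at which execution switches to another query (a toy illustration only, not ThunderRW's actual C++ mechanism; all names are hypothetical):

```python
import random

def move_stages(graph, walk):
    """One Move decomposed into stages; each `yield` is a switch point
    placed where a random memory access (here: a dict lookup) occurs."""
    v = walk[-1]
    yield                     # stage boundary: issue prefetch of graph[v], switch
    neighbors = graph[v]      # the random access whose latency we want to hide
    yield                     # stage boundary before consuming the data
    walk.append(random.choice(neighbors))

def interleave(graph, walks):
    """Advance one step of every walk, switching between walks at stage
    boundaries instead of stalling on each memory access."""
    pending = [move_stages(graph, w) for w in walks]
    while pending:
        # Run one stage of each query, dropping queries whose Move finished.
        pending = [g for g in pending if next(g, StopIteration) is not StopIteration]

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
walks = [[0], [1], [2]]
interleave(graph, walks)      # every walk advances by exactly one step
```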
\begin{figure}[h]\small
\setlength{\abovecaptionskip}{5pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[scale=0.49]{example_figures/sequential_vs_interleaving.pdf}
\caption{Sequential versus step interleaving.}
\label{fig:sequential_vs_interleaving}
\end{figure}
\begin{example}
Figure \ref{fig:sequential_vs_interleaving} presents an example where
a step is divided into four stages. If executing a query step-by-step
sequentially, then CPUs are frequently stalled due to memory access.
Even with prefetching, the computation of a stage cannot hide the memory
access latency. In contrast, the step interleaving hides the memory
access latency by executing steps of different queries alternately.
\end{example}
\HBS{Let us perform a simple back-of-the-envelope calculation of the performance gain of interleaved execution.} Given a group containing $k$ queries, we assume that \texttt{Move} of
each query executes the same number of stages and the cost $W_C$ of
each stage is the same for the ease of analysis. Suppose that there are
$m$ stages with memory access and $\overline{m}$ without.
$W_L$ denotes the latency of memory access. Then, the cost of
moving one step for the queries sequentially is equal to
$W_0 = k((m + \overline{m})W_C + mW_L)$. Let $W_S$ denote the cost of switching.
The cost of \texttt{Move} with step interleaving is
$W_1 = k((m + \overline{m})(W_C + W_S) + m\max(W_L - kW_S - (k - 1)W_C, 0))$
where the last term calculates whether step interleaving hides memory access
latency. Therefore, the gain of step interleaving
for a step of $k$ queries can be estimated by Equation \ref{eq:gain}
where $W_{hide} = \max(W_L - kW_S - (k - 1)W_C, 0)$.
\begin{equation}\label{eq:gain}
\begin{aligned}
W_{gain} &= (W_0 - W_1)/k \\
&= mW_L - (m + \overline{m})W_S - mW_{hide}.
\end{aligned}
\end{equation}
From Equation \ref{eq:gain}, we can see that step interleaving requires
an efficient switch mechanism to reduce the overhead $W_S$ of performing
switching, and enough workload to overlap the memory access latency $W_{hide}$.
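As a sanity check, the cost model can be evaluated numerically (a small sketch using the symbols of Equation \ref{eq:gain}; the parameter values in the test comments below are made up):

```python
def interleaving_gain(k, m, m_bar, W_C, W_L, W_S):
    """Per-query gain W_gain of step interleaving under the cost model."""
    W0 = k * ((m + m_bar) * W_C + m * W_L)              # sequential cost
    W_hide = max(W_L - k * W_S - (k - 1) * W_C, 0)      # unhidden latency
    W1 = k * ((m + m_bar) * (W_C + W_S) + m * W_hide)   # interleaved cost
    return (W0 - W1) / k

# With enough queries in a group, W_hide drops to 0 and the gain approaches
# m*W_L - (m + m_bar)*W_S; with k = 1, interleaving can even lose.
```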
\subsection{Stage Dependency Graph}
We design the \emph{stage dependency graph} (SDG) to model stages of
a sequence of operations in a step. Each node in SDG is a stage
containing a set of operations, and edges represent the dependency
relationship among them. Given the sequence of operations, we build
SDG in two steps, abstracting stages (nodes) and extracting
dependency relationships (edges).
\HBS{\textbf{Defining stages:}} As we hide memory access latency by switching the execution of queries,
the constraint on stages is that each stage contains at most
one memory access operation and the operations consuming the data are in subsequent
stages. Note that we treat an operation containing a jump as a separate
stage to simplify the implementation of switching. We present an example in the following.
\begin{table}[t]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\footnotesize
\caption{Stages of \texttt{Move} with \texttt{ALIAS} and \texttt{REJ} ($v = Q.cur$).}
\label{tab:stages}
\begin{tabular}{c|l}
\hline
\textbf{Stage} & \multicolumn{1}{c}{\textbf{\texttt{ALIAS}}} \\ \hline\hline
$S_0$ & $O_0$: Load $d_v$. \\ \hline
\multirow{3}{*}{$S_1$} & $O_1$: Generate an int random num $x$ in $[0, d_v)$. \\ \cline{2-2}
& $O_2$: Generate a real random num $y$ in $[0,1)$. \\ \cline{2-2}
& $O_3$: Load $C[x] = (H[x], A[x])$. \\ \hline
\multirow{2}{*}{$S_2$} & $O_4$: If $y < H[x]$, $v' = A[x].first$; Else $v' = A[x].second$. \\ \cline{2-2}
& $O_5$: Add $v'$ to $Q$ and return $e(v, v')$. \\ \hline\hline
\textbf{Stage} & \multicolumn{1}{c}{\textbf{\texttt{REJ}}} \\ \hline\hline
$S_0$ & $O_0$: Load $d_v$. \\ \hline
$S_1$ & $O_1$: Load the maximum value $p_v ^ *$. \\ \hline
\multirow{3}{*}{$S_2$} & $O_2$: Generate an int random num $x$ in $[0, d_v)$. \\ \cline{2-2}
& $O_3$: Generate a real random num $y$ in $[0, p_v ^ *)$. \\ \cline{2-2}
& $O_4$: Load $C[x] = p$. \\ \hline
$S_3$ & $O_5$: If $y > C[x]$, jump to $O_2$; Else jump to $O_6$. \\ \hline
$S_4$ & $O_6$: Load $e(v,v') = E_v[x]$. \\ \hline
$S_5$ & $O_7$: Add $v'$ to $Q$ and return $e(v, v')$. \\ \hline
\end{tabular}
\end{table}
\begin{example} \label{exmp:stages}
The right column of Table \ref{tab:stages} illustrates the sequence of operations
in the \texttt{Move} function with the \texttt{ALIAS} and \texttt{REJ} sampling methods, respectively,
to perform the neighbor selection. The left column lists stages. For example, $S_0$ of \texttt{ALIAS}
loads $d_v$ consumed in $O_1$ of $S_1$. $O_5$ in \texttt{REJ}
has the jump operation. Therefore, we regard it as a \HBS{separate} stage.
\end{example}
\HBS{\textbf{Defining edges:}} Next, we add edges among nodes in SDG based on their
dependency relationships. Given
stages $S$ and $S'$, if there is a dependency relationship
between $S$ and $S'$, we add an edge from $S$ to $S'$.
The edges are categorized into three types,
\emph{memory dependency}, \emph{computation dependency}
and \emph{control dependency}. We call the first two
relationships \emph{data dependencies}. More specifically,
if $S'$ consumes the data loaded from memory by $S$,
then the edge type is memory dependency. Otherwise,
$S'$ depends on the data computed by $S$ and the
edge type is computation dependency. The data leading to
the dependency is attached to each edge as properties.
Furthermore, if $S$ contains the operation jumping to $S'$,
we add the control dependency from $S$ to $S'$.
SDG allows that there are multiple edges (i.e., dependency
relationships) between nodes. \HBS{If we only consider data
dependency}, SDG is a directed acyclic graph (DAG), while the control dependency can
generate cycles in SDG.
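A minimal sketch of an SDG with typed multi-edges is given below (an illustrative reconstruction of the \texttt{REJ} dependencies from Table \ref{tab:stages}, not the system's actual data structure; the class and edge set are hypothetical):

```python
# Edge types of an SDG; multiple edges may connect the same pair of stages.
MEM, COMP, CTRL = "memory", "computation", "control"

class SDG:
    def __init__(self):
        self.edges = []                        # (src, dst, type, data)

    def add_edge(self, src, dst, kind, data=None):
        self.edges.append((src, dst, kind, data))

    def has_cycle(self, kinds):
        """DFS cycle check restricted to the given edge types."""
        adj = {}
        for s, d, k, _ in self.edges:
            if k in kinds:
                adj.setdefault(s, []).append(d)
        state = {}                             # 1 = on stack, 2 = done
        def dfs(u):
            state[u] = 1
            for v in adj.get(u, []):
                if state.get(v) == 1 or (state.get(v) is None and dfs(v)):
                    return True
            state[u] = 2
            return False
        return any(state.get(u) is None and dfs(u) for u in adj)

# SDG of REJ: data edges alone form a DAG; the control edge S3 -> S2
# (the rejection jump) introduces a cycle.
rej = SDG()
rej.add_edge("S0", "S2", MEM, data="d_v")
rej.add_edge("S1", "S2", MEM, data="p_v*")
rej.add_edge("S2", "S3", MEM, data="C[x]")
rej.add_edge("S2", "S3", COMP, data="y")
rej.add_edge("S2", "S4", COMP, data="x")
rej.add_edge("S3", "S2", CTRL)
rej.add_edge("S3", "S4", CTRL)
rej.add_edge("S4", "S5", MEM, data="e(v, v')")
```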
\begin{figure}[t]\small
\setlength{\abovecaptionskip}{5pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[scale=0.49]{example_figures/sdg.pdf}
\caption{Stage dependency graph.}
\label{fig:sdg}
\end{figure}
\begin{example} \label{exmp:sdg}
Continuing with Example \ref{exmp:stages},
Figure \ref{fig:sdg} shows SDGs.
In SDG of \texttt{ALIAS}, $S_2$ relies on $x,y$,
which are random numbers generated by $S_1$, while $(H[x],A[x])$
is the data retrieved from memory. As such, $S_1$ and $S_2$ have
both memory and computation dependency relationships. SDG of
\texttt{ALIAS} is a DAG because there is no control dependency.
In contrast, there is a cycle containing $S_2$ and $S_3$
in SDG of \texttt{REJ} because of the control dependency.
\end{example}
In summary, SDG is a methodology
to abstract stages from a sequence of operations in \texttt{Move}
and model the dependency relationship among them. Note that
the stage design of \texttt{Move} does not require user input;
it is implemented in the system space.
\subsection{State Switch Mechanism}
In this subsection, we introduce the implementation of step interleaving
under SDG. Based on Equation \ref{eq:gain}, we need an efficient switch mechanism.
For example, using multi-threading is impractical because the overhead of a context
switch between threads is on the order of microseconds, whereas main memory latency is on the order of nanoseconds.
As each thread tends to take many RW queries, we switch the execution among stages in a single thread.
We categorize stages of a SDG into two classes
based on whether they belong to cycles in SDG, and efficiently handle them in different manners. For stages not in cycles (called \emph{non-cycle stages}),
a query visits them exactly once to complete \texttt{Move}. Given a group
of queries $\mathbb{Q}'$, we execute them in a coupled manner. Particularly, once a query $Q_i \in \mathbb{Q}'$
completes a stage $S$, we switch to the next query $Q_{i + 1} \in \mathbb{Q}'$ to process $S$.
After all queries complete $S$, we move to the next stage. In contrast, stages in cycles (called \emph{cycle stages})
can be visited a different number of times by different queries. To deal with this irregularity, we process them
in a decoupled manner. Specifically, each query $Q$ records the stage $S$ to be executed. When switching to $Q$,
we execute $S$, set the next stage of $Q$ based on SDG, and switch to the next query after completing $S$.
As a result, each query executes asynchronously.
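The two switching disciplines can be sketched as follows (a toy Python illustration; the stage functions and bookkeeping below are hypothetical stand-ins for ThunderRW's actual mechanism):

```python
def run_coupled(queries, stages):
    """Non-cycle stages: every query passes each stage in lockstep,
    switching to the next query after each stage."""
    for stage in stages:
        for q in queries:
            stage(q)

def run_decoupled(queries, next_stage):
    """Cycle stages: each query records its own next stage, so queries
    may visit a stage a different number of times (e.g. rejections)."""
    active = {i: "S2" for i in range(len(queries))}   # query index -> next stage
    while active:
        for i in list(active):
            nxt = next_stage(queries[i], active[i])   # run stage, get successor
            if nxt is None:
                del active[i]                         # Move completed
            else:
                active[i] = nxt
```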
\sun{For data communication between different stages in a query, we
create two kinds of ring buffers based on SDG, in which the computation dependency edges indicate the information
that needs to be stored. In particular, the \emph{task ring} is used for data communication across all stages of a query,
while the \emph{search ring} serves to process cycle stages. As we need to explicitly record the states of cycle stages
and control their switching, processing cycle stages not only adds implementation complexity, but also incurs more overhead.
Note that the SDGs of \texttt{NAIVE} and \texttt{ALIAS} have no cycle stages because
there are no for-loops in their generation phases, whereas those of \texttt{ITS}, \texttt{REJ}
and \texttt{O-REJ} do. The implementation details are introduced in the appendix.}
\subsection{Ring Size Tuning} \label{sec:tune_ring_size}
The task ring size $k$ and the search ring size $k'$ determine the group size of queries executed
simultaneously in a thread, and therefore control the memory-level parallelism of executing non-cycle
stages and cycle stages, respectively. According to Equation \ref{eq:gain}, we can
improve the performance by increasing $k$ to reduce $W_{hide}$.
However, $k$ is limited by hardware. Particularly,
modern CPUs can issue a limited number of outstanding memory requests,
and the L1 data cache size is only tens of kilobytes. Setting $k$ to
a large value can evict data before it is used. In ThunderRW,
we tune ring sizes by pre-executing a number of queries.
We start a RW query from each vertex with the target length as 10
and set the RW type as static. We first select the \texttt{NAIVE}
and \texttt{ALIAS} sampling methods, respectively, and vary $k$ over
$1, 2, \ldots, 512, 1024$ to pick an optimal value $k^*$. Next, we fix $k$
to $k^*$ and vary $k'$ over $1, 2, \ldots, k^*$ to select optimal values
for \texttt{ITS}, \texttt{REJ} and \texttt{O-REJ}, respectively.
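The two-phase tuning above can be sketched as follows (a hypothetical sketch; in ThunderRW, `measure` would time the probe workload of static walks of length 10, whereas here it is an abstract cost function):

```python
def tune_ring_sizes(measure, ks=(1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024)):
    """Phase 1: pick the task ring size k* minimizing the probe-run cost.
    Phase 2: fix k* and pick the search ring size k' <= k*."""
    k_star = min(ks, key=lambda k: measure(k, k))
    k_prime = min((k for k in ks if k <= k_star),
                  key=lambda kp: measure(k_star, kp))
    return k_star, k_prime
```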
\setlength{\textfloatsep}{0pt}
\begin{algorithm}[t]
\caption{ThunderRW using Step Interleaving}
\label{algo:ThunderRW_framework_si}
\footnotesize
\SetKwProg{func}{Function}{}{}
\SetKwFunction{Gather}{Gather}
\SetKwFunction{Move}{Move}
\SetKwRepeat{Do}{do}{while}
\KwIn{a graph $G$ and a set $\mathbb{Q}$ of random walk queries\;}
\KwOut{the walk sequences of each query in $\mathbb{Q}$\;}
Add the first $k$ queries in $\mathbb{Q}$ to $\mathbb{Q}'$\;
$completed \leftarrow 0$, $submitted \leftarrow k$\;
\While{$completed < |\mathbb{Q}|$}{
$\mathbb{C} \leftarrow \{\}$\;
\For{$Q \in \mathbb{Q}'$}{
$C \leftarrow$\Gather{$G, Q, \textbf{Weight}$}\;
$\mathbb{C}[Q] \leftarrow C$\;
}
$\mathbb{U} \leftarrow$ \Move{$G, \mathbb{Q}', \mathbb{C}$}\;
\For{$Q \in \mathbb{Q}'$}{
\If{\textbf{Update}($Q, \mathbb{U}[Q]$) is true}{
Remove $Q$ from $\mathbb{Q}'$\;
$completed \leftarrow completed + 1$\;
\If{$submitted < |\mathbb{Q}|$}{
Get next query $Q'$ from $\mathbb{Q}$ and add it to $\mathbb{Q}'$\;
$submitted \leftarrow submitted + 1$\;
}
}
}
}
\end{algorithm}
\subsection{Integration with ThunderRW}
Algorithm \ref{algo:ThunderRW_framework_si} illustrates
our ThunderRW framework using step interleaving. Line 1
adds the first $k$ queries in $\mathbb{Q}$ to $\mathbb{Q}'$ where
$k$ is the parameter setting the group size.
Lines 3-15 repeatedly execute GMU operations on queries in
$\mathbb{Q}'$ until all queries in $\mathbb{Q}$ complete. Specifically,
Lines 5-7 first execute the \texttt{Gather} operation on each query in $\mathbb{Q}'$.
Next, Line 8 invokes the \texttt{Move} operation using step interleaving to
process queries in $\mathbb{Q}'$. After that, Lines 9-15 apply the \texttt{Update}
operation to all queries in the group. If a query completes, then Lines 11-15 remove
it and submit the next query in $\mathbb{Q}$ to $\mathbb{Q}'$.
Thus, the step interleaving technique can be seamlessly integrated with ThunderRW
\HBS{without changing APIs}.
\textbf{Time and space.} The time complexity of Algorithm \ref{algo:ThunderRW_framework_si}
is the same as the analysis in Section \ref{sec:analysis} because step interleaving
does not change the number of steps moved. Suppose that there are $n$ threads.
Then, the memory cost is $O(n \times k \times d_{max})$ in addition to the space storing
the graph and the output because each thread has at most $k$ queries in flight.
\section{Experiments} \label{sec:experiments}
We conduct experiments to evaluate the performance of ThunderRW in this section.
\subsection{Experimental Setup} \label{sec:experimental_setup}
We conduct experiments on a Linux server equipped with an Intel Xeon W-2155 CPU and 220GB RAM. The CPU has ten physical cores with hyper-threading disabled for consistent measurement.
The sizes of L1, L2 and L3 (last level cache, LLC) caches are 32KB, 1MB and 13.75MB, respectively.
\begin{table}[t]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\footnotesize
\caption{Properties of real-world datasets.}
\label{tab:datasets}
\begin{tabular}{cccclcc}
\hline
\textbf{Dataset} & \textbf{Name} & \textbf{$|V|$} & \textbf{$|E|$} & $d_{avg}$ & $d_{max}$ & \textbf{Memory} \\ \hline\hline
\sun{amazon} & \sun{\textit{am}} & \sun{0.55M} & \sun{1.85M} & \sun{3.38} & \sun{549} & \sun{0.01GB} \\ \hline
youtube & \textit{yt} & 1.14M & 2.99M & 5.24 &28754 & 0.03GB \\ \hline
us patents & \textit{up} & 3.78M & 16.52M & 8.74 &793 & 0.17GB \\ \hline
eu-2005 & \textit{eu} & 0.86M & 19.24M & 44.74 &68963 & 0.15GB \\ \hline
\sun{amazon-clothing} & \sun{\textit{ac}} & \sun{15.16M} & \sun{63.33M} & \sun{4.18} & \sun{12845} & \sun{0.35GB} \\ \hline
\sun{amazon-book} & \sun{\textit{ab}} & \sun{18.29M} & \sun{102.12M} & \sun{5.58} & \sun{58147} & \sun{0.52GB} \\ \hline
livejournal & \textit{lj} & 4.85M & 68.99M & 28.45 &20333 & 0.54GB \\ \hline
com-orkut & \textit{ot} & 3.07M & 117.19M & 76.34 &33313 & 0.89GB \\ \hline
\sun{wikidata} & \sun{\textit{wk}} & \sun{40.96M} & \sun{265.20M} & \sun{6.47} & \sun{8085513} & \sun{1.29GB}\\ \hline
uk-2002 & \textit{uk} & 18.52M & 298.11M & 32.19 &194955 & 2.30GB \\ \hline
twitter & \textit{tw} & 41.66M & 1.21B & 58.08 &2997487 & 9.27GB \\ \hline
friendster & \textit{fs} & 65.61M & 1.81B & 55.17 &5214 & 13.71GB \\ \hline
\end{tabular}
\end{table}
\textbf{Datasets.} \sun{Table \ref{tab:datasets} lists the statistics of the twelve real-world graphs in our experiments.
\emph{ab} and \emph{ac} are downloaded from \cite{amazon_review}, \emph{wk} is obtained from \cite{wikidata},
\emph{eu}, \emph{uk} and \emph{tw} are obtained from \cite{networkrepo}, and the other graphs are downloaded from
\cite{snapnets}.} The datasets are from different categories such as web, social and citation, and have different densities.
The number of vertices ranges from hundreds of thousands to tens of millions, and the number of edges scales from millions to billions.
\sun{All the graphs except \emph{am} are larger than the LLC.}
\textbf{Workloads.} We study PPR, DeepWalk, Node2Vec and MetaPath to
evaluate the performance and generality of competing methods. The settings of the four algorithms are the same as that in
Section \ref{sec:workload_profiling}. \sun{\emph{ab} and \emph{ac} are weighted graphs where weights denote review ratings for products.
\emph{wk} has 1327 distinct labels, which represents the relationship between entities in a knowledge base. The other graphs are unweighted
and unlabeled.} Given a graph having no labels or weights, we set the weight and label of edges
with the same setting as previous work \cite{yang2019knightking}: (1) We choose a real number from $[1, 5)$ uniformly at random, and assign
it to an edge as its weight; and (2) We set the edge label by randomly choosing a label from a set containing five distinct labels.
\textbf{Comparison.} \sun{We compare the performance of ThunderRW (called \emph{TRW} for short) with the following methods.}
\begin{itemize}[noitemsep,topsep=1pt,leftmargin=*]
\item \sun{\emph{BL}: Baseline approaches that first load a graph entirely into memory and then execute random walks,
the detail of which is presented in Section \ref{sec:workload_profiling}.}
\item \sun{\emph{HG}: Our homegrown implementation optimizing \emph{BL} from two aspects: (1) select a suitable sampling method for each algorithm
according to the recommendation in Section \ref{sec:analysis}; and (2) regard each query as a parallel task with OpenMP.}
\item \emph{GW}: GraphWalker \cite{wang2020graphwalker}, the state-of-the-art RW framework on a single machine. \HBS{For fairness of comparison, we configure GraphWalker to execute in memory, without any disk I/O.}
\item \emph{KK}: KnightKing \cite{yang2019knightking}, the state-of-the-art distributed RW framework. \HBS{It supports execution on a single machine.}
\end{itemize}
\begin{table*}[t]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\footnotesize
\caption{Overall performance comparison (seconds).}
\label{tab:overall_comparison}
\resizebox{0.96\textwidth}{!}{
\begin{tabular}{c|ccccc|cccc|cccc|ccc}
\hline
\textbf{} & \multicolumn{5}{c|}{\sun{\textbf{PPR}}} & \multicolumn{4}{c|}{\textbf{DeepWalk}} & \multicolumn{4}{c|}{\textbf{Node2vec}} & \multicolumn{3}{c}{\textbf{MetaPath}} \\ \hline
\textbf{Dataset} &\sun{\textbf{\emph{BL}}} & \sun{\textbf{\emph{HG}}} & \sun{\textbf{\emph{GW}}} & \sun{\textbf{\emph{KK}}} & \sun{\textbf{\emph{TRW}}} & \sun{\textbf{\emph{BL}}}& \textbf{\emph{HG}} &\textbf{\emph{KK}} & \textbf{\emph{TRW}} & \sun{\textbf{\emph{BL}}}&\textbf{\emph{HG}} & \textbf{\emph{KK}} & \textbf{\emph{TRW}} & \sun{\textbf{\emph{BL}}}& \textbf{\emph{HG}} & \textbf{\emph{TRW}} \\ \hline\hline
\sun{\textit{am}} &0.06 &0.008 &0.42 &0.012 &\textbf{0.007 } &2.16 &0.21 &0.44 &\textbf{0.07 } &9.97 &0.26 &2.08 &\textbf{0.14 } &0.22 &0.018 &\textbf{0.012 }\\ \hline
\textit{yt} &0.33 &0.04 &1.68 &0.05 &\textbf{0.015 } &9.78 &0.98 &1.93 &\textbf{0.26 } &853.13 &1.30 &5.94 &\textbf{1.03 } &6.18 &\textbf{0.23 } &0.24 \\ \hline
\textit{up} &1.24 &0.13 &7.19 &0.19 &\textbf{0.07 } &45.44 &4.33 &8.41 &\textbf{0.95 } &369.00 &6.20 &16.92 &\textbf{4.01 } &4.88 &0.40 &\textbf{0.24 }\\ \hline
\textit{eu} &0.16 &0.02 &0.99 &0.03 &\textbf{0.011 } &8.16 &0.82 &1.56 &\textbf{0.20 } &2731.07 &1.47 &4.43 &\textbf{1.14 } &90.55 &\textbf{3.18 } &3.55 \\ \hline
\sun{\textit{ac}} &4.84 &0.51 &19.31 &0.65 &\textbf{0.19 } &173.66 &17.86 &31.88 &\textbf{3.31 } &6951.12 &24.54 &87.86 &\textbf{6.26 } &45.01 &2.01 &\textbf{1.69 }\\ \hline
\sun{\textit{ab}} &8.86 &0.94 &26.74 &1.09 &\textbf{0.26 } &212.80 &22.24 &40.07 &\textbf{4.01 } &26231.45 &32.04 &100.78 &\textbf{7.87 } &128.35 &5.06 &\textbf{4.47 }\\ \hline
\textit{lj} &1.69 &0.19 &7.90 &0.23 &\textbf{0.06 } &55.63 &5.44 &10.67 &\textbf{1.19 } &2951.33 &9.09 &24.95 &\textbf{6.20 } &18.08 &0.94 &\textbf{0.75 }\\ \hline
\textit{ot} &1.49 &0.16 &5.25 &0.19 &\textbf{0.04 } &38.54 &3.70 &7.97 &\textbf{0.80 } &5891.28 &7.28 &15.16 &\textbf{4.82 } &40.77 &1.72 &\textbf{1.57 }\\ \hline
\sun{\textit{wk}} &21.86 &2.21 &47.05 &3.07 &\textbf{0.59 } &502.27 &49.67 &95.17 &\textbf{9.26 } & \emph{OOT} &68.43 &216.24 &\textbf{27.68 } &5.98 &\textbf{0.54 } &0.55 \\ \hline
\textit{uk} &6.47 &0.69 &27.72 &0.90 &\textbf{0.24 } &203.86 &20.42 &21.40 &\textbf{4.56 } &12630.01 &34.36 &94.69 &\textbf{28.68 } &322.66 &12.84 &\textbf{12.56 }\\ \hline
\textit{tw} &26.42 &2.73 &77.12 &3.61 &\textbf{1.16 } &575.43 &61.18 &115.92 &\textbf{11.13 } & \emph{OOT} &130.72 &232.41 &\textbf{91.00 } & \emph{OOT} &12300.32 &\textbf{9780.20 }\\ \hline
\textit{fs} &79.14 &8.20 &223.81 &10.72 &\textbf{4.10 } &1043.93 &108.23 &208.45 &\textbf{17.67 } & \emph{OOT} &178.15 &364.51 &\textbf{120.16 } &683.05 &28.69 &\textbf{25.01 }\\ \hline
\end{tabular}
}
\vspace*{-10pt}
\end{table*}
We implement all our methods including \emph{BL}, \emph{HG} and \emph{TRW} in C++.
\emph{GW}\footnote{\url{https://github.com/ustcadsl/GraphWalker}, Last accessed on 2020/12/07.}
and \emph{KK}\footnote{\url{https://github.com/KnightKingWalk/KnightKing}, Last accessed on 2020/12/20.}
are programmed in C++ as well. All the source code is compiled by g++ 8.3.2 with -O3 enabled.
\sun{\emph{BL} executes serially, while the other methods run on all cores of the single socket,
with one thread per core.}
We consider C-SAW \cite{pandey2020c}, the state-of-the-art RW framework
on GPUs, as well. However, its open source package\footnote{\url{https://github.com/concept-inversion/C-SAW}, Last accessed on 2020/12/07.}
supports at most 4000 queries, and thus cannot handle the workloads containing massive numbers of queries in our experiments.
Previous experiment results \cite{yang2019knightking,wang2020graphwalker} show that
\emph{KK} and \emph{GW} significantly outperform generic graph computing frameworks
such as Gemini \cite{zhu2016gemini} on RW algorithms.
Therefore, our experiment does not involve C-SAW as well as any generic graph
computing frameworks.
As for RW algorithms, \emph{GW} only supports unbiased RW. Thus, we execute PPR without considering edge weights,
and evaluate \emph{GW} on PPR only. Although \emph{KK} studies MetaPath in the original paper~\cite{yang2019knightking},
its open source package cannot handle labeled graphs. As such, it cannot execute MetaPath.
In contrast, \emph{TRW} supports all the four algorithms, which demonstrates its flexibility.
\sun{As for sampling methods, \emph{BL} uses \texttt{NAIVE} for PPR, and adopts \texttt{ALIAS} for the other three algorithms.
As discussed in Section \ref{sec:workload_profiling}, building alias tables for dynamic RW in an indexing phase can consume a
huge amount of memory. Therefore, in the experiments, \emph{BL} dynamically computes the alias table (i.e., performs the initialization of \texttt{ALIAS})
at each step of a query, which is the same as the computation flow of \emph{TRW} for dynamic RW.
Different from \emph{BL}, \emph{HG} adopts \texttt{O-REJ} for Node2Vec, and \texttt{ITS} for MetaPath. This is because (1)
the max value of transition probability of Node2Vec can be easily set as $\max (1, 1/a, 1/b)$, and \texttt{O-REJ}
can avoid scanning the neighbors of $Q.cur$ at each step; and (2) the probability distribution of MetaPath is skewed due to
filtering based on labels, which increases the generation cost of rejection sampling, and the initialization phase of \texttt{ITS}
is much faster than that of \texttt{ALIAS} in practice. \emph{TRW} adopts the same sampling method as \emph{HG} for each algorithm.}
\sun{\textbf{Ring Size Setting.} We tune the ring size with the method in Section
\ref{sec:tune_ring_size}. Although the graphs have different structures, their optimal settings are close.
First, the optimal value for the graphs except \emph{am} is $k = 64$ and $k' = 32$ because the optimal ring size is
closely related to the instructions available for computation, the switch overhead, the memory access latency, and
the maximum number of outstanding memory requests, which are determined by the program and the hardware.
Second, the optimal value for \emph{am} is $k = 32$ and $k' = 32$ as \emph{am} fits in LLC and the memory access latency
is smaller than that of the other graphs.
Additionally, the tuning process is very
efficient: it takes less than one minute for most of the graphs. Even for \emph{fs}, which has more than 1.8 billion edges,
the tuning completes in around four minutes.}
\textbf{Metrics.} The \emph{total time} is the elapsed time on evaluating RW algorithms without counting the time on
loading data from the disk. For static random walk, the total time consists of the \emph{preprocessing time}, which is the time spent
on the preprocessing, and the \emph{execution time}, which is the time spent on executing queries. \sun{To complete experiments
in a reasonable time, we set the time limit for each algorithm as eight hours. If an algorithm cannot be completed within the limit,
we terminate it and record the execution time as \emph{OOT} (i.e., out-of-time).}
We measure the \emph{throughput} (steps per second) by
dividing the number of steps of all queries by the execution time. To provide more insights,
we adopt \emph{Intel Vtune Profiler} to examine the pipeline slot utilization and use \emph{Linux Perf}
to examine the \emph{instructions per step} and \emph{cycles per step}, which are the number of instructions and
the number of cycles on one step, respectively.
\sun{\textbf{Supplement experiments.} More experiment results including the impact of ring sizes,
memory bandwidth utilization, the effectiveness of prefetching data to different cache levels, the impact of the step interleaving on existing systems
and the comparison with AMAC \cite{kocberber2015asynchronous} are presented in the appendix.}
\subsection{Overall Comparison} \label{sec:overall_performance_comparison}
Table \ref{tab:overall_comparison} gives an overall comparison of competing methods
on the four RW algorithms. Although \emph{GW} is parallel, it runs slower than \emph{BL}, the sequential
baseline algorithm. \emph{KK} runs faster than \emph{GW} and \emph{BL}, but slower than
\emph{HG} because (1) the framework incurs extra overhead compared with \emph{HG}; and (2) \emph{HG} adopts an appropriate
sampling method for each algorithm. \emph{TRW} runs 54.6-131.7X and 1.7-14.6X faster
than \emph{GW} and \emph{KK}, respectively.
\sun{Benefiting from parallelization, \emph{HG} achieves 7.5-10.5X speedup over \emph{BL} on PPR and DeepWalk. Moreover,
\emph{HG} runs 38.3-1857.9X and 11.1-28.5X faster than \emph{BL} on Node2Vec and MetaPath, respectively, because \emph{HG}
adopts \texttt{O-REJ} sampling for Node2Vec, which avoids scanning the neighbors of $Q.cur$ at each step, and uses
\texttt{ITS} sampling for MetaPath, the initialization phase of which is more efficient than that of \texttt{ALIAS} in practice.
\emph{TRW} runs 8.6-3333.1X faster than \emph{BL}. Even compared with \emph{HG}, \emph{TRW} achieves up to 6.1X speedup benefiting from
our step-centric model and step interleaving technique. As MetaPath is dynamic and both \emph{TRW} and \emph{HG} use \texttt{ITS} sampling,
the gather operation at each step dominates the cost. Still, MetaPath on ThunderRW outperforms
that on \emph{HG} for nine out of twelve graphs, and is slightly slower on the other three graphs. \emph{tw} is
dense but highly skewed (as shown in Table~\ref{tab:datasets}) and the vertices with high degrees
are frequently visited. Consequently, the execution time on MetaPath against \emph{tw} is much longer
than that on other graphs.}
\sun{In summary, ThunderRW significantly outperforms
state-of-the-art frameworks and homegrown solutions (e.g., \emph{BL} takes more than eight hours for Node2Vec on \emph{tw}, while
\emph{TRW} completes the algorithm in two minutes). Furthermore, ThunderRW saves a lot of engineering effort on the
implementation and parallelization of RW algorithms compared with \emph{BL} and \emph{HG}.}
\begin{figure}[t]\small
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\captionsetup[subfigure]{aboveskip=0pt,belowskip=0pt}
\centering
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[scale=0.23]{experiment_figures/vary_rw_algorithms/livejournal_breakdown.pdf}
\caption{Pipeline slot breakdown.}
\label{fig:lj_breakdown}
\end{subfigure}
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[scale=0.23]{experiment_figures/vary_rw_algorithms/livejournal_speedup.pdf}
\caption{Speedup.}
\label{fig:lj_speedup}
\end{subfigure}
\caption{\sun{Vary RW-algorithms on \emph{lj}.}}
\label{fig:vary_rw_algorithms}
\end{figure}
\begin{figure}[t]\small
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\captionsetup[subfigure]{aboveskip=0pt,belowskip=0pt}
\centering
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[scale=0.23]{experiment_figures/vary_sampling_methods/livejournal_breakdown.pdf}
\caption{Pipeline slot breakdown.}
\label{fig:lj_sampling_method_breakdown}
\end{subfigure}
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[scale=0.23]{experiment_figures/vary_sampling_methods/livejournal_speedup.pdf}
\caption{Speedup.}
\label{fig:lj_sampling_method_speedup}
\end{subfigure}
\caption{Vary sampling methods on \emph{lj}.}
\label{fig:vary_sampling_methods}
\end{figure}
\subsection{Evaluation of Step Interleaving} \label{sec:evaluate_individual_techniques}
We evaluate the effectiveness of step interleaving in this subsection. For brevity,
we use \emph{lj} as the representative graph by default.
\textbf{Varying RW algorithms.} We first evaluate the effectiveness of step interleaving
on different RW algorithms. Figure \ref{fig:vary_rw_algorithms} presents the pipeline slot breakdown
and speedup among the RW algorithms. wo/si and w/si denote ThunderRW without and with
the step interleaving technique, respectively. Enabling step interleaving drastically
reduces memory bound on PPR and DeepWalk, and improves the instruction retirement.
Correspondingly, w/si achieves significant speedup over wo/si in
Figure \ref{fig:lj_speedup}. \sun{The speedup on PPR is lower than that on DeepWalk because PPR issues all queries from a given
vertex and the expected query length is only 5 by default, so PPR exhibits better memory locality than DeepWalk.}
The memory bound on Node2Vec is reduced from around 60\% to 40\%
because the \texttt{Weight} function checks whether two vertices are neighbors with a binary search,
which causes a number of random memory accesses. The speedup on MetaPath is small because MetaPath
is dynamic and the gather operation dominates the cost at each step.
\textbf{Varying sampling methods.} We next examine the performance of step interleaving on
different sampling methods. As the gather operation dominates the cost of dynamic random walks,
we focus on unbiased and static random walks. Particularly, we use DeepWalk as the representative
RW algorithm and evaluate it with the five sampling methods in Section \ref{sec:sampling_methods}, respectively.
When adopting \texttt{NAIVE}, we regard DeepWalk as an unbiased random walk (i.e., without considering edge weights).
Figure \ref{fig:vary_sampling_methods} presents the pipeline slot breakdown and speedup on \emph{lj} with
different sampling methods. We can see that the step interleaving technique significantly reduces the memory
bound on all the five sampling methods and achieves remarkable speedup. The results demonstrate both
the generality and effectiveness of the step interleaving technique.
\sun{\textbf{Varying datasets.} To explore the impact of graph structures on the performance, we evaluate the speedup of enabling
step interleaving for DeepWalk on different datasets. Figure \ref{fig:varying_dataset} presents the experiment results.
The speedup on \emph{am} and \emph{yt} is smaller than that on other graphs because \emph{am} can fit in the LLC, and \emph{yt}
is only about twice as large as the LLC. The speedup on
\emph{eu} and \emph{uk} is lower than that on the other graphs that are much larger than the LLC because \emph{eu} and \emph{uk} have
dense communities (e.g., \emph{uk} has a clique containing around 1000 vertices \cite{chang2019efficient}), and RW queries
exhibit good memory locality. In contrast, the speedup on \emph{ac} and \emph{ab} is generally higher than the other graphs
because they are bipartite graphs and very sparse, and RW queries have poor memory locality. In summary, the optimization
tends to achieve higher speedup on large and sparse graphs than small graphs and graphs with dense community structures
because RW queries have poorer memory locality on the former. Nevertheless, the optimization brings up to 3X speedup even on graphs
entirely fitting in LLC (i.e., \emph{am}), since the L1 cache is only tens of kilobytes but around ten times faster than the LLC, and
step interleaving directly fetches the data to the L1 cache.}
\begin{figure}[t]\small
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[scale=0.23]{experiment_figures/varying_dataset/vary_datasets_speedup.pdf}
\caption{\sun{Vary datasets for DeepWalk.}}
\label{fig:varying_dataset}
\end{figure}
\subsection{Scalability Evaluation} \label{sec:scalability_evaluation}
In this section, we evaluate the scalability of ThunderRW. By default, we
execute $10 ^ 7$ RW queries on \emph{lj} with a target length
of 80. Each query starts from a randomly selected vertex of the graph.
We first evaluate the throughput in terms of steps per second
with the number of queries and the length of queries varying, respectively.
In this case, we set the RW as static and use the \texttt{ALIAS}
sampling method as the representative. Next, we evaluate the speedup
with the number of threads varying. When setting the RW as unbiased,
we use the \texttt{NAIVE} sampling method; when setting the RW as static or dynamic,
we examine the speedup on \texttt{ITS}, \texttt{ALIAS}, \texttt{REJ} and \texttt{O-REJ}, respectively.
\textbf{Varying number and length of queries.} \sun{Figure \ref{fig:vary_num_queries}
presents the throughput with the number of queries varying from $10 ^ 2$ to
$10 ^ 7$. For $10 ^ 2 - 10 ^ 4$ queries, the execution time
is very short, and the start-up and shut-down time can dominate it. For example, for $10 ^ 2$ queries,
each thread spends less than 0.1 ms on performing random walks,
while the execution time is around 2 ms because of the cost of resource (e.g., memory and thread) initialization and release.
As a result, the benefit of the optimization is limited, and the throughput is lower than that with a large number of queries.}
The throughput is more than $3 \times 10 ^ 8$ steps per second and remains
stable with the number of queries varying from $10 ^ 6$ to $10 ^ 7$.
Figure \ref{fig:vary_length_queries} presents the throughput with the length
of queries varying from 5 to 160. The throughput is steady. In summary,
ThunderRW has good scalability in terms of the number and length of queries.
\begin{figure}[t]\small
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\captionsetup[subfigure]{aboveskip=0pt,belowskip=0pt}
\centering
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[scale=0.23]{experiment_figures/scalability/vary_number_of_queries_throughput.pdf}
\caption{\sun{Varying number of queries.}}
\label{fig:vary_num_queries}
\end{subfigure}
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[scale=0.23]{experiment_figures/scalability/vary_length_throughput.pdf}
\caption{Varying length of queries.}
\label{fig:vary_length_queries}
\end{subfigure}
\caption{Throughput on \emph{lj} with number and length of queries varying.}
\label{fig:vary_number_length}
\end{figure}
\textbf{Varying number of threads.} Figure \ref{fig:scalability_with_thread_varying}
shows the speedup with the number of threads varying from 1 to 10 \HBS{(i.e., the number of cores in the machine)}. For all
the five sampling methods on unbiased/static RW, ThunderRW achieves
nearly linear speedup with the number of threads as shown in Figure \ref{fig:scalability_on_static}. Particularly, when the number of threads is 10,
the speedup is from 8.8X to 9.6X. Figure \ref{fig:scalability_on_dynamic}
presents the speedup on dynamic RW. The speedup is from
7.8X to 9.0X. Overall, ThunderRW achieves good scalability in terms
of the number of threads.
\begin{figure}[t]\small
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\captionsetup[subfigure]{aboveskip=0pt,belowskip=0pt}
\centering
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[scale=0.23]{experiment_figures/scalability/vary_num_thread_static_speedup.pdf}
\caption{Unbiased/static RW.}
\label{fig:scalability_on_static}
\end{subfigure}
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[scale=0.23]{experiment_figures/scalability/vary_num_thread_dynamic_speedup.pdf}
\caption{Dynamic RW.}
\label{fig:scalability_on_dynamic}
\end{subfigure}
\caption{Speedup on \emph{lj} with number of threads varying.}
\label{fig:scalability_with_thread_varying}
\end{figure}
\subsection{\sun{Generality Evaluation}}
\sun{To evaluate the generality of ThunderRW, we repeat the first experiment in Section \ref{sec:scalability_evaluation} on a machine
equipped with an Intel Xeon Gold 6246R CPU, which has 16 physical cores. The sizes of L1, L2 and LLC caches are 32KB, 1MB and 35.75MB,
respectively. Additionally, the CPU is based on the \emph{Cascade Lake} microarchitecture, while that used in other experiments is
based on \emph{Skylake}. As the CPU has 16 physical cores, we set the number of workers as 16.
As shown in Figure \ref{fig:throughput_generality}, enabling the optimization significantly improves the throughput.
Moreover, using the new CPU increases the throughput; for example, when the length of queries is 160,
the throughput grows from $3 \times 10 ^ 8$ to $4.1 \times 10 ^ 8$. The experiment results show that
the techniques proposed in this paper are generic to different architectures.}
\begin{figure}[h]\small
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\captionsetup[subfigure]{aboveskip=0pt,belowskip=0pt}
\centering
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[scale=0.23]{experiment_figures/generality/vary_number_of_queries_throughput.pdf}
\caption{Varying number of queries.}
\label{fig:vary_number_of_queries_throughput_generality}
\end{subfigure}
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[scale=0.23]{experiment_figures/generality/vary_length_throughput.pdf}
\caption{Varying length of queries.}
\label{fig:vary_length_throughput_generality}
\end{subfigure}
\caption{\sun{Throughput on \emph{lj} with number and length of queries varying on processors with different architectures.}}
\label{fig:throughput_generality}
\end{figure}
\subsection{\sun{Discussions}}
\sun{ThunderRW regards a step of a query as a parallel task unit, which parallelizes the computation from the perspective of queries
instead of the graph data. As RW algorithms consist of a massive number of queries and the cost of moving a step is extremely small (e.g.,
around 34 ns for DeepWalk on \emph{lj}), there are a large number of small parallel tasks, which can be easily parallelized. As such,
the parallelization of ThunderRW can achieve significant speedup over sequential execution even though graph structures are
complex and flexible. Moreover, the sampling method has an important impact on the performance, and therefore providing a variety of
sampling methods is essential.}
\sun{The step interleaving technique executes different queries alternately to reduce memory bound incurred by random memory accesses.
Its effectiveness is closely related to the memory locality of workloads, which is determined by RW algorithms and graph structures.
In general, the optimization tends to achieve higher speedup on large and sparse graphs than small graphs and graphs with dense community
structures because RW queries have poorer memory locality on the former graphs. Nevertheless, the random memory access is a common issue
for RW algorithms since (1) graphs are much larger than cache sizes; and (2) RW queries wander randomly in the graph.
Thus, step interleaving can achieve significant speedup even on graphs that entirely fit in the LLC.
However, the speedup achieved by step interleaving on higher-order RW algorithms can be lower than that on first-order algorithms.
First, the operations in user-defined functions can introduce random memory accesses.
Despite that, the optimization still brings 1.2-4.3X speedup on Node2Vec. Second,
the \texttt{Gather} operation dominates the cost at each step when it is performed at runtime.}
\section{Conclusion} \label{sec:conclusion}
In this paper, we propose ThunderRW, an efficient in-memory RW engine on which users can easily implement customized RW algorithms.
We design a step-centric model to abstract the computation from the local view of moving one step of a query.
Based on the model, we propose the step interleaving technique to hide memory access latency by executing multiple queries
alternately. We implement four representative RW algorithms including PPR, DeepWalk, Node2Vec and MetaPath with our framework.
Experimental results show that ThunderRW outperforms state-of-the-art RW frameworks by up to one order of magnitude
and the step interleaving reduces the memory bound from 73.1\% to 15.0\%. Currently, we implement the step interleaving technique in ThunderRW by explicitly and manually storing and restoring states
of each query. An interesting future work is to implement the method with \emph{coroutines}, which is an efficient technique supporting
interleaved execution~\cite{jonathan2018exploiting,psaropoulos2017interleaving,he2020corobase}.
\section{Other Profiling Results}
In this section, we evaluate the impact of varying the length and the number of queries, respectively.
We use a micro benchmark that resembles the access pattern of RWs and whose parameters we can control easily.
Particularly, we set the number of queries as $10 ^ 7$ and configure the target length as 80 by default. Each query starts from a vertex
randomly selected from the graph. We use the \texttt{ALIAS} sampling method to perform the queries.
We first evaluate the impact of varying the length from 5 to 160, and then examine the performance of varying the number of queries
from \sun{$10^2$ to $10^8$}.
Tables \ref{tab:vary_length} and \ref{tab:vary_num_queries} present the results with the length of queries varying from 5 to 160 and the
number of queries varying from \sun{$10 ^ 2$ to $10 ^ 8$} on the \emph{livejournal} graph, respectively. We can see that the memory bound
is consistently above 60\% despite the variance in the length and number of queries. With the length (or the number of queries) increasing,
the memory bound grows slightly. The memory bandwidth utilization is also far from the maximum bandwidth in all test cases.
In summary, the in-memory computation of RW algorithms suffers severe performance
issues due to memory stalls caused by cache misses and under-utilizes the memory bandwidth
regardless of the length and number of queries.
\begin{table}[h]
\footnotesize
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{Pipeline slot breakdown and memory bandwidth with the length of queries varying.}
\label{tab:vary_length}
\begin{tabular}{c|c|c|c|c|c|c}
\hline
\textbf{\begin{tabular}[c]{@{}c@{}}Length of\\ Queries\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Front\\ End\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Bad\\ Spec\end{tabular}} & \textbf{Core} & \textbf{Memory} & \textbf{Retiring} & \textbf{\begin{tabular}[c]{@{}c@{}}Memory\\ Bandwidth\end{tabular}} \\ \hline\hline
5 & 3.6\% & 5.5\% & 16.6\% & 61.3\% & 13.0\% & 7.7GB/s \\ \hline
10 & 2.7\% & 4.0\% & 18.5\% & 63.4\% & 11.2\% & 6.6GB/s \\ \hline
20 & 2.7\% & 4.1\% & 18.1\% & 64.0\% & 11.1\% & 6.0GB/s \\ \hline
40 & 2.5\% & 4.0\% & 18.1\% & 64.5\% & 10.9\% & 5.8GB/s \\ \hline
80 & 2.3\% & 3.7\% & 18.6\% & 64.8\% & 10.6\% & 5.6GB/s \\ \hline
160 & 2.3\% & 3.6\% & 18.5\% & 65\% & 10.5\% & 5.6GB/s \\ \hline
\end{tabular}
\end{table}
\begin{table}[h]
\footnotesize
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{Pipeline slot breakdown and memory bandwidth with the number of queries varying.}
\label{tab:vary_num_queries}
\begin{tabular}{c|c|c|c|c|c|c}
\hline
\textbf{\begin{tabular}[c]{@{}c@{}}Num of\\ Queries\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Front\\ End\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Bad\\ Spec\end{tabular}} & \textbf{Core} & \textbf{Memory} & \textbf{Retiring} & \textbf{\begin{tabular}[c]{@{}c@{}}Memory\\ Bandwidth\end{tabular}} \\ \hline\hline
\sun{$10 ^ 2$} & \sun{4.1\%} & \sun{2.6\%} & \sun{16.5\%} & \sun{66.4\%} & \sun{10.4\%} & \sun{5.9GB/s} \\ \hline
$10 ^ 3$ & 4.5\% & 7.4\% & 12.1\% & 63.8\% & 12.2\% & 8.0GB/s \\ \hline
$10 ^ 4$ & 4.4\% & 6.9\% & 12.7\% & 64.3\% & 11.8\% & 6.6GB/s \\ \hline
$10 ^ 5$ & 4.0\% & 6.2\% & 16.5\% & 60.9\% & 12.4\% & 6.0GB/s \\ \hline
$10 ^ 6$ & 2.7\% & 4.1\% & 19.0\% & 63.2\% & 11.0\% & 5.8GB/s \\ \hline
$10 ^ 7$ & 2.3\% & 3.7\% & 18.6\% & 64.8\% & 10.6\% & 5.6GB/s \\ \hline
$10 ^ 8$ & 2.3\% & 3.6\% & 18.5\% & 65.1\% & 10.5\% & 5.6GB/s \\ \hline
\end{tabular}
\end{table}
\section{Other Implementation Details}
In this section, we present the implementation details of the stage switch mechanism,
the graph storage, the walker management and the input/output of the framework.
\textbf{Stage switch.} Continuing with Example \ref{exmp:sdg}, we use \texttt{Move} with
the \texttt{REJ} sampling method to demonstrate the implementation of stage switch.
Algorithm \ref{algo:move_with_rej} presents the details where $\mathbb{Q}'$
is a group of queries and $\mathbb{C}$ maintains the transition probability
$C$ for each $Q \in \mathbb{Q}'$. Line 2 creates a task ring $TR$ with
$|\mathbb{Q}'|$ slots. Each slot records states of a query $Q \in \mathbb{Q}'$.
The load operations are replaced with the \texttt{PREFETCH} operations. We
process a non-cycle stage $S$ in SDG with a for loop where all queries evaluate
$S$ one by one. For example, Lines 4-5 deal with $S_0$ in which we
fetch the degree of $Q.cur$ for each $Q \in \mathbb{Q}'$. The \texttt{Search}
function handles cycle stages. Line 16 first creates a search ring $SR$ with
$k'$ slots to process cycle stages. If a slot $R \in SR$ is empty and
there are queries in $TR$ not submitted to $SR$ (Line 20),
Lines 21-23 submit a query $Q$ to $SR$ and initialize the slot $R$.
If the stage is $S_2$, Lines 25-27 perform the operations in $S_2$.
Moreover, Line 28 sets $R.S$ to $S_3$ and stores $x,y$ because
the next stage is $S_3$ and $S_3$ depends on the value of $x, y$ according to
SDG. When $S_3$ is completed, we write $x$ to $TR$ because
$S_4$ consumes it as shown in SDG. Lines 18-34 repeat the process until
all queries jump out the cycle. Lines 9-13 continue the computation
with values generated by \texttt{Search}. Line 14 returns $\mathbb{U}$ that
maintains the selected edge for each query.
\setlength{\textfloatsep}{0pt}
\begin{algorithm}[t]
\caption{\texttt{Move} with \texttt{REJ} using Step Interleaving}
\label{algo:move_with_rej}
\footnotesize
\SetKwProg{func}{Function}{}{}
\SetKwFunction{Move}{Move}
\SetKwFunction{Search}{Search}
\func{\Move{$G, \mathbb{Q}', \mathbb{C}$}}{
Initialize a task ring $TR$ each slot of which corresponds to $Q \in \mathbb{Q}'$\;
Initialize $\mathbb{U}$ as $\{\}$ to store the selected edge for $Q \in \mathbb{Q}'$\;
\tcc{Stage $S_0$.}
\ForEach{$Q \in \mathbb{Q}'$}{
\texttt{PREFETCH} $d_v$ where $v = Q.cur$\;
}
\tcc{Stage $S_1$.}
\ForEach{$Q \in \mathbb{Q}'$}{
\texttt{PREFETCH} $p_v ^ *$ where $v = Q.cur$\;
}
\Search{$\mathbb{C}, TR$}\;
\tcc{Stage $S_4$.}
\ForEach{$Q \in \mathbb{Q}'$}{
\texttt{PREFETCH} $E_v[TR[Q].x]$ where $v = Q.cur$\;
}
\tcc{Stage $S_5$.}
\ForEach{$Q \in \mathbb{Q}'$}{
Add $v'$ to $Q$ where $v = Q.cur$ and $e(v, v') = E_v[TR[Q].x]$\;
Set $\mathbb{U}[Q]$ to $e(v, v')$\;
}
\KwRet $\mathbb{U}$\;
}
\func{\Search{$\mathbb{C}, TR$}}{
Initialize a search ring $SR$ with $k'$ slots where $k' \leqslant |TR|$\;
$submitted, completed, index \leftarrow 0$\;
\While{$completed < |TR|$}{
$R \leftarrow SR[index]$\;
\If{$R.S = null$ and $submitted < |TR|$} {
Get next slot $R' \in TR$ and set $v$ to $R'.Q.cur$\;
Set $R.Q$, $R.S$, $R.d$ and $R.p ^ *$ to $R'.Q$, $S_2$, $d_v$ and $p_v ^ *$, respectively\;
$submitted \leftarrow submitted + 1$\;
}
\tcc{Stage $S_2$.}
\ElseIf{$R.S = S_2$}{
Generate an int random number $x$ in $[0, R.d)$\;
Generate a real random number $y$ in $[0, R.p ^ *)$\;
\texttt{PREFETCH} $C[x]$ where $C = \mathbb{C}[R.Q]$\;
Set $R.S$, $R.x$ and $R.y$ to $S_3$, $x$ and $y$, respectively\;
}
\tcc{Stage $S_3$.}
\ElseIf{$R.S = S_3$}{
\lIf{$R.y > C[R.x]$}{Set $R.S$ to $S_2$}
\Else{
Set $R.S$ and $TR[R.Q].x$ to \emph{null} and $R.x$, respectively\;
$completed \leftarrow completed + 1$\;
}
}
$index \leftarrow (index + 1) \mod k'$\;
}
}
\end{algorithm}
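As a rough, self-contained illustration of the search-ring state machine in the \texttt{Search} function above, the following C++ sketch interleaves rejection sampling for a batch of queries. The names (\texttt{Slot}, \texttt{search\_ring}) are ours, prefetching and the surrounding task ring are elided, and the stage encoding mirrors $S_2$/$S_3$ only.

```cpp
#include <cstddef>
#include <random>
#include <vector>

// One slot of the search ring: which query it serves, its stage
// (2 ~ S2: draw a candidate, 3 ~ S3: test it; 0 = empty), and the
// candidate pair (x, y).
struct Slot {
    int query = -1;
    int stage = 0;
    size_t x = 0;
    double y = 0.0;
};

// Rejection-sample one edge index per query, interleaving the queries on a
// ring of k slots as in the Search function. weights[q] holds query q's
// transition weights; pmax[q] is an upper bound on them.
std::vector<size_t> search_ring(const std::vector<std::vector<double>>& weights,
                                const std::vector<double>& pmax,
                                size_t k, std::mt19937& rng) {
    size_t n = weights.size(), submitted = 0, completed = 0, index = 0;
    std::vector<size_t> result(n);
    std::vector<Slot> ring(k);
    while (completed < n) {
        Slot& s = ring[index];
        if (s.stage == 0 && submitted < n) {        // admit the next query
            s.query = static_cast<int>(submitted++);
            s.stage = 2;
        } else if (s.stage == 2) {                  // S2: draw a candidate
            const auto& w = weights[s.query];
            s.x = std::uniform_int_distribution<size_t>(0, w.size() - 1)(rng);
            s.y = std::uniform_real_distribution<double>(0.0, pmax[s.query])(rng);
            s.stage = 3;                            // real code prefetches w[x] here
        } else if (s.stage == 3) {                  // S3: accept or retry
            if (s.y > weights[s.query][s.x]) {
                s.stage = 2;                        // rejected: draw again
            } else {
                result[s.query] = s.x;              // accepted
                s.stage = 0;
                ++completed;
            }
        }
        index = (index + 1) % k;
    }
    return result;
}
```

The key point matches Algorithm \ref{algo:move_with_rej}: while one query waits for its prefetched weight, the ring advances to other slots, so the rejection loops of up to $k$ queries overlap.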
\textbf{Graph storage.} We store the graph $G$ in the \emph{compressed sparse row} (CSR) format,
in which $G$ consists of an array of vertices and an array of edges. Each vertex in CSR
points to the start of its adjacent edges in the edge array. Moreover, we associate
an edge label and an edge weight with each edge and store them as two separate arrays.
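A minimal sketch of such a CSR layout follows; the struct and field names are illustrative, not ThunderRW's actual declarations.

```cpp
#include <cstdint>
#include <vector>

// CSR layout sketch: vertex v's outgoing edges occupy
// edges[offsets[v] .. offsets[v+1]); labels and weights are parallel
// per-edge arrays (structure-of-arrays).
struct CSRGraph {
    std::vector<uint64_t> offsets;   // |V| + 1 entries
    std::vector<uint32_t> edges;     // destination vertex of each edge
    std::vector<uint32_t> labels;    // per-edge label
    std::vector<double>   weights;   // per-edge weight

    uint64_t degree(uint32_t v) const { return offsets[v + 1] - offsets[v]; }
    uint32_t neighbor(uint32_t v, uint64_t i) const {
        return edges[offsets[v] + i];
    }
};
```

Storing labels and weights as separate arrays (rather than an array of edge structs) keeps the destination array dense, which benefits the sequential scan of an adjacency list during sampling.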
\textbf{Walker management.} Given a set $\mathbb{Q}$ of random walk queries,
we assign a unique ID from $0$ to $|\mathbb{Q}| - 1$ to each of them. For a
query $Q \in \mathbb{Q}$, we maintain the query ID, the source vertex,
the length of $Q$ and a pointer linking to the payload (e.g., the walk path).
In addition, users can customize the data associated with each query.
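The per-query bookkeeping can be pictured as a small record like the following; the names are illustrative, and the payload pointer stands in for user-customized data such as the walk path.

```cpp
#include <cstdint>

// Walker record sketch: query ID, source vertex, current walk length,
// and an opaque pointer to user-defined payload (e.g., the walk path).
struct Walker {
    uint64_t id;
    uint32_t source;
    uint32_t length = 0;
    void*    payload = nullptr;
};
```

Keeping the record small matters because the engine maintains one per query and groups of them cycle through the task ring.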
\sun{\textbf{Input and output.} ThunderRW provides APIs for users to specify the source
vertices of RW queries and the number of queries from each source. For example, we can
start an RW query from each vertex in $G$ for DeepWalk, while issuing a number of queries
from a given vertex for single-source PPR.}
\sun{ThunderRW outputs the walk path for each RW query. The output can be either consumed by downstream tasks on the fly or
stored for future use. The former case consumes a small amount of memory space, whereas the memory cost of the latter
can be $O(\sum_{Q \in \mathbb{Q}}|Q|)$. Fortunately, it is unnecessary to maintain all walks in memory in a practical implementation.
Instead, we can use the classic double buffering mechanism to efficiently dump the output to the disk in batches.}
\sun{Specifically, one buffer is used to write results to the disk, while the other records new results generated
by the engine. When the second one is full, we swap the roles of the two buffers. In this way, the I/O cost can be easily and seamlessly
overlapped with the computation because (1) modern computers support direct memory access (DMA), which transfers data independently of CPUs,
and operating systems provide simple APIs for async I/O programming (e.g., \emph{aio\_write} in Linux); and (2) the time on filling a
buffer is much longer than that on writing to disks because of the rapid advancement of storage hardware.
For example, the time on filling a 2 GB buffer by the engine is around 1.79 seconds in
our test bed (equipped with a Samsung PM981 NVMe SSD), while the buffer can be completely written to disk in around 1.20 seconds.
Moreover, the 980 PRO series with PCIe-4.0 achieves up to 5100MB/second sequential write speed, which
can output 2 GB of data in around 0.4 seconds.}
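A toy, synchronous version of the double buffering mechanism is sketched below. In a real engine the flush of \texttt{spare} would be an asynchronous write (e.g., \emph{aio\_write}) overlapping the fills; here the flush is a plain in-memory append, and the \texttt{DoubleBuffer} name and fields are illustrative.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Double-buffering sketch: the engine appends results to `active` while
// `spare` is (conceptually) being flushed; when `active` fills up, the two
// swap roles. `sink` stands in for the disk.
struct DoubleBuffer {
    std::vector<int> active, spare, sink;
    size_t capacity;

    explicit DoubleBuffer(size_t cap) : capacity(cap) {}

    void push(int v) {
        active.push_back(v);
        if (active.size() == capacity) {
            std::swap(active, spare);      // roles swap; engine keeps filling
            sink.insert(sink.end(), spare.begin(), spare.end());  // stand-in flush
            spare.clear();
        }
    }

    // Drain whatever remains at shutdown.
    void flush() {
        sink.insert(sink.end(), active.begin(), active.end());
        active.clear();
    }
};
```

Because the flush touches only \texttt{spare}, an asynchronous writer can run on it concurrently with the engine filling \texttt{active}, which is exactly how the I/O cost gets overlapped with computation.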
\section{Supplement Experiments}
\subsection{Tuning Ring Sizes}
\sun{\textbf{Time on tuning ring sizes.} Table \ref{tab:tuning_ring_size} presents the time on tuning the ring size.
We can see that the tuning process is very
efficient. Even for \emph{fs}, which has more than 1.8 billion edges, the tuning takes around four minutes, whereas the tuning on most
of the graphs takes less than one minute.}
\begin{table}[h]
\footnotesize
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{\sun{The time on tuning ring sizes (seconds).}}
\label{tab:tuning_ring_size}
\begin{tabular}{c|cccccc}
\hline
\textbf{Dataset} & \textit{am} & \textit{yt} & \textit{up} & \textit{eu} & \textit{ac} & \textit{ab} \\ \hline
\textbf{Time} & 0.87 & 2.67 & 9.45 & 2.55 & 35.12 & 39.23 \\ \hline\hline
\textbf{Dataset} & \textit{lj} & \textit{ot} & \textit{wk} & \textit{uk} & \textit{tw} & \textit{fs} \\ \hline
\textbf{Time} & 13.19 & 9.82 & 132.4 & 51.86 & 156.37 & 241.44 \\ \hline
\end{tabular}
\end{table}
\textbf{Impact of ring sizes.} We evaluate the impact of ring sizes on
the performance. Based on our parameter tuning method, we first vary the
task ring size from 1 to 1024 on \texttt{NAIVE} and \texttt{ALIAS} to
pick the optimal value $k^*$, and then fix the task ring size to $k^*$ and
vary the search ring size from 1 to $k^*$ on \texttt{ITS}, \texttt{REJ} and \texttt{O-REJ}
to determine the search ring size.
As shown in Figure \ref{fig:vary_task_ring},
the speedup first increases quickly with $k$ varying from 1 to 8 because
one core in our CPUs can support ten L1-D outstanding misses as it has ten
MSHRs. The optimal speedup is achieved when $k=64$ because we need to introduce
enough computation workload between the data request and the data usage to hide
memory access latency. Further increasing $k$ degrades the performance
as the L1-D cache size is limited and the requested data can be evicted.
Next, we fix the task ring size and vary the search ring size. When $k' = 32$, ThunderRW achieves the highest speedup.
\begin{figure}[t]\small
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\captionsetup[subfigure]{aboveskip=0pt,belowskip=0pt}
\centering
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[scale=0.23]{experiment_figures/vary_ring_size/vary_task_ring_speedup.pdf}
\caption{Task ring size ($k$).}
\label{fig:vary_task_ring}
\end{subfigure}
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[scale=0.23]{experiment_figures/vary_ring_size/vary_search_ring_speedup.pdf}
\caption{Search ring size ($k'$).}
\label{fig:vary_search_ring}
\end{subfigure}
\caption{Speedup with ring size varying on \emph{lj}.}
\label{fig:vary_ring_size}
\end{figure}
\subsection{Prefetching Data to Different Cache Levels}
We use the intrinsic \texttt{\_mm\_prefetch(PTR, HINT)}
to prefetch the data. The intrinsic fetches the line of data from memory containing
address \texttt{PTR} to a location in the cache hierarchy specified by locality hint
\texttt{HINT} \cite{lee2012prefetching}. The intrinsic can load the data to L1, L2 or
L3 cache based on the hint. When fetching the data to L1 or L2,
it loads the data into the outer cache levels as well. Moreover, we can specify the
data as non-temporal with \texttt{\_MM\_HINT\_NTA}. Then, the intrinsic will
load the data into the L1 cache, mark it as non-temporal and bypass the L2 and L3 caches. We set \texttt{HINT} to
\texttt{\_MM\_HINT\_T0}, which fetches the data into the L1 cache and all outer cache levels, as it
has good performance based on our experiments.
We evaluate the effectiveness of prefetching the data to L1, L2, and L3 cache, respectively.
Table \ref{tab:cache_level} lists the experiment results on the \emph{livejournal} graph. The performance of fetching
data to the L1/L2/L3 cache is close. In contrast, marking the data as non-temporal degrades the performance. This is because
the penalty of an L3 cache miss is much higher than that of L1/L2 cache misses, and bypassing the L3 cache results in
more L3 cache misses. Thus, ThunderRW uses the \texttt{\_MM\_HINT\_T0} cache locality hint to fetch
the data to the L1 cache.
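For concreteness, a minimal use of the intrinsic is shown below (x86-only, via \texttt{<xmmintrin.h>}). The one-element prefetch distance is purely illustrative: ThunderRW prefetches across queries rather than ahead within one array, and the function name is ours.

```cpp
#include <xmmintrin.h>   // _mm_prefetch and the _MM_HINT_* constants (x86 only)
#include <cstddef>
#include <vector>

// Prefetch each element one iteration ahead with the T0 hint (fetch into L1
// and all outer levels), then consume it. _MM_HINT_T1 / _MM_HINT_T2 /
// _MM_HINT_NTA would target L2 / L3 / non-temporal instead.
long sum_with_prefetch(const std::vector<long>& data) {
    long total = 0;
    for (size_t i = 0; i < data.size(); ++i) {
        if (i + 1 < data.size())
            _mm_prefetch(reinterpret_cast<const char*>(&data[i + 1]),
                         _MM_HINT_T0);
        total += data[i];
    }
    return total;
}
```

Note that a prefetch is only a hint: it never changes program results, so swapping the hint constant is a safe way to reproduce the comparison in Table \ref{tab:cache_level}.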
\begin{table}[h]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\footnotesize
\caption{Effectiveness of prefetching data to different cache levels (Speedup over loading data to L1 Cache).}
\label{tab:cache_level}
\begin{tabular}{c|c|c|c|c}
\hline
\textbf{Method} & \textbf{L1 Cache} & \textbf{L2 Cache} & \textbf{L3 Cache} & \textbf{Non-temporal Data} \\ \hline\hline
\texttt{NAIVE} &1.00 &0.97 &0.95 &0.79 \\ \hline
\texttt{ITS} &1.00 &1.01 &1.00 &0.95 \\ \hline
\texttt{ALIAS} &1.00 &0.95 &0.95 &0.80 \\ \hline
\texttt{REJ} &1.00 &1.00 &0.99 &0.92 \\ \hline
\texttt{O-REJ} &1.00 &1.01 &1.01 &0.96 \\ \hline
\end{tabular}
\end{table}
\subsection{Pipeline Slot Breakdown and Memory Bandwidth of ThunderRW}
Tables \ref{tab:vary_length_thunderrw} and \ref{tab:vary_num_queries_thunderrw} present the pipeline
slot breakdown and memory bandwidth of ThunderRW with the length of queries and the number of queries varying,
respectively. Compared with the results in Tables \ref{tab:vary_length} and \ref{tab:vary_num_queries},
ThunderRW dramatically reduces the memory bound and significantly increases retiring. Moreover,
the memory bandwidth utilization is improved. \sun{The memory bound for $10 ^ 2$ queries is higher than other settings
because each thread has only 10 queries, whereas the optimal task ring size $k$ is 64.}
\begin{table}[h]
\footnotesize
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{Pipeline slot breakdown and memory bandwidth of ThunderRW with the length of queries varying.}
\label{tab:vary_length_thunderrw}
\begin{tabular}{c|c|c|c|c|c|c}
\hline
\textbf{\begin{tabular}[c]{@{}c@{}}Length of\\ Queries\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Front\\ End\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Bad\\ Spec\end{tabular}} & \textbf{Core} & \textbf{Memory} & \textbf{Retiring} & \textbf{\begin{tabular}[c]{@{}c@{}}Memory\\ Bandwidth\end{tabular}} \\ \hline\hline
5 & 5.0\% & 10.8\% & 25.7\% & 27.0\% & 31.5\% & 29.4GB/s \\ \hline
10 & 6.4\% & 10.3\% & 29.9\% & 18.0\% & 36.1\% & 29.8GB/s \\ \hline
20 & 6.8\% & 10.6\% & 30.6\% & 12.4\% & 40.1\% & 30.8GB/s \\ \hline
40 & 6.8\% & 10.7\% & 31.0\% & 9.2\% & 42.3\% & 31.1GB/s \\ \hline
80 & 6.9\% & 10.8\% & 31.2\% & 7.9\% & 43.2\% & 31.1GB/s \\ \hline
160 & 7.0\% & 10.8\% & 31.3\% & 7.3\% & 43.7\% & 31.2GB/s \\ \hline
\end{tabular}
\vspace*{-10pt}
\end{table}
\begin{table}[h]
\footnotesize
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{Pipeline slot breakdown and memory bandwidth of ThunderRW with the number of queries varying.}
\label{tab:vary_num_queries_thunderrw}
\begin{tabular}{c|c|c|c|c|c|c}
\hline
\textbf{\begin{tabular}[c]{@{}c@{}}Num of\\ Queries\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Front\\ End\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Bad\\ Spec\end{tabular}} & \textbf{Core} & \textbf{Memory} & \textbf{Retiring} & \textbf{\begin{tabular}[c]{@{}c@{}}Memory\\ Bandwidth\end{tabular}} \\ \hline\hline
\sun{$10 ^ 2$} & \sun{5.3\%} & \sun{6.5\%} & \sun{28.1\%} & \sun{27.3\%} & \sun{32.8\%} & \sun{26.1GB/s} \\ \hline
$10 ^ 3$ & 6.3\% & 10.4\% & 30.7\% & 9.8\% & 42.8\% & 30.1GB/s \\ \hline
$10 ^ 4$ & 7.2\% & 11.1\% & 32.2\% & 7.7\% & 43.9\% & 29.0GB/s \\ \hline
$10 ^ 5$ & 6.9\% & 10.8\% & 31.1\% & 7.9\% & 43.2\% & 31.5GB/s \\ \hline
$10 ^ 6$ & 6.9\% & 10.8\% & 31.0\% & 8.0\% & 43.3\% & 31.4GB/s \\ \hline
$10 ^ 7$ & 6.9\% & 10.7\% & 31.4\% & 8.2\% & 42.8\% & 31.1GB/s \\ \hline
$10 ^ 8$ & 6.8\% & 10.7\% & 31.4\% & 8.4\% & 42.7\% & 31.0GB/s \\ \hline
\end{tabular}
\end{table}
\subsection{Impact on Existing Systems}
\sun{In principle, the step interleaving technique is a generic optimization for RW
algorithms because it accelerates in-memory computation by hiding memory access latency in a single query via executing
a group of queries alternately, and RW algorithms generally consist of a number of random walks. However,
directly implementing it in the code base of GraphWalker and KnightKing is difficult because (1) their walker-centric model
regards each query as a task unit, which cannot support executing steps of different queries alternately; and (2)
their source code does not consider extensibility to support further enhancement. As such, we emulate the execution paradigm
of the two systems to study the impact of our optimization on their in-memory computation.}
\sun{Specifically, the in-memory computation of KnightKing adopts the BSP model, which executes random walks iteratively and moves one step
for all queries at each iteration. We implement this procedure, and integrate step interleaving into it as follows: (1) divide queries into a
number of groups; (2) run the queries in a group with step interleaving; and (3) execute queries group by group at each iteration.
The implementation without/with the step interleaving is denoted by \emph{KK}/\emph{KK-si}.
The in-memory computation of GraphWalker adopts the ASP model, which assigns a query to each core and executes it independently.
We implement the procedure, and integrate the step interleaving into it as follows: assign a group of random walks to each core
and execute them with the step interleaving. The implementation without/with the step interleaving is denoted by \emph{GW}/\emph{GW-si}.}
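The \emph{KK-si} emulation can be sketched as follows. This is an illustrative skeleton: \texttt{bsp\_with\_groups} and \texttt{advance} are our names, and the group-wise, step-interleaved \texttt{Move} is reduced to a plain per-query callback.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

// BSP-style emulation of KK-si: at every iteration, each pending query
// advances exactly one step, processed group by group. A real engine would
// run each group with step interleaving instead of the inner plain loop.
void bsp_with_groups(std::vector<int>& remaining_steps,
                     size_t group_size,
                     const std::function<void(size_t)>& advance) {
    bool pending = true;
    while (pending) {                               // one BSP iteration per pass
        pending = false;
        for (size_t g = 0; g < remaining_steps.size(); g += group_size) {
            size_t end = std::min(remaining_steps.size(), g + group_size);
            for (size_t q = g; q < end; ++q) {      // interleaving happens here
                if (remaining_steps[q] > 0) {
                    advance(q);
                    if (--remaining_steps[q] > 0) pending = true;
                }
            }
        }
    }
}
```

The \emph{GW-si} variant differs only in scheduling: each core owns a fixed group and runs its queries to completion with step interleaving, instead of synchronizing all groups after every step.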
\sun{Figure \ref{fig:impact} presents experiment results of DeepWalk on \emph{lj} with \texttt{ALIAS} sampling.
We set the group size as 64, which is the same as the optimal ring size. Enabling step interleaving
significantly reduces memory bound for both \emph{GW} and \emph{KK}, and improves the instruction retirement.
Figure \ref{fig:impact_speedup} shows the speedup over \emph{GW}. We find that \emph{KK}, which uses BSP, runs 1.8X faster than \emph{GW},
which utilizes ASP, because the steps of different queries at each iteration are independent of each other, and modern out-of-order CPUs
benefit from this independence. After adopting step interleaving, both \emph{GW} and \emph{KK} achieve a significant speedup. \emph{GW-si}
runs faster than \emph{KK-si} since \emph{KK-si} moves each query only one step per iteration, and the context switch of each query incurs overhead.}
\begin{figure}[t]\small
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-10pt}
\captionsetup[subfigure]{aboveskip=0pt,belowskip=0pt}
\centering
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[scale=0.23]{experiment_figures/impact/impact_livejournal_breakdown.pdf}
\caption{Pipeline slot breakdown.}
\label{fig:impact_pipeline_slot}
\end{subfigure}
\begin{subfigure}[t]{0.23\textwidth}
\centering
\includegraphics[scale=0.23]{experiment_figures/impact/impact_livejournal_speedup.pdf}
\caption{Speedup.}
\label{fig:impact_speedup}
\end{subfigure}
\caption{\sun{Impact on in-memory computation of GraphWalker (GW) and KnightKing (KK).}}
\label{fig:impact}
\end{figure}
\subsection{Comparison with AMAC \cite{kocberber2015asynchronous}}
To compare with prefetching techniques designed for index lookups in database systems,
we implement the \texttt{Move} operation with the stage switch mechanism of AMAC \cite{kocberber2015asynchronous}. AMAC explicitly maintains the states of all
stages in an SDG and performs the stage transitions, which is similar to our method for processing cycle stages.
Table~\ref{tab:detailed_metrics} presents instructions per step and cycles per step of wo/si, w/si and AMAC.
Enabling step interleaving leads to more instructions per step due to the overhead of prefetching
and stage transitions. The overhead on \texttt{NAIVE} and \texttt{ALIAS} is smaller than that on the
other three methods because all stages in \texttt{NAIVE} and \texttt{ALIAS} are non-cycle stages.
The benefit of hiding memory access latency offsets the overhead of executing
extra instructions. Therefore, the step interleaving technique significantly reduces
cycles per step. As \texttt{NAIVE} and \texttt{ALIAS} have only a few stages, instructions per step
of AMAC is close to that of w/si on the two methods. However, AMAC takes 1.57-2.03X more instructions
per step than w/si on \texttt{ITS}, \texttt{REJ} and \texttt{O-REJ}, which consist of several stages.
AMAC incurs more overhead because it explicitly maintains the states of all stages in the SDG and controls
the stage transition. In contrast, our stage switch mechanism processes cycle stages and non-cycle
stages with different methods, and controls the stage transition for cycle stages only.
Consequently, AMAC spends 1.18-1.64X more cycles per step than w/si. The results demonstrate the effectiveness
of our stage switch mechanism.
\begin{table}[t]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\captionsetup{aboveskip=0pt}
\captionsetup{belowskip=0pt}
\footnotesize
\caption{Detailed metrics with sampling method varying.}
\label{tab:detailed_metrics}
\begin{tabular}{c|ccc|ccc}
\hline
& \multicolumn{3}{c|}{\textbf{Instructions per Step}} & \multicolumn{3}{c}{\textbf{Cycles per Step}}\\ \hline
\textbf{Method} &\textbf{wo/si} & \textbf{w/si} & \textbf{AMAC} &\textbf{wo/si} & \textbf{w/si} & \textbf{AMAC} \\ \hline\hline
\texttt{NAIVE} &131.24 &132.32 &137.42 &596.12 &111.26 &112.55 \\ \hline
\texttt{ITS} &157.06 &335.75 &681.05 &1716.52 &327.65 &537.09 \\ \hline
\texttt{ALIAS} &134.56 &139.17 &179.54 &740.73 &139.14 &140.26 \\ \hline
\texttt{REJ} &187.87 &260.83 &464.78 &940.75 &273.44 &352.84 \\ \hline
\texttt{O-REJ} &180.14 &264.56 &414.27 &1000.66 &333.21 &392.21 \\ \hline
\end{tabular}
\end{table}
\subsection{\sun{Future Extension}}
\sun{For extremely large graphs that cannot fit into the main memory of a single machine, we consider two approaches. First, we can develop
external-memory graph systems that host the graph on hard disk. With the recent advent of emerging storage such as Intel DCPMM persistent memory,
the I/O cost can be largely overlapped with in-memory processing (where ThunderRW can be leveraged and adapted for performance improvement).
Second, we plan to develop distributed systems such as KnightKing, where ThunderRW can be leveraged as a single-node engine.
We leave these extensions of ThunderRW to future work.}
\section{Conclusions}
\textbf{Conclusions:} In this paper we present a new paradigm for infusing non-trivial topological characteristics into a trivial insulator by modulating the scalar potential. The scalar potential enhances the mixing between different quantum states, which in turn drives the system into a topologically non-trivial regime, as we verify explicitly by calculating the Chern number of the system. This is further confirmed by demonstrating the appearance of edge states with a specific group velocity. In addition, our method allows one to control the topological properties by local means, which is not possible with a conventional topological insulator. We demonstrate that the edge states can be controlled by selective placement of the scalar potential. Note that although the results are shown for uniformly spaced scalar impurities, the same qualitative behaviour is observed for randomly distributed scalar potentials as well \cite{Suppl}. These predictions can be realised in real materials available experimentally. The most suitable candidate for such a study is a CdTe-HgTe-CdTe quantum well, where the topological phases can be controlled by changing the width of the well. For a Hg$_{0.32}$Cd$_{0.68}$Te-HgTe quantum well, the mass gap ($2m$) is $\sim$50\,meV for a thickness of 50\,$\rm \AA$ \cite{Bernevig2006}, which indicates that the scalar-potential-induced topological transition can be observed for $V_0 \lesssim 0.5$\,eV. Our results thus open several new possibilities for controlling topological properties and designing highly controllable devices for topological electronics.
\textbf{Acknowledgments:} SG would like to acknowledge helpful discussions with Emil Prodan.
\bibliographystyle{apsrev4-1}
\section*{Supplementary Material}
\input{sections/supplementary}
\clearpage
\clearpage
{\small
\bibliographystyle{ieee_fullname}
\section{Method Overview}
We now present each module and the corresponding losses of our full model as shown in \reffig{model}.
We formulate object detection as a keypoint detection problem similar to {CenterNet}~\cite{Zhou19arxiv},
where each object is represented by its center point in the 2D image (\refsec{key_point_detection}).
From the detected center points, we directly estimate
realistic shapes (\refsec{shape_reconstruction})
and oriented 3D object bounding boxes (\refsec{3d_bounding_box}).
To further promote reconstructions that are physically plausible, we propose a collision loss that penalizes the intersection of multiple objects (\refsec{collision_loss}).
\subsection{Object Detection as Keypoint Detection}
\label{sec:key_point_detection}
The first part of our method is a keypoint detector that follows the setup of {CenterNet}~\cite{Zhou19arxiv}.
Given a single RGB image $I \in \mathbb{R}^{W \times H \times 3}$,
the detector localizes keypoints (here: object centers) by predicting class-specific heatmaps $\hat{Y} \in [0,1]^{\frac{W}{R} \times \frac{H}{R} \times C}$
(\reffig{detection}, \emph{left})
where $C$ is the number of object classes and $R$\,=\,$4$ is a down-sampling factor.
The detected center points $\{\hat{\mathbf{p}}_i \in \mathbb{R}^2 \}$ (shown as $\circ$ in \reffig{detection}) correspond to the local maxima in the predicted heatmaps $\hat{Y}$.
They are obtained using non-maximum-suppression, which is implemented as a $3 \times 3$ max pooling.
We associate a confidence score $s_i = \hat{Y}_{\hat{\mathbf{p}}_i}$ with each detected keypoint $\hat{\mathbf{p}}_i$.
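As an illustration, this peak extraction can be sketched in NumPy (our re-implementation, not the TensorFlow code): a pixel survives non-maximum suppression when it equals the maximum of its $3\times3$ neighbourhood and exceeds a confidence threshold.

```python
import numpy as np

def local_maxima(heatmap, thresh=0.1):
    # 3x3 max pooling implemented via padding; a pixel is kept when it
    # equals the maximum of its 3x3 neighbourhood (non-maximum suppression)
    # and its score exceeds the threshold.
    H, W = heatmap.shape
    padded = np.pad(heatmap, 1, mode="constant", constant_values=-np.inf)
    pooled = np.max(
        [padded[dy:dy + H, dx:dx + W] for dy in range(3) for dx in range(3)],
        axis=0,
    )
    keep = (heatmap == pooled) & (heatmap > thresh)
    ys, xs = np.nonzero(keep)
    scores = heatmap[ys, xs]
    return list(zip(xs.tolist(), ys.tolist(), scores.tolist()))
```

Each returned tuple $(x, y, s)$ is a detected center point with its confidence score.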
The feature backbone -- which takes the input image $I$ and generates the output heatmaps $\hat{Y}$ --
is implemented as a stacked hourglass model~\cite{Newell2016ECCV}.
During training, we follow \cite{Law2018ECCV, Zhou19arxiv} and generate the target heatmaps $Y$ by splatting the ground truth center points $\mathbf{p}_i$ using Gaussian kernels $\mathcal{N}(\mathbf{p}_i, \sigma_i)$ with $\sigma_i$ depending on the projected size of the object $i$.
Training the keypoint detector relies on the focal loss~\cite{Lin17ICCV} and is computed over all pixels $(x, y)$ and classes $c \in \{1, \ldots, C\}$ in the heatmaps:
\begin{equation}
\mathcal{L}_{key}\text{\,=\,}\frac{-1}{N} \sum_{xyc} \begin{cases}
(1\text{\,-\,}\hat{Y}_{xyc})^{\alpha} \cdot \text{log}(\hat{Y}_{xyc}) &\hspace{-25px}\text{if $Y_{xyc}$\,=\,1}\\
(1\text{\,-\,}Y_{xyc})^{\beta} \cdot (\hat{Y}_{xyc})^{\alpha} \cdot \text{log}(1\text{\,-\,}\hat{Y}_{xyc}) &\hspace{-0px}\text{else}\\
\end{cases}
\end{equation}
where $N$ is the number of ground truth objects, $\alpha$\,=\,2 and $\beta$\,=\,4 are the hyper-parameters of the focal loss.
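For concreteness, the focal loss above can be written as a short NumPy sketch (an illustrative re-implementation, not the training code; the small `eps` guards the logarithms):

```python
import numpy as np

def keypoint_focal_loss(Y, Y_hat, alpha=2.0, beta=4.0, eps=1e-12):
    # Penalty-reduced pixel-wise focal loss over heatmaps, following the
    # equation above; N is the number of ground-truth peaks (Y == 1).
    pos = (Y == 1.0)
    pos_loss = ((1 - Y_hat) ** alpha) * np.log(Y_hat + eps) * pos
    neg_loss = ((1 - Y) ** beta) * (Y_hat ** alpha) \
        * np.log(1 - Y_hat + eps) * (~pos)
    N = max(pos.sum(), 1)
    return -(pos_loss.sum() + neg_loss.sum()) / N
```

Pixels near a ground-truth center (where $Y$ is close to but below 1 due to the Gaussian splatting) are down-weighted by the $(1 - Y)^{\beta}$ factor.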
After detecting the object instances as center points, the network jointly
selects 3D shapes (\refsec{shape_reconstruction}) and
estimates 3D bounding boxes (\refsec{3d_bounding_box})
for each object in the scene.
\input{figures/shape_selection}
\subsection{Shape Selection}
\label{sec:shape_reconstruction}
Instead of directly reconstructing shape representations such as meshes, voxel grids or point clouds~\cite{Fan16,groueix2018,Popov20ECCV},
our method operates indirectly, by selecting shape exemplars.
More precisely, the network is trained to select for each detection one shape exemplar $z$
among a set of $K$ shape exemplars from a given shape database.
This choice is motivated by our goal to reconstruct realistic scenes: it guarantees valid shapes from the object database, unlike recent reconstruction methods, which can produce incomplete, noisy or over-smoothed reconstructions \cite{Avetisyan20ECCV}.
Similarly, the recent work of Tatarchenko \etal~\cite{Tatarchenko19CVPR} concludes that current methods for single-view 3D reconstruction primarily work because of recognizing the type of shape depicted in the image, rather than truly recovering the geometric details unique to that particular instance.
To reiterate, in this work, the shape estimation problem is formulated as a shape selection problem which chooses one shape exemplar $\hat{z}$ from a given shape database $\mathcal{Z}$ of $K$ shape exemplars.
After predicting an exemplar $\hat{z}$, an explicit shape representation $X$ (voxel-grid, point cloud, CAD model \etc) can be chosen freely from the precomputed databases $\mathcal{Z}_X$ (described next) depending on the task or loss function at hand.
As such, the presented model is agnostic towards any particular shape representation.
\paragraph{Building the shape database $\mathcal{Z}$.}
The presented shape database is a set of representative shape exemplars selected from a given set of CAD models.
Once our shape database is built, the full set of the original CAD models is no longer required.
We now describe how those exemplary shapes are selected.
First, the CAD models are transformed into a canonical orientation, position and scale.
Specifically, all models are facing down the negative Z-axis, the centroids are translated to the origin, and we apply anisotropic scaling such that the models fit into the unit cube.
Then, for each object $i$, we compute the signed distance function (SDF) representation $\phi^i$ of the corresponding CAD model.
After discretization, downsampling to $32^3$ grids and flattening to vectors, we cluster the objects using k-Means++~\cite{David07ACM} with $k$\,=\,$50$, for each object class separately.
The total number $K$ of shape exemplars in the database $\mathcal{Z}$ is $K = k \cdot C$ where $C$ is the number of object types (chairs, bottle, \etc).
The objects appearing in the training images are already annotated by their corresponding CAD model.
Hence, we can re-label each object with their nearest shape exemplar $z^k$.
Additionally, the shape database can be extended to store explicit shape representations such as SDFs
$\mathcal{Z}_{\phi}$\,=\,$\{\phi^k\}_{k=1}^K $, point clouds $\mathcal{Z}_{\mathcal{P}}$\,=\,$\{\mathcal{P}^k\}_{k=1}^K$ or CAD models $\mathcal{Z}_{\text{CAD}}$\,=\,$\{\text{CAD}^k\}_{k=1}^K$.
In each case, the stored representation corresponds to the model that is closest to the cluster center under the clustering metric ($L_2$ distance over $\phi$).
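The database construction can be sketched as follows. This is an illustrative NumPy version that uses plain Lloyd iterations in place of k-means++ (the clustering used above), operating on flattened SDF grids of one object class; the exemplar of each cluster is the member closest to the cluster centroid.

```python
import numpy as np

def build_shape_database(sdfs, k, iters=20, seed=0):
    # sdfs: (n, d) float array of flattened SDF grids for one class.
    rng = np.random.default_rng(seed)
    centers = sdfs[rng.choice(len(sdfs), size=k, replace=False)]
    for _ in range(iters):
        # assign every shape to its nearest cluster center
        d = np.linalg.norm(sdfs[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for c in range(k):
            members = sdfs[assign == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    # exemplar = the database member closest to each centroid
    exemplars = []
    for c in range(k):
        idx = np.where(assign == c)[0]
        best = idx[np.linalg.norm(sdfs[idx] - centers[c], axis=1).argmin()]
        exemplars.append(int(best))
    return exemplars, assign
```

The returned `assign` array also provides the re-labeling of training objects with their nearest shape exemplar.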
\paragraph{Training the shape selection network module.}
One straightforward approach is to train a 1-of-$K$ classifier.
Specifically, for each object $i$ in the input image, the network predicts a vector $\hat{{\mathbf{z}}}^i \in \mathbb{R}^{K}$
scoring it against each of the $K$ exemplar shapes in the shape database $\mathcal{Z}$.
We can then place a cross-entropy loss $CE(\cdot, \cdot)$ on this output and supervise it with the ground truth one-hot encoding of the target shape $\mathbf{z}^i \in \{0, 1\}^{K}$ (\reffig{shape_selection}, \emph{left}):
\begin{align}
\mathcal{L}'_{\mathbf{z}} &= \frac{1}{M} \sum_{i=1}^M CE\big(\mathbf{z}^i, \sigma{(\hat{\mathbf{z}}^i)}\big) \\
&= -\frac{1}{M} \sum_{i=1}^M \sum_{k=1}^K {z}^i_k \cdot \text{log}\big(\sigma(\hat{{\mathbf{z}}}^i)_k\big)
\label{eq:shape_hard}
\end{align}
where $M$ is the number of detections in the image, $\sigma$ is the softmax function (\cf next paragraph, where we use sigmoid $S$ instead), and $z^i_k$ is the $k$-th entry in vector $\mathbf{z}^i$.
At test time, the predicted shape exemplar $\hat{z}^i$ is computed as $\hat{z}^i = \text{argmax}_k(\hat{\mathbf{z}}^i)$.
This approach corresponds to the clustering baseline presented by Tatarchenko \etal in \cite{Tatarchenko19CVPR}.
The issue with this approach is that two objects $\{i, j\}$ that are geometrically similar (\ie $\phi^i \approx \phi^j$) can have disagreeing supervision signals $\{\mathbf{z}^{i}, \mathbf{z}^{j}\}$.
This can have a negative impact on the network training, as the network is asked to simultaneously predict a high value for one of the $K$ database shapes, while also predicting a low value for another, very similar shape.
Instead, we propose an alternative formulation: a soft relaxation of the binary target labels $\mathbf{z} \in \{0, 1\}^K$ that takes the geometric similarity of shapes into account.
Specifically, we allow multiple shape exemplars to be predicted simultaneously; they are no longer mutually exclusive.
Formally, we redefine the target labels $\mathbf{z}$ using a shape similarity function $d(\cdot, \cdot)$ (\reffig{shape_selection}, \emph{right}) such that:
\begin{equation}
\mathcal{L}_{\mathbf{z}} = - \frac{1}{M} \sum_{i=1}^M \sum_{k=1}^K d(i, k) \cdot \text{log}\big(S(\hat{z}^i_k)\big)
\label{eq:shape_soft}
\end{equation}
where $S$ is the sigmoid function and
\begin{equation}
d(i, k) = [1 - \lVert \phi^i - \phi^k \rVert_2 ]_{+}
\end{equation}
where $[\,\cdot\,]_+ = \text{max}(\,\cdot\,,\,0)$ and $\lVert \cdot \rVert_2$ is the Euclidean distance between the shape exemplars' SDFs $\phi^k$ in the shape database $\mathcal{Z_{\phi}}$, and $\phi^i$ is the ground truth SDF of object $i$.
In the following, we will refer to these labels as \emph{soft}-labels, and when using the one-hot encoding as \emph{hard}-labels.
In \refsec{experiments}, we show that our alternative soft formulation is key to improve shape selection.
At test time, we simply select the shape exemplar with the highest output value by the network.
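Putting the soft labels and the loss together, a NumPy sketch for a single object $i$ (illustrative; SDFs are flattened to vectors as in the clustering step) could read:

```python
import numpy as np

def soft_labels(phi_gt, phi_db):
    # d(i, k) = max(1 - ||phi_i - phi_k||_2, 0): geometric similarity of
    # the ground-truth SDF to every database exemplar (flattened vectors).
    dist = np.linalg.norm(phi_db - phi_gt[None], axis=1)
    return np.clip(1.0 - dist, 0.0, None)

def soft_selection_loss(logits, targets, eps=1e-12):
    # Sigmoid + similarity-weighted log-likelihood, summed over the
    # K database entries, as in the soft-label loss above.
    s = 1.0 / (1.0 + np.exp(-logits))
    return -np.sum(targets * np.log(s + eps))
```

Geometrically similar exemplars receive positive target weight, so the network is no longer penalized for scoring them highly.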
Next, we describe our approach to estimate the 3D bounding boxes,
which are subsequently used to transform the estimated object shapes from their canonical database pose into the scene coordinate frame.
\subsection{3D Bounding Box Estimation (9-DoF Poses)}
\label{sec:3d_bounding_box}
Along with the realistic shape representation, we aim at finding a 9-DoF bounding box for each object in the input image $I$.
We describe now the estimation of the 9-DoF bounding box parameters, capturing the object pose in the scene.
They include a 3D rotation $\hat{\mathbf{R}} \in SO(3)$, a 3D translation $\hat{\mathbf{t}} \in \mathbb{R}^3$ and a 3D scale $\hat{\mathbf{s}} \in \mathbb{R}^3$.
These parameters are used to transform the estimated object shape from its canonical database pose to the scene coordinate frame.
In \emph{CenterNet}, Zhou \etal~\cite{Zhou19arxiv} formulate the rotation estimation as a combination of classification over quantized bins followed by regression to a continuous offset.
That formulation requires the definition of multiple loss functions along with carefully tuned loss weights.
Instead, we directly parameterize the object rotation as a 3D rotation matrix $\hat{\mathbf{R}} \in SO(3)$.
Specifically, our network predicts a 9-dimensional output interpreted as a $3\times3$ rotation matrix $\mathbf{M}$ with (differentiable) SVD decomposition~\cite{Gene1996} $\mathbf{M}$\,=\,$\mathbf{U\Sigma V}^{\top}$.
The corresponding special orthogonal rotation matrix $\hat{\mathbf{R}}$ is then obtained by projecting $\mathbf{M}$ onto $SO(3)$~\cite{Levinson20arxiv}:
\begin{equation}
\hat{\mathbf{R}} = \mathbf{U} \mathbf{\Sigma}' \mathbf{V}^{\top}, \; \text{where } \mathbf{\Sigma}' = \text{diag}\big([1, 1, \text{det}(\mathbf{U}\mathbf{V}^{\top})]\big)
\end{equation}
This formulation is straightforward and can directly be optimized using, \eg, the Frobenius norm~\cite{Gene1996}: $\lVert \mathbf{R} - \hat{\mathbf{R}} \rVert_F$.
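The projection step can be sketched in a few lines of NumPy; here `M` stands for the raw 9-dimensional network output reshaped to $3\times3$ (an illustration of the projection, not the network code):

```python
import numpy as np

def project_to_so3(M):
    # Closest rotation matrix to M under the Frobenius norm: SVD
    # orthogonalization with a determinant correction so det(R) = +1.
    U, _, Vt = np.linalg.svd(M)
    S = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ S @ Vt
```

If `M` is already a rotation matrix, the projection returns it unchanged.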
The translation $\hat{\mathbf{t}} \in \mathbb{R}^3$ is defined as the vector from the scene origin to the 3D bounding box centroid,
and can be optimized, \eg, with the \emph{Huber} loss (smooth-$L_1$): $\lVert \mathbf{t} - \hat{\mathbf{t}} \rVert_H$.
Instead, we propose to jointly optimize both the rotation $\hat{\mathbf{R}}$ and the translation $\hat{\mathbf{t}}$ using the concatenated transformation $\mathbf{T} = [\mathbf{R}\,| \,\mathbf{t}]$.
Specifically, we minimize the squared Euclidean distance between the point cloud $\mathcal{P}^i$ of the object under the estimated $\hat{\mathbf{T}}$ and ground truth transformation $\mathbf{T}$.
Formally, we have:
\begin{equation}
\mathcal{L}_{\mathbf{Rt}} = \sum_{i=1}^M \sum_{\mathbf{x} \in \mathcal{P}^i} \lVert \mathbf{T}^i\,\mathbf{x} - \hat{\mathbf{T}}^i\,\mathbf{x} \rVert_2^2
\label{eq:pose}
\end{equation}
where $M$ is the number of objects in the image, $\mathbf{x} \in \mathbb{R}^3$ is a point in the point cloud $\mathcal{P}^i$ sampled from the surface of the ground truth object $i$ in the input image.
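A minimal NumPy version of this joint loss for a single object might look as follows (illustrative; `T_gt` and `T_hat` are the $3\times4$ matrices $[\mathbf{R}\,|\,\mathbf{t}]$, and `points` is the sampled surface point cloud):

```python
import numpy as np

def pose_loss(T_gt, T_hat, points):
    # Squared Euclidean distance between the object's surface points under
    # the ground-truth and the predicted rigid transform T = [R | t].
    hom = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    diff = hom @ T_gt.T - hom @ T_hat.T
    return np.sum(diff ** 2)
```

Because the same points are transformed by both matrices, rotation and translation errors are penalized jointly without per-term loss weights.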
Finally, the scale loss $\mathcal{L}_{\mathbf{s}}$ is implemented as the $\text{L}_1$ distance between predicted and ground truth 3D scale averaged over all objects in the input image.
Similar to \cite{Zhou19arxiv}, the neural network branch that predicts the bounding box parameters is class-agnostic (\ie the same for all classes $c$) and only receives supervision at the ground truth center locations.
In summary, the loss for the 9-DoF bounding box estimation consists of two terms:
$\lambda_{Rt}\mathcal{L}_{Rt} +\lambda_{s}\mathcal{L}_{s}$.
\input{figures/collision}
\subsection{Collision Loss}
\label{sec:collision_loss}
Towards our goal of realistic multi-object reconstruction,
it is not only important that the individual objects exhibit realistic shapes,
but also that their poses form a physically plausible spatial configuration in the scene.
One specific concern is that reconstructed objects should not intersect or collide
with each other.
However, the model we just presented in practice often predicts colliding shapes, especially for nearby objects.
As a remedy, we propose to add a collision loss that inflicts a penalty whenever two or more reconstructed objects collide.
In particular, we rely on the convenient property of our model that it can choose from multiple shape representations,
and use the SDF representation $\phi^j$ of an object $j$ and the point cloud $\mathcal{P}^i$ of another object $i$ to compute the point-to-surface distance.
Specifically, the SDF $\phi^j$ gives the distance of a point to the nearest surface of object $j$; it is negative inside the object and positive outside. We therefore define $\widetilde \phi = \text{max}(-\phi, 0)$ such that the values are positive inside the object and zero outside.
Formally, the collision loss for one object $i$ with all other objects $j$ is:
\begin{equation}
\mathcal{L}_{coll}^{i} = \sum\limits_{\substack{j = 1 \\ i\neq j}}^M \sum_{\mathbf{x} \in \mathcal{P}^i}
\widetilde{\phi}^j(\mathbf{T}^{ij} \mathbf{x})
\end{equation}
where $M$ is the total number of detections in the scene,
$\mathbf{T}^{ij}$ is the transformation matrix
placing the point cloud $\mathcal{P}^i$ of object $i$ into the local coordinate system of object $j$.
As we store the SDFs values as discrete voxel grids, we perform differentiable trilinear interpolation when sampling $\widetilde\phi^j$ at the continuous point positions $ \mathbf{T}^{ij} \mathcal{P}^i$.
\reffig{collision_illustration} provides a visual interpretation of the loss: the truncated SDF $\widetilde\phi^j$ is positive inside object $j$ and zero outside.
Note that the SDF $\phi$ and point clouds $\mathcal{P}$ can be pre-computed, as the shape reconstruction task is formulated as an exemplar selection problem in our model, so all possible output shapes are known beforehand.
The collision loss over all objects in a scene is:
\begin{equation}
\mathcal{L}_{coll} = \sum_{i = 1}^M \rho ( \mathcal{L}_{coll}^i )
\end{equation}
where $\rho(x)$\,=\,$\frac{x^2/2}{1 + x^2}$ is the robust Geman-McClure loss \cite{Gene1996} compensating for varying point densities among objects.
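The sampling step can be illustrated with a small NumPy sketch. The trilinear interpolation below is a plain re-implementation (the model uses a differentiable TensorFlow version); `phi_tilde_j` is the truncated SDF voxel grid of object $j$, and the points are assumed to already be transformed into its grid coordinates.

```python
import numpy as np

def trilinear(grid, pts):
    # Trilinear interpolation of a voxel grid at continuous points
    # (pts in grid coordinates, shape (n, 3)).
    lo = np.floor(pts).astype(int)
    f = pts - lo
    out = np.zeros(len(pts))
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                idx = np.clip(lo + [dx, dy, dz], 0,
                              np.array(grid.shape) - 1)
                w = (np.where(dx, f[:, 0], 1 - f[:, 0])
                     * np.where(dy, f[:, 1], 1 - f[:, 1])
                     * np.where(dz, f[:, 2], 1 - f[:, 2]))
                out += w * grid[idx[:, 0], idx[:, 1], idx[:, 2]]
    return out

def collision_penalty(phi_tilde_j, pts_in_j):
    # Sum of truncated-SDF values of object j at object i's points,
    # squashed by the robust Geman-McClure function rho(x) = (x^2/2)/(1+x^2).
    x = np.sum(trilinear(phi_tilde_j, pts_in_j))
    return (x ** 2 / 2.0) / (1.0 + x ** 2)
```

Points that stay outside object $j$ sample only zeros of $\widetilde\phi^j$ and thus contribute no penalty.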
\subsection{Training Details}
The full model is optimized by minimizing the multi-task loss $\mathcal{L}$ defined using the previously introduced losses:
\begin{equation}
\mathcal{L} = \mathcal{L}_{key} +
\lambda_{\mathbf{Rt}}\mathcal{L}_{\mathbf{Rt}} +
\lambda_{\mathbf{s}}\mathcal{L}_{\mathbf{s}} + \lambda_{\mathbf{z}}\mathcal{L}_{\mathbf{z}} + \lambda_{coll}\mathcal{L}_{coll}
\end{equation}
where $\lambda$ are weighting coefficients with associated values $\{10,\,10,\,0.1,\,1.0\}$ respectively.
One important observation is that the collision loss can contradict the pose losses $\mathcal{L}_{Rt}, \mathcal{L}_{s}$, especially in the beginning of the training process when the initial object pose estimates are still quite far away from the ground truth.
Penalizing colliding objects at this stage is not helpful and even has a negative impact on convergence speed.
Therefore, we enable the collision loss only after 100 epochs; before that we set its weight $\lambda_{coll} = 0$.
We train the entire network from scratch and end-to-end using the Adam optimizer,
and a batch size of 32 for 300 epochs on four P100 GPUs.
Training the model to convergence takes about 48 hours.
After 5 epochs of warm-up, we use a constant learning rate of $10^{-3}$ and perform cosine-decay after 200 epochs.
We implemented our model in TensorFlow\,2.
We found strong data augmentation to be critical for training stability.
Specifically, we perform HSV-color augmentation and random horizontal image flipping (\reffig{data_augmentation}).%
\input{figures/data_augmentation}
\section{Details on inference phase}
During training, the model heatmaps $\hat{Y}$ are solely supervised at ground truth pixel locations $\mathbf{p}$, all other pixels are ignored.
The predicted heatmap values $\hat{Y}_{\mathbf{\hat{p}}_i}$ can directly be interpreted as confidence scores $s_i$.
The details of the training process are described in Sec.\,3.1 of the paper and are analogous to \cite{Zhou19arxiv}.
At test time, each pixel in the predicted heatmaps $\hat{Y} \in [0, 1]^{\frac{W}{R} \times \frac{H}{R} \times C}$ corresponds to a potential object prediction.
In our experiments, the heatmaps have a resolution of $64 \times 64$.
However, we only consider detections with a confidence score $s_i$ above a fixed threshold $\tau$, \ie $s_i > \tau$. For the experiments on the synthetic CoReNet~\cite{Popov20ECCV} datasets, we set $\tau = 10^{-2}$ and for the inference on the real-world images (discussed below), we set $\tau = 10^{-1}$ to compensate for the larger uncertainty originating from the generalization gap since we evaluate on real data while training on synthetic data.
\section{Inference speed}
One full pass of the presented method, from the input image of size $256$\,$\times$\,$256$\,px$^2$ to the multi-object 3D reconstruction, takes about $29$\,ms. This corresponds to \textbf{34 FPS}, \ie our method is real-time capable, running on a P100 GPU.
Unlike \cite{Popov20ECCV}, our method does not require additional post-processing steps (\eg, marching cubes) to extract the final object surface. Instead, we place the detected shape exemplar from the shape database into the scene and align it to the predicted 9-DoF bounding box.
\newpage
\section{Additional qualitative results on real scenes}
We show additional qualitative results on real pictures taken with a mobile phone.
The images in this real dataset show comparable object classes and imitate the general placement of objects in the synthetic dataset on which the model is trained.
Besides cropping and resizing the input images to $256$\,$\times$\,$256$\,px$^2$, no additional pre-processing steps are performed.
As described in the main paper, our model is trained on the synthetic ShapeNet-triplet dataset presented in \cite{Popov20ECCV} and now evaluated on real images. We also show qualitative results from the CoReNet $m_9$ model, trained on the same synthetic dataset and evaluated on the newly recorded real images.
Qualitative results on both models are shown below.
\section{Discussion and comparison to CoReNet}
In general, the method presented in this work generalizes notably better to real images than CoReNet, in terms of qualitative reconstructions. This is mostly due to the fact that we represent objects as points and formulate reconstruction as a shape selection problem. In particular, this means that our reconstructions are always valid shapes without holes or detached components.
Our strong data augmentation is likely to contribute as well to the improved generalization from synthetic training data.
Further, our method performs reconstruction at the object level, such that different instances of the same semantic class are represented individually, each with its own 9-DoF bounding box.
In CoReNet, different instances of the same object class are not easily obtained since individual objects of the same semantic class share a dense voxel-grid.
Separating different instances could be achieved via connected components but might fail in case of wrongly disconnected parts.
On a technical side, CoReNet predicts a dense $128^3$ voxel grid aligned to the image.
This leads to scalability issues when increasing the scene size (cubic growth).
Further, CoReNet cannot reconstruct objects that are truncated in the input image and therefore lie outside the dense $128^3$ voxel grid.
Finally, it reconstructs objects in the camera coordinate system while our method predicts objects in the world-coordinate system.
Our approach has the advantage that objects can directly be rendered on a ground plane.
However, both methods perform the reconstruction only up to scale due to missing camera extrinsics at test time.
\fordir{1758}{''}
\fordir{1785}{''}
\fordir{1730}{''}
\fordir{1868}{''}
\fordir{1844}{''}
\fordir{1504}{''}
\fordir{1493}{''}
\fordir{1489}{''}
\fordir{1451}{''}
\fordir{1423}{''}
\fordir{1411}{''}
\fordir{1410}{''}
\section{Conclusion}
\vspace{-5px}
We have presented an end-to-end trainable model for realistic and joint 3D multi object reconstruction from a single input RGB image.
Specifically, we extend the CenterNet paradigm to coherently predict multiple 3D objects.
That is, objects are first detected as points, then reconstructed by jointly estimating 9-DoF object bounding boxes and corresponding 3D shape exemplars from a given shape database.
The presented model is agnostic to shape representations and flexible towards changing the representations in the shape database.
We further aim towards realistic and physically plausible reconstructed scenes.
To that end, the model encourages collision-free reconstructions and uses CAD models as shape representations to guarantee valid and realistic object shapes.
On a technical level, we propose a mechanism to perform shape selection using soft labels, and propose to combine SVD-based rotation and translation estimation as opposed to individual optimizations.
\section{Introduction}
Extracting 3D information from a single image has multiple applications in computer vision, robotics and specifically on mobile AR/VR devices.
Indeed, the field has gained great momentum in the computer vision community~\cite{Gkioxari2019MeshR,kuo2020mask2cad,Nie20CVPR,Popov20ECCV,Jiajun2016}.
3D information can come in many forms: 3D bounding boxes, point clouds, meshes, voxels or distance fields. The choice of the representation often depends on the task at hand.
In this paper, we aim to extract all the above information in an efficient and scalable way, all from just a single view and in a single pass.
Recent methods \cite{Gkioxari2019MeshR, kuo2020mask2cad} perform multi-object reconstruction by independently processing detections from state-of-the-art object detectors \cite{HeMaskRCNN, Kuo19ARXIV} or jointly predict multiple objects in a dense voxel grid \cite{Popov20ECCV}, which can be computationally expensive due to scalability issues.
Instead, inspired by CenterNet~\cite{Zhou19arxiv}, a framework for accurate and efficient 2D object detection, we propose to use a keypoint detector to localize objects as sparse center-points and directly predict 9-DoF bounding boxes and shapes jointly for all objects in the scene. The CenterNet architecture is modular and can easily be extended to solve varying tasks such as 2D detection, 3D detection, human body pose estimation and tracking \cite{yin2020center, zhou2020tracking}.
In this paper, we argue for a complete and coherent 3D reconstruction of multiple objects using CenterNet, where each pixel votes for a class label, a 3D bounding box, and a 3D shape exemplar to place objects into the world coordinate frame.
Another key aspect relates to the question of the best shape representation.
While numerous representations have been proposed, \eg Signed Distance Functions (SDF) \cite{Park2019CVPR}, meshes \cite{Gkioxari2019MeshR, groueix2018}, voxel grids \cite{Popov20ECCV}, point clouds \cite{Fan16, Jiang2018GALGA}, and even hybrid approaches \cite{Runz2020CVPR}, they all have task-dependent advantages and disadvantages.
In this work, we propose a representation-independent shape selection mechanism.
That is, shape exemplars are selected from a given shape database that can implement different (or multiple) representations.
The most convenient representation is chosen depending on the specific task, be it for defining objective functions or for visualization purposes (\reffig{teaser}).
Additionally, we take extra provisions for a realistic and physically plausible reconstruction.
In particular, objects should be properly placed in the world frame and should not intersect with each other. Inspired by recent methods on human body pose estimation in 3D scenes \cite{Hassan19ICCV, Jiang20CVPR, Zhang203DV}, we add a collision loss that supports plausible reconstructions such that reconstructed objects do not intersect.
To summarize, given an RGB image, our single-stage method performs lightweight reconstruction; it is real-time capable, fully differentiable, and end-to-end trainable.
In our experiments, we compare different 9-DoF bounding box formulations,
we evaluate our shape selection mechanism using soft labels and compare with the current state-of-the-art CoReNet~\cite{Popov20ECCV}.
\vspace{-12px}
\paragraph{Contributions.}
Our key contributions are:
\begin{itemize}
\vspace{-6px}
\item We propose a method for multi-object 3D reconstruction that extends the CenterNet~\cite{Zhou19arxiv} framework to perform fully holistic 3D scene reconstruction in a single-stage network and from a single RGB image.
\vspace{-6px}
\item We present a shape-selection mechanism to perform 3D object reconstruction,
where we reformulate the 1-of-K classification task using soft target labels based on geometric similarities between exemplar 3D shapes: this significantly improves over hard-labels as used in previous baselines~\cite{Tatarchenko19CVPR}.
\vspace{-6px}
\item We obtain physically plausible reconstructions by leveraging a collision loss that encourages non-intersecting reconstructions.
Further, the use of CAD models guarantees valid and realistic shapes.
\vspace{-6px}
\item Our approach is agnostic to different shape representations.
Since we formulate the shape reconstruction problem as selecting a shape exemplar
(\ie, index in a precomputed database of shapes),
we can choose from any representation given the estimated shape exemplar.
\end{itemize}
\section{Experiments}
\label{sec:experiments}
The quantitative evaluation consists of three parts, each addressing a core contribution of this paper:
we compare multiple 9-DoF bounding box estimation mechanisms and report improved scores over the one used in CenterNet~\cite{Zhou19arxiv};
we show that the collision loss reduces the number of collisions, which increases the realism and physical plausibility of the reconstructions;
and we show that our shape selection mechanism using soft-labels improves over hard-labels as used by Tatarchenko \etal~\cite{Tatarchenko19CVPR}.
Finally, we compare our method to the current state-of-the-art for multi-object reconstruction CoReNet~\cite{Popov20ECCV}.
Qualitative results on synthetic data are shown in \reffig{qualitative_results} and on real data in the appendix.
\vspace{-5px}
\paragraph{Datasets.}
We evaluate the reconstruction performance following \emph{CoReNet}~\cite{Popov20ECCV}, using two datasets \emph{ShapeNet-pairs} and \emph{ShapeNet-triplets}.
They contain $256$\,$\times$\,$256$\,px$^2$ photorealistic renderings of either pairs or triplets of ShapeNet~\cite{Chang2015ARXIV} models
placed on a ground plane with full global illumination using the PBRT~\cite{Matt16} renderer against an environment map background.
The scenes are rendered from a random camera viewpoint (yaw and pitch).
Objects are placed at random locations on the ground plane, with random scale and rotation, making sure they do not overlap in 3D.
This setup is especially well suited to evaluate the physical plausibility of our multi-object reconstruction approach.
To build the shape database $\mathcal{Z}$ described in \refsec{shape_reconstruction},
we use ShapeNet~\cite{Chang2015ARXIV}, as the correspondences between its CAD models and the objects rendered in the images are readily available in the CoReNet datasets. We set $k$\,=\,$50$; with the number of object types $C$\,=\,$6$ (ShapeNet-triplets) we obtain $K$\,=\,$300$ exemplar shapes in the shape database $\mathcal{Z}$.
\vspace{-5px}
\paragraph{How should the 3D bounding box be optimized?}
In \refsec{3d_bounding_box}, we presented different approaches to estimate the rotation and translation of 3D bounding boxes which we compare here.
Specifically, we compare the combined loss $\mathcal{L}$ from \refeq{pose} with the individual losses $\mathcal{L}_{\textbf{R}}$ and $\mathcal{L}_{\textbf{t}}$ defined using the Frobenius norm and the Huber loss, as described in \refsec{3d_bounding_box}. $\mathcal{L}_{\textbf{M}}$ is equal to $\mathcal{L}_{\textbf{R}}$ but does not perform the projection into $SO(3)$, such that the resulting matrix is not guaranteed to be a valid rotation matrix~\cite{Levinson20arxiv}.
Finally, we compare to the rotation parameterization from \cite{Zhou19arxiv}, \ie, the combination of a classification loss $\mathcal{L}_{\text{bin}\mathbf{R}}$ over quantized bins followed by a regression loss $\mathcal{L}_{\text{off}\textbf{R}}$ to continuous offsets.
We use the mean average precision (mAP) as 3D object detection metric \cite{Qi19ICCV} with 3D IoU threshold 0.25 and 0.5, as originally proposed in \cite{Song15CVPR}.
The results of this experiment are shown in \reftab{ablation}. In summary, the best option is to directly predict the rotation matrix $\mathbf{R}$ using SVD and optimize it together with the translation $\mathbf{t}$ using our $\mathcal{L}_{\mathbf{Rt}}$.
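The SVD-based projection onto $SO(3)$ referenced above can be sketched in a few lines. This is a minimal illustration of the special-orthogonal Procrustes step in the spirit of \cite{Levinson20arxiv}, not the paper's actual training code; the input matrix is an arbitrary, hypothetical network output.

```python
import numpy as np

def project_to_so3(m):
    """Project an arbitrary 3x3 matrix onto SO(3) via SVD.

    Replaces the singular values by (1, 1, det(U V^T)) so the result
    is the closest valid rotation matrix (orthonormal, det = +1).
    """
    u, _, vt = np.linalg.svd(m)
    d = np.sign(np.linalg.det(u @ vt))  # flip last axis to enforce det = +1
    return u @ np.diag([1.0, 1.0, d]) @ vt

# A noisy, non-orthogonal network output ...
m = np.eye(3) + 0.1 * np.random.default_rng(0).standard_normal((3, 3))
r = project_to_so3(m)
# ... becomes a valid rotation matrix:
assert np.allclose(r @ r.T, np.eye(3), atol=1e-8)
assert np.isclose(np.linalg.det(r), 1.0)
```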
\input{results/ablation}
\paragraph{How effective is the collision loss?}
An important aspect of multiple object reconstruction is physical plausibility, \ie, reconstructed objects should not intersect.
To evaluate the effectiveness of the collision loss, we measure collisions as the mean intersecting volume (mIV) between colliding objects and the overall number of collisions.
We report both scores in \reftab{ablation_collision_loss} on the validation split of ShapeNet-triplets.
The collision loss substantially decreases the intersecting volume and reduces the number of collisions by 60\%.
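A minimal sketch of how such collision statistics can be computed, assuming objects are voxelized into a shared world-frame grid; the function name and grid resolution are illustrative, not taken from the paper:

```python
import numpy as np

def collision_stats(voxel_grids):
    """Count pairwise collisions and their mean intersecting volume (mIV).

    voxel_grids: list of boolean occupancy grids of identical shape,
    one per reconstructed object, already placed in the world frame.
    Returns (number of colliding pairs, mean intersecting voxel count).
    """
    volumes, pairs = [], 0
    n = len(voxel_grids)
    for i in range(n):
        for j in range(i + 1, n):
            inter = np.logical_and(voxel_grids[i], voxel_grids[j]).sum()
            if inter > 0:
                pairs += 1
                volumes.append(inter)
    miv = float(np.mean(volumes)) if volumes else 0.0
    return pairs, miv

a = np.zeros((8, 8, 8), bool); a[0:4] = True
b = np.zeros((8, 8, 8), bool); b[3:6] = True   # overlaps a in one 8x8 slab
c = np.zeros((8, 8, 8), bool); c[7:8] = True   # disjoint from both
print(collision_stats([a, b, c]))  # -> (1, 64.0)
```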
\input{results/ablation_collision_loss}
\vspace{-5px}
\paragraph{How do soft- and hard-labels affect shape estimation?}
In \refsec{shape_reconstruction}, we present two approaches to select shape exemplars from the shape database $\mathcal{Z}$.
The first one optimizes $\mathcal{L}'_{\textbf{z}}$ (\refeq{shape_hard}) using hard-labels, \ie, one-hot encoding of target labels $z$ as presented in \cite{Tatarchenko19CVPR}.
The second approach $\mathcal{L}_{\textbf{z}}$ (\refeq{shape_soft}) relies on soft-labels that take into consideration the geometric similarity of object shapes,
therefore allowing the network to predict multiple plausible shapes instead of forcing a hard decision on one particular shape.
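A minimal sketch of such a soft-label objective, with soft targets built from pairwise geometric similarities (e.g. IoU between the ground-truth shape and every exemplar in $\mathcal{Z}$); the temperature parameter and function names are hypothetical, not the paper's exact formulation:

```python
import numpy as np

def soft_targets(similarity, temperature=0.1):
    """Turn geometric similarities (e.g. IoU of the ground-truth shape
    to every exemplar in the database Z) into a soft label distribution.
    `temperature` is a hypothetical smoothing knob, not from the paper."""
    s = np.asarray(similarity, dtype=float) / temperature
    e = np.exp(s - s.max())
    return e / e.sum()

def soft_label_loss(logits, targets):
    """Cross-entropy against soft targets (the role of L_z)."""
    m = logits.max()
    log_p = logits - (m + np.log(np.exp(logits - m).sum()))
    return float(-(targets * log_p).sum())

# Two near-identical exemplars share the target mass, while the
# dissimilar one gets almost none -- no hard 1-of-K decision is forced.
t = soft_targets([1.0, 0.95, 0.1])
assert abs(t.sum() - 1.0) < 1e-9 and t[0] > t[1] > t[2]
# Predictions aligned with the targets score a lower loss:
assert soft_label_loss(np.array([5.0, 4.5, -3.0]), t) \
     < soft_label_loss(np.array([-3.0, 4.5, 5.0]), t)
```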
Using the evaluation methodology from \cite{Popov20ECCV}, we measure the shape reconstruction as intersection-over-union (IoU) on a $128^3$ voxel grid (\reftab{soft_hard_labels}). We report both mean IoU over all classes and class-agnostic global IoU.
Our shape-selection mechanism using soft-labels significantly improves shape prediction by $+4.1$ mIoU over the hard-labels baseline \cite{Tatarchenko19CVPR}.
\input{results/soft_hard_labels}
\newpage
\paragraph{How does the presented method compare to CoReNet?}
First, we compare the reconstruction performance of our method with CoReNet~\cite{Popov20ECCV}.
Given an image, \cite{Popov20ECCV} predicts a dense $128^3$ voxel grid.
Each voxel is either empty or assigned to an object-class, trained with the focal loss ($m_8$)~\circlenum{1} or the IoU loss ($m_9$) \circlenum{2}, see \reftab{corenet_triplet}.
Our method reaches a higher relative 3D IoU (59.5 \vs 43.9) but does not quite match CoReNet's absolute 3D IoU (36.4 \vs 43.9).
The relative score takes the maximum possible score into account, \ie,
as our model is supervised with clustered shapes (from the shape database $\mathcal{Z}$) it can only be as good as this supervision.
The oracle \circlenum{5} indicates this best possible score for our model, using the ground truth 9-DoF bounding box and the ground truth shapes from $\mathcal{Z}$ used to supervise our model.
We also perform Procrustes alignment \circlenum{4} to the ground truth to abstract from 9-DoF estimation errors (48\% \vs 36\%).
Second, we analyze the generalization capabilities of both models under varying number of objects and class-type combinations (\reftab{corenet_num_objects}).
We train on ShapeNet-pairs and evaluate on ShapeNet-triplets, and vice-versa.
Our model generalizes well when trained on triplets and evaluated on pairs (36.41 \vs 36.21).
Both CoReNet and our model suffer performance drops when trained on pairs and evaluated on triplets, but ours loses less than CoReNet (-10\% \vs -22\%).
\input{results/corenet_num_objects}
\section{Related Work}
\vspace{-5px}
\paragraph{3D from a single image.}
Single image 3D reconstruction has seen tremendous progress over the last years, with various shape representations being examined. Works like \cite{choy20163d, Girdhar16b, henzler2019platonicgan, richter2018, marrnet, Jiajun2016} operate on voxel grids, a representation that fits very well with convolutional neural networks. Other methods output point clouds \cite{Fan16, Jiang2018GALGA}, taking advantage of their compactness.
Another line of work~\cite{DIBR19, groueix2018, kato2018neural, liu2019softras, wang2018pixel2mesh} outputs meshes, a powerful representation that provides neighborhood structure to the 3D shape.
Recently, implicit representations \cite{chen_cvpr19,Mescheder2019CVPR,Niemeyer2020CVPR,Park_2019_CVPR} have gained popularity for their ability to represent fine details at arbitrary resolutions. An alternative to the 3D shape regression is the work of \cite{Tatarchenko19CVPR} that poses the 3D reconstruction as a classification/retrieval problem.
However, all of these methods focus on the single-object case: the image contains a single object to be reconstructed, often on a white background. By having every pixel predict a 3D bounding box, a shape index similar to \cite{Tatarchenko19CVPR}, and the 9-DoF pose, we are able to handle an arbitrary number of objects in the scene in a single forward pass.
\vspace{-15px}
\paragraph{Multi-object 3D reconstruction.}
Recently, there has been progress also in the case of multi-object 3D reconstruction. Im2CAD \cite{izadinia2017im2cad} performs object detection and room layout estimation in an input image, and then retrieves 3D shapes from a database and aligns them to match the detections. However, it involves a secondary non-differentiable optimization step, that renders and matches the estimations with the input image.
3D-RCNN~\cite{3DRCNN_CVPR18} estimates the 3D shape of each object instance in an image through a render-and-compare learning approach, where the shape is represented as a linear basis from a dataset of 3D models. This shape representation, though, is accurate mostly for classes with low intra-class variability such as cars and humans. Given an image and a set of object proposals, Tulsiani \etal~\cite{factored3dTulsiani17} decompose the underlying 3D scene into a room layout and a set of voxel grids for every object, together with their rotation/translation/scaling parameters. In the work of~\cite{huang2018holistic}, the 3D scene is represented as a graph that is optimized so that the configuration of objects and room layout matches the semantic and geometric properties of the input image. Mesh R-CNN~\cite{Gkioxari2019MeshR} can be seen as an extension of Mask-RCNN~\cite{HeMaskRCNN} that estimates 3D meshes for every object instance in an image, but without reasoning about their scale/depth ambiguity. More recently, Total 3D Understanding~\cite{Nie20CVPR} proposes a system that estimates the room layout, 3D object bounding boxes and a mesh for every object as well. However, its 3D estimates depend on the initial 2D bounding boxes.
Our work is similar to the recent Mask2CAD~\cite{kuo2020mask2cad} which also predicts object centers and retrieves CAD models for reconstruction.
However, they estimate the 3D placement of the retrieved shapes individually for each object and do not model collision avoidance, while our approach performs a joint estimation of all objects and avoids intersections between reconstructed objects.
One common element of the aforementioned works is that they are based on
two-stage architectures where the objects are first detected and then a second stage individually estimates their shapes.
In contrast, we propose a single-stage method which scales well with the number of objects and does not involve further post-processing mechanisms.
\textbf{CoReNet}~\cite{Popov20ECCV} performs dense shape prediction in a fixed $128^3$ voxel grid, which does not scale with the size of the reconstructed world. In our case, we can allocate $128^3$ voxels to each individual object, thus the total voxel count scales with the number of objects in a scene. Moreover, CoReNet bakes all scene information into one model during training (number of objects, class combinations). Instead, our approach is more modular and can detect and reconstruct a variable number of objects, as well as new combinations of classes not seen during training.
Our approach predicts both a 9-DoF oriented bounding box and a shape.
Additionally, our shape prediction is independent of the actual shape representation: we can output signed distance functions, point clouds, occupancy grids and meshes, which naturally leads to realistic scene reconstructions,
while CoReNet reconstructions often show detached object parts or holes in the reconstructed shapes, especially in multi-object scenes.
\section{Introduction}
The question of how dynamics of a physical system depends
on the choice of its Hamiltonian is one of the most important
in theoretical and mathematical physics.
Its significance is only enhanced by the fact that
in modern theory dynamics is considered not only in physical
time, but in many other variables, including the coupling constants
of a theory and the shape of functional-integration domain
(the so-called renormalization-group dynamics \cite{RG}).
Normally dynamics
is described in terms of a phase portrait
or of eigenstates configuration for classical and
quantum systems respectively, and the question is how
these portraits and configurations change under
variation of the Hamiltonian.
It is well known that this change is not everywhere smooth:
at particular "critical" or "bifurcation" points
in the space of Hamiltonians the phase portraits get
reshuffled and change {\it qualitatively}, not only
{\it quantitatively}: this phenomenon is also known as
"phase transition".
Normally these bifurcations are described in terms of
the change of stability properties of various periodic
orbits (including fixed points, cycles and "strange
attractors").
More delicate information is provided by the study of
intersections of {\it unstable} orbits, but it is
a little more difficult to extract.
Dynamical systems are much better studied in the case of
discrete dynamics: this reveals many properties,
which get hidden in transition to continuous evolution.
In other words, this resolves ambiguities of continuous
dynamics: there are many different discrete dynamics
behind a single continuous one, and to reveal the properties
of the latter it is often needed to look at the whole
variety of the former.
In classical case discrete dynamics (with a time-independent
"Hamiltonian") is the theory of iterated maps:
\begin{eqnarray}
x \rightarrow f(x) \rightarrow f^{\circ 2}(x) = f\big(f(x)\big)
\rightarrow \ldots \rightarrow f^{\circ p}(x) =
f\Big(f^{\circ(p-1)}(x)\Big) \rightarrow \ldots
\end{eqnarray}
and is actually a branch of algebraic geometry \cite{DM}
(for generalization of \cite{DM} to discrete
dynamics of many variables see sections 7 and 8 of \cite{nolal}).
According to \cite{DM},
the structure of the phase portrait is controlled by the
{\it Julia set}: collection of all periodic orbits
of the map $f$ in the $x$ space, i.e. of all roots of
all functions
\begin{eqnarray}
F_p(x) = f^{\circ p}(x)-x
\label{Fpdef}
\end{eqnarray}
Therefore the {\it Universal Mandelbrot set} (UMS),
consisting of all points
$f(x)$ in the space of functions (Hamiltonians) where some
two periodic orbits coincide, can be alternatively
characterized as the {\it Universal Discriminantal variety}
formed by the roots of various resultants
$R_x\Big(F_{pm}(x)/F_p(x),F_p(x)\Big)$.
This is almost a tautological identification,
since by definition the resultant of two functions vanishes
whenever they have a common zero,
still it establishes relation between {\it a priori} different
sciences: the theory of phase transitions and
algebraic geometry.
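This resultant criterion can be verified directly in a computer-algebra system. A minimal sympy sketch for $f(x) = x^2 + c$: the genuine period-2 factor $F_2/F_1$ and the fixed-point polynomial $F_1$ acquire a common root exactly where their resultant $4c+3$ vanishes, i.e. at $c = -3/4$.

```python
import sympy as sp

x, c = sp.symbols('x c')
f = x**2 + c
F1 = f - x                          # F_1(x) = f(x) - x
F2 = f.subs(x, f) - x               # F_2(x) = f(f(x)) - x
G2, rem = sp.div(F2, F1, x)         # genuine period-2 factor F_2 / F_1
assert rem == 0 and sp.expand(G2) == x**2 + x + c + 1

R = sp.resultant(G2, F1, x)         # vanishes iff the two orbits cross
assert sp.expand(R) == 4*c + 3      # root: c = -3/4
```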
Usually considerations are restricted to
particular {\it sections} of the Universal
Mandelbrot set, by choosing specific one-dimensional families of
functions: see Figs.\ref{Mand2}-\ref{0Man4} for the three famous
examples, $f(x;c) = x^d + c$ with $d=2$, $d=3$ and $d=4$.
In Fig.\ref{symmebre} we show also the result of a deviation from
this simple form.
We keep the name "Mandelbrot set" for any of such
one-complex-dimensional sections of infinite-dimensional UMS,
while "Mandelbrot Set" (with two capital letters) refers to the
original example in Fig.\ref{Mand2}. Today all kinds of {\it
experimental data} about these sets can be obtained with the help
of available computer programs, like {\it Fractal Explorer}
\cite{FE}, which is used to make Figures \ref{Mand2}-\ref{0Man4}
in the present paper\footnote{
However, one should be careful in using this program for
non-canonical families, like in Fig.\ref{symmebre},
see \cite{DM} and s.\ref{interpol} below for explanations.}.
\bigskip
\Figeps{Mand2}
{500,328}
{\footnotesize {\bf A.} Mandelbrot set
${\cal M}_2$ for the family $f(x;c)=x^2+c$ \cite{Mand}. The
boundary of the black domain in the complex plane of $c$-variable
consists of all values of $c$ where {\it Julia set} is reshuffled:
as explained in \cite{DM} this happens when a stable orbit crosses
an unstable one. A given orbit ${\cal O}$ is stable within and
{\it elementary domain}, which is -- with incredibly good accuracy
-- either a cardioid $c-c_{{\cal O}} = r_{{\cal O}}e^{i\phi}\left(
1 - \frac{1}{2}e^{i\phi}\right)$ at the center (root) of a cluster
or a circle $c-c_{{\cal O}} = r_{{\cal O}}e^{i\phi}$ at the
non-root nodes of the tree. Explanation of this fact (that only
these two shapes occur and exactly in these roles: at roots and
higher nodes respectively) is the task of the present paper.
Projection to the line of real $c$ is also shown. \ \ \ \ {\bf B.}
Two pieces of the Mandelbrot set ${\cal M}_2$ under microscope:
exactly the same structures are seen as the central cluster in
{\bf A}, with the same central cardioids and attached circles.
There are infinitely many such structures of different sizes
$r_{{\cal O}}$ in ${\cal M}_2$, since $r_{{\cal O}}$ are very
small, they are not actually seen in {\bf A}, but can be easily
studied with the help of the {\it Fractal Explorer} \cite{FE}.
More numerical characteristics of the lowest elementary domains
are collected in a Table in s.\ref{accu}. \ \ \ \ {\bf C.} Domains
$(1)$ and $(2,1)$, obtained as numerical solutions of exact
equations (\ref{shapeeq}), see s.\ref{homsol}. They coincide with
domains, seen in {\bf A}. This is non-trivial, because pictures
{\bf A} and {\bf B} are obtained by an absolutely different procedure
(actually, the black region consists of points $c$ with bounded
sequences $f^{\circ n}(c)$), and there is no {\it a priori}
reason for any parts of them to satisfy any kind of algebraic
equations. Separation of Mandelbrot sets into domains, possessing
an algebraic description, in particular the relation between
pictures ${\bf A}$ and ${\bf C}$, is an important property of
iterated maps. \ \ \ \ {\bf D.} Tree structure of the central
cluster (only a few lowest branches are shown), each branching
occurs at the center of a new elementary domain, and the number at
the vertex is the order of the periodic orbit, which is stable
inside this domain. Thus elementary domains are naturally labeled
by sequences of divisors, leading to the root of the tree. All
other clusters are represented by exactly the same trees, only
numbers are multiplied by the order of the root orbit. Thus entire
${\cal M}_2$ has a natural {\it forest} structure. }
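The criterion used to draw the black region in these pictures (boundedness of the iterated sequence) is the standard escape-time test; a minimal sketch, where the iteration depth and the escape radius $|x| > 2$ are the conventional choices, not prescriptions of this paper:

```python
def escape_time(c, max_iter=200):
    """Iterate f(x) = x^2 + c from the critical point x = 0.
    Return the escape step, or None if the orbit stays bounded
    (|x| <= 2) for max_iter steps, i.e. c is taken to lie in M_2."""
    x = 0j
    for n in range(max_iter):
        x = x * x + c
        if abs(x) > 2:
            return n
    return None

assert escape_time(0 + 0j) is None        # center of the main cardioid
assert escape_time(-1 + 0j) is None       # center of the period-2 circle
assert escape_time(1 + 0j) is not None    # escapes: 0, 1, 2, 5, 26, ...
```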
\bigskip
\Figeps{0Man3}
{450,255}
{\footnotesize
Mandelbrot set ${\cal M}_3$ for the family $f(x;c)=x^3+c$.
Everything said about ${\cal M}_2$ is true in this case,
only the place of simple cardioids at roots of clusters
is taken by the two-cusp ones
$c-c_{{\cal O}} \approx r_{{\cal O}}e^{i\phi}\left(
1 - \frac{1}{3}e^{2i\phi}\right)$.
Descendant domains are nicely approximated by single-cusp
cardioids
$c-c_{{\cal O}} \approx r_{{\cal O}}e^{i\phi}\left(
1 - \frac{1}{2}e^{i\phi}\right)$, see eq.(\ref{cuca}).
No circles are present.}
\bigskip
\Fig{0Man4}
{450,224}
{\footnotesize
Mandelbrot set ${\cal M}_4$ for the family $f(x;c)=x^4+c$.
Everything said about ${\cal M}_2$ is true in this case,
only the place of simple cardioids at roots of clusters
is taken by the three-cusp ones
$c-c_{{\cal O}} \approx r_{{\cal O}}e^{i\phi}\left(
1 - \frac{1}{4}e^{3i\phi}\right)$.
Descendant domains have the shape of deformed 2-cusp
cardioid,
$c-c_{{\cal O}} = r_{{\cal O}}e^{i\phi}\left(
1 - ae^{i\phi} + \frac{b}{3}e^{2i\phi}\right)$
with $a = 2^{-4/3} \approx 0.40\ldots$ and
$b \approx 11/(9\cdot 2^{2/3}) = 0.77\ldots$,
see eq.(\ref{quca}).
No circles or single-cusp cardioids are present.
}
\bigskip
\Figeps{symmebre}
{450,385}
{\footnotesize
Mandelbrot set ${\cal M}_{3-1}$ for the family
$f(x;c)=ax^3+(1-a)x^2+c$
with two different values of additional
parameter $a=4/5$ {\bf (A)} and $a=2/3$ {\bf (B)}.
Everything said about ${\cal M}_2$ is true in this case,
only in addition to the simple single-cusp cardioids
$c-c_{{\cal O}} = r_{{\cal O}}e^{i\phi}\left(
1 - \frac{1}{2}e^{i\phi}\right)$
in the role of central (root) domains there are also
two-cusp ones,
$c-c_{{\cal O}} = r_{{\cal O}}e^{i\phi}\left(
1 - \frac{1}{3}e^{2i\phi}\right)$.
Moreover, for distinguished value $a=1$, see Fig.\ref{0Man3},
simple cardioids do not appear as central (root)
domains of individual clusters at all --
their place at roots is taken by
2-cusp curves. Instead for $a=1$ the single-cusp cardioids
fully replace circles in the role of descendant domains.
For smaller values of $a$ the simple cardioids start to emerge
as roots (and circles -- as descendants), but in remote clusters,
at large distances from the central domain.
\ \ \ The central cluster in these pictures looks a little
asymmetric: this is wrong, and is an artefact of the
erroneous algorithm, used to construct Mandelbrot sets
by {\it Fractal Explorer}, see introductory remarks to
s.\ref{interpol} below.
}
Mandelbrot sets are often considered as typical examples of
"fractal structures", serving mostly for admiration, philosophical
speculations and, perhaps, numerical exercises.
However, as explained in \cite{DM}, they can actually be
subjected to systematic scientific investigation, in the style
of {\it experimental mathematics}, with questions coming from
direct observations and numerical experiments, and rigorous answers
provided by knowledge of underlying algebro-geometric structures.
Our presentation below can be considered as an example
of this increasingly important approach to modern mathematical
physics problems.
As explained in \cite{DM} -- and clearly seen in
Figs.\ref{Mand2}-\ref{symmebre}, -- the Mandelbrot set consists
of infinitely many separated {\it clusters},
of which only the central one is well seen in the main picture,
while examples of smaller clusters
are shown in auxiliary pictures with enhanced resolution.
Though separated, clusters form a well organized
structure: they are connected by "trails", populated with
other clusters.
Further, each cluster has its own {\it tree} structure,
Fig.\ref{Mand2}.D, with two types of {\it elementary domains}:
one type at the root of the tree and another type at all higher
nodes (we call them {\it descendants}).
Fig.\ref{symmebre} demonstrates that a given Mandelbrot set
can contain different types of clusters, while
for special families\ $f(x;c) = x^d+c$, where maps possess
additional $Z_{d-1}$ symmetry $x \rightarrow e^{2\pi i n/(d-1)}x$,
all clusters are of the same type.
In Mandelbrot Set (i.e. for $d=2$) the root
{\it elementary domains} are nearly ideal cardioids,
Fig.\ref{cardi}.A,
\begin{eqnarray}
c-c_{{\cal O}} = r_{{\cal O}}e^{i\phi}\left(
1 - \frac{1}{2}\,e^{i\phi}\right)
\label{card2}
\end{eqnarray}
while descendants are nearly ideal circles
\begin{eqnarray}
c-c_{{\cal O}} = r_{{\cal O}}e^{i\phi}
\label{circdef}
\end{eqnarray}
For $d>2$ we have nearly ideal $(d-1)$-cusp cardioids,
Fig.\ref{cardi},
\begin{eqnarray}
c-c_{{\cal O}} = r_{{\cal O}}e^{i\phi}\left(
1 - \frac{1}{d}\,e^{i(d-1)\phi}\right)
\label{carddef}
\end{eqnarray}
at roots, while descendant domains are
{\it deformed} cardioids with $d-2$ cusps
and some non-vanishing coefficients $a_k$ in
\begin{eqnarray}
c-c_{{\cal O}} = r_{{\cal O}}e^{i\phi}\left(
1 + \sum_{k=1}^{d-3}a_ke^{ik\phi} -
\frac{1}{d-1}\,e^{i(d-2)\phi}\right)
\label{defcarddef}
\end{eqnarray}
Actual shapes of the domains slightly deviate from these
ideal cardioids and circles and depend on particular cluster
and node, but deviations are at the level of a few percents
at most.
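For the central domain the cardioid follows directly from fixed-point stability: on the boundary the multiplier $f'(x)$ at the fixed point lies on the unit circle. A quick numerical check of (\ref{card2}) for $f(x) = x^2 + c$ (with $c_{{\cal O}} = 0$, $r_{{\cal O}} = 1/2$):

```python
import cmath

# Boundary of the central domain of M_2: set the multiplier
# f'(x) = 2x equal to exp(i*phi) at the fixed point x = f(x).
# Then x = exp(i*phi)/2 and c = x - x**2, which is exactly the
# cardioid c = (1/2) e^{i phi} (1 - (1/2) e^{i phi}) of eq. (card2).
for k in range(12):
    phi = 2 * cmath.pi * k / 12
    x = cmath.exp(1j * phi) / 2
    c = x - x * x
    assert abs(x * x + c - x) < 1e-12     # x is a fixed point of x^2 + c
    assert abs(abs(2 * x) - 1) < 1e-12    # multiplier on the unit circle
    cardioid = 0.5 * cmath.exp(1j * phi) * (1 - 0.5 * cmath.exp(1j * phi))
    assert abs(c - cardioid) < 1e-12
```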
\bigskip
\Fig{cardi}
{400,130}
{\footnotesize Cardioid curves described by
the equations (\ref{card2}) and (\ref{carddef}). These pictures
reproduce the shapes of the central domains in Mandelbrot sets
${\cal M}_d$. \ \ {\bf A.} $d=2$. This is the central domain of
${\cal M}_2$ in Fig.\ref{Mand2}. \ \ {\bf B.} $d=3$. This is the
central domain of ${\cal M}_3$ in Fig.\ref{0Man3}. \ \ {\bf C.}
$d=4$. This is the central domain of ${\cal M}_4$ in
Fig.\ref{0Man4}. }
\bigskip
Each elementary domain is associated with some periodic orbit
${\cal O}$ of the map $f$,
which is stable exactly within the domain.
Thus the domain is labeled by the order $p$ of this orbit.
This fact allows one to give an explicit {\it analytical}
description of the domain's shape, see eq.(\ref{shapeeq}) below.
However, there are many different periodic orbits with the same
$p$ and thus many elementary domains with the same $p$.
They differ by the choice of the root $c_p$ of the equation
$f_p(c) \equiv f^{\circ p}(x=x_{cr};c) = x_{cr}$, which lies in the
"center" of the domain
($x_{cr}$ is the root of $f'(x)$, $f'(x_{cr},c)=0$, in most examples
below $x_{cr}=0$).
Some of these order-$p$ domains are at roots of some clusters,
some at higher-level nodes, but in that case the root
of the cluster should be associated with a divisor of $p$.
Actually the tree underlying the cluster is the {\it divisor tree},
and the entire forest structure
(i.e. collection of trees, associated with all clusters)
is that of {\it divisor forest} of natural numbers.
Accordingly, elementary domain is labeled by a sequence of
integers $\Big((p_r\cdot m_1\cdot \ldots\cdot m_k)_{\alpha_{r,k}},
\ \ldots,\ (p_r\cdot m_1)_{\alpha_{r,1}},\ (p_r)_{\alpha_r}\Big)$,
to be read from right to left.
Sequence of multiples
\ $[m_k,\ldots, m_1]\ $ characterizes domain's position
in the tree, $k$ is the distance from the root,
$p=(p_r\cdot m_1\cdot \ldots\cdot m_k)$
is the period of the corresponding orbit,
and $p_r$ is the order
of the orbit, associated with the root domain.
Since there can be many different root domains with the same $p_r$
in particular Mandelbrot set
and many different descendants with the same $p$ in a given tree,
there are additional labels $\alpha$, distinguishing between
different trees in the forest with the same order $p_r$ at roots
and between different branches with the same $p$ in each tree.
While divisor trees are the same for all particular Mandelbrot sets,
collections of ${\alpha}$'s are different: they are defined by
the way the given section crosses the $p$-variety in the
Universal Mandelbrot set, since it can be crossed many times,
there are many traces of the same variety in the given section
(in the role of either root or descendant domains)
-- and this is the origin of the {\it forest} and of the $\alpha$
parameters, which at the level of particular Mandelbrot set look
somewhat arbitrary.
At the level of UMS there is a single divisor tree and a section
intersects it many times and
cuts it into many similar trees: any cut-off branch looks like a
separate tree and gives rise to a separate cluster.
The memory of their common origin at UMS level is preserved
in the {\it trail} structure, connecting the clusters, but its
detailed description is not yet available.
Two adjacent elementary domains {\it touch} at exactly one point
(i.e. along a complex-codimension-one variety in UMS),
where their corresponding orbits cross and exchange stability.
This important statement, however, needs to be treated carefully:
as we shall see, in general (beyond the $x^d+c$ families)
the elementary domains can {\it overlap}: there can be several
stable orbits at the same value of $c$.
This means that arbitrarily chosen $c$ is not a good
coordinate on a Mandelbrot set, which is actually a fibration
over a region in the complex-$c$ plane rather than a region {\it per se}.
Fibration structure is inherited from {\it Julia sheaf} over the
Mandelbrot set \cite{DM}.
In this general situation the word {\it touch} is not fully
informative:
when different domains seem to overlap,
they rather lie in different
fibers over the same region on $c$-plane, and these fibers are
sewn together exactly at a single point, where the orbits cross.
What we can show in simple $2d$ pictures are {\it projections}
of the domains, these projections can overlap and {\it touch}
at the orbit-crossing point.
{\it Touch} means that the tangents to two domains are collinear,
in practice this can be either a smooth touching (typical for
crossing of orbits of different orders) or a cusp (when the orders
are the same).
Crossing of orbits is possible only when the order of the smaller
one (with domain lying closer to the root in the cluster) divides
the order of the bigger one. Analytically, crossing takes place
at the root of associated {\it resultant}.
If the two orders differ by a factor of $2$, this is the celebrated
period-doubling bifurcation \cite{pdb,LL} -- and the chain of exactly
such bifurcations occurs along the real line in Fig.\ref{Mand2},--
but in fact {\it doubling} is in no way distinguished: bifurcation
can multiply period by arbitrary integer $m$.
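The simplest such bifurcation of the Mandelbrot Set can be traced symbolically: the period-2 orbit of $x^2 + c$ satisfies $x^2 + x + c + 1 = 0$, so by Vieta its multiplier is $f'(x_1) f'(x_2) = 4 x_1 x_2 = 4(c+1)$, and the stability condition $|4(c+1)| < 1$ is exactly the circle of radius $1/4$ around $c = -1$, touching the main cardioid at $c = -3/4$. A sympy check:

```python
import sympy as sp

x, c = sp.symbols('x c')
f = x**2 + c
G2 = sp.expand(sp.div(f.subs(x, f) - x, f - x, x)[0])   # x^2 + x + c + 1
x1, x2 = sp.solve(G2, x)

# Multiplier of the period-2 orbit: product of f'(x) = 2x over the orbit.
mult = sp.simplify((2 * x1) * (2 * x2))
assert sp.expand(mult) == 4 * c + 4          # = 4(c + 1) by Vieta

# Stability boundary |4(c+1)| = 1: circle of radius 1/4 centered at -1;
# at its touching point c = -3/4 a fixed-point multiplier also equals -1.
xf = sp.solve(f - x, x)
assert any(sp.simplify((2 * r).subs(c, sp.Rational(-3, 4))) == -1 for r in xf)
```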
Crossing of {\it unstable} orbits is not seen at the level of
Mandelbrot sets shown in Figs.\ref{Mand2}-\ref{0Man4},--
to study these phenomena
(also essential for bifurcations of Julia sets) the full
(or {\it Grand}) Mandelbrot set should be considered.
Actually, behind UMS there are even more fundamental entities:
the Universal Julia Sheaf (UJS), consisting of all periodic
orbits of all orders "hanged" over the UMS, and
the Grand UJS, including also all {\it pre-orbits} of periodic orbits.
UMS is a projection of UJS, obtained by neglect of the phase-space
dimension $x$, where the orbits live, and, as any projection, it
can and does suffer from overlap ambiguities, namely, when
two different stable orbits coexist at the same point of UMS.
\bigskip
We refer to \cite{DM} for further details and explanations.
The task of this paper is to provide close-to-Earth
illustrations of somewhat abstract formulations from the
previous paragraphs and to fill some of the gaps
left in \cite{DM}, which concern three closely related
subjects:
(i) the shape of elementary domains;
(ii) Feigenbaum indices,
defining the ratio of sizes of adjacent elementary domains
(immediate descendant as compared to its parent)
from the ratio of the corresponding orders;
(iii) reshuffling of Mandelbrot set and its elementary sets
under the change of selected family of functions, i.e.
new properties of
$2_C$-dimensional sections of
Universal Mandelbrot set as compared to the
$1_C$-dimensional sections.
In fact these subjects capture the main aspects of the general
theory and at the same time they can be considered by simple
methods of theoretical physics with minimal involvement of
abstract algebro-geometric constructions.
Even resultant and discriminant analysis, which was the
main machinery in \cite{DM}, will be at periphery of our
simplified presentation in this paper.
As a key puzzle and a starting point for all considerations
we choose the question, posed in the title of this paper:
why exactly the cardioids (\ref{card2})
and circles (\ref{circdef}) and exactly in the right places
in the {\it divisor forest} emerge as the shapes of elementary
domains of the Mandelbrot Set in Fig.\ref{Mand2}, and
how this picture is continuously deformed
into Figs.\ref{0Man3} and \ref{0Man4}.
As already shown in \cite{DM}, cardioids (\ref{carddef})
{\it exactly} describe the {\it central} domains $(1)$
for the families $x^d+c$, while in description of all other
domains they straightforwardly appear in the
"small-size approximation" (SSA) to exact shape-defining
eqs.(\ref{shapeeq}).
In what follows we
-- explain, why for the\ $x^d+c\ $ families the cardioids
(\ref{carddef}) do not exhaust all possible shapes:
deformed cardioids (\ref{defcarddef}) with one less cusp
are also allowed;
-- explain, why (\ref{carddef}) appear exactly at {\it roots}
of clusters, while all descendants have one cusp less:
instead of this lacking cusp a descendant domain has a
merging-point to the parent domain;
-- provide a detailed description of interpolation between
Mandelbrot sets and Julia sheaves
for the families $x^2+c$ and $x^3+c$
(actually only the orbits of two lowest orders $p=1$ and $p=2$
will be analyzed, but this is already enough to reveal many
interesting details of the process);
-- demonstrate inaccuracy of {\it Fractal Explorer} \cite{FE}
(and thus the underlying text-book interpretations of
Mandelbrot sets) in application to UMS studies and appeal
for the writing of corrected and fully adequate computer programs
on the base of improved knowledge provided by \cite{DM};
-- demonstrate high accuracy of the small-size approximation (SSA)
in evaluation of various characteristics of the Mandelbrot Set
by comparing its predictions for the (complex-valued) sizes
$r_{{\cal O}}$ and Feigenbaum indices with exact answers
(when they are already available) and
experimental data provided by the {\it Fractal Explorer} \cite{FE}.
Concerning the last item -- the SSA -- it deserves mentioning
that no {\it theoretical} explanation for its spectacular accuracy
is known:
particular corrections are not small, but the various
corrections always combine into a small quantity whenever
characteristics of the Mandelbrot Set are evaluated.
At the same time, as explained in \cite{DM},
SSA fails completely in the description of Julia sets; still, it
describes the Mandelbrot sets for the families
$x^d+c$ with $d>2$ reasonably well (though some qualitative
properties are spoiled, e.g. cusps are somewhat smoothed), but
works much worse for interpolations between different $d$.
In any case, today SSA remains the only available tool
for the theoretical investigation of high branches of the divisor
tree, in particular for the approximate evaluation
(rather than {\it measurement})
of various Feigenbaum indices --
and for this purpose it works spectacularly
well, even for $d > 2$ and even for interpolations.
Still, a rigorous algebro-geometric theory of Feigenbaum indices
remains to be found.
\bigskip
We begin in s.\ref{qual} with a qualitative description of
elementary domains, supported by the limited number of
{\it exactly solvable} examples in s.\ref{exact}: these include
some non-trivial cases and, as usual, provide the solid ground
for future approximate considerations.
Section \ref{interpol} is devoted to interpolation $\{ax^3+bx^2+c\}$
between the two best-known Mandelbrot sets: $\{x^2+c\}$ and
$\{x^3+c\}$.
Other examples of $3_R$-dimensional sections of UMS
(actually of its central domain) are also given in this section.
Then in s.\ref{ssasec} we introduce the small-size approximation
and present some calculations for the Mandelbrot Set.
Their results are compared with experimental data in s.\ref{accu}.
After some further SSA calculations in s.\ref{shapes2},
we discuss Feigenbaum indices in s.\ref{Feig}.
Section \ref{dfam} is devoted to SSA consideration of some other
Mandelbrot sets.
Brief conclusions are collected in s.\ref{conc}.
\section{The shape of elementary domains. Qualitative description
\label{qual}}
\setcounter{equation}{0}
\subsection{Defining equations}
According to eqs.(10) and (38) of \cite{DM},
the boundary of an elementary domain satisfies
a pair of equations:
\begin{eqnarray}
f \in {\cal M} \ \Leftrightarrow \
\left\{\begin{array}{c}
F_p^\prime(x) + 1 = e^{i\theta} \\
F_p(x) = 0
\end{array} \right.
\label{shapeeq}
\end{eqnarray}
Here $F_p(x) = f^{\circ p}(x) - x$,
the prime denotes the $x$-derivative,
and $\theta$ is a {\it real-valued} angle parameter used to
coordinatize the boundary of the domain (it can actually
vary between $0$ and a multiple of $2\pi$, see below).
After $x$ is excluded from the pair of equations (\ref{shapeeq}),
we obtain a real-codimension-one hypersurface
in the space of functions $f$, i.e. a collection of curves
$c(\theta)$ in the $1_C$-dimensional Mandelbrot set.
Particular curves -- branches of $c(\theta)$ -- are the boundaries
of particular elementary domains of order $p$,
root and descendant.
\subsection{Cusps}
Even if the function $c(\theta)$ is smooth, the corresponding
curve in the complex-$c$ plane can be singular.
The generic singularity is a self-intersection, which takes
place when
$c(\theta_1) = c(\theta_2)$ for $\theta_1\neq\theta_2$.
Of interest for us are {\it cusps}:
degenerate self-intersections, appearing in the limit
$\theta_2 \rightarrow \theta_1$, i.e. when
$\frac{dc}{d\theta}(\theta_0) = 0$ at some $\theta_0$.
In the vicinity of such a point
$\sigma(\vartheta) = c(\theta) - c(\theta_0) =
a\vartheta^2 + b \vartheta^3 +
\ldots$, where $\vartheta = \theta - \theta_0$.
This means that
\begin{eqnarray}
{\rm Re}\left(\frac{\sigma}{a}\right) =
\vartheta^2 + \ldots,\ \ \ \ {\rm while} \ \ \ \
{\rm Im}\left(\frac{\sigma}{a}\right) =
{\rm Im}\left(\frac{b}{a}\right)\vartheta^3 +
\ldots
\end{eqnarray}
i.e.
\begin{eqnarray}
{\rm Im}\left(\frac{\sigma}{a}\right)\sim
{\rm Im}\left(\frac{b}{a}\right)
\left\{{\rm Re}\left(\frac{\sigma}{a}\right)\right\}^{3/2}
\end{eqnarray}
Thus we see that a cusp emerges at points where $dc/d\theta = 0$,
and its orientation in the complex-$c$ plane is defined by the
phase of the complex-valued parameter $a$.
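This $3/2$ scaling near a cusp is easy to confirm numerically; the
following Python fragment (an illustration, not part of our MAPLE
toolkit; the values of $a$ and $b$ are arbitrary choices, only
${\rm Im}(b/a)\neq 0$ matters) samples the local model
$\sigma(\vartheta)=a\vartheta^2+b\vartheta^3$ on both sides of the cusp:

```python
# Local model of a curve near a cusp at theta_0 = 0:
#   sigma(t) = a t^2 + b t^3  with complex a, b
# (illustrative values below; only Im(b/a) != 0 matters).
a, b = 1 + 1j, 2 - 0.5j

def sigma(t):
    return a * t**2 + b * t**3

# On both sides of the cusp, Im(sigma/a) ~ Im(b/a) * Re(sigma/a)^{3/2}.
ratios = []
for t in (1e-3, -1e-3):
    s = sigma(t) / a
    ratios.append(s.imag / (b / a).imag / abs(s.real) ** 1.5)
# ratios -> approximately [+1, -1] as t -> 0
```

The two signs reflect the two branches of the $3/2$-power law meeting
at the cusp.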
If $\ {\rm Im}(b/a)=0$, i.e.
\begin{eqnarray}
{\rm Im}\left(\frac{d^3c/d\theta^3}{dc/d\theta}\right)=0
\label{cuspint}
\end{eqnarray}
along with $dc/d\theta = 0$, then the self-intersection point
collides with the cusp and disappears.
\subsection{Cardioids}
Cardioids are represented by polynomials of a unimodular variable;
they form the simplest natural class of curves with cusps.
For the {\bf quadratic cardioid},
\begin{eqnarray}
c = r\left(e^{i\phi} + ae^{2i\phi}\right) \ = \
\frac{r}{4a}\Big((1+2ae^{i\phi})^2 - 1\Big),
\end{eqnarray}
the derivative vanishes, $dc/d\phi = 0$, when $2ae^{i\phi} = -1$.
This never happens if $|a| \neq \frac{1}{2}$.
Thus a cusp (and exactly one) occurs only when $|a|=\frac{1}{2}$;
the curve is everywhere smooth for $|a|<\frac{1}{2}$ and
possesses one self-intersection for $|a|>\frac{1}{2}$.
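This criterion can be checked in a few lines of Python (an illustrative
numerical sketch; the grid size is an arbitrary choice): the minimum of
$|dc/d\phi| \sim |1+2ae^{i\phi}|$ over $\phi$ vanishes only at $|a|=\frac12$.

```python
import numpy as np

phi = np.linspace(0, 2 * np.pi, 100001)

def min_speed(a):
    # |dc/dphi| = r |1 + 2 a e^{i phi}| for c = r (e^{i phi} + a e^{2 i phi})
    return np.min(np.abs(1 + 2 * a * np.exp(1j * phi)))

m_smooth = min_speed(0.4)  # |a| < 1/2: dc/dphi never vanishes (min = 0.2)
m_cusp   = min_speed(0.5)  # |a| = 1/2: dc/dphi vanishes at phi = pi
m_selfx  = min_speed(0.7)  # |a| > 1/2: again no cusp (min = 0.4)
```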
\bigskip
For the {\bf cubic cardioid},
\begin{eqnarray}
c = r\left(e^{i\phi} + ae^{2i\phi} + be^{3i\phi}\right),
\label{3card}
\end{eqnarray}
the derivative $dc/d\phi$ vanishes when
\begin{eqnarray}
1+2ae^{i\phi} + 3be^{2i\phi} = 0,
\label{3carder}
\end{eqnarray}
i.e. when $e^{-i\phi} = -a \pm \sqrt{a^2-3b}\ $
has unit modulus.
If $a$ and $b$ are real, then a cusp can occur when either
$1+2a+3b=0$ (then there is one cusp at $\phi = 0$, unless
$a=0$ and $b=-\frac{1}{3}$, in which case another cusp appears at
$\phi = \pi$, see Fig.\ref{cardicusp})
or $-1<a<1,\ b=+1/3$ (then two cusps arise at $\phi=\pm\phi_0
\neq 0,\pi$).
In general, (\ref{3carder}) defines
a hypersurface of real codimension one in the space
of complex parameters $a$ and $b$ (parameterized by $\phi$),
where the cubic cardioids (\ref{3card}) have cusps (one or two).
The transition point (\ref{cuspint}) between the phases with
and without self-intersection is defined by the system of two
equations,
$$
\left\{\begin{array}{c} {dc}/{d\phi} \sim 1+2a+3b = 0 \\
{d^3c}/{d\phi^3} \sim 1+8a+27b = 0
\end{array}\right.
$$
i.e. $a=-4/5$ and $b=1/5$, see Fig.\ref{cardicusp}.
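The small linear system behind this answer can be verified exactly,
e.g. with Python's exact rational arithmetic (a sketch; any computer
algebra system would do equally well):

```python
from fractions import Fraction

# Transition-point conditions for the cubic cardioid at phi = 0:
#   dc/dphi     ~ 1 + 2a + 3b  = 0
#   d^3c/dphi^3 ~ 1 + 8a + 27b = 0
# Subtracting: 6a + 24b = 0, i.e. a = -4b; then 1 + 2(-4b) + 3b = 1 - 5b = 0.
b = Fraction(1, 5)
a = -4 * b
residuals = (1 + 2 * a + 3 * b, 1 + 8 * a + 27 * b)  # both must vanish
```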
The MAPLE program for cardioid studies, which was used to generate
Figs.\ref{cardi} and \ref{cardicusp}, can be found in the Appendix to
this paper, see s.\ref{MAPcard}.
\bigskip
\Fig{cardicusp}{450,422}
{\footnotesize
Cardioids with cusps and self-intersections.
The cusp-possessing subset in the family (\ref{3card})
with $1+2a+3b=0$ is shown.\ \
{\bf A.} $a=\frac{1}{5}$, $b=-\frac{7}{15}$.
Both cusp and self-intersection are present.\ \
{\bf B.} $a=-\frac{4}{5}$, $b=\frac{1}{5}$. This is the point where
the self-intersection point hits the cusp and disappears.\ \
{\bf C.} $a=-\frac{1}{8}$, $b=-\frac{1}{4}$.
A single cusp is present at $\phi = 0$.
\ \
{\bf D.} $a=0$, $b=-\frac{1}{3}$. The second cusp appears at
$\phi = \pi$.
}
\subsection{Cusps in the boundaries of elementary domains
\label{cueldo}}
The second component of eq.(\ref{shapeeq}) implies that
\begin{eqnarray}
\dot F_p\frac{dc}{d\theta} = -F'_p\frac{dx}{d\theta}
\end{eqnarray}
(the dot and prime denote $c$- and $x$-derivatives respectively),
so that $\frac{dc}{d\theta} = 0$ when $F'_p=0$, provided
$\dot F_p\neq 0$ at the same point.
Together with the first equation of (\ref{shapeeq}) this means
that a cusp can occur only when $\theta = 0$ (modulo $2\pi$).
Thus the number of cusps depends essentially on the
range of variation of the $\theta$-variable.
If $\theta$ runs from $0$ to $2\pi (d-1)$, we can
expect up to $d-1$ cusps to occur.
\subsection{Why do descendant domains have one cusp less
than the root ones? \label{minuscusp}}
A descendant domain differs from a root one because it
always has one special point at the boundary where $F' =0$
and $\dot F =0$ simultaneously.
This means that there is no cusp at this point; if the total
number of zeroes of $F'$ at the boundary is $d-1$, but the
domain is a descendant,
then the total number of cusps will be $d-2$.
A characteristic feature of any descendant is the reducibility of
the corresponding function $F_{mp}$: it is divisible by the $F_p$
of the parent domain,
\begin{eqnarray}
F_{mp}(x) = \tilde G_{mp}(x)F_p(x)
\end{eqnarray}
(unlike the $G_{mp}$ of ref.\cite{DM}, such $\tilde G_{mp}$
can still be reducible, but this does not matter for our purposes
in this paper).
Then $F'_{mp} = \tilde G_{mp}' F_p + \tilde G_{mp} F_p'$
and $\dot F_{mp} = \dot{\tilde G}_{mp} F_p + \tilde G_{mp} \dot F_p$
vanish {\it simultaneously} whenever both
$F_p=0$ and $\tilde G_{mp}=0$, i.e. when $x$ belongs simultaneously
to orbits of orders $p$ and $mp$.
According to \cite{DM}, these last two equations possess
exactly one common zero at the boundary of a {\it descendant} domain:
it is exactly the merging point where the descendant domain is
attached to the parent one, and in the $c$-space it is a zero of the
resultant $R(\tilde G_{mp},F_p)$.
The discriminant $D(\tilde G_{mp})$ also vanishes when
$R(\tilde G_{mp},F_p) = 0$, because different points of the
order-$mp$ orbit (roots of $\tilde G_{mp}$)
must merge $m$-wise in order to merge with the points
of the order-$p$ orbit (roots of $F_p$).
Of more interest are the {\it other} zeroes of $D(\tilde G_{mp})$,
representing crossings of different orbits of order $mp$.
\section{Exact results about elementary domains \label{exact}}
\setcounter{equation}{0}
This section is devoted to exactly solvable examples.
Here exact solvability does not necessarily mean explicit analytical
solutions -- though these will also be considered.
Whenever a problem can be effectively studied by user-friendly
computer tools like MAPLE or Mathematica, we consider it
equally well (and maybe even better) solvable than if explicit
formulas were derived.
We shall see that sometimes the best way to analyze such an explicit
formula is to generate its plot with the help of the same MAPLE.
One should keep in mind, however, that the number of problems
solvable in this way is also very limited: in most cases even
clearly formulated algebraic problems can be handled only by
specially designed programs, which usually {\it could} be but
{\it never were} written. This makes such problems {\it potentially}
solvable (like many other hot problems in theoretical physics),
but they are clearly different from {\it practically} solvable ones.
We also distinguish these {\it solvable} problems from those
which are effectively solved, but only {\it approximately}:
under certain additional assumptions, or when improving the accuracy
is increasingly difficult
(as happens, for example, in perturbation theory).
We turn to approximate methods in ss.\ref{ssasec}-\ref{dfam}.
Before that we describe what is known today at the {\it exact}
level.
Our primary goal is to understand the domain of variation
of the $\theta$-variable --
because we already know from s.\ref{cueldo} that it is its
size that determines the number of cusps, both for root and
descendant domains.
Moreover, we want to see how this variation domain changes in the
transition from one Mandelbrot set to another, i.e. to study the
bifurcations of Mandelbrot sets themselves, which happen in
complex codimension two in the Universal Mandelbrot space.
These examples will also be used in other sections, where we
derive (approximately) the {\it analytical} shapes of the domains.
\subsection{Elementary domains of order $p=1$
for special Mandelbrot sets}
Let us consider a Mandelbrot set for a one-parametric family
\begin{eqnarray}
f(x,c) = P_d(x) + c
\label{addfam}
\end{eqnarray}
with a polynomial $P_d(x)$ of degree $d$ (we do not require it to
be homogeneous, $x^d$, at this moment).
The additive dependence on the $c$-parameter considerably
simplifies the analysis of such families.
For $p=1$ equations (\ref{shapeeq}) state simply that
\begin{eqnarray}
\left\{ \begin{array}{c} P'_d(x) = e^{i\theta} \\
c = x - P_d(x)
\end{array}\right.
\label{shapeeqp1}
\end{eqnarray}
and for every particular choice of $P_d(x)$ the function
$c(\theta)$ can be easily plotted with the help of MAPLE
or Mathematica.
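As an illustration of this recipe in Python rather than MAPLE (a
minimal sketch, shown for the simplest family $x^2+c$; the grid size
is an arbitrary choice), eqs.(\ref{shapeeqp1}) reproduce the familiar
main cardioid of Fig.\ref{Mand2}, with the cusp at $c=1/4$ and the
leftmost boundary point at $c=-3/4$:

```python
import numpy as np

# p = 1 shape equations for f(x) = x^2 + c:
#   P'(x) = 2x = e^{i theta}   =>  x = e^{i theta} / 2
#   c = x - P(x) = x - x^2
theta = np.linspace(0, 2 * np.pi, 1001)
x = np.exp(1j * theta) / 2
c = x - x**2  # boundary of the central domain (1): the main cardioid

cusp = c[0]                 # theta = 0: the cusp at c = 1/4
leftmost = np.min(c.real)   # leftmost boundary point, c = -3/4
```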
Moreover, there are two important examples where even an
{\it analytical} solution is immediately available.
The first case is homogeneous $P_d(x) = x^d$, associated with
the standard $Z_{d-1}$-symmetric Mandelbrot sets ${\cal M}_d$.
The second case is the generic {\it cubic} polynomial
$P_3(x) = x^3+ax^2+bx$: the associated family of Mandelbrot sets
${\cal M}_3(a,b)$ interpolates between
${\cal M}_2={\cal M}_3(\infty,0)$ and ${\cal M}_3={\cal M}_3(0,0)$.
For such an interpolation one can also use the one-dimensional
and "better" parameterized family ${\cal M}_{2,3}(a)$ with
$P_3(x) = a x^3+(1-a)x^2$
(then ${\cal M}_2 = {\cal M}_{2,3}(0)$
and ${\cal M}_3 = {\cal M}_{2,3}(1)$).\footnote{
It deserves mentioning that these "families" of Mandelbrot sets
are somewhat artificial entities.
${\cal M}\{a,b,c\}$ is actually
a $3_C$-dimensional section of the Universal Mandelbrot set,
and $1_C$-dimensional Mandelbrot sets with coordinate $c$ are
obtained if $a$ and $b$ are artificially considered as
"external" parameters.
Of course, one can instead take $a$ as the coordinate and $b,c$
as parameters: no distinguished choice exists and
all such sets should be studied on an equal footing.
It is nothing but a historical accident that the particular sets
${\cal M}_d$ are more popularized than the others (and even
for these particular sets period-{\it doubling} is
better known than tripling etc. -- despite it being in no way
distinguished).
Worse than that: the {\it standard} presentations like
\cite{Mand} and even software like our favorite
{\it Fractal Explorer} \cite{FE}
implicitly exploit specific properties of these maps and
produce errors in application to generic families, say,
when the $c$-dependence is not additive as in (\ref{addfam}),
and even when $P_d$ in (\ref{addfam}) is non-homogeneous;
see also the introductory remarks to s.\ref{interpol} below.
}
\subsection{Analytically solvable examples for $p=1$}
\subsubsection{Homogeneous $P_d(x)$}
For homogeneous $P_d(x)=x^d$ eq.(\ref{shapeeqp1}) converts directly
into (\ref{carddef}):
\begin{eqnarray}
\left\{\begin{array}{c}
dx^{d-1} = e^{i(d-1)\phi}, \ \ \ \ \ \ {\rm i.e.}
\ \ \ \ \ \ x = r(d)e^{i\phi} \\ \\
c = x\Big(1-x^{d-1}\Big) = r(d)e^{i\phi}
\Big(1 - \frac{1}{d}\,e^{i(d-1)\phi}\Big)
\end{array}\right.
\label{c1phi}
\end{eqnarray}
where $r(d) = d^{-\frac{1}{d-1}}$ and $\phi = \frac{\theta}{d-1}$.
It is obvious that in this case $\theta$ changes from $0$ to
$2\pi(d-1)$ and $\phi$ is the right angle-parameter.
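A quick numerical sanity check of (\ref{c1phi}) in Python (shown for
$d=3$; the choice of $d$ and of the grid is arbitrary) confirms both
the first equation and the position of the cusp at $\phi=0$:

```python
import numpy as np

d = 3                               # any d >= 2 would do
r = d ** (-1 / (d - 1))             # r(d) = d^{-1/(d-1)}
phi = np.linspace(0, 2 * np.pi, 721)
x = r * np.exp(1j * phi)

# first shape equation: d x^{d-1} = e^{i(d-1)phi}
err = np.max(np.abs(d * x ** (d - 1) - np.exp(1j * (d - 1) * phi)))

# boundary of the central domain (1)
c = x * (1 - x ** (d - 1))
cusp = c[0]                         # phi = 0: cusp at c = r(d)(1 - 1/d)
```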
Now we can use another solvable example, with $P_3(x)$, in order
to deform these ideal cardioids and see how their order (number of
cusps) can actually change in the interpolation between $x^2$ and
$x^3$, see s.\ref{interpol}.
\subsubsection{Cubic polynomial}
For $P_3(x) = ax^3+b x^2$ the first equation in
(\ref{shapeeqp1}),
$\ 3a x^2+2b x = e^{i\theta}$,
is quadratic in $x$ and has explicit analytic solution:
\begin{eqnarray}
x = \frac{-b \pm \sqrt{b^2 + 3a e^{i\theta}}}{3a}
\end{eqnarray}
(only the "+" branch has a finite limit as $a \rightarrow 0$).
Substituting this into the second equation (\ref{shapeeqp1}),
$\ c_1 = x-P_3(x) = -a x^3 - b x^2 + x$,
we obtain the analytical expression for
the boundary of the root domain of the central cluster
for arbitrary values of the complex parameters $a$ and $b$:
\begin{eqnarray}
c_1 = \frac{\left(b \mp \sqrt{b^2 + 3a e^{i\theta}}\right)
\left(5b^3-9a + 3ae^{i\theta} \mp 5b
\sqrt{b^2 + 3ae^{i\theta}}\right)}
{27a^2}
\end{eqnarray}
Clearly, the phase transition line, separating the two regimes --
$\theta = 2\phi$ (near $b = 0$) and
$\theta = \phi$ (near $a = 0$) --
is $|b|^2 = 3|a|$.
If $b = 1-a$, see s.\ref{interpol},
it crosses the real-$a$ line at
$a_{cr}^\pm = \frac{5\pm\sqrt{21}}{2}$, i.e.
$a^-_{cr} = 0.208712\ldots$ and
$a^+_{cr} = 5-a^-_{cr} = 4.791288\ldots$
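The quadratic equation behind these numbers can be reproduced in a few
lines of Python (an illustrative check, not part of our MAPLE toolkit):

```python
import numpy as np

# The phase transition line |b|^2 = 3|a| with b = 1 - a crosses the
# real-a axis where (1 - a)^2 = 3a, i.e. a^2 - 5a + 1 = 0.
a_minus, a_plus = np.sort(np.roots([1, -5, 1]).real)
s = a_minus + a_plus  # sum of roots = 5 (Vieta)
```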
\subsection{Solvable examples for $p=2$}
\subsubsection{Equations in case of separated $c$-dependence}
For $p=2$ equations (\ref{shapeeq}) can be rewritten as follows:
\begin{eqnarray}
\left\{ \begin{array}{c}
f(x) = z; \\
f(z) = x; \\
f'(z)f'(x) = e^{i\theta}
\end{array}\right.
\end{eqnarray}
and when
\begin{eqnarray}
f(x,c) = P(x) + c
\label{addf}
\end{eqnarray}
with $c$-independent $P(x)$, as
\begin{eqnarray}
\left\{ \begin{array}{c}
P(z)+z = P(x)+x; \\
P'(z)P'(x) = e^{i\theta}
\end{array}\right.
\label{xzeqs}
\end{eqnarray}
Then $c(\theta)$ can be defined from
\begin{eqnarray}
c = z - P(x) = x- P(z).
\label{cvsx2}
\end{eqnarray}
Since we did not factor out $F_1$ from $F_2 = G_2F_1$, these
equations describe not only the $(2)$ and $(2,1)$ domains, but
also the $(1)$ ones.
The $(1)$ domains satisfy the system (\ref{xzeqs}) with the first
equation replaced by $x=z$, while for the $(2)$ and $(2,1)$
domains it should be replaced by $\frac{P(z)-P(x)}{z-x} = -1$.
\subsubsection{MAPLE-generated solution for homogeneous $P_d(x)$
\label{homsol}}
For $P_d(x) = x^d$ the second equation in (\ref{xzeqs})
can be solved explicitly:
\begin{eqnarray}
xz = d^{-\frac{2}{d-1}} e^{i\varphi} \equiv \xi
\end{eqnarray}
where we substituted $\theta = (d-1)\varphi$.
Then the first equation (\ref{xzeqs}) turns into
\begin{eqnarray}
x^d - \frac{\xi^d}{x^d} = -x + \frac{\xi}{x}
\label{hompolseqs}
\end{eqnarray}
One solution, $x=\xi/x$, i.e. simply
$x=z=d^{-\frac{1}{d-1}} e^{i\phi}$ with $\phi = \frac{\varphi}{2}$
changing from $0$ to $2\pi$, provides
\begin{eqnarray}
c_1(\phi) = z-x^d = x-z^d = d^{-\frac{1}{d-1}}
e^{i\phi}\left(1-\frac{1}{d}\,e^{i\phi}\right)
\end{eqnarray}
which is our familiar eq.(\ref{c1phi}) for the central root
domain $(1)$, with examples shown in Fig.\ref{cardi}.
The remaining solutions, describing the root $(2)$ and
descendant $(2,1)$ domains,
can be found with the help of MAPLE or Mathematica, see
Fig.\ref{hompols}.
In these solutions $\varphi = \phi$.
No root $(2)$ domains occur for homogeneous $P_d(x)=x^d$,
but this is a peculiarity of {\it both} homogeneity and $p=2$:
root domains $(p)$ exist for all $p\neq 2$ even if $P_d(x)=x^d$,
and $(2)$ domains are normally present for generic
non-homogeneous $P_d(x)$, see s.\ref{interpol} for examples.
\bigskip
\Fig{hompols}
{450,203}
{\footnotesize
The first two domains $(1)$ and $(2,1)$ of the central cluster,
obtained from solving eq.(\ref{hompolseqs}) with the help of MAPLE.
In the case of $d=2$ and $3$ analytical solutions are also available:
see eqs.(\ref{d2sols}) and (\ref{d3sols}) respectively (and they
are explicitly used by MAPLE).
{\bf A:} $d=2$, $f(x,c) = x^2+c$, {\bf B:} $d=3$, $f(x,c) = x^3+c$,
{\bf C:} $d=4$, $f(x,c) = x^4+c$, {\bf D:}
$d=5$, $f(x,c) = x^{5}+c$.
The Mandelbrot set for $f(x,c) = x^d + c$ has $Z_{d-1}$ symmetry
under rotations around the point $c=0$.
}
\bigskip
\subsubsection{Analytical solutions for homogeneous $P_2(x)$ and
$P_3(x)$}
For $d=2$ and $d=3$, {\it analytical} solutions are also available.
Indeed, eqs.(\ref{hompolseqs}),
after exclusion of the solutions $x=z$, then turn into
\begin{eqnarray}
{\bf d=2:}\ \ \ \ \ x+z+1=x + \frac{\xi}{x} + 1 = 0,\ \ \ \
\xi = \frac{1}{4}e^{i\phi}
\end{eqnarray}
and
\begin{eqnarray}
{\bf d=3:} \ \ \ \ \ x^2+xz+z^2 = x^2+\xi + \frac{\xi^2}{x^2}= -1,
\ \ \ \
\xi = \frac{1}{3}e^{i\phi}
\end{eqnarray}
respectively, which are explicitly solvable quadratic
and biquadratic equations.
Then it follows that for $d=2$
\begin{eqnarray}
x= -\frac{1}{2} \pm \frac{1}{2} \sqrt{1-e^{i\phi}}, \nonumber \\
z= -\frac{1}{2} \mp \frac{1}{2} \sqrt{1-e^{i\phi}}
\end{eqnarray}
and
\begin{eqnarray}
c_{2,1}(\phi) = z - x^2 = x-z^2 = - 1 + \frac{1}{4}e^{i\phi}
\label{d2sols}
\end{eqnarray}
is an ideal circle of radius $r_{2,1} = \frac{1}{4}$
centered at $c_{2,1}=-1$, see Fig.\ref{hompols}.A.
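Eq.(\ref{d2sols}) is easy to verify numerically: for every $\phi$ the
pair $x,z$ above is a genuine period-$2$ orbit of $x^2+c$ with
multiplier $e^{i\phi}$. A Python sketch of this check (the sampling of
$\phi$ is an arbitrary choice):

```python
import numpy as np

errs = []
for phi in np.linspace(0, 2 * np.pi, 37):
    w = np.sqrt(1 - np.exp(1j * phi) + 0j)      # branch choice is irrelevant
    x = -0.5 + 0.5 * w
    z = -0.5 - 0.5 * w
    c = -1 + np.exp(1j * phi) / 4               # eq.(d2sols)
    errs.append(max(abs(x**2 + c - z),          # f(x) = z
                    abs(z**2 + c - x),          # f(z) = x
                    abs(4 * x * z - np.exp(1j * phi))))  # f'(x) f'(z) = e^{i phi}
err = max(errs)
```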
Similarly, for $d=3$
\begin{eqnarray}
x = \pm \frac{1}{\sqrt{2}}\sqrt{-1-\frac{1}{3}e^{i\phi} +
\sqrt{1 + \frac{2}{3}e^{i\phi} - \frac{1}{3}e^{2i\phi}}}, \nonumber \\
z = \pm \frac{1}{\sqrt{2}}\sqrt{-1-\frac{1}{3}e^{i\phi} -
\sqrt{1 + \frac{2}{3}e^{i\phi} - \frac{1}{3}e^{2i\phi}}}
\end{eqnarray}
(the inner signs are opposite, since $x^2$ and $z^2$ are the two
roots of one and the same quadratic equation, while the overall
signs are correlated by the condition $xz=\xi$)
and, see Fig.\ref{hompols}.B,
\begin{eqnarray}
c_{2,1}(\phi) = z - x^3 = x - z^3
\label{d3sols}
\end{eqnarray}
In both examples $\phi = \varphi$ varies between $0$ and $2\pi$.
\subsubsection{MAPLE-generated solution for arbitrary cubic
polynomial}
For an arbitrary cubic $P_3(x)$ the second equation in (\ref{xzeqs})
is quadratic in $z$ and can be solved explicitly.
After substitution of this $z(x)$, the first equation can be
solved with the help of MAPLE or Mathematica, and (\ref{cvsx2})
produces the final answer.
This is how Figs.\ref{a1_10}--\ref{tubesD4} are obtained.
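An equivalent elimination can be sketched in Python instead of MAPLE
(this is an illustrative reimplementation, not our actual program, and
it uses symmetric functions rather than the $z(x)$ substitution
described above). For $P_3(x)=ax^3+bx^2$, with $s=x+z$ and $q=xz$, the
system (\ref{xzeqs}) for the $(2)$ and $(2,1)$ domains reads
$a(s^2-q)+bs=-1$ and $9a^2q^2+6abqs+4b^2q=e^{i\theta}$; eliminating
$q$ gives a quartic in $s$:

```python
import numpy as np

def period2_boundary(a, b, theta):
    """Boundary points c(theta) of the order-2 domains for a x^3 + b x^2 + c."""
    qc = np.array([1.0, b / a, 1.0 / a])            # q(s) = s^2 + (b s + 1)/a
    poly = 9 * a**2 * np.polymul(qc, qc)            # 9 a^2 q^2
    poly = np.polyadd(poly, 6 * a * b * np.polymul([1.0, 0.0], qc))  # + 6ab s q
    poly = np.polyadd(poly, 4 * b**2 * qc)          # + 4 b^2 q
    poly = np.polyadd(poly, [-np.exp(1j * theta)])  # ... = e^{i theta}
    out = []
    for s in np.roots(poly.astype(complex)):
        q = np.polyval(qc, s)
        # 2c = (x + z) - (P(x) + P(z)) expressed through s and q
        c = (s - a * (s**3 - 3 * q * s) - b * (s**2 - 2 * q)) / 2
        x, z = np.roots(np.array([1.0, -s, q], dtype=complex))  # the orbit
        out.append((c, x, z))
    return out

# Consistency check at a generic point of the interpolating family:
a, b, theta = 0.3, 0.7, 1.0
f = lambda x, c: a * x**3 + b * x**2 + c
sols = period2_boundary(a, b, theta)
err = max(abs(f(f(x, c), c) - x) for c, x, z in sols)
```

By construction each returned $(c,x,z)$ satisfies $f(x)=z$, $f(z)=x$
and $f'(x)f'(z)=e^{i\theta}$, so the residual of $f^{\circ 2}(x)=x$
must vanish up to round-off.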
\section{The first two elementary domains in interpolation
between ${\cal M}_2$ and ${\cal M}_3$ \label{interpol}}
\setcounter{equation}{0}
After the equations are solved, we can turn to the description of
their solutions.
For the particular homogeneous polynomials $P_d(x) = x^d$
we obtained the well-known shapes of the central
domains of the ${\cal M}_d$ Mandelbrot sets, see Fig.\ref{hompols}
-- only this time they are
not a result of computer {\it simulations} by {\it Fractal Explorer}:
the shapes are now obtained as solutions
(sometimes even analytical) of the algebraic equations
(\ref{shapeeq}).
Even more interesting is the possibility to study quantitatively
the interpolation between the ${\cal M}_2$ and ${\cal M}_3$ sets.
So far only a qualitative description was known \cite{DM}, and
the usual computer {\it simulations} fail.
Such simulations \cite{Mand}
are often based on the study of the sequence
$f^{\circ n}(c)$, i.e. the orbit of $x_{cr}=0$.
The interior of the Mandelbrot space, i.e.
the black regions in Figs.\ref{Mand2}--\ref{symmebre},
is {\it assumed} to consist of all functions $f$
for which this sequence is bounded and does not escape to infinity.
However, this assumption is not always true, and then
this simple algorithm fails --
Fig.\ref{symmebre} is the first example.
The reasons for failure can be different:
from $x_{cr}\neq 0$ to
attraction of unstable orbits to finite orbits, rather than
to infinitely remote ones.
There is a strong need to cure this problem and make
a modification of {\it Fractal Explorer} which would treat
properly any kind of Mandelbrot set.\footnote{In the
absence of such a modification we had to make use of various
pictures, which are at best qualitatively, but not fully, correct:
this is the case with Fig.\ref{symmebre} in this paper and with
numerous Figures in \cite{DM}, including even the picture on
the cover of that book. Below in this section we provide much
better views of the $2$-parametric section of the Universal
Mandelbrot Set; these pictures are fully correct, but
only the order-$1$ and $2$ domains are shown.
}
\subsection{Particular Mandelbrot sets ${\cal M}_{2,3}(a)$
for the families $ax^3+(1-a)x^2 + c$ at different values of $a$
\label{23interp}}
At $a=0$ the Mandelbrot Set takes its standard form,
Fig.\ref{Mand2}, and its first two domains, $(1)$ and
the attached $(2,1)$,
are shown in Figs.\ref{Mand2}.C and \ref{hompols}.A.
\subsubsection{Vicinity of the Mandelbrot Set: small $|a|$}
However, as soon as $a$ deviates infinitesimally from $a=0$,
it becomes clear that Fig.\ref{Mand2} has a twin: an exact copy
of the same shape and size, but with opposite orientation
-- a mirror twin -- located at infinity of the complex-$c$ plane.
As $|a|$ grows, the twin moves closer, and Figs.\ref{a1_10}
and \ref{a-1_10} show its location at $a=\pm 1/10$
(the sizes of the domains are practically the same as in
Fig.\ref{Mand2} -- just the scale of the picture is different,
because the twin of Fig.\ref{Mand2} is still far away).
Moreover, it appears that an additional mirror pair of $(2)_\pm$
domains -- roots of two more clusters -- was also hidden at
infinity of the $c$ plane; these domains are now located between
the two root domains $(1)_\pm$ for positive $a>0$, and on the
opposite sides of them for negative $a<0$.
Since for small $a$ these domains are tiny compared
to $(1)$ and $(2,1)$, they can easily be
overlooked; therefore one of them is marked by a circle and shown
at a larger scale in a separate picture in the lower right corner.
Clearly this root $(2)_-$ domain has a cardioid shape and is
an exact copy of the root domain $(1)_-$, only smaller.
In fact it has a $(4,2)_-$ domain attached to it in exactly
the same manner as $(2,1)_-$ is attached to $(1)_-$ -- it is not
shown, because we explicitly construct only domains of
orders $p=1$ and $2$.
For the interested reader we also add slices of the Julia sheaf:
we show the behavior of the orbits in the $x$ space\footnote{
Since $f(x)$ is cubic, there are three order-$1$ orbits and
up to three branches will be seen in the pictures.
Since $G_2(x) = F_2(x)/F_1(x)$ has degree $6$ in $x$, there are
$6/2=3$ orbits of order $p=2$ and
up to six branches will be seen in the pictures.
}
under the change
of $c$, which becomes more and more interesting as we go away
from the "pure" points $a=0$ and $a=1$.
The problem is that the Julia sheaf is embedded into a $4_Rd=2_Cd$
space, with complex $x$ and $c$,
and cannot be shown {\it in full}, even if $a$ is fixed.
Therefore different sections and projections
are presented, $2_R$- and $3_R$-dimensional.
The $3_R$-dimensional ones are especially informative, but only when
presented on a computer screen, where they can be rotated and
viewed from different angles.
This advantage is lost in the printed version of the text,
but one can either use the simple MAPLE programs collected in
s.\ref{MAProgs} below or directly look at the results in
\cite{maplesamples}.
\bigskip
\Fig{a1_10}
{500,367}
{\footnotesize
The picture in the left upper corner represents the Mandelbrot
set for the family $ax^3+(1-a)x^2 + c$ with $a=1/10$.
This is a picture in the complex $c$ plane, and only the
domains associated with the order $p=1$ and $p=2$ orbits are shown.
The orbits themselves lie in the complex $x$ plane and
form the {\it Julia sheaf} over Mandelbrot set.
The Julia sheaf itself is a $1_Cd$ complex variety, embedded into
a $2_Cd$ complex space, and cannot be shown in an ordinary
drawing.
Instead two different {\it views}, one $2_R$-dimensional,
another $3_R$-dimensional (at fixed ${\rm Re}(c)=C(a)$),
are shown in the lower left and upper right corners respectively.
Like all other three-dimensional sections in this paper, it
can be rotated and viewed from different angles: this
clarifies the pattern a lot, but can be done only on a
computer screen, see ref.\cite{maplesamples}.
(In particular, there are NO intersecting orbits in this
section -- the seeming intersection is an artefact of the
drawing, resolved by rotation of the picture.)
Thin lines represent the three order-$1$ orbits,
while dense lines represent the six branches of the order-$2$ orbits.
In the $2_Rd$ picture solid segments
show {\it stable} orbits: those of order $1$ are stable
inside the $(1)_\pm$ domains of the Mandelbrot set,
those of order $2$ -- inside the $(2,1)_\pm$ and
$(2)_\pm$ domains (the last two are too small for
the corresponding solid segments to be seen in our pictures).
An enlarged picture of the $(2)_-$ domain -- a root of
a new cluster -- is shown in the
lower right corner, and it is clear that it is an exact
diminished copy of the root $(1)_-$ domain.
$(2)_+$ is its exact mirror copy, in accordance with
the $Z_2$ symmetry of the Mandelbrot set w.r.t. the vertical
line ${\rm Re}(c) = C(0.1) = -8.4$.
Also shown are the roots $c=d_{1,2}$ of the discriminants
$D_{1}$ and $D_{2}$, and $c=r_{12}$, $c=r_{24}$ of
the resultants $R_{12}$ and $R_{24}$
(they lie at the intersections of the vertical lines with
the real-$c$ axis).
According to \cite{DM}, the last two are the crossing
points of the orbits of orders $1$ and $2$, and $2$ and $4$;
they define the merging points between the domains $(2,1)$ and $(1)$
and between $(4,2)$ and $(2,1)$ respectively, and thus
define the stability segments of the orbits of orders $1$ and $2$.
Similarly, discriminant zeroes are intersection points of
orbits of the same order:
$p=1$ with $p=1$ and $p=2$ with $p=2$.
The $2_Rd$ view in the lower left corner is in fact a {\it section}
of the $c$ plane at ${\rm Im}(c)=0$ and a
{\it projection} onto the ${\rm Re}(x)$ plane.
Accordingly, when two real-valued orbits intersect and
become complex-valued, they remain shown in the picture,
but since they are complex conjugate, the two lines are
projected onto one -- this should be taken into account
in the analysis of the figure.
}
\Fig{a-1_10}
{500,402}
{\footnotesize
An analogous picture for negative real $a=-1/10$.
The only essential difference from Fig.\ref{a1_10}
is that the two root domains
$(2)_\pm$ are not between the two $(1)_\pm$, but
on the opposite sides of them.
In other words, at $a=0$ the $(2)$ domains pass through
$(1)_-$, so that $(2)_+$ re-appears from $c=+\infty$,
while $(2)_-$ returns back from $c=-\infty$,
but {\it slower} than $(1)_-$, see also Fig.\ref{speeds}.
Because of this, when $|a|$ increases further in the
direction of negative $a$, an overlap will occur
between the domains $(2,1)_-$ and $(2,1)_+$, unlike
in the positive-$a$ case, where $(2)_-$ and $(2)_+$
will be the first to meet.
}
\Fig{speeds}
{450,363}
{\footnotesize
The behavior in the vicinity of $a=0$
of the {\it root} domains $(1)_-$ and $(2)_\pm$,
denoted respectively by circles and solid lines.
As $a \rightarrow 0$, these three domains, together with
their entire clusters, travel to infinity in the complex-$c$
plane, therefore the picture is drawn in coordinate $1/c$
(only real values of $c$ are plotted).
The Mandelbrot Set from Fig.\ref{Mand2}, including the
central root domain $(1)_+$, stays in the vicinity of $c=0$
and is not shown in this picture.
The behavior of the $(2)_+$ and $(2)_-$ domains is somewhat
different: the former re-appears from infinity
at the opposite end of the ${\rm Re}(c)$ axis, while the
latter returns from the same end, only exchanging
positions with the $(1)_-$ domain -- in full accordance
with Figs.\ref{a1_10} and \ref{a-1_10}.
All the domains, and in fact the entire clusters, shrink to
a single point at $a=0$; the reason for this is the
choice of a very singular map in homogeneous coordinates:
$\big(x,\ y\big) \longrightarrow
\big(ax^3+bx^2y +cy^3,\ y^3\big)$.
Such singular behavior at $a=0$ would be smoothed out and
become similar to the bifurcations at finite $a$,
shown in the forthcoming pictures, if
$y^3$ were substituted by a generic cubic polynomial.
See \cite{nolal} for related considerations.
}
All Mandelbrot sets ${\cal M}_{2,3}(a)$ possess a discrete
$Z_2$ symmetry under reflection w.r.t. the vertical line
\begin{eqnarray}
{\rm Re}(c)=C(a)=-\frac{b}{3a}\left(1+\frac{2b^2}{9a}\right)
\end{eqnarray}
with $b=1-a$,
which is lifted to the entire Julia sheaf over ${\cal M}_{2,3}(a)$:
\begin{eqnarray}
\begin{array}{ccc}
\tilde x = x+\frac{b}{3a}& \ \longrightarrow\ & -\tilde x \\
\tilde c = c-C(a)& \ \longrightarrow\ & -\tilde c
\end{array}
\label{symmman}
\end{eqnarray}
For example, the equation $F_1(x,c)=0$ for the first-order
periodic orbits is obviously symmetric, since
$F_1(x,c) = ax^3 + bx^2 + c - x = a\tilde x^3 -
\left(1+\frac{b^2}{3a}\right)\tilde x + \tilde c$.
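This identity, and with it the symmetry (\ref{symmman}), can be
confirmed numerically (a quick Python check at randomly chosen complex
points; the value $a=0.3$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
a = 0.3
b = 1 - a
C = -b / (3 * a) * (1 + 2 * b**2 / (9 * a))   # symmetry center C(a)

def F1(x, c):
    return a * x**3 + b * x**2 + c - x

errs = []
for _ in range(100):
    x, c = rng.normal(size=2) + 1j * rng.normal(size=2)
    # reflected point: x~ -> -x~, c~ -> -c~
    x_ref = -(x + b / (3 * a)) - b / (3 * a)
    c_ref = -(c - C) + C
    errs.append(abs(F1(x_ref, c_ref) + F1(x, c)))  # F1 must be odd
err = max(errs)
```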
In accordance with this symmetry, the $(1)_-$ domain -- the
twin of the $(1)_+$ domain, centered at $c=0$ -- is the
mirror-reflected cardioid with center at $c = 2C(a)$.
Similarly, the next two root domains $(2)_\pm$ are centered
at $c_{2+} = -\frac{1}{a} + 2 + O(a)$ and $c_{2-} = 2C(a)
- c_{2+}$, see s.6.4.5 of ref.\cite{DM}.
Since $c_{2+}$ is an {\it odd} function of $a$ for
small $a$, while $c_{1-}$ is {\it even}, it is clear that
they exchange their order when $a$ goes through zero --
in accordance with what is shown in Figs.\ref{a1_10},
\ref{a-1_10} and \ref{speeds}.
\subsubsection{Overlapping domains \label{doverlap}}
As $|a|$ increases, the two domains $(1)_\pm$ move closer.
The speed of approach is somewhat different for positive
and negative $a$.
At some stage of this movement two different clusters
unavoidably {\it meet}.
There are, however, two kinds of meeting:
{\it overlap} and {\it collision}.
For the above-explained reasons it is still difficult to
analyze the behavior of entire clusters.
{\it Approximate} results about clusters
(or, better, a possible approach to their future derivation)
will be discussed in the last sections \ref{ssasec}-\ref{dfam}
of this paper.
The {\it exact} results to be considered right now
concern meetings of the low-order ($1$ and $2$) domains.
An {\it overlap} of these domains takes place soon after it happens
to the upper ($p=\infty$) leaves of the clusters, while a
{\it collision} can start at the $p=\infty$ leaves, but can
also begin at the low-$p$ level.
Overlaps and collisions of particular domains occur when zeroes
of the corresponding resultants collide, i.e. they are controlled by
zeroes of double resultants, like the $c$-discriminants of the
$x$-resultants, listed in the following table
(italic lines are quotations from the MAPLE program; boldfaced are
the {\it real}-valued roots belonging to the segment $0\leq a \leq 1$).
\bigskip
\begin{tabular}{|l|}
\hline
Two zeroes of $D1$ merge at
$a = -\frac{1\pm i\sqrt{3}}{2}$. \\
$discrim(discrim(F1,x),c) =
96*a^2+112*a^3+96*a^4+48*a+48*a^5+16+16*a^6$\\
\hline
Two zeroes of $R_{12}$ merge at
$a=\frac{5\pm\sqrt{21}}{2}$ and at
$a = -2 \pm \sqrt{3}$. \\
Only one of these points,
$a = \frac{5-\sqrt{21}}{2} = 0.20871215\ldots$ belongs to the interval
$0<a<1$ on a real-$a$ line. \\
$discrim(resultant(G2,F1,x),c) =
16*a^{10}*(a^2+4*a+1)^2*(a^2-5*a+1)$\\
\hline
Two zeroes of $D2$ merge at
$a = 4 \pm \sqrt{15}$. \\
$discrim\Big(\sqrt{discrim(G2,x)/R_{12}},c\Big) \sim (a^2-8*a+1)^3$\\
\hline
Two zeroes of $R_{24}$ merge at \\
$a = 0.1487496031\pm 0.03597329725i$,\
$a = 6.351250397\pm 1.535973297i$,\\
$a = -4.485250968, -0.2229529645,\ {\bf 0.1163898556},\
8.591814077 = 1/0.1163898556$,\\
$a = {\bf 0.1497297977},\ 6.678697327 = 1/0.1497297977$,\\
$a=0.5857864376\pm 0.8104654524i$,\\
$a = \frac{9\pm\sqrt{65}}{4}$.\\
The four series correspond to zeroes of the four factors in\\
$discrim\Big(\sqrt{resultant(G2,G4,x)},c\Big) \sim
(2*a^4-26*a^3+93*a^2-26*a+2)\cdot $\\
$\cdot(a^4-4*a^3-39*a^2-4*a+1)^2
(a^4-8*a^3+10*a^2-8*a+1)^2(2*a^2-9*a+2)^4 $\\
\hline
\end{tabular}
\bigskip
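As a sanity check on the table, the quoted closed-form roots can be tested against the polynomial factors directly (a Python sketch; the loop variables are ours):

```python
import math

# roots of the factors of discrim(resultant(G2,F1,x),c) =
# 16*a^10*(a^2+4*a+1)^2*(a^2-5*a+1), as quoted in the table
for a in ((5 + math.sqrt(21))/2, (5 - math.sqrt(21))/2):
    assert abs(a*a - 5*a + 1) < 1e-9
for a in (-2 + math.sqrt(3), -2 - math.sqrt(3)):
    assert abs(a*a + 4*a + 1) < 1e-9

# roots of a^2 - 8*a + 1, the factor controlling the merging of D2 zeroes
for a in (4 + math.sqrt(15), 4 - math.sqrt(15)):
    assert abs(a*a - 8*a + 1) < 1e-9

# the palindromic factor a^4-8*a^3+10*a^2-8*a+1 has reciprocal root pairs,
# consistent with the pair 0.1163898556 <-> 8.591814077 quoted for R_24
assert abs(0.1163898556*8.591814077 - 1) < 1e-6
```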
The derivative $\partial d_1^-/\partial a$
of the zero $d_1^-$ of the discriminant $D_1$, which fixes the
position of the cusp of the $(1)_-$ domain, gives the speed
of motion of the cluster $(1)_-$ under the change of $a$.
Since for small $a$ all the clusters are diminished copies of
the central one in Fig.\ref{Mand2}, with all the same proportions,
one can actually predict what happens to the clusters from the
data about their root domains.
The first event to happen on our way from $a=0$ is
{\it overlap}. If $a>0$ this is the overlap of domains
$(2)_+$ and $(2)_-$, while if $a<0$ it is that of
$(2,1)_+$ and $(2,1)_-$.
The two domains of order $2$
{\it overlap} when the two zeroes of $R_{24}$ coincide.
If we move from $a=0$ along the real-$a$ line
this first happens at $a=0.1163898556\ldots$ if $a>0$
and at $a=-0.2229529645\ldots$ if $a<0$.\footnote{
In fact, as explained in the previous paragraph,
we can approximately find the value $a_{cluster}$
at which the overlap of the {\it clusters} occurs.
From Fig.\ref{Mand2} we know that
the total size of the cluster is $\approx 1.65$
times bigger than the size of the root domain,
and the latter size is nothing but the difference
$d_2 - r_{24}$; hence we get a rough estimate:
$a_{do} - a_{cluster} \approx 0.65(a_{co}-a_{do})$,
where $a_{do}$ and $a_{co}$ are the moments when
$r_{24}^-(a_{do}) = r_{24}^+(a_{do})$
and $d_2^-(a_{co}) = d_2^+(a_{co})$, i.e.
$a_{do}=0.1163898556\ldots$ and
$a_{co} = 4-\sqrt{15}=0.12701665\ldots$.
Then $a_{cluster} \approx 1.65\, a_{do}-0.65\, a_{co} \approx
0.109$. It is assumed that the speed of motion
of the cluster with the change of $a$ and the cluster's
size are approximately constant; appropriate corrections
can easily be taken into account.
\label{estcluoverlap}}
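The arithmetic of this footnote estimate can be spelled out explicitly (a trivial Python sketch; the variable names follow the footnote):

```python
# rough estimate of the moment of cluster overlap, as in the footnote:
# the cluster is ~1.65 times the root domain, so
# a_cluster ~ a_do - 0.65*(a_co - a_do) = 1.65*a_do - 0.65*a_co
a_do = 0.1163898556            # zeroes r_24^+- of R_24 coincide
a_co = 0.12701665              # zeroes d_2^+- of D_2 coincide (= 4 - sqrt(15))
a_cluster = 1.65*a_do - 0.65*a_co
assert abs(a_cluster - 0.109) < 1e-3
```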
Figs.\ref{a1_9s} and \ref{a-1_4s} show Mandelbrot sets
soon after these points are passed.
It is clear from these pictures that when overlap occurs,
nothing interesting happens to the orbits -- and this is
what makes {\it overlap} different from {\it collision},
where intersection of orbits takes place, see below.
Overlap simply means that two (or more) different orbits
are simultaneously stable at the same value of $c$:
in this case these are two order-$2$ orbits.
As the overlap increases, it involves the $(1)_\pm$ domains,
and stable orbits of other orders can coexist
at the same values of $c$: orders $2$ and $1$ (Fig.\ref{a-1_4})
or $1$ and $1$ (Fig.\ref{a-1}).
For $a<-1$ the further increase of $|a|$ leads to
{\it diminishing} of the overlap: the story repeats in the
opposite order, the overlap picture for $a=-4$ resembles
that for $a=-0.25$ (shown in Fig.\ref{a-1_4}), for
$a = -4.(4)$ -- that for $a=-0.225$ (shown in Fig.\ref{a-1_4s}),
and after that the overlap disappears.
The reason for reversed evolution is that the family
${\cal M}_{2,3}(a)$ and even its Julia sheaf
has a discrete ``symmetry'' w.r.t.
inversion of parameter $a\rightarrow 1/a$:
\begin{eqnarray}
x(a^{-1},\theta) = -ax(a,\theta), \nonumber \\
c(a^{-1},\theta) = -ac(a,\theta)
\label{symmemaf}
\end{eqnarray}
This symmetry complements the symmetry
(\ref{symmman}) of particular Mandelbrot sets
at fixed $a$, and it allows one to consider only the variation of
$a$ within the interval $|a|\leq 1$: all Mandelbrot sets outside
this interval are exact rescaled copies of those inside.
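The inversion symmetry (\ref{symmemaf}) follows from an exact functional identity of the family $f_a(x;c)=ax^3+(1-a)x^2+c$, namely $f_{1/a}(-ax;\,-ac) = -a\,f_a(x;c)$, so orbits at $(a,c)$ map to orbits at $(1/a,-ac)$. A quick numerical confirmation (Python; names are ours):

```python
def f(a, x, c):
    # the family f_a(x; c) = a x^3 + (1-a) x^2 + c
    return a*x**3 + (1 - a)*x**2 + c

for a in (0.2, 0.7, 3.0):
    for x in (0.3, -0.8, 1.4):
        for c in (0.1, -0.6):
            # the identity behind eq. (symmemaf):
            # f_{1/a}(-a x; -a c) = -a f_a(x; c)
            assert abs(f(1/a, -a*x, -a*c) + a*f(a, x, c)) < 1e-9
```

Since the identity commutes with iteration, whole periodic orbits (and hence stability domains) at parameter $a$ are carried to those at $1/a$ by $c \rightarrow -ac$.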
More interesting things are taking place for positive $a$.
\Fig{a1_9s}
{500,279}
{
\footnotesize Domains of the orders $p=1,2$ of the
Mandelbrot set ${\cal M}_{2,3}(a)$
at $a=0.117$, immediately after the two root
domains $(2)_\pm$ touched at $a=0.1163898556\ldots$.
Now they {\it overlap}:
for $c$ in close vicinity of $c=-6.24$ there are two stable
order-$2$ orbits at once.
However, as is clear from the Julia-sheaf pictures around,
nothing special happens to the orbits themselves.
}
\Fig{a-1_4s}
{500,336}
{\footnotesize
Domains of the orders $p=1,2$ of the
Mandelbrot set ${\cal M}_{2,3}(a)$
at $a=-1/4+1/40 = -0.225$, soon after the two
descendant domains $(2,1)_\pm$ touched at
$a=-0.2229529645\ldots$.
Two order-$2$ orbits are simultaneously stable in the close
vicinity of $c= -0.88$.
Again, nothing special happens to the orbits themselves
when the overlap occurs.
}
\Fig{a-1_4}
{500,379}
{\footnotesize
Domains of the orders $p=1,2$ of the
Mandelbrot set ${\cal M}_{2,3}(a)$ at $a=-1/4$.
The overlap increases,
and now the descendant $(2,1)_\pm$ domains overlap not only
each other, but also the twin parent $(1)_\mp$ domains.
This means that in the vicinity of $c=-0.65$ there are
two coexisting stable orbits of order $2$, while in the
vicinities of $c=-0.5$ and $c=-0.8$ stable orbits
of orders $2$ and $1$ coexist.
Nearly cusp-like behavior of the orbits in Julia-sheaf
view in the upper right corner occurs in the vicinity
of the zeroes $r_{12}$ of the resultant $R_{12}$, where
the orbits of order two cross those of order one and
the $(2,1)_\pm$ domains are attached to their parent
$(1)_\pm$ domains. This singular behavior has nothing
to do with the {\it overlap} of the domains.
}
\Fig{a-1}
{500,316}
{\footnotesize
Domains of the orders $p=1,2$ of the
Mandelbrot set ${\cal M}_{2,3}(a)$ at $a=-1$.
The overlap increased even further as compared to
Figs.\ref{a-1_4s} and \ref{a-1_4}.
Descendant domains $(2,1)_\pm$ now passed through
the root ones $(1)_\mp$ and are no longer involved
in the overlap. Instead overlapping are the root
domains $(1)_\pm$ and two stable orbits of order
$1$ coexist in the vicinity of $c=0$.
If the $(1)_\pm$ domains continued to move in the same
direction with further increase of $|a|$, the
two order-$1$ orbits would cross (when the
two zeroes $d_1^\pm$ of the discriminant $D_1$ merge),
and the overlap would finally end in a {\it collision}
-- as happens for positive values of $a$, see
Fig.\ref{a1_8s}.
However, the merging of $D_1$ zeroes occurs at
complex values $a = -\frac{1\pm i\sqrt{3}}{2}$,
and at $a=-1$ the overlap is the biggest possible
for real negative values of $a$.
As $a$ decreases further (and $|a|$ grows)
along the real-$a$ line, the $(1)_\pm$ domains
start to move in the opposite direction,
see Fig.\ref{a-2}.
We pass again through the overlap
patterns like Fig.\ref{a-1_4} (at $a=-4$)
and Fig.\ref{a-1_4s} (at $a=-4.(4)$)
and finally come back to the no-overlap pattern
like Fig.\ref{a-1_10} (at $a=-10$) -- all this in
accordance with the symmetry (\ref{symmemaf}).
}
\Fig{a-2}
{500,310}
{\footnotesize
Domains of the orders $p=1,2$ of the
Mandelbrot set ${\cal M}_{2,3}(a)$ at $a=-2$.
The overlapping clusters start to diverge after
the maximal approach at $a=-1$, stopping short of the
collision of the two zeroes of the discriminant $D_1$,
which would unify the two clusters into a single one.
This actually happens at the complex values
$a = -\frac{1\pm i\sqrt{3}}{2}$, and we will encounter
such a unified cluster in our travel along the real-$a$
axis, but at positive values of $a$.
}
\subsubsection{Colliding domains}
We left the evolution in the positive-$a$ direction at the stage
of Fig.\ref{a1_9s}, when the overlap of the two root domains
$(2)_\pm$ had just occurred.
At variance with the case of negative $a$, this time,
as the overlap increases,
the two zeroes $d_2$ of the discriminant $D_2$, defining the
positions of the cusps of these two domains, coincide
at $a=4 - \sqrt{15}=0.12701665\ldots$,
and a new phenomenon takes place.
The two stable order-$2$ orbits (they were simultaneously
stable in the overlap region) cross each other, and the
pattern of orbits around the crossing is pretty sophisticated,
see Fig.\ref{a1_8s}.
Most interesting is the survival of two small overlap regions,
where two different stable order-$2$ orbits continue to
coexist, but exhibit non-trivial monodromy under a travel
in the complex-$c$ plane
around the cusps at the zeroes of the discriminant $D_2$.
\Fig{a1_8s}
{500,491}
{\footnotesize
Domains of the orders $p=1,2$ of the
Mandelbrot set ${\cal M}_{2,3}(a)$ at $a=1/8+1/224 = 0.12946(428571)$.
The two root domains $(2)_+$ and $(2)_-$ just {\it collided}
at $a=4 - \sqrt{15}=0.12701665\ldots$
(after a period of overlap, originated at
$a=0.1163898556\ldots$, see Fig.\ref{a1_9s}),
and their clusters have already united into a single new
cluster, so that the union $(2) = (2)_+ \cup (2)_-$,
with $(2)_+\cap (2)_- \neq \emptyset$,
is now the new root domain.
Arrows in the picture at the low left corner show the
action of evolution $x \rightarrow f(x)$ on the points
of the order-$2$ orbit.
Enlarged picture in the left upper corner shows in
more detail the new root cluster $(2)$.
It has two narrow regions of self-overlap along the
vertical axis ${\rm Re}(c) = {\rm Re}(d_2)$, where two
different order-$2$ orbits are simultaneously stable.
In all other points of the $(2)$ cluster only one order-$2$
orbit is stable, but non-trivial monodromy occurs when we
go around the cusps at
$d_2^\pm = {\rm Re}(d_2)\pm i\,{\rm Im}(d_2)$ (points $D$):
if we pick up the single stable order-$2$ orbit,
say, at $c={\rm Re}(d_2)$ (point $O$)
and carry it into the upper overlap region from the right
(counter-clock-wise), we obtain one of the two orbits,
stable in that region, but if we carry the same orbit
into the same region from the left (clock-wise), we obtain
{\it another} stable orbit. If we continue to carry our
orbits in the same direction and leave the overlap domain,
the orbits lose stability. In other words, the two stable
orbits in the overlap domain are permuted when carried around
the cusp, and each orbit continues to be stable outside the
overlap domain only if it is carried away in one out of two
possible directions: either to the right or to the left.
The other end of the overlap domain (point $E$), which is on the
boundary of the $(2)$ cluster, is not a cusp and not a
singularity. Like any point on the boundary of an elementary
domain, it has in its infinitesimal vicinity an infinite number
of zeroes of various resultants $R_{2,2m}$, where the
stable order-$2$ orbit exchanges stability with some order-$2m$
orbit, see \cite{DM}. However, the set of relevant $\{m\}$ changes
irregularly as we move along the boundary, and accordingly,
for the point $E$ this set depends irregularly on $a$.
}
\subsubsection{Colliding clusters}
After collision of two root $(2)_\pm$ domains and formation
of a unified cluster with the root $(2)$, the two other
clusters, growing from the $(1)_\pm$ roots, continue to
move towards each other and soon collide with the $(2)$
cluster, sandwiched in between them.
Now this is indeed a {\it collision}, not just an {\it overlap},
and, at variance with the collision of the $(2)_\pm$ domains,
it now originates at the highest leaves (at $p=\infty$)
rather than at the root domains.
A full description of this process is impossible with
knowledge about the $p=1,2$ orbits only,
thus our illustrations will necessarily be incomplete.
Still, a lot is seen even with these limited tools.
Immediately after collision of the $(2)_\pm$ domains
in Fig.\ref{a1_8s}
they begin growing and soon become comparable in size
with the $(2,1)_\pm$ domains, belonging to approaching
clusters.
Even earlier the overlapping region inside $(2)$
shrinks down and disappears.
Finally, when the zeroes $r_{24}$ of the resultant $R_{24}$,
marking the closest points of the $(2)$ and $(2,1)_\pm$
domains, coincide, the collision wave, going down from the
upper leaves of the clusters, reaches the $p=2$ level:
the cluster collision becomes visible at the level of our
consideration. This happens at $a=0.1497297977\ldots$,
see Fig.\ref{a3_20}.\footnote{
Like in footnote \ref{estcluoverlap}, one can try to
{\it estimate} the moment of the {\it clusters'} collision.
However, this time the shape of the clusters deviates considerably
from that in Fig.\ref{Mand2}, therefore such an estimate is
less reliable.
}
Figs.\ref{a1_7s} and \ref{a1_5} show Mandelbrot sets soon
after that and a little later, when the continuing approach
of the $(1)_\pm$ domains (which are now {\it two roots} of
a single cluster!) starts pushing the unified $(2,1)$ domain
out of the region between them.
This push-away process leads to the next bifurcation
at $a= \frac{5-\sqrt{21}}{2} = 0.20871215\ldots$
where the two zeroes $r_{12}$ of another resultant $R_{12}$
coincide, marking the collision of the $(1)_\pm$ domains:
the collision wave has reached the $p=1$ level.
At this moment the $(2)$ cluster is ripped into two
disconnected pieces, see Fig.\ref{a1_5s}.
Clearly, just the same push-away and ripping processes
took place for all the higher-order domains
$(2^k,2^{k-1},\ldots,2,1)$ between the moment of the cluster
collision and the moment the collision wave reached the $p=1$
level at $a=0.2087\ldots$.
\Fig{a3_20}
{500,348}
{\footnotesize
Domains of the orders $p=1,2$ of the Mandelbrot set
${\cal M}_{2,3}(a)$ at $a=3/20=0.15$.
The $(2)$ domain just merged with the $(2,1)_\pm$
at $a=0.1497297977\ldots$ to form a single descendant
$(2,1)$ domain.
This unified domain has a characteristic
four-sausage structure,
a reminder of its recent formation from the four
distinct elementary domains $(2)_\pm$ and $(2,1)_\pm$.
}
\Fig{a1_7s}
{500,298}
{\footnotesize
Domains of the orders $p=1,2$ of the Mandelbrot set
${\cal M}_{2,3}(a)$ at $a=1/7+1/84 = 0.15(476190)$.
The shape of the unified $(2,1)$ domain has evolved
from the four-sausage to a two-sausage one: the memory
of the difference between the $(2)$ and $(2,1)$
domains is almost erased, but the distinction between
$+$ and $-$ is still preserved.
}
\Fig{a1_5}
{500,355}
{\footnotesize
Domains of the orders $p=1,2$ of the
Mandelbrot set ${\cal M}_{2,3}(a)$ at $a=1/5=0.2$.
The $(2,1)$ domain is being pressed away by the approaching
$(1)_\pm$ domains, which are now the two roots of a
single cluster.
Note the re-appearance of the narrow overlap regions
inside the $(2,1)$ domain and the non-trivial monodromy
of the order-$2$ orbits when they are carried along
a circle in the Mandelbrot set surrounding one
of the cusps $d_2^\pm$.
}
\subsubsection{Mandelbrot sets with the topology of ${\cal M}_3$
(in the vicinity of $a=1$)}
The further evolution of ${\cal M}_{2,3}(a)$ with increasing
$a$ consists mostly of a continuous deformation of the shape
of the emerged unified root $(1)$ domain with two cusps:
from a bone-like region in Fig.\ref{a1_5s} it grows into
a nearly oval one (deviating from an oval only near the
cusps) in Figs.\ref{a1_4}, \ref{a1_3}, and finally at $a=1$,
when the cusps extend their region of influence, it acquires the
standard form of Fig.\ref{a1_1},
familiar from Fig.\ref{0Man3}.
\Fig{a1_5s}
{500,381}
{\footnotesize
Domains of the orders $p=1,2$ of the
Mandelbrot set ${\cal M}_{2,3}(a)$ at $a=1/5+1/80 = 0.2125$.
The two root domains $(1)_\pm$ just collided at
$a= \frac{5-\sqrt{21}}{2} = 0.20871215\ldots$ and formed a
single root domain $(1)$. It has a typical bone-like shape,
with two pieces of the former descendant $(2,1)$ domain
attached at the merging point.
Overlap regions inside these $(2,1)_\pm$ fragments are
now pretty large and the cusps are not well seen; still,
the picture in the lower right corner
shows that they are still there.
The $3_Rd$ view in the lower left corner is taken at ${\rm Im}(c)=0$,
where only the order $p=1$ orbit can be stable inside the root
$(1)$ domain.
}
\Fig{a1_4}
{500,399}
{\footnotesize
Domains of the orders $p=1,2$ of the
Mandelbrot set ${\cal M}_{2,3}(a)$ at $a=1/4$.
In this picture we are near the point
$a = \frac{9-\sqrt{65}}{4}=0.23443556\ldots$,
where an inside-out reshuffling of the $(2,1)$
domains takes place (two zeroes $r_{24}$ of $R_{24}$
coincide at this transition point).
The overlap regions, seen in Fig.\ref{a1_5s}, grew up
to the full size of the $(2,1)$ domains, and the
boundaries of the regions change roles with the boundaries
of the domains. Later the former boundaries will
get closer and form new small overlap regions in
Fig.\ref{a1_3}, which will finally disappear at $a=0.42\ldots$.}
\Fig{a1_3}
{500,567}
{\footnotesize
Domains of the orders $p=1,2$ of the
Mandelbrot set ${\cal M}_{2,3}(a)$ at $a=1/3$.
The root $(1)$ domain has acquired an almost ellipsoidal
form (outside the cusp regions).
An interesting phenomenon, seen already in Fig.\ref{a1_4},
is the appearance of additional
overlap regions, marked by arrows in the upper left corner,
between the root $(1)$ and descendant
$(2,1)_\pm$ domains, where order-$1$ and order-$2$
orbits are simultaneously stable.
This time there is no interesting monodromy: when we
leave the overlap region in the direction of $(1)$, the
order-$1$ orbit remains stable; when we go deep into
$(2,1)_\pm$, the stable one is an order-$2$ orbit.
}
\Fig{a1_1}
{500,438}
{\footnotesize
Domains of the orders $p=1,2$ of the
Mandelbrot set ${\cal M}_{2,3}(a)$ at $a=1$.
This is the central part of the standard Mandelbrot
set ${\cal M}_3$, shown in Fig.\ref{0Man3}.
All overlap regions have disappeared, the $(1)$ domain
is an ideal cubic cardioid, and the other elementary domains
acquire nearly cardioid shapes.
}
Still, while nothing equally drastic happens after the
cluster collision, the evolution is not quite eventless.
In the above pictures one can see that the overlap regions
in the $(2,1)$ domain(s) appear and disappear, signaling
the motion of {\it orbits} in the Julia sheaf with the
change of $a$.
In particular, an interesting inside-out reshuffling is shown in
Fig.\ref{a1_4}.
Moreover, in Fig.\ref{a1_3} one can see
that overlap occurs even between the $(1)$ and $(2,1)$ domains.
We emphasize once again that no bifurcations (phase transitions,
orbit crossings or reshufflings) are associated with the overlaps.
Still, they affect the shape and even the very presentation
of the Mandelbrot set: when overlaps exist, it is not a clever
idea to draw it all in black, as we did in
Figs.\ref{Mand2}-\ref{symmebre}. In fact this is a signal
that the phase portrait gets richer: a non-trivial pattern
of attractors and repellers occurs, not to mention that
the vicinities of unstable orbits are not necessarily
attracted to infinity, as implicitly assumed in some algorithms
mentioned in the first paragraphs of s.\ref{interpol}.
The events encountered in the evolution of the Mandelbrot set
${\cal M}_{2,3}(a)$ from $a=0$ to $a=1$, i.e. in the interpolation
between Figs.\ref{Mand2} and \ref{0Man3}, are collected
in the following table:
\vspace{-0.5cm}
\centerline{
{\footnotesize
\begin{tabular}{|c|c|c|}
\hline
&&\\
$a$ & typical feature & picture \\
&&\\
\hline\hline
&&\\
$a<0$&$\ldots$&\\
&&\\
\hline\hline
&&\\
$a=0$&the standard Mandelbrot Set ${\cal M}_2$ &
Fig.\ref{Mand2}\\
&&\\
\hline
&&\\
$0<a<0.1164\ldots$ &two root domains of type $(1)$;
& Fig.\ref{a1_10}\\
&two descendant domains $(2,1)$, attached to them;&\\
&two isolated root domains $(2)$; & \\
&two $(4,2,1)$ domains and two $(4,2)$ domains,&\\
&attached to $(2,1)$ and $(2)$ at real values of $c$ &\\
&&\\
\hline
&&\\
$a=0.1164\ldots$&{\it projections} of two domains $(2)$ meet,&\\
&i.e. two zeroes of $R_{24}$ coincide,& \\
&responsible for stability of two
{\it different} order-$2$ orbits&\\
&&\\
\hline
&&\\
$0.1164\ldots < a < 0.1270\ldots$
&{\it projections} of two domains $(2)$ overlap;&\\
&two {\it stable} order-$2$ orbits coexist& Fig.\ref{a1_9s}\\
&in the region of overlap&\\
&&\\
\hline
&&\\
$a=4-\sqrt{15}=0.1270\ldots$&cusps of overlapping domains $(2)$
merge,&\\
&i.e. two zeroes of $D_2$ coincide&\\
&&\\
\hline
&&\\
$0.127\ldots< a < 0.130\ldots$&overlap of the two domains $(2)$ &
Fig.\ref{a1_8s}\\
&splits into two isolated components&\\
&&\\
\hline
&&\\
$a = 0.130\ldots$&overlap region shrinks down&\\
&&\\
\hline
&&\\
$0.130\ldots< a < 0.1497\ldots
$&only one unified root domain $(2)$ exists&\\
&&\\
\hline
&&\\
$a=0.1497\ldots$&$(2,1)$ domains merge with the $(2)$ domain,
& Fig.\ref{a3_20}\\
&i.e. two pairs of zeroes of $R_{24}$ coincide,&\\
&each pair responsible for stability&\\
&of {\it the same} order-$2$ orbit&\\
&&\\
\hline
&&\\
$0.1497\ldots < a < 6.6800\ldots$
&only one $(2,1)$ domain exists;& Fig.\ref{a1_7s}\\
&no $(2)$ domains &\\
&&\\
\hline
&&\\
$a=0.1875\ldots$ &coexisting stable order-$2$ orbits re-emerge&
Fig.\ref{a1_5}\\
&&\\
\hline
&&\\
$a = \frac{5-\sqrt{21}}{2} = 0.2087\ldots$
&two $(1)$ domains meet and&Fig.\ref{a1_5s}\\
&the $(2,1)$ domain splits into two,&\\
&two zeroes of $R_{12}$ coincide&\\
&&\\
\hline
&&\\
$0.2087\ldots < a < 0.42\ldots $
&overlap regions, where two stable order-$2$ orbits
& Figs.\ref{a1_4} \& \ref{a1_3}\\
&or order-$2$ and order-$1$ orbits can coexist;&\\
$0.2087\ldots < a < 4.7912\ldots $ &one $(1)$ domain;&\\
&two attached $(2,1)$ domains;&\\
&no $(2)$ domains&\\
&&\\
\hline
&&\\
$a = 0.42\ldots$ &overlap region shrinks down&\\
&&\\
\hline
&&\\
$a=1$&the standard Mandelbrot set ${\cal M}_3$ &
Figs.\ref{a1_1} \& \ref{0Man3}\\
&&\\
\hline\hline
&&\\
$a>1$&$\ldots$&\\
&&\\
\hline
\end{tabular}
}}
\bigskip
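The special values of $a$ collected in the table can be cross-checked against their closed forms quoted in the text (a Python sketch):

```python
import math

# decimal values quoted in the table and text vs their closed forms
assert abs((4 - math.sqrt(15)) - 0.12701665) < 1e-7    # cusps d_2 merge
assert abs((5 - math.sqrt(21))/2 - 0.20871215) < 1e-7  # zeroes of R_12 merge
assert abs((5 + math.sqrt(21))/2 - 4.7912878) < 1e-6   # the a>1 counterpart
assert abs((9 - math.sqrt(65))/4 - 0.23443556) < 1e-7  # reshuffling point
# the two R_12 transition points are exchanged by a -> 1/a
assert abs((5 - math.sqrt(21))/2 * (5 + math.sqrt(21))/2 - 1) < 1e-12
```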
In s.\ref{doverlap} we briefly discussed what happens
beyond the realm of this table: for negative values of $a$.
The evolution of ${\cal M}_{2,3}(a)$ can also be continued
to the region $a>1$ (where $b=1-a<0$).
This evolution appears to be the reverse of what we have already
considered: the Mandelbrot set of Fig.\ref{0Man3} at $a=1$
passes through the same stages as in Figs.\ref{a1_3},
\ref{a1_4}, \ref{a1_5s}
(at $a=3,\ 4, \ 4.706\ldots$ respectively) and so on.
In particular, at $a= \frac{5+\sqrt{21}}{2} = 4.7912878\ldots$
the single root $(1)$ domain is split into two, while
two descendant $(2,1)$ domains merge into one.
For illustration we show in Fig.\ref{a5_1} the counterpart
of Fig.\ref{a1_5}.
The full picture will be shown in s.\ref{UMS} below.
\Fig{a5_1}
{500,393}
{\footnotesize
Domains of the orders $p=1,2$ of the
Mandelbrot set ${\cal M}_{2,3}(a)$ at $a=5$.
This picture is a direct analogue of Fig.\ref{a1_5}
and serves as an illustration of the symmetry property
(\ref{symmemaf}):
the same sequence of bifurcations
happens to the Mandelbrot set ${\cal M}_{2,3}(a)$ on the way
from $a=1$ to $a=-1$ along the
real-$a$ line through $a=\infty$ as on the direct way, presented in
Figs.\ref{a1_10}--\ref{a1_1}.
}
\subsection{First steps towards UMS \label{UMS}}
Let us now return to the interpretation of Mandelbrot sets as
sections of a single Universal Mandelbrot Set (UMS).
It implies that everything we observe about a particular
collection of Mandelbrot sets, like our one-parameter
family ${\cal M}_{2,3}(a)$, can be re-interpreted as the result
of a particular view on one and the same solid structure:
the variation of patterns is the result of the changing view,
while the structure itself is always the same.\footnote{We cannot
avoid stressing the analogy with the well-known
{\it projection} approach
to {\it integrable} systems, see \cite{UFN3}.}
For example, the entire collection of the $p=1$ domains
in Figs.\ref{a1_10}-\ref{a5_1}, which consists of a single
component for $a$ in between $\frac{5\mp \sqrt{21}}{2}$
and splits into two components outside this segment,
see Fig.\ref{p1domsdia}.A,
can be alternatively described as the image of a single
cardioid-like domain in the slices, evolving with the
change of the section in Fig.\ref{p1domsdia}.B.
Moreover, in this approach one can even start from
an ordinary circle, not from a cardioid,
see Fig.\ref{cardcircdia}.
In fact these pictures are nothing but approximate drawings,
attempting to capture the properties of the exact formula
\begin{eqnarray}
c = -\frac{b}{3a}\left(1+\frac{2b^2}{9a}\right) \pm
\left(1+\frac{2b^2}{9a} -\frac{e^{i\theta}}{3}\right)
\frac{\sqrt{b^2 + 3ae^{i\theta}}}{3a},
\label{p1doms}
\end{eqnarray}
For now we interpret it as an evolution of
(degenerate) {\it elliptic} mappings
\begin{eqnarray}
(c-c_0)^2 = k(u-u_1)(u-u_2)(u-u_3),\ \ \
{\rm with}\ a-{\rm dependent\ parameters}\ c_0, k\ {\rm and}\ u_i;
\ \ {\rm actually}\ u_2=u_3
\label{ellip}
\end{eqnarray}
of a complex-$u$ plane with a unit circle on it
into a complex-$c$ plane, where the image of the unit circle
looks different: like our $p=1$ domains, evolving and
even bifurcating (splitting and merging) under the change
of the mapping.
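Formula (\ref{p1doms}) simply parametrizes the set of $c$ at which $f$ has a fixed point with multiplier $e^{i\theta}$: solving $f'(x)=3ax^2+2bx=e^{i\theta}$ for $x$ and then setting $c = x - ax^3 - bx^2$ reproduces it. Note that for the two constructions to agree the term under the square root must be $b^2+3ae^{i\theta}$. A numerical sketch comparing them (Python; function names are ours):

```python
import cmath

def boundary_c(a, theta, sign=+1):
    # eq. (p1doms), with b^2 + 3a e^{i theta} under the square root
    b = 1 - a
    e = cmath.exp(1j*theta)
    s = cmath.sqrt(b*b + 3*a*e)
    return -b/(3*a)*(1 + 2*b*b/(9*a)) + sign*(1 + 2*b*b/(9*a) - e/3)*s/(3*a)

def direct_c(a, theta, sign=+1):
    # fixed point x with multiplier e^{i theta}: 3a x^2 + 2b x = e^{i theta}
    b = 1 - a
    e = cmath.exp(1j*theta)
    x = (-b + sign*cmath.sqrt(b*b + 3*a*e))/(3*a)
    return x - a*x**3 - b*x**2     # then c follows from f(x) = x

for a in (0.15, 0.5, 2.0):
    for k in range(8):
        theta = 0.4 + 0.8*k
        for sign in (+1, -1):
            assert abs(boundary_c(a, theta, sign) - direct_c(a, theta, sign)) < 1e-9
```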
\Fig{p1domsdia}
{500,315}
{\footnotesize
{\bf A.} The shapes of the $p=1$ elementary domains for different
values of $a$ in the family $ax^3+(1-a)x^2+c$,
as given by eq.(\ref{p1doms}).\ \
{\bf B.} The same (qualitatively) pictures arise in the
sections of a single cardioid-like cylinder by different
parabolic-like sections ($x-(z^2/2-s)=0$ for various $s$) with a complex-$c$ plane. \ \
{\bf C.} The cylinder can even be made circular with the
help of the circle-cardioid relation shown in
Fig.\ref{cardcircdia}.
The corresponding sections are then generic quadrics,
not necessarily paraboloids.
The true sections behind eq.(\ref{p1doms}) are {\it cubic},
see (\ref{ellip}),
not quadric, and are a little less convenient to draw.
}
\Fig{cardcircdia}
{500,188}
{\footnotesize
The cardioid cylinder of the previous Figure \ref{p1domsdia}
can be replaced by an ordinary rotation-symmetric cylinder
at the expense of a more sophisticated slicing:
the cardioid is the square of a circle.
Analytically this correspondence is represented by
$c = \frac{u}{2}\left(1-\frac{u}{2}\right)$ or
$1-4c=(u-1)^2$.
}
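The circle-cardioid relation of this caption is an exact algebraic identity, and with $u=e^{i\theta}$ on the unit circle, $c=\frac{u}{2}(1-\frac{u}{2})$ is precisely the boundary of the main $p=1$ cardioid of the standard family $x^2+c$, where the fixed point $x=u/2$ has multiplier $u$. A quick check (Python; names are ours):

```python
import cmath

def card(u):
    # c = (u/2)(1 - u/2), the "square of a circle" map of the caption
    return u/2*(1 - u/2)

for k in range(10):
    u = cmath.exp(0.63j*k)
    c = card(u)
    # algebraic identity 1 - 4c = (u-1)^2
    assert abs((1 - 4*c) - (u - 1)**2) < 1e-12
    # x = u/2 is a fixed point of x^2 + c with multiplier f'(x) = 2x = u
    x = u/2
    assert abs(x*x + c - x) < 1e-12
    assert abs(abs(2*x) - 1) < 1e-12
```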
At this early stage of the investigation of the UMS it is unclear
what its best and simplest possible representation is.
In particular, nothing as simple as Fig.\ref{p1domsdia} is
immediately available when the order-$2$ orbits are taken into
account.
Therefore, instead of playing with different realizations,
we show in Figs.\ref{tubeD3}, \ref{tubeD3L1longrange1}
the most straightforward views of the
Universal Mandelbrot Set, directly in the $a-c$ coordinates.
Unfortunately, only a $3_R$-dimensional section of the full
$2_C$-dimensional pattern can be drawn in a picture,
and it can be rotated (which is very informative!) only on
a computer screen; see \cite{maplesamples}
for details about this option.
\Fig{tubeD3}
{500,604}
{\footnotesize
Collection of order-$1$ and -$2$ domains of
Mandelbrot sets ${\cal M}_{2,3}(a)$ for the family
$f(x) = ax^3+(1-a)x^2+c$ with various values of $a$
in the region $0.10\leq a\leq 0.28$,
where most bifurcations are taking place,
represented as slices of a single $3_Rd$ entity,
which can be also considered as a $3_Rd$ section of UMS.
Horizontal is the plane of complex $c$, vertical is the
line of real $a$.
This picture provides a concise summary of all the properties
described throughout s.\ref{23interp}.
The lower part of the figure represents separately the $p=1$ and
$p=2$ domains; all pictures are also shown from different angles,
which can help to appreciate the beauty of the structure.
}
\Fig{tubeD3L1longrange1}
{450,395}
{\footnotesize
The same collection of Mandelbrot sets ${\cal M}_{2,3}(a)$
for the family $f(x) = ax^3+(1-a)x^2+c$
with all real values of $a$
(including those represented in Figs.\ref{a1_10}-\ref{a5_1}).
Only central domains of order $1$ are shown.
}
In Fig.\ref{tubesD4} we give some more examples of $2_C$-sections
of the UMS. In particular, we demonstrate that the topologies of these
sections can be very different and can be investigated with
already available tools. As a small illustration, the chain of
pictures in Fig.\ref{tubesD4}.B shows how a loop in a particular
section of the UMS (Fig.\ref{tubesD4}.A) can be contracted. Of
course, we are very far from a calculation of the homologies of
the UMS, but the way is already open.
\Fig{tubesD4}
{500,440}
{\footnotesize
Examples of other $3_Rd$ sections of the Universal Mandelbrot Set.
Only central domains of order $1$ are shown.
}
\section{Small-size approximation (SSA) \label{ssasec}}
\setcounter{equation}{0}
We now switch from transparent exact results
to subtle approximate methods.
\subsection{On status of the SSA}
The shape of elementary domains can also be considered in the
``small-size'' approximation (see s.4.9.3 of \cite{DM}
and its less accurate predecessor in \cite{LL}
and many other text-books).
In SSA we expand all functions of $x$ and $c$ in powers
of their deviations from the critical (or, simply, mean)
values and
keep only the first three (constant, linear and the next)
terms in these expansions.
In what follows $x_{cr}=0$, but $c_{cr}$ will take different
values. ``Next'' normally means quadratic, but for homogeneous
polynomials $P_d(x) = x^d$ (giving rise to $Z_{d-1}$-symmetric
Mandelbrot sets) it will actually mean $x^d$.
SSA would be very natural if typical deviations of $x$ and $c$
from the mean values were small. However, while this may seem
reasonable for the study of elementary domains --
except for the first few, they are indeed pretty small in
the $c$-plane (most are not even seen in
Figs.\ref{Mand2}--\ref{0Man4}) -- this assumption is actually wrong:
as we already know from Figs.\ref{a1_10}--\ref{a5_1},
the $x$-variables in solutions to eqs.(\ref{shapeeq}) are not small.
At today's level of knowledge the justification of SSA comes only
{\it a posteriori}, say, from comparison with the experimental
data in s.\ref{accu}.
SSA seems adequate for a {\it phenomenological} description of
the experimentally observed \cite{pdb,Mand}
self-similarity (fractal or scaling) property of the Mandelbrot
sets, but no clear {\it theoretical} reason for this adequacy
is known.
The algebro-geometrical approach of \cite{DM} only adds to the
mystery: the (experimentally) obvious scaling properties
of the universal discriminantal variety call for clear conceptual
explanations.
In any case, today SSA is the only available approach to
the evaluation of Feigenbaum indices and other characteristics of
elementary domains as their order $p \rightarrow \infty$.
Surprisingly or not, it does a very good job in this field:
see s.\ref{Feig} below.
\subsection{SSA for the Mandelbrot Set}
The {\bf first step} of the SSA in application to the Mandelbrot
Set (i.e. to the family $f(x;c) = x^2+c$) is to expand
\begin{eqnarray}
F_p(x) = f_p(c) - x + x^2\gamma_p(c) + O(x^{4})
\label{approx}
\end{eqnarray}
Here it is used that $f'(x=0)=0$; otherwise the expansion in powers
of $x$ should be replaced by one in powers of $x-x_{cr}$,
where $f'(x_{cr})=0$.
The {\bf second step} is to substitute (\ref{approx}) into
(\ref{shapeeq}). This gives:
\begin{eqnarray}
2x\gamma_p(c) = e^{i\phi}
\label{xvsphi}
\end{eqnarray}
and
\begin{eqnarray}
2f_p(c)\gamma_p(c) = 2 x\gamma_p(c) \Big(1-x\gamma_p(c)\Big)
\ \stackrel{(\ref{xvsphi})}{=}\
e^{i\phi}\left(1 - \frac{1}{2}\,e^{i\phi}\right)
\label{shape}
\end{eqnarray}
Eq.(\ref{shape}) provides the answer: the l.h.s. depends on the
shape of the function $f(x;c)$, i.e. on $c$, and (\ref{shape})
describes a curve $c(\phi)$.
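For $p=1$ the truncation (\ref{approx}) is exact (for $f=x^2+c$ one has $f_1(c)=c$ and $\gamma_1(c)=1$), so eq.(\ref{shape}) describes the boundary of the main cardioid in closed form. A minimal Python sketch of this check (the helper name is ours):

```python
import cmath

# For p = 1 the truncation (approx) is exact: f_1(c) = c, gamma_1(c) = 1,
# so eq. (shape) reads 2c = e^{i phi}(1 - e^{i phi}/2) and describes the
# boundary of the main cardioid in closed form.
def c_boundary(phi):
    w = cmath.exp(1j * phi)
    return 0.5 * w * (1.0 - 0.5 * w)

for k in range(12):
    phi = 2.0 * cmath.pi * k / 12.0
    c = c_boundary(phi)
    x = 0.5 * cmath.exp(1j * phi)       # the would-be fixed point
    assert abs(x * x + c - x) < 1e-12   # x is a fixed point of x^2 + c
    assert abs(abs(2 * x) - 1) < 1e-12  # with multiplier e^{i phi} on the unit circle

# cusp and opposite point of the main cardioid
assert abs(c_boundary(0) - 0.25) < 1e-12
assert abs(c_boundary(cmath.pi) - (-0.75)) < 1e-12
```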
Actually this curve can have many disconnected components,
and the {\bf third step} is to consider a particular component,
surrounding a particular root $c_p$ of the Mandelbrot function
$f_p(c) = f(x_{cr},c)$:
\begin{eqnarray}
f_p(c_p) = 0
\end{eqnarray}
Expand (\ref{shape}) around $c_p$:
$$
c = c_p+\sigma,
$$ \vspace{-0.25cm}
\begin{eqnarray}
f_p(c) = f_p(c_p+\sigma) = \dot f_p\,\sigma\left(1 +
\frac{\ddot f_p}{2\dot f_p}\, \sigma + O(\sigma^2)\right)
\label{expans}
\end{eqnarray}
\vspace{-0.25cm}
$$
\gamma_p(c) = \gamma_p(c_p+\sigma) =
\gamma_p\left(1 + \frac{\dot\gamma_p}{\gamma_p}\,\sigma +
O(\sigma^2)\right)
$$
From now on we denote by $f_p$ and $\gamma_p$
the values of the corresponding functions at $c=c_p$:
$\gamma_p \equiv \gamma_p(c_p)$ etc.
Substituting (\ref{expans}) into (\ref{shape}), we get:
\begin{eqnarray}
\frac{\sigma}{r_p}\left(1 -\frac{\xi_p}{2}
\frac{\sigma}{r_p}\right) \approx
e^{i\phi} \left(1 - \frac{1}{2}\,e^{i\phi}\right)
\label{shape1}
\end{eqnarray}
with
\begin{eqnarray}
r_p = \frac{1}{2\dot f_p\gamma_p}
\label{diam}
\end{eqnarray}
and
\begin{eqnarray}
\xi_p = -\frac{1}{\dot f_p\gamma_p}\left(
\frac{\ddot f_p}{2\dot f_p} + \frac{\dot\gamma_p}{\gamma_p}
\right)
\label{xip}
\end{eqnarray}
Eq.(\ref{shape1}) is our final SSA answer for the shape of
the elementary domain of the Mandelbrot Set, surrounding
a point $c_p$. We see that the complex-valued $r_p$ defines the
{\it size} and {\it orientation} of the domain, while its
{\it shape} is fully controlled by the value of $\xi_p$:
for $|\xi_p|\ll 1$ we get a cardioid, while for $|\xi_p-1|\ll 1$
it turns into a circle.
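A short numerical sketch of these two limiting shapes (with the overall scale $r_p$ set to $1$; the function name is ours): for $\xi_p=0$ the boundary is the cardioid $\sigma\propto e^{i\phi}(1-\frac12 e^{i\phi})$, while for $\xi_p=1$ the quadratic equation yields exactly $\sigma\propto e^{i\phi}$, a circle; in both cases the extreme points at $\phi=0$ and $\phi=\pi$ differ by $2r_p$.

```python
import cmath

# Boundary shapes sigma(phi), in units of r_p, in the two limiting cases.
def sigma(phi, xi):
    rhs = cmath.exp(1j * phi) * (1.0 - 0.5 * cmath.exp(1j * phi))
    if xi == 0:
        return rhs                              # cardioid
    return 1.0 - cmath.sqrt(1.0 - 2.0 * rhs)    # xi = 1: root of u(1 - u/2) = rhs

# xi = 1 gives an exact circle, sigma = e^{i phi}
for k in range(8):
    phi = 2.0 * cmath.pi * k / 8.0
    assert abs(sigma(phi, 1) - cmath.exp(1j * phi)) < 1e-12

# in both cases the extreme points differ by 2 (i.e. by 2 r_p)
for xi in (0, 1):
    assert abs(sigma(0, xi) - sigma(cmath.pi, xi) - 2.0) < 1e-12
```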
\bigskip
Thus our problem is reduced to:
-- the check of accuracy of the small-size approximation:\ \
we do this in s.\ref{accu} by comparing the values of $r_p$,
predicted by (\ref{diam}) with their actual values for the
family $f(x;c) = x^2+c$, measured with the help of the
{\it Fractal Explorer} \cite{FE} or defined from the roots of
the relevant resultants;
-- evaluation of parameter $\xi_p$ with the help of (\ref{xip}):
\ in s.\ref{shapes2} we show that indeed
in the small-size approximation $\xi_p=1$ for elementary domains,
which are {\it not} roots of any clusters (i.e. are descendants
of some lower-level domains);
-- demonstration that higher-order cardioids emerge in the special
case of maps with $Z_{d-1}$ symmetry: in this case the symmetry
requires that $\gamma_p=0$ and (\ref{shape1}) gets substituted
by a more sophisticated expression (\ref{shaped}), investigated
in s.\ref{dfam} (emerging shapes are somewhat less ideal than for
$d=2$: deviations can reach tens of percent).
\subsection{Comments}
Note that at the third step we kept terms up to $\sigma^2$
in expansions of the $c$-dependent functions, like we did
at the first step with the $x$ functions.
As usual for this type of method to work successfully
it is important to correlate all approximations:
attempt to make one part of calculation more accurate than
another {\it decreases} the total accuracy.
Note also that keeping only the linear terms, without quadratic
corrections, would make eq.(\ref{xvsphi}) senseless, and
according to the just formulated mnemonic rule one is
forced to keep $\sigma^2$ terms as well.
And indeed, neglecting them would lead to a disaster in the
description of descendant domains: we already learned
in s.\ref{minuscusp} that their
characteristic difference from the root domains is that
$\dot F_p(x,c) = \dot f_p(c) + x^2 \dot\gamma(c) + O(x^4)$
should vanish somewhere at the boundary, while in neglect of the
$\sigma^2$ terms this would be very difficult to achieve
while keeping $\dot f_p = \dot f_p(c_p) \neq 0$ in the center
$c_p$ of the domain.
In fact, the difference between descendant and root domains
is exactly in $\sigma^2$-terms: their relative magnitude
is measured by parameter $\xi_p$, and it is negligibly small
for root domains and close to unity for descendant ones.
\section{On accuracy of the small-size approximation for
the family $f = x^2 + c$ \label{accu}}
\setcounter{equation}{0}
Numerical characteristics of the lowest (in divisor forest)
elementary domains of the Mandelbrot set ${\cal M}_2$ are
represented in the following table (positions of these domains
are marked by arrows in Fig.\ref{6th}).
\Figeps{6th}
{500,294}
{\footnotesize Mandelbrot Set from
Fig.\ref{Mand2} with arrows, pointing at particular elementary
domains, represented in the table in s.\ref{accu}. {\it Trails}
are also seen in pictures with increased resolutions. Trails are
densely populated by clusters. The theory of trails remains an
open subject. An interesting task is to study how a trail between
some two clusters is formed, when we travel across Universal
Mandelbrot Space as in s.\ref{23interp}, and these two clusters
emerge from a splitting of a single cluster. Another interesting
problem is to find the locations of the {\it crossing points},
where many trails meet together, examples of such crossings are
points ${\bf A}$ and ${\bf B}$.}
\bigskip
{\footnotesize
$$
\hspace{-0.6cm}
\begin{array}{|c|c|c|c|c|c|c|c|c|}
\hline
&&&&&&&&\\
p & 1 & 2 & 3 & 3 & 4 & 4 & 4 & 4 \\
&&&&&&&&\\
\hline
&&&&&&&&\\
c_p
&0&-1&-1.754877667&-0.1225611669&-1.940799807
&-0.1565201668&0.2822713908&-1.310702641\\
&&&&\pm 0.7448617670i&
&\pm 1.032247109i&\pm 0.5300606176i&\\
&&&&&&&&\\
\hline
&&&&&&&&\\
{\rm distance}
&{\bf 0}&{\bf 1}&{\bf 0}&{\bf 1}&{\bf 0}&{\bf 0}&{\bf 1}&{\bf 2}\\
{\rm from\ the\ root}
&(1)&(2,1)&(3)&(3_\pm,1)&(4_1)&(4_{2\pm})&(4_{\pm},1)&(4,2,1)\\
&&&&&&&&\\
\hline\hline
&&&&&&&&\\
\dot f_p
&1&-1&-5.649435914&-1.67528205&-25.53361247
&-9.826826127&-2.273407347&1.734079638\\
&&&&\mp 1.1245590i&
&\mp 1.391722418i&\mp 2.878229429i&\\
&&&&&&&&\\
\hline
&&&&&&&&\\
\ddot f_p
&0&2&17.89661552&-5.9483077&247.9985718
&-13.82424296&-30.39333448&-15.56341649\\
&&&&\pm 6.7473541i&
&\pm 86.41684550i&\mp 9.578562508i&\\
&&&&&&&&\\
\hline
&&&&&&&&\\
\gamma_p
&1&-2&-9.29887185&-1.35056409&-39.49472178
&-10.55453437&-0.142465954&4.88872230\\
&&&&\mp 2.249118i&
&\mp 5.448066568i&\mp 3.098932717i&\\
\hline
&&&&&&&&\\
\dot \gamma_p
&0&2&22.91612617&-7.458063&-39.49472178
&-42.68642404&-14.72856928&-21.82510289\\
&&&&\pm 3.767907i&
&\pm 80.90620560i&\mp 16.79943562i&\\
\hline\hline
&&&&&&&&\\
\xi_p
&{ 0}&{ 1}&{ 0.07706201109}
&{ 1.0212516}&{ 0.01367007063}
&{ 0.05842559742}&{ 1.011598455}&{ 1.055967271}\\
{\rm from}\ (\ref{xip})
&&&&{\pm 0.04763015i}&
&{\pm 0.08449808566i}&{\pm 0.07045065295i}&\\
&{\bf \ll 1}&{\bf =1} &{\bf \ll 1}&
{\bf \approx 1}&{\bf \ll 1}&{\bf |\xi_p|\ll 1}
&{\bf \approx 1}&{\bf \approx 1}\\
&&&&&&&&\\
\hline
&&&&&&&&\\
{\bf 2r_p} = (\dot f_p \gamma_p)^{-1}
&{\bf 1}&{\bf 0.5}&{\bf 0.0190355}
&{\bf -0.009518}&{\bf 0.00099163}
&{\bf 0.0069178}&{\bf -0.066394}
&{\bf 0.1179602} \\
{\rm from}\ (\ref{diam})
&&&&{\bf \mp 0.18867i}&
&{\bf \mp 0.0049095i}&{\bf \mp 0.057585i}&\\
&&&&&&&&\\
\hline\hline
&&&&&&&&\\
c_{p+}
&0.25&-0.75&-1.75&-0.125&-1.940550789
&-0.1547246055&0.25&-1.25\\
&&&&\pm 0.6495190528i&
&\pm 1.031047228i&\pm 0.5i&\\
&&&& (=\pm 3\sqrt{3}i/8)&&&&\\
\hline
&&&&&&&&\\
c_{p-}
&-0.75&-1.25&-1.768529153&-0.1157354238
&-1.941537753&-0.1613575037&0.3161758500&-1.368098940\\
&&&&\pm 0.8379990280i&&\pm 1.036031085i&\pm 0.5574717760i&\\
\hline
&&&&&&&&\\
c_{p+}-c_{p-}
&{\bf 1}&{\bf 0.5}&{\bf 0.018529153}&{\bf -0.0092645762}
&{\bf 0.000986964}&{\bf 0.0066328982}
&{\bf -0.0661758500}&{\bf 0.118098940}\\
&&&&{\bf \mp 0.1884799750i}&&{\bf \mp 0.004983857i}
&{\bf \mp 0.057471776i}&\\
&&&&&&&&\\
\hline
&&&&&&&&\\
\kappa &{\bf 4}&{\bf 2}&{\bf 4}&{\bf 2}
&{\bf 4}&{\bf 4}&{\bf 2}&{\bf 2}\\
(c_{p+}-c_{p})\kappa
&{\bf 1}&{\bf 0.5}&{\bf 0.019510668}&{\bf -0.0048776662}
&{\bf 0.000996072}&{\bf 0.0071822452}&{\bf -0.0645427816}
&{\bf 0.121405282}\\
&&&&{\bf \mp 0.1906854280i} &&{\bf \mp 0.004799524i}
&{\bf \mp 0.0601212352i}&\\
&&&&&&&&\\
\hline
\end{array}
\hspace{+0.6cm}
$$
}
\noindent
The entries in the last two rows
should both be compared with $2r_p$, which is
calculated within SSA in the middle part of the table.
Since the shapes of root and descendant domains
are different, parameters $\kappa$ in the last row
are also different: $\kappa = 4$ for cardioid-shape
root domains and $\kappa=2$ for circle-shape descendant
domains.
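The $\kappa$-rule can be checked on the two columns of the table where everything is known exactly, the main cardioid ($p=1$, root, $\kappa=4$) and the period-2 disk ($p=2$, descendant, $\kappa=2$):

```python
# (c_p, c_{p+}, c_{p-}, kappa, 2 r_p) for the two exactly known columns
cases = [
    (0.0, 0.25, -0.75, 4, 1.0),    # p = 1: main cardioid, kappa = 4
    (-1.0, -0.75, -1.25, 2, 0.5),  # p = 2: period-2 disk, kappa = 2
]
for c_p, c_plus, c_minus, kappa, two_r in cases:
    assert abs((c_plus - c_p) * kappa - two_r) < 1e-12
    assert abs((c_plus - c_minus) - two_r) < 1e-12
```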
{\footnotesize
$$
\begin{array}{|c|c|c|c|c|c|c|}
\hline
&&&&&&\\
p
&5&5&5&5&5&5\\
&&&&&&\\
\hline
&&&&&&\\
c_p
&-1.625413725&-1.860782522&-1.985424253
&0.3592592248&-0.04421235770&-0.1980420994\\
&&&&\pm 0.6425137371i&\pm 0.9865809763i&\pm 1.100269537i\\
&&&&&&\\
\hline
&&&&&&\\
{\rm distance}
&{\bf 0}&{\bf 0}&{\bf 0}&{\bf 0}&{\bf 0}&{\bf 0}\\
{\rm from\ the\ root}
&(5_1)&(5_2)&(5_3)&(5_{4\pm})&(5_{5\pm})&(5_{6\pm})\\
&&&&[\ldots,8,4_{\pm},1]&[\ldots,12,6,3_\pm,1]
&[\ldots,12,6,3_\pm,1]\\
&&&&&&\\
\hline\hline
&&&&&&\\
\dot f_p
&-12.346786&27.952811&-106.51134
&-15.264582&-1.4606030&-34.451623\\
&&&&\mp 4.049590i&\mp 15.65788i&\pm 8.245915i\\
&&&&&&\\
\hline
&&&&&&\\
\ddot f_p
&32.445339&-211.46971&3811.7679
&-270.4039&-190.8108&421.6135\\
&&&&\pm 144.8293i&\mp 151.4911i&\pm 736.0759i\\
&&&&&&\\
\hline
&&&&&&\\
\gamma_p
&-19.95443&45.84473&-161.34688
&-9.4954135&6.050675&-41.054355\\
&&&&\mp 9.112127i&\mp 17.26642i&\mp 3.02090i\\
&&&&&&\\
\hline
&&&&&&\\
\dot \gamma_p
&17.8489&-272.6037&5625.2834
&-237.5248&-108.0265&207.2558\\
&&&&\mp 0.045178i&\mp 230.0270i&\pm 911.8688i\\
&&&&&&\\
\hline\hline
&&&&&&\\
\xi_p
&{0.008963643}&{0.007591840}&{0.00306997}
&{0.02825982}&{0.03863736}&{0.00311728}\\
{\rm from}\ (\ref{xip})
&&&&{\pm 0.1305419i}&{\mp 0.06450516i}
&{\pm 0.02358242i}\\
&{\bf \ll 1}&{\bf \ll 1}&{\bf \ll 1}
&{\bf |\xi_p|\ll 1}&{\bf |\xi_p|\ll 1}&{\bf |\xi_p|\ll 1}\\
&&&&&&\\
\hline
&&&&&&\\
{\bf 2r_p} = (\dot f_p \gamma_p)^{-1}
&{\bf 0.004058884}&{\bf 0.0007803423}&{\bf 0.0000581894}
&{\bf 0.00250125}&{\bf -0.0033726}&{\bf 0.00067682}\\
{\rm from}\ (\ref{diam})
&&&&{\bf\mp 0.00411026i}&{\bf\pm 0.00083981i}
&{\bf\pm 0.00011025i}\\
&&&&&&\\
\hline\hline
&&&&&&\\
c_{p+}
&-1.624396989&-1.860586973&-1.985409691&0.3599331332
&-0.04506136598&-0.1978729467\\
&&&&\pm 0.6415066668i&\pm 0.9868115622i&\pm 1.100298438i\\
&&&&&&\\
\hline
&&&&&&\\
\kappa &{\bf 4}&{\bf 4}&{\bf 4}&{\bf 4}
&{\bf 4}&{\bf 4}\\
(c_{p+}-c_{p})\kappa
&{\bf 0.004066944}&{\bf 0.000782196}&{\bf 0.000058248}
&{\bf 0.0026956336}&{\bf -0.00339603312}&{\bf 0.0006766108}\\
&&&&{\bf\mp 0.0040282812i}&{\bf\pm 0.0009223436i}
&{\bf\pm 0.000115604i}\\
&&&&&&\\
\hline
\end{array}
$$
}
{\footnotesize
$$
\hspace{-1.0cm}
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
&&&&&&&\\
p
&5&5&5&6&6&6&6\\
&&&&&&&\\
\hline
&&&&&&&\\
c_p
&-1.256367930&0.3795135880&-0.5043401754
&-1.476014643&-1.907280091&-1.966773216&-1.996376138\\
&\pm 0.3803209635i&\pm 0.3349323056i&\pm 0.5627657615i
&&&&\\
&&&&&&&\\
\hline
&&&&&&&\\
{\rm distance}
&{\bf 0}&{\bf 1}&{\bf 1}
&{\bf 0}&{\bf 0}&{\bf 0}&{\bf 0}\\
{\rm from\ the\ root}
&(5_{7\pm})&(5_{1\pm},1)&(5_{2\pm},1)
&(6_1)&(6_2)&(6_3)&(6_4)\\
&[\ldots,12,6_\pm,2,1]&&&&&&\\
\hline\hline
&&&&&&&\\
\dot f_p
&-9.720195&-2.700383&-3.449207
&9.557119&-73.91417&135.0997&-431.94389\\
&\mp 11.81330i&\mp 5.404227i&\mp 1.112266i
&&&&\\
&&&&&&&\\
\hline
&&&&&&&\\
\ddot f_p
&-59.44225&-48.15074&9.823111&-106.133658
&491.1284&-2562.839&60341.247\\
&\pm 171.0187i&\mp 78.38748i&\pm 30.87527i
&&&&\\
&&&&&&&\\
\hline
&&&&&&&\\
\gamma_p
&-5.71067&0.809902&-2.871463&20.23824
&-115.0273&207.856&-649.8852\\
&\mp 21.95456i&\mp 3.399590i&\mp 2.098768i
&&&&\\
&&&&&&&\\
\hline
&&&&&&&\\
\dot \gamma_p
&-165.8458&2.830436&0.04567&-150.6058
&576.3060&-3609.262&90085.134\\
&\pm 183.4574i&\mp 47.49326i&\pm 29.22689i
&&&&\\
&&&&&&&\\
\hline\hline
&&&&&&&\\
\xi_p
&{ 0.0176798}&{ 1.00083}&{ 0.984283}
&{ 0.067182}&0.00098004&0.00095613&0.0007426\\
{\rm from}\ (\ref{xip})
&{\mp 0.0451181i}&{\pm 0.0866142i}
&{\mp 0.000539978i}&&&&\\
&{\bf |\xi_p|\ll 1}&{\bf \approx 1}&{\bf \approx 1}&
{\bf\ll 1}&{\bf\ll 1}&{\bf\ll 1}&{\bf\ll 1}\\
&&&&&&&\\
\hline
&&&&&&&\\
{\bf 2r_p} = (\dot f_p \gamma_p)^{-1}
&{\bf -0.001692541}&{\bf -0.04612246}&{\bf 0.04556086}
&{\bf 0.005170116}&{\bf 0.000117617}&{\bf 0.000035611}
&{\bf 0.0000035623}\\
{\rm from}\ (\ref{diam})
&{\bf \mp 0.00233202i}&{\bf \mp 0.0107757i}&
{\bf \mp 0.0627926i}&&&&\\
&&&&&&&\\
\hline\hline
&&&&&&&\\
c_{p+}
&-1.256801994&0.3567627458&-0.4817627458&-1.47469537780
&-1.90725067795&-1.96676431090&-1.99637524690\\
&\pm 0.3797412022i&\pm 0.3285819450i&\pm 0.5316567552i
&&&&\\
&&&&&&&\\
\hline
&&&&&&&\\
\kappa &{\bf 4}&{\bf 2}&{\bf 2}&{\bf 4}
&{\bf 4}&{\bf 4}&{\bf 4}\\
(c_{p+}-c_{p})\kappa
&{\bf -0.0017363}&{\bf -0.04550168}&{\bf 0.04515486}
&{\bf 0.00527706}&{\bf 0.000117652}
&{\bf 0.000035620}&{\bf 0.000003564}\\
&{\bf \mp 0.00231905i}&{\bf \mp 0.01270072i}
&{\bf \mp 0.06221801i}&&&&\\
&&&&&&&\\
\hline
\end{array}
\hspace{+1.0cm}
$$
}
{\footnotesize
$$
\begin{array}{|c|c|c|c|c|c|c|}
\hline
&&&&&&\\
p
&6&6&6&6&6&6\\
&&&&&&\\
\hline
&&&&&&\\
c_p
&0.4433256334&0.3965345700&0.3598927390
&-0.01557038602&-0.1635982616&-0.2175267470\\
&\pm 0.3729624167i&\pm 0.6041818105i&\pm 0.6847620202i
&\pm 1.020497366i&\pm 1.097780643i&\pm 1.114454266i\\
&&&&&&\\
\hline
&&&&&&\\
{\rm distance}
&{\bf 0}&{\bf 0}&{\bf 0}&{\bf 0}&{\bf 0}&{\bf 0}\\
{\rm from\ the\ root}
&(6_{5\pm})&(6_{6\pm})&(6_{7\pm})&(6_{8\pm})
&(6_{9\pm})&(6_{10\pm})\\
&[\ldots,10,5_{1\pm},1]&[\ldots,8,4_\pm,1]&[\ldots,8,4_\pm,1]
&[\ldots,6,3_\pm,1]&[\ldots,6,3_\pm,1]&[\ldots,6,3_\pm,1]\\
&&&&&&\\
\hline\hline
&&&&&&\\
\dot f_p
&-22.131316&-7.730347&-45.44965&-47.08247&-13.70794&-94.05752\\
&\mp 8.549589i&\mp 29.76415i&\pm 11.11460i&\mp 42.3870i
&\mp 59.0423i&\pm 65.91697i\\
&&&&&&\\
\hline
&&&&&&\\
\ddot f_p
&-868.64319&-427.073&-431.8717&-3003.383&-2600.199&7694.999\\
&\mp 90.27354i&\mp 1083.050i&\pm 2162.074i&\pm 437.063i
&\mp 913.171i&\pm 2405.039i\\
&&&&&&\\
\hline
&&&&&&\\
\gamma_p
&-8.156137&6.87116&-37.7109&-28.2862&7.68865&-125.4789\\
&\mp 11.41327i&\mp 23.4912i&\mp 9.24215i&\mp 64.5198i
&\mp 69.5663i&\pm 40.9618i\\
&&&&&&\\
\hline
&&&&&&\\
\dot \gamma_p
&-389.998&151.331&-1097.368&-3129.19&-2370.323&7560.924\\
&\mp 307.274i&\mp 866.959i&\pm 1391.979i&\mp 931.571i
&\mp 1950.970i&\pm 5049.389i\\
&&&&&&\\
\hline\hline
&&&&&&\\
\xi_p
&0.00405&0.07215&-0.01588&0.014630&0.010020&-0.001788\\
{\rm from}\ (\ref{xip})
&\pm 0.16159i&\mp 0.01058i&\pm 0.034629i&\pm 0.005837i
&\mp 0.01208i&\pm 0.006621i\\
&{\bf |\xi_p| \ll 1}&{\bf |\xi_p| \ll 1}&{\bf |\xi_p| \ll 1}
&{\bf |\xi_p| \ll 1}&{\bf |\xi_p| \ll 1}&{\bf |\xi_p| \ll 1}\\
&&&&&&\\
\hline
&&&&&&\\
{\bf 2r_p} = (\dot f_p \gamma_p)^{-1}
&{\bf 0.0007487}&{\bf -0.0013280}&{\bf 0.00055046}
&{\bf -0.00007044}&{\bf -0.00023408}&{\bf 0.000039602}\\
{\rm from}\ (\ref{diam})
&{\bf \mp 0.0029099i}&{\bf \pm 0.00004046i}
&{\bf \mp 0.000000276i}&{\bf \mp 0.0002127i}
&{\bf\mp 0.00002776i}&{\bf\pm 0.00005275i}\\
&&&&&&\\
\hline\hline
&&&&&&\\
c_{p+}
&0.44355069141&0.39619446067&0.3600296164
&-0.01558797607&-0.163657003678&-0.217516881\\
&\pm 0.372246821i&\pm 0.60419336859i&\pm 0.684763498i
&\pm 1.0204438975i&\pm 1.0977739136i&\pm 1.114467467i\\
&&&&&&\\
\hline
&&&&&&\\
\kappa &{\bf 4}&{\bf 2}&{\bf 2}&{\bf 4}
&{\bf 4}&{\bf 4}\\
(c_{p+}-c_{p})\kappa
&{\bf 0.000900232}&{\bf -0.001360437}&{\bf 0.00054751}
&{\bf -0.000070360}&{\bf -0.000234968}&{\bf 0.000039463}\\
&{\bf\mp 0.00286238i}&{\bf\pm 0.00004623i}
&{\bf \pm 0.00000591i}&{\bf\mp 0.00021387i}
&{\bf\mp 0.00002692i}&{\bf\pm 0.00005280i}\\
&&&&&&\\
\hline
\end{array}
$$
}
{\footnotesize
$$
\begin{array}{|c|c|c|c|c|c|c|}
\hline
&&&&&&\\
p
&6&6&6&6&6&6\\
&&&&&&\\
\hline
&&&&&&\\
c_p
&-0.5968916446&-1.284084926&0.3890068406
&-1.772892903&-0.1134186559&-1.138000667\\
&\pm 0.6629807446i&\pm 0.4272688960i&\pm 0.2158506509i
&&\pm 0.8605694725i
&\pm 0.2403324013i \\
&&&&&&\\
\hline
&&&&&&\\
{\rm distance}
&{\bf 0}&{\bf 0}&{\bf 1}&{\bf 1}&{\bf 2}&{\bf 2}\\
{\rm from\ the\ root}
&(6_{11\pm})&(6_{12\pm})&(6_\pm,1)&(6,3)
&(6,3_{\pm},1)&(6_\pm,2,1)\\
&[\ldots,10,5_{2\pm},1]&[\ldots, 12,6_\pm,2,1]&&&&\\
&&&&&&\\
\hline\hline
&&&&&&\\
\dot f_p
&-21.17430&-49.4808&-2.84507&6.02446709
&2.710416&3.0373845\\
&\mp 1.239334i&\mp 43.51796i&
\mp 8.81348i&&\pm 2.01496i&\pm 0.674370i\\
&&&&&&\\
\hline
&&&&&&\\
\ddot f_p
&258.9018&-401.263&-13.2205&-699.2133940
&53.81286&-9.154561\\
&\pm 390.3375i&\pm 2355.001i&\mp 228.2727i
&&\mp 56.37906i&\mp 50.431515i\\
&&&&&&\\
\hline
&&&&&&\\
\gamma_p
&-19.86933&-40.9497&1.51341&19.09634167&3.35657&3.882374 \\
&\mp 7.65439i&\mp 85.4133i&
\mp 3.51012i&&\pm 5.72766i&\pm 4.659455i\\
&&&&&&\\
\hline
&&&&&&\\
\dot \gamma_p
&122.1821&-1796.0107&47.3746&-1106.4705637
&71.1246&28.408033\\
&\pm 435.7183i&\pm 2809.8473i&
\mp 74.1184i&&\mp 30.8637i&\mp 64.35131i\\
&&&&&&\\
\hline\hline
&&&&&&\\
\xi_p
&0.062667&0.005776&0.99288&1.00806&1.03542&1.0497\\
{\rm from}\ (\ref{xip})
&\pm 0.034437i&\mp 0.006297i&
\pm 0.09882i&&\pm 0.013004i&\pm 0.0438i\\
&{\bf |\xi_p| \ll 1}&{\bf |\xi_p| \ll 1}
&{\bf \approx 1}&{\bf \approx 1}&{\bf \approx 1}&{\bf \approx 1}\\
&&&&&&\\
\hline
&&&&&&\\
{\bf 2r_p} = (\dot f_p \gamma_p)^{-1}
&{\bf 0.0020161}&{\bf -0.000043399}
&{\bf -0.0281207}
&{\bf 0.008692230}&{\bf -0.0048602}&{\bf 0.0242924}\\
{\rm from}\ (\ref{diam})
&{\bf \mp 0.0009153i}&{\bf \mp 0.00015422i}&
{\bf \pm 0.0026745i}&&{\bf\mp 0.044335i}&{\bf \mp 0.047098i}\\
&&&&&&\\
\hline\hline
&&&&&&\\
c_{p+}
&-0.5963742242&-1.284095877
&0.375&-1.768529153&-0.1157354238&-1.125\\
&\pm 0.6627532528i&\pm 0.4272302898i&\pm 0.2165063509i
&&\pm 0.837999027i&\pm 0.2165063509i\\
&&&&&&\\
\hline
&&&&&&\\
\kappa &{\bf 4}&{\bf 4}&{\bf 2}&{\bf 2}&{\bf 2}&{\bf 2}\\
(c_{p+}-c_{p})\kappa
&{\bf 0.00206968}&{\bf -0.00004380}&{\bf -0.028013681}&{\bf 0.008727500}
&{\bf -0.0046335357}&{\bf 0.0260013340}\\
&{\bf\mp 0.000909967i}&{\bf\mp 0.00015442i}
&{\bf\pm 0.001311400i}&&{\bf\mp 0.045140890254i}
&{\bf\mp 0.0476521007i}\\
&&&&&&\\
\hline
\end{array}
$$
}
\bigskip
A few comments to the table are now in order:
We present the values of the parameters with an accuracy
which far exceeds our needs in the present paper,
but these data can be used in future investigations.
One should not be surprised by the high accuracy of experimental
data: since this is a computer experiment over a {\it Platonic}
entity, the accuracy is unlimited. Moreover, the numbers in the last
column can also be reproduced as zeroes of resultants \cite{DM}:
though this is a difficult calculation (beyond the capabilities of
MAPLE on an ordinary laptop already for $p > 6$), its accuracy is in
principle unlimited.
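For instance, the three $p=3$ centers in the table are the non-trivial zeroes of $f_3(c) = (c^2+c)^2+c = c\,(c^3+2c^2+c+1)$, and a few lines of numpy reproduce them:

```python
import numpy as np

# non-trivial p = 3 centers: zeroes of c^3 + 2 c^2 + c + 1
roots = np.roots([1.0, 2.0, 1.0, 1.0])
roots = roots[np.argsort(np.abs(roots.imag))]  # real root first

# real root: center of the root domain (3)
assert abs(roots[0].imag) < 1e-10
assert abs(roots[0].real - (-1.754877667)) < 1e-8

# complex pair: centers of the descendants (3_pm, 1)
for r in roots[1:]:
    assert abs(r.real - (-0.1225611669)) < 1e-8
    assert abs(abs(r.imag) - 0.7448617670) < 1e-8
```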
$c_+$ is closer to the root than $c_-$:
$c_+$ is the point of merging with the parent domain
or the cusp position if the domain is itself a root,
while $c_-$ is the ``opposite'' point, i.e. the
merging point with the next descendant of order $2$.
Starting from $p=5$ there are many root domains of the type
$(5)$, $(6)$ etc, and -- starting from $p=3$ -- many descendants
$(3,1)$, $(4,1)$,\ldots, $(6,2,1)$ etc:
$\alpha$-parameters begin to emerge. The two domains $(3_\pm,1)$
differ by complex conjugation only, but in other cases
systematization in the $\alpha$-sector is less straightforward:
their sizes and orientations depend essentially on $\alpha$.
Still, because of the symmetry of the Mandelbrot Set
under complex conjugation, the domains with centers at
non-real $c$ come in pairs.
Such complex conjugate domains are
always labeled by indices $\pm$.
Positions of the domains in divisor forest are shown
in the second line of the third row.
For root domains the original direction of the {\it trail},
connecting it to the central cluster,
is also shown in square brackets in the third row.
Of course, all root domains with centers at real values
of $c$ belong to the trail, originating at
$[\ldots, 2^n, \ldots, 16,8,4,2,1]$, and it is not
mentioned in the table.
\bigskip
From this table we observe:
-- the good accuracy of the relation (see the last three columns)
$$
2r_p\ \approx\ c_{p+}-c_{p-}
$$
between the theoretically-predicted
(in the small-size approximation) complex-valued size
$r_p$ of an elementary domain and the difference between
experimentally found extreme points $c_{p+}$ and $c_{p-}$;
-- the correlation between the value of $\xi_p$ and
the distance of elementary domain from the root of the
corresponding cluster (the corresponding columns are boldfaced):
$\xi_p$ is tiny for the roots (distance $= 0$)
and close to unity for all descendants (distance $\geq 1$).
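The SSA inputs in the table are straightforward to recompute: writing $f^{\circ k}(x) = a_k + b_k x^2 + O(x^4)$ gives the exact recursions $a_{k+1}=a_k^2+c$, $b_{k+1}=2a_kb_k$ (with $a_1=c$, $b_1=1$), which can be differentiated in $c$. A minimal Python sketch checking the $p=3$ column (the function name is ours):

```python
def ssa_data(c, p):
    """f_p, gamma_p and their c-derivatives for f(x;c) = x^2 + c."""
    a, b = c, 1.0        # a_k = f_k(c),  b_k = gamma_k(c)
    da, db, dda = 1.0, 0.0, 0.0
    for _ in range(p - 1):
        a, b, da, db, dda = (
            a * a + c,                 # a_{k+1} = a_k^2 + c
            2.0 * a * b,               # b_{k+1} = 2 a_k b_k
            2.0 * a * da + 1.0,        # d a_{k+1} / dc
            2.0 * (da * b + a * db),   # d b_{k+1} / dc
            2.0 * (da * da + a * dda)  # d^2 a_{k+1} / dc^2
        )
    return a, b, da, db, dda

c3 = -1.754877667                       # center of the (3) domain from the table
f, g, df, dg, ddf = ssa_data(c3, 3)
assert abs(f) < 1e-7                    # f_3(c_3) = 0
assert abs(df - (-5.649435914)) < 1e-5  # \dot f_3
assert abs(ddf - 17.89661552) < 1e-4    # \ddot f_3
assert abs(g - (-9.29887185)) < 1e-5    # \gamma_3
assert abs(dg - 22.91612617) < 1e-4     # \dot\gamma_3

xi = -(ddf / (2.0 * df) + dg / g) / (df * g)  # eq. (xip)
two_r = 1.0 / (df * g)                        # eq. (diam)
assert abs(xi - 0.07706201109) < 1e-6
assert abs(two_r - 0.0190355) < 1e-6
```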
\section{Why $\xi_p\approx 1$ for descendants:
{\it la raison d'etre} for circles \label{shapes2}}
\setcounter{equation}{0}
\subsection{Approach to description of descendants}
We can study descendants of a given elementary domain within the
same small-size approximation (SSA),
simply iterating the approximate expression
$$
f^{\circ p}(x) \approx f_p(c) + \gamma_p(c) x^2
$$
to
\begin{eqnarray}
f^{\circ (2p)}(x) \approx f_{2p}(c) + \gamma_{2p}(c)x^2 \approx
f_p(c) + \gamma_p(c)\Big(f_p(c) + \gamma_p(c)x^2\Big)^2 \approx
f_p(c)\Big(1+f_p(c)\gamma_p(c)\Big) + 2f_p(c)\gamma_p^2(c)x^2
\label{f2pc}
\end{eqnarray}
and so on.
Thus in this framework
\begin{eqnarray}
f_{2p}(c) \approx f_p(c)\Big(1+f_p(c)\gamma_p(c)\Big),\nonumber \\
\gamma_{2p}(c) \approx 2f_p(c)\gamma^2_p(c)
\label{2it}
\end{eqnarray}
The center $c_{2p}$ is a non-trivial root of this new $f_{2p}(c)$:
\begin{eqnarray}
f_p(c_{2p}) \gamma_p(c_{2p}) = -1
\label{gampc2p}
\end{eqnarray}
This procedure -- if at all justifiable -- can be valid only for
$c_{2p}$, associated with a {\it descendant} domain of $c_p$
(but not a {\it root} domain of some new cluster),
since it relies on SSA
and assumes that $c_{2p}$ is very close to $c_p$.
The shift $\sigma_{2p}\equiv c_{2p}-c_p$ can actually be found
in SSA by solving (\ref{gampc2p}) iteratively:
\begin{eqnarray}
\frac{\sigma_{2p}}{2r_p}
\left(1 - \xi_p\frac{\sigma_{2p}}{2r_p}\right) = -1
\label{c2p}
\end{eqnarray}
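Two simple checks of (\ref{c2p}), with the numbers taken from the table: for $p=1$ ($r_1=\frac12$, $\xi_1=0$, $c_1=0$) it gives $\sigma_2=-2r_1=-1$, i.e. exactly the center $c_2=-1$ of the period-2 disk; for $p=2$ ($r_2=\frac14$, $\xi_2=1$) the relevant root is $\sigma_4=(1-\sqrt5)\,r_2$, to be compared with the $(4,2,1)$ entry $c=-1.310702641$:

```python
import math

# p = 1: xi = 0 branch of eq. (c2p) gives sigma_2 = -2 r_1
c2 = 0.0 - 2.0 * 0.5
assert c2 == -1.0                       # exact center of the period-2 disk

# p = 2: xi = 1 branch gives sigma_4 = (1 - sqrt(5)) r_2
c4_ssa = c2 + (1.0 - math.sqrt(5.0)) * 0.25
# SSA prediction vs. the exact (4,2,1) center from the table
assert abs(c4_ssa - (-1.310702641)) < 5e-3
```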
Now we are going to demonstrate that $\xi_{2p}$, evaluated for
{\it such} $c_{2p}$ within SSA, is indeed equal to unity
(this is no more than a consistency check, because validity of
the SSA itself will not be theoretically justified).
Afterwards this calculation is extended to
descendant $c_{mp}$ for all $m$.
Further, eq.(\ref{c2p}) and its generalizations for $\sigma_{mp}$
are used in s.\ref{Feig} to evaluate SSA approximations of various
Feigenbaum indices.
Finally, in s.\ref{dfam}, we briefly consider the case of
specific $Z_{d-1}$-symmetric $f(x;c) = x^d+c$ families.
\subsection{Evaluation of $\xi_{2p}$ for
a descendant \label{calcul}}
This is a rather straightforward calculation.
From (\ref{f2pc}) we obtain -- in the small-size approximation,
after substitution of $c=c_{2p}$ and (\ref{gampc2p}), and
after expanding functions of $c_{2p}$ in powers of
$\sigma_{2p}=c_{2p}-c_p$ from (\ref{c2p}) --
the set of recurrent expressions:
\begin{eqnarray}
\dot f_{2p}(c) = \dot f_p(c)\Big(1 + 2f_p(c)\gamma_p(c)\Big) +
f^2_p(c)\dot\gamma_p(c)
\ \ \ \stackrel{\ c=c_{2p}}{\Longrightarrow}\ \ \
\dot f_{2p} \equiv \dot f_{2p}(c_{2p}) = \nonumber \\
= -\dot f_p(c_{2p}) -
f_p(c_{2p})\frac{\dot\gamma_p}{\gamma_p}(c_{2p}) \approx
-\dot f_p\left(1+\sigma_{2p}\left[\frac{\ddot f_p}{\dot f_p} +
\frac{\dot\gamma_p}{\gamma_p}\right]\right)
\label{appr1}
\end{eqnarray}
\begin{eqnarray}
\gamma_{2p}(c) = 2f_p(c)\gamma^2_p(c)
\ \ \ \stackrel{(\ref{gampc2p})}{\Longrightarrow}\ \ \
\gamma_{2p} \equiv \gamma_{2p}(c_{2p})
= -2\gamma_p(c_{2p}) = -2\gamma_p\left(1+\sigma_{2p}
\frac{\dot\gamma_p}{\gamma_p}\right)
\label{appr2}
\end{eqnarray}
\begin{eqnarray}
\ddot f_{2p}(c) = \ddot f_p(c)\Big(1 + 2f_p(c)\gamma_p(c)\Big) +
2\dot f_p^2(c)\gamma_p(c) + 4f_p(c)\dot f_p(c)\dot\gamma_p(c)
+ f^2_p(c)\ddot\gamma_p(c) \ \ \ \Longrightarrow\ \ \
\ddot f_{2p} = 2\dot f^2_p\gamma_p
\end{eqnarray}
\begin{eqnarray}
\dot\gamma_{2p}(c) = 2\dot f_p(c)\gamma^2_p(c) +
4f_p(c)\gamma_p(c)\dot\gamma_p(c) \ \ \ \Longrightarrow\ \ \
\dot\gamma_{2p} = 2\dot f_p\gamma^2_p
\label{appr4}
\end{eqnarray}
Substituting these expressions into (\ref{diam}) and (\ref{xip}),
we obtain:
\begin{eqnarray}
r_{2p} = \frac{1}{2\dot f_{2p}\gamma_{2p}} \approx
\frac{1}{4\dot f_{p}\gamma_{p}
\left\{1+\sigma_{2p}\left(\frac{\ddot f_p}{\dot f_p} +
2\frac{\dot\gamma_p}{\gamma_p}\right)\right\}}
= \frac{r_p}{2\left(1 - \xi_p\frac{\sigma_{2p}}{r_p}\right)}
\label{r2p}
\end{eqnarray}
and
\begin{eqnarray}
\xi_{2p} = -\frac{1}{\dot f_{2p}\gamma_{2p}}
\left(\frac{\ddot f_{2p}}{2\dot f_{2p}} +
\frac{\dot\gamma_{2p}}{\gamma_{2p}}\right) \approx
-\frac{1}{\dot f_{2p}\gamma_{2p}}\left(
\frac{2\dot f^2_p\gamma_p}{-2\dot f_p} +
\frac{2\dot f_p\gamma^2_p}{-2\gamma_p}\right) =
2\frac{\dot f_{p}\gamma_{p}}{\dot f_{2p}\gamma_{2p}}
\approx 1
\label{xi2p}
\end{eqnarray}
as required.
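The result can also be verified directly on the exact period-2 data: for $f_2(c)=c(c+1)$ and $\gamma_2(c)=2c$ (used again in s.\ref{shapes2} below) the value of $\xi$ at $c_2=-1$ from (\ref{xip}) is exactly unity, as a two-line check confirms:

```python
# xi at c_2 = -1 for f_2(c) = c(c+1), gamma_2(c) = 2c
c = -1.0
f2, df2, ddf2 = c * (c + 1), 2 * c + 1, 2.0
g2, dg2 = 2 * c, 2.0
xi2 = -(ddf2 / (2 * df2) + dg2 / g2) / (df2 * g2)  # eq. (xip)
assert f2 == 0.0   # c_2 is a root of f_2
assert xi2 == 1.0  # exact unity, as in the p = 2 column of the table
```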
\subsection{The rules of SSA}
Note that within SSA we consider $r_p\ddot f_p/\dot f_p$ and
$r_p\dot\gamma_p/\gamma_p$ as {\it small parameters} and
ignore their quadratic powers as well as higher derivatives.
This is needed for self-consistency of the SSA,
even though individual corrections need not be small
(especially for domains which are {\it not} the {\it first}
descendants, i.e. when $\xi_p\approx 1$ is {\it not small})
-- however, if included, they should come {\it together}
with {\it other} corrections to the SSA, which were also ignored.
Actually, as we saw in s.\ref{accu}, the overall effect of
{\it all} corrections is small, but the theoretical reason
for this conspiracy in the case of higher descendants remains
to be identified.
\bigskip
It is worth formulating the rules of SSA explicitly:
\bigskip
$\bullet$
Expand in powers of $x$ and leave the first two non-trivial terms
(constant and $x^2$ in the case of $x^2+c$ family) -- for generic
value of $c$.
$\bullet$
Expand in powers of $\sigma = c-c_{crit}$ and leave only the first
corrections $\sim \ddot f /\dot f$ and $\sim \dot\gamma/\gamma$.
$\bullet$
If two different but close values of $c_{crit}$ appear in the problem
(say, centers of two adjacent elementary domains),
expand in powers of their difference, leaving only the first two
powers of the difference.
$\bullet$
Combining all these expansions, keep only the first two corrections
in expressions for the final quantities; in practice this means
keeping all powers of $\dot f$ and $\gamma$ and ignoring everything
beyond the first powers of $\ddot f /\dot f$ and $\dot\gamma/\gamma$.
\subsection{Position and radius of arbitrary descendant domain}
A generic descendant domain has a parent of order $p$
and itself has a multiple order $mp$. It is attached to the parent
at a zero of the resultant $R_x\Big(F_{mp}(x)/F_p(x),F_p(x)\Big)$.
The parent can itself be a descendant, and a chain of ancestors leads
to a root domain of the cluster; however, only the first term in
this chain -- the mother domain, of which the domain of interest
is an {\it immediate} descendant -- is relevant in the SSA-based
calculations.
It is easy to check that generalization of the SSA relation
(\ref{2it}) to arbitrary $m$ is
\begin{eqnarray}
f_{mp}(c) \approx
\frac{1}{\gamma_p(c)}f_m\Big(f_p\gamma_p(c)\Big)
\end{eqnarray}
\vspace{-0.4cm}
\begin{eqnarray}
\gamma_{mp}(c) \approx \gamma_p(c)\
\gamma_m\Big(f_p\gamma_p(c)\Big)
\label{mit}
\end{eqnarray}
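As a sanity check (the sample values are arbitrary), for $m=2$ these formulas reduce to (\ref{2it}), since $f_2(w)=w(w+1)$ and $\gamma_2(w)=2w$:

```python
# check that (mit) reduces to (2it) for m = 2: f_2(w) = w(w+1), gamma_2(w) = 2w
fp, gp = 0.3, -1.7   # arbitrary sample values of f_p(c), gamma_p(c)
w = fp * gp
assert abs((1 / gp) * w * (w + 1) - fp * (1 + fp * gp)) < 1e-12  # f_{2p}
assert abs(gp * (2 * w) - 2 * fp * gp ** 2) < 1e-12              # gamma_{2p}
```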
Now descendant root $c_{mp,p}$ of $f_{mp}$
is defined by the choice of immediate descendant
$c_{m,1}$ for $f_m$: from
\begin{eqnarray}
f_m(c_{m,1}) = 0
\label{parcon}
\end{eqnarray}
we have
\begin{eqnarray}
f_p\gamma_p(c_{mp,p}) = c_{m,1}
\label{cmpp}
\end{eqnarray}
For comparison with s.\ref{calcul} one should keep in mind that
for $f=x^2+c$ there is a single order-two critical point
$c_{2,1}=c_2=-1$.
Note that not all the zeroes $c_m$ of (\ref{parcon}) describe
{\it immediate} descendants $(m,1)$ of the central domain $(1)$:
some provide the new root domains $(m)$ or higher descendants
$(m,m_1,\ldots)$ with nontrivial divisors $m_1\neq 1$ of $m$.
These extra zeroes (especially associated with domains from
the different clusters) should not be used in the following
calculations, because they correspond to remote domains and
SSA has no reason to work for them.
Repeating for generic $m$ the calculations
performed in s.\ref{calcul} for the particular case of $m=2$, we obtain:
\begin{eqnarray}
\dot f_{mp}(c) \approx
\frac{1}{\gamma_p(c)}\dot f_m\Big(f_p\gamma_p(c)\Big)
\Big(\dot f_p\gamma_p(c) + f_p\dot\gamma_p(c)\Big) -
\frac{\dot\gamma_p(c)}{\gamma^2_p(c)}f_m\Big(f_p\gamma_p(c)\Big)
\end{eqnarray}
In combination with (\ref{mit}) this implies that
$$
\frac{1}{2r_{mp,p}} = \dot f_{mp}\gamma_{mp}(c_{mp,p}) \approx
\dot f_m\gamma_m(c_{m,1})
\Big(\dot f_p\gamma_p + f_p\dot\gamma_p\Big)(c_{mp,p}) \approx
\dot f_m\gamma_m(c_{m,1})\Big(\dot f_p\gamma_p + \sigma_{mp,p}
\left(\ddot f_p\gamma_p + 2\dot f_p\dot\gamma_p\right)\Big)(c_p)
$$
In the first transformation we omitted one term with
$f_m(c_{m,1})=0$, and in the second transformation we defined
functions at $c_{mp,p}$ through their values at $c_p$,
keeping only the first non-trivial term of Taylor expansion
in powers of $\sigma_{mp,p} \equiv c_{mp,p}-c_p$.
This shift is defined in a similar manner from (\ref{cmpp}):
$$
\sigma_{mp,p} \dot f_p\gamma_p(c_p) + \sigma_{mp,p}^2
\left(\frac{1}{2}\ddot f_p\gamma_p + \dot f_p\dot\gamma_p\right)
(c_p) \approx c_{m,1}
$$
(we remind that $f_p(c_p)=0$).
Substituting for remaining parameters
$\dot f_m\gamma_m = (2r_m)^{-1}$, $\dot f_p\gamma_p = (2r_p)^{-1}$
and $\ddot f_p\gamma_p + 2\dot f_p\dot\gamma_p =
-\xi_p (2r_p^2)^{-1}$,
we obtain for the counterparts of (\ref{c2p}) and (\ref{r2p}):
\begin{eqnarray}
\frac{\sigma_{mp,p}}{2r_p}\left(1 - \xi_p\frac{\sigma_{mp,p}}{2r_p}
\right) \approx c_m
\label{cmp}
\end{eqnarray}
and
\begin{eqnarray}
r_{mp} \approx 2r_mr_p\left(1 - \xi_p\frac{\sigma_{mp,p}}{r_p}\right)^{-1}
\label{rmp}
\end{eqnarray}
\subsection{Evaluation of generic $\xi_{mp,p}$}
For evaluation of $\xi_{mp,p}$ we need also
\begin{eqnarray}
\ddot f_{mp}(c) \approx
\frac{1}{\gamma_p(c)}\ddot f_m\Big(f_p\gamma_p(c)\Big)
\left(\dot{(f_p\gamma_p)}\right)^2 +
\frac{1}{\gamma_p(c)}\dot f_m\Big(f_p\gamma_p(c)\Big)
\ddot{(f_p\gamma_p)} - \nonumber \\ -
2\frac{\dot\gamma_p(c)}{\gamma_p^2(c)}
\dot f_m\Big(f_p\gamma_p(c)\Big)
\dot{(f_p\gamma_p)} +
2\frac{\dot\gamma_p^2(c)}{\gamma_p^3(c)}
f_m\Big(f_p\gamma_p(c)\Big) + O(\ddot\gamma_p)
\end{eqnarray}
and
\begin{eqnarray}
\dot\gamma_{mp}(c) \approx
\gamma_p(c)\dot\gamma_m\Big(f_p\gamma_p(c)\Big)
\Big(\dot f_p\gamma_p(c) + f_p\dot\gamma_p(c)\Big) +
\dot\gamma_p(c)\gamma_m\Big(f_p\gamma_p(c)\Big)
\end{eqnarray}
At $c=c_{mp,p}$ the terms with $f_m(c_{m,1})=0$ do not
contribute, and we obtain
$$
\left(\frac{1}{2}\ddot f_{mp}\gamma_{mp} +
\dot f_{mp}\dot\gamma_{mp}\right)(c_{mp,p}) \approx
\left(\frac{1}{2}\ddot f_m\gamma_m + \dot f_m\dot\gamma_m
\right)(c_{m,1})
\Big(\dot f_p\gamma_p + f_p\dot\gamma_p\Big)^2(c_{mp,p})
\ + $$ \vspace{-0.25cm}
\begin{eqnarray}
+\ \dot f_m\gamma_m(c_{m,1})
\left(\frac{1}{2}\ddot f_p\gamma_p + \dot f_p\dot\gamma_p
\right)(c_{mp,p})
\end{eqnarray}
In order to get $\xi_{mp,p}$ we divide by the square of
\begin{eqnarray}
\dot f_{mp}\gamma_{mp} \approx \dot f_m\gamma_m(c_{m,1})
\Big(\dot f_p\gamma_p + f_p\dot\gamma_p\Big)(c_{mp,p})
\end{eqnarray}
and change sign, so that (\ref{xi2p}) generalizes to
\begin{eqnarray}
\xi_{mp} \approx -\frac{1}{\dot f_{mp}\gamma_{mp}}
\left(\frac{\ddot f_{mp}}{2\dot f_{mp}} +
\frac{\dot\gamma_{mp}}{\gamma_{mp}}\right)(c_{mp,p})
\ \approx\ -\frac{1}{\dot f_{m}\gamma_{m}}
\left(\frac{\ddot f_{m}}{2\dot f_{m}} +
\frac{\dot\gamma_{m}}{\gamma_{m}}\right)(c_{m,1}) -
\nonumber \\ -
\frac{1}{\dot f_m\gamma_m}
\left(\frac{\ddot f_{p}}{2\dot f_{p}} +
\frac{\dot\gamma_{p}}{\gamma_{p}}\right)
\frac{\dot f_p\gamma_p}{(\dot f_p\gamma_p +f_p\dot\gamma_p)^2(c_{mp,p})}
\approx \xi_m +
\frac{2r_m\xi_p}
{\left(1-\xi_p\frac{\sigma_{mp,p}}{r_p}\right)^2}
\ \stackrel{m\neq 1}{\approx}\ \xi_m
\label{ximp}
\end{eqnarray}
Keeping the second term at the r.h.s. is beyond the
accuracy of the SSA and it should be neglected
(we ignored it in (\ref{xi2p}), but kept in (\ref{ximp})
to preserve formal consistency with the case
$m=1$, when $r_1=\frac{1}{2}$, $\xi_1=0$ and, of course,
$\sigma_{p,p}\equiv 0$).
From (\ref{ximp}) it is clear that if $\xi_{m,1}\approx 1$ for
direct descendant $(m,1)$ of order $m$ of the central root
domain ($c_1=0$), then in the SSA
$\xi_{mp,p}\approx 1$ for all other descendants,
at all levels in all clusters.
In s.\ref{calcul} we exploited the fact that for $m=2$
the r.h.s. is extremely simple: for $f_2(c) = c(c+1)$,
$\gamma_2(c) = 2c$ and $c_2=-1$ it is obviously unity.
In s.\ref{accu} we saw that $\xi_{m,1}$ is indeed close
to unity for $m\leq 6$, and it is natural to believe that
this remains true for all $m$, however no theoretical
explanation of this fact is yet available.
Still, if accepted, it implies that $\xi_{mp,p}\approx 1$
for all $p>1$.
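The $m=2$ statement is easy to check directly. A minimal Python sketch (assuming the bracket in the definition of $\xi_m$ reads $\ddot f_m/(2\dot f_m)+\dot\gamma_m/\gamma_m$, with $f_2$, $\gamma_2$ read off from $f^{\circ 2}(x;c)=(x^2+c)^2+c$):

```python
def xi(fd, fdd, g, gd):
    # xi_m = -(1/(f_m' * gamma_m)) * (f_m''/(2 f_m') + gamma_m'/gamma_m)
    return -(1.0/(fd*g))*(fdd/(2*fd) + gd/g)

# f_2(c) = c(c+1), gamma_2(c) = 2c, evaluated at the center c_2 = -1
c2 = -1.0
xi2 = xi(2*c2 + 1, 2.0, 2*c2, 2.0)
assert abs(xi2 - 1.0) < 1e-12   # "obviously unity"
```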
\section{Feigenbaum indices \label{Feig}}
\setcounter{equation}{0}
\subsection{The case of period-doubling, $m=2$}
It is now time to solve quadratic equation (\ref{r2p}):
the distance between the centers of a parent domain
$(p,\ldots)$ and its immediate descendant $(2p,p,\ldots)$ is
\begin{eqnarray}
\sigma_{2p} \approx \left\{\begin{array}{ccc}
-2r_p & {\rm if}\ \xi_p \approx 0 & {\rm i.e.\ for\
the\ } first\ {\rm descendant} \\ \\
(1-\sqrt{5})r_p & {\rm if}\ \xi_p \approx 1 & {\rm i.e.\ for\
a\ } higher\ {\rm descendant}\end{array}\right.
\end{eqnarray}
Substituting this into (\ref{r2p}), we obtain:
the radius of descendant domain $(2p,p,\ldots)$ is
\begin{eqnarray}
r_{2p} = \left\{\begin{array}{ccc} \frac{1}{2}r_p &
{\rm if}\ \xi_p \approx 0 & {\rm i.e.\ for\
the\ } first\ {\rm descendant} \\ \\
\frac{r_p}{2\sqrt{5}} & {\rm if}\ \xi_p \approx 1 &
{\rm i.e.\ for\ a\ } higher\ {\rm descendant}\end{array}\right.
\end{eqnarray}
Thus we get for the Feigenbaum doubling parameter
$\delta_2 = \lim_{p\rightarrow\infty} (r_p/r_{2p})$
\begin{eqnarray}
\delta_2 \approx 2\sqrt{5} = 4.4721\ldots
\end{eqnarray}
(the exact value is known to be $\delta_2 = 4.6692\ldots$).
Note that $\delta_2$ approximately acquires this value already
for the second descendant of the root, far before the
$p \rightarrow \infty$ limit.
Consistency requires that
\begin{eqnarray}
c_{2p} + r_{2p} = c_p - r_p
\end{eqnarray}
i.e.
\begin{eqnarray}
-\sigma_{2p} = r_{2p} + r_p \approx
r_p\left(1+\frac{1}{2\sqrt5}\right)
\end{eqnarray}
This is indeed almost true:
$1+\frac{1}{2\sqrt5} \approx \sqrt{5}-1$\ \
($1.2236\ldots \approx 1.2361\ldots$).
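The SSA numbers here are elementary to verify; a minimal Python sketch of the arithmetic:

```python
import math

sqrt5 = math.sqrt(5.0)

# SSA doubling constant: delta_2 = r_p/r_{2p} = 2*sqrt(5) = 4.4721...
delta2_ssa = 2*sqrt5
assert abs(delta2_ssa - 4.4721) < 1e-3   # (exact value: 4.6692...)

# consistency -sigma_{2p} = r_{2p} + r_p: sqrt(5)-1 vs 1 + 1/(2*sqrt(5))
lhs, rhs = sqrt5 - 1, 1 + 1/(2*sqrt5)
assert abs(lhs - rhs) < 0.02             # 1.2361... vs 1.2236...
```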
The west-limit point $c_{\infty}^{(2)}$
of the central cluster
(superscript is $(2)$ because the point is obtained
by a sequence of doublings of the order of the orbits)
can be represented as
\begin{eqnarray}
c_{\infty}^{(2)} \approx -\frac{3}{4} - 2r_2 - 2r_4 - \ldots
= -\frac{3}{4} - 2\sum_{k=1}^\infty r_{2^k} =
- \frac{3}{4} - \frac{1/2}{1-\frac{1}{2\sqrt{5}}} =
-1.3940\ldots
\end{eqnarray}
(recall that for the first descendant
$r_2=\frac{1}{2}r_1 = \frac{1}{4}$) or, alternatively, as
\begin{eqnarray}
c_{\infty}^{(2)} = c_2 + \sum_{k=1}^\infty \sigma_{2^{k+1}} \approx
c_2 - \frac{r_2(\sqrt{5}-1)}{1-\frac{1}{2\sqrt{5}}} =
-1 -\frac{1}{2}\frac{5-\sqrt{5}}{2\sqrt{5}-1} = -1.3980\ldots
\end{eqnarray}
The difference between these two values characterizes the accuracy of
the SSA, and within such error they coincide with the
exact value $c_{\infty}^{(2)} = -1.4012\ldots$.
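Both series are geometric within the SSA, so the two estimates are one-liners; an illustrative Python recomputation:

```python
r2 = 0.25                    # first descendant: r_2 = r_1/2 = 1/4
q = 1/(2*5**0.5)             # SSA contraction ratio r_{2^{k+1}}/r_{2^k} = 1/(2*sqrt(5))

est1 = -0.75 - 2*r2/(1 - q)  # summing diameters westwards from c = -3/4
est2 = -1 - 0.5*(5 - 5**0.5)/(2*5**0.5 - 1)   # summing center shifts from c_2 = -1

assert abs(est1 - (-1.3940)) < 5e-4
assert abs(est2 - (-1.3980)) < 5e-4
# both sit within SSA accuracy of the exact point -1.4012...
assert abs(est1 - (-1.4012)) < 0.01 and abs(est2 - (-1.4012)) < 0.01
```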
\subsection{The general case (arbitrary $m$)}
Solving (\ref{cmp}), we obtain:
\begin{eqnarray}
\sigma_{mp,p} = \left\{\begin{array}{ccc}
2c_mr_p & {\rm if}\ \xi_p \approx 0 & {\rm i.e.\ for\
the\ } first\ {\rm descendant} \\ \\
(1-\sqrt{1-4c_m})r_p &
{\rm if}\ \xi_p \approx 1 &
{\rm i.e.\ for\ a\ } higher\ {\rm descendant}\end{array}\right.
\label{sigmampp}
\end{eqnarray}
\begin{eqnarray}
r_{mp} = \left\{\begin{array}{ccc} 2r_mr_p &
{\rm if}\ \xi_p \approx 0 & {\rm i.e.\ for\
the\ } first\ {\rm descendant} \\ \\
\frac{2r_mr_p}{\sqrt{1-4c_m}} & {\rm if}\ \xi_p \approx 1 &
{\rm i.e.\ for\ a\ } higher\ {\rm descendant}\end{array}\right.
\end{eqnarray}
Thus we obtain for the Feigenbaum parameter
$\delta_m = \lim_{p \rightarrow \infty} (r_p/r_{mp})$
\begin{eqnarray}
\delta_m \approx \frac{\sqrt{1-4c_m}}{2r_m}
\end{eqnarray}
and for the complex-valued ratio $\varepsilon_m = \sigma_{mp,p}/r_p$
we get
\begin{eqnarray}
\varepsilon_m = 1-\sqrt{1-4c_m}
\end{eqnarray}
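Given $c_m$ and $r_m$, these formulas are one line of code each. For instance, for the domain $(3,1)$ they reproduce the table row below, with the principal branch of the complex square root (an illustrative Python sketch; $c_3$ and $2r_3$ are the approximate values from the table in s.\ref{accu}):

```python
import cmath

c3 = -0.123 + 0.745j          # center of the (3,1) domain
r3 = (-0.01 - 0.19j)/2        # the table lists 2*r_m

s = cmath.sqrt(1 - 4*c3)      # principal branch
eps3 = 1 - s                  # epsilon_m = 1 - sqrt(1 - 4 c_m)
delta3 = s/(2*r3)             # delta_m = sqrt(1 - 4 c_m)/(2 r_m)

assert abs(s - (1.55 - 0.96j)) < 0.01
assert abs(eps3 - (-0.55 + 0.96j)) < 0.01
assert abs(delta3 - (4.61 + 8.42j)) < 0.05
```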
When $(p,\ldots)$ is itself a descendant domain and has circle
rather than cardioid shape, the consistency condition
\begin{eqnarray}
|\sigma_{mp}| = |r_p| + |r_{mp}|,
\label{d=r+r}
\end{eqnarray}
expressing the distance between the centers of two touching circles
through their radii,
implies that
\begin{eqnarray}
|\varepsilon_m| = 1 + |\delta_m|^{-1}
\label{modcons}
\end{eqnarray}
or
\begin{eqnarray}
|1-\sqrt{1-4c_m}| = 1 + \frac{2|r_m|}{|\sqrt{1-4c_m}|}
\end{eqnarray}
A more detailed consistency condition includes not only distances,
like (\ref{d=r+r}), but also exact position (the phase $\phi_m$)
of the touching point between the circles $(mp,p,\ldots)$ and
$(p,\ldots)$:
\begin{eqnarray}
c_{mp} + r_{mp} = e^{i\phi_m} r_p + c_p
\end{eqnarray}
This means that
\begin{eqnarray}
\varepsilon_m = e^{i\phi_m} -\frac{1}{\delta_m}
\label{complcons}
\end{eqnarray}
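At percent level both consistency conditions indeed hold for the measured domains; e.g. for $m=3$ with $\phi_3=2\pi/3$ (a quick Python check, with $c_3$ and $2r_3$ taken from the table in s.\ref{accu}):

```python
import cmath, math

c3 = -0.123 + 0.745j
r3 = (-0.01 - 0.19j)/2
s = cmath.sqrt(1 - 4*c3)
eps3, delta3 = 1 - s, s/(2*r3)

# modulus consistency (modcons): |eps_m| = 1 + 1/|delta_m|
assert abs(abs(eps3) - (1 + 1/abs(delta3))) < 0.01
# phase consistency (complcons): eps_m = e^{i phi_m} - 1/delta_m, phi_3 = 2*pi/3
pred = cmath.exp(2j*math.pi/3) - 1/delta3
assert abs(eps3 - pred) < 0.02
```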
The end-point of an infinite sequence of descendant domains
$(p)$, $(mp,p)$, $(m^2p,mp,p)$, $\ldots$
is given by
\begin{eqnarray}
c_{\infty}^{(p|m)} = c_p + \sum_{k=1}^\infty \sigma_{m^kp} =
c_{mp} + \varepsilon_m\sum_{k=1}^\infty r_{m^kp} =
c_{mp} + \frac{\varepsilon_m r_{mp}}{1-\delta_m^{-1}} =
c_p + 2r_p\left(c_m + \frac{r_m\varepsilon_m}{1-\delta_m^{-1}}
\right)
\end{eqnarray}
In particular, for the central cluster with $(p) = (1)$
\begin{eqnarray}
c_{\infty}^{(1|m)} =
c_m + \frac{r_m\varepsilon_m}{1-\delta_m^{-1}}
\label{endpointmm}
\end{eqnarray}
Within SSA the only input in all these formulas for a given $m$
consists of two complex numbers: $c_m = c_{m,1}$ and $r_m = r_{m,1}$,
characterizing the properties of the next-to-root domain
$(m,1)$ in the central cluster.
These $c_m$ and $r_m$ are entries of the table in s.\ref{accu}.
Taking $c_m$ and $r_m$ from that table, we now make a new one,
comparing predictions of eqs.(\ref{sigmampp})-(\ref{endpointmm})
with experimental data.
Numbers in square brackets in the last column are positions
of the limiting points, {\it measured} with the help of
{\it Fractal Explorer}.
\bigskip
{\footnotesize
$$
\hspace{-1.4cm}
\begin{array}{|c|c|c|c||c|c|c||c|c||c|}
\hline
&&&&&&&&&\\
{\rm domain}&c_m&2r_m&e^{i\phi_m}
&\sqrt{1-4c_m\phantom{5^5}\hspace{-0.4cm}}
&\varepsilon_m&\delta_m&
(\ref{modcons})&(\ref{complcons})& c_{\infty}^{(1|m)}\\
(m,1)&&&&&(\sigma_{mp}/r_p)&(r_p/r_{mp})&&&
{\rm from}\ (\ref{endpointmm}) \\
&&&&&&&&&\\
\hline
&&&&&&&&&\\
(2,1)&-1&0.5&-1&\sqrt{5}&1-\sqrt{5}&2\sqrt{5}&
\sqrt{5}-1&1-\sqrt{5}&-1 -\frac{1}{2}\frac{5-\sqrt{5}}{2\sqrt{5}-1}\\
&&&&&&&
\approx 1+\frac{1}{2\sqrt{5}}&\approx -1-\frac{1}{2\sqrt{5}}
&= -1.3980\\
&&&&&&&1.2361\approx 1.2236&&[-1.401]\\
&&&&&&&&&\\
\hline
&&&&&&&&&\\
(3,1)&-0.123&-0.01&-0.5&1.55&-0.55&4.61&1.107
&-0.553+0.959i&-0.020\\
&+0.745i&-0.19i=&+0.87i&-0.96i&+0.96i&+8.42i=&\approx 1.104
&\approx -0.550+0.957i&+0.785i\\
&&0.19\cdot e^{1.03\frac{i\pi}{2}}&\phi_3=\frac{2\pi}{3}&&
&9.59\cdot e^{1.02\frac{i\pi}{3}} &&&[-0.0234+0.7836i]\\
&&&&&&&&&\\
\hline
&&&&&&&&&\\
(4,1)&+0.282&-0.066&&0.999&0.001&-0.2848
&1.061&0.001+1.06i&0.3115\\
&+0.530i&-0.06i=&i&-1.061i&+1.061i&+16.34i=
&\approx 1.061&\approx 0.001+1.06i&+0.4932i\\
&&0.089\cdot e^{-1.02\frac{3i\pi}{4}}&\phi_4=\frac{2\pi}{4}&&
&16.34\cdot e^{1.01 \frac{i\pi}{2}}&&&[0.3098+0.4947i]\\
&&&&&&&&&\\
\hline
&&&&&&&&&\\
(5_{1},1)&0.380&-0.046&0.309&0.677&0.323&-9.06
&1.041&0.323+0.989i&0.377\\
&+0.335i&-0.011i=&+0.951i&-0.989i&+0.989i&+23.7i=
&\approx 1.039&\approx 0.323+0.988i&+0.311i\\
&&0.047\cdot e^{-0.93\pi i}&\phi_{5_1}=\frac{2\pi}{5}&&
&25.35\cdot e^{3.08\frac{i\pi}{5}}&&&[0.3770+0.3117i]\\
&&&&&&&&&\\
\hline
&&&&&&&&&\\
(5_{2},1)&-0.504&0.046&-0.809&1.841&-0.841&20.25
&1.040&-0.841+0.612i&-0.503\\
&+0.563i&-0.063i=&+0.588i&-0.612i&+0.612i&+14.44i=
&\approx 1.040&\approx -0.842+0.611i&+0.605i\\
&&0.078\cdot e^{-0.997\frac{3\pi i}{10}}
&\phi_{5_2}=\frac{4\pi}{5}&&
&24.87\cdot e^{0.99\frac{i\pi}{5}}&&&[-0.5031+0.6048i]\\
&&&&&&&&&\\
\hline
&&&&&&&&&\\
(6,1)&0.389&-0.028&0.5&0.487&0.513&-20.57
&1.028&0.513+0.891i&0.380\\
&+0.217i&+0.003i=&+0.866i&-0.891i&+0.891i&+29.61i =
&\approx 1.028&\approx 0.516+0.889i&+0.206i\\
&&0.028\cdot e^{-0.97\pi i}&\phi_6=\frac{2\pi}{6}
&&&36.05\cdot e^{2.08\frac{i\pi}{3}}
&&&[0.3810+0.2047i]\\
&&&&&&&&&\\
\hline
\end{array}
\hspace{1.4cm}
$$
}
\noindent
Of course, one can consider limiting points of other
sequences, not necessarily of the type $[\ldots,m,m,m]$.
One of the open questions is whether there is any
difference between periodic (after some step) and
aperiodic, i.e. "rational" and "irrational" sequences.
Another important class consists of sequences
$[\ldots,2,2,\ldots,2,m_r,\ldots,m_1]$, ending by
$2$'s only -- they describe {\it normals} to the
cluster's boundary and serve as origins of {\it trails},
connecting the cluster with its neighbors.
\section{Cardioids and resultant zeroes}
As explained in \cite{DM}, the boundary of a domain
$(p,\ldots)$ is densely populated by the countable
set of its merging
points with descendant domains $(mp,p,\ldots)$,
located at zeroes of the resultants $R(G_{mp},G_p)$
for all integer $m$.
Since within SSA the boundaries are well approximated
by cardioids and circles, and merging points are
characterized by the angles $\phi_m = \frac{2\pi}{m}$,
one can expect that simple approximations exist for
locations of the resultant zeroes in terms of
$c_p$, $r_p$ and $e^{ik\phi_m}$ with $k = 1,\ldots,m-1$.
This is indeed the case, at least for the Mandelbrot Set,
i.e. the family $\{f(x) = x^2+c\}$.
For example, the zeroes of $R(G_m,G_1)$
-- they can be found among the values of $c_{p+}$
in tables in s.\ref{accu} -- are given by
\begin{eqnarray}
c^{(1)}_{m_k}=\frac{e^{{2\pi i k}/{m}}}{2}
\left(1-\frac{e^{{2\pi i k}/{m}}}{2}\right)
\end{eqnarray}
The first values of this quantity are:
{\footnotesize
$$
\hspace{-0.9cm}
\begin{array}{|c|c|c|c|c|c|c|c|c|c|}
\hline
&&&&&&&&&\\
m_k&(2)&(3)&(4)&(5_1)&(5_2)&(6)&(7_1)&(7_2)&(7_3)\\
&&&&&&&&&\\
\hline
&&&&&&&&&\\
c^{(1)}_{m_k}&-0.75&-0.1249999999&0.25
&0.3567627456&-0.4817627458
&0.375&0.3673751344&0.1139817500&-0.6063568845\\
&&+0.6495190530i&+0.5i&0.3285819454i&0.5316567550i
&+0.2165063510i&+0.1471837632i&+0.5959348910i&+0.4123997402i\\
&&&&&&&&&\\
\hline
\end{array}
\hspace{0.9cm}
$$
}
\bigskip
\noindent
When $k$ is not shown, it is equal to unity, $k=1$.
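The table entries follow directly from the formula for $c^{(1)}_{m_k}$; an illustrative Python check of a few of them:

```python
import cmath, math

def c1(m, k=1):
    # zero of R(G_m, G_1): the point u = e^{2 pi i k/m} on the main cardioid
    u = cmath.exp(2j*math.pi*k/m)
    return (u/2)*(1 - u/2)

assert abs(c1(2) - (-0.75)) < 1e-12
assert abs(c1(3) - (-0.125 + 0.6495190530j)) < 1e-6
assert abs(c1(4) - (0.25 + 0.5j)) < 1e-12
assert abs(c1(5, 2) - (-0.4817627458 + 0.5316567550j)) < 1e-6
```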
Similarly, the zeroes of $R(G_{2m},G_2)$,
belonging to the boundary of descendant domain $(2,1)$,
are given by:
\begin{eqnarray}
c^{(2,1)}_{m_k} \approx c_{2} + r_2\,e^{{2\pi i k}/{m}}
= -1 + \frac{e^{{2\pi i k}/{m}}}{4}
\end{eqnarray}
{\footnotesize
$$
\hspace{-1.4cm}
\begin{array}{|c|c|c|c|c|c|c|c|c|c|}
\hline
&&&&&&&&&\\
m_k&(2)&(3)&(4)&(5_1)&(5_2)&(6)&(7_1)&(7_2)&(7_3)\\
&&&&&&&&&\\
\hline
&&&&&&&&&\\
c^{(2,1)}_{m_k}&-1.25&-1.125&-1&-0.9227457516&-1.202254249
&-0.875&-0.8441275496&-1.055630233&-1.225242217\\
&&+0.2165063510i&+0.25i&+0.2377641291i&+0.1469463130i
&+0.2165063510i&+0.1954578706i&+0.2437319780i&+0.1084709348i\\
&&&&&&&&&\\
\hline
\end{array}
\hspace{1.4cm}
$$
}
\bigskip
\noindent
and zeroes of $R(G_{3m},G_3)$, belonging to the boundary of
descendant domain $(3_\pm,1)$ -- by
\begin{eqnarray}
c^{(3_\pm\!,1)}_{m_k} \approx c_{3} + r_3\,e^{\pm{2\pi i k}/{m}}
= -0.1226\pm 0.7449i
- (0.0047\pm 0.0943i){e^{\pm {2\pi i k}/{m}}}
\end{eqnarray}
{\footnotesize
$$
\begin{array}{|c|c|c|c|c|c|c|c|c|c|}
\hline
&&&&&&&&&\\
m_k&(2)&(3)&(4)&(5_1)&(5_2)&(6)&(7_1)&(7_2)&(7_3)\\
&&&&&&&&&\\
\hline
&&&&&&&&&\\
c^{(3_\pm\!, 1)}_{m_k}
&-0.118&-0.039&-0.028&-0.034&-0.063
&-0.043&-0.052&-0.030&-0.077\\
&\pm 0.839i&\pm 0.788i&\pm 0.740i&\pm 0.711i&\pm 0.818i
&\pm 0.694i&\pm 0.682i&\pm 0.761i&\pm 0.828i\\
&&&&&&&&&\\
\hline
\end{array}
$$
}
\bigskip
\noindent
while those belonging to the boundary of
the root domain $(3)$ are
\begin{eqnarray}
c^{(3)}_{m_k} \approx c_{3}
+ r_3\,e^{{2\pi i k}/{m}}
\left(1-\frac{e^{{2\pi i k}/{m}}}{2}\right)
= -1.7549 + 0.0095{e^{{2\pi i k}/{m}}}
\left(1-\frac{e^{{2\pi i k}/{m}}}{2}\right)
\end{eqnarray}
{\footnotesize
$$
\begin{array}{|c|c|c|c|c|c|c|c|c|c|}
\hline
&&&&&&&&&\\
m_k&(2)&(3)&(4)&(5_1)&(5_2)&(6)&(7_1)&(7_2)&(7_3)\\
&&&&&&&&&\\
\hline
&&&&&&&&&\\
c^{(3)}_{m_k}
&-1.769&-1.757&-1.750&-1.748&-1.764&-1.748
&-1.748&-1.753&-1.767\\
&&+0.012i&+0.009i&+0.006i&+0.010i&+0.004i
&+0.003i&+0.011i&+0.008i\\
&&&&&&&&&\\
\hline
\end{array}
$$
}
\bigskip
\noindent
Since the domains $(3,1)$ and $(3)$ are a circle and a cardioid
only approximately, the accuracy in the last two tables is
relatively low, and we do not keep as many digits as in the
first two tables.
Still, the numbers in the tables reproduce actual
positions of resultant zeroes at percent-level accuracy,
standard for the SSA in the case of the Mandelbrot Set.
Thus, not only the shapes of elementary domains
are nicely represented by cardioids and circles,
but all the merging points of stable orbits at the
boundaries (zeroes of the corresponding resultants,
\cite{DM}) can be easily found by the SSA methods.
\section{The case of $Z_{d-1}$-symmetric maps $f(x;c)=x^d+c$
\label{dfam}}
\setcounter{equation}{0}
\subsection{SSA in the case of $Z_{d-1}$-symmetry}
In this case all the iterated maps are expanded in powers
of $x^d$ and in SSA we truncate them as follows:
$f^{\circ p}(x;c) = f_p(c) + x^d\gamma_p(c) + O(x^{2d})$.
Then the boundary of elementary domain, surrounding
a root $c_p$ of $f_p(c)$, is defined by
\begin{eqnarray}
\left\{\begin{array}{c}
f_p(c) = x\Big(1 - x^{d-1}\gamma_p(c)\Big) \\
d\cdot x^{d-1}\gamma_p(c) = e^{i(d-1)\phi}
\end{array}\right.
\end{eqnarray}
or, as generalization of (\ref{shape}),
\begin{eqnarray}
f_p(c) \Big(d\gamma_p(c)\Big)^{\frac{1}{d-1}} =
e^{i\phi}\left(1 - \frac{e^{i(d-1)\phi}}{d}\right)
\label{shaped}
\end{eqnarray}
Now we need to expand the l.h.s. in powers of $\sigma = c-c_p$
and leave the first $d$ terms of the expansion.
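As a sanity check of (\ref{shaped}): for $d=3$, $p=1$ one has $f_1(c)=c$ and $\gamma_1(c)=1$, so the equation reads $c\sqrt{3}=u(1-u^2/3)$ and describes the boundary of the central domain of ${\cal M}_3$; at $\phi=0,\pi$ it should reproduce the real tangency points $c=\pm 2/(3\sqrt{3})$, the zeroes of the discriminant $4-27c^2$ of $x^3-x+c$ (a small Python sketch):

```python
import cmath, math

def c_boundary(phi):
    # (shaped) with d = 3, p = 1: c*sqrt(3) = u*(1 - u^2/3), u = e^{i phi}
    u = cmath.exp(1j*phi)
    return u*(1 - u**2/3)/math.sqrt(3)

c_cusp = 2/(3*math.sqrt(3))
assert abs(c_boundary(0.0) - c_cusp) < 1e-12
assert abs(c_boundary(math.pi) + c_cusp) < 1e-12
assert abs(4 - 27*c_cusp**2) < 1e-10   # discriminant of x^3 - x + c vanishes
```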
\subsection{Example of the $p=2$ domains for $d=3$ and $d=4$}
We consider here
the first descendants of the central elementary domain $c=0$
in the case of $d=3$ and $d=4$:
relations like (\ref{ximp}) should be used to extend the
result to all other descendants.
Also we restrict the example to $p=2$ only.
From $f^{\circ 2}(x,c) = (x^3+c)^3+c$ we read:
\begin{eqnarray}
f_2(c) = c(c^2 + 1),\nonumber \\
\gamma_2(c) = 3c^2
\end{eqnarray}
and the critical values $c_2 = \pm i$.
Eq.(\ref{shaped}) now states:
\begin{eqnarray}
3c^2(c^2+1) = u\Big(1-\frac{1}{3}u^2\Big)
\label{shaped32}
\end{eqnarray}
with $u = e^{i\phi}$. We need to substitute $c = \pm i + \sigma$
and check that (\ref{shaped32}) is approximately -- modulo terms
$\sim O(u^4)$ -- solved by
\begin{eqnarray}
\sigma = r_2u\left(1- \left[\frac{1}{2}-a\right]u - bu^2\right)
\label{cuca}
\end{eqnarray}
with negligibly small $a$ and $b$.
Substitution of this ansatz into (\ref{shaped32}) gives
$a = \frac{1}{12} \ll 1$ and
$b = \frac{1}{24} \ll 1$.
Also from the same calculation $r_2 = \frac{i}{6}$,
and this is in good accordance with reality: the first descendant
domain $(2,1)_+$ in Fig.\ref{0Man3} is bounded by the points
$c_+ = 1.09i$ and $c_-=0.77i$, so that
$c_+ - c_- = 0.32i \approx 2r_2 = 0.33i$.
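The coefficient matching is easily confirmed numerically: with $r_2=i/6$, $a=1/12$, $b=1/24$ the residual of (\ref{shaped32}) under the ansatz (\ref{cuca}) scales as $u^4$ (a minimal Python check around the root $c_2=+i$):

```python
r2, a, b = 1j/6, 1/12, 1/24

def residual(u):
    # ansatz (cuca): sigma = r_2 u (1 - (1/2 - a) u - b u^2)
    sigma = r2*u*(1 - (0.5 - a)*u - b*u**2)
    c = 1j + sigma              # expand around the root c_2 = +i
    return 3*c**2*(c**2 + 1) - u*(1 - u**2/3)

# the mismatch is O(u^4): shrinking u by 10 shrinks it by ~10^4
assert abs(residual(1e-2)) < 1e-7
assert abs(residual(1e-3)) < 1e-11
```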
Similarly, for $d=4$ we have
\begin{eqnarray}
2^{4/3}c^2(c^3+1) = u\left(1 - \frac{1}{4}u^3\right)
\end{eqnarray}
and for $c = \omega_3 + \sigma$, $\omega_3 = e^{i\pi /3}$
we obtain
\begin{eqnarray}
\sigma = \frac{\omega}{6\cdot 2^{1/3}}(\omega u)
\Big(1-a(\omega u) + \frac{b}{3}(\omega u)^2 - c(\omega u)^3\Big)
\label{d4card}\label{quca}
\end{eqnarray}
with $a = 2^{-4/3} = 0.40$, $b = 11\cdot 4^{-1/3}/9 = 0.77$
and $c = 4/81 \ll 1$.
The biggest "diameter" of this elementary domain is
$2\frac{1}{6\cdot 2^{1/3}}\left(1+\frac{b}{3}\right) \approx 0.33$,
in good agreement with $c_- = -1.10$, $c_+ = -0.78$ for the
$(2,1)$ domain in Fig.\ref{0Man4}. The distance between the two
cusps of this deformed cardioid is approximately $0.4$ of the
biggest diameter, which is also in agreement with Fig.\ref{0Man4}
(the ordinate of the cusp, shown by the arrow in the picture,
is $0.063$, and $2\cdot 0.063/0.33 \approx 0.4$).
Since $b < 1$ in (\ref{d4card}), the cusps have finite angles,
which is {\it not} confirmed by Fig.\ref{Mand2}: the true value
of $b$ is close to unity -- the difference $1-b\approx 0.23$ is
the inaccuracy of the SSA in this example.
\section{Conclusion \label{conc}}
\setcounter{equation}{0}
In this paper we {\it calculated} the shapes of {\it elementary
domains} of the Mandelbrot set \cite{Mand}, following the
general algebro-geometric approach of \cite{DM}.
We explained the qualitative features of these shapes,
found the origin and number of cusps, explicitly showed how they
change when one Mandelbrot set is deformed into another inside
the unifying Universal Mandelbrot set.
We showed that the nearly ideal cardioid and circle shapes of
these domains in ${\cal M}_2$ (Fig.\ref{Mand2})
are nicely described in the
small-size approximation, based on truncating the relevant
polynomials to the first orders in deviations $x-x_{cr}$ and
$c-c_{cr}$ from their critical values.
It is not a big surprise, but some conspiracy is needed --
and was indeed found in the behavior of parameter $\xi_p$,
which is not always small, as one could naively expect --
to explain the coexistence of {\it different} structures:
cardioids of different orders.
We did not give a {\it theoretical} justification of the
{\it small-size approximation} -- next-order corrections were
not estimated -- instead its percent-level accuracy was
demonstrated by comparison of its predictions with the
properties of the actual Mandelbrot set (measured with the
help of the {\it Fractal Explorer} \cite{FE}).
The accuracy is actually much higher than one could expect from
the over-simplified calculations in \cite{LL}: for example, the
small-size approximation of the ordinary Feigenbaum constant,
$\delta_2^{(SSA)} = 2\sqrt{5} = 4.4721\ldots$, is much closer
to the experimental value $\delta_2 = 4.6692\ldots$\ than
$\delta_2^{(LL)} = 4+\sqrt{3} = 5.7321\ldots$\ of ref.\cite{LL}.
The systematic approach allows one to find {\it all}
Feigenbaum indices in the same way; moreover, other
characteristics, including continuous ones, like the {\it shapes}
of elementary domains, not only their {\it sizes},
are straightforwardly {\it calculated}.
We demonstrated that
characteristics of elementary domains in ${\cal M}_2$
are nicely encoded by two parameters
like $r_p$ and $\xi_p$, which by recursive formulas like
\begin{eqnarray}
r_{mp} \approx r_p\cdot\delta^{-1}(c_{m,1})
\end{eqnarray}
and
\begin{eqnarray}
\xi_p \approx \left\{
\begin{array}{cc}
0 & {\rm for\ the\ root\ domain}\\
1 & {\rm for\ other\ domains}
\end{array} \right.
\end{eqnarray}
are expressed through the size $r_{root}$ of the root domain
in the given cluster and through the critical values $c_{m,1}$
-- positions of centers of immediate descendants of the central
root domain.
However, these remaining parameters need to be evaluated
from sophisticated algebraic equations.
As explained in \cite{DM}, the equations emerge from universal
structures in particular {\it section} of
the Universal Mandelbrot set (UMS).
Naturally, some characteristics of such arbitrary section look
arbitrary -- at least from its {\it internal} perspective.
Hopefully, a better understanding of $r_{root}$ and $c_m$
distributions can be found at the level of UMS, but this remains
beyond the scope of the present paper.
\bigskip
It remains to emphasize that investigation of Mandelbrot sets
is not just an interesting problem by itself, it is crucial
for understanding of the future physics, which is going to
deal with essentially multi-phase systems, far from equilibrium
and from the trivial end-points of renormalization groups.
One of the main lessons of Mandelbrot theory \cite{DM} is that
phase transitions are not just rare isolated events, concentrated
on smooth hypersurfaces in the space of coupling constants.
Examples of {\it such} phase transitions are given by particular
merging points between two elementary domains (say, between
$(2,1)$ and $(1)$) -- these isolated points in particular
${\cal M}_d$ in Fig.\ref{hompols} form a nice
complex-codimension-one hypersurface in UMS (partly represented
in Fig.\ref{tubeD3}).
However, the true picture -- Figs.\ref{Mand2}-\ref{0Man4} --
is very different: the entire variety of various phase transitions
(mergings of {\it all} elementary domains of {\it all} orders)
is not just a collection of particular transition lines.
Instead they form a profound new structure, moreover they
tend to condense and fill entire boundaries of elementary domains,
i.e. dimension of the phase transitions variety increases as
compared to the naive one
(and actually its {\it real}, not {\it complex} codimension
in the space of complex couplings, is one!).
Within particular slices like particular Mandelbrot sets,
different phases now get fully disconnected, and
analytical continuation between them, if at all possible,
essentially depends on the properties of the new fundamental entity:
the UMS, which scientists have not even begun to study!
It is the UMS that is behind the sophisticated phase structure
\cite{AMM} of stringy $\tau$-functions -- effective actions of
various multi-phase systems, classical or quantum.
It is the UMS that one encounters in various problems,
from baby-universe
creation in modern cosmological models to optimization of cooling
processes in various solid-state technologies.
Still, despite its central role in the mysteries of uncertainty,
there is no mystery in the UMS itself: it is one of the most
important and structured mathematical objects -- the universal
discriminantal variety, a would-be classical topic of algebraic
geometry, which, however, has not attracted much attention so far.
We believe that time has come for its investigation and this
paper is just a modest example of how one can approach the
fundamental problems of this kind: very simple methods are quite
effective and produce answers, which are not easy to foresee, and
numbers, which are not easy to guess. This looks like a real and
wonderful science to do.
\section{Appendix. Some elementary MAPLE programs for UMS studies
\label{MAProgs}}
We did our best to illustrate quantitative considerations of
Universal Mandelbrot Set and its particular sections with modest
illustrations.
However, the number of illustrations in a printed text is
necessarily restricted and can be insufficient for full
visualization of the object.
In order to cure this problem we collect in this appendix
a set of sample MAPLE programs, which were used to generate
some illustrations in the text.
One can easily play with these simple programs, change parameters,
accuracy of calculation and output formats in order to extract
more information, numerical and visual.
Programs are super-primitive, transparent and easy to modify,
they work fast and smoothly on ordinary PC's.
One can straightforwardly copy them into MAPLE file
(with $.mws$ extension) and use.
When substituting the desired parameters instead of the question marks,
it is better to do so in rational rather than decimal form,
say $a=1/10$ rather than $a=0.1$.
\subsection{Cardioids \label{MAPcard}}
MAPLE program for cardioid studies consists of just four lines:
\begin{verbatim}
> r:=1:
> a:=?: b:=-(1+2*a)/3;
> f:=r*(exp(I*t)+a*exp(I*2*t)+b*exp(I*3*t));
> plot([Re(f),Im(f),t=-Pi..Pi],scaling=CONSTRAINED);
\end{verbatim}
(cubic case is presented, generalization is obvious).
It remains to substitute various $a$ and $b$
instead of the question marks (say, $a = 0.4 + 0.2*I$)
and enjoy the pictures.
For looking at more details, especially at the critical
values $t=0$ and $t=\pi$, where cusps can occur
(or $t = \frac{2\pi k}{d-1}$ in general case) one can
enhance resolution:
\begin{verbatim}
> plot([Re(f),Im(f),t=Pi-0.01..Pi+0.01],scaling=constrained);
> plot([t,Re(f)/Im(f),t=Pi-0.01..Pi+0.01]);
> plot([t,Im(f)/Re(f),t=-0.01..+0.01]);
\end{verbatim}
Examples of output of this program are shown in
Figs.\ref{cardi} and \ref{cardicusp}.
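For readers without MAPLE, the cusp mechanics behind the constraint $b=-(1+2a)/3$ can be verified in a few lines of Python (the value of $a$ below is just a sample substituted for the question mark): the constraint forces $f'(0)=i\,(1+2a+3b)=0$, producing the cusp at $t=0$.

```python
import cmath

r = 1.0
a = 0.4 + 0.2j            # sample value in place of the '?'
b = -(1 + 2*a)/3          # the constraint from the MAPLE snippet

def df(t):
    # derivative of f(t) = r*(e^{it} + a*e^{2it} + b*e^{3it})
    return r*(1j*cmath.exp(1j*t) + 2j*a*cmath.exp(2j*t) + 3j*b*cmath.exp(3j*t))

assert abs(df(0.0)) < 1e-12   # cusp: the curve has a stationary point at t = 0
assert abs(df(1.0)) > 1e-3    # generic points of the curve are regular
```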
\subsection{UMS through discriminants and resultants}
\begin{verbatim}
> F1:=f(x)-x:
> F2:=f(f(x))-x:
> F3:=f(f(f(x)))-x:
> F4:=f(f(f(f(x))))-x:
> F5:=f(f(f(f(f(x)))))-x:
> F6:=f(f(f(f(f(f(x))))))-x:
> G1:=F1:
> G2:=simplify(F2/G1):
> G3:=simplify(F3/G1):
> G4:=simplify(F4/(G2*G1));
> G6:=simplify(F6/(G1*G2*G3));
> ...
> D2:=discrim(G2,x);
> D3:=discrim(G3,x);
> ...
> R24:=resultant(G2,G4,x);
> R36:=resultant(G3,G6,x);
> ...
\end{verbatim}
\subsection{Domains $(1)$, $(2)$ and $(2,1)$ of
${\cal M}_{ax^3+(1-a)x^2+c}$}
Parameter $M$ in the program defines the number $T$ of points
in the picture. The bigger $M$ the more detailed will be the
plot, but computer time will also increase.
To make sure that the program is working we added the line
$print("k=",k)$, one can safely omit it.
\bigskip
\begin{verbatim}
> with(plots):
>
> unassign('a','b','u','z','t','c'):
>
> a:=?:
> b:=1-a:
>
> D1:=factor(discrim(a*x^3+b*x^2+c-x,x));
> R21:=factor(resultant(a^3*x^6+2*a^2*x^5*b+a^2*x^4+2*a^2*x^3*c+a*b^2*x^4+2*a*x^3*b+a*x^2+2*x^2*c*a*b+x*c*a+a*c^2+b^2*x^2+x*b+c*b+1,c-x+a*x^3+b*x^2,x)):
> D2:=factor(simplify(discrim(a^3*x^6+2*a^2*x^5*b+a^2*x^4+2*a^2*x^3*c+a*b^2*x^4+2*a*x^3*b+a*x^2+2*x^2*c*a*b+x*c*a+a*c^2+b^2*x^2+x*b+c*b+1,x)/R21));
>
> ## various choices of s and MID
> #s:=evalf(solve(D1,c)):
> s:=evalf(solve(D2,c));
> #s:=evalf(solve(R21,c));
> MID:=s[1];
> #MID:=(s[1]+s[2])/2.;
>
> P:= x -> a*x^3+b*x^2:
> u:=exp(I*t):
>
> zp:=(x,t)->(-b+root[2](b^2+3*a*exp(I*t)/(3*a*x^2+2*b*x)))/(3*a):
> zm:=(x,t)->(-b-root[2](b^2+3*a*exp(I*t)/(3*a*x^2+2*b*x)))/(3*a):
> sp:=solve(P(zp(x,t))+zp(x,t)-P(x)-x,x):
> sm:=solve(P(zm(x,t))+zm(x,t)-P(x)-x,x):
>
> Tp:=0: Tm:=0:
> M:=200:
> for k to M do
>
> t:=evalf(2*Pi*k/M):
> N:=ArrayNumElems(Array([sp]));
> for i to N do
>
> wp:=allvalues(sp[i]): wm:=allvalues(sm[i]):
> n:=ArrayNumElems(Array([wp])): # nm:=ArrayNumElems(Array([wm])): print(n,nm);
> for j to n do
>
> Tp:=Tp+1: Tm:=Tm+1:
> if n >1 then
> Xp:=evalf(wp[j]): Xm:=evalf(wm[j]):
> else
> Xp:=evalf(wp): Xm:=evalf(wm):
> end if:
> Pp[Tp]:=evalf(zp(Xp,t)-(a*Xp^3+b*Xp^2)):
> Pm[Tm]:=evalf(zm(Xm,t)-(a*Xm^3+b*Xm^2)):
>
> xp1:=Xp: xm1:=Xm:
> zp1:=a*xp1^3+b*xp1^2+Pp[Tp]: zm1:=a*xm1^3+b*xm1^2+Pm[Tm]:
> chp:=a*zp1^3+b*zp1^2+Pp[Tp]-xp1:
> chm:=a*zm1^3+b*zm1^2+Pm[Tm]-xm1:
> ap:= evalf(Re(chp)^2+Im(chp)^2): am:=evalf(Re(chm)^2+Im(chm)^2):
>
> # MAGNIFY (Enhanced resolution for vicinity of a chosen value of 'c')
> ## CENTER POSITION
> zz:=s[1];
> ### version of defining zz
> #zz:=MID+I*0.:
> ## RADIUS
> rr:=0.3;
> if rr>0 then
> if (ap>10^(-5)) or abs(Pp[Tp]-zz)>rr then
> Tp:=Tp-1:
> else
> fi:
> if (am>10^(-5)) or abs(Pm[Tm]-zz)>rr then
> Tm:=Tm-1:
> else
> fi:
> else
> if (ap>10^(-5)) then Tp:=Tp-1: fi:
> if (am>10^(-5)) then Tm:=Tm-1: fi:
> fi:
>
> od:
> od:
> od:
>
> pp:=pointplot({seq([Re(Pp[n]),Im(Pp[n])],n=1..Tp)},
scaling=CONSTRAINED,color=red,symbol=circle,symbolsize=5):
> pm:=pointplot({seq([Re(Pm[n]),Im(Pm[n])],n=1..Tm)},
scaling=CONSTRAINED,color=red,symbol=circle,symbolsize=5):
>
> display({pp},{pm});a;
\end{verbatim}
\subsection{3D tubes}
\begin{verbatim}
> unassign('a','b','u','z','t','c'):
>
> a:=b->b^3:
> c:=b->1.:
> # |f'| VALUE
> MD:=1:
> sp:=solve(4*a(b)*x^3+3*b*x^2+2*c(b)*x-MD*exp(I*t),x):
>
> Tp:=0: Tm:=0:
> M:=00:
> M1:=15:M2:=60:
> zmi:=.2:zma:=.8:
> for k1 to M1 do
> print("k=",k1);
> for k2 to M2 do
>
> t:=evalf(2*Pi*k1/M1):
> b:=zmi+(zma-zmi)*k2/M2:
>
> N:=ArrayNumElems(Array([sp]));
> for i to N do
>
> wp:=allvalues(sp[i]):
> n:=ArrayNumElems(Array([wp])):
> for j to n do
> Tp:=Tp+1:
> if n >1 then
> Xp:=evalf(wp[j]):
> else
> Xp:=evalf(wp):
> end if:
> u:=evalf(Xp-(a(b)*Xp^4+b*Xp^3+c(b)*Xp^2)):
> Pp[Tp]:=array([Re(u),Im(u),b]):
> xp1:=Xp:
> zp1:=a(b)*xp1^4+b*xp1^3+c(b)*xp1^2+Pp[Tp][1]+I*Pp[Tp][2]:
> chp:=a(b)*xp1^4+b*xp1^3+c(b)*xp1^2+Pp[Tp][1]+I*Pp[Tp][2]-xp1:
> ap:= evalf(Re(chp)^2+Im(chp)^2):
> if (ap>10^(-5)) then
> Tp:=Tp-1:
> fi:
> od:
> od:
> od:
> od:
> # PREPARE ARRAY FOR 3D PLOT
> L:=1:
> N:=Tp+Tm;
> B:=array(1..N):
> k:=0:j:=0:
>
> for i to Tp do
> k:=k+1:
> B[k]:=Pp[i];
> od:
>
> for i to Tm do
> k:=k+1:
> B[k]:=Pm[i];
> od:
>
> j:=j+1:
> print(k,j,B[k]);
> # PLOT
> with(linalg):
> with(plots):
> with(plottools):
> setoptions3d(color=BLUE,symbol=CROSS,symbolsize=3);
> p:=pointplot3d(B,axes=BOXED):
> display(p);
\end{verbatim}
\subsection{Fragments of Julia sheaf ${\cal J}_{ax^3+(1-a)x^2+c}$:
orbits of orders $1$ and $2$ vs $c$ and $a$}
\begin{verbatim}
> unassign('x','a','b','c'):
> f:=x->a*x^3+b*x^2+c;
> simplify(diff(f(f(x)),x));
> fp1:=x->diff(f(x),x):
> fp:=x->diff(f(f(x)),x):
> G1:=f(x)-x;
> F2:=f(f(x))-x:
> G2:=simplify(F2/G1);
> ################################################
> a:=1/10;
> b:=1.-a;
>
> D1:=factor(discrim(a*x^3+b*x^2+c-x,x)):
> R21:=factor(resultant(a^3*x^6+2*a^2*x^5*b+a^2*x^4+2*a^2*x^3*c+
a*b^2*x^4+2*a*x^3*b+a*x^2+2*x^2*c*a*b+x*c*a+a*c^2+b^2*x^2+x*b+c*b+1,c-x+a*x^3+b*x^2,x)):
> D2:=factor(discrim(a^3*x^6+2*a^2*x^5*b+a^2*x^4+2*a^2*x^3*c+
a*b^2*x^4+2*a*x^3*b+a*x^2+2*x^2*c*a*b+x*c*a+a*c^2+b^2*x^2+x*b+c*b+1,x))/R21:
>
> # GET MIDDLE POINT
> s:=evalf(solve(D1,c)):
> ## versions of defining 's'
> #s:=evalf(solve(D2,c));
> #s:=evalf(solve(R21,c));
> MID:=(s[1]+s[2])/2.;
>
> # CHOOSE C VALUE
> c:=-6.24+I*0.:
> ## version of defining 'c'
> #c:=MID+I*0.;
>
> s1:=solve(G1,x);
> s2:=solve(G2,x);
>
> N1:=ArrayNumElems(Array([s1]));
> N:=ArrayNumElems(Array([s2]));
>
> # GET PAIRS
> k:=0:
> for i to N do
> for j from i+1 to N do
> if abs(f(s2[i])-s2[j])<0.0001 then
> k:=k+1:
> P[k][1]:=i:
> P[k][2]:=j:
> fi:
> od:
> od:
>
> print(P);
> # SHOW ROOT POSITION
> with(plots):
> p0:=pointplot({[Re(s1[1]),Im(s1[1])],[Re(s1[2]),Im(s1[2])],[Re(s1[3]),Im(s1[3])]},
color=BLACK,symbol=CROSS,symbolsize=15):
> p11:=pointplot({[Re(s2[P[1][1]]),Im(s2[P[1][1]])]},color=red):
> p12:=pointplot({[Re(s2[P[1][2]]),Im(s2[P[1][2]])]},color=red):
> p21:=pointplot({[Re(s2[P[2][1]]),Im(s2[P[2][1]])]},color=green):
> p22:=pointplot({[Re(s2[P[2][2]]),Im(s2[P[2][2]])]},color=green):
> p31:=pointplot({[Re(s2[P[3][1]]),Im(s2[P[3][1]])]},color=blue):
> p32:=pointplot({[Re(s2[P[3][2]]),Im(s2[P[3][2]])]},color=blue):
> display({p0,p11,p12,p21,p22,p31,p32});
\end{verbatim}
\subsubsection{Stability of orbits}
\begin{verbatim}
> # GET STABILITY INFO
> print("ORDER 1");
> F1:=fp1(x):
> for i to N/2 do
> x:=s1[i];
> print(abs(F1),x);
> od:
> unassign('x');
>
> print("ORDER 2");
> F:=fp(x):
> ## versions of defining 'F'
> #F:=2*b*x;
> #F:=3*a*x^2+2*b*x;
>
> for i to N/2 do
> x:=s2[P[i][1]];
> print(abs(F),x,f(x));
> od:
> unassign('x');
\end{verbatim}
\subsubsection{Attraction pattern}
\begin{verbatim}
> unassign('a','b','c'):
> f:=x->a*x^3+b*x^2+c;
> F2:=factor(f(f(x))-x);
>
> a:=1/3: b:=1-a:
> R21:=factor(resultant(a^3*x^6+2*a^2*x^5*b+a^2*x^4+2*a^2*x^3*c+
a*b^2*x^4+2*a*x^3*b+a*x^2+2*x^2*c*a*b+x*c*a+a*c^2+b^2*x^2+x*b+c*b+1,c-x+a*x^3+b*x^2,x));
> D2:=factor(discrim(a^3*x^6+2*a^2*x^5*b+a^2*x^4+2*a^2*x^3*c+
a*b^2*x^4+2*a*x^3*b+a*x^2+2*x^2*c*a*b+x*c*a+a*c^2+b^2*x^2+x*b+c*b+1,x));
>
> r12:=evalf(solve(R21,c));
> d2:=evalf(solve(D2,c));
> ND:=4:
> k:=0:
> for t to ND+1 do
> c:=d2[3]+0.01*exp(I*2*Pi/ND*(t-1)+I*Pi/2);
> C[t]:=c;
> sx:=evalf(solve(a^3*x^6+2*b*a^2*x^5+a^2*x^4+2*a^2*x^3*c+a*b^2*x^4+
2*b*a*x^3+a*x^2+2*b*a*x^2*c+a*x*c+a*c^2+b^2*x^2+b*x+c*b+1,x));
> for j to 6 do
> R[(t-1)*6+j]:=sx[j];
> k:=k+1:
> od:
> print("t=",t);
> od:
> print(k);
> # COMPUTE PATHS WITH RANDOM START
> rf:=rand(-100..100):
> L:=1:
> c:=evalf(C[L+1]);
> k:=0:
> BL:=1:EL:=20:
> LM:=2.8:
> for i from BL to EL do
>
> cx:=rf()/30+I*rf()/30:
> CX:=cx:
> N:=1:
> j1:=0:
> ep:=0.0003:
> for j to N do
> cx:=evalf(CX+0*exp(I*2*Pi*j/N)*ep):
> abs(cx-CX);
> for i1 to 50 do
> cx:=f(cx);
> if abs(Re(cx))<LM and abs(Im(cx))<LM then
> k:=k+1;
> S[k]:=cx;
> fi:
> od:
> od:
>
> od:
> print(k);
> # PLOT DATA
> with(plots):
> pr:=pointplot({seq([Re(R[n]),Im(R[n])],n=L*6+1..L*6+6)},
scaling=CONSTRAINED,color=blue,symbol=cross,symbolsize=25):
> ps1:=pointplot({seq([Re(S[n]),Im(S[n])],n=1..k)},
scaling=CONSTRAINED,color=red,symbol=circle,symbolsize=5):
> display({pr,ps1});
\end{verbatim}
\section{Acknowledgements}
This work is partly supported by Russian Nuclear Ministry, by RFBR
grants 07-01-00644 and 07-02-00645, by NWO 047.011.2004.026,
ANR-05-BLAN-0029-01 and E.I.N.S.T.E.IN 06-01-92059 projects and by
the Russian President's grant for support of the scientific
schools LSS-8004.2006.2.
\section{Introduction}\label{sec:intro}
Bochner's inequality is one of the most fundamental estimates in
geometric analysis. It states that
\begin{equation}\label{eq:intro-bochner}
\frac12\Delta|\nabla u|^2-\langle\nabla u, \nabla\Delta u\rangle\ge K \cdot |\nabla u|^2+\frac1N \cdot |\Delta u|^2
\end{equation}
for each smooth function $u$ on a Riemannian manifold $(M,g)$ provided
$K\in{\mathbb R}$ is a lower bound for the Ricci curvature on $M$ and $N\in
(0,\infty]$ is an upper bound for the dimension of $M$. The main
result of this paper is an analogous Bochner inequality on metric
measure spaces (mms, for short) $(X,d,m)$ with linear heat flow
satisfying the (reduced) curvature-dimension condition. Indeed, we
will also prove the converse: if the heat flow on a mms $(X,d,m)$ is
linear, then an appropriate version of \eqref{eq:intro-bochner} (for
the canonical gradient and Laplacian on $X$) will imply the reduced
curvature-dimension condition. Besides that, we also derive new,
sharp $W_2$-contraction results for the heat flow as well as pointwise
gradient estimates and prove that each of them is equivalent to the
curvature-dimension condition. That way, we obtain a complete
one-to-one correspondence between the Eulerian picture captured in the
Bochner inequality and the Lagrangian interpretation captured in the
curvature-dimension inequality.
\bigskip
The {\bf curvature-dimension condition} ${\cd(K,N)}$ was introduced by
Sturm in \cite{S06}. It was later adopted and slightly modified by
Lott \& Villani, see also the elaborate presentation in the monograph
\cite{Vil09}. The ${\cd(K,N)}$-condition for finite $N$ is a sophisticated
tightening up of the much simpler $\cd(K,\infty)$-condition introduced
as a synthetic Ricci bound for metric measure spaces independently by
Sturm \cite{S06} and Lott \& Villani \cite{LV09}. From the very
beginning, a disadvantage of the ${\cd(K,N)}$-condition for finite $N$ was
the lack of a local-to-global result. To overcome this drawback,
Bacher \& Sturm \cite{BS10} introduced the \emph{reduced
curvature-dimension condition} ${\cd^*(K,N)}$ which has a local-to-global
property and which is equivalent to the local version of ${\cd(K,N)}$. The
curvature-dimension condition ${\cd(K,N)}$ has been verified for Riemannian
manifolds \cite{S06}, Finsler spaces \cite{Oht09}, Alexandrov spaces
\cite{Pet10}, \cite{ZZ10}, cones \cite{BS11} and warped products of
Riemannian manifolds \cite{Ket12}. Actually, in all these cases the
conditions ${\cd(K,N)}$ and ${\cd^*(K,N)}$ turned out to be equivalent.
\bigskip
A completely different approach to generalized curvature-dimension
bounds was set forth in the pioneering work of Bakry and \'Emery
\cite{BE85}. It applies to the general setting of Dirichlet forms and
the associated Markov semigroups and is formulated using the
(iterated) \emph{carr\'e du champ} operators built from the generator
of the semigroup. This \textbf{energetic curvature-dimension
condition} $\be(K,N)$ has proven to be a powerful tool, in particular in
infinite dimensional situations. It yields hypercontractivity of the
semigroup and has successfully been used to derive functional
inequalities like the logarithmic Sobolev inequalities in a variety of
examples. Among the remarkable analytic consequences of the
Bakry--\'Emery condition $\be(K,\infty)$ we single out the point-wise
gradient estimates for the semigroup $H_t$. It implies that for any
$f$ in a large class of functions
\begin{align*}
\Gamma(H_t f) ~\le~ \mathrm{e}^{-2Kt}\, H_t\Gamma(f)\;,
\end{align*}
where $\Gamma$ is the carr\'e du champ operator.
\bigskip
The relation between the two notions of curvature bounds based on
optimal transport and Dirichlet forms has been studied in large
generality by Ambrosio, Gigli and Savar\'e in a series of recent works
\cite{AGS11b,AGS12}, see also \cite{AGMR12}. The key tool of their
analysis is a powerful calculus on metric measure spaces which allows
them to match the two settings. Starting from a metric measure
structure they introduce the so called Cheeger energy which takes over
the role of the `standard' Dirichlet energy and is obtained by
relaxing the $L^2$-norm of the slope of Lipschitz functions. A key
result is the identification of the $L^2$-gradient flow of the Cheeger
energy with the Wasserstein gradient flow of the entropy. This is the
mms equivalent of the famous result by Jordan--Kinderlehrer--Otto
\cite{JKO98} and allows one to define unambiguously a heat flow in
metric measure spaces.
We say that a metric measure space is \emph{infinitesimally
Hilbertian} if the heat flow is linear. This is equivalent to the
Cheeger energy being the associated Dirichlet form. We denote its
domain by $W^{1,2}$. Under the assumption of linearity of the heat
flow, Ambrosio--Gigli--Savar\'e prove that $\cd(K,\infty)$ implies
$\be(K,\infty)$ and the converse also holds under an additional
regularity assumption. Combining linearity of the heat flow with the
$\cd(K,\infty)$ condition leads to the \textbf{Riemannian curvature
condition} $\rcd(K,\infty)$ introduced in \cite{AGS11b}. This
concept again turns out to be stable under Gromov--Hausdorff
convergence and tensorization.
\bigskip
Recently, also {\bf Bochner's inequality} has been extended to
singular spaces. Ohta \& Sturm \cite{OS11} proved it for Finsler
spaces and Gigli, Kuwada \& Ohta \cite{GKO10} and Zhang \& Zhu
\cite{ZZ12} for Alexandrov spaces. Finally, Ambrosio, Gigli \&
Savar\'e established the Bochner inequality without the dimension term
(i.e. with $N=\infty$) in $\rcd(K,\infty)$ spaces. However, in the
classical setting, the full strength of Bochner's inequality only
comes into play if also the dimension effect is taken into account,
i.e. with finite $N$. This can be seen for example from the famous
results of Li--Yau \cite{LY86} who derive from it a differential
Harnack inequality, eigenvalue estimates for the Laplacian and
Gaussian heat kernel bounds.
\bigskip
We prove the equivalence of
curvature-dimension bounds via optimal transport and via the
Bakry--\'Emery approach in full generality for infinitesimally Hilbertian metric measure spaces. In particular, we establish the full Bochner
inequality on such metric measure spaces.
\medskip
Our approach strongly relies on properties and consequences of a new
curvature-dimension condition, the so-called {\bf entropic curvature
dimension condition} ${\cd^e(K,N)}$. It simply states that the Boltzmann
entropy $\ent$ is $(K,N)$-convex on the Wasserstein space
$\mathscr{P}_2(X,d)$. Here a function $s$ on an interval $I\subset{\mathbb R}$ is
called $(K,N)$-convex if
\begin{equation}
s''~\ge~ K+\frac1N\cdot (s')^2
\end{equation}
holds in the distributional sense. A function $S$ on a geodesic space is
called $(K,N)$-convex if it is $(K,N)$-convex along each unit speed
geodesic -- or at least along each curve within a class of unit speed geodesics which connect each pair of points in $X$. This way, $(K,N)$-convexity is a weak formulation of
\begin{equation}
\Hess S~ \ge~ K+\frac{1}{N}\big(\nabla S \otimes\nabla S \big)\;.
\end{equation}
Our first result is the following
\begin{introtheorem}[Theorem~\ref{thm:equiv_as}]
For an essentially non-branching mms
(see Definition~\ref{def:ess-non-branch})
the entropic curvature-dimension condition ${\cd^e(K,N)}$ is equivalent
to the reduced curvature-dimension condition ${\cd^*(K,N)}$.
\end{introtheorem}
We say that a metric measure space satisfies the \textbf{Riemannian
curvature-dimension condition} ${\rcd^*(K,N)}$ if it is infinitesimally
Hilbertian and satisfies ${\cd^e(K,N)}$ or ${\cd^*(K,N)}$. This notion turns out
to have the natural stability properties. Namely, we prove (see
Theorems~\ref{thm:rcd-stable}, \ref{thm:rcd-tensor},
\ref{thm:rcd-locglob}) that the ${\rcd^*(K,N)}$ condition is preserved under
measured Gromov--Hausdorff convergence as well as under tensorization of metric measure
spaces and that it has a local-to-global property.
\bigskip
The geometric intuition coming from the analysis of $(K,N)$-convex
functions and their gradient flows leads to a new form of the {\bf
Evolution Variational Inequality} ${\evi_{K,N}}$ on the Wasserstein space
taking into account also the effect of the dimension bound.
Until now, the notion of ${\evi_{K,N}}$ gradient flow was known only without
dimension term (i.e. with $N=\infty$). These Evolution Variational
Inequalities first appeared in the setting of Hilbert spaces where
they characterize uniquely the gradient flows of $K$-convex
functionals.
In a general metric setting and in connection with optimal transport
these inequalities have been extensively studied in \cite{OW06, DS08,
AGS11b}. In particular, it turned out that $\rcd(K,\infty)$ spaces
can be characterized by the fact that the heat flow is an
$\evi_{K,\infty}$ gradient flow of the entropy.
Here we obtain a reinforcement of this result. Namely, the new
Riemannian curvature-dimension condition ${\rcd^*(K,N)}$ is equivalent to the
existence of an ${\evi_{K,N}}$ gradient flow of the entropy in the following
sense.
\begin{introtheorem}[Definition~\ref{def:KNflow}, Theorem~\ref{thm:RCDKN-equiv}]
A mms $(X,d,m)$ satisfies ${\rcd^*(K,N)}$ if and only if $(X,d)$ is a
length space, $m$ satisfies an integrability condition
\eqref{eq:exp-int} and every $\mu_0\in\mathscr{P}_2(X,d)$ is the starting
point of a curve $(\mu_t)_{t\ge0}$ in $\mathscr{P}_2(X,d)$ such that for any
other $\nu\in\mathscr{P}_2(X,d)$ and a.e. $t>0$:
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t}\skn{\frac12 W_2(\mu_t,\nu)}^2 + K\cdot\skn{\frac12 W_2(\mu_t,\nu)}^2~\leq~\frac{N}{2}\left(1-\frac{U_{N}(\nu)}{U_{N}(\mu_t)}\right).
\end{align}
Here $U_N(\mu)=\exp\Big(-\frac1N \ent(\mu)\Big)$ and
${\frak{s}}_\kappa(r)=\sqrt{1/\kappa}\sin\big(\sqrt{\kappa}r\big)$ provided
$\kappa>0$ and ${\frak{s}}_{\kappa}(r)=\sqrt{1/(-\kappa)}\sinh
\big(\sqrt{-\kappa}r\big),\ {\frak{s}}_0(r)=r$ for $\kappa<0$
resp. $\kappa=0$.
\end{introtheorem}
This curve is unique; in fact, it is the heat flow which we denote in
the following by $\mu_t=H_t\mu_0$.
\medskip
The Evolution Variational Inequality ${\evi_{K,N}}$ as stated above
immediately implies new, sharp contraction estimates (or, more
precisely, expansion bounds) in Wasserstein metric for the heat flow.
\begin{introtheorem}[Theorem~\ref{thm:contraction}, Theorem~\ref{thm:W2-contraction} and Proposition 2.12]
Let $(X,d,m)$ be a ${\rcd^*(K,N)}$
space. Then for any $\mu,\nu\in\mathscr{P}_2(X,d)$ and $s,t>0$:
\begin{align}\label{eq:intro-contraction}
\skn{\frac12
W_2(H_t\mu,H_s\nu)}^2~\leq~&e^{-K(s+t)}\cdot\skn{\frac12
W_2(\mu,\nu)}^2
+ \frac{N}{K}\Big(1-e^{-K(s+t)}\Big)\frac{\big(\sqrt{t}-\sqrt{s}\big)^2}{2(s+t)}\;.
\end{align}
This implies the slightly weaker bound
\begin{align*}
W_2( H_t\mu,H_s\nu )^2
&~\le~
\mathrm{e}^{-K \tau(s,t)} \cdot W_2(\mu,\nu)^2 + 2N
\frac{ 1 - \mathrm{e}^{-K \tau(s,t)}}{ K \tau(s,t) }
\big( \sqrt{t} - \sqrt{s} \big)^2 \; ,
\end{align*}
where $\tau (s,t) = 2( t + \sqrt{ts} + s )/3$.
In the particular case $t=s$ this reduces to the well-known estimate
$W_2( H_t\mu,H_t\nu ) \le
\mathrm{e}^{-Kt}\cdot W_2(\mu,\nu)$.
\end{introtheorem}
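To spell out the reduction at $t=s$ (an elementary check): one has $\big(\sqrt{t}-\sqrt{t}\big)^2=0$ and $\tau(t,t)=2t$, so the second bound collapses to
\begin{align*}
W_2( H_t\mu,H_t\nu )^2~\le~\mathrm{e}^{-2Kt}\cdot W_2(\mu,\nu)^2\;,
\end{align*}
which is the classical $K$-contractivity of the heat flow.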
\bigskip
Due to the work of Kuwada \cite{Kuw10}, it is well known that
$W_2$-expansion bounds are intimately related to pointwise gradient
estimates. The next result is a particular case of a more general
equivalence that will be the subject of a forthcoming publication
\cite{Kuw13}.
\begin{introtheorem}[Theorem~\ref{thm:grad-est}]
Assume that the mms $(X,d,m)$ is infinitesimally Hilbertian and
satisfies a regularity assumption (Assumption~\ref{ass:Ch-reg}). If
the $W_2$-expansion bound \eqref{eq:intro-contraction} holds then
for any $f$ of finite Cheeger energy:
\begin{align}\label{eq:intro-grad-est}
\wug{\bH_t f}^2 + \frac{4Kt^2}{N\big(e^{2Kt}-1\big)}\abs{\Delta \bH_t f}^2~\leq~e^{-2Kt}\bH_t\big(\wug{f}^2\big) \quad \mbox{$m$-a.e.}
\end{align}
\end{introtheorem}
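As a heuristic consistency check (not part of the theorem): in the limit $K\to0$ the coefficient in \eqref{eq:intro-grad-est} satisfies $\frac{4Kt^2}{N(e^{2Kt}-1)}\to\frac{2t}{N}$, so the estimate formally becomes
\begin{align*}
\wug{\bH_t f}^2 + \frac{2t}{N}\abs{\Delta \bH_t f}^2~\leq~\bH_t\big(\wug{f}^2\big)\;,
\end{align*}
the dimensional improvement of the gradient estimate familiar from the $\Gamma$-calculus for $K=0$.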
Note that Assumption~\ref{ass:Ch-reg} is the same as what is assumed
in \cite{AGS12} and it is always satisfied if $(X,d,m)$ is
$\rcd(K',\infty)$ for any $K'\in {\mathbb R}$. Hence, Theorem 3 and Theorem 4
imply in particular that \eqref{eq:intro-grad-est} holds on a ${\rcd^*(K,N)}$
space. Here $\wug{f}$ denotes the weak upper gradient of $f$
introduced in \cite{AGS11a}. This kind of gradient estimate has first
been established by Bakry and Ledoux \cite{BL06} in the setting of
$\Gamma$-calculus. It is new in the framework of metric measure spaces
and allows us to establish the Bochner formula for the canonical
gradients and Laplacians on mms.
\begin{introtheorem}[Theorem~\ref{thm:Bochner}]
Assume that the mms $(X,d,m)$ is infinitesimally Hilbertian and
satisfies the gradient estimate \eqref{eq:intro-grad-est}. Then for
all $f\in D(\Delta)$ with $\Delta f\in W^{1,2}(X,d,m)$ and all $g\in
D(\Delta)$ bounded and non-negative with $\Delta g\in L^\infty(X,m)$
we have
\begin{align}\label{eq:intro-rough-bochner}
\frac12\int\Delta g |\nabla f|_w^2 \mathrm{d} m - \int
g\langle\nabla(\Delta f),\nabla f\rangle \mathrm{d} m \geq K\int g
|\nabla f|_w^2\mathrm{d} m + \frac{1}{N}\int g\big(\Delta f\big)^2\mathrm{d}
m\;.
\end{align}
\end{introtheorem}
\bigskip
\begin{introtheorem}[Proposition~\ref{prop:Bochner2BEW}, Theorem~\ref{thm:BEW2CDE}]
Assume that the mms $(X,d,m)$ is infinitesimally Hilbertian and
satisfies Assumption~\ref{ass:Ch-reg}.
Then the Bochner inequality
$\be(K,N)$ \eqref{eq:intro-rough-bochner} implies the entropic
curvature-dimension condition ${\cd^e(K,N)}$.
\end{introtheorem}
Thus we have closed the circle. All the previous key properties are
equivalent to each other, at least if we require the heat flow to be
linear.
\begin{introtheorem}[Summary]
Let $(X,d,m)$ be an infinitesimally Hilbertian metric measure space.
Then the following properties are equivalent:
\begin{itemize}
\item[(i)] ${\cd^*(K,N)}$,
\item[(ii)] ${\cd^e(K,N)}$,
\item[(iii)] $(X,d)$ is a length space, \eqref{eq:exp-int} holds and the
${\evi_{K,N}}$ gradient flow of the entropy exists starting from
every $\mu\in\mathscr{P}_2(X,d)$.
\end{itemize}
If one of them is satisfied, we obtain the following:
\begin{itemize}
\item[(iv)] The $W_2$-expansion bound \eqref{eq:intro-contraction},
\item[(v)] The Bakry--Ledoux pointwise gradient estimate $\bl(K,N)$
\eqref{eq:intro-grad-est},
\item[(vi)] The Bochner inequality $\be(K,N)$ \eqref{eq:intro-rough-bochner}.
\end{itemize}
Moreover, under Assumption~\ref{ass:Ch-reg},
all of properties {\rm (i)--(vi)} are equivalent.
\end{introtheorem}
\bigskip
\begin{remark*}
Finally, let us point out -- on a more heuristic level -- two remarkable links between $(K,N)$-convexity and the Bakry-\'Emery condition $\be(K,N)$:
\begin{itemize}
\item[(I)] The $(K,N)$-convexity of a function $V$ on a Riemannian manifold $(M,g)$ can be interpreted as the $\be(K,N)$-condition for the re-scaled drift diffusion
\begin{equation}\label{SDE}
dX_t=\sqrt{2\alpha}\, dB_t-\nabla V(X_t)\,dt
\end{equation}
in the limit of vanishing diffusion.
\item[(II)]
The $\be(K,N)$-condition for the Brownian motion or heat flow on $M$ is equivalent to the $(K,N)$-convexity of the function $S=\ent(.)$ on the
Wasserstein space $\mathscr{P}_2(M)$.
\end{itemize}
Both links are related to each other since the heat flow is the solution to the ODE (``without diffusion'')
\begin{equation*}\label{ODE}
d\mu_t=-\nabla S(\mu_t)\,dt
\end{equation*}
on $\mathscr{P}_2(M)$ (regarded as infinite dimensional Riemannian manifold).
The link (II) is the main result of this paper.
To see (I), note that in the case $\alpha>0$, equilibration and regularization effects of the stochastic dynamics \eqref{SDE} can be formulated in terms of the Bakry-\'Emery estimate for the generator
$L=\alpha\Delta-\nabla V\cdot \nabla$
of the associated transition semigroup $\bH_tu(x)={\mathbb E}_x[u(X_t)]$.
The law of $X_t$ evolves according to the dual semigroup $(\bH_t^*)_{t>0}$ with generator $L^*u=\alpha\Delta u+\mathrm{div}(u\cdot\nabla V)$.
Assume that the manifold $M$ has dimension $\le n$ and Ricci curvature $\ge k$.
Then the time-changed operator $\tilde L:=\frac1\alpha L$ satisfies the Bakry-\'Emery condition $\be(\frac1\alpha K,\frac1\alpha N)$ provided
\begin{equation}\label{alpha}
\mathrm{Hess}\, V-\frac1{N-\alpha n}(\nabla V\otimes\nabla V)\ge K-\alpha k
\end{equation}
[Prop. 4.21].
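As a quick consistency check (ours, not part of the cited statement): for $\alpha=1$ and $V\equiv0$, condition \eqref{alpha} reduces to
\begin{equation*}
0~\ge~K-k\;,
\end{equation*}
i.e.\ it merely requires $k\ge K$ (together with $N>n$), recovering the classical Bakry-\'Emery condition $\be(K,N)$ for the Laplacian itself.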
In the Wasserstein picture, the $\be(\frac1\alpha K,\frac1\alpha N)$-condition for $\tilde L$ translates into the $(\frac1\alpha K,\frac1\alpha N)$-convexity of the functional $\tilde S(\mu)=\ent(\mu)+\frac1\alpha \int V\,d\mu$ [Thm. 7]. The latter in turn is equivalent to the $(K,N)$-convexity of $S(\mu)=\alpha\ent(\mu)+ \int V\,d\mu$
on $\mathscr{P}_2(M)$ [Lemma 2.9].
Note that this also makes perfect sense for $\alpha=0$ in which case the associated gradient flow equation on the Wasserstein space $\mathscr{P}_2(M)$ reads
\begin{equation*}
d\mu_t=-\nabla V\,dt.
\end{equation*}
Obviously, this precisely describes the evolution on $M$ determined by the semigroup $(\bH_t^*)_{t>0}$ with generator $L^*u=\mathrm{div}(u\cdot\nabla V)$.
Equilibration and regularization for this evolution are characterized by the parameters $K$ and $N$ in the bound \eqref{alpha}
for $\alpha=0$, i.e.
\begin{equation*}
\mathrm{Hess}\, V-\frac1{N}(\nabla V\otimes\nabla V)\ge K.
\end{equation*}
This is the $(K,N)$-convexity of $V$ on $M$.
\end{remark*}
\textbf{Organization of the article.} First we illustrate the new
concept of $(K,N)$-convexity in a smooth and finite-dimensional
setting. Since many of the arguments which relate geodesic convexity,
the Evolution Variational Inequality and space-time expansion bounds
for the gradient flow are of a purely metric nature we study
$(K,N)$-convexity, $\evi_{K,N}$ and its consequences in the general
setting of metric spaces in Section~\ref{sec:evi}. In
Section~\ref{sec:cds} we turn to the study of $(K,N)$-convexity of the
entropy on the Wasserstein space. The entropic curvature-dimension
condition is introduced in Section~\ref{sec:cdkn} and its basic
properties are established. In particular we prove equivalence with
the reduced curvature-dimension condition for essentially
non-branching spaces. In Section~\ref{sec:riem-cdkn} we prove that the
entropic curvature-dimension condition plus linearity of the heat flow
is equivalent to the existence of an $\evi_{K,N}$ gradient flow of the
entropy which leads to the Riemannian curvature-dimension
condition. Here we also prove the stability results for ${\rcd^*(K,N)}$.
Finally, in Section~\ref{sec:cde-bochner} we prove the equivalence of
the entropic curvature-dimension condition, space-time Wasserstein
expansion bounds, pointwise gradient estimates and the Bochner
inequality for infinitesimally Hilbertian metric measure spaces. As
applications, new functional inequalities deduced from ${\cd^e(K,N)}$ are
studied in Section~\ref{sec:FI} and the sharp Lichnerowicz bound for
${\rcd^*(K,N)}$ spaces is established in Section~\ref{sec:Lichnerowicz}.
\section{$(K,N)$-convex functions and their EVI gradient flows}\label{sec:evi}
\subsection{Gradient flows and $(K,N)$-convexity in a smooth setting}
\label{sec:smooth}
In order to illustrate the concept of $(K,N)$-convexity of the entropy
and the consequences for its gradient flow, we consider in this
section a smooth and finite-dimensional setting. \medskip
Let $M$ be a smooth connected and geodesically complete Riemannian
manifold with metric tensor $\ip{\cdot,\cdot}$ and Riemannian distance
$d$. Let $S:M\to{\mathbb R}$ be a smooth function. Given two real numbers
$K\in{\mathbb R}$ and $N>0$, we say that $S$ is $(K,N)$-convex, if
\begin{align}\label{eq:knconvex1}
\Hess S - \frac{1}{N}\big(\nabla S \otimes\nabla S \big)~\geq~K\;,
\end{align}
in the sense that for all $x\in M$ and $v\in T_xM$ we have
\begin{align*}
\Hess S(x)[v] - \frac{1}{N}\ip{\nabla S(x),v}_x^2~\geq~K\abs{v}_x^2\;.
\end{align*}
Obviously, this condition becomes weaker as $N$ increases and in the
limit $N\to\infty$ we recover the notion of $K$-convexity, i.e. $\Hess
S\geq K$. It turns out to be useful to introduce the function
$U_N:M\to{\mathbb R}_+$ given by
\begin{align*}
U_N(x)~=~\exp\left(-\frac{1}{N}S(x)\right)\;.
\end{align*}
A direct calculation shows that \eqref{eq:knconvex1} can equivalently
be written as:
\begin{align}\label{eq:knconvex2}
\Hess U_N ~\leq~ -\frac{K}{N}\cdot U_N\;.
\end{align}
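For completeness, the calculation behind this equivalence: since $\nabla U_N=-\frac1N\,U_N\,\nabla S$, one finds
\begin{align*}
\Hess U_N~=~-\frac{1}{N}\, U_N\left(\Hess S-\frac{1}{N}\,\nabla S \otimes\nabla S\right)\;,
\end{align*}
so that \eqref{eq:knconvex2} is precisely \eqref{eq:knconvex1} multiplied by the negative factor $-\frac1N\,U_N$.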
This condition can be thought of as a ``concavity'' property of
$U_N$. As with concavity, it can be expressed in an integrated form. To
this end we introduce the following functions.
\begin{definition}\label{def:distortioncoefficients}
For $\kappa\in{\mathbb R}$ and $\theta\geq0$ we define the functions
\begin{align*}
{\frak{s}}_\kappa(\theta)~&=~
\begin{cases}
\frac{1}{\sqrt{\kappa}}\sin\left(\sqrt\kappa\theta\right)\;, & \kappa>0\;,\\
\theta\;, & \kappa=0\;,\\
\frac{1}{\sqrt{-\kappa}}\sinh\left(\sqrt{-\kappa}\theta\right)\;, & \kappa<0\;,
\end{cases}\\
{\frak{c}}_\kappa(\theta)~&=~
\begin{cases}
\cos\left(\sqrt\kappa\theta\right)\;, & \kappa\geq0\;,\\
\cosh\left(\sqrt{-\kappa}\theta\right)\;, & \kappa<0\;.
\end{cases}
\end{align*}
Moreover, for $t\in[0,1]$ we set
\begin{align*}
\sigma_\kappa^{(t)}(\theta)~=~
\begin{cases}
\frac{{\frak{s}}_\kappa(t\theta)}{{\frak{s}}_\kappa(\theta)}\;, & \kappa\theta^2\neq0 \text{ and } \kappa\theta^2<\pi^2\;,\\
t\;, & \kappa\theta^2=0\;,\\
+\infty\;, & \kappa\theta^2\geq \pi^2\;.
\end{cases}
\end{align*}
\end{definition}
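Let us record a characterization that is used implicitly in the comparison arguments below: for fixed $\theta$ with $\kappa\theta^2<\pi^2$, the function $t\mapsto\sigma_\kappa^{(t)}(\theta)$ is the unique solution of the boundary value problem
\begin{align*}
v''(t)~=~-\kappa\theta^2\cdot v(t)\;,\qquad v(0)=0\;,\quad v(1)=1\;.
\end{align*}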
\begin{lemma}\label{lem:knconvexint}
The following statements are equivalent:
\begin{itemize}
\item[(i)] The function $S$ is $(K,N)$-convex.
\item[(ii)] For each constant speed geodesic
$(\gamma_t)_{t\in[0,1]}$ in $M$ and all $t\in[0,1]$ we have with $d:=d(\gamma_0,\gamma_1)$:
\begin{align}\label{eq:knconvex3}
U_N(\gamma_t)~\geq~\sigkn{1-t}{d}\cdot U_N(\gamma_0) + \sigkn{t}{d}\cdot U_N(\gamma_1)\;.
\end{align}
\item[(iii)] For each constant speed geodesic
$(\gamma_t)_{t\in[0,1]}$ in $M$ we have that
\begin{align}\label{eq:knconvex4}
U_N(\gamma_1)~\leq~\ckn{d}\cdot U_N(\gamma_0) + \frac{\skn{d}}{d}\cdot\left.\frac{\mathrm{d}}{\mathrm{d}t}\right\vert_{t=0}U_N(\gamma_t)\;.
\end{align}
\end{itemize}
\end{lemma}
\begin{proof}
(i)$\Rightarrow$(ii): Let $(\gamma_t)_{t\in[0,1]}$ be a constant
speed geodesic. Then in particular
$\abs{\dot{\gamma_t}}_{\gamma_t}=d$ and \eqref{eq:knconvex2}
immediately yields that the function $u:t\mapsto U_N(\gamma_t)$
satisfies
\begin{align}\label{eq:geoconvex}
u''(t)~\leq~-\frac{K}{N}d^2\cdot u(t)\;.
\end{align}
The function $v:[0,1]\to{\mathbb R}$ given by the right-hand side of
\eqref{eq:knconvex3} has the same boundary values as
$u$ and satisfies $v''=-(K/N) d^2\cdot v$. A comparison principle thus
yields $u\geq v$.
(ii)$\Rightarrow$(iii): This follows immediately by subtracting
$U_N(\gamma_0)$ on both sides of \eqref{eq:knconvex3}, dividing by
$t$ and letting $t\searrow0$.
(iii)$\Rightarrow$(i): Let $\gamma:[-1,1]\to M$ be a constant speed
geodesic with $\gamma_0=x$ and $\dot\gamma_0=v$,
i.e. $d=d(\gamma_0,\gamma_1)=\abs{v}$. Using \eqref{eq:knconvex4}
for the rescaled geodesics $\gamma':[0,1]\to M,\; t\mapsto\gamma_{\varepsilon t}$ and
$\gamma'':[0,1]\to M,\;t\mapsto\gamma_{-\varepsilon t}$ and adding up we obtain
\begin{align*}
U_N(\gamma_{\varepsilon}) + U_N(\gamma_{-\varepsilon}) - 2\ckn{\varepsilon d}\cdot U_N(\gamma_{0})~\leq~ 0\;.
\end{align*}
Dividing by $\varepsilon^2$ and using the expansion $\ckn{\varepsilon
d}=1-\frac{K}{2N}\varepsilon^2d^2+o(\varepsilon^2)$ finally yields
\begin{align*}
\Hess U_N(x)[v]~\leq~-\frac{K}{N}\abs{v}^2\;.
\end{align*}
\end{proof}
\begin{remark}\label{rem:finite-diam}
We note that the existence of a $(K,N)$-convex function $S:M\to{\mathbb R}$
with $K>0$ poses strong constraints on the manifold $M$. In
particular, it implies that the diameter of $M$ is bounded by
$\sqrt{\frac{N}{K}}\pi$. This is immediate from the characterization
\eqref{eq:knconvex3} and the singularity of the coefficient
$\sigma_\kappa^{(t)}(\cdot)$ at $\pi/\sqrt{\kappa}$.
\end{remark}
\begin{lemma}\label{lem:gradflowevi}
Assume that $S$ is $(K,N)$-convex and differentiable. A smooth curve
$x:[0,\infty)\to M$ is a solution to the gradient flow equation
\begin{align}\label{eq:smoothgradflow}
\dot x_t~=~-\nabla S(x_t) \quad \forall t>0\;,
\end{align}
if and only if the following Evolution Variational Inequality
($EVI_{K,N}$) holds: for all $z\in M$ and all $t>0$:
\begin{align}\label{eq:smoothevikn}
\frac{\mathrm{d}}{\mathrm{d}t}\skn{\frac12 d(x_t,z)}^2 + K\cdot\skn{\frac12 d(x_t,z)}^2 ~\leq~\frac{N}{2}\left(1-\frac{U_N(z)}{U_N(x_t)}\right)\;.
\end{align}
\end{lemma}
\begin{proof}
To prove the only if part, fix $t\geq0$, $z\in M$ and a constant
speed geodesic $\gamma:[0,1]\to M$ connecting $x_t$ to $z$. Observe
that by \eqref{eq:smoothgradflow} and the first variation formula we
have
\begin{align*}
\left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0}U_N(\gamma_s)~&=~-\frac{1}{N}U_N(x_t)\ip{\nabla S(x_t),\dot\gamma_0}~=~\frac{1}{N}U_N(x_t)\ip{\dot x_t,\dot\gamma_0}\\
&=~-\frac{1}{N}U_N(x_t)\frac{\mathrm{d}}{\mathrm{d}t} \frac12 d(x_t,z)^2\;.
\end{align*}
Combining this with the $(K,N)$-convexity condition in the form
\eqref{eq:knconvex4} we obtain with $d=d(x_t,z)$:
\begin{align}
\label{eq:smoothevialt}
U_N(z)~\leq~\ckn{d} U_N(x_t) - \frac{\skn{d}}{N d}U_N(x_t)\frac{\mathrm{d}}{\mathrm{d}t}\frac12 d(x_t,z)^2\;.
\end{align}
Using the identity
\begin{align}\label{eq:sc-trick}
\frac{2}{N}\skn{\frac12\theta}^2~=~\frac{1}{K}\big(1-\ckn{\theta}\big)\;,
\end{align}
it is immediate to see that the last inequality is equivalent to
\eqref{eq:smoothevikn}.
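(Behind \eqref{eq:sc-trick} is nothing but the half-angle formula: for $K>0$,
\begin{align*}
\frac{2}{N}\skn{\frac12\theta}^2~=~\frac{2}{K}\sin^2\Big(\frac12\sqrt{K/N}\,\theta\Big)~=~\frac{1}{K}\Big(1-\cos\big(\sqrt{K/N}\,\theta\big)\Big)~=~\frac{1}{K}\big(1-\ckn{\theta}\big)\;,
\end{align*}
and analogously with hyperbolic functions for $K<0$.)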
For the if part, fix $t\geq0$ and a constant speed geodesic
$\gamma:[0,1]\to M$ with $\gamma_0=x_t$. Using the Evolution
Variational Inequality in the form \eqref{eq:smoothevialt} with
$z=\gamma_\varepsilon$ for some $\varepsilon>0$ we obtain
\begin{align*}
U_N(\gamma_\varepsilon)-\ckn{\varepsilon\abs{v}}U_N(\gamma_0)~\leq~\frac{\skn{\varepsilon\abs{v}}}{N\varepsilon\abs{v}}U_N(\gamma_0)\ip{\dot x_t,\varepsilon v}\;,
\end{align*}
where $v=\dot\gamma_0$. Dividing by $\varepsilon$ and letting
$\varepsilon\searrow0$, taking into account that $\ckn{\varepsilon d}=1+o(\varepsilon)$
and $\skn{\varepsilon d}=\varepsilon d +o(\varepsilon^2)$, we obtain
\begin{align*}
\ip{-\nabla S(x_t),v}~\leq~\ip{\dot x_t,v}\;.
\end{align*}
Since the direction of $v\in T_{x_t}M$ was arbitrary we obtain
\eqref{eq:smoothgradflow}.
\end{proof}
We conclude this section by exhibiting some 1-dimensional models of
$(K,N)$-convex functions.
\begin{example}\label{ex:1D-model}
Each of the following are $(K,N)$-convex functions. Note that the
domain of definition is maximal in each case.
\begin{itemize}
\item[(i)] For $N>0$ and $K>0$ let $S:(-\frac{\pi}{2}\sqrt{\frac{N}{K}},\frac{\pi}{2}\sqrt{\frac{N}{K}})\to{\mathbb R}$ defined by
\begin{align*}
S(x)~=~-N \log\cos\left(x\sqrt{K/N}\right)\;.
\end{align*}
\item[(ii)] For $N > 0$ and $K = 0$ let $S: (0,\infty) \to {\mathbb R}$ defined by
\begin{align*}
S(x) = - N \log x \;.
\end{align*}
\item[(iii)] For $N>0$ and $K < 0$ let $S:(0,\infty)\to{\mathbb R}$ defined by
\begin{align*}
S(x)~=~-N \log\sinh\left(x\sqrt{-K/N}\right)\;.
\end{align*}
\item[(iv)] For $N>0$ and $K<0$ let $S:(-\infty,\infty)\to{\mathbb R}$ defined by
\begin{align*}
S(x)~=~-N \log\cosh\left(x\sqrt{-K/N}\right)\,.
\end{align*}
\end{itemize}
\end{example}
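All four models are in fact extremal: in cases (i)--(iv) one has
$U_N(x)=\cos\big(x\sqrt{K/N}\big)$, $x$, $\sinh\big(x\sqrt{-K/N}\big)$
resp.\ $\cosh\big(x\sqrt{-K/N}\big)$, and each of these functions satisfies
\begin{align*}
U_N''~=~-\frac{K}{N}\cdot U_N\;,
\end{align*}
i.e.\ the differential inequality \eqref{eq:knconvex2} holds with equality.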
The cases (i) and (iv) of the previous example canonically extend to multidimensional spaces.
\begin{example}
Let $(M,g)$ be a $n$-dimensional Riemannian manifold, $z\in M$ be any point and $N>0$ be any real number.
\begin{itemize}
\item[(i)] Then for each $K>0$ the function
\begin{align*}
S(x)~=~-N \log\cos\left(d(x,z)\sqrt{K/N}\right)
\end{align*}
defined on the open ball $\big\{x\in M: \ d(x,z)<\frac{\pi}{2}\sqrt{N/K}\big\}$ is $(K,N)$-convex provided the sectional curvature of the underlying space is $\le K/N$. (This in particular applies to the Euclidean space ${\mathbb R}^n$.)
\item[(ii)]
For each $K<0$ the function
\begin{align*}
S(x)~=~-N \log\cosh\left(d(x,z)\sqrt{-K/N}\right)
\end{align*}
defined on all of $M$ is $(K,N)$-convex provided the sectional curvature of the underlying space is $\ge K/N$.
(This in particular applies to the Euclidean space ${\mathbb R}^n$.)
\end{itemize}
Indeed, analogous statements hold true on geodesic spaces with generalized bounds for the sectional curvature in the sense of Alexandrov \cite{BBI01}.
\end{example}
\subsection{$(K,N)$-convexity in metric spaces}
We continue our study of $(K,N)$-convexity in a purely metric setting.
Let $(X,d)$ be a complete and separable metric space and let $S : X
\to [ - \infty , \infty ]$ be a functional on $X$.
We denote by $D(S):=\{x\in X~:~S(x) < \infty \}$ the proper domain of
$S$. Given a number $N\in(0,\infty)$ we define the functional
$U_N:X\to[0,\infty]$ by setting
\begin{align}\label{eq:def-U}
U_N(x)~:=~\exp\left(-\frac{1}{N}S(x)\right)\;.
\end{align}
\begin{definition}\label{def:knconvex}
Let $K\in{\mathbb R}$, $N\in(0,\infty)$. We say that the functional $S$ is
$(K,N)$-convex if and only if for each pair $x_0,x_1\in D(S)$ there
exists a constant speed geodesic $\gamma:[0,1]\to X$ connecting
$x_0$ to $x_1$ such that for all $t\in[0,1]$:
\begin{align}\label{eq:defknconvex}
U_N(\gamma_t)~\geq~\sigkn{1-t}{d(\gamma_0,\gamma_1)}\cdot U_N(\gamma_0) + \sigkn{t}{d(\gamma_0,\gamma_1)}\cdot U_N(\gamma_1)\;.
\end{align}
If \eqref{eq:defknconvex} holds for every geodesic $\gamma:[0,1]\to
D(S)$ we say that $S$ is \emph{strongly} $(K,N)$-convex.
\end{definition}
For investigating $(K,N)$-convexity (especially for the strong form),
the following equivalent conditions will be helpful in the sequel.
\begin{lemma}\label{lem:knconvexint2}
Let $u : X \to [ 0, \infty )$ be an upper semi-continuous function
and $\kappa \in {\mathbb R}$.
Then the following statements are equivalent:
\begin{itemize}
\item[(i)]
For each constant speed geodesic $\gamma : [ 0 , 1 ] \to X$ and $t \in [0 , 1]$,
$\displaystyle
u'' (\gamma_t)
~\leq~
- \kappa d ( \gamma_0 , \gamma_1 )^2 u (\gamma_t)
$
in the distributional sense, i.e.
\begin{align*}
\int_0^1 \varphi'' (t) u (\gamma_t) \,\mathrm{d} t
~\leq~
- \kappa d ( \gamma_0 , \gamma_1 )^2 \int_0^1 \varphi (t) u (\gamma_t) \,\mathrm{d} t
\end{align*}
for any $\varphi \in C_0^\infty ( ( 0 , 1 ) )$ with $\varphi \ge 0$.
\item[(ii)] For each constant speed geodesic $\gamma$ on $X$ and $t \in [0,1]$,
\begin{align}\label{eq:knconvex2-3}
u (\gamma_t)~\geq~
\sigma_{\kappa}^{(1-t)} ( d ( \gamma_0 , \gamma_1 ) ) \cdot u(\gamma_0)
+
\sigma_{\kappa}^{(t)} ( d ( \gamma_0 , \gamma_1 ) ) \cdot u(\gamma_1).
\end{align}
\item[(iii)] For each constant speed geodesic $\gamma$ on $X$,
there is $\delta = \delta_\gamma > 0$ such that
for all $0 \le s \le t \le 1$ with $t-s \le \delta$ and $\a \in [ 0 , 1 ]$,
\begin{align}\label{eq:knconvex2-3'}
u (\gamma_{(1-\a)s + \a t})~\geq~
\sigma_{\kappa}^{(1-\a)} ( d ( \gamma_s , \gamma_t ) ) \cdot u(\gamma_s)
+
\sigma_{\kappa}^{(\a)} ( d ( \gamma_s , \gamma_t ) ) \cdot u(\gamma_t).
\end{align}
\item[(iv)] For each constant speed geodesic $\gamma$ on $X$ and $t \in [ 0, 1 ]$,
\begin{equation*}
u(\gamma_t)~\geq~(1-t)\cdot u(\gamma_0)+ t \cdot u(\gamma_1)
+ \kappa d ( \gamma_0 , \gamma_1 )^2 \int_0^1 g (t,r) u(\gamma_r) \mathrm{d} r
\end{equation*}
with $g (t,r)= \min\{(1-t)r,(1-r)t \}$ being the Green function on the interval $[0,1]$.
\end{itemize}
In particular, when $ - \infty \notin S (X)$ and $S$ is lower semi-continuous,
$S$ is strongly $(K,N)$-convex if and only if
one of these conditions holds for $u = U_N$ and $\kappa = K/N$.
\end{lemma}
\begin{proof}
For simplicity of presentation,
we denote $\theta = \theta_\gamma = d ( \gamma_0 , \gamma_1 )$
in this proof whenever a fixed geodesic is under consideration.
We also denote the reparametrized restriction of $\gamma$ to $[s,t]$
for $0 \le s < t \le 1$ by $\gamma^{[s,t]} : [ 0 ,1 ] \to X$,
that is, $\gamma^{[s,t]}_r := \gamma_{(1-r)s + rt}$.
(i) $\Rightarrow$ (iv):
Let us denote $u_* (s) := \int_0^1 g (s,r) u(\gamma_r) \,\mathrm{d} r$.
Since we have
\begin{equation*}
\int_0^1 \varphi'' (r) u_* (r) \,\mathrm{d} r = - \int_0^1 \varphi (r) u(\gamma_r) \,\mathrm{d} r
\end{equation*}
for any $\varphi \in C^\infty_0 ((0,1))$ with $\varphi \ge 0$,
(i) implies $( u ( \gamma_\cdot ) - \kappa \theta^2 u_* )'' \le 0$ on $[ 0 , 1 ]$
in the distributional sense.
Thus the distributional characterization of convex functions
(see \cite[Theorem~1.29]{S11}, for instance) yields that
$u ( \gamma_\cdot ) - \kappa \theta^2 u_*$ coincides
with a concave function a.e.~and hence concave
because $u$ is upper semi-continuous.
It immediately implies (iv) since $u_* (0) = u_* (1) = 0$.
(iv) $\Rightarrow$ (i):
Note first that $u ( \gamma_t )$ is continuous.
Indeed, the condition (iv) together with the upper semi-continuity of $u$
implies that
$u ( \gamma_t )$ is continuous at $t = 0 , 1$.
Thus the continuity follows
by applying the same for $\gamma^{[0,s]}$ and $\gamma^{[s,1]}$.
For $s \in ( 0 , 1 )$ and $h > 0$ with $s+h , s-h \in [ 0 , 1 ]$,
we apply (iv) to $\gamma^{[s-h,s+h]}$ and $t = 1/2$ to obtain
\begin{equation*}
\frac{u(\gamma_{s+h}) + u (\gamma_{s-h}) - 2 u ( \gamma_s )}{2}
\le
- 4 \kappa h^2 \theta^2
\int_0^1
g \left( \frac12 , r \right)
u ( \gamma_{ s + (2 r - 1 ) h } )
\,\mathrm{d} r .
\end{equation*}
Then (i) follows by multiplying both sides by $\varphi \in C_0^\infty ( ( 0 , 1 ) )$
with $\varphi \ge 0$, integrating w.r.t.~$s$ (for sufficiently small $h$),
dividing by $h^2$ and letting $h \to 0$ after a change of variables.
(i) $\Rightarrow$ (ii):
Take $\varepsilon > 0$ and $\varphi \in C_0^\infty ( ( 0 , 1 ) )$
with $\int_0^1 \varphi (x) \, \mathrm{d} x = 1$,
and let
\begin{equation*}
\tilde{u}_\varepsilon (t)
: =
\int_0^1
\varepsilon^{-1} \varphi ( \varepsilon^{-1} ( t - r ) ) u ( \gamma_r )
\,\mathrm{d} r\,.
\end{equation*}
Then (i) implies $\tilde{u}_\varepsilon'' (t) \le -\kappa \theta^2 \tilde{u}_\varepsilon (t)$
for each $t \in [ a_\varepsilon , 1 ]$ for some $a_\varepsilon > 0$.
Note that $a_\varepsilon$ can be chosen so that $\lim_{\varepsilon \to 0} a_\varepsilon = 0$.
Thus, in the same way as in Lemma~2.2,
we obtain
\begin{equation*}
\tilde{u}_\varepsilon ( (1-t) a_\varepsilon + t )
\ge
\sigma_{\kappa}^{(1-t)} ( \theta ) \tilde{u}_\varepsilon ( a_\varepsilon )
+
\sigma_{\kappa}^{(t)} ( \theta ) \tilde{u}_\varepsilon (1).
\end{equation*}
By virtue of the equivalence (i) $\Leftrightarrow$ (iv),
$u \circ \gamma$ is continuous and
hence $\tilde{u}_\varepsilon \to u \circ \gamma$ as $\varepsilon \to 0$
uniformly on $[ 0 , 1 ]$.
Thus the conclusion follows by letting $\varepsilon \to 0$.
(ii) $\Rightarrow$ (iii):
It follows by considering (ii) for $\gamma^{[s,t]}$.
(iii) $\Rightarrow$ (i):
We imitate the proof of the implication (iv) $\Rightarrow$ (i)
by using the following:
\begin{equation*}
\lim_{h \to 0} \frac{1}{h^2}
\left( \frac12 - \sigma_\kappa^{(1/2)} (2h\theta) \right)
=
- \frac14 \kappa \theta^2 .
\end{equation*}
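For completeness we note that, in the case $\kappa > 0$, this limit follows
directly from the definition of $\sigma_\kappa^{(t)}$ and the double angle formula:
\begin{equation*}
\sigma_\kappa^{(1/2)} ( 2 h \theta )
~=~
\frac{\sin ( \sqrt{\kappa} h \theta )}{\sin ( 2 \sqrt{\kappa} h \theta )}
~=~
\frac{1}{2 \cos ( \sqrt{\kappa} h \theta )}
~=~
\frac12 + \frac{\kappa \theta^2}{4} h^2 + O ( h^4 ) \, .
\end{equation*}
The case $\kappa < 0$ is analogous with hyperbolic functions, and for $\kappa = 0$
both sides of the limit vanish.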
\end{proof}
We conclude this section with some remarks about $(K,N)$-convexity.
The first property is immediate from the definition.
\begin{lemma}\label{lem:scale-convex}
If $S$
is $(K,N)$-convex, then for $\lambda>0$
the functional $\lambda\cdot S$ is $(\lambda K,\lambda N)$-convex.
\end{lemma}
\begin{lemma}\label{lem:convex-sum}
Let $S^1:X\to(-\infty,\infty]$ be a $(K_1,N_1)$-convex functional
and $S^2:X\to(-\infty,\infty]$ a strongly $(K_2,N_2)$-convex
functional.
Then the functional $S:=S^1 + S^2$ is $(K_1+K_2,N_1+N_2)$-convex.
In particular, $S$ is strongly $(K_1+K_2,N_1+N_2)$-convex if $S^1$
is strongly $(K_1 , N_1 )$-convex.
\end{lemma}
\begin{proof}
Let us set $K=K_1+K_2$ and $N=N_1+N_2$. Given $x_0 , x_1 \in
D(S)=D(S^1)\cap D(S^2)$, take a constant speed geodesic
$\gamma:[0,1]\to X$ from $x_0$ to $x_1$ according to the convexity
assumption on $S^1$. By the convexity assumption on $S^1$ and $S^2$
we have
\begin{align*}
\log U_N(\gamma_t)~&=~\frac{N_1}{N}\frac{(-1)}{N_1}S^1(\gamma_t) + \frac{N_2}{N}\frac{(-1)}{N_2}S^2(\gamma_t)\\
&\geq~\frac{N_1}{N}G_t \left(\frac{(-1)}{N_1}S^1(\gamma_0),\frac{(-1)}{N_1}S^1(\gamma_1),\frac{K_1}{N_1} d ( \gamma_0 , \gamma_1 )^2 \right)\\
&\quad +\frac{N_2}{N}G_t \left(\frac{(-1)}{N_2}S^2(\gamma_0),\frac{(-1)}{N_2}S^2(\gamma_1),\frac{K_2}{N_2} d ( \gamma_0 , \gamma_1 )^2 \right)\;,
\end{align*}
where the function $G_t$ is given by \eqref{eq:convexG}.
By Lemma~\ref{lem:convexG} below,
$G_t$ is convex. Hence we obtain
\begin{align*}
\log U_N(\gamma_t)~\geq~G_t\left(\frac{(-1)}{N}S(\gamma_0),\frac{(-1)}{N}S(\gamma_1),\frac{K}{N} d( \gamma_0 , \gamma_1 )^2\right)\;.
\end{align*}
Taking the exponential on both sides yields the claim. The last
assertion is obvious from the proof.
\end{proof}
\begin{lemma}\label{lem:convexG}
For any fixed $t\in[0,1]$ the function
$G_t :{\mathbb R}\times{\mathbb R}\times(-\infty,\pi^2 )\to{\mathbb R}$ given by
\begin{align}\label{eq:convexG}
G_t (x,y,\kappa)~=~\log\left[\sigma^{(1-t)}_\kappa(1)e^x + \sigma^{(t)}_\kappa(1)e^y\right]
\end{align}
is convex.
\end{lemma}
Note that we have $\sigma^{(s)}_\kappa (\theta) = \sigma^{(s)}_{\kappa \theta^2 } (1)$
for $s \in [ 0 , 1 ]$, $\theta \geq 0$ and $\kappa \in ( - \infty , \pi^2 / \theta^2 )$.
This identity is useful when applying this lemma.
\begin{proof}
We define the function
$g^{(t)}: \kappa\mapsto\log \sigma^{(t)}_\kappa(1)$
on $(-\infty, \pi^2 )$
and write
\begin{align*}
G_t (x,y,\kappa)~=~F\Big(x+g^{(1-t)}(\kappa),y+g^{(t)}(\kappa)\Big)\;,
\end{align*}
where $F(u,v)=\log\big(e^u+ e^v\big)$. The claim then follows by
noting that the function $F$ is convex and increasing in each of its
arguments, and that the functions $g^{(t)}$ are convex.
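For completeness, the convexity of $F$ can also be verified directly:
with $p = \mathrm{e}^u / ( \mathrm{e}^u + \mathrm{e}^v ) \in (0,1)$ the Hessian of $F$ is
\begin{align*}
D^2 F (u,v) ~=~ p ( 1 - p )
\begin{pmatrix}
1 & -1 \\ -1 & 1
\end{pmatrix}\;,
\end{align*}
which is positive semi-definite.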
\end{proof}
Finally we remark that the notion of $(K,N)$-convexity is consistent
in the parameters $K$ and $N$.
\begin{lemma}\label{lem:convexconsistent}
If $S$ is $(K,N)$-convex then it is also $(K',N')$-convex for all
$K'\leq K$ and $N'\geq N$. Moreover, it is $K$-convex in the sense
that for each pair $x_0,x_1\in D(S)$ there exists a constant speed
geodesic $\gamma:[0,1]\to X$ connecting $x_0$ to $x_1$ such that for
all $t\in[0,1]$:
\begin{align}\label{eq:kconvex}
S(\gamma_t)~\leq~(1-t)S(\gamma_0) + t S(\gamma_1) -\frac{K}{2}t(1-t)d(\gamma_0,\gamma_1)^2\;.
\end{align}
\end{lemma}
\begin{proof}
Consistency in $K$ is immediate from the fact that for any fixed $t$
and $\theta$ the coefficient $\sigkn{t}{\theta}$ is increasing in
$K$.
Consistency in $N$ is
a consequence e.g. of Lemma~\ref{lem:convex-sum}
and the trivial observation that for any $N'>N$
the constant functional $S^0 \equiv 0$ is $(0,N'-N)$-convex.
Using the consistency in $N$ we can derive \eqref{eq:kconvex} by
subtracting $1$ on both sides of \eqref{eq:defknconvex}, multiplying
with $N$ and passing to the limit $N\nearrow\infty$. Here we use the
fact that $\sigkn{t}{\theta}=t-K(t^3-t)\theta^2/(6N) + o(1/N)$ and
$U_N(x)=1-S(x)/N+o(1/N)$.
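More explicitly, inserting these expansions into \eqref{eq:defknconvex} and
using the identity $(1-t) - (1-t)^3 + t - t^3 = 3 t (1-t)$ yields
\begin{align*}
- S(\gamma_t) + o(1)~\geq~ - (1-t) S(\gamma_0) - t S(\gamma_1)
+ \frac{K}{2} t (1-t) d ( \gamma_0 , \gamma_1 )^2 + o(1)\;,
\end{align*}
which gives \eqref{eq:kconvex} in the limit $N\nearrow\infty$.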
\end{proof}
\subsection{Evolution Variational Inequalities in metric spaces}
\label{sec:metric}
In this section we study the Evolution Variational Inequality with
parameters $K$ and $N$ and the associated notion of gradient flow in a
purely metric setting. In particular, we investigate the relation with
geodesic convexity. Our approach extends the results obtained in
\cite{DS08,AGS11b} where the case $N=\infty$ has been considered.
\medskip
Let $(X, d)$ be a complete separable geodesic metric space and $S : X
\to ( - \infty , \infty ]$ a lower semi-continuous functional. Note
that our framework is slightly more restrictive than that in the last
section. We define the \emph{descending slope} of $S$ at $x\in D(S)$
as
\begin{align*}
\abs{\nabla^-S}(x)~:=~\limsup\limits_{y\to x}\frac{\big(S(x)-S(y)\big)_+}{d(x,y)}\;.
\end{align*}
For $x\notin D(S)$ we set $\abs{\nabla^-S}(x):=+\infty$. A curve
$\gamma:I\to X$ defined on an interval $I\subset{\mathbb R}$ is called
\emph{absolutely continuous} if
\begin{align}\label{eq:abscont}
d(\gamma_s,\gamma_t)~\leq~\int_s^tg(r)\mathrm{d} r\quad\forall s,t\in I\;,s\leq t\;,
\end{align}
for some $g\in L^1(I)$. For an absolutely continuous curve $\gamma$ the
\emph{metric speed}, defined by
\begin{align*}
\abs{\dot\gamma}(t)~:=~\lim\limits_{h\to0}\frac{d(\gamma_{t+h},\gamma_t)}{\abs{h}}\;,
\end{align*}
exists for a.e. $t\in I$ and is the minimal $g$ in \eqref{eq:abscont}
(see e.g. \cite[Thm.~1.1.2]{AGS08}). The following is a classical
notion of gradient flow in a metric space, see e.g. \cite{AGS08}.
\begin{definition}[Gradient flow]\label{def:metric-gf}
We say that a locally absolutely continuous curve $x:[0,\infty)\to
X$ with $x_0\in D(S)$ is a (downward) gradient flow of $S$ starting in $x_0$ if
the \emph{Energy Dissipation Equality} holds:
\begin{align}\label{eq:EDE}
S(x_{s})~=~S(x_{t})
+ \frac12\int_s^t\Big(\abs{\dot x_r}^2+\abs{\nabla^-S}(x_r)^2\Big)\mathrm{d} r\quad\forall 0\leq s\leq t\;.
\end{align}
\end{definition}
We introduce here a more restrictive notion of gradient flow based on
the Evolution Variational Inequality.
\begin{definition}[${\evi_{K,N}}$ gradient flow]\label{def:KNflow}
Let $K\in {\mathbb R}$, $
N\in(0,\infty)$ and let $x:(0,\infty)\to D(S)$ be a
locally absolutely continuous curve. We say that $(x_t)$ is an
${\evi_{K,N}}$ \emph{gradient flow} of $S$ starting in $x_0$ if
$\lim_{t\to0}x_t=x_0$ and if for all $z\in D(S)$ the \emph{Evolution
Variational Inequality}
\begin{align}\label{eq:knflow}
\frac{\mathrm{d}}{\mathrm{d}t}\skn{\frac12 d(x_t,z)}^2 + K\cdot\skn{\frac12 d(x_t,z)}^2~\leq~\frac{N}{2}\left(1-\frac{U_{N}(z)}{U_{N}(x_t)}\right)
\end{align}
holds for a.e. $t>0$.
\end{definition}
\begin{lemma}\label{lem:consistentKN}
If $(x_t)_t$ is an ${\evi_{K,N}}$ flow for $S$, then it is also an $\evi_{K',N'}$ flow
for $S$ for any $K'\leq K$ and $N'\geq N$. Moreover, $(x_t)$ is an
$\evi_K$ flow for $S$, i.e. for all $z\in D(S)$ and a.e. $t>0$:
\begin{align}\label{eq:kflow}
\frac12\frac{\mathrm{d}}{\mathrm{d}t} d(x_t,z)^2 + \frac{K}{2}d(x_t,z)^2~\leq~S(z) - S(x_t)\;.
\end{align}
\end{lemma}
\begin{proof}
Using the identity \eqref{eq:sc-trick}
one checks that \eqref{eq:knflow} is equivalent to either of the
following inequalities:
\begin{align}\label{eq:evialt1}
\frac12 \frac{\mathrm{d}}{\mathrm{d}t} d(x_t,z)^2 ~&\leq~\frac{Nd}{\skn{d}}\left[\ckn{d} -
\frac{U_{N}(z)}{U_{N}(x_t)}\right]\\\label{eq:evialt2}
\frac12 \frac{\mathrm{d}}{\mathrm{d}t} d(x_t,z)^2~&\leq~\frac{d}{\skn{d}}N\left[1-\frac{U_N(z)}{U_N(x_t)}\right] -
2Kd\frac{\skn{\frac12 d}^2}{\skn{d}}\;,
\end{align}
where we set $d=d(x_t,z)$. Consistency in $K$ can be seen from
\eqref{eq:evialt1} by noting that for any $\theta\geq0$ both
$\skn{\theta}$ and $\ckn{\theta}/\skn{\theta}$ are decreasing
in $K$. Consistency in $N$ follows from \eqref{eq:evialt2} and the
fact that for any $v\in{\mathbb R}$ and $\theta\geq0$ both
\begin{align*}
N\left[1-\exp\Big(-\frac{1}{N}v\Big)\right]\frac{1}{\skn{\theta}}\quad\text{ and }\quad -K\cdot\frac{\skn{\frac12\theta}^2}{\skn{\theta}}
\end{align*}
are increasing in $N$. \eqref{eq:kflow} follows immediately from
\eqref{eq:evialt2} by passing to the limit as $N\to\infty$. For
this we note that
\begin{align*}
\lim\limits_{N\to\infty}\skn{d}=d\;,\quad\lim\limits_{N\to\infty}N\left[1-\frac{U_N(z)}{U_N(x_t)}\right]=S(z)-S(x_t)\;.
\end{align*}
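Here the second limit is immediate from $U_N = \mathrm{e}^{-S/N}$, since
\begin{align*}
N\left[1-\frac{U_N(z)}{U_N(x_t)}\right]~=~N\left[1-\exp\Big(-\frac{1}{N}\big(S(z)-S(x_t)\big)\Big)\right]~\longrightarrow~S(z)-S(x_t)
\end{align*}
as $N\to\infty$.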
\end{proof}
\begin{remark}
This shows consistency with the theory of $\evi_K$ gradient flows of
geodesically $K$-convex functions. It can be thought of as the
limiting case $N=\infty$. By taking the limit $N\to\infty$ in the
estimates obtained in this section we recover the corresponding
results for $\evi_K$ flows established in \cite{DS08,AGS11b}.
\end{remark}
We summarize here some properties of ${\evi_{K,N}}$ gradient flows. To this
end we set for $\kappa\in{\mathbb R}$ and $t\geq0$:
\begin{align*}
e_\kappa(t)~=~\int_0^t\mathrm{e}^{\kappa s}\mathrm{d} s\;.
\end{align*}
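Note for later use the elementary identities
\begin{align*}
e_\kappa(t)~=~\frac{\mathrm{e}^{\kappa t}-1}{\kappa}\quad(\kappa\neq0)\;,\qquad
e_0(t)~=~t\;,\qquad
\mathrm{e}^{-\kappa t}\, e_\kappa(t)~=~e_{-\kappa}(t)\;.
\end{align*}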
\begin{proposition}\label{prop:evi-properties}
Let $(x_t)$ be an ${\evi_{K,N}}$ gradient flow of $S$ starting in
$x_0$. Then the following statements hold:
\begin{itemize}
\item[(i)] If $x_0\in D(S)$ then $(x_t)$ is also a metric gradient
flow in the sense of Definition~\ref{def:metric-gf}. In
particular, the map $t\mapsto S(x_t)$ is non-increasing.
\item[(ii)] We have the uniform regularization bound
\begin{align}\label{eq:regularization}
\frac{U_N(z)}{U_N(x_t)}~\leq~1 + \frac{2}{N e_K(t)}\skn{\frac12
d(x_0,z)}^2
\end{align}
\item[(iii)] If $S$ is bounded below we have the uniform continuity
estimate
\begin{align}\label{eq:uniform-continuity}
\skn{\frac12d(x_{t_1},x_{t_0})}^2~\leq~\frac{N}{2}e_{-K}(t_1-t_0)\left[1-\frac{U_N(x_{t_0})}{U_{N}^{\max}}\right]\;.
\end{align}
\end{itemize}
\end{proposition}
\begin{proof}
By Lemma~\ref{lem:consistentKN} $(x_t)$ is an $\evi_K$ flow of $S$
and hence a metric gradient flow by \cite[Prop.~3.9]{AG12}.
\eqref{eq:regularization} follows immediately from
\eqref{eq:evi-int} in Proposition~\ref{prop:evi-equiv} below by
taking $t_0=0$. The uniform continuity estimate
\eqref{eq:uniform-continuity} is obtained similarly by taking
$z=x_{t_0}$.
\end{proof}
The following result collects several equivalent reformulations of the
definition of ${\evi_{K,N}}$ gradient flows which will be useful in the
sequel. For this we say that a subset $D\subset D(S)$ is \emph{dense
in energy}, if for any $z\in D(S)$ there exists a sequence
$(z_n)\subset D$ such that $d(z_n,z)\to0$ and $S(z_n)\to S(z)$ as
$n\to\infty$. For a function $f:I\to{\mathbb R}$ on some interval $I$ we use
the notation
\begin{align*}
\frac{\mathrm{d}^+}{\mathrm{d}t} f(t)~=~\limsup\limits_{h\searrow0}\frac{f(t+h)-f(t)}{h}
\end{align*}
to denote the right derivative.
\begin{proposition}\label{prop:evi-equiv}
Let $D\subset D(S)$ be dense in energy and let $x:(0,\infty)\to
D(S)$ be a locally absolutely continuous curve with
$\lim_{t\to0}x_t=x_0$. Then $(x_t)$ is an ${\evi_{K,N}}$ gradient flow of
$S$ if and only if one of the following statements holds:
\begin{itemize}
\item[(i)] The differential inequality \eqref{eq:knflow} holds for
all $z\in D$ and a.e. $t>0$.
\item[(ii)] For all $z\in D$ and all $0\leq t_0\leq t_1$:
\begin{align}\label{eq:evi-int}
e_K(t_1-t_0)\frac{N}{2}\left(1-\frac{U_N(z)}{U_N(x_{t_1})}\right)~\geq~\mathrm{e}^{K(t_1-t_0)}\skn{\frac12 d(x_{t_1},z)}^2-\skn{\frac12 d(x_{t_0},z)}^2\;.
\end{align}
\item[(iii)] For all $z\in D$ and \emph{all} $t>0$:
\begin{align}\label{eq:knflowallt}
\frac{\mathrm{d}^+}{\mathrm{d}t}\skn{\frac12 d(x_t,z)}^2 + K\cdot \skn{\frac12
d(x_t,z)}^2~\leq~\frac{N}{2}\left(1-\frac{U_{N}(z)}{U_{N}(x_t)}\right)
\end{align}
\end{itemize}
\end{proposition}
\begin{proof}
We prove the equivalence of Definition~\ref{def:KNflow} and
(ii). Assume that $(x_t)$ is an ${\evi_{K,N}}$ flow and note that the
left hand side of \eqref{eq:knflow} can be rewritten as
\begin{align*}
\mathrm{e}^{-Kt}\frac{\mathrm{d}}{\mathrm{d}t}\left[\mathrm{e}^{Kt}\skn{\frac12 d(x_t,z)}^2\right]\;,
\end{align*}
since $\mathrm{e}^{-Kt}\frac{\mathrm{d}}{\mathrm{d}t}\big[\mathrm{e}^{Kt}f(t)\big]=f'(t)+Kf(t)$ for any differentiable $f$.
Integrating from $t_0$ to $t_1$ and using that the map $t\mapsto
U_N(x_t)$ is non-decreasing by (i) of
Proposition~\ref{prop:evi-properties} then yields \eqref{eq:evi-int}
for all $z\in D(S)$. Conversely, differentiating \eqref{eq:evi-int}
yields \eqref{eq:knflow}. The fact that \eqref{eq:evi-int} holds for
all $z\in D(S)$ if and only if it holds for all $z\in D$ is
obvious. Similar arguments show the equivalence of
Definition~\ref{def:KNflow} with (i) and (iii).
\end{proof}
An important property of ${\evi_{K,N}}$ flows is the following expansion
bound.
\begin{theorem}\label{thm:contraction}
Let $(x_t),(y_t)$ be two ${\evi_{K,N}}$ gradient flows of $S$ starting
from $x_0$ resp. $y_0$. Then for all $s,t\geq0$:
\begin{align}\label{eq:contraction}
\skn{\frac12 d(x_t,y_s)}^2~\leq~\mathrm{e}^{-K(s+t)}\skn{\frac12 d(x_0,y_0)}^2
+ \frac{N}{K}\Big(1-\mathrm{e}^{-K(s+t)}\Big)\frac{\big(\sqrt{t}-\sqrt{s}\big)^2}{2(s+t)}\;.
\end{align}
\end{theorem}
\begin{proof}
Let us fix $s,t>0$. Choose $\lambda,r>0$ such that $\lambda r=t$ and
$\lambda^{-1}r=s$, i.e. $\lambda=\sqrt{\frac{t}{s}}$ and
$r=\sqrt{ts}$. From \eqref{eq:evi-int} applied to $(x_t)$ with
$z=y_{\lambda^{-1}r}$ and $t_0=\lambda r,t_1=\lambda(r+\varepsilon)$ for
some $\varepsilon>0$ we obtain
\begin{align}\label{eq:contraction1}
\frac{N}{2}\frac{U_N(y_{\lambda^{-1}r})}{U_N(x_{\lambda(r+\varepsilon)})}~\leq~\frac{N}{2}&-\frac{1}{e_{-K}(\lambda\varepsilon)}\skn{\frac12 d(x_{\lambda(r+\varepsilon)},y_{\lambda^{-1}r})}^2\\\nonumber
&+\frac{1}{e_{K}(\lambda\varepsilon)}\skn{\frac12 d(x_{\lambda r},y_{\lambda^{-1}r})}^2\;.
\end{align}
Similarly, choosing $z=x_{\lambda(r+\varepsilon)}$ and
$t_0=\lambda^{-1}r,t_1=\lambda^{-1}(r+\varepsilon)$ and applying
\eqref{eq:evi-int} to $(y_s)$ we obtain
\begin{align}\label{eq:contraction2}
\frac{N}{2}\frac{U_N(x_{\lambda(r+\varepsilon)})}{U_N(y_{\lambda^{-1}(r+\varepsilon)})}~\leq~\frac{N}{2}&-\frac{1}{e_{-K}(\lambda^{-1}\varepsilon)}\skn{\frac12 d(y_{\lambda^{-1}(r+\varepsilon)},x_{\lambda(r+\varepsilon)})}^2\\\nonumber
&+\frac{1}{e_{K}(\lambda^{-1}\varepsilon)}\skn{\frac12 d(y_{\lambda^{-1}r},x_{\lambda(r+\varepsilon)})}^2\;.
\end{align}
Multiplying \eqref{eq:contraction1} and \eqref{eq:contraction2}
after taking square roots and using Young's inequality,
$2\sqrt{ab}\leq\lambda a + \lambda^{-1} b$, we deduce the estimate
\begin{align}\label{eq:contraction3}
&N\sqrt{\frac{U_N(y_{\lambda^{-1}r})}{U_N(y_{\lambda^{-1}(r+\varepsilon)})}}~\leq~\frac{N}{2}(\lambda^{-1}+\lambda)\\\nonumber
&+ \skn{\frac12 d(y_{\lambda^{-1}r},x_{\lambda(r+\varepsilon)})}^2\left[\frac{\lambda^{-1}}{e_{K}(\lambda^{-1}\varepsilon)}-\frac{\lambda}{e_{-K}(\lambda\varepsilon)}\right]\\\nonumber
& +\skn{\frac12 d(x_{\lambda r},y_{\lambda^{-1}r})}^2\left[\frac{\lambda}{e_{K}(\lambda\varepsilon)}-\frac{\lambda^{-1}}{e_{-K}(\lambda^{-1}\varepsilon)}\right]\\\nonumber
&-\frac{\lambda^{-1}\varepsilon}{e_{-K}(\lambda^{-1}\varepsilon)}\frac{1}{\varepsilon}\left[\skn{\frac12 d(y_{\lambda^{-1}(r+\varepsilon)},x_{\lambda(r+\varepsilon)})}^2-\skn{\frac12 d(x_{\lambda r},y_{\lambda^{-1}r})}^2\right]\;.
\end{align}
Note that as $\varepsilon\to0$ we have
\begin{align*}
\frac{e_{-K}(\lambda^{-1}\varepsilon)}{\lambda^{-1}\varepsilon}\to1\quad \text{and}\quad \left[\frac{\lambda^{-1}}{e_{K}(\lambda^{-1}\varepsilon)}-\frac{\lambda}{e_{-K}(\lambda\varepsilon)}\right]\to-\frac{K}{2}(\lambda+\lambda^{-1})\;.
\end{align*}
Hence, if we consider the function $g:{\mathbb R}_+\to{\mathbb R}$ given by
\begin{align*}
g(\tau)~=~\frac{2}{N}\skn{\frac12 d(x_{\lambda \tau},y_{\lambda^{-1}\tau})}^2
\end{align*}
and take the limit as $\varepsilon\searrow0$ in \eqref{eq:contraction3} we
obtain
\begin{align*}
\left.\frac{\mathrm{d}^+}{\mathrm{d}\tau}\right\vert_{\tau=r} g(\tau)~\leq~-K(\lambda+\lambda^{-1})g(r) + (\lambda+\lambda^{-1}-2)\;.
\end{align*}
By an application of Gronwall's lemma we deduce that
\begin{align*}
g(r)~\leq~e^{-K(\lambda+\lambda^{-1})r}\Big[g(0) + \frac{\lambda+\lambda^{-1}-2}{(\lambda+\lambda^{-1})}e_K\big((\lambda+\lambda^{-1})r\big)\Big]\;.
\end{align*}
Rewriting $r,\lambda$ in terms of $s,t$ finally yields
\eqref{eq:contraction}.
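More explicitly, since $\lambda=\sqrt{t/s}$ and $r=\sqrt{ts}$ we have
$(\lambda+\lambda^{-1})r=s+t$ as well as
\begin{align*}
\frac{\lambda+\lambda^{-1}-2}{\lambda+\lambda^{-1}}~=~\frac{\big(\sqrt{t}-\sqrt{s}\big)^2}{s+t}\;,\qquad
\mathrm{e}^{-K(s+t)}\,e_K(s+t)~=~\frac{1-\mathrm{e}^{-K(s+t)}}{K}\;,
\end{align*}
so that multiplying the last estimate by $N/2$ gives \eqref{eq:contraction}.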
\end{proof}
\begin{remark}\label{rem:contraction-infinitesimal}
In the limit $d(x_0,y_0)\to0$ and $s\to t$ the contraction estimate
\eqref{eq:contraction} reads asymptotically as follows:
\begin{align}\label{eq:contraction-infinitesimal}
d(x_t,y_s)^2~\leq~e^{-2Kt}d(x_0,y_0)^2 + \frac{N}{K}\frac{1-e^{-2Kt}}{4t^2}\cdot\abs{s-t}^2
+ o\big(d(x_0,y_0)^2+\abs{t-s}^2\big)\;.
\end{align}
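Indeed, this follows from \eqref{eq:contraction} multiplied by $4$, using that
$\skn{x}^2=x^2+O(x^4)$ as $x\to0$ and that
\begin{align*}
\frac{\big(\sqrt{t}-\sqrt{s}\big)^2}{2(s+t)}~=~\frac{(t-s)^2}{16\,t^2}\,\big(1+o(1)\big)
\end{align*}
as $s\to t$.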
\end{remark}
\begin{corollary}\label{cor:contraction}
For each $x_0\in\overline{D(S)}$ there exists at most one ${\evi_{K,N}}$
gradient flow of $S$ starting from $x_0$. The maps $P_t:x_0\mapsto x_t$,
where $(x_t)$ is the unique gradient flow starting from $x_0$,
constitute a continuous semigroup defined on a closed (possibly
empty) subset of $\overline{D(S)}$.
\end{corollary}
The previous expansion estimate in Theorem~\ref{thm:contraction}
implies a slightly weaker estimate directly for the distance $d$ not
involving the functions $\mathfrak{s}_{K/N}$. More precisely, we have
the following:
\begin{proposition}\label{prop:simp-control}
The expansion bound \eqref{eq:contraction} implies the following
bound: For each $x_0 , y_0 \in X$ and $s , t \ge 0$, $x_t := P_t
x_0$ and $y_s : = P_s y_0$ satisfy
\begin{align*}
d ( x_t , y_s )^2
&~\le~
\mathrm{e}^{-K \tau(s,t)} d ( x_0 , y_0 )^2 + 2N
\frac{ 1 - \mathrm{e}^{-K \tau(s,t)}}{ K \tau(s,t) }
\big( \sqrt{t} - \sqrt{s} \big)^2 \; ,
\end{align*}
where $\tau (s,t) = 2( t + \sqrt{ts} + s )/3$.
In particular, setting
$t=s$ yields the following estimate:
\begin{align}\label{eq:WC0} d ( x_t , y_t ) & \le
\mathrm{e}^{-Kt} d ( x_0 , y_0 ).
\end{align}
\end{proposition}
\begin{proof}
For $0 < s' < t'$, let $\Phi : [ 0, 1 ] \to [ s' , t' ]$ be given by
$ \Phi (r) := ( \sqrt{s'} + ( \sqrt{t'} - \sqrt{s'} ) r )^2 $. Let
$( \gamma_u )_{u \in [0,1]}$ be a constant speed geodesic. By
\eqref{eq:contraction}, there exists $C_1 > 0$ such that
\begin{equation} \label{eq:small}
d ( P_{r} \gamma_{u} , P_{r'} \gamma_{u'} )
\le
C_1 \left( | u - u' | + | \sqrt{r} - \sqrt{r'} | \right)
\end{equation}
when $| u - u' |$ and
$| \sqrt{r} - \sqrt{r'} |$ are sufficiently small.
By the triangle inequality and the convexity of $z \mapsto z^2$ on ${\mathbb R}$,
for $k \in {\mathbb N}$,
\begin{align*}
d ( P_{t'}\gamma_1 , P_{s'} \gamma_0 )^2
\le
\sum_{j=1}^{k}
d \big(
P_{\Phi ((j-1)/k)} \gamma_{(j-1)/k} ,
P_{\Phi (j/k)} \gamma_{j/k}
\big)^2 k.
\end{align*}
By virtue of \eqref{eq:small}, we have
\begin{align*}
\lim_{k \to \infty} &
\sum_{j=1}^{k}
d \big(
P_{\Phi ((j-1)/k)} \gamma_{(j-1)/k} ,
P_{\Phi (j/k)} \gamma_{j/k}
\big)^2 k
\\
& \le
4 \lim_{k \to \infty}
\sum_{j=1}^{k}
\skn{ \frac12
d \big(
P_{\Phi ((j-1)/k)} \gamma_{(j-1)/k} ,
P_{\Phi (j/k)} \gamma_{j/k}
\big)
}^2 k
\\
& \le
4 \lim_{k \to \infty} \Bigg[
\sum_{j=1}^k \mathrm{e}^{-K ( \Phi (j/k) + \Phi ((j-1)/k) )}
\skn{
\frac12
d ( \gamma_{(j-1)/k} , \gamma_{j/k} )
}^2 k
\\
& \hspace{6em} +
\frac{N}{2} \sum_{j=1}^k
\frac{ 1 - \mathrm{e}^{-K( \Phi (j/k) + \Phi ( (j-1)/k ) )} }
{
K ( \Phi (j/k) + \Phi ( ( j-1 )/k ) )
}
\frac{ \big( \sqrt{ t' } - \sqrt{ s' } \big)^2 }{k}
\Bigg]
\\
& =
\int_0^1 \mathrm{e}^{-2 K \Phi (r)} \,\mathrm{d} r \; d ( \gamma_0 , \gamma_1 )^2
+ 2N
\int_0^1 \frac{ 1 - \mathrm{e}^{-2 K \Phi (r)}}{ 2 K \Phi (r)} \,\mathrm{d} r \;
( \sqrt{t'} - \sqrt{s'} )^2 .
\end{align*}
Let $\lambda \ge 1$ and $\tau , h > 0$, and apply the resulting estimate with
$s' = \lambda^{-1} h$,
$t' = \lambda h$ and
a constant speed geodesic $( \gamma_u )$ from $\gamma_0 := P_{\lambda^{-1} \tau} y_0$ to $\gamma_1 : = P_{\lambda \tau} x_0$.
Then the last inequality implies
\begin{align*}
\frac{\mathrm{d}^+}{\mathrm{d}\tau}
d ( x_{\lambda \tau} , y_{\lambda^{-1} \tau} )^2
\le
- \frac{2K}{3} ( \lambda + \lambda^{-1} + 1 )
d ( x_{\lambda \tau} , y_{\lambda^{-1} \tau} )^2
+
2N
( \sqrt{\lambda} - \sqrt{ \lambda^{-1} } )^2 .
\end{align*}
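Here we have used that expanding the square in the definition of $\Phi$ gives
\begin{align*}
\int_0^1 \Phi (r) \,\mathrm{d} r~=~\frac{t' + \sqrt{s' t'} + s'}{3}\;,
\end{align*}
which is where the quantity $\tau(s,t)$ originates.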
Thus the conclusion follows from this estimate as in the proof of
Theorem~\ref{thm:contraction}.
\end{proof}
We now investigate the relation between the Evolution Variational
Inequality and geodesic convexity of the functional $S$.
\begin{theorem}\label{thm:eviconvex}
Assume that for every starting point $x_0\in \overline{D(S)}$ the
${\evi_{K,N}}$ flow for $S$ exists. Then $S$ is strongly $(K,N)$-convex.
\end{theorem}
\begin{proof}
Let $P$ denote the ${\evi_{K,N}}$ gradient flow semigroup of $S$. We treat
the case $K\neq0$ first. So let $(\gamma_s)_{s\in[0,1]}$ be a
constant speed geodesic. Let us fix $s\in[0,1], t>0$ and set
$\gamma_s^t:=P_t\gamma_s$. We can assume that
$d:=d(\gamma_0,\gamma_1)\neq0$. Using the identity \eqref{eq:sc-trick}
we see that \eqref{eq:evi-int} can be rewritten as
\begin{align}\label{eq:evi-int-alt}
\frac{U_N(z)}{U_N(P_{t_1}x)}e_K(t_1-t_0)~\leq~\frac{1}{K}\Big[\mathrm{e}^{K(t_1-t_0)}\ckn{d(P_{t_1}x,z)}
-\ckn{d(P_{t_0}x,z)}\Big]\;.
\end{align}
Using \eqref{eq:evi-int-alt} with $t_0=0,t_1=t,x=\gamma_s$ and
$z=\gamma_0$ respectively $z=\gamma_1$ we immediately obtain
\begin{align*}
\sigkn{1-s}{d}\cdot U_N&(\gamma_0) + \sigkn{s}{d}\cdot U_N(\gamma_1)\\
~\leq~\frac{U_N(\gamma^t_s)}{K\cdot e_K(t)}\Big[&\sigkn{1-s}{d}\cdot\Big(\mathrm{e}^{Kt}\ckn{d(\gamma_s^t,\gamma_0)}-\ckn{d(\gamma_s,\gamma_0)}\Big)\\
+&\sigkn{s}{d}\cdot\Big(\mathrm{e}^{Kt}\ckn{d(\gamma_s^t,\gamma_1)}-\ckn{d(\gamma_s,\gamma_1)}\Big)\Big]\;.
\end{align*}
Let $A$ denote the term in square brackets in the last
inequality. The claim follows if we show that for $t$ small enough
we have $A\leq K\cdot e_K(t)=\mathrm{e}^{Kt}-1$ if $K>0$ and $A\geq \mathrm{e}^{Kt}-1$
if $K<0$. Using the fact that $d(\gamma_s,\gamma_{s'})=\abs{s-s'}d$,
we first find
\begin{align*}
A~=&~\frac{\mathrm{e}^{Kt}}{\skn{d}}\Big[\skn{(1-s)d}\cdot\ckn{d(\gamma_s^t,\gamma_0)}+\skn{sd}\cdot\ckn{d(\gamma_s^t,\gamma_1)}\Big]\\
&-\frac{1}{\skn{d}}\Big[\skn{(1-s)d}\cdot\ckn{sd}+\skn{sd}\cdot\ckn{(1-s)d}\Big]\\
:=&~ A_1+A_2 \;.
\end{align*}
By the angle sum identity for $\sin$ (resp. $\sinh$) we have
$A_2=-1$. To see that $A_1\leq \mathrm{e}^{Kt}$ (resp. $A_1\geq \mathrm{e}^{Kt}$), we
observe the following fact, which is easily verified using the
angle sum identities for trigonometric or hyperbolic functions: If
$\alpha,\alpha'\geq 0$ and
$\varepsilon,\varepsilon'\in[-\frac{\pi}{2},\frac{\pi}{2}]$ such that
$\varepsilon+\varepsilon'\geq0$, then, putting $\beta=\alpha+\varepsilon,
\beta'=\alpha'+\varepsilon'$, we have that
\begin{align*}
\sin(\a)\cos(\beta') + \cos(\beta)\sin(\a')~&\leq~\sin(\a+\a')\;,\\
\sinh(\a)\cosh(\beta') +
\cosh(\beta)\sinh(\a')~&\geq~\sinh(\a+\a')\;.
\end{align*}
To conclude, we apply this with $\alpha=(1-s)d$, $\alpha'=sd$ and
$\varepsilon=d(\gamma_s^t,\gamma_1)-(1-s)d$,
$\varepsilon'=d(\gamma_s^t,\gamma_0)-sd$ and note that $\varepsilon+\varepsilon'\geq0$ by
the triangle inequality.
Finally, we treat the case $K=0$. By Lemma~\ref{lem:consistentKN} $P$
is an $\evi_{K',N}$ flow for every $K'<0$. Thus by the previous argument
\eqref{eq:defknconvex} holds with $K'$ instead of $K$ and we can pass
to the limit as $K'\nearrow0$.
\end{proof}
\section{Entropic and Riemannian curvature-dimension conditions}\label{sec:cds}
\subsection{The entropic curvature-dimension condition}
\label{sec:cdkn}
In this section we introduce a new curvature-dimension condition for
metric measure spaces based on $(K,N)$-convexity of the entropy on the
Wasserstein space.
Let $(X,d,m)$ be a metric measure space, i.e. $(X,d)$ is a complete
and separable metric space and $m$ is a locally finite,
$\sigma$-finite Borel measure on $X$. We denote by $\mathscr{P}_2(X,d)$ the
$L^2$-Wasserstein space over $(X,d)$, i.e. the set of all Borel
probability measures $\mu$ satisfying
\begin{align*}
\int d(x_0,x)^2\mu(\mathrm{d} x)~<\infty
\end{align*}
for some, hence any, $x_0\in X$. The subspace of all measures
absolutely continuous w.r.t. $m$ is denoted by $\mathscr{P}_2(X,d,m)$. The
$L^2$-Wasserstein distance between $\mu_0,\mu_1\in\mathscr{P}_2(X,d)$ is
defined by
\begin{align*}
W_2(\mu_0,\mu_1)^2~=~\inf \int d(x,y)^2\mathrm{d} q(x,y)\;,
\end{align*}
where the infimum is taken over all Borel probability measures $q$ on
$X\times X$ with marginals $\mu_0$ and $\mu_1$. Let us denote by
$\geo(X)=\{\gamma:[0,1]\to X\ |\ \gamma \text{ const. speed
geodesic}\}$ the space of constant speed geodesics in $X$ equipped
with the topology of uniform convergence. For any $t\in[0,1]$ we
denote by $e_t:\geo(X)\to X$ the evaluation map $\gamma\mapsto
\gamma_t$. Recall that a \emph{dynamic optimal coupling} between
$\mu_0,\mu_1\in\mathscr{P}_2(X,d)$ is a probability measure
$\pi\in\mathscr{P}(\geo(X))$ such that $(e_0,e_1)_\#\pi$ is an optimal
coupling of $\mu_0,\mu_1$. The curve $(\mu_t)_{t\in[0,1]}$ with
$\mu_t=(e_t)_\#\pi$ is then a geodesic in $\mathscr{P}_2(X,d)$ connecting
$\mu_0$ to $\mu_1$. Moreover, by \cite[Lem.~I.2.11]{S06}, for each
geodesic $\Gamma:[0,1] \to \mathscr{P}_2(X,d)$, there exists a probability
measure $\pi$ on $\geo(X)$ such that $\Gamma_t = (e_t)_\#\pi$ for all
$t\in[0,1]$.
Given a measure $\mu\in\mathscr{P}_2(X,d)$ we define its relative entropy by
\begin{align*}
\ent(\mu)~:=~\int \rho\log\rho\mathrm{d} m\;,
\end{align*}
if $\mu=\rho m$ is absolutely continuous w.r.t. $m$ and
$(\rho\log\rho)_+$ is integrable. Otherwise we set
$\ent(\mu)=+\infty$. The subset of probability measures with finite
entropy will be denoted by $\mathscr{P}_2^*(X,d,m)$. Moreover, for a number
$N\in(0,\infty)$ we introduce the functional
$U_N:\mathscr{P}_2(X,d)\to[0,\infty]$ by
\begin{align*}
U_N(\mu)~:=~\exp\left(-\frac{1}{N}\ent(\mu)\right)\;.
\end{align*}
\begin{definition}\label{def:ecdkn}
Given two numbers $K\in{\mathbb R}$, $N\in(0,\infty)$ we say that a metric
measure space $(X,d,m)$ satisfies the \emph{entropic
curvature-dimension condition} ${\cd^e(K,N)}$ if and only if for each
pair $\mu_0,\mu_1\in\mathscr{P}^*_2(X,d,m)$ there exists a constant speed
geodesic $(\mu_t)_{t\in[0,1]}$ in $\mathscr{P}_2^*(X,d,m)$ connecting
$\mu_0$ to $\mu_1$ such that for all $t\in[0,1]$:
\begin{align}\label{eq:ecdkn}
U_{N}(\mu_t)~\geq~\sigkn{1-t}{W_2(\mu_0,\mu_1)} U_{N}(\mu_0) +
\sigkn{t}{W_2(\mu_0,\mu_1)} U_{N}(\mu_1)\;.
\end{align}
If \eqref{eq:ecdkn} holds for any constant speed geodesic
$(\mu_t)_{t\in[0,1]}$ in $\mathscr{P}^*_2(X,d,m)$ we say that $(X,d,m)$ is a
\emph{strong} ${\cd^e(K,N)}$ space.
\end{definition}
In other words, the ${\cd^e(K,N)}$-condition means that the entropy is
$(K,N)$-convex along Wasserstein geodesics. As an immediate consequence
of Lemma~\ref{lem:convexconsistent} we obtain the following
consistency result.
\begin{lemma}\label{lem:consistent}
If $(X,d,m)$ satisfies the ${\cd^e(K,N)}$ condition, then it also
satisfies $\cd^e(K',N')$ for any $K'\leq K$ and $N'\geq
N$. Moreover, it satisfies the $\cd(K,\infty)$ condition.
\end{lemma}
Using similar arguments as in the case of the $\cd(K,\infty)$
condition introduced in \cite{S06} it is immediate to check that
${\cd^e(K,N)}$ is invariant under isomorphisms of metric measure
spaces. Moreover, adapting \cite[Thm.~I.4.20]{S06}, one can check that
it is stable under convergence of metric measure spaces in the
transportation distance ${\mathbb D}$, also introduced in \cite{S06}.
As an application of the additivity of $(K,N)$-convexity we note
the following
\begin{proposition}[Weighted spaces]\label{prop:weighted-cde}
Let $(X,d,m)$ be a metric measure space satisfying ${\cd^e(K,N)}$ and let
$V:X\to{\mathbb R}$ be a measurable function bounded from below that is
strongly $(K',N')$-convex in the sense of
Definition~\ref{def:knconvex}. Then $(X,d,\mathrm{e}^{-V}m)$ satisfies
$\cd^e(K+K',N+N')$. In particular, if $(X,d,m)$ satisfies strong
${\cd^e(K,N)}$, then $(X,d,\mathrm{e}^{-V}m)$ also satisfies strong
$\cd^e(K+K',N+N')$.
\end{proposition}
\begin{proof}
We will first show that the functional $\overline
V:\mathscr{P}_2(X,d)\to(-\infty,\infty]$ defined by $\overline V (\mu)=\int
V\mathrm{d} \mu$ is strongly $(K',N')$-convex on $\mathscr{P}_2(X,d)$. Let $\pi \in
\mathscr{P} ( \geo (X) )$ be a dynamic optimal coupling and set $\mu_t
= (e_t)_\# \pi$. From the $(K',N')$-convexity of $V$ we have for
any $\gamma\in\geo(X)$ and $t\in[0,1]$:
\begin{align}\label{eq:weighted-cde}
\mathrm{e}^{-V(\gamma_t)/N'}~\geq~ \sigma^{(1-t)}_{K'/N'}\big(d(\gamma_0,\gamma_1)\big)\cdot\mathrm{e}^{-V(\gamma_0)/N'} + \sigma^{(t)}_{K'/N'}\big(d(\gamma_0,\gamma_1)\big)\cdot\mathrm{e}^{-V(\gamma_1)/N'}\;.
\end{align}
Take the logarithm on both sides of \eqref{eq:weighted-cde}. By
virtue of Lemma \ref{lem:convexG}, we can use Jensen's inequality
when integrating it w.r.t. $\pi$ to obtain
\begin{align*}
-\frac{1}{N'}\overline V(\mu_t)~=~-\frac{1}{N'}\int V(\gamma_t)\mathrm{d}\pi(\gamma)~&\geq~\int G_t \Big(-\frac{1}{N'} V(\gamma_0),-\frac{1}{N'} V(\gamma_1), \frac{K'}{N'}d(\gamma_0,\gamma_1)^2\Big)\mathrm{d} \pi(\gamma)\\
&\geq G_t \Big(-\frac{1}{N'} \overline V(\mu_0),-\frac{1}{N'} \overline V(\mu_1),\frac{K'}{N'} W_2(\mu_0,\mu_1)^2\Big)\;.
\end{align*}
Taking the exponential again then yields the claim. By the lower
boundedness of $V$ we have
$\mathscr{P}_2(X,d,\mathrm{e}^{-V}m)\subset\mathscr{P}_2(X,d,m)$. Now the assertion of the
proposition is a consequence of the observation
\begin{align*}
\ent(\mu\vert\mathrm{e}^{-V}m)~=~\ent(\mu\vert m) + \overline V(
\mu)
\end{align*}
and Lemma~\ref{lem:convex-sum}. The latter assertion is obvious from
the proof.
\end{proof}
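The entropy identity used at the end of the proof can be checked directly. Here is a short computation, assuming $\mu=\rho m$ is absolutely continuous with finite entropy:

```latex
% Writing \mu = \rho m = \tilde\rho\,(\mathrm{e}^{-V}m) with
% \tilde\rho = \rho\,\mathrm{e}^{V}, the definition of relative entropy gives
\begin{align*}
\ent(\mu\vert\mathrm{e}^{-V}m)
~=~\int \log\big(\rho\,\mathrm{e}^{V}\big)\,\mathrm{d}\mu
~=~\int \log\rho\,\mathrm{d}\mu + \int V\,\mathrm{d}\mu
~=~\ent(\mu\vert m) + \overline V(\mu)\;.
\end{align*}
```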
We will now derive some first geometric consequences of the entropic
curvature-dimension condition.
\begin{proposition}[Generalized Brunn--Minkowski inequality]\label{prop:BMI}
Assume that $(X,d,m)$ satisfies the condition ${\cd^e(K,N)}$ with
$N\geq1$. Then for all measurable sets $A_0,A_1\subset X$ with
$m(A_0),m(A_1)>0$ and all $t\in[0,1]$ we have
\begin{align}\label{eq:BMI}
\bar{m}(A_t)^{1/N}~\geq~\sigkn{1-t}{\Theta}\cdot m(A_0)^{1/N} + \sigkn{t}{\Theta}\cdot m(A_1)^{1/N}\;,
\end{align}
where $\bar{m}$ is the completion of $m$, $A_t$ denotes the set of
$t$-midpoints and $\Theta$ the minimal/maximal distance between
points in $A_0$ and $A_1$, i.e.
\begin{align*}
A_t~&=~\{\gamma_t : \gamma:[0,1]\to X \text{ geodesic s.t. }\gamma_0\in A_0, \gamma_1\in A_1\}\;,\\
\Theta~&=~
\begin{cases}
\inf_{x_0\in A_0,x_1\in A_1}d(x_0,x_1)\;, & K\geq 0\;,\\
\sup_{x_0\in A_0,x_1\in A_1}d(x_0,x_1)\;, & K< 0\;.
\end{cases}
\end{align*}
\end{proposition}
\begin{proof}
We first prove the assertion under the assumption that
$m(A_0),m(A_1)<\infty$, the general case then follows by
approximating the sets $A_0,A_1$ by sets of finite volume. Applying
the condition $\cd^e(K,N)$ to $\mu_i=m(A_i)^{-1}{{\bf 1}}_{A_i}m$ for $i=0,1$
yields
\begin{align}\label{eq:BMI1}
U_{N}(\mu_t)~\geq~\sigkn{1-t}{W_2(\mu_0,\mu_1)}\cdot m(A_0)^{1/N} + \sigkn{t}{W_2(\mu_0,\mu_1)}\cdot m(A_1)^{1/N}\;,
\end{align}
where $\mu_t=\rho_tm$ is the $t$-midpoint of a geodesic connecting
$\mu_0$ and $\mu_1$. Since $\mu_t$ is concentrated on $A_t$, which
is a Souslin set, a double application of Jensen's inequality gives
that
\begin{align*}
U_{N}(\mu_t)~&=~\exp\Big(-\frac{1}{N}\int\log\rho_t\mathrm{d}\mu_t\Big)~\leq~\int\rho_t^{-1/N}\mathrm{d}\mu_t\\
~&=~\int\limits_{A_t}\rho_t^{1-1/N}\mathrm{d} \bar{m}~\leq~\bar{m}(A_t)^{1/N}\;.
\end{align*}
Hence \eqref{eq:BMI} follows by noting that
$\theta\mapsto\sigkn{t}{\theta}$ is increasing if $K\geq0$ and
decreasing if $K<0$ and that $W_2(\mu_0,\mu_1)\geq\Theta$
(resp. $\leq\Theta$).
\end{proof}
The Brunn--Minkowski inequality entails further geometric consequences
like a Bishop--Gromov type volume growth estimate and a generalized
Bonnet--Myers theorem. The following results can be deduced from
Proposition~\ref{prop:BMI} using similar arguments as in \cite{S06}
and replacing the coefficients $\tau^{(t)}_{K/N}(\cdot)$ by
$\sigkn{t}{\cdot}$.
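The coefficients $\sigkn{t}{\cdot}$ and $\skn{\cdot}$ are defined earlier in the paper; for the reader's orientation we recall their standard form (cf. \cite{BS10,S06}), which is what the computations in this section use:

```latex
% Standard distortion coefficients (cf. \cite{BS10}): for $\kappa\in{\mathbb R}$
% and $\theta\geq0$,
\begin{align*}
\sigma^{(t)}_{\kappa}(\theta)~=~
\begin{cases}
\frac{\sin(t\theta\sqrt{\kappa})}{\sin(\theta\sqrt{\kappa})}\;, & 0<\kappa\theta^2<\pi^2\;,\\
t\;, & \kappa\theta^2=0\;,\\
\frac{\sinh(t\theta\sqrt{-\kappa})}{\sinh(\theta\sqrt{-\kappa})}\;, & \kappa<0\;,\\
+\infty\;, & \kappa\theta^2\geq\pi^2\;.
\end{cases}
\end{align*}
% Similarly, $\skn{r}$ denotes the comparison function solving
% $u''+\frac{K}{N}u=0$ with $u(0)=0$, $u'(0)=1$; for $K>0$ it equals
% $\sqrt{N/K}\,\sin\big(r\sqrt{K/N}\big)$ and vanishes first at
% $r=\pi\sqrt{N/K}$, which is the origin of the Bonnet--Myers bound below.
```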
\begin{remark}\label{rem:nonsharp}
The estimates presented below are not sharp, but they provide in
particular the local compactness results needed later. We will see below
that under the assumption that $(X,d,m)$ is non-branching the
${\cd^e(K,N)}$ condition is equivalent to the ${\cd^*(K,N)}$ condition. It has
been proven by Cavalletti \& Sturm \cite{CS12} that under the same
assumption ${\cd^*(K,N)}$ implies the measure contraction property
$\text{MCP}(K,N)$, from which sharp Bishop--Gromov and Lichnerowicz
inequalities can be derived, see \cite{S06}.
\end{remark}
To state the volume growth estimate we introduce the following
notation. Given a metric measure space $(X,d,m)$ and a point
$x_0\in\supp[m]$ we denote by
\begin{align*}
v(r)~:=~m(\overline{B_r(x_0)})
\end{align*}
the volume of the closed ball of radius $r$ around $x_0$. Moreover, we
set
\begin{align*}
s(r)~:=~\limsup\limits_{\delta\to0}\frac{1}{\delta}m(\overline{B_{r+\delta}(x_0)}\setminus B_{r}(x_0))
\end{align*}
for the volume of the corresponding sphere.
\begin{proposition}[Generalized Bishop--Gromov inequality]\label{prop:BGI}
Assume that $(X,d,m)$ satisfies the condition ${\cd^e(K,N)}$ with
$N\geq1$. Then each bounded closed set $M\subset\supp[m]$ is compact
and has finite volume. More precisely, for each $x_0\in\supp[m]$ and
$0<r<R\leq\pi\sqrt{N/(K\vee0)}$,
\begin{align}\label{eq:BGI}
\frac{s(r)}{s(R)}~\geq~\left(\frac{\skn{r}}{\skn{R}}\right)^{N}
\quad \text{and} \quad
\frac{v(r)}{v(R)}~\geq~\frac{\int_0^r\skn{t}^{N}\mathrm{d} t}{\int_0^R\skn{t}^{N}\mathrm{d} t}\;.
\end{align}
\end{proposition}
\begin{corollary}[Generalized Bonnet--Myers theorem]\label{cor:BMT}
If $(X,d,m)$ satisfies the condition ${\cd^e(K,N)}$ with $K>0$ and $N\geq
1$, then the support of $m$ is compact and its diameter $L$ can be
bounded as $L\leq\pi\sqrt{N/K}$.
\end{corollary}
\begin{remark}\label{rem:exp-int}
${\cd^e(K,N)}$ or $\cd (K, \infty)$ yields that $\mathscr{P}_2 (X,d)$ is a length
space and hence so is $( \supp m , d )$ \cite[Rem.~I.4.6(iii),
Prop.~2.11(iii)]{S06}. Thus, by the local compactness ensured in
Proposition~\ref{prop:BGI}, if $(X,d,m)$ is a ${\cd^e(K,N)}$ space then
$(\supp m , d)$ and hence $\mathscr{P}_2 ( \supp m, d )$ is a geodesic space
(see e.g.~\cite[Thm.~2.5.23]{BBI01}). In addition, the volume
growth estimate \eqref{eq:BGI} implies in particular that for any
$x_0\in X$ and $c>0$:
\begin{align}\label{eq:exp-int}
\int_X\mathrm{e}^{-cd(x_0,x)^2}\mathrm{d} m(x)~<~\infty\;.
\end{align}
It is well known that the latter implies that $\ent$ does not take
the value $-\infty$ on $\mathscr{P}_2(X,d)$ and is lower semi-continuous
w.r.t. $W_2$ (see e.g.~\cite[Sec.~7]{AGS11a}). Thus, when $\supp m
= X$, Definition~\ref{def:ecdkn} fits well into the setting of
Section~\ref{sec:metric}, where we assumed these additional
regularity properties.
\end{remark}
It turns out that under mild assumptions the modified
curvature-dimension condition ${\cd^e(K,N)}$ is equivalent to the reduced
curvature-dimension condition ${\cd^*(K,N)}$ introduced in \cite{BS10}. We
recall here the definition. Denote by $\mathscr{P}_\infty(X,d,m)$ the set of
measures in $\mathscr{P}_2(X,d,m)$ with bounded support.
\begin{definition}\label{def:reduced-cdkn}
We say that a metric measure space $(X,d,m)$ satisfies the
\emph{reduced curvature-dimension condition} ${\cd^*(K,N)}$ if and only if
for each pair $\mu_0=\rho_0 m,\mu_1=\rho_1 m\in\mathscr{P}_\infty(X,d,m)$
there exists an optimal coupling $q$ of them and a geodesic
$(\mu_t)_{t\in[0,1]}$ in $\mathscr{P}_\infty(X,d,m)$ connecting them such
that for all $t\in[0,1]$ and $N'\geq N$:
\begin{align}\label{eq:cdskn}
\int\rho_t^{-\frac{1}{N'}}\mathrm{d}\mu_t~\geq~\int\limits_{X\times X}\Big[&\sigknp{1-t}{d(x_0,x_1)}\rho_0(x_0)^{-\frac{1}{N'}}\\\nonumber
&+ \sigknp{t}{d(x_0,x_1)}\rho_1(x_1)^{-\frac{1}{N'}}\Big]\mathrm{d} q(x_0,x_1) \;.
\end{align}
If \eqref{eq:cdskn} holds for any geodesic $(\mu_t)_{t\in[0,1]}$ in
$\mathscr{P}_\infty(X,d,m)$ we say that $(X,d,m)$ is a \emph{strong}
${\cd^*(K,N)}$ space.
\end{definition}
The assumption we need to prove equivalence of the different
curvature-dimension conditions is the following weak form of
non-branching.
\begin{definition}\label{def:ess-non-branch}
We say that a metric measure space $(X,d,m)$ is \emph{essentially
non-branching} if any dynamic optimal coupling
$\pi\in\mathscr{P}(\geo(X))$ between two absolutely continuous measures is
supported in a set of non-branching geodesics, i.e. there exists
$A\subset\geo(X)$ such that $\pi(A)=1$ and for all $\gamma,\tilde
\gamma\in A$:
\begin{align*}
\gamma_t=\tilde\gamma_t\quad \forall t\in[0,\varepsilon] \text{ for some } \varepsilon>0\ \Rightarrow\ \gamma=\tilde\gamma\;.
\end{align*}
\end{definition}
This condition has been introduced in \cite{RS12} and it has been
shown that strong $\cd(K,\infty)$ spaces are essentially
non-branching. It has also been noted there that the essential
non-branching condition is equivalent to the following apparently
stronger condition: Every dynamic optimal coupling $\pi$ between
absolutely continuous measures is concentrated on a set of geodesics
that do not meet at intermediate times, i.e. there is $A'\subset
\geo(X)$ such that $\pi(A')=1$ and for all $\gamma,\tilde\gamma\in
A'$:
\begin{align*}
\gamma_t=\tilde\gamma_t\quad \text{ for some } t\in(0,1)\
\Rightarrow\ \gamma=\tilde\gamma\;.
\end{align*}
Indeed, assuming the existence of a dynamic optimal coupling where
such crossings happen with positive probability, one can reshuffle the
geodesics before and after the crossing to produce a dynamic optimal
coupling of the same marginals where branching happens with positive
probability, contradicting the essentially non-branching assumption.
An immediate consequence of this observation is the following adaptation
of \cite[Lem.~2.8]{BS10}.
\begin{lemma}\label{lem:non-branch}
Let $(X,d,m)$ be an essentially non-branching metric measure space
and let $\pi$ be a dynamic optimal coupling. Assume that
$\pi=\sum_{k=1}^n\a_k\pi^k$ for suitable $\a_k>0$ and dynamic
optimal couplings $\pi^k$. For given $t\in(0,1)$ and $i\in\{0,t\}$
we set $\mu_i^k=(e_i)_\#\pi^k$. If the family $\{\mu^k_0\}_k$ is
mutually singular, then also the family $\{\mu_t^k\}_k$ is mutually
singular.
\end{lemma}
\begin{theorem}\label{thm:equiv_as}
Let $(X,d,m)$ be an essentially non-branching metric measure
space. Then the following assertions are equivalent:
\begin{itemize}
\item[(i)] $(X,d,m)$ satisfies ${\cd^*(K,N)}$,
\item[(ii)] For each pair $\mu_0,\mu_1\in\mathscr{P}_\infty(X,d,m)$
there is a
dynamic optimal coupling $\pi$ of them such that we have
$( e_t )_\# \pi \ll m$ and
\begin{align}\label{eq:cdkn-as}
\rho_t(\gamma_t)^{-\frac{1}{N}}~\geq~&\sigkn{1-t}{d(\gamma_0,\gamma_1)}\rho_0(\gamma_0)^{-\frac{1}{N}}
+ \sigkn{t}{d(\gamma_0,\gamma_1)}\rho_1(\gamma_1)^{-\frac{1}{N}}\;,
\end{align}
for $\pi$-a.e. $\gamma\in\geo(X)$, where $\rho_t$ denotes the density of
$(e_t)_\# \pi$ w.r.t.~$m$.
\item[(iii)] $(X,d,m)$ satisfies ${\cd^e(K,N)}$.
\end{itemize}
\end{theorem}
\begin{proof}
The equivalence of (i) and (ii) has already been proven in
\cite[Prop.~2.8]{BS10} under the assumption that $X$ is
non-branching. Note that the statement (ii) is slightly different
there but equivalent, since under the non-branching assumption
$m^2$-a.e. pair of points is connected by a unique geodesic. Under
the weaker essential non-branching condition the equivalence of (i)
and (ii) follows by repeating almost verbatim the proof of
\cite[Prop.~2.8]{BS10} substituting \cite[Lem.~2.6]{BS10}
with Lemma \ref{lem:non-branch}. For details on the necessary
modifications see also the implication (iii)$\Rightarrow$(ii) below
which follows a similar argument.

(ii)$\Rightarrow$(iii): First note that by an approximation argument
as in \cite[Lem.~2.11]{BS10} one can show that \eqref{eq:cdkn-as}
also holds for $\mu_0,\mu_1\in\mathscr{P}_2(X,d,m)$ not necessarily with
bounded support. Now fix $\mu_0,\mu_1\in\mathscr{P}_2(X,d,m)$ and a dynamic optimal
coupling $\pi$ of them satisfying \eqref{eq:cdkn-as}. Taking logarithms on both sides of \eqref{eq:cdkn-as} we obtain
\begin{align}\label{eq:equiv-as1}
-\frac{1}{N}\log\rho_t(\gamma_t)~\geq~G_t\Big(-\frac{1}{N}\log\rho_0(\gamma_0),-\frac{1}{N}\log\rho_1(\gamma_1),\frac{K}{N} d(\gamma_0,\gamma_1)^2\Big)\;,
\end{align}
where the function $G_t$ is given by
\eqref{eq:convexG}.
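The precise statement of \eqref{eq:convexG} appears earlier in the paper; for orientation, its form can be reconstructed from the exponentiation step just performed. Since $\sigkn{t}{\theta}$ depends on $\theta$ only through $\kappa=\frac{K}{N}\theta^2$, taking logarithms in \eqref{eq:cdkn-as} forces

```latex
% With $x=-\frac1N\log\rho_0(\gamma_0)$, $y=-\frac1N\log\rho_1(\gamma_1)$
% and $\kappa=\frac{K}{N}d(\gamma_0,\gamma_1)^2$, the inequality
% \eqref{eq:cdkn-as} reads
% $\mathrm{e}^{-\frac1N\log\rho_t(\gamma_t)}\geq\sigma^{(1-t)}_{\kappa}(1)\,\mathrm{e}^{x}+\sigma^{(t)}_{\kappa}(1)\,\mathrm{e}^{y}$,
% so that
\begin{align*}
G_t(x,y,\kappa)~=~\log\Big(\sigma^{(1-t)}_{\kappa}(1)\,\mathrm{e}^{x}+\sigma^{(t)}_{\kappa}(1)\,\mathrm{e}^{y}\Big)\;.
\end{align*}
```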
Integrating \eqref{eq:equiv-as1} w.r.t. $\pi$ and using
Jensen's inequality with the aid of Lemma~\ref{lem:convexG} we obtain
\begin{align*}
-\frac{1}{N}\ent\big(\mu_t\big)~\geq~G_t\Big(-\frac{1}{N}\ent\big(\mu_0\big),-\frac{1}{N}\ent\big(\mu_1\big), \frac{K}{N} W_2(\mu_0,\mu_1)^2\Big)\;.
\end{align*}
Hence \eqref{eq:ecdkn} follows by taking the exponential on both sides.

(iii)$\Rightarrow$(ii): Here we follow closely the arguments in the
proof of \cite[Prop.~II.4.2]{S06}. Fix $\mu_0,\mu_1\in\mathscr{P}_\infty(X,d,m)$ and
a dynamic optimal coupling $\pi$ of them. Let $\{M_n\}_{n\in{\mathbb N}}$ be a
$\cap$-stable generator of the Borel $\sigma$-field of $X$ with
$m(\partial M_n)=0$ for all $n$. For each $n$ consider the disjoint
covering of $X$ given by the $2^n$ sets $L_1=M_1\cap\dots\cap M_n$,
$L_2=M_1\cap\dots\cap M_n^c$, $\dots$, $L_{2^n}=M_1^c\cap\dots\cap
M_n^c$. For fixed $n$ and $i,j=1,\dots,2^n$ we define sets
$A_{i,j}=\{\gamma\in\geo(X)\ :\ (\gamma_0,\gamma_1)\in L_i\times L_j\}$ and probability measures
$\mu_0^{i,j},\mu_1^{i,j}$ by
\begin{align*}
\mu_0^{i,j}(B)=\a_{i,j}^{-1}\pi\big(\{\gamma_0\in B\cap L_i, \gamma_1\in L_j\}\big)\;,\quad \mu_1^{i,j}(B)=\a_{i,j}^{-1}\pi\big(\{\gamma_0\in L_i,\gamma_1\in B\cap L_j\}\big)\;,
\end{align*}
provided that $\a_{i,j}=\pi(A_{i,j})>0$. By (iii) we can choose
dynamic optimal couplings $\pi^{i,j}$ of them such that
\begin{align}\label{eq:equiv-as2}
U_N(\mu^{i,j}_t)~\geq~&\sigkn{1-t}{W_2(\mu^{i,j}_0,\mu^{i,j}_1)}\cdot U_N(\mu^{i,j}_0)
+ \sigkn{t}{W_2(\mu^{i,j}_0,\mu^{i,j}_1)}\cdot U_N(\mu^{i,j}_1)\;,
\end{align}
where $\mu_t^{i,j}=(e_t)_\# \pi^{i,j}$. Define
\begin{align*}
\pi^{(n)}:=\sum\limits_{i,j=1}^{2^n}\a_{i,j}\pi^{i,j}\;,\qquad \mu_t^{(n)}=(e_t)_\# \pi^{(n)}\;.
\end{align*}
Then $\pi^{(n)}$ is a dynamic optimal coupling of the measures
$\mu_0,\mu_1$ and $(\mu^{(n)}_t)_{t\in[0,1]}$ is a geodesic between
them. Since the measures $\mu_0^{i,j}\otimes\mu_1^{i,j}$ are
mutually singular and $X$ is essentially non-branching, also the
measures $\mu_t^{i,j}$ are mutually singular for each fixed $t$ by
Lemma~\ref{lem:non-branch}. We conclude that
$\rho_t^{(n)}(\gamma_t)=\a_{i,j}\rho_t^{i,j}(\gamma_t)$ on the
set $A_{i,j}$. Plugging this into \eqref{eq:equiv-as2} and taking
logarithms on both sides we find
\begin{align}\label{eq:equiv-as3}
&-\frac{\a_{i,j}^{-1}}{N}\int\limits_{A_{i,j}}\log\rho^{(n)}_t(\gamma_t)\mathrm{d} \pi^{(n)}\\\nonumber
&\geq~G_t \Big(-\frac{\a_{i,j}^{-1}}{N}\int\limits_{A_{i,j}}\log\rho_0(\gamma_0)\mathrm{d} \pi^{(n)},-\frac{\a_{i,j}^{-1}}{N}\int\limits_{A_{i,j}}\log\rho_1(\gamma_1)\mathrm{d} \pi^{(n)},\a_{i,j}^{-1}\frac{K}{N} \int\limits_{A_{i,j}} d^2(\gamma_0,\gamma_1)\mathrm{d} \pi^{(n)}\Big)\;
\end{align}
Since $\mu_0,\mu_1$ have bounded support, all geodesics in the
support of the measures $\pi^{(n)}$ stay within a single closed
bounded set $B$. By Proposition~\ref{prop:BGI} $B$ is compact and
has finite mass. Hence also the measures $\pi^{(n)}$ are supported
in a single compact set and thus converge weakly, up to extraction
of a subsequence, to a dynamic optimal coupling $\tilde \pi$ of
$\mu_0$ and $\mu_1$. Since $m(\partial M_i)=0$ for all $i$ we deduce
that
\begin{equation*}
\pi\big(\{\gamma_{0}\in M_{i}, \gamma_{1}\in M_{j}\}\big)~=~\lim\limits_{n\to\infty}\pi^{(n)}\big(\{\gamma_{0}\in M_{i}, \gamma_{1}\in M_{j}\}\big)~=~\tilde \pi\big(\{\gamma_{0}\in M_{i}, \gamma_{1}\in M_{j}\}\big)
\end{equation*}
for each $i,j$ and hence $(e_0 , e_1 )_\# \pi = ( e_0 , e_1 )_\#
\tilde{\pi}$. In particular $\tilde{\pi}$ is a dynamic optimal
coupling of $\mu_0$ and $\mu_1$. By weak lower semi-continuity of
the entropy we can pass to the limit as $n\to\infty$ in the left
hand side of \eqref{eq:equiv-as3}. Invoking furthermore the
convexity of $G_t$ given by Lemma~\ref{lem:convexG} and Jensen's
inequality we see that
\begin{align}\label{eq:equiv-as4}
&-\frac{\a^{-1}}{N}\int\limits_{A}\log\rho_t(\gamma_t)\mathrm{d} \tilde{\pi}\\\nonumber
&\geq~G_t\Big(-\frac{\a^{-1}}{N}\int\limits_{A}\log\rho_0(\gamma_0)\mathrm{d} \tilde{\pi},-\frac{\a^{-1}}{N}\int\limits_{A}\log\rho_1(\gamma_1)\mathrm{d} \tilde{\pi},\a^{-1}\frac{K}{N}\int\limits_{A} d^2(\gamma_0,\gamma_1)\mathrm{d} \tilde{\pi}\Big)\;,
\end{align}
for any set $A$ which is a union of a finite number of the sets
$A_{i,j}$ and $\a=\tilde{\pi}(A)$. This implies the
$\tilde{\pi}$-a.s. inequality \eqref{eq:cdkn-as}.
\end{proof}
\begin{corollary}\label{cor:strong-cde-cds}
For a metric measure space $(X,d,m)$ the following assertions are equivalent:
\begin{itemize}
\item[(i)] $(X,d,m)$ is a strong ${\cd^*(K,N)}$ space,
\item[(ii)] For each pair $\mu_0,\mu_1\in\mathscr{P}_\infty(X,d,m)$, and
each dynamic optimal coupling $\pi$ of it \eqref{eq:cdkn-as}
holds,
\item[(iii)] $(X,d,m)$ is a strong ${\cd^e(K,N)}$ space.
\end{itemize}
\end{corollary}
\begin{proof}
Note that both (i) and (iii) imply that $(X,d,m)$ satisfies the
strong $\cd(K,\infty)$ condition. By \cite[Thm.~1.1]{RS12},
every strong $\cd(K,\infty)$ space is essentially non-branching. In
addition, \cite[Cor.~1.4]{RS12} also states that on strong $\cd
(K,\infty)$ spaces the dynamic optimal coupling of $\mu_0$ and
$\mu_1$ is unique for each $\mu_0 , \mu_1 \in \mathscr{P}_2 (X,d,m)$. Hence
the assertion follows from the same arguments as
Theorem~\ref{thm:equiv_as}. Indeed, the dynamic optimal coupling
$\tilde{\pi}$ obtained in the proof of Theorem~\ref{thm:equiv_as}
(iii)$\Rightarrow$(ii) coincides with $\pi$. Note that the
essentially non-branching assumption is not used in the implications
(ii)$\Rightarrow$(i),(iii).
\end{proof}
We conclude this section with a globalization property of the strong
entropic curvature-dimension condition. We say that a metric measure
space $(X,d,m)$ satisfies the \emph{local} entropic
curvature-dimension condition $\cd^e_{\text{loc}}(K,N)$ if and only if
every point $x\in\supp m$ has a neighborhood $M$ such that for each
pair $\mu_0,\mu_1\in\mathscr{P}^*_2(X,d,m)$ supported in $M$ there exists a
geodesic $(\mu_t)_{t\in[0,1]}$ in $\mathscr{P}^*_2(X,d,m)$ satisfying
\eqref{eq:ecdkn}. Similarly, we say that $(X,d,m)$ is a \emph{strong}
$\cd^e_{\text{loc}}(K,N)$ space if in addition \eqref{eq:ecdkn} holds
along \emph{every} constant speed geodesic $(\mu_t)_{t\in[0,1]}$ in
$\mathscr{P}^*_2(X,d,m)$ with $\mu_0,\mu_1$ supported in $M$. Note that
$(X,d,m)$ is essentially non-branching if it is a
$\cd^e_{\text{loc}}(K,N)$ space. Indeed, the argument in \cite{RS12}
can be localized, and hence the local condition is already sufficient.
\begin{theorem}[Local-global]\label{thm:cde-locglob}
Let $(X,d,m)$ be a geodesic metric measure space. Then it satisfies
the strong ${\cd^e(K,N)}$ condition if and only if it satisfies the strong
$\cd^e_{\text{loc}}(K,N)$ condition.
\end{theorem}
\begin{proof}
The only if part is obvious. For the if part, assume that $(X,d,m)$
is a strong $\cd^e_{\text{loc}}(K,N)$ space. First note that this
implies that $X$ is locally compact. Indeed, this can be seen by
estimating the volume growth of balls in a small neighborhood around
any point similarly as in Proposition~\ref{prop:BGI}. $(X,d)$ being
a length space, local compactness implies that bounded closed sets in $X$
are compact, see \cite[Prop.~2.5.22]{BBI01}.
Now we first verify the $\cd^e(K,N)$ inequality \eqref{eq:ecdkn} for
a geodesic $(\mu_t)_{t\in[0,1]}$ in $\mathscr{P}_2^*(X,d,m)$ where the
measures $\mu_t$ are jointly supported in a compact set $Y$. By
compactness and the strong $\cd^e_{\text{loc}}(K,N)$ condition we
can find $\varepsilon>0$ and a disjoint partition $(Y_i)_i$ of $Y$ such
that the $\varepsilon$-neighborhoods $U_i$ of $Y_i$ have the following
property: any geodesic $(\mu_t)_{t\in[0,1]}$ in $\mathscr{P}_2^*(X,d,m)$
with $\mu_0,\mu_1$ supported in $U_i$ satisfies
\eqref{eq:ecdkn}. Write $\mu_t=(e_t)_\#\pi$, where
$\pi\in\mathscr{P}(\geo(X))$ is the associated dynamic optimal
coupling. Then there exists $L>0$ such that $d(\gamma_0,\gamma_1)\leq L$
for all $\gamma$ in the support of $\pi$. We claim that for any
$0\leq r\leq t\leq s\leq 1$ with $\abs{s-r}<\varepsilon/L$:
\begin{align}\label{eq:ecdkn-loc-time1}
U_{N}(\mu_t)~\geq~\sigkn{\frac{s-t}{s-r}}{W_2(\mu_r,\mu_s)} U_{N}(\mu_r) +
\sigkn{\frac{t-r}{s-r}}{W_2(\mu_r,\mu_s)} U_{N}(\mu_s)\;,
\end{align}
which suffices to show \eqref{eq:ecdkn} by virtue of
Lemma~\ref{lem:knconvexint2}. Indeed, let us define the sets
$A_{i}=\{\gamma\in\geo(X)\ :\ \gamma_{t}\in Y_i\}$ and define the
measures
\begin{align*}
\pi_{i}~:=~\a_{i}^{-1}\pi|_{A_{i}}\;,
\end{align*}
provided that $\a_{i}:=\pi(A_{i})>0$. Then for
$\pi_i$-a.e. geodesic $\gamma$ and $\tau\in[r,s]$ one has $\gamma_\tau\in
U_i$. Setting $\mu^i_\tau=(e_\tau)_\#\pi_i$ we infer that the geodesic
$(\mu^i_\tau)_{\tau\in[r,s]}$ is supported in $U_i$. From the construction
of $U_i$ we obtain for $\tau\in[r,s]$:
\begin{align}\label{eq:ecdkn-loc-time2}
U_{N}(\mu^i_\tau)~\geq~\sigkn{\frac{s-\tau}{s-r}}{W_2(\mu^i_r,\mu^i_s)} U_{N}(\mu^i_r) +
\sigkn{\frac{\tau-r}{s-r}}{W_2(\mu^i_r,\mu^i_s)} U_{N}(\mu^i_s)\;.
\end{align}
Note that $\mu_\tau=\sum_i\a_i\mu^i_\tau$.
Hence we have that (see e.g. \cite[Rem.~I.4.2]{S06})
\begin{align}\label{eq:ent-subadd}
\ent(\mu_\tau)\geq\sum_i\a_i\ent(\mu^i_\tau) +
\sum_i\a_i\log\a_i\;.
\end{align}
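A short verification of \eqref{eq:ent-subadd}, assuming the measures $\mu^i_\tau=\rho^i_\tau m$ are absolutely continuous, so that $\mu_\tau$ has density $\rho_\tau=\sum_i\a_i\rho^i_\tau$:

```latex
% Since $\rho_\tau \geq \a_i\rho^i_\tau$ pointwise, monotonicity of the
% logarithm yields
\begin{align*}
\ent(\mu_\tau)~=~\sum_i\a_i\int\rho^i_\tau\log\rho_\tau\,\mathrm{d} m
~\geq~\sum_i\a_i\int\rho^i_\tau\log\big(\a_i\rho^i_\tau\big)\,\mathrm{d} m
~=~\sum_i\a_i\ent(\mu^i_\tau)+\sum_i\a_i\log\a_i\;,
\end{align*}
% with equality when the $\mu^i_\tau$ are mutually singular, since then
% $\rho_\tau=\a_i\rho^i_\tau$ holds $\mu^i_\tau$-a.e. for each $i$.
```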
For $\tau=t$ we have equality in \eqref{eq:ent-subadd} since the
family $(\mu_t^i)_i$ is mutually singular by construction. Taking
logarithms in \eqref{eq:ecdkn-loc-time2} and summing over $i$ we
obtain
\begin{eqnarray*}
\lefteqn{ -\frac1N \ent(\mu_t)\
= \ -\frac1N \sum_i\a_i\Big[\ent(\mu_t^i)+\log\a_i\Big]}\\
&\geq&\sum_i\a_i G_{\frac{t-r}{s-r}}\left(-\frac1N \Big[\ent(\mu_r^i)+\log\a_i\Big],-\frac1N \Big[\ent(\mu_s^i)+\log\a_i\Big],\frac{K}{N}W_2^2(\mu^i_r,\mu^i_s)\right)\\
&\geq&G_{\frac{t-r}{s-r}}\left(-\frac1N \sum_i\a_i\Big[\ent(\mu_r^i)+\log\a_i\Big],-\frac1N \sum_i\a_i\Big[\ent(\mu_s^i)+\log\a_i\Big],\frac{K}{N}\sum_i\a_iW_2^2(\mu^i_r,\mu^i_s)\right)\\
&\geq&G_{\frac{t-r}{s-r}}\left(-\frac1N \ent(\mu_r),-\frac1N \ent(\mu_s),\frac{K}{N}W_2^2(\mu_r,\mu_s)\right)\;,
\end{eqnarray*}
where we have used \eqref{eq:ent-subadd} as well as the convexity of
$G_{\frac{t-r}{s-r}} (x,y, \kappa)$ given by Lemma~\ref{lem:convexG}
and its monotonicity in $x,y$. Taking the exponential yields
\eqref{eq:ecdkn-loc-time1}.

Finally, we establish the ${\cd^e(K,N)}$ inequality \eqref{eq:ecdkn} for
an arbitrary, not necessarily compactly supported geodesic
$(\mu_t)_{t\in[0,1]}$ in $\mathscr{P}_2^*(X,d,m)$. Partition $X$ into a
disjoint collection of precompact sets $K_i$ and let $\pi_{i,j}$ be
dynamic optimal couplings obtained by conditioning the coupling
$\pi$ associated to $(\mu_t)_t$ to have starting point in $K_i$ and
endpoint in $K_j$. By the previous argument any compactly supported
geodesic satisfies \eqref{eq:ecdkn}. Since $\cd^e_{\text{loc}}(K,N)$
implies that $(X,d,m)$ is essentially non-branching, the measures
$(e_t)_\#\pi_{i,j}$ are mutually singular using
Lemma~\ref{lem:non-branch}. Thus arguing as before the inequality
\eqref{eq:ecdkn} for $(\mu_t)_t$ can be obtained by summing the
corresponding inequalities valid along the geodesics
$(\mu_t^{i,j})_t$ associated to $\pi_{i,j}$.
\end{proof}
\subsection{Calculus and heat flow on metric measure spaces}
\label{sec:recap}
Here we recapitulate briefly some of the results obtained by Ambrosio,
Gigli and Savar\'e in a series of recent works, see
\cite{AGS11a,AGS11b,AGS12,G12}. In particular, we introduce notation and
concepts that we use in the sequel about the powerful machinery of
calculus on metric measure spaces developed by these authors. We refer
to \cite{AGS11a,AGS11b} for more details on the definitions and
results.
Let $(X,d,m)$ be a metric measure space. The basic object of study,
introduced in \cite{AGS11a}, is the Cheeger energy. For a measurable
function $f:X\to{\mathbb R}$ it can be defined by
\begin{align*}
\ch(f)=\frac12\int\wug{f}^2\mathrm{d} m\;,
\end{align*}
where $\wug{f}:X\to[0,\infty]$ denotes the so-called minimal weak
upper gradient of $f$. An important approximation result
\cite[Thm.~6.2]{AGS11a} states that for $f\in L^2(X,m)$ the Cheeger
energy can also be obtained by a relaxation procedure:
\begin{align*}
\ch(f)=\inf\left\{\liminf\limits_{n\to\infty}\frac12\int\abs{\nabla f_n}^2\mathrm{d} m\right\}\;,
\end{align*}
where the infimum is taken over all sequences of Lipschitz functions
$(f_n)$ converging to $f$ in $L^2(X,m)$ and where $\abs{\nabla f_n}$
denotes the local Lipschitz constant. In particular, Lipschitz
functions are dense in the domain of $\ch$ in $L^2(X,m)$, denoted by
$D(\ch)=W^{1,2}(X,d,m)$ in the following sense: For each $f \in D
(\ch)$ there exists a sequence $( f_n )_{n \in {\mathbb N}}$ of Lipschitz
functions such that $f_n \to f$ in $L^2$ and $| \nabla f_n | \to
\wug{f}$ in $L^2$ \cite[Lem.~4.3(c)]{AGS11a}. For a Lipschitz function
$f$ the slope, or local Lipschitz constant, is an upper gradient. Thus
\begin{align}\label{eq:lip-wug}
\wug{f}~\leq~\abs{\nabla f}\qquad \text{a.e.}
\end{align}
It turns out that $\ch$ is a convex and lower
semi-continuous functional on $L^2(X,m)$. It allows one to define the
Laplacian $-\Delta f\in L^2(X,m)$ of a function $f\in W^{1,2}(X,d,m)$
as the element of minimal $L^2$-norm in the subdifferential
$\partial^-\ch(f)$ provided the latter is non-empty. In this
generality, $\ch$ is not necessarily a quadratic form and
consequently $\Delta$ need not be a linear operator.
The classical theory of gradient flows of convex functionals in
Hilbert spaces allows one to study the gradient flow of $\ch$ in
$L^2(X,m)$: For any $f\in L^2(X,m)$ there exists a unique continuous
curve $(f_t)_{t\in[0,\infty)}$ in $L^2(X,m)$, locally absolutely
continuous in $(0,\infty)$ with $f_0=f$ such that $\frac{\mathrm{d}}{\mathrm{d}t}
f_t~\in~-\partial^-\ch(f_t)$ for a.e. $t>0$. In fact, we have $f_t\in
D(\Delta)$ and
\begin{align*}
\frac{\mathrm{d}^+}{\mathrm{d}t} f_t~=~\Delta f_t
\end{align*}
for all $t>0$. This gives rise to a semigroup $(\bH_t)_{t\geq0}$ on
$L^2(X,m)$ defined by $\bH_tf=f_t$, where $f_t$ is the unique
$L^2$-gradient flow of $\ch$.
On the other hand, one can study the metric gradient flow of the
relative entropy $\ent$ in $\mathscr{P}_2(X,d)$. Under the assumption that
$(X,d,m)$ satisfies $\cd(K,\infty)$ it has been proven in \cite{Gi10}
and more generally in \cite[Thm.~9.3(ii)]{AGS11a} that for any $\mu\in
D(\ent)$ there exists a unique gradient flow of $\ent$ starting from
$\mu$ in the sense of Definition~\ref{def:metric-gf}. This gives rise
to a semigroup $(\mathscr{H}_t)_{t\geq0}$ on $\mathscr{P}_2(X,d)$ defined by
$\mathscr{H}_t\mu=\mu_t$ where $\mu_t$ is the unique gradient flow of $\ent$
starting from $\mu$.
One of the main results of \cite{AGS11a} is the identification of the
two gradient flows, which allows to consistently define the heat flow
on $\cd(K,\infty)$ spaces.
\begin{theorem}[{\cite[Thm.~9.3]{AGS11a}}]\label{thm:gf-identification}
Let $(X,d,m)$ be a $\cd(K,\infty)$ space and let $f\in L^2(X,m)$
such that $\mu=f m\in \mathscr{P}_2(X,d)$.
Then we have
\begin{align*}
\mathscr{H}_t\mu=(\bH_tf) m \quad\forall t\geq0\;.
\end{align*}
\end{theorem}
A byproduct of this result is a representation of the slope of the
entropy:
\begin{align}\label{eq:slope-FI}
\abs{\nabla^-\ent}^2(\rho m)~=~4\int\wug{\sqrt{\rho}}^2\mathrm{d} m
\end{align}
for all probability densities $\rho$ with $\sqrt{\rho}\in
D(\ch)$. Note that the minimal weak upper gradient satisfies a chain
rule, \cite[Prop.~5.16]{AGS11a}: for $\phi:I\to{\mathbb R}$ non-decreasing and
locally Lipschitz we have
\begin{align}\label{eq:chainrule-wug}
\wug{\phi(f)}~=~\phi'(f)\wug{f}\;.
\end{align}
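As an illustration, applying \eqref{eq:chainrule-wug} with $\phi(r)=\sqrt{r}$, which is locally Lipschitz on $(0,\infty)$ and hence applicable where the density is positive, connects \eqref{eq:slope-FI} with the classical form of the Fisher information:

```latex
% Chain rule with $\phi(r)=\sqrt{r}$ on the set $\{\rho>0\}$:
\begin{align*}
\wug{\sqrt{\rho}}~=~\frac{1}{2\sqrt{\rho}}\,\wug{\rho}\;,
\qquad\text{hence}\qquad
4\int\wug{\sqrt{\rho}}^2\,\mathrm{d} m~=~\int\frac{\wug{\rho}^2}{\rho}\,\mathrm{d} m\;.
\end{align*}
```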
A basic property of the heat flow is the maximum principle, see
\cite[Thm.~4.16]{AGS11a}: If $f\in L^2(X,m)$ satisfies $f\leq C$
$m$-a.e. then also $\bH_tf\leq C$ $m$-a.e. for all $t\geq0$.
If $\ch$ is assumed to be a quadratic form, and without any curvature
assumption, the notion of weak upper gradient gives rise to a powerful
calculus, in which not only the norm of the gradient, but also scalar
products between gradients are defined. For details we refer to
\cite[Sec. 4.3]{AGS11b} and \cite[Sec. 4.3]{G12}, where this calculus
has been developed in larger generality. We note briefly that given
$f,g\in D(\ch)$, the limit
\begin{align}\label{eq:scp}
\langle\nabla f,\nabla g\rangle~:=~\lim\limits_{\varepsilon\searrow0}\frac{1}{2\varepsilon}\left(\wug{(f+\varepsilon g)}^2-\wug{f}^2\right)
\end{align}
can be shown to exist in $L^1(X,m)$. Moreover, the map
$D(\ch)^2\ni(f,g)\mapsto\langle\nabla f,\nabla g\rangle \in L^1(X,m)$ is
bilinear, symmetric and satisfies
\begin{align*}
\abs{\langle\nabla f,\nabla g\rangle}~\leq~\wug{f}\wug{g}\;.
\end{align*}
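The bilinearity of \eqref{eq:scp} ultimately rests on the fact that, for quadratic $\ch$, the parallelogram identity holds not only for the energies but, by locality, also pointwise; see \cite[Sec.~4.3]{G12}. In the notation above this reads

```latex
% Pointwise parallelogram identity for quadratic Cheeger energies
% (cf. \cite[Sec.~4.3]{G12}): for $f,g\in D(\ch)$,
\begin{align*}
\wug{f+g}^2+\wug{f-g}^2~=~2\wug{f}^2+2\wug{g}^2\qquad m\text{-a.e.}\;,
\end{align*}
% from which symmetry and bilinearity of
% $(f,g)\mapsto\langle\nabla f,\nabla g\rangle$ in \eqref{eq:scp} follow.
```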
For all $f,g,h\in D(\ch)\cap L^\infty(X,m)$ we have the Leibniz rule:
\begin{align}\label{eq:Leibniz}
\int \langle\nabla f,\nabla(g h)\rangle\mathrm{d} m~=~\int h \langle\nabla f,\nabla g\rangle\mathrm{d}
m +\int g \langle\nabla f,\nabla h\rangle\mathrm{d} m\;.
\end{align}
A quadratic Cheeger energy gives rise to a strongly local Dirichlet
form $(\mathcal{E},D(\mathcal{E}))$ on $L^2(X,m)$ by setting $\mathcal{E}(f,f)=\ch(f)$ and
$D(\mathcal{E})=W^{1,2}(X,d,m)$. In particular, $W^{1,2}(X,d,m)$ is a Hilbert
space and $L^2$-Lipschitz functions are dense in the usual sense
\cite[Prop.~4.10]{AGS11b}. In this case $\bH_t$ is a semigroup of
self-adjoint linear operators on $L^2(X,m)$ with the Laplacian
$\Delta$ as its generator. The previous result implies that for
$f,g\in W^{1,2}(X,d,m)$
\begin{align*}
\mathcal{E}(f,g)~=~\int \langle\nabla f,\nabla g\rangle\mathrm{d} m\;,
\end{align*}
i.e. the energy measure of $\mathcal{E}$ has a density given by
\eqref{eq:scp}. Moreover, for $f\in W^{1,2}$ and $g\in D(\Delta)$ we
have the integration by parts formula
\begin{align}\label{eq:int-by-parts}
\int \langle\nabla f,\nabla g\rangle\mathrm{d} m~=~-\int f\Delta g\mathrm{d} m\;.
\end{align}
\subsection{The Riemannian curvature-dimension condition}
\label{sec:riem-cdkn}
In this section we introduce the notion of Riemannian
curvature-dimension bounds. This notion can be seen as a
generalization of the Riemannian Ricci curvature bounds for metric
measure spaces introduced in \cite{AGS11b} for spaces with finite
reference measure and later generalized in \cite{AGMR12} to
$\sigma$-finite reference measures. We will rely on the powerful
machinery of calculus on metric measure spaces already developed by
Ambrosio, Gigli, Savar\'e and co-authors in a series of recent
works. Following their nomenclature, we make the following
\begin{definition}\label{def:riemcdkn}
We say that a metric measure space $(X,d,m)$ is
\emph{infinitesimally Hilbertian} if the associated Cheeger energy
is quadratic. Moreover, we say that it satisfies the
\emph{Riemannian curvature-dimension condition} ${\rcd^*(K,N)}$ if it
satisfies any of the equivalent properties of
Theorem~\ref{thm:RCDKN-equiv} below.
\end{definition}
\begin{theorem}\label{thm:RCDKN-equiv}
Let $(X,d,m)$ be a metric measure space with $\supp m = X$. The
following properties are equivalent:
\begin{itemize}
\item[(i)] $(X,d,m)$ is infinitesimally Hilbertian and satisfies the ${\cd^*(K,N)}$ condition.
\item[(ii)] $(X,d,m)$ is infinitesimally Hilbertian and satisfies the ${\cd^e(K,N)}$ condition.
\item[(iii)] $(X,d,m)$ is a length space satisfying the exponential
integrability condition \eqref{eq:exp-int} and any
$\mu\in\mathscr{P}_2(X,d)$ is the starting point of an ${\evi_{K,N}}$ gradient
flow of $\ent$.
\end{itemize}
\end{theorem}
\begin{remark}\label{rem:strongcde}
Note that according to Theorem~\ref{thm:eviconvex}, (iii) even
implies that $(X,d,m)$ is a strong ${\cd^e(K,N)}$ space and a geodesic
space.
\end{remark}
\begin{remark}\label{rem:additive-linear}
Since both ${\cd^*(K,N)}$ and ${\cd^e(K,N)}$ imply the $\cd(K,\infty)$
condition, \cite[Thm.~5.1]{AGS11b}, resp. \cite[Thm.~6.1]{AGMR12}
show that the requirement that the Cheeger energy $\ch$ is quadratic
can equivalently be replaced in (i) and (ii) by additivity of the
semigroup $\mathscr{H}_t$, in the sense that
$\mathscr{H}_t\big(\lambda\mu+(1-\lambda)\nu\big)=\lambda\mathscr{H}_t\mu
+(1-\lambda)\mathscr{H}_t\nu$ for any $\mu,\nu\in\mathscr{P}_2(X,d)$ and
$\lambda\in[0,1]$.
\end{remark}
\begin{proof}
(i)$\Leftrightarrow$(ii): Both ${\cd^*(K,N)}$ and ${\cd^e(K,N)}$ imply the
$\cd(K,\infty)$ condition. Thus \cite[Thm.~6.1]{AGMR12} yields that
under either (i) or (ii) the $\evi_K$ gradient flow of $\ent$ exists
for every starting point. This implies that $(X,d,m)$ is a strong
$\cd(K,\infty)$ space and hence essentially non-branching by
\cite[Thm.~1.1]{RS12}. In this setting, Theorem~\ref{thm:equiv_as}
yields equivalence of ${\cd^*(K,N)}$ and ${\cd^e(K,N)}$.
(ii)$\Rightarrow$(iii): By Remark~\ref{rem:exp-int}, $(X,d)$ is a
geodesic space and satisfies \eqref{eq:exp-int}. Taking
Theorem~\ref{thm:contraction} into account it is sufficient to show
that $\mathscr{H}_t(\mu)$ is an ${\evi_{K,N}}$-gradient flow of $\ent$ for every
$\mu\in\mathscr{P}_2(X,d,m)$ of the form $\mu=f m$ with $f$ bounded and
$\ch(\sqrt{f})<\infty$. Set $\mu_t:=\mathscr{H}_t(\mu)=f_t m$ and note that
$f_t$ is still bounded with $\ch(\sqrt{f_t})<\infty$ for all
$t>0$. By Proposition~\ref{prop:evi-equiv} it is sufficient to take
reference measures in \eqref{eq:knflow} of the form $\sigma=g m$
where $g$ is bounded and has bounded support. Taking into account
\eqref{eq:evialt1} we have to show that for a.e. $t>0$:
\begin{align}\label{eq:equiv1}
\frac{U_N(\sigma)}{U_N(\mu_t)}~\leq~\ckn{W_2(\mu_t,\sigma)} - \frac{\skn{W_2(\mu_t,\sigma)}}{N\cdot W_2(\mu_t,\sigma)}\frac{\mathrm{d}}{\mathrm{d}t}\frac12
W_2(\mu_t,\sigma)^2 \;.
\end{align}
This will follow from essentially the same arguments as in the proof
of \cite[Thm.~6.1]{AGMR12}. Let us briefly sketch these arguments,
indicating the modifications that are necessary.
First, \cite[Thm.~6.3]{AGMR12} yields that for a.e. $t>0$:
\begin{align}\label{eq:equiv2}
\frac{\mathrm{d}}{\mathrm{d}t}\frac12 W_2(\mu_t,\sigma)^2~=~-\mathcal{E}_{\mu_t}(\phi_t,\log f_t)\;,
\end{align}
where $\phi_t$ is a suitable Kantorovich potential for the optimal
transport from $\mu_t$ to $\sigma$ and $\mathcal{E}_{\mu_t}(\cdot,\cdot)$ is
the bilinear form associated to the weighted Cheeger energy
$\ch_{\mu_t}(f)=\frac12\int\abs{\nabla f}^2_{w,\mu_t}\mathrm{d}\mu_t$ (see
\cite[Sec. 3]{AGMR12}). We claim that also
\begin{align}\label{eq:equiv3}
\mathcal{E}_{\mu_t}(\phi_t,\log f_t)~\geq~\frac{N\cdot W_2(\mu_t,\sigma)}{\skn{W_2(\mu_t,\sigma)}}\Big[-\ckn{W_2(\mu_t,\sigma)} + \frac{U_N(\sigma)}{U_N(\mu_t)}\Big]\;.
\end{align}
Combining then \eqref{eq:equiv2} and \eqref{eq:equiv3} yields the
desired inequality \eqref{eq:equiv1}.
To prove \eqref{eq:equiv3} one argues similarly to the proof of
\cite[Thm.~6.5]{AGMR12}. First, $f_t$ is approximated by suitable
truncated probability densities $f_t^\delta$. Then, by successively
minimizing the entropy of midpoints, a particularly nice geodesic
$(\Gamma_s^{\delta,t})_{s\in[0,1]}$ connecting
$\mu_t^\delta=f_t^\delta m$ to $\sigma$ is constructed which
satisfies the $\cd(K,\infty)$ condition and has density bounds. From
the construction it is immediate that in our setting this geodesic
also satisfies the ${\cd^e(K,N)}$ condition. Thus on one hand, we have by
Lemma~\ref{lem:convex-below-tangent} below the inequality
\begin{align}\label{eq:equiv4}
\liminf\limits_{s\searrow0}\frac{U_N(\Gamma^{\delta,t}_s)-U_N(\mu^\delta_t)}{s}~\geq~\frac{W_2(\mu^\delta_t,\sigma)}{\skn{W_2(\mu^\delta_t,\sigma)}}\Big[-U_N(\mu^\delta_t)\cdot\ckn{W_2(\mu^\delta_t,\sigma)}
+ U_N(\sigma)\Big]\;.
\end{align}
On the other hand, \cite[Prop.~6.6]{AGMR12} yields that
\begin{align}\label{eq:equiv5}
-\mathcal{E}_{\mu^\delta_t}(\phi^\delta_t,\log f^\delta_t)~\leq~\liminf\limits_{s\searrow0}\frac{\ent(\Gamma^{\delta,t}_s)-\ent(\mu^\delta_t)}{s}\;,
\end{align}
where $\phi^\delta_t$ is a Kantorovich potential relative to
$\mu^\delta_t$ and $\sigma$. By $K$-convexity of $\ent$ along the
geodesic $\Gamma^{\delta,t}$ we have
\begin{align*}
\limsup\limits_{s\searrow0}\frac{\ent(\Gamma^{\delta,t}_s)-\ent(\mu^\delta_t)}{s}~\leq~\ent(\sigma)
- \ent(\mu^\delta_t) - \frac{K}{2} W_2(\mu^\delta_t,\sigma)^2
\end{align*}
and thus
$\big(\ent(\Gamma^{\delta,t}_s)-\ent(\mu^\delta_t)\big)^2=o(s)$ as
$s\to0$. Now \eqref{eq:equiv4} and \eqref{eq:equiv5} together with a
Taylor expansion of $x\mapsto e^{-x/N}$ yield
\begin{align}\label{eq:equiv6}
\mathcal{E}_{\mu^\delta_t}(\phi^\delta_t,\log f^\delta_t)~\geq~\frac{N\cdot W_2(\mu^\delta_t,\sigma)}{\skn{W_2(\mu^\delta_t,\sigma)}}\Big[-\ckn{W_2(\mu^\delta_t,\sigma)} + \frac{U_N(\sigma)}{U_N(\mu^\delta_t)}\Big]\;.
\end{align}
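For the reader's convenience, let us make the Taylor expansion step explicit. Writing $U_N=\exp(-\frac1N\ent)$ and using that $\big(\ent(\Gamma^{\delta,t}_s)-\ent(\mu^\delta_t)\big)^2=o(s)$, we obtain
\begin{align*}
\frac{U_N(\Gamma^{\delta,t}_s)-U_N(\mu^\delta_t)}{s}~=~-\frac{U_N(\mu^\delta_t)}{N}\cdot\frac{\ent(\Gamma^{\delta,t}_s)-\ent(\mu^\delta_t)}{s} + o(1)\qquad\text{as } s\searrow0\;,
\end{align*}
so that \eqref{eq:equiv4} and \eqref{eq:equiv5} indeed combine to \eqref{eq:equiv6} after multiplying by $N/U_N(\mu^\delta_t)$.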
Finally \eqref{eq:equiv3} is obtained by lifting the truncation and
passing to the limit $\delta\to0$ in \eqref{eq:equiv6}. Passage to
the limit on the RHS is straightforward; for the LHS a delicate
argument is needed, which is given in the proof of \cite[Thm.~6.5]{AGMR12}.
(iii)$\Rightarrow$(ii). Since by Lemma~\ref{lem:consistentKN} an
${\evi_{K,N}}$ flow is in particular an $\evi_K$ flow,
\cite[Thm.~5.1]{AGS11b} or \cite[Thm.~6.1]{AGMR12} already gives
that $(X,d,m)$ is infinitesimally Hilbertian. Let us now show that
$(X,d,m)$ is a strong ${\cd^e(K,N)}$ space. The same argument as in the
proof of \cite[Lem.~5.2]{AGS11b} yields for any pair $\mu_0,\mu_1\in
D(\ent)\subset\mathscr{P}_2(X,d,m)$ the existence of a geodesic
$\Gamma:[0,1]\to D(\ent)$ connecting $\mu_0$ to $\mu_1$. Hence
$D(\ent)$ is a geodesic space and Theorem~\ref{thm:eviconvex} shows
that \eqref{eq:ecdkn} holds along any geodesic in $D(\ent)$.
\end{proof}
\begin{lemma}\label{lem:convex-below-tangent}
Let $(X,d,m)$ satisfy the ${\cd^e(K,N)}$ condition and let
$\mu_0,\mu_1\in\mathscr{P}_2(X,d,m)$. Then there exists a geodesic
$(\mu_t)_{t\in[0,1]}$ in $\mathscr{P}_2(X,d,m)$ connecting $\mu_0$ and $\mu_1$ such
that, with $\theta=W_2(\mu_0,\mu_1)$,
\begin{align}\label{eq:convex-below-tangent}
U_N(\mu_1)~\leq~\ckn{\theta}\cdot U_N(\mu_0) + \frac{\skn{\theta}}{\theta}\cdot\liminf\limits_{t\searrow0}\frac{U_N(\mu_t)-U_N(\mu_0)}{t} \;.
\end{align}
\end{lemma}
\begin{proof}
Let $(\mu_t)_{t\in[0,1]}$ be the geodesic connecting $\mu_0$ and $\mu_1$ given by
the ${\cd^e(K,N)}$ condition. We immediately obtain that for
every $t\in[0,1]$:
\begin{align*}
U_N(\mu_t)-U_N(\mu_0)~\geq~\left[\sigkn{1-t}{\theta}-1\right]\cdot U_N(\mu_0) + \sigkn{t}{\theta}\cdot U_N(\mu_1)\;.
\end{align*}
Dividing by $t$ on both sides and passing to the limit $t\searrow0$
the assertion follows from the fact that
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}t}\sigkn{t}{\theta} = \frac{\theta\cdot\ckn{t\theta}}{\skn{\theta}}\;,\quad \sigkn{0}{\theta}=0\;,\quad \sigkn{1}{\theta}=1\;.
\end{align*}
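Indeed, recalling that $\sigkn{t}{\theta}=\skn{t\theta}/\skn{\theta}$ and that $\frac{\mathrm{d}}{\mathrm{d}r}\skn{r}=\ckn{r}$, these identities follow by direct differentiation:
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\skn{t\theta}}{\skn{\theta}}~=~\frac{\theta\cdot\ckn{t\theta}}{\skn{\theta}}\;.
\end{align*}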
\end{proof}
\begin{proposition}[Weighted spaces]\label{prop:weighted-rcdkn}
Let $(X,d,m)$ be a ${\rcd^*(K,N)}$ space and let $V:X\to{\mathbb R}$ be a continuous,
bounded below and strongly $(K',N')$-convex function in the sense of
Definition~\ref{def:knconvex} with $\int\exp(-V)\mathrm{d} m< \infty$. Then
$(X,d,\mathrm{e}^{-V}m)$ is a $\rcd^*(K+K',N+N')$ space.
\end{proposition}
\begin{proof}
By Proposition~\ref{prop:weighted-cde}, $(X,d,\mathrm{e}^{-V}m)$ is a
$\cd^e(K+K',N+N')$ space. Invariance of the weak upper gradient
under multiplicative changes of the reference measure by
\cite[Lem.~4.11]{AGS11a} together with the Leibniz rule
\eqref{eq:Leibniz} give that the Cheeger energy associated to
$\mathrm{e}^{-V}m$ is again quadratic. See also
\cite[Prop.~6.19]{AGS11b}. Thus the assertion follows from
Theorem~\ref{thm:RCDKN-equiv} (ii).
\end{proof}
The Riemannian curvature-dimension condition has a number of natural
properties that we collect here. The first one is the stability under
convergence of metric measure spaces in the transportation distance
${\mathbb D}$. We refer to \cite[Sec.~I.3]{S06} for the definition and properties
of the transportation distance.
\begin{theorem}[Stability]\label{thm:rcd-stable}
Let $( (X_n,d_n,m_n) )_{n \in {\mathbb N}}$ be a sequence of ${\rcd^*(K,N)}$ spaces
with $m_n \in \mathscr{P}_2 (X_n,d_n)$. If
${\mathbb D}\big((X_n,d_n,m_n),(X,d,m)\big)\to 0$ for some metric measure
space $(X,d,m)$ then $(X,d,m)$ is also a ${\rcd^*(K,N)}$ space.
\end{theorem}
Note that this in particular implies stability of the ${\rcd^*(K,N)}$
condition under {\it measured Gromov-Hausdorff convergence}
(mGH-convergence for short). Indeed, for compact mms -- and only for
such spaces is the concept of mGH-convergence well-established --
mGH-convergence implies ${\mathbb D}$-convergence
\cite[Lemma 3.18]{S06}.
\begin{proof}
We follow essentially the arguments of Ambrosio, Gigli and Savar\'e
in \cite[Thm.~6.10]{AGS11b} where stability of the $\rcd(K,\infty)$
condition has been established.
We show stability of characterization (iii) in Theorem~\ref{thm:RCDKN-equiv}.
By Proposition~\ref{prop:evi-equiv} and
Corollary~\ref{cor:contraction} it is sufficient to show that for
any $\mu=f m\in\mathscr{P}_2(X,d,m)$ with $f\in L^\infty(X,m)$ there
exists a continuous curve $(\mu_t)_{t\in[0,\infty)}$ in $\mathscr{P}_2(X,d)$,
locally absolutely continuous in $(0,\infty)$ and starting in $\mu$
such that for any $\nu=\sigma m\in\mathscr{P}_2(X,d)$ with $\sigma\in
L^\infty(X,m)$ and any $s\leq t$:
\begin{align}\label{eq:stable1}
e_K(t-s)\frac{N}{2}\left(1-\frac{U_N(\nu)}{U_N(\mu_{t})}\right)~\geq~\mathrm{e}^{K(t-s)}&\skn{\frac12
W_2(\mu_{t},\nu)}^2\\\nonumber
-&\skn{\frac12 W_2(\mu_{s},\nu)}^2\;.
\end{align}
Choose optimal couplings $(\hat d_n,q_n)$ of $(X_n,d_n,m_n)$ and
$(X,d,m)$. Given $\mu=f m\in\mathscr{P}_2(X,d,m)$ we set
\begin{align*}
Q_n\mu(\mathrm{d} x)~=~\int f(y)q_n(\mathrm{d} x,\mathrm{d} y)~\in~\mathscr{P}_2(X_n,d_n,m_n)\;.
\end{align*}
Similarly we obtain an operator
$Q_n':\mathscr{P}_2(X_n,d_n,m_n)\to\mathscr{P}_2(X,d,m)$, see \cite[Lem.~I.4.19]{S06} and also \cite[Prop.~2.2,2.3]{AGS11b}.
Now set $\mu^n=Q_n\mu$. By assumption there exists a curve
$(\mu^n_t)_{t\in[0,\infty)}$ in $\mathscr{P}_2(X_n,d_n)$ starting from
$\mu^n$ such that for all $s\leq t$:
\begin{align}\label{eq:stable2}
e_K(t-s)\frac{N}{2}\left(1-\frac{U^n_N(\nu^n)}{U^n_N(\mu^n_{t})}\right)~\geq~\mathrm{e}^{K(t-s)}&\skn{\frac12
W_2(\mu^n_{t},\nu^n)}^2\\\nonumber
-&\skn{\frac12 W_2(\mu^n_{s},\nu^n)}^2\;,
\end{align}
where $\nu^n=Q_n\nu$ and $U^n_N$ corresponds to the relative entropy
functional in $(X_n,d_n,m_n)$. By the maximum principle we have
$\mu^n_t\leq Cm_n$ with $C = \| f \|_{L^\infty ( X, m )}$.
For each $t\geq0$ set
$\tilde\mu^n_t:=Q'_n\mu^n_t\in\mathscr{P}_2(X,d)$. We claim that, after
extraction of a subsequence, we have that $\tilde\mu^n_t\to\mu_t$ in
$\mathscr{P}_2(X,d)$ as $n\to\infty$ for a curve $(\mu_t)$ in $\mathscr{P}_2(X,d)$.
Indeed, note that $\tilde\mu^n_t\leq C m$ for all $n$ and $t$. From
the Energy Dissipation Equality \eqref{eq:EDE} we conclude that
\begin{align*}
\int_s^t\abs{\dot\mu^n_r}^2\mathrm{d} r~\leq~\ent(\mu^n|m^n)~\leq~C\log C
\end{align*}
and hence the curves $(\mu^n_t)$ are equi-absolutely continuous.
Since $m\in\mathscr{P}_2(X,d)$, the set of measures
$\{\mu\in\mathscr{P}_2(X,d,m):\mu\leq Cm\}$ is relatively compact w.r.t.
$W_2$-convergence. Hence, by a diagonal argument, we conclude that
up to extraction of a subsequence $\tilde\mu^n_t\to\mu_t$ for all
$t\in{\mathbb Q}_+$ and some $\mu_t\in\mathscr{P}_2(X,d)$. Using the equi-absolute
continuity of the curves $(\mu^n_t)$ and the equi-continuity of the
maps $Q_n'$ we obtain convergence for all times
$t\in[0,\infty )$ for
the same subsequence and a curve $(\mu_t)$ in $\mathscr{P}_2(X,d)$ which is
again absolutely continuous.
Finally, we observe that since the operators $Q_n,Q'_n$ do not
increase the entropy we have $U_N^n(\nu^n)\geq U_N(\nu)$ and by
lower semi-continuity of the entropy also
$\ent(\mu_t)\leq\liminf_n\ent(\tilde\mu^n_t)\leq\liminf_n\ent(\mu^n_t|m^n)$. Moreover,
we have $W_2(\mu^n_t,\nu^n)\to W_2(\mu_t,\nu)$. This allows us to pass
to the limit in \eqref{eq:stable2} to obtain \eqref{eq:stable1}.
\end{proof}
\begin{theorem}[Tensorization]\label{thm:rcd-tensor}
For $i=1,2$ let $(X_i,d_i,m_i)$ be $\rcd^*(K,N_i)$ spaces. Then the
product space $(X_1\times X_2,d,m_1\otimes m_2)$, defined by
\begin{align*}
d\big((x,y),(x',y')\big)^2=d_1(x,x')^2 + d_2(y,y')^2\;,
\end{align*}
also satisfies $\rcd^*(K,N_1+N_2)$.
\end{theorem}
\begin{proof}
The result will follow indirectly: According to
Theorem~\ref{thm:grad-est} below, the $\rcd^*(K,N_i)$-conditions
will imply the Bakry--Ledoux conditions $\bl(K,N_i)$ on the first
and second factor. According to \cite[Thm.~5.2]{AGS12}, this implies
that the product space satisfies $\bl(K,N_1+N_2)$. Now
Theorems~\ref{thm:BEW2CDE} and \ref{thm:RCDKN-equiv} imply that the
$\rcd^*(K,N_1+N_2)$ condition holds on the product space.
\end{proof}
\begin{remark}
Let us also briefly sketch an alternative, more direct argument using
characterization (i) of Theorem~\ref{thm:RCDKN-equiv}:
First, \cite[Thm.~6.17]{AGS11b} yields that the Cheeger energy on
the product space is again quadratic. Since $(X_i,d_i,m_i)$ are in
particular strong $\cd(K,\infty)$ spaces, they are essentially
non-branching according to Definition~\ref{def:ess-non-branch} by
\cite[Thm.~1.1]{RS12}. This implies that also the product space is
essentially non-branching. The latter can be seen using the fact
that if $\gamma=(\gamma_1,\gamma_2)$ is a geodesic in $X_1\times
X_2$, then $\gamma_i$ are geodesics in $X_i$. Finally, the reduced
curvature-dimension condition tensorizes under the essentially
non-branching assumption. This follows from the same arguments as in
\cite[Thm.~4.1]{BS10}, where tensorization has been proven under the
slightly stronger assumption that the full space is non-branching.
\end{remark}
We conclude with a globalization property of the ${\rcd^*(K,N)}$ condition.
\begin{theorem}[Local-to-global]\label{thm:rcd-locglob}
Let $(X,d,m)$ be a strong $\cd^e_{\text{loc}}(K,N)$ space with
$m\in\mathscr{P}_2(X,d)$ and assume that it is locally infinitesimally
Hilbertian in the following sense: there exists a countable covering
$\{Y_i\}_{i\in I}$ by closed sets with $m(Y_i)>0$ such that the
spaces $(Y_i,d,m_i)$ are infinitesimally Hilbertian, where
$m_i=m(Y_i)^{-1}m\vert_{Y_i}$. Then $(X,d,m)$ satisfies the ${\rcd^*(K,N)}$
condition.
\end{theorem}
\begin{proof}
Using characterization (ii) in Theorem~\ref{thm:RCDKN-equiv}, the
assertion is a direct consequence of the fact that both
infinitesimal Hilbertianity and the \emph{strong} ${\cd^e(K,N)}$ condition
by themselves have the local-to-global property. Indeed, by
\cite[Thm.~6.20]{AGS11b} the mms $(X,d,m)$ is again infinitesimally
Hilbertian, i.e. the associated Cheeger energy is quadratic. By
Theorem~\ref{thm:cde-locglob} it also satisfies the strong ${\cd^e(K,N)}$
condition.
\end{proof}
\begin{remark}
It is also possible to establish the local-to-global property by
passing through the corresponding result for ${\cd^*(K,N)}$ with the aid
of Theorem~\ref{thm:RCDKN-equiv}. This requires checking that the
(quite complicated) proof of globalization for ${\cd^*(K,N)}$ in
\cite[Thm.~5.1]{BS10} also works under the slightly weaker
essentially non-branching assumption. Thus, we prefer to give an
independent and, to our knowledge, novel argument in the preceding
proof.
\end{remark}
\subsection{Dimension dependent functional inequalities}
\label{sec:FI}
Here we present dimensional versions of classical transport
inequalities. Namely, we show that the new entropic
curvature-dimension condition entails improvements of the HWI
inequality, the logarithmic Sobolev inequality and the Talagrand
inequality taking into account the dimension bound. These results can
be seen as finite dimensional analogues of the famous results by
Bakry--\'Emery \cite{BE85} and Otto--Villani \cite{OV00}.
Given a probability measure $\mu\in\mathscr{P}_2(X,d)$ we define the
\emph{Fisher information} by
\begin{align*}
I(\mu)~=~4\int\wug{\sqrt{f}}^2\mathrm{d} m\;,
\end{align*}
provided that $\mu=f m$ is absolutely continuous with a density
$f$ such that $\sqrt{f}\in D(\ch)$. Otherwise we set
$I(\mu)=+\infty$. With this notation,
the equality \eqref{eq:slope-FI}, which is valid on $\rcd (K,\infty)$ spaces,
means $\abs{ \nabla^-\ent}(f m)^2~=~I(f m)$.
\begin{theorem}[$N$-HWI inequality]\label{thm:N-HWI}
Assume that the mms $(X,d,m)$ satisfies the ${\cd^e(K,N)}$ condition. Then
for all $\mu_0,\mu_1\in\mathscr{P}_2(X,d,m)$,
\begin{align}\label{eq:UWI}
\frac{U_N(\mu_1)}{U_N(\mu_0)}~\leq~\ckn{W_2(\mu_0,\mu_1)} + \frac1N\skn{W_2(\mu_0,\mu_1)}\sqrt{I(\mu_0)}\;.
\end{align}
\end{theorem}
\begin{proof}
We can assume that $I(\mu_0)=\abs{\nabla^-\ent}(\mu_0)^2$ is finite,
as otherwise there is nothing to prove. Let $(\mu_t)_{t\in[0,1]}$ be
the constant speed geodesic connecting $\mu_0$ to $\mu_1$ given by
the ${\cd^e(K,N)}$ condition. Since $(K,N)$-convexity of $\ent$ along the
geodesic $(\mu_t)$ implies usual $K$-convexity along the same
geodesic we have
\begin{align*}
\limsup\limits_{t\searrow0}\frac{\ent(\mu_t)-\ent(\mu_0)}{t}~\leq~\ent(\mu_1)
- \ent(\mu_0) - \frac{K}{2} W_2(\mu_0,\mu_1)^2\;.
\end{align*}
On the other hand, we have
\begin{align}\nonumber
\liminf\limits_{t\searrow0}\frac{\ent(\mu_t)-\ent(\mu_0)}{t}~&\geq~-\limsup\limits_{t\searrow0}\frac{\max\{\ent(\mu_0)-\ent(\mu_t),0\}}{t}\\\label{eq:nhwi1}
~&\geq~-\abs{\nabla^-\ent}(\mu_0)\cdot W_2(\mu_0,\mu_1)\;.
\end{align}
Thus $\big(\ent(\mu_t)-\ent(\mu_0)\big)^2=o(t)$ as $t\to0$. By
Lemma~\ref{lem:convex-below-tangent} and a Taylor expansion of
$x\mapsto e^{-x/N}$ we obtain
\begin{align*}
\frac{U_N(\mu_1)}{U_N(\mu_0)}~&\leq~\ckn{\theta} + \frac{\skn{\theta}}{\theta\cdot U_N(
\mu_0)}\cdot\liminf\limits_{t\searrow0}\frac{U_N(\mu_t)-U_N(\mu_0)}{t}\\
&=~\ckn{\theta} - \frac{\skn{\theta}}{\theta\cdot
N}\cdot\limsup\limits_{t\searrow0}\frac{\ent(\mu_t)-\ent(\mu_0)}{t}\;,
\end{align*}
where we set $\theta=W_2(\mu_0,\mu_1)$. Applying the estimate
\eqref{eq:nhwi1} again yields the claim.
\end{proof}
\begin{corollary}[$N$-LogSobolev inequality]\label{cor:N-LogSob}
Assume that $(X,d,m)$ is a ${\cd^e(K,N)}$ space with $K>0$ and that
$m\in\mathscr{P}_2(X,d)$. Then for all $\mu\in\mathscr{P}_2(X,d,m)$,
\begin{align}
\label{eq:N-LSI}
KN\left[\exp\left(\frac{2}{N}\ent(\mu)\right) - 1\right]~\leq~I(\mu)\;.
\end{align}
\end{corollary}
The LHS is obviously bounded from below by $2K\cdot \ent(\mu)$.
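Indeed, since $\mathrm{e}^x-1\geq x$ for all $x\in{\mathbb R}$,
\begin{align*}
KN\left[\exp\left(\frac{2}{N}\ent(\mu)\right) - 1\right]~\geq~KN\cdot\frac{2}{N}\ent(\mu)~=~2K\,\ent(\mu)\;,
\end{align*}
so that \eqref{eq:N-LSI} implies the classical logarithmic Sobolev inequality $2K\,\ent(\mu)\leq I(\mu)$.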
\begin{proof}
We apply the $N$-HWI inequality from Theorem~\ref{thm:N-HWI} to the
measures $\mu_0=\mu$ and $\mu_1=m$. Noting that $U_N(m)=1$ and
setting $\theta=W_2(\mu,m)$ we obtain
\begin{align*}
\exp\left(\frac{1}{N}\ent(\mu)\right)~\leq~\ckn{\theta} + \frac1N\skn{\theta}\sqrt{I(\mu)}\;.
\end{align*}
Taking the square and using Young's inequality $2ab\leq Ka^2+K^{-1}b^2$ we obtain
\begin{align*}
\exp\left(\frac{2}{N}\ent(\mu)\right)~&\leq~\ckn{\theta}^2 + \frac2N\skn{\theta}\ckn{\theta}\sqrt{I(\mu)} + \frac1{N^2}\skn{\theta}^2I(\mu)\\
&\leq~\left(\ckn{\theta}^2+\frac{K}{N}\skn{\theta}^2\right)\left[1 + \frac{1}{KN}I(\mu)\right]\;.
\end{align*}
Since $\ckn{\cdot}^2+\frac{K}{N}\skn{\cdot}^2=1$, this yields the claim.
\end{proof}
\begin{corollary}[$N$-Talagrand inequality]\label{cor:N-Talagrand}
Assume that $(X,d,m)$ is a ${\cd^e(K,N)}$ space with $K>0$ and that
$m\in\mathscr{P}_2(X,d)$. Then $W_2(\mu,m)\le \sqrt{\frac NK}\,\frac\pi2$ for any $\mu\in\mathscr{P}_2(X,d,m)$ and
\begin{align}\label{eq:T}
\ent(\mu)~\geq~-N\log\cos\left(\sqrt{\frac{K}{N}}W_2(\mu,m)\right)\;.
\end{align}
\end{corollary}
Note that under the given upper bound on $W_2(\mu,m)$, the RHS in the above estimate is bounded from below by $\frac K2 \, W_2(\mu,m)^2$.
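For completeness: with $x=\sqrt{K/N}\,W_2(\mu,m)\in[0,\frac\pi2]$ the elementary inequality $-\log\cos x\geq\frac{x^2}{2}$ (valid since $-\log\cos x=\int_0^x\tan r\,\mathrm{d} r\geq\int_0^x r\,\mathrm{d} r$) gives
\begin{align*}
-N\log\cos\left(\sqrt{\frac{K}{N}}W_2(\mu,m)\right)~\geq~\frac{N}{2}\cdot\frac{K}{N}\,W_2(\mu,m)^2~=~\frac{K}{2}\,W_2(\mu,m)^2\;,
\end{align*}
so \eqref{eq:T} strengthens the classical Talagrand inequality.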
\begin{proof}
The claims follow immediately by applying the $N$-HWI inequality
\eqref{eq:UWI} from Theorem~\ref{thm:N-HWI} to the measures
$\mu_0=m$ and $\mu_1=\mu$ and noting that $U_N(m)=1$ as well as
$I(m)=0$.
\end{proof}
It is interesting to note that in the spirit of Otto--Villani a
slightly weaker Talagrand-like inequality can also be derived from the
$N$-LogSobolev inequality.
\begin{proposition}
Let $(X,d,m)$ be a $\cd(K',\infty)$ space for some $K'\in{\mathbb R}$ such
that $m\in\mathscr{P}_2(X,d)$. Assume that the $N$-LogSobolev inequality
\eqref{eq:N-LSI} holds for some $K>0$. Then for any
$\mu\in\mathscr{P}_2(X,d,m)$,
\begin{align}\label{eq:T-alt}
W_2(\mu,m)~\leq~\sqrt{\frac{N}{K}\left[\exp\left(\frac{2}{N}\ent(\mu)\right)-1\right]}\;.
\end{align}
\end{proposition}
\begin{proof}
We fix $\mu\in\mathscr{P}_2(X,d,m)$ and introduce the function
$A:[0,\infty)\to{\mathbb R}_+$ defined by
\begin{align*}
A(t)~=~W_2(\mathscr{H}_t\mu,\mu) + \sqrt{\frac{N}{K}\left[\exp\left(\frac{2}{N}\ent(\mathscr{H}_t\mu)\right)-1\right]}\;.
\end{align*}
Obviously, $A(0)$ equals the right hand side of \eqref{eq:T-alt},
while $A(t)\to W_2(\mu,m)$ as $t\to\infty$. Thus it is sufficient to
prove that $A$ is non-increasing. First note that under the
$\cd(K',\infty)$ condition we have the estimate
\begin{align}\label{eq:ddtrW2}
\frac{\mathrm{d}^+}{\mathrm{d}t} W_2(\mathscr{H}_t\mu,\mu)~\leq~\sqrt{I(\mathscr{H}_t\mu)}\;.
\end{align}
Indeed, using the triangle inequality we find
\begin{align*}
\limsup\limits_{h\searrow0}\frac{1}{h}\Big(W_2(\mathscr{H}_{t+h}\mu,\mu)-W_2(\mathscr{H}_t\mu,\mu)\Big)~&\leq~\limsup\limits_{h\searrow0}\frac{1}{h} W_2(\mathscr{H}_{t+h}\mu,\mathscr{H}_t\mu)~=~\abs{\dot{(\mathscr{H}_t\mu)}}\;.
\end{align*}
Now \eqref{eq:ddtrW2} follows from the fact that $\mathscr{H}_t\mu$ is a
metric gradient flow of $\ent$ by virtue of the Energy Dissipation
Equality \eqref{eq:EDE} and \eqref{eq:slope-FI}. Moreover, we
calculate
\begin{align*}
\frac{\mathrm{d}^+}{\mathrm{d}t} \sqrt{\frac{N}{K}\left[\exp\left(\frac{2}{N}\ent(\mathscr{H}_t\mu)\right)-1\right]}~&=~\exp\left(\frac{2}{N}\ent(\mathscr{H}_t\mu)\right)\left(NK\left[\exp\left(\frac{2}{N}\ent(\mathscr{H}_t\mu)\right)-1\right]\right)^{-\frac12}\frac{\mathrm{d}^+}{\mathrm{d}t} \ent(\mathscr{H}_t\mu)\\
&~\leq~-\left(NK\left[\exp\left(\frac{2}{N}\ent(\mathscr{H}_t\mu)\right)-1\right]\right)^{-\frac12}I(\mathscr{H}_t\mu)\\
&~\leq~-\sqrt{I(\mathscr{H}_t\mu)}\;,
\end{align*}
where in the second step we have used that $\frac{\mathrm{d}^+}{\mathrm{d}t}\ent(\mathscr{H}_t\mu)=-I(\mathscr{H}_t\mu)$ and
$\exp\big(\frac{2}{N}\ent(\mathscr{H}_t\mu)\big)\geq1$, and \eqref{eq:N-LSI} in the
last step. Thus we have shown that $\frac{\mathrm{d}^+}{\mathrm{d}t} A(t)\leq 0$, which yields the claim.
\end{proof}
\begin{remark}
Note that the arguments in the proofs above are of a purely metric
nature. The preceding results can be formulated and proven verbatim
in the setting of Section~\ref{sec:metric} by replacing $\ent$ with
a $(K,N)$-convex function $S$ on a metric space, the Fisher
information $I$ with the slope $\abs{\nabla^-S}$ and $\mathscr{H}_t\mu$ with
the gradient flow of $S$. However, for concreteness we choose to work
in the Wasserstein framework.
\end{remark}
\section{Equivalence of $\cd^e(K,N)$ and the Bochner Inequality $\be(K,N)$}\label{sec:cde-bochner}
In this section we will study properties of the gradient flow $\bH_t f$
of the (quadratic) Cheeger energy $\ch$ in $L^2 (X,m)$. We refer to
Section~\ref{sec:recap} and the references therein for notation and
basic properties.
\subsection{From $\cd^e(K,N)$ to $\bl(K,N)$ and $\be(K,N)$}
\label{sec:cde2bochner}
In this section we study the analytic consequences of the Riemannian
curvature-dimension condition. In particular, we show that it implies
a pointwise gradient estimate in the spirit of Bakry--Ledoux. This in
turn allows us to establish the full Bochner inequality.
As an immediate consequence of Definition~\ref{def:riemcdkn} and
Theorem~\ref{thm:contraction} we obtain the following Wasserstein
expansion bound. Recall from Proposition \ref{prop:simp-control} that
this bound in turn implies a slightly weaker and simpler bound not
involving the function $\skn{\cdot}$.
\begin{theorem}[$W_2$-expansion bound]\label{thm:W2-contraction}
Let $(X,d,m)$ be a ${\rcd^*(K,N)}$ space. For any $\mu,\nu\in\mathscr{P}_2(X,d)$
and $s,t>0$ we have
\begin{align}\label{eq:W2-contraction}
\skn{\frac12
W_2(\mathscr{H}_t\mu,\mathscr{H}_s\nu)}^2~\leq~&e^{-K(s+t)}\skn{\frac12
W_2(\mu,\nu)}^2\\\nonumber &+
\frac{N}{K}\Big(1-e^{-K(s+t)}\Big)\frac{\big(\sqrt{t}-\sqrt{s}\big)^2}{2(s+t)}\;.
\end{align}
In particular, in the limit $s\to t$ and $\nu\to\mu$ we have
\begin{align}\label{eq:W2-contraction-inf}
W_2(\mathscr{H}_t\mu,\mathscr{H}_s\nu)^2~\leq~&e^{-2Kt}W_2(\mu,\nu)^2 +
\frac{N}{K}\frac{1-e^{-2Kt}}{4t^2}\cdot\abs{s-t}^2\\\nonumber & +
o\big(W_2(\mu,\nu)^2+\abs{t-s}^2\big)\;.
\end{align}
\end{theorem}
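Let us briefly indicate how \eqref{eq:W2-contraction-inf} follows from \eqref{eq:W2-contraction}: using the expansion $\skn{r}=r+O(r^3)$ as $r\to0$ together with
\begin{align*}
\big(\sqrt{t}-\sqrt{s}\big)^2~=~\frac{(t-s)^2}{\big(\sqrt{t}+\sqrt{s}\big)^2}\;,\qquad
\frac{1}{2(s+t)\big(\sqrt{t}+\sqrt{s}\big)^2}~\longrightarrow~\frac{1}{16t^2}\quad\text{as } s\to t\;,
\end{align*}
one multiplies \eqref{eq:W2-contraction} by $4$ and collects the lower order terms.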
Next we will show that \eqref{eq:W2-contraction} implies
Bakry--Ledoux's gradient estimate. To do so with minimal a priori
regularity assumptions, we introduce another condition, which is
satisfied for each $\rcd(K',\infty)$ space (see Remark~\ref{rem:regularity}
below).
\begin{assumption} \label{ass:Ch-reg} $(X,d,m)$ is a length metric
measure space satisfying $\supp m = X$ and
\eqref{eq:exp-int}. In addition, every $f \in D ( \ch )$ with
$\wug{f} \le 1$ has a 1-Lipschitz representative.
\end{assumption}
\begin{theorem}[Bakry--Ledoux gradient estimate]\label{thm:grad-est}
Let $(X,d,m)$ be an infinitesimally Hilbertian metric measure space
satisfying Assumption~\ref{ass:Ch-reg}. Assume that
\eqref{eq:W2-contraction} with $K\in {\mathbb R}$, $N\in(0,\infty)$ holds for
the measures $( \bH_t \eta ) m$ and $(\bH_s\sigma)m$ instead of
$\mathscr{H}_t \mu$ and $\mathscr{H}_s \nu$ for each $\mu= \eta m$ and $\nu= \sigma
m$ in $\mathscr{P}_2 (X,d,m)$ and $t,s \ge 0$. Then
\begin{align}\label{eq:grad-est}
\wug{\bH_t f}^2 + \frac{4Kt^2}{N\big(e^{2Kt}-1\big)}\abs{\Delta \bH_t f}^2~\leq~e^{-2Kt}\bH_t\big(\wug{f}^2\big)
\end{align}
$m$-a.e.\ in $X$ for any $f\in D(\ch)$ and $t>0$.
\end{theorem}
Before giving the proof we note the following result, which gives a
stronger version of the gradient estimate involving the Lipschitz
constant under more restrictions on $f$.
\begin{proposition}\label{prop:grad-est-ref}
Let $(X,d,m)$ be an infinitesimally Hilbertian metric measure space
satisfying Assumption~\ref{ass:Ch-reg}. If \eqref{eq:grad-est} holds and $\wug{f}\in
L^\infty(X,m)$ then $\bH_t f$, $\bH_t ( \wug{f}^2 )$ and $\Delta
\bH_t f$ have continuous representatives satisfying everywhere in
$X$:
\begin{align}\label{eq:grad-est-ref}
\abs{\nabla \bH_t f}^2 + \frac{4Kt^2}{N\big(e^{2Kt}-1\big)}\abs{\Delta
\bH_t f}^2~\leq~e^{-2Kt} \bH_t\big(\wug{f}^2\big)\;.
\end{align}
\end{proposition}
\begin{remark}\label{rem:regularity}
Under $\rcd (K',\infty)$, Assumption~\ref{ass:Ch-reg} is always
satisfied (see \cite{AGMR12,AGS11b,AGS12}). Moreover, with the aid
of Theorem~\ref{thm:gf-identification}, the other assumption in
Theorem~\ref{thm:grad-est} easily yields \eqref{eq:W2-contraction}
in this case. Conversely, the assumptions in
Theorem~\ref{thm:grad-est} imply $\rcd (K,\infty)$. Indeed, by
Proposition~\ref{prop:simp-control}, \eqref{eq:W2-contraction}
yields the $W_2$-contraction estimate, which corresponds to
\eqref{eq:WC0}. Under Assumption~\ref{ass:Ch-reg}, such an estimate
yields Bakry--\'Emery's $L^2$-gradient estimate (see
\cite[Cor.~3.18]{AGS12}, \cite[Thm.~2.2]{Kuw10}). Then $\rcd
(K,\infty)$ follows from \cite[Thm~4.18]{AGS12} under
Assumption~\ref{ass:Ch-reg} again.
Note that $\rcd (K',\infty)$ ensures some regularization property of
$\bH_t$. For instance, $\bH_t f (x) = \int_X f \, \mathrm{d} \mathscr{H}_t
\delta_x$ holds $m$-a.e. for every $f \in L^2 (X,m)$. Moreover,
this representative of $\bH_t f$ satisfies the strong Feller
property, that is, $x \mapsto \int_X f \, \mathrm{d} \mathscr{H}_t \delta_x$ is
bounded and continuous for any bounded measurable $f$ (see
\cite[Thm.~6.1]{AGS11b}, \cite[Thm.~7.1]{AGMR12}).
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:grad-est}]
For simplicity of presentation, we give a proof when $(X,d)$ is a
geodesic space. One can easily extend the argument to the length
space case. We first consider the case that $f$ is bounded and
Lipschitz with bounded support. Let us denote $\tilde{\bH}_t f (x) :
= \int_X f \,\mathrm{d} \mathscr{H}_t \delta_x$, which is a representative of
$\bH_t f$, see Remark \ref{rem:regularity}. For $x , y \in X$, $x
\neq y$ and $t , s \ge 0$ and any coupling $\pi_{s,t}$ of $\mathscr{H}_{s} (
\d_{x} )$ and $\mathscr{H}_{t} ( \d_{y} )$, we have
\begin{equation} \label{eq:couple} \tilde{\bH}_{s} f (x) -
\tilde{\bH}_{t} f (y) \le \int_{X \times X} \abs{ f (z) - f (w) }
\pi_{s,t} (\mathrm{d} z \mathrm{d} w )\; .
\end{equation}
Since $| f (z) - f (w) | \le \Lip (f) d (x,y)$, \eqref{eq:couple}
and \eqref{eq:W2-contraction} yield
\begin{align*}
\skn{ \frac{1}{2 \Lip(f)} ( \tilde{\bH}_s f (x) - \tilde{\bH}_t f
(y) ) }^2 & \le \skn{ \frac{1}{2} W_1 ( \mathscr{H}_s ( \d_x ) , \mathscr{H}_t (
\d_y ) ) }^2
\\
& \le \skn{ \frac{1}{2} W_2 ( \mathscr{H}_{s} ( \d_x ) , \mathscr{H}_{t} ( \d_y ) )
}^2
\\
& \hspace{-12em} \le \mathrm{e}^{-K (s+t)} \skn{\frac12 d ( x , y )}^2 +
\frac{N ( 1 - \mathrm{e}^{-K(s+t)} )}{2 K ( s+t )} ( \sqrt{t} - \sqrt{s}
)^2 \;.
\end{align*}
This implies that the map $(u,z) \mapsto \tilde{\bH}_u f (z)$ is
locally Lipschitz on $( 0 , 1 )\times X$ and hence $u \mapsto
\tilde{\bH}_u f (z)$ is differentiable $\mathcal{L}^1$-a.e.
for each fixed $z \in X$,
where $\mathcal{L}^1$ is the one-dimensional Lebesgue measure.
The first step is to show the following inequality:
\begin{align} \label{eq:grad-est0} \abs{\nabla \tilde{\bH}_t f}
(x)^2 + \frac{4Kt^2}{N\big( e^{2Kt}-1 \big)} \left(
\frac{\partial}{\partial t} \tilde{\bH}_t f (x) \right)^2 \leq
\mathrm{e}^{-2Kt} \tilde{\bH}_t ( \abs{ \nabla f }^2 ) (x)
\end{align}
for each $x \in X$ and $t > 0$ such that $u \mapsto \bH_u f (x)$ is
differentiable at $t$.
Let $y \in X$ and $s \ge 0$, and
let us define $r = r (x,y; s,t) > 0$ and $G_r f : X \to {\mathbb R}$ by
\begin{align*}
r & : =
\begin{cases}
W_2 ( \mathscr{H}_s ( \d_x ) , \mathscr{H}_t ( \d_y ) )^{1/2} & \mbox{if $W_2 (
\mathscr{H}_s ( \d_x ) , \mathscr{H}_t ( \d_y ) ) > 0$},
\\
d (x,y) & \mbox{otherwise}.
\end{cases}
\\
G_r f (z) & : = \sup_{z' ; \; d(z, z') \in (0,r)} \frac{ \abs{f(z)
- f (z')} }{d (z,z')}\;.
\end{align*}
Then by taking a coupling $\pi_{s,t}$ as a minimizer of $W_2 ( \mathscr{H}_s
( \d_x ) , \mathscr{H}_t ( \d_y ) )$ in \eqref{eq:couple},
\begin{align} \nonumber \int_{X \times X} & \abs{ f (z) - f (w) }
\pi_{s,t} ( \mathrm{d}z \mathrm{d} w ) \\ \nonumber & = \int_{X
\times X} \abs{ f (z) - f (w) } 1_{ \{ d(z,w) \le r \} }
\pi_{s,t} ( \mathrm{d}z \mathrm{d} w ) \\ \nonumber & \quad +
\int_{X \times X} \abs{ f (z) - f (w) } 1_{ \{ d(z,w) > r \} }
\pi_{s,t} ( \mathrm{d}z \mathrm{d} w ) \\ \nonumber & \le \int_{X
\times X} G_r f (z) d( z, w ) \pi_{s,t} ( \mathrm{d}z \mathrm{d}
w ) + 2 \| f \|_\infty \pi_{s,t} ( d > r ) \\ \nonumber & \le
\left( \int_{X} ( G_r f )^2 d \mathscr{H}_s (\d_x) \right)^{1/2} W_2 (
\mathscr{H}_{s} (\d_x) , \mathscr{H}_t (\d_{y}) ) \\ \label{eq:difference} &
\hspace{6em} + \frac{ 2 \| f \|_\infty }{r^2} W_2 ( \mathscr{H}_{s}
(\d_{x}) , \mathscr{H}_{t} (\d_{y} ) )^2\; .
\end{align}
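In the last two estimates, the first term is handled by the Cauchy--Schwarz inequality together with the optimality of $\pi_{s,t}$, while the second uses the Chebyshev-type bound
\begin{align*}
\pi_{s,t} ( d > r )~\leq~\frac{1}{r^2}\int_{X\times X} d(z,w)^2\, \pi_{s,t} ( \mathrm{d} z \mathrm{d} w )~=~\frac{1}{r^2}\, W_2 \big( \mathscr{H}_{s} (\d_{x}) , \mathscr{H}_{t} (\d_{y}) \big)^2\;.
\end{align*}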
After substituting \eqref{eq:difference} into \eqref{eq:couple}, we
apply \eqref{eq:W2-contraction} with $\mu = \d_y$ and $\nu
= \d_x$ to obtain
\begin{align} \nonumber \tilde{\bH}_{s} & f (x) - \tilde{\bH}_{t} f
(y) \\ \nonumber & \le \tilde{\bH}_{s} ( ( G_r f )^2 )(x)^{1/2} \\
\nonumber & \quad \times 2 s_{K/N}^{-1} \Bigg( \sqrt{ \mathrm{e}^{-K
(s+t)} s_{K/N} \left( \frac12 d(x,y) \right)^2 + \frac{ N ( 1
- \mathrm{e}^{- K (s+t)} )}{2 K ( s + t ) } ( \sqrt{t} - \sqrt{s} )^2
} \Bigg)
\\
& \qquad \label{eq:diff2} + 2 \| f \|_{\infty} W_2 ( \mathscr{H}_s (\d_x)
, \mathscr{H}_t (\d_y) )
\end{align}
by using our choice of $r$.
Since both sides of the inequality \eqref{eq:grad-est0} are quadratic
under scalar multiplication of $f$ (in particular invariant under
replacing $f$ by $-f$), we may assume without loss of generality that
\begin{align*}
| \nabla \tilde\bH_t f |(x) = \limsup_{y \to x} \frac{ [ \tilde\bH_tf (x) - \tilde\bH_tf (y) ]_+ }{
d ( x, y ) }\;.
\end{align*}
Take a sequence $( y_n )_{n \in {\mathbb N}}$ in $X$ such that $\displaystyle
\lim_{n \to \infty} \frac{ \tilde\bH_tf (x) - \tilde\bH_tf (y_n) }{ d(x, y_n ) } = |
\nabla \tilde\bH_tf | (x) $ holds. Take $\a \in {\mathbb R} \setminus \{ 0 \}$, which
will be specified later. For each $n \in {\mathbb N}$, let us take $s_n = t
+ \a d ( x, y_n )$ and $r_n = r ( x, y_n ; s_n , t )$. Then we have
\begin{align*}
\lim_{n \to \infty} \frac{ \tilde{\bH}_{s_n} f (x) - \tilde{\bH}_t
f (y_n) }{ d ( x, y_n ) } & = \lim_{n \to \infty} \left( \a
\frac{ \tilde{\bH}_{s_n} f (x) - \tilde{\bH}_t f (x) }{ s_n - t
} + \frac{ \tilde{\bH}_{t} f (x) - \tilde{\bH}_t f (y_n) }{ d (
x, y_n ) } \right)
\\
& = \a \frac{\partial}{\partial t} \tilde{\bH}_t f (x) + | \nabla
\tilde{\bH}_t f | (x)\;.
\end{align*}
Take $\varepsilon > 0$ arbitrary. Since $G_r f$ is non-decreasing in $r$,
by substituting $s = s_n$, $y = y_n$ into \eqref{eq:diff2}, dividing
both sides by $d ( x , y_n )$ and letting $n \to \infty$, we obtain
\begin{align} \nonumber \a \frac{\partial}{\partial t} \tilde{\bH}_t
f (x) + | \nabla \tilde{\bH}_t f | (x) & \le \tilde{\bH}_{t} ( |
G_\varepsilon f |^2 ) (x)^{1/2} \\ \nonumber & \qquad \times \sqrt{ \mathrm{e}^{-
2 K t} + \a^2 \frac{ N ( 1 - \mathrm{e}^{- 2 K t} )}{4 K t^2} } \;.
\end{align}
Here we used the fact that $\tilde{\bH}_{u} ( | G_\varepsilon f |^2 )$ is
continuous in $u$ (see Remark~\ref{rem:regularity}).
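For the reader's convenience, let us record the asymptotics behind
this limit: since $s_{K/N} ( r ) \sim r$ and $s_{K/N}^{-1} ( r )
\sim r$ as $r \to 0$, and since $s_n - t = \a d ( x, y_n )$ implies
\begin{align*}
\frac{ ( \sqrt{t} - \sqrt{s_n} )^2 }{ d ( x, y_n )^2 } ~\to~ \frac{\a^2}{4t}\;,
\end{align*}
dividing the square-root factor in \eqref{eq:diff2} by $d ( x , y_n
)$ and letting $n \to \infty$ yields precisely
\begin{align*}
\sqrt{ \mathrm{e}^{-2Kt} + \a^2 \, \frac{ N ( 1 - \mathrm{e}^{-2Kt} ) }{ 4Kt^2 } }\;.
\end{align*}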
Let $v_\a$ be a unit vector in ${\mathbb R}^2$ of the
form $\lambda ( 1, \a \sqrt{ N ( \mathrm{e}^{2Kt} - 1 ) / (4Kt^2)} )$ with
$\lambda > 0$. Then, letting $\varepsilon
\downarrow 0$ in the last inequality and rewriting, we obtain
\begin{align*}
v_\a \cdot \left( | \nabla \tilde{\bH}_t f | (x) , \sqrt{ \frac{4 K
t^2}{ N ( \mathrm{e}^{2Kt} - 1 )}} \frac{\partial}{\partial t}
\tilde{\bH}_t f (x) \right) \le \mathrm{e}^{-Kt} \tilde{\bH}_{t} ( |
\nabla f |^2 ) (x)^{1/2} \;.
\end{align*}
By optimizing this inequality in $\a$, we obtain
\eqref{eq:grad-est0}.
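Let us spell out this optimization: as $\a$ ranges over ${\mathbb R}
\setminus \{ 0 \}$, the vectors $v_\a$ exhaust all unit vectors with
positive first component, and the first component of the vector
paired with $v_\a$ is non-negative. Hence taking the supremum over
$\a$ in the last inequality yields
\begin{align*}
| \nabla \tilde{\bH}_t f | (x)^2 + \frac{4Kt^2}{N ( \mathrm{e}^{2Kt} - 1 )}
\Big| \frac{\partial}{\partial t} \tilde{\bH}_t f (x) \Big|^2
~\le~ \mathrm{e}^{-2Kt} \tilde{\bH}_{t} ( | \nabla f |^2 ) (x)\;.
\end{align*}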
The second step is to show the following for any bounded and
Lipschitz $f \in D ( \ch )$: For each $t > 0$ and $m$-a.e.~$x \in
X$,
\begin{align}\label{eq:grad-est1}
\abs{ \nabla \tilde{\bH}_t f } (x)^2 +
\frac{4Kt^2}{N\big(\mathrm{e}^{2Kt}-1\big)}\abs{ \Delta \bH_{t} f (x)}^2
\leq \mathrm{e}^{-2Kt} \tilde{\bH}_t \big( \abs{ \nabla f }^2 \big) (x)
\;.
\end{align}
For each $x \in X$, we already know that $t \mapsto \tilde{\bH}_t f
(x)$ is differentiable for $\mathcal{L}^1$-a.e.~$t \in [ 0 , \infty )$.
Thus the Fubini theorem yields that the set $I \subset ( 0 , \infty
)$ given by
\begin{equation*}
I :=
\left\{
t \in ( 0 , \infty)
\; \left| \;
\mbox{$t \mapsto \tilde{\bH}_t f (x)$
is differentiable
for $m$-a.e.~$x \in X$}
\right.
\right\}
\end{equation*}
is of full $\mathcal{L}^1$-measure. Take $t \in I$. Then we have
$\displaystyle \frac{\partial}{\partial t} \tilde{\bH}_t f (x) =
\Delta \bH_t f (x) $ $m$-a.e.~and hence \eqref{eq:grad-est0} yields
\eqref{eq:grad-est1}. Thus it suffices to show $I = ( 0 , \infty )$
to prove \eqref{eq:grad-est1}. Indeed, for any $t \in ( 0, \infty
)$, there is $s \in I$ with $s < t$. Since $( u , z ) \mapsto
\tilde{\bH}_u f (z)$ is locally Lipschitz, the dominated convergence
theorem implies
\begin{align*}
\tilde{\bH}_{t - s} \big( \frac{\partial}{\partial s}
\tilde{\bH}_{s} f \big) (x) & = \tilde{\bH}_{t-s} \left( \lim_{u
\to 0} \frac{\tilde{\bH}_{s + u } f - \tilde{\bH}_{s} f }{u}
\right) (x) = \frac{\partial}{\partial t} \tilde{\bH}_t f (x)
\end{align*}
and hence $u \mapsto \tilde{\bH}_u f (x)$ is differentiable at $t$
for any $x \in X$.
Finally we prove the assertion for $f \in D ( \ch )$. Let $( f_n )_{n
\in {\mathbb N}}$ be a sequence of bounded Lipschitz functions in $D ( \ch )$
converging to $f$ strongly in $W^{1,2}$ and such that $| \nabla f_n |
\to \wug{f}$ in $L^2$. Then $\Delta \bH_t f_n \to \Delta \bH_t f$
in $L^2$ and hence the conclusion follows
(cf.~\cite[Thm.~6.2]{AGS11b}).
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:grad-est-ref}]
Note first that \eqref{eq:grad-est} implies $\rcd (K,\infty)$ as in
Remark~\ref{rem:regularity}. Take $\mu_0, \mu_1 \in \mathscr{P}_2 (X,d,m)$
with bounded densities and bounded supports, and let $\pi$ be a dynamic
optimal coupling satisfying $( e_i )_\# \pi = \mu_i$ for $i=0,1$.
Note that $(e_t)_\# \pi \ll m$ holds since $\rcd (K,\infty)$ holds.
Let $f_n \in D (\ch)$ be an approximating sequence of $f$ as
above. We may assume that $( | \nabla f_n | )_{n \in {\mathbb N}}$ is
uniformly bounded without loss of generality since $\wug{f} \in
L^\infty (X,m)$. Then $( \Delta \bH_t f_n )_{n \in {\mathbb N}}$ is
uniformly bounded in $L^\infty (X,m)$ by \eqref{eq:grad-est1}. We
may also assume that $\bH_t ( | \nabla f_n |^2 )$ and $\Delta \bH_t
f_n$ converge $m$-a.e.~by taking a subsequence if necessary. We
apply \eqref{eq:grad-est1} to $f_n$ to obtain
\begin{align*}
& \left|
\int_X \bH_t f_n \,\mathrm{d} \mu_1
-
\int_X \bH_t f_n \,\mathrm{d} \mu_0
\right|
\le
\int_{\geo (X)} \int_0^1
| \nabla \tilde{\bH}_t f_n | ( \gamma_s ) | \dot{\gamma}_s |
\,\mathrm{d} s \,
\pi ( \mathrm{d} \gamma )
\\
& \hspace{2em} \le
W_2 ( \mu_0 , \mu_1 )
\left(
\int_0^1 \int_{\geo(X)}
\left(
\mathrm{e}^{-2Kt} \bH_t ( | \nabla f_n |^2 ) ( \gamma_s )
-
\frac{4Kt^2}{N ( \mathrm{e}^{2Kt} - 1 )} | \Delta \bH_t f_n ( \gamma_s ) |^2
\right)
\, \pi ( \mathrm{d} \gamma )
\,\mathrm{d} s
\right)^{1/2} .
\end{align*}
Then, as $n \to \infty$, the dominated convergence theorem yields
\begin{multline*}
\left|
\int_X \bH_t f \,\mathrm{d} \mu_1
-
\int_X \bH_t f \,\mathrm{d} \mu_0
\right|
\\
\le
W_2 ( \mu_0 , \mu_1 )
\left(
\int_0^1 \int_{\geo(X)}
\left(
\mathrm{e}^{-2Kt} \bH_t ( \wug{f}^2 ) ( \gamma_s )
-
\frac{4Kt^2}{N ( \mathrm{e}^{2Kt} - 1 )} | \Delta \bH_t f ( \gamma_s ) |^2
\right)
\, \pi ( \mathrm{d} \gamma )
\,\mathrm{d} s
\right)^{1/2} .
\end{multline*}
By the strong Feller property, $\bH_t ( \wug{f}^2 )$ has a continuous representative.
Since $\Delta \bH_{t/2} f \in L^\infty (X,m)$ by \eqref{eq:grad-est} with $t/2$ instead of $t$,
the strong Feller property again implies that $\Delta \bH_t f = \bH_{t/2} \Delta \bH_{t/2} f$
has a continuous representative.
Thus by taking $\mu_0$ and $\mu_1$ to be the uniform distributions on $B_r (x_0)$ and $B_r (x_1)$
respectively and letting $r \to 0$, we obtain
\begin{multline*}
\left| \bH_t f (x_0) - \bH_t f (x_1) \right|
\\
\le
d ( x_0 , x_1 )
\sup_{z \in B_{2 d(x_0 , x_1 )} (x_0)}
\left[
\mathrm{e}^{-2Kt} \bH_t ( \wug{f}^2 ) (z)
-
\frac{4Kt^2}{N ( \mathrm{e}^{2Kt} - 1 )} | \Delta \bH_t f (z) |^2
\right]^{1/2}
\end{multline*}
for $m$-a.e.~$x_0, x_1$.
Thus $\bH_t f$ has a Lipschitz representative and \eqref{eq:grad-est-ref} holds.
\end{proof}
\begin{definition}\label{def:BL}
We say that $(X,d,m)$ satisfies the \emph{Bakry--Ledoux gradient
estimate} $\bl(K,N)$ with $K\in{\mathbb R}$, $N\in(0,\infty)$ if for any $f\in D(\ch)$
and $t > 0$
\begin{align}\label{eq:bl}
\wug{\bH_t f}^2
+ \frac{2t}{N}C(t)\abs{\Delta \bH_t f}^2
~\leq~
\mathrm{e}^{-2Kt}\bH_t\big(\wug{f}^2\big)\quad m\text{-a.e. in }X\;,
\end{align}
where $C>0$ is a function satisfying $C(t)=1+O(t)$ as $t\to0$.
\end{definition}
Now Theorem~\ref{thm:grad-est} can be reformulated as follows: For an
infinitesimally Hilbertian metric measure space, the $W_2$-expansion
bound \eqref{eq:W2-contraction} implies the $\bl(K,N)$ condition under
Assumption~\ref{ass:Ch-reg}. Indeed, \eqref{eq:grad-est} states that
\eqref{eq:bl} holds with $C(t)=2K t/(\mathrm{e}^{2Kt}-1)$. The Bakry--Ledoux
gradient estimate $\bl (K,N)$ will allow us to establish the full
Bochner inequality including the dimension term in ${\rcd^*(K,N)}$
spaces. This extends the result in \cite{AGS11b}, where a Bochner
inequality without dimension term has been established on
$\rcd(K,\infty)$ spaces. Let us also make precise what we mean by
Bochner's inequality, or the Bakry--\'Emery condition.
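Note in this connection that the function $C(t)=2K t/(\mathrm{e}^{2Kt}-1)$
appearing above indeed satisfies the required asymptotics: expanding
the exponential,
\begin{align*}
\frac{2Kt}{\mathrm{e}^{2Kt}-1} ~=~ \frac{1}{1 + Kt + O(t^2)} ~=~ 1 - Kt + O(t^2)
\qquad \text{as } t \to 0\;.
\end{align*}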
\begin{definition}\label{def:BE}
We say that an infinitesimally Hilbertian metric measure space
$(X,d,m)$ satisfies the \emph{Bakry--\'Emery condition} $\be(K,N)$,
or \emph{Bochner inequality}, with $K\in{\mathbb R}$, $N\in(0,\infty)$ if for
all $f\in D(\Delta)$ with $\Delta f\in W^{1,2}(X,d,m)$ and all $g\in
D(\Delta) \cap L^\infty (X,m)$ with $g \ge 0$ and $\Delta g\in
L^\infty(X,m)$ we have
\begin{align}\label{eq:Bochner}
&\frac12\int\Delta g \wug{f}^2 \,\mathrm{d} m - \int g\langle\nabla(\Delta f),\nabla f\rangle \,\mathrm{d} m
~\geq~K\int g \wug{f}^2\mathrm{d} m + \frac{1}{N}\int g\big(\Delta f\big)^2\mathrm{d} m\;.
\end{align}
\end{definition}
To investigate the relation between Bochner's inequality and the
Bakry-Ledoux gradient estimate, we introduce a mollification of the
semigroup $h^\varepsilon$ given by
\begin{align}\label{eq:sg-moll}
h^\varepsilon
f~=~\int_0^\infty\frac{1}{\varepsilon}\eta\left(\frac{t}{\varepsilon}\right)\bH_tf\,\mathrm{d}
t\;,
\end{align}
for $f \in L^p (X,m)$, $1 \le p \le \infty$, where $\eta\in
C^\infty_c(0,\infty)$ is a non-negative kernel satisfying
$\int_0^\infty\eta(t)\,\mathrm{d} t=1$.
Note that $h^\varepsilon f \in D (\Delta)$ and
\begin{equation} \label{eq:sg-moll2}
\Delta h^\varepsilon f = - \int_0^\infty \frac{1}{\varepsilon^2} \eta' \left( \frac{t}{\varepsilon} \right)\bH_t f\,\mathrm{d} t
\end{equation}
for any $f \in L^p (X,m)$, $1 \le p < \infty$; this follows from
$\Delta \bH_t f = \frac{\partial}{\partial t} \bH_t f$ and an
integration by parts in $t$, where the boundary terms vanish since
$\eta$ is compactly supported in $(0,\infty)$.
\begin{theorem}[Bochner inequality]\label{thm:Bochner}
Let $(X,d,m)$ be an infinitesimally Hilbertian metric measure space
satisfying $\bl(K,N)$. Then the Bochner inequality $\be(K,N)$ holds.
\end{theorem}
\begin{proof}
In the language of Dirichlet forms, this is proven in
\cite[Cor.~2.3, (vi)$\Rightarrow$(i)]{AGS12}. We sketch here an
argument following basically the ideas developed in \cite{GKO10} in
the setting of Alexandrov spaces.
We will first prove \eqref{eq:Bochner} for $f\in D(\Delta)\cap
L^\infty (X, m)$ with $\Delta f\in D(\Delta)\cap L^\infty(X,m)$ and
for $g$ satisfying $\Delta g \in D (\ch)$ additionally. From
\eqref{eq:grad-est} we obtain immediately
\begin{align}\label{eq:Bochner1}
&\int g\wug{\bH_t f}^2\mathrm{d} m +
\frac{2t}{N}C(t)\int g \abs{\Delta \bH_t f}^2\mathrm{d}
m
\leq~e^{-2Kt}\int g\bH_t\big(\wug{f}^2\big)\mathrm{d} m\;.
\end{align}
This will yield \eqref{eq:Bochner} after subtracting $\int
g\wug{f}^2\,\mathrm{d} m$ from both sides, dividing by $t$ and taking the limit
$t\searrow0$. Indeed, for the left hand side of \eqref{eq:Bochner1},
we can argue exactly as in the proof of
\cite[Thm.~4.6]{GKO10}, using the Leibniz rule \eqref{eq:Leibniz},
and note in addition that
\begin{align*}
\lim\limits_{t\to0} \frac{2}{N}C(t)\int g \abs{\Delta \bH_t f}^2\mathrm{d}
m~=~\frac{2}{N}\int g\big(\Delta f\big)^2\mathrm{d} m\;.
\end{align*}
For the right hand side of \eqref{eq:Bochner1},
by a similar calculation, we obtain
\begin{multline} \label{eq:Bochner3}
\frac{1}{t}
\left(
\int g \bH_t \big( \wug{f}^2 \big) \mathrm{d} m
-
\int g \wug{f}^2 \, \mathrm{d} m
\right)
\\
=
- \frac{1}{t} \left(
\int \bH_t g f \Delta f \,\mathrm{d} m
-
\int g f \Delta f \,\mathrm{d} m
\right)
+ \frac{1}{2t} \left(
\int \Delta \bH_t g \cdot f^2 \,\mathrm{d} m
-
\int \Delta g \cdot f^2 \,\mathrm{d} m
\right)\;.
\end{multline}
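At least formally, using the chain rule $\frac12 \Delta ( f^2 ) = f
\Delta f + \wug{f}^2$, the right hand side converges as $t \to 0$ to
\begin{align*}
- \int \Delta g \cdot f \Delta f \,\mathrm{d} m + \frac12 \int \Delta ( f^2 ) \, \Delta g \,\mathrm{d} m
~=~ \int \Delta g \, \wug{f}^2 \,\mathrm{d} m\;.
\end{align*}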
Since $\Delta g, f^2 , f \Delta f \in D ( \ch )$, the right hand side
of \eqref{eq:Bochner3} converges to
$\int \Delta g \wug{f}^2 \,\mathrm{d} m$ as $t \to 0$ and thus we obtain
\eqref{eq:Bochner}. To obtain the estimate \eqref{eq:Bochner} for
general $f$, we approximate $f$ by $h^\varepsilon (f\wedge R)$ and $g$ by
$h^{\varepsilon'} g$. By \eqref{eq:sg-moll2}, these functions have the
expected regularity. First we take $\varepsilon' \to 0$. Since $\wug{f},
\wug{\Delta f} \in L^1 (X,m) \cap L^\infty (X,m)$ by virtue of
\eqref{eq:bl} and \eqref{eq:sg-moll2}, this limit causes no difficulty. Next we take
$R \to \infty$. Since $\lim_{R \to \infty} \ch ( f\wedge R - f ) =
0$ and $\ch ( f \wedge R ) \le \ch (f)$, we can show $\wug{ h^\varepsilon (
f \wedge R ) }^2 \to \wug{ h^\varepsilon f}^2$ weakly in $L^1 (X,m)$
similarly as in the proof of \cite[Thm.~4.6]{GKO10}. The same
argument also works for $\langle \nabla \Delta h^\varepsilon ( f \wedge R
), \nabla h^\varepsilon (f \wedge R ) \rangle$ with the aid of
\eqref{eq:sg-moll2}. Again \eqref{eq:sg-moll2} helps the
convergence of the term involving $N$. Finally we take $\varepsilon \to
0$; we can employ the approximation argument in
\cite[Thm.~4.6]{GKO10} once more to conclude the required
convergence in this limit. The additional dimension term poses
no difficulty at this point.
\end{proof}
Also the converse implication holds.
Originally, this was proven by
Bakry and Ledoux in \cite{BL06} in the setting of Gamma calculus. See
also the work of Wang \cite{Wa11}, where the equivalence of gradient
estimates and Bochner's inequality has been rediscovered in the
setting of smooth Riemannian manifolds. Note that the function $C$ in
the next proposition gives a stronger estimate than
\eqref{eq:grad-est} for large $t$.
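This comparison is elementary: for $K \neq 0$ and $t > 0$,
\begin{align*}
\frac{(1-\mathrm{e}^{-2Kt})/(2Kt)}{2Kt/(\mathrm{e}^{2Kt}-1)}
~=~ \frac{(1-\mathrm{e}^{-2Kt})(\mathrm{e}^{2Kt}-1)}{4K^2t^2}
~=~ \left( \frac{\sinh (Kt)}{Kt} \right)^2 ~\geq~ 1\;,
\end{align*}
with equality only in the limit $t \to 0$.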
\begin{proposition}\label{prop:Bochner2BEW}
Let $(X,d,m)$ be an infinitesimally Hilbertian metric measure space
satisfying the Bakry--\'Emery condition $\be(K,N)$.
Then the $\bl(K,N)$ condition holds with $C(t)=(1-\mathrm{e}^{-2Kt})/(2Kt)$.
\end{proposition}
\begin{proof}
In the language of Dirichlet forms, this is basically proven in
\cite[Cor.~2.3, (i)$\Rightarrow$(vi)]{AGS12}. Let us sketch the
argument.
As in the proof of Theorem~\ref{thm:Bochner}, we first assume $f \in
D ( \Delta ) \cap L^\infty (X,m)$ with $\Delta f \in D ( \Delta )
\cap L^\infty (X,m)$. Fix $g\geq0$ with $g\in D(\Delta)\cap
L^\infty(X,m)$ and $\Delta g\in L^\infty(X,m) \cap D (\ch)$ and
consider the function
\begin{align*}
h(s)~:=~\mathrm{e}^{-2Ks}\int H_{s}g\wug{H_{t-s}f}^2\,\mathrm{d} m\;.
\end{align*}
One estimates the derivative of $h$ as:
\begin{align*}
h'(s)~&=~-2K\mathrm{e}^{-2Ks}\int H_{s}g\wug{H_{t-s}f}^2\,\mathrm{d} m\\
&\quad +\mathrm{e}^{-2Ks}\int \Delta H_{s}g\wug{H_{t-s}f}^2\,\mathrm{d} m\\
&\quad -2\mathrm{e}^{-2Ks}\int H_{s}g \langle\nabla H_{t-s}f,\nabla \Delta H_{t-s}f\rangle\,\mathrm{d} m\\
&\geq~\frac{2}{N}\mathrm{e}^{-2Ks}\int H_{s}g\big(\Delta H_{t-s}f\big)^2\,\mathrm{d} m\\
&\geq~\frac{2}{N}\mathrm{e}^{-2Ks}\int g\big(\Delta H_{t}f\big)^2\,\mathrm{d} m\;,
\end{align*}
where we have used \eqref{eq:Bochner} in the first and Jensen's
inequality in the second inequality. A computation similar to the
first equality in \eqref{eq:Bochner3} shows that $h$ is
continuous at $0$ and $t$ since $g,f \in L^\infty$. Thus,
integrating from $0$ to $t$ we obtain:
\begin{align*}
\int g\wug{ H_{t}f}^2\,\mathrm{d} m + \frac{1-\mathrm{e}^{-2Kt}}{NK}\int g\big(\Delta H_{t}f\big)^2\,\mathrm{d} m
&\leq \mathrm{e}^{-2Kt}\int H_{t} g\wug{f}^2\,\mathrm{d} m\;.
\end{align*}
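Here we used
\begin{align*}
h(t) - h(0) ~\geq~ \frac{2}{N} \int_0^t \mathrm{e}^{-2Ks} \,\mathrm{d} s
\int g\big(\Delta H_{t}f\big)^2\,\mathrm{d} m
~=~ \frac{1-\mathrm{e}^{-2Kt}}{NK}\int g\big(\Delta H_{t}f\big)^2\,\mathrm{d} m
\end{align*}
together with $h(0) = \int g\wug{H_{t}f}^2\,\mathrm{d} m$ and
$h(t) = \mathrm{e}^{-2Kt}\int H_{t}g\wug{f}^2\,\mathrm{d} m$.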
For the general case,
we approximate $f \in D (\ch)$ and $g \in L^2 (X,m) \cap L^\infty (X,m)$
by $h^\varepsilon ( f \wedge R )$ and $h^{\varepsilon'} g$ respectively.
As we did in the proof of Theorem~\ref{thm:Bochner},
we can take $R \to \infty$, $\varepsilon \to 0$ to obtain the last inequality
for $f$ and $h^{\varepsilon'} g$.
Since $h^{\varepsilon'} g$ converges to $g$
with respect to the weak${}^*$ topology of $L^\infty (X,m)$ as $\varepsilon' \to 0$,
the last inequality holds for general $f$ and $g$.
This is sufficient to complete the proof.
\end{proof}
\subsection{From $\bl(K,N)$ to ${\cd^e(K,N)}$}\label{sec:bochner2cde}
Throughout this section, we will always assume that $(X,d,m)$ is an
infinitesimally Hilbertian metric measure space and that
Assumption~\ref{ass:Ch-reg} holds. We will show that the Bakry--Ledoux
gradient estimate $\bl(K,N)$ implies the entropic curvature-dimension
condition ${\cd^e(K,N)}$ and thus the ${\rcd^*(K,N)}$ condition.
Our approach is strongly inspired by the recent work \cite{AGS12} of
Ambrosio, Gigli and Savar\'e. We follow their presentation and adopt
to a large extent their notation. Under Assumption~\ref{ass:Ch-reg} we
can rely on the results in \cite{AGS12}, since the condition
$\bl(K,N)$ is more restrictive than the classical Bakry--\'Emery
gradient estimate $\bl(K,\infty)$. In particular, we already know that
the Riemannian curvature condition $\rcd(K,\infty)$ holds true,
cf.~Remark~\ref{rem:regularity} and \cite[Cor.~4.18]{AGS12}. Moreover,
we also know that the semigroup $H_t$ coincides with the gradient flow
$\mathscr{H}_t$ of the entropy in $\mathscr{P}_2(X,d)$ in the sense of
Theorem~\ref{thm:gf-identification}.
The crucial ingredient in our argument is the action estimate
Proposition~\ref{prop:action-est}. This result calls for an extensive
regularization procedure that was already used in \cite{AGS12}, both
for curves in $\mathscr{P}_2(X)$ and for the entropy functional, which we will
discuss below. The main difference of our approach compared to
\cite{AGS12} is that our argument now relies on the analysis of the
(nonlinear) gradient flow $(\nu_t)_{t\ge0}$ for the functional $-U_N$
instead of the analysis of the (linear) heat flow which is the
gradient flow $(\mu_t)_{t\ge0}$ for $\ent$. Both flows are related to
each other via time change:
\begin{align*}
\nu_t=\mu_{\tau_t},\qquad \partial_t \tau_t=U_N(\mu_{\tau_t})\;.
\end{align*}
More precisely, the following lemma yields that this time change is
well-defined.
\begin{lemma}\label{lem:welldef-tau}
Let $\rho\in D(\ent)\subset \mathscr{P}_2(X,d,m)$. Then there exist
constants $a,c>0$ depending only on $\abs{\ent(\rho)}$ and the
second moment of $\rho$ such that a map $\tau:[0,a]\to[0,\infty)$
can be defined implicitly by
\begin{align}\label{eq:deftau}
\int_0^{\tau_{t}}\exp\left(\frac1N\ent(\mathscr{H}_r\rho)\right)\mathrm{d} r~=~t
\end{align}
and for any $t\in[0,a]$ we have $\tau_t\leq c t$. Moreover, we have
\begin{align}\label{eq:diff-tau}
\frac{\mathrm{d}}{\mathrm{d} t} \tau_t~=~U_N(\mathscr{H}_{\tau_t}\rho)\;.
\end{align}
\end{lemma}
\begin{proof}
We first derive a lower bound on $\ent(\mathscr{H}_r\rho)$. Let us set
$V(x)=d(x_0,x)$ for some $x_0\in X$. By \eqref{eq:exp-int} we have
that $z=\int \mathrm{e}^{-V^2}\mathrm{d} m<\infty$ and $\tilde m=z^{-1}\mathrm{e}^{-V^2}m$
is a probability measure. Now \cite[Thm.~4.20]{AGS11a} (together
with a trivial truncation argument) yields that
\begin{align*}
\int V^2\mathrm{d} (\mathscr{H}_r\rho)~\leq~\mathrm{e}^{4r}\Big(\ent(\rho)+2\int V^2\mathrm{d}\rho\Big)~=:~\mathrm{e}^{4r}c'\;.
\end{align*}
Hence we obtain
\begin{align*}
\ent(\rho)~\geq~\ent(\mathscr{H}_r\rho)~=~\ent(\mathscr{H}_r\rho|\tilde m) - \int V^2\mathrm{d} (\mathscr{H}_r\rho) - \log z~\geq~-\mathrm{e}^{4r}c' - \log z\;.
\end{align*}
Now fix some $R>0$ and put $a=z^{-1/N}\int_0^R\exp(-\mathrm{e}^{4r}c'/N)\mathrm{d}
r$, $c=z^{1/N} \exp(\mathrm{e}^{4R}c'/N)$. Then define the function
$F:[0,R]\to[0,F(R)]$ via
$F(u)=\int_0^u\exp\big(\ent(\mathscr{H}_r\rho)/N\big)\mathrm{d} r$. Since $F$ is
strictly increasing with $F(0)=0$ and $F(R)\geq a$ by the preceding
estimate we can define $\tau_t=F^{-1}(t)$ for any
$t\in[0,a]$. Moreover, we have $F(u)\geq c^{-1}u$ for any $u\leq R$
which implies $\tau_t\leq ct$. Finally \eqref{eq:diff-tau} follows
immediately from the differentiability of $F$.
\end{proof}
More generally, given a continuous curve $(\rho_s)_{s\in[0,1]}$ in
$\mathscr{P}_2(X,d,m)$ such that $\max_s\abs{\ent(\rho_s)}<\infty$ we define a
time change $\tau_{s,t}$ implicitly via
\begin{align}\label{eq:tau-def2}
\int_0^{\tau_{s,t}}\exp\left(\frac1N\ent(\mathscr{H}_r\rho_s)\right)\mathrm{d} r~=~st
\end{align}
for $s\in[0,1]$ and $t\in[0,a]$ satisfying
\begin{align}\label{eq:bd-tau}
\tau_{s,t}~\leq~c\cdot st
\end{align}
for suitable constants $a,c>0$ depending only on a uniform bound on
the entropy and second moments of $( \rho_s )_{s \in [0,1]}$ and moreover
\begin{align}\label{eq:diff-tau2}
\partial_t \tau_{s,t}~=~s\cdot U_N(\mathscr{H}_{\tau_{s,t}}\rho_s)\;.
\end{align}
We will now describe the regularization procedure needed in the
sequel. We will use the notion of \emph{regular curve} as introduced
in \cite[Def.~4.10]{AGS12}. Briefly, a curve $(\rho_s)_{s\in[0,1]}$
with $\rho_s=f_s m$ is called regular if the following are satisfied:
\begin{itemize}
\item $(\rho_s)$ is $2$-absolutely continuous in $\mathscr{P}_2(X,d)$,
\item $\ent(\rho_s)$ and $I(\bH_tf_s)$ are bounded for $s\in[0,1], t\in[0,T]$,
\item $f\in C^1\big([0,1],L^1(X,m)\big)$ and $\Delta^{(1)}f\in C\big([0,1],L^1(X,m)\big)$,
\item $f_s=h^\varepsilon\tilde f_s$ for some $\tilde f_s\in L^1(X,m)$ and $\varepsilon>0$.
\end{itemize}
Here $I(f)=4\ch(\sqrt{f})$ denotes the Fisher information,
$\Delta^{(1)}$ denotes the generator of the semigroup $H_t$ in
$L^1(X,m)$ and $h^\varepsilon$ is the mollification of the semigroup given in
\eqref{eq:sg-moll}. In the sequel we will denote by $\dot f_s$ the
derivative of $[0,1]\ni s\mapsto f_s\in L^1(X,m)$. We will mostly
denote both the generator in $L^1$ and in $L^2$ by $\Delta$. In the
following we will need an approximation result which is a
reinforcement of \cite[Prop.~4.11]{AGS12}.
\begin{lemma}[Approximation by regular curves]\label{lem:reg-curves}
Let $(\rho_s)_{s\in[0,1]}$ be an $AC^2$-curve in $\mathscr{P}_2(X,d,m)$ such
that $s\mapsto\ent(\rho_s)$ is bounded and continuous. Then there
exists a sequence of regular curves $(\rho_s^n)$ with the following
properties. As $n\to\infty$ we have for any $s\in[0,1]$:
\begin{align}\label{eq:reg-curves1}
W_2(\rho^n_s,\rho_s)~&\to~0\;,\\\label{eq:reg-curves1a}
\limsup \abs{\dot\rho^n_s}~&\leq~\abs{\dot\rho_s}\quad\text{a.e. in }[0,1]\;,\\\label{eq:reg-curves2}
\ent(\mathscr{H}_r\rho^n_s)~&\to~\ent(\mathscr{H}_r\rho_s)\quad\forall r>0\;,\\\label{eq:reg-curves3}
\tau^n_{s,t}~&\to~\tau_{s,t}\;,
\end{align}
where $\tau^n$ and $\tau$ denote the time changes defined via the
curves $(\rho^n_s)$ and $(\rho_s)$ respectively on $[0,1]\times[0,a]$
for suitable $a>0$. Moreover, for any $\delta>0$ there are $n_0,r_0>0$
such that for any $n> n_0$ and $r<r_0$ and all $s\in[0,1]$ we have:
\begin{align}\label{eq:reg-curves4}
\abs{\ent(\rho_s)-\ent(\mathscr{H}_r\rho^n_s)}~<~\delta\;.
\end{align}
\end{lemma}
\begin{proof}
Following \cite[Prop.~4.11]{AGS12} we employ a threefold
regularization procedure. We trivially extend $(\rho_s)_s$ to ${\mathbb R}$
with value $\rho_0$ in $(-\infty,0)$ and $\rho_1$ in
$(1,\infty)$. Given $n$, we first define
$\rho^{n,1}_s=\mathscr{H}_{1/n}\rho_s=f^{n,1}_sm$. The second step
consists in a convolution in the parameter $s$. We set
\begin{align*}
\rho^{n,2}_s=f^{n,2}_s m\;,\qquad f^{n,2}_s~=~\int_{\mathbb R} f^{n,1}_{s-s'}\psi_{n}(s')\,\mathrm{d} s'\;,
\end{align*}
where $\psi_n(s)=n\cdot\psi(n s)$ for some smooth kernel
$\psi:{\mathbb R}\to{\mathbb R}_+$ with $\int\psi(s)\mathrm{d} s=1$. Finally, we set
\begin{align*}
\rho^{n}_s~=~f^n_sm\;,\qquad f^n_s~=~h^{1/n} f^{n,2}_s\;,
\end{align*}
where $h^\varepsilon$ denotes a mollification of the semigroup given by
\eqref{eq:sg-moll}. It has been proven in \cite[Prop.~4.11]{AGS12}
that $(\rho^n_s)_{s\in[0,1]}$ constructed in this way is a regular
curve and that \eqref{eq:reg-curves1} holds. \eqref{eq:reg-curves1a}
follows from the convexity properties of $W_2^2$ and the
$K$-contractivity of the heat flow. Let us now prove
\eqref{eq:reg-curves2}. Note that on the level of measures the
semigroup commutes with the regularization, i.e. $\mathscr{H}_r\rho^n_s =
\tilde\rho_s^n$, where $\tilde\rho_s:=\mathscr{H}_r\rho_s$. Thus it is
sufficient to prove \eqref{eq:reg-curves2} for $r=0$. By
\eqref{eq:reg-curves1} and lower semicontinuity of the entropy we
have $\ent(\rho_s)\leq\liminf_{n\to\infty}\ent(\rho_s^n)$. On the
other hand, using the convexity properties of the entropy and the
fact that $\mathscr{H}_r$ and thus also $h^{1/n}$ decreases the entropy we
estimate
\begin{align}\nonumber
\ent(\rho^n_s)~&\leq~\ent(\rho^{n,2}_s)~\leq~\int\psi_n(s')\ent(\mathscr{H}_{1/n}\rho_{s-s'})\mathrm{d} s'~\leq~\int\psi_n(s')\ent(\rho_{s-s'})\mathrm{d} s'\\\label{eq:contra0}
&\leq~\ent(\rho_s) + \int\psi_n(s')\abs{\ent(\rho_{s-s'})-\ent(\rho_s)}\mathrm{d} s'\;.
\end{align}
The last term vanishes as $n\to\infty$ since $s\mapsto\ent(\rho_s)$ is
uniformly continuous by compactness. Thus we obtain
$\limsup_{n\to\infty}\ent(\rho_s^n)\leq \ent(\rho_s)$ and hence
\eqref{eq:reg-curves2}. To prove \eqref{eq:reg-curves3} define the
functions
\begin{align*}
F_n(u)~&=~\int_0^u\exp\left(\frac1N\ent(\mathscr{H}_r\rho^n_s)\right)\mathrm{d} r\;, &
F(u)~&=~\int_0^u\exp\left(\frac1N\ent(\mathscr{H}_r\rho_s)\right)\mathrm{d} r\;.
\end{align*}
Arguing as in Lemma~\ref{lem:welldef-tau} we see that
$\tau^n_{s,t}=F^{-1}_n(st)$ and $\tau_{s,t}=F^{-1}(st)$ can be
defined simultaneously on $[0,1]\times[0,a]$ and satisfy
$\abs{F_n(u)-F_n(v)}\geq c^{-1} \abs{u-v}$ for suitable constants
$a,c>0$ independent of $n$. Since moreover $F_n\to F$ pointwise as
$n\to\infty$ by \eqref{eq:reg-curves2} and dominated convergence,
we conclude the convergence
\eqref{eq:reg-curves3}.
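The last implication is elementary: writing $u = F^{-1}(st)$, the
lower bound on the increments of $F_n$ gives
\begin{align*}
\abs{ \tau^n_{s,t} - \tau_{s,t} } ~=~ \abs{ F_n^{-1} (st) - F^{-1} (st) }
~\leq~ c \, \abs{ F_n (u) - st } ~=~ c \, \abs{ F_n (u) - F (u) } ~\to~ 0\;.
\end{align*}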
We now prove the last statement of the lemma. To conclude the proof
we proceed by contradiction. Assume the contrary, i.e. that there
exist $\delta>0$ and sequences $n_k\to\infty$, $r_k\to0$ and
$(s_k)\subset[0,1]$ such that
$\abs{\ent(\rho_{s_k})-\ent(\mathscr{H}_{r_k}\rho^{n_k}_{s_k})}~\geq~\delta$
for all $k$. Taking into account \eqref{eq:contra0} and the fact
that $\mathscr{H}_r$ decreases entropy we must have that for all $k$
sufficiently large
\begin{align}\label{eq:contra1}
\ent(\rho_{s_k})-\ent(\mathscr{H}_{r_k}\rho^{n_k}_{s_k})~\geq~\delta\;.
\end{align}
By compactness we can assume $s_k\to s_0$ as $k\to\infty$ for some
$s_0\in[0,1]$. We claim that as $k\to\infty$ we have
$\mathscr{H}_{r_k}\rho^{n_k}_{s_k}\to \rho_{s_0}$ in $W_2$. Indeed, since
$\mathscr{H}_r$ satisfies a Wasserstein contraction and by the convexity
properties of $W_2$ the regularizing procedure increases distances
by at most an exponential factor (see also
\cite[Prop.~4.11]{AGS12}). Hence, the triangle inequality yields
\begin{align*}
W_2(\rho_{s_0},\mathscr{H}_{r_k}\rho^{n_k}_{s_k})~&\leq~W_2(\rho_{s_0},\mathscr{H}_{r_k}\rho_{s_0}) + W_2(\mathscr{H}_{r_k}\rho_{s_0}, \mathscr{H}_{r_k}\rho^{n_k}_{s_0}) + W_2(\mathscr{H}_{r_k}\rho^{n_k}_{s_0},\mathscr{H}_{r_k}\rho^{n_k}_{s_k})\\
&\leq~W_2(\rho_{s_0},\mathscr{H}_{r_k}\rho_{s_0}) + \mathrm{e}^{-Kr_k}W_2(\rho_{s_0},\rho^{n_k}_{s_0}) + \mathrm{e}^{-K r_k}W_2(\rho_{s_0},\rho_{s_k}) + o(1)\;,
\end{align*}
and the claim follows from the continuity of $\mathscr{H}_r$ at $r=0$,
\eqref{eq:reg-curves1} and the continuity of the curve $(\rho_s)$.
Letting now $k\to\infty$ in \eqref{eq:contra1}, using continuity of
$s\mapsto\ent(\rho_s)$ and lower semicontinuity of $\ent$, we obtain
the following contradiction:
\begin{align*}
0~=~\ent(\rho_{s_0})-\ent(\rho_{s_0})~\geq~\limsup\limits_{k\to\infty} \Big(\ent(\rho_{s_k}) - \ent(\mathscr{H}_{r_k}\rho^{n_k}_{s_k})\Big)~\geq~\delta\;.
\end{align*}
\end{proof}
The following calculations will be a crucial ingredient in our
argument. For a detailed justification see \cite[Lem.~4.13,
4.15]{AGS12}. The only difference here is the additional time change
in the semigroup. For the following lemmas let $(\rho_s)_{s\in[0,1]}$
be a regular curve and let $\phi:X\to{\mathbb R}$ be Lipschitz with bounded
support. Let $\theta:[0,1]\to[0,\infty)$ be an increasing $C^1$
function with $\theta(0)=0$ and set
$\rho_{s,\theta}=\mathscr{H}_{\theta_s}\rho_s=f_{s,\theta}m$. Moreover, we set
$\phi_s=Q_s\phi$ for $s\in[0,1]$, where
\begin{align*}
Q_s\phi(x)~:=~\inf\limits_{y\in X} \left[ \phi(y) + \frac{d^2(x,y)}{2s} \right]
\end{align*}
denotes the Hopf-Lax semigroup. We refer to \cite[Sec. 3]{AGS11a} for
a detailed discussion. We recall that since $(X,d)$ is a length space,
$Q$ provides a solution to the Hamilton--Jacobi equation, i.e.
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d} s}Q_s\phi~=~-\frac12\abs{\nabla Q_s\phi}^2
\end{align*}
for a.e. $s\in[0,1]$, see \cite[Prop.~3.6]{AGS11a}. Moreover, we have
the a priori Lipschitz bound (\cite[Prop.~3.4]{AGS11a})
\begin{align}\label{eq:Lip-HL}
\Lip(Q_s\phi)~\leq~2\Lip(\phi)\;.
\end{align}
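For orientation, consider the model case $X = {\mathbb R}$ with
$\phi(y)=\abs{y}$. A direct computation of the infimum gives
\begin{align*}
Q_s\phi(x) ~=~
\begin{cases}
\dfrac{x^2}{2s} & \text{if } \abs{x}\leq s\;,\\[2mm]
\abs{x}-\dfrac{s}{2} & \text{if } \abs{x}> s\;,
\end{cases}
\end{align*}
consistent with \eqref{eq:Lip-HL}.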
\begin{lemma}\label{lem:calculus1}
The map $s\mapsto\int\phi_s\mathrm{d}\rho_{s,\theta}$ is absolutely
continuous and we have for a.e. $s\in[0,1]$:
\begin{align}\label{eq:calculus1}
\frac{\mathrm{d}}{\mathrm{d} s}\int\phi_s\mathrm{d}\rho_{s,\theta}~=~
\int\left(-\frac12|\nabla\phi_s|^2\,f_{s,\theta}+\dot
f_s \bH_{\theta_s}\phi_s+\dot\theta_s \Delta f_{s,\theta}\cdot\phi_s \right)\mathrm{d} m\;.
\end{align}
\end{lemma}
We use a regularization $E_\varepsilon$ of the entropy functional where the
singularities of the logarithm are truncated. Let us define
$e_\varepsilon:[0,\infty)\to{\mathbb R}$ by setting
$e_\varepsilon'(r)=\log(\varepsilon+r\wedge\varepsilon^{-1} )+1$ and $e_\varepsilon(0)=0$. Then for
any $\rho=fm\in\mathscr{P}_2(X,d,m)$ we define
\begin{align*}
E_\varepsilon(\rho):=\int e_\varepsilon(f)\mathrm{d} m\;,\qquad
U^\varepsilon_N(\rho)=\exp\left(-\frac1N E_\varepsilon(\rho)\right)\;.
\end{align*}
Moreover we set $p_\varepsilon(r)=e_\varepsilon'(r^2)-\log\varepsilon - 1$. Note that for any
$\rho\in D(\ent)$ we have $E_\varepsilon(\rho)\to\ent(\rho)$ as $\varepsilon\to0$.
\begin{lemma}\label{lem:calculus2}
The map $s\mapsto E_\varepsilon(\rho_{s,\theta})$ is absolutely continuous
and we have for all $s\in[0,1]$:
\begin{align}\label{eq:calculus2}
\frac{\mathrm{d}}{\mathrm{d} s}E_\varepsilon(\rho_{s,\theta})~=~ \int \left( \dot f_s\bH_{\theta_s}g^\varepsilon_{s,\theta}+\dot\theta_s \Delta f_{s,\theta}\cdot g^\varepsilon_{s,\theta}\right)\mathrm{d} m\;,
\end{align}
where we put $g^\varepsilon_{s,r}=p_\varepsilon(\sqrt{f_{s,r}})$.
\end{lemma}
We also need to introduce the time change related to the regularized
entropy. For fixed $\varepsilon>0$ let us define $\tau^\varepsilon_{s,t}$
implicitly by
\begin{align}\label{eq:deftau-reg}
\int_0^{\tau^\varepsilon_{s,t}}\exp\left(\frac1N E_\varepsilon(\mathscr{H}_r\rho_s)\right)\mathrm{d} r~=~st\;.
\end{align}
\begin{lemma}\label{lem:calculus3}
$\tau^\varepsilon$ is well defined on $[0,1]\times[0,a]$ and satisfies
$\tau^\varepsilon_{s,t}\leq c\cdot st$ for constants $a,c>0$ depending only
on $\max_s\abs{\ent(\rho_s)}$ and the second moments of $(\rho_s)_{s
\in [ 0, 1 ]}$. For fixed $t$ the map $s\mapsto\tau^\varepsilon_{s,t}$
is $C^1$ on $[0,1]$ and we have:
\begin{align}\label{eq:calculus3}
\partial_s\tau^\varepsilon_{s,t}~=~t\cdot
U^\varepsilon_N(\mathscr{H}_{\tau^\varepsilon}\rho_s)-\frac1N\int_0^{\tau^\varepsilon}
\frac{U^\varepsilon_N(\mathscr{H}_{\tau^\varepsilon}\rho_s)}{U^\varepsilon_N(\mathscr{H}_r\rho_s)}\int_X\dot
f_sH_rg^\varepsilon_{s,r}\,\mathrm{d} m \, \mathrm{d} r\;.
\end{align}
Moreover, as $\varepsilon\to0$ we have $\tau^\varepsilon_{s,t}\to\tau_{s,t}$,
where $\tau$ is the time change defined by \eqref{eq:tau-def2}.
\end{lemma}
\begin{proof}
Define the function
$F_\varepsilon(s,u)=\int_0^u\exp\left(E_\varepsilon(\mathscr{H}_r\rho_s)/N\right)\mathrm{d} r$.
Note that a uniform bound on $\abs{\ent(\rho_s)}$ implies a uniform
bound on $\abs{E_\varepsilon(\rho_s)}$ independent of $\varepsilon$. Thus we can
argue as in Lemma~\ref{lem:welldef-tau} to find $a,c$ such that
$\tau_{s,t}^\varepsilon$ is well-defined on $[0,1]\times[0,a]$ by
$F_\varepsilon(s,\tau^\varepsilon_{s,t})=st$ and satisfies $\tau^\varepsilon_{s,t}\leq c
\cdot st$. Using Lemma~\ref{lem:calculus2} and the fact that
$s\mapsto \dot f_s$ is continuous in $L^1(X,m)$, since $(\rho_s)_s$
is a regular curve, we see that $s\mapsto E_\varepsilon(\mathscr{H}_r\rho_s)$ is
$C^1$ for fixed $r\geq0$. Moreover, using the boundedness of
$E_\varepsilon(\mathscr{H}_r\rho_s)$ we obtain that $F_\varepsilon(\cdot,\cdot)$ is
$C^1$. Thus the differentiability of $s\mapsto\tau^\varepsilon_{s,t}$
follows from the implicit function theorem and \eqref{eq:calculus3}
is obtained by differentiating \eqref{eq:deftau-reg} w.r.t. $s$. The
last statement about convergence follows as for
\eqref{eq:reg-curves3} using that $E_\varepsilon(\rho_s)\to\ent(\rho_s)$ as
$\varepsilon\to0$.
\end{proof}
We need the following integrations by parts and estimates for the
integrals appearing in \eqref{eq:calculus1},
\eqref{eq:calculus2}. Recall that $I(f)=4\int \wug{\sqrt{f}}^2\mathrm{d}
m$ denotes the Fisher information of a measure $\rho=fm$.
\begin{lemma}\label{lem:alphabeta-est}
Let $f=h^\varepsilon\tilde f$ for some $\tilde f\in L^1_+(X,m)$ with
$\tilde f m\in \mathscr{P}_2(X,m)$. Then for any Lipschitz function $\phi$
with bounded support we have
\begin{align}\label{eq:beta-est}
\int\ip{\nabla\phi,\nabla g^\varepsilon}f\mathrm{d} m + \int
q_\varepsilon(f)\ip{\nabla\sqrt{f},\nabla\phi}\mathrm{d} m~=~-\int \phi\Delta f\mathrm{d}
m~\leq~2\Lip(\phi)\cdot\sqrt{I(f)}\;,
\end{align}
where $q_\varepsilon(r)=\sqrt{r}\big(2-\sqrt{r}p_\varepsilon'(\sqrt{r})\big)$ and
$g^\varepsilon=p_\varepsilon(\sqrt{f})$. Moreover we have
\begin{align}\label{eq:alpha-est}
\int \wug{g^\varepsilon}^2f\mathrm{d} m ~\leq~-\int g^\varepsilon\Delta f\mathrm{d}
m~\leq~I(f)\;.
\end{align}
\end{lemma}
\begin{proof}
We first obtain from \cite[Thm.~4.4]{AGS12}
\begin{align*}
-\int \phi\Delta f\mathrm{d} m~&=~2\int\sqrt{f} \langle\nabla\phi , \nabla \sqrt{f}\rangle\,\mathrm{d} m\;.
\end{align*}
Now the first equality in \eqref{eq:beta-est} is immediate from the
chain rule \eqref{eq:chainrule-wug} for minimal weak upper gradients
and integration by parts while the second inequality follows readily
using H\"older's inequality. To prove \eqref{eq:alpha-est} we use
that by \cite[Lem.~4.9]{AGS12} for any bounded non-decreasing
Lipschitz function $\omega:[0,\infty)\to{\mathbb R}$ with
$\sup_rr\omega'(r)<\infty$:
\begin{align}\label{eq:rough-ibp}
-\int \omega(f)\Delta^{(1)} f \mathrm{d} m~\geq~4\int f
\omega'(f)\wug{\sqrt{f}}^2\mathrm{d} m\;.
\end{align}
Further note that $r\cdot e_\varepsilon''(r)\leq1$ and hence $4r\cdot
e_\varepsilon''(r)\geq 4r^2\big(e_\varepsilon''(r)\big)^2 =
r\big(p_\varepsilon'(\sqrt{r})\big)^2$. Hence we get by the chain rule:
\begin{align}\label{eq:fwugg}
f\wug{g^\varepsilon}^2~=~f\big(p_\varepsilon'(\sqrt{f})\big)^2\wug{\sqrt{f}}^2~\leq~ 4fe_\varepsilon''(f)\wug{\sqrt{f}}^2\;.
\end{align}
Combining this with \eqref{eq:rough-ibp} yields the first inequality
in \eqref{eq:alpha-est}. For the second inequality note that, since
we already know that $\rcd(K,\infty)$ holds, $\tilde\bH_\delta
g^\varepsilon$ is bounded and Lipschitz for all $\delta>0$ by
\cite[Thm.~6.8]{AGS11b}.
Hence \cite[Thm.~4.4]{AGS12} and H\"older's
inequality yield
\begin{align*}
-\int\Delta f\bH_\delta g^\varepsilon\mathrm{d} m~&=~2\int\sqrt{f}\langle\nabla \bH_\delta g^\varepsilon , \nabla \sqrt{f}\rangle\,\mathrm{d} m~\leq~2\ch(\sqrt{f})^{\frac12}\cdot\left(\int f\wug{\bH_\delta g^\varepsilon}^2\mathrm{d} m\right)^{\frac12}\\
&\leq~4\mathrm{e}^{-K\delta}\ch(\sqrt{f})\;,
\end{align*}
where we have used again \eqref{eq:fwugg} and $\bl(K,\infty)$ in the
last step. Letting $\delta\to0$ yields the second inequality in
\eqref{eq:alpha-est}.
\end{proof}
We will often use the following estimate (see
\cite[Lem.~4.12]{AGS12}). For any AC$^2$ curve
$(\rho_s)_{s\in[0,1]}$ with $\rho_s=f_s m$ and $f\in
C^1\big((0,1),L^1(X,m)\big)$ and any
Lipschitz function $\phi$
we have
\begin{align}\label{eq:speed-est}
\left|\int \dot f_s \phi\mathrm{d} m\right|~\leq~\abs{\dot\rho_s}\cdot\sqrt{\int \abs{\nabla\phi}^2f_s\mathrm{d} m}\;.
\end{align}
The following result is the crucial ingredient in our argument.
\begin{proposition}[Action estimate]\label{prop:action-est}
Assume that $(X,d,m)$ satisfies $\bl(K,N_0)$. Let
$(\rho_s)_{s\in[0,1]}$ be a regular curve and $\phi$ a Lipschitz
function with bounded support and denote by $\varphi_s=Q_s\phi$ the
Hamilton--Jacobi flow for $s\in[0,1]$. Then for any $N>N_0$ and
$t\in[0,a]$:
\begin{align}\nonumber
&\int\varphi_1\mathrm{d}\rho_{1,\tau}-\int\varphi_0\mathrm{d}\rho_{0}-\frac12\int_0^1|\dot\rho_s|^2\mathrm{e}^{-2K\tau}\mathrm{d} s+Nt\cdot\left[ U_N(\rho_0)-U_N(\rho_{1,\tau})\right]\\\label{eq:key-est}
\le~&C_1\int_0^1\frac\tau4\left[\left(\frac{U_N(\rho_{s,\tau})}{U_N(\rho_{s})}\right)^2-1-4\left(\frac{N}{N_0}-1\right) + C_2\tau\right]\,\mathrm{d} s\;.
\end{align}
The constant $C_2$ depends only on $K$ and
$\max_{s\in[0,1]}\abs{\ent(\rho_s)}$; the constant $C_1$ depends in
addition on $\max_{s\in[0,1]}I(\rho_s)$ and $\phi$.
\end{proposition}
\begin{proof}
For simplicity we assume that $\bl(K,N_0)$ holds with $C\equiv
1$. We use the abbreviations $\a_r=\a_{s,r}=-\int g^\varepsilon_{s,r}\Delta
f_{s,r}\,\mathrm{d} m$ and $\b_r=\b_{s,r}=\int \varphi_s \Delta f_{s,r}\mathrm{d}
m$. Moreover, we put $u_r=u_{s,r}=U^\varepsilon_N(\rho_{s,r})$. We will
also write $\a=\a_{s,\tau}$, $\b=\b_{s,\tau}$, $u=u^\varepsilon_{s,\tau}$.
Using Lemmas~\ref{lem:calculus1}, \ref{lem:calculus3} and \eqref{eq:lip-wug}, we obtain
\begin{eqnarray*}
(A)&:=&\int\varphi_1\mathrm{d}\rho_{1,\tau}-\int\varphi_0\mathrm{d}\rho_{0}-\frac12\int_0^1|\dot\rho_s|^2\mathrm{e}^{-2K\tau}\mathrm{d} s\\
&=&
\int_0^1\left[-\frac12|\dot\rho_s|^2\mathrm{e}^{-2K\tau}+\int\left(-\frac12|\nabla\varphi_s|^2\,f_{s,\tau}+\dot f_sH_\tau\varphi_s+\dot\tau \Delta H_\tau f_s\cdot\varphi_s \right)\mathrm{d} m\right]\mathrm{d} s\\
&\leq&
\int_0^1 \left[-\frac12 |\dot\rho_s|^2\mathrm{e}^{-2K\tau}-\frac12\int \wug{\varphi_s}^2\, f_{s,\tau} \mathrm{d} m\right.\\
&&\left.+\int \dot f_s\cdot H_{\tau}\varphi_s\,\mathrm{d} m+ \beta t u
-\beta\frac1N\int_0^\tau\frac{u}{u_r} \int \dot f_s\cdot H_rg^\varepsilon_{s,r}\,\mathrm{d} m\,\mathrm{d} r\right]\,\mathrm{d} s.
\end{eqnarray*}
Moreover, by Lemma~\ref{lem:calculus2}, we have
\begin{eqnarray*}
(B)&:=& Nt\cdot\left[ U^\varepsilon_N(\rho_0)-U^\varepsilon_N(\rho_{1,\tau})\right]=t\int_0^1U^\varepsilon_N(\rho_{s,\tau})\partial_sE_\varepsilon(\rho_{s,\tau})\mathrm{d} s\\
&=&
t\int_0^1 U^\varepsilon_N(\rho_{s,\tau})\cdot\int g^\varepsilon_{s,\tau}\cdot\left[H_\tau \dot f_s+\dot\tau \Delta H_\tau f_s\right]\mathrm{d} m\, \mathrm{d} s\\
&=& \int_0^1 \left[t u\cdot \int \dot f_s\cdot H_\tau g^\varepsilon_{s,\tau}\,\mathrm{d} m
-t^2 u^2\alpha\right.\\
&&\left. +tu\alpha\frac1N\int_0^{\tau} \frac{u}{u_r} \int \dot f_s\cdot H_rg^\varepsilon_{s,r}\,\mathrm{d} m\,\mathrm{d} r\right]\,\mathrm{d} s.
\end{eqnarray*}
Adding up
\begin{eqnarray*}
(A)+(B)&\le&
\int_0^1 \left[-\frac12 |\dot\rho_s|^2\mathrm{e}^{-2K\tau}-\frac12\int \wug{\varphi_s}^2\, f_{s,\tau} \mathrm{d} m
+tu(\beta-tu\alpha)
\right.\\
&&
\left.+\frac1\tau\int_0^\tau \int \dot f_s\,\mathrm{e}^{-K\tau}\cdot\left[ H_\tau\left(\varphi_s+tu g^\varepsilon_{s,\tau}
\right)\,
-\frac\tau N(\beta-tu\alpha)\frac{u}{u_r} H_rg^\varepsilon_{s,r}\right]\,\mathrm{d} m\,\mathrm{e}^{K\tau}\,\mathrm{d} r\right]\mathrm{d} s\\
&\le&
\int_0^1 \left[-\frac12\int \wug{\varphi_s}^2\, f_{s,\tau} \mathrm{d} m
+tu(\beta-tu\alpha)
\right.\\
&&
\left.+\frac1\tau\int_0^\tau \frac12\int \left|\nabla\left[ H_\tau\left(\varphi_s+tu g^\varepsilon_{s,\tau}
\right)\
-\frac\tau N(\beta-tu\alpha)\frac{u}{u_r} H_r g^\varepsilon_{s,r}\right]\right|^2f_s\,\mathrm{d} m\,\mathrm{e}^{2K\tau}\,\mathrm{d} r\right]\mathrm{d} s\\
&\le&
\int_0^1 \left[-\frac12\int \wug{\varphi_s}^2\, f_{s,\tau} \mathrm{d} m
+tu(\beta-tu\alpha)
\right.\\
&&
\left.+\frac1\tau\int_0^\tau \frac12\int \left|\nabla\left[ H_{\tau-r}\left(\varphi_s+tu g^\varepsilon_{s,\tau}
\right)\,
-\frac\tau N(\beta-tu\alpha)\frac{u}{u_r} g^\varepsilon_{s,r}\right]\right|_{w}^2f_{s,r}\,\mathrm{d} m\,\mathrm{e}^{2K(\tau-r)}\,\mathrm{d} r\right]\mathrm{d} s\\
&&
\left.-\frac1\tau\int_0^\tau \frac {r}{N_0}\int \left|\Delta\left[ H_\tau\left(\varphi_s+tu g^\varepsilon_{s,\tau}
\right)\,
-\frac{\tau} {N}(\beta-tu\alpha)\frac{u}{u_r} H_r g^\varepsilon_{s,r}\right]\right|^2f_s\,\mathrm{d} m\, \mathrm{e}^{2K\tau} \,\mathrm{d} r\right]\mathrm{d} s\\
&=:& (C) + ([D+E]^2) +(F)\;.
\end{eqnarray*}
Here we have used \eqref{eq:speed-est} in the second inequality and in
the last inequality the Bakry--Ledoux gradient estimate $\bl(K,N_0)$
applied to the semigroup $H_r$ in the strong form given by Proposition
\ref{prop:grad-est-ref}. The last term will be estimated as follows
\begin{eqnarray*}
(F)&\le&\int_0^1\left[-\frac1\tau\int_0^\tau \frac {r}{N_0}\left|\int \Delta\left[ H_\tau\left(\varphi_s+tu g^\varepsilon_{s,\tau}
\right)-\frac\tau N(\beta-tu\alpha)\frac{u}{u_r} H_rg^\varepsilon_{s,r}\right]f_s\,\mathrm{d} m\right|^2\,\mathrm{e}^{2K\tau}
\,\mathrm{d} r\right]\mathrm{d} s\\
&=&\int_0^1\left[-\frac1\tau\int_0^\tau \frac {r}{N_0}\left|\beta-tu\alpha
+\frac\tau N(\beta-tu\alpha)\frac{u}{u_r}\alpha_r\right|^2\,\mathrm{e}^{2K\tau}
\,\mathrm{d} r\right]\mathrm{d} s\\
&=&\int_0^1\left[-\frac1\tau\int_0^\tau \frac {r}{N_0}\left| \beta-tu\alpha\right|^2\cdot\left|1+\frac\tau N\frac{u}{u_r}\alpha_r\right|^2\,\mathrm{e}^{2K\tau}
\,\mathrm{d} r\right]\mathrm{d} s.
\end{eqnarray*}
By virtue of Lemma~\ref{lem:alphabeta-est}, the second-to-last term
$([D+E]^2)$ can be decomposed into
\begin{eqnarray*}
(E^2)&=&
\int_0^1\left[\frac1\tau\int_0^\tau \frac12\frac{\tau^2}{N^2}\left(\frac{u}{u_r}\right)^2 (\beta-tu\alpha)^2\,\mathrm{e}^{2K(\tau-r)}\int\wug{ g^\varepsilon_{s,r}}^2f_{s,r}\,\mathrm{d} m\,\mathrm{d} r\right]\mathrm{d} s\\
&\le&
\int_0^1\left[\frac1\tau\int_0^\tau \frac12\frac{\tau^2}{N^2}\left(\frac{u}{u_r}\right)^2\a_r\cdot (\beta-tu\alpha)^2\,\mathrm{e}^{2K(\tau-r)}\mathrm{d} r\right]\mathrm{d} s\;,
\end{eqnarray*}
\begin{eqnarray*}
(2DE)&=&\int_0^1-\frac1\tau\int_0^\tau(\beta-tu\alpha)\frac{u}{u_r}\frac{\tau}{N}\mathrm{e}^{2K(\tau-r)}\int \ip{\nabla H_{\tau-r}\left( \varphi_s + tug^\varepsilon_{s,\tau}\right),\nabla g^\varepsilon_{s,r}}f_{s,r}\,\mathrm{d} m\,\,\mathrm{d} r\mathrm{d} s\\
&=& \int_0^1\left[\frac1\tau\int_0^\tau \frac{\tau}{N}\frac{u}{u_r} (\beta-tu\alpha)^2\,\mathrm{e}^{2K(\tau-r)}+ \frac{\tau}{N}\frac{u}{u_r}(\beta-tu\alpha)\gamma^{(1)}\,\mathrm{e}^{2K(\tau-r)} \mathrm{d} r\right]\mathrm{d} s\;,
\end{eqnarray*}
where $\gamma^{(1)}=\int q_\varepsilon(f_{s,r})\ip{\nabla H_{\tau-r}\left(
\varphi_s + tug^\varepsilon_{s,\tau}\right),\nabla\sqrt{f_{s,r}}}\,\mathrm{d}
m$, and finally
\begin{eqnarray*}
(D^2)&=&\int_0^1\left[\frac1\tau\int_0^\tau \frac12\int \left|\nabla H_{\tau-r}\left(\varphi_s+tu g^\varepsilon_{s,\tau}
\right)
\right|_w^2f_{s,r}\,\mathrm{d} m\,\mathrm{e}^{2K(\tau-r)}\,\mathrm{d} r\right]\mathrm{d} s\\
&\le&\int_0^1\left[\frac1\tau\int_0^\tau \frac12\int \left|\nabla \left(\varphi_s+tu g^\varepsilon_{s,\tau}
\right)
\right|_w^2f_{s,\tau}\,\mathrm{d} m\,\mathrm{d} r\right.\\
&&\left.-
\frac1\tau\int_0^\tau \frac{\tau-r}{N_0}\int \left|\Delta H_{\tau-r}\left(\varphi_s+tu g^\varepsilon_{s,\tau}
\right)
\right|^2f_{s,r}\,\mathrm{d} m\,\mathrm{e}^{2K(\tau-r)}
\,\mathrm{d} r\right]\mathrm{d} s\\
&\le&\int_0^1\left[\frac1\tau\int_0^\tau \frac12\int \left|\nabla \left(\varphi_s+tu g^\varepsilon_{s,\tau}
\right)
\right|_w^2f_{s,\tau}\,\mathrm{d} m\,\mathrm{d} r\right.\\
&&\left.-
\frac1\tau\int_0^\tau \frac{\tau-r}{N_0}\left|\int \Delta H_{\tau-r}\left(\varphi_s+tu g^\varepsilon_{s,\tau}
\right)
f_{s,r}\,\mathrm{d} m\right|^2\,\mathrm{e}^{2K(\tau-r)}
\,\mathrm{d} r\right]\mathrm{d} s\\
&\le&
\int_0^1 \left[\frac12\int |\nabla\varphi_s|_w^2\, f_{s,\tau} \mathrm{d} m
-tu\b-tu\gamma^{(2)}+ \frac12t^2u^2\a
-\frac1\tau\int_0^\tau \frac{\tau-r}{N_0} (\beta-tu\alpha)^2\,\mathrm{e}^{2K(\tau-r)}
\mathrm{d} r\right]\mathrm{d} s
\end{eqnarray*}
where $\gamma^{(2)}=\int q_\varepsilon(f_{s,\tau})\ip{\nabla \varphi_s,\nabla\sqrt{f_{s,\tau}}}\,\mathrm{d} m$ and
where we applied again the Bakry--Ledoux estimate $\bl(K,N_0)$, now to
the semigroup $H_{\tau-r}$. Summing up everything yields
\begin{eqnarray*}
(A)+(B)&\le&
\int_0^1 \left[-\frac12t^2u^2\alpha +\frac1N(\beta-tu\alpha)^2\cdot (G)+ (H)\right]\mathrm{d} s
\end{eqnarray*}
where
\begin{eqnarray*}
(H)&:=&-tu\gamma^{(2)} + \int_0^\tau\frac1N\frac{u}{u_r}(\beta-tu\alpha)\gamma^{(1)}\,\mathrm{e}^{2K(\tau-r)} \mathrm{d} r\;,
\end{eqnarray*}
and
\begin{eqnarray*}
(G)&:=&\int_0^\tau\left[
-\frac{N}{N_0}\frac r\tau\left(1+\frac\tau N\frac{u}{u_r}\alpha_r\right)^2\,\mathrm{e}^{2K\tau}
+\frac\tau{2N}\left(\frac{u}{u_r}\right)^2\alpha_r\,\mathrm{e}^{2K(\tau-r)}
\right.\\
&&\left.\qquad
+\frac{u}{u_r}\,\mathrm{e}^{2K(\tau-r)}
-\frac{N}{N_0}\frac{\tau-r}\tau\,\mathrm{e}^{2K(\tau-r)}
\right]\,\mathrm{d} r\\
&\leq&
\int_0^\tau\left[ \frac{N}{N_0}\frac{r}{\tau}\big(\mathrm{e}^{2\abs{K}\tau}-\mathrm{e}^{-2\abs{K}\tau}\big)
-\frac{r}{N}\frac{u}{u_r}\alpha_r\mathrm{e}^{-2\abs{K}\tau}\right.\\
&&\left.\qquad
+\frac\tau{2N}\left(\frac{u}{u_r}\right)^2\alpha_r\mathrm{e}^{2\abs{K}\tau}
+\frac{u}{u_r}\mathrm{e}^{2\abs{K}\tau} -\frac{N}{N_0}\mathrm{e}^{-2\abs{K}\tau}\right]\,\mathrm{d} r\\
&=&
\frac{\tau N}{2 N_0}\big(\mathrm{e}^{2\abs{K}\tau}-\mathrm{e}^{-2\abs{K}\tau}\big)
+ \frac\tau4\left[\left(\frac{u}{u_0}\right)^2-1\right]\mathrm{e}^{2\abs{K}\tau} + \tau\mathrm{e}^{-2\abs{K}\tau}\left(1-\frac{N}{N_0}\right) \\
&&\qquad
+\left(\mathrm{e}^{2\abs{K}\tau}-\mathrm{e}^{-2\abs{K}\tau}\right)\int_0^\tau\frac{u}{u_r}\mathrm{d} r\;.
\end{eqnarray*}
Here we used that by Lemma~\ref{lem:alphabeta-est} $\a_r \ge 0$, by
Lemma~\ref{lem:calculus2} $\partial_r
\frac1{u_r}=-\frac1{N\,u_r}\alpha_r$ and thus
$$0>-\int_0^\tau \frac r N\frac{u}{u_r}\alpha_r\,\mathrm{d} r= \tau- \int_0^\tau\frac{u}{u_r}\,\mathrm{d} r $$
and
$$\frac1N\int_0^\tau \left(\frac{u}{u_r}\right)^2\alpha_r\,\mathrm{d} r= \frac12\left[\left(\frac{u_\tau}{u_0}\right)^2-1\right].$$
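Indeed (we include the short verification of this last identity for convenience), since $\partial_r \frac1{u_r}=-\frac1{N\,u_r}\alpha_r$ is equivalent to $\frac{\alpha_r}{N}=\frac{\partial_r u_r}{u_r}$, we get
$$\frac1N\int_0^\tau \left(\frac{u}{u_r}\right)^2\alpha_r\,\mathrm{d} r= u_\tau^2\int_0^\tau \frac{\partial_r u_r}{u_r^3}\,\mathrm{d} r=-\frac{u_\tau^2}{2}\left[\frac{1}{u_r^2}\right]_0^\tau=\frac12\left[\left(\frac{u_\tau}{u_0}\right)^2-1\right].$$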
Since $(\rho_s)$ is regular, $\abs{\ent(\rho_s)}$ and the second
moments of $( \rho_s )_{s \in [0,1]}$ are uniformly bounded. Arguing
as in the proof of Lemma~\ref{lem:welldef-tau} and using that
$\tau_{s,t}\leq c\cdot st$ we find that $\frac{u}{u_r}$ is
bounded. Taylor expansion of the exponentials in the estimate above
thus yields that for some constant $C_2$, depending only on $K$ and
$\max_{s\in[0,1]}\abs{\ent(\rho_s)}$,
\begin{eqnarray*}
(G) &\leq& \frac\tau4\left[\left(\frac{u_\tau}{u_0}\right)^2 - 1 - 4\left(\frac{N}{N_0}-1\right) \right] + C_2\tau^2\;.
\end{eqnarray*}
To control $(H)$ we estimate using Young's inequality for any $\delta>0$:
\begin{eqnarray*}
\gamma^{(2)} &\le& \frac\delta 8 I(\rho_{s,\tau}) + \frac{1}{2\delta}\int q^2_\varepsilon(f_{s,\tau})\wug{\varphi_s}^2\mathrm{d} m\;,\\
\gamma^{(1)} &\le& \frac\delta 8 I(\rho_{s,r}) + \frac{1}{\delta}\int q^2_\varepsilon(f_{s,r})\Big(\wug{H_{\tau-r}\varphi_s}^2 +t^2u^2 \wug{H_{\tau-r}g^\varepsilon_{s,\tau}}^2\Big)\mathrm{d} m\;.
\end{eqnarray*}
Note that $q_\varepsilon^2(r)\leq 4r$ and $q^2_\varepsilon(r)\to0$ as $\varepsilon\to0$. Using
the gradient estimate $\bl(K,\infty)$, \eqref{eq:alpha-est} and
\eqref{eq:Lip-HL} we estimate
\begin{align*}
\int f_{s,r}\Big(\wug{H_{\tau-r}\varphi_s}^2 +t^2u^2 \wug{H_{\tau-r}g^\varepsilon_{s,\tau}}^2\Big)\mathrm{d} m ~&\leq~ \mathrm{e}^{-2K(\tau -r)}\int f_{s,\tau}\Big(\wug{\varphi_s}^2 +t^2u^2 \wug{g^\varepsilon_{s,\tau}}^2\Big)\mathrm{d} m\\
~&\leq~\mathrm{e}^{-2K(\tau -r)}\Big(4\Lip(\phi)^2 + t^2u^2I(\rho_{s,\tau})\Big)~<~\infty\;.
\end{align*}
Thus, dominated convergence yields that $\gamma^{(1)}\leq (\delta/8)
I(\rho_{s,r}) + O(\varepsilon)$ and $\gamma^{(2)}\leq (\delta/8)
I(\rho_{s,\tau}) + O(\varepsilon)$. It remains to estimate $\a,\b$. By
Lemma~\ref{lem:alphabeta-est} and \eqref{eq:Lip-HL} we have $\a\leq
I(\rho_{s,\tau})$ and $\b\leq
2\Lip(\phi)\sqrt{I(\rho_{s,\tau})}$. Note that combining
\eqref{eq:EDE}, \eqref{eq:slope-FI} and $K$-contractivity of the heat
flow we have $I(\rho_{s,r})\leq \mathrm{e}^{-Kr}I(\rho_s)$ for any $r\geq0$.
Putting everything together we conclude that there exist constants
$C_1,C_3$ depending on $K$, $\max_{s\in[0,1]}\abs{\ent(\rho_s)}$,
$\max_{s\in[0,1]}I(\rho_s)$ and $\phi$ such that
\begin{align*}
&\int\varphi_1\mathrm{d}\rho_{1,\tau^\varepsilon}-\int\varphi_0\mathrm{d}\rho_{0}-\frac12\int_0^1|\dot\rho_s|^2\mathrm{e}^{-2K\tau^\varepsilon}\mathrm{d} s+Nt\cdot\left[ U^\varepsilon_N(\rho_0)-U^\varepsilon_N(\rho_{1,\tau^\varepsilon})\right]\\
\leq~&\int_0^1C_1\frac{\tau^\varepsilon}{4}\left[\left(\frac{U^\varepsilon_N(\rho_{s,\tau^\varepsilon})}{U^\varepsilon_N(\rho_{s})}\right)^2-1-4\left(\frac{N}{N_0}-1\right)+C_2\tau^\varepsilon\right]\,\mathrm{d} s + C_3\delta + O(\varepsilon)\;,
\end{align*}
where we have made the dependence of $\tau$ and $u$ on $\varepsilon$
explicit. Finally, passing to the limit first as $\varepsilon\to0$ and then
as $\delta \to0$ yields \eqref{eq:key-est}.
\end{proof}
\begin{proposition}\label{prop:green}
Assume that $(X,d,m)$ satisfies $\bl(K,N)$. Then for each geodesic
$(\rho_s)_{s\in[0,2]}$ in $\mathscr{P}_2(X,d,m)$ with $\rho_0,\rho_2\in
D(\ent)$ and $r\in[0,2]$ we have
\begin{align}\label{eq:key-est2}
U_N(\rho_r)~\ge~\frac{2-r}2 U_N(\rho_0)+\frac{r}2 U_N(\rho_2)+ \frac KN |\dot\rho|^2\cdot\int_0^2 g(s,r)U_N(\rho_s)\,\mathrm{d} s
\end{align}
where $g(s,r)=\frac12\min\{s(2-r),r(2-s)\}$ denotes the Green
function on the interval $[0,2]$.
\end{proposition}
\begin{proof}
We will only prove \eqref{eq:key-est2} for $r=1$, the general
argument being very similar. Obviously, it is sufficient to prove
that the inequality \eqref{eq:key-est2} is satisfied with $N$
replaced by $N'$ for any $N'>N$ and then let $N'\to N$. So let us
fix $N'>N$ and a geodesic $(\rho_s)_{s\in[0,2]}$ in
$\mathscr{P}_2(X,d,m)$. Since we already know that $(X,d,m)$ is a strong
$\cd(K,\infty)$ space we have that $s\mapsto\ent(\rho_s)$ is
$K$-convex and thus continuous.
Using Lemma~\ref{lem:reg-curves} we approximate the geodesic
$(\rho_s)_{s\in[0,2]}$ by regular curves
$(\rho_s^n)_{s\in[0,2]}$. Given $t>0$, the estimate
\eqref{eq:key-est} from Proposition~\ref{prop:action-est}, with
$N_0,N$ replaced by $N,N'$, holds true for each of the regular
curves $(\rho^n_s)_{s\in[0,1]}$ and $(\rho^n_{2-s})_{s\in[0,1]}$ and
any Lipschitz function $\phi$ with bounded support. From the uniform
convergence \eqref{eq:reg-curves4} in Lemma~\ref{lem:reg-curves} and
\eqref{eq:bd-tau} we conclude that for all $n$ large enough and $t$
sufficiently small and all $s\in[0,1]$:
\begin{align*}
\left[\left(\frac{U_{N'}(\rho^n_{s,\tau^n})}{U_{N'}(\rho^n_s)}\right)^2-1 + C_2\tau^n\right]~\leq~4\left(\frac{N'}{N}-1\right)\;,
\end{align*}
i.e. the right hand side of \eqref{eq:key-est} is
non-positive. Hence we obtain
\begin{align*}
\int\varphi_1\mathrm{d}\rho^n_{1,\tau^n}-\int\varphi_0\mathrm{d}\rho^n_{0}-\frac12\int_0^1|\dot{\rho^n_s}|^2\mathrm{e}^{-2K\tau^n}\,\mathrm{d} s
~\leq~N't\cdot\left[U_{N'}(\rho^n_{1,\tau^n})- U_{N'}(\rho^n_0)\right]\;,
\end{align*}
for all such $n$ and $t$. Taking the supremum over $\phi$ yields by Kantorovich duality
\begin{align*}
\frac12 W_2^2(\rho^n_0,\rho^n_{1,\tau^n})-\frac12\int_0^1|\dot{\rho^n_s}|^2\mathrm{e}^{-2K\tau^n}\,\mathrm{d} s
~\leq~N't\cdot\left[U_{N'}(\rho^n_{1,\tau^n})- U_{N'}(\rho^n_0)\right]\;.
\end{align*}
As $n\to\infty$, using the continuity properties
\eqref{eq:reg-curves1}-\eqref{eq:reg-curves3} we obtain the same
estimate for the geodesic $(\rho_s)_{s\in[0,1]}$:
\begin{align*}
\frac12 W_2^2(\rho_0,\rho_{1,\tau})-\frac12 W_2^2(\rho_0,\rho_1)\cdot\int_0^1 e^{-2K\tau}\mathrm{d} s ~\le~
N't\cdot\left[ U_{N'}(\rho_{1,\tau})-U_{N'}(\rho_{0})\right]\;.
\end{align*}
An analogous estimate holds true for the geodesic
$(\rho_{2-s})_{s\in[0,1]}$
\begin{align*}
\frac12 W_2^2(\rho_2,\rho_{1,\tau})-\frac12 W_2^2(\rho_2,\rho_1)\cdot\int_1^2 e^{-2K\tau}\mathrm{d} s~\le~
N't\cdot\left[ U_{N'}(\rho_{1,\tau})-U_{N'}(\rho_{2})\right]\;.
\end{align*}
Moreover, since $(\rho_s)_{s\in[0,2]}$ is a geodesic,
\begin{align*}
\frac12 W_2^2(\rho_0,\rho_{1})+\frac12 W_2^2(\rho_2,\rho_{1})-\frac12
W_2^2(\rho_0,\rho_{1,\tau})-\frac12
W_2^2(\rho_2,\rho_{1,\tau})~\le~0\;.
\end{align*}
Adding up the last three inequalities (and dividing by $t$) yields
\begin{align*}
\frac18 W_2^2(\rho_0,\rho_2)\cdot \frac1t\left[2-\int_0^1
e^{-2K\tau}\mathrm{d} s -\int_1^2 e^{-2K\tau}\mathrm{d} s\right]~\le~
N'\cdot\Big[2U_{N'}(\rho_{1,\tau})-
U_{N'}(\rho_0)-U_{N'}(\rho_2)\Big]\;.
\end{align*}
Lower semi-continuity of the entropy implies that in the limit
$t\to0$ the RHS will be bounded from above by
\begin{align*}
N'\cdot\left[2U_{N'}(\rho_{1})- U_{N'}(\rho_0)-U_{N'}(\rho_2)\right]\;.
\end{align*}
Finally, by the very definition of $\tau$,
\begin{eqnarray*}
\lim_{t\to0}\frac1t\left[2-\int_0^1 e^{-2K\tau}\mathrm{d} s -\int_1^2 e^{-2K\tau}\mathrm{d} s\right]&=&2K\, \int_0^2\partial_t \tau_{s,t}\big|_{t=0}\,\mathrm{d} s\\
&=&2K\left[\int_0^1 s U_{N'}(\rho_s)\mathrm{d} s+\int_1^2 (2-s) U_{N'}(\rho_s)\mathrm{d} s\right]\\
&=&4K\int_0^2 g(s,1)\, U_{N'}(\rho_s)\,\mathrm{d} s.
\end{eqnarray*}
Thus we end up with
\begin{eqnarray*}
\frac K 2 W_2^2(\rho_0,\rho_2)\cdot \int_0^2 g(s,1)\, U_{N'}(\rho_s)\,\mathrm{d} s~\le~ N'\cdot\Big[2U_{N'}(\rho_{1})- U_{N'}(\rho_0)-U_{N'}(\rho_2)\Big]\;.
\end{eqnarray*}
Since $|\dot\rho|^2 =W_2^2(\rho_0,\rho_2)/4$,
this proves the claim.
\end{proof}
\begin{remark} A simple rescaling argument yields that for each
geodesic $(\rho_s)_{s\in[0,1]}$ in $\mathscr{P}_2(X,d,m)$ with
$\rho_0,\rho_1\in D(\ent)$ and $r\in[0,1]$:
\begin{align} \label{eq:ecdknG0}
U_N(\rho_{r})~\ge~(1-r)\cdot U_N(\rho_0)+r\cdot U_N(\rho_1)+ \frac KN |\dot\rho|^2\cdot\int_0^1 g\left(s,r\right)U_N(\rho_s)\,\mathrm{d} s
\end{align}
where $g(s,r)=\min\{s(1-r),r( 1-s)\}$ now denotes the Green function
on the interval $[0,1]$.
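Let us briefly indicate the computation (added here for convenience): given a geodesic $(\rho_s)_{s\in[0,1]}$, the reparametrized curve $\tilde\rho_s:=\rho_{s/2}$, $s\in[0,2]$, is a geodesic with $|\dot{\tilde\rho}|^2=\frac14|\dot\rho|^2$. Applying \eqref{eq:key-est2} to $\tilde\rho$ at $2r$ and substituting $s'=2s$ yields
\begin{align*}
U_N(\rho_{r})~\ge~(1-r)\cdot U_N(\rho_0)+r\cdot U_N(\rho_1)+ \frac KN \frac{|\dot\rho|^2}{4}\int_0^1 g(2s,2r)\,U_N(\rho_{s})\,2\,\mathrm{d} s\;,
\end{align*}
and the claim follows since the Green function on $[0,2]$ satisfies $g(2s,2r)=\frac12\min\{2s(2-2r),2r(2-2s)\}=2\min\{s(1-r),r(1-s)\}$.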
\end{remark}
\begin{theorem}\label{thm:BEW2CDE}
Let $(X,d,m)$ be an infinitesimally Hilbertian mms satisfying the
exponential integrability condition \eqref{eq:exp-int} and
$\bl(K,N)$. Then the strong ${\cd^e(K,N)}$ condition holds. In particular,
$(X,d,m)$ is a ${\rcd^*(K,N)}$ space and the heat flow satisfies
$\evi_{K,N}$.
\end{theorem}
\begin{proof}
By virtue of Lemma~\ref{lem:knconvexint2}, this is merely a
consequence of Proposition \ref{prop:green} and \eqref{eq:ecdknG0}.
\end{proof}
\begin{remark}
In the special case $K=0$ it turns out to be possible to derive the
$\evi_{0,N}$ property directly from the action estimate in
Proposition~\ref{prop:action-est}. Let us give an alternative
argument in this case.
We want to show that for any $\rho,\sigma\in\mathscr{P}_2(X,d)$ we have for all $t>0$:
\begin{align}\label{eq:BEW2EVI}
\frac{\mathrm{d}^+}{\mathrm{d}t}\frac12 W_2^2(H_t\rho,\sigma)~\le~ N\cdot\left[
1-\frac{U_N(\sigma)}{U_N(H_t\rho)}\right].
\end{align}
Obviously, it is sufficient to prove that \eqref{eq:BEW2EVI} is
satisfied for any $N'>N$ and then let $N'\to N$. Moreover, by the
semigroup property and Proposition~\ref{prop:evi-equiv} it is
sufficient to assume that $\rho,\sigma\in D(\ent)$ and show that
\eqref{eq:BEW2EVI} holds at $t=0$. So let us fix $N'>N$ and a
geodesic $(\rho_s)_{s\in[0,1]}$ in $\mathscr{P}_2(X,d,m)$ connecting
$\rho_0=\sigma$ to $\rho_1=\rho$. Since we already know that $(X,d,m)$ is a strong
$\cd(0,\infty)$ space we have that $s\mapsto\ent(\rho_s)$ is convex
and thus continuous. By approximating the geodesic $(\rho_s)$ by
regular curves one can show as in the proof of Proposition~\ref{prop:green} that
\begin{align*}
\frac1t\left[\frac12 W_2^2(\rho_0,\rho_{1,\tau})-\frac12 W_2^2(\rho_0,\rho_{1})\right]~\leq~N'\cdot\left[ U_{N'}(\rho_{1,\tau})-U_{N'}(\rho_{0})\right]\;.
\end{align*}
Thus passing to the limit $t\to0$ yields
\begin{eqnarray*}
\frac{\mathrm{d}^+}{\mathrm{d}t}\frac12 W_2^2(\rho_0,H_t\rho_1)\Big|_{t=0}\cdot \frac{d}{dt}\tau_{1,t}\Big|_{t=0}~=~
\frac{\mathrm{d}^+}{\mathrm{d}t}\frac12 W_2^2(\rho_0,H_{\tau_{1,t}}\rho_{1})\Big|_{t=0}~\le~ N'\cdot\left[ U_{N'}(\rho_{1})-U_{N'}(\rho_0)\right]\;.
\end{eqnarray*}
Since $\frac{d}{dt}\tau_{1,t}\Big|_{t=0}=U_{N'}(\rho_1)$, this
finally yields the $\evi_{0,N'}$ inequality:
\begin{eqnarray*}
\frac{\mathrm{d}^+}{\mathrm{d}t}\frac12 W_2^2(\rho_0,H_t\rho_1)\Big|_{t=0}~\le~ N'\cdot\left[ 1-\frac{U_{N'}(\rho_0)}{U_{N'}(\rho_1)}\right]\;.
\end{eqnarray*}
\end{remark}
To finish this section let us consider the classical case of weighted
Riemannian manifolds. More precisely, let $(M,d)$ be an $n$-dimensional
smooth, complete Riemannian manifold and let $V:M\to{\mathbb R}$ be a smooth
function bounded below. Consider the metric measure space
$(M,d,\mathrm{e}^{-V}\text{vol})$. The associated weighted Laplacian is given
by
\begin{align*}
Lu~=~\Delta u -\nabla V\cdot\nabla u\;.
\end{align*}
It is well known (see e.g. \cite[Thm.~14.8]{Vil09}) that the operator
$L$ satisfies the Bakry--\'Emery condition $\be(K,N)$ if and only if
the generalized Ricci tensor
\begin{align*}
\Ric_{N,V}~:=~\Ric +\Hess V - \frac{1}{N-n}\nabla V\otimes\nabla V
\end{align*}
is bounded below by $K$. As an immediate consequence of our
equivalence result we thus obtain the following
\begin{proposition}\label{prop:rcdkn-mfds}
The mms $(M,d,\mathrm{e}^{-V}\text{vol})$ satisfies the ${\cd^e(K,N)}$-condition if and
only if
\begin{align*}
\Ric + \Hess V~\geq~ K + \frac{1}{N-n}\nabla V\otimes\nabla V\;.
\end{align*}
\end{proposition}
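To illustrate the proposition, consider the standard Gaussian example (a well-known computation which we include for the reader's convenience): take $M={\mathbb R}^n$ with the Euclidean metric and $V(x)=\frac12|x|^2$. Then $\Ric\equiv0$, $\Hess V=\mathrm{Id}$ and $\nabla V\otimes\nabla V=x\otimes x$, hence
\begin{align*}
\Ric_{N,V}~=~\mathrm{Id}-\frac{1}{N-n}\,x\otimes x\;,
\end{align*}
which is unbounded below for every finite $N>n$. Thus the Gaussian space satisfies the ${\cd^e(K,N)}$-condition for no finite $N$, while in the dimension-free case it satisfies $\cd(1,\infty)$.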
\subsection{The sharp Lichnerowicz inequality (spectral gap)}
\label{sec:Lichnerowicz}
Here we provide a first application of the Bochner formula on
infinitesimally Hilbertian metric measure spaces. Namely we establish
the sharp spectral gap estimate on ${\rcd^*(K,N)}$ spaces in the case of
positive curvature $K>0$.
We consider an infinitesimally Hilbertian metric measure space
$(X,d,m)$. Recall that we denote by $\Delta$ the canonical Laplacian
on $(X,d,m)$, i.e. the generator of the heat semigroup in $L^2$ which
is given as the $L^2$-gradient flow of the Cheeger energy $\ch$, see
Section~\ref{sec:recap}.
\begin{theorem}[Spectral gap estimate]\label{thm:lichnerowitz}
Let $(X,d,m)$ be a mms satisfying the Riemannian curvature dimension
condition ${\rcd^*(K,N)}$ with $K>0$ and $N>1$. Then the spectrum of
$(-\Delta)$ is discrete and the first non-zero eigenvalue
$\lambda_1(X,d,m)$ satisfies the following bound:
\begin{align}\label{eq:lichnerowitz}
\lambda_1(X,d,m)~\geq~\frac{N}{N-1}K\;.
\end{align}
\end{theorem}
\begin{proof}
First
recall that the ${\rcd^*(K,N)}$ condition with $K>0$ implies that $(X,d,m)$
is doubling by Proposition~\ref{prop:BGI} and compact by Corollary~\ref{cor:BMT}.
In combination with the result in \cite{Ra12} this
yields that $(X,d,m)$ supports a global Poincar\'e
inequality. Moreover, the ${\cd^*(K,N)}$ condition implies a global
Sobolev inequality, by adapting \cite[Thm.~30.23]{Vil09}. These
ingredients yield the following Rellich--Kondrachov compactness
property (cf.\ \cite[Thm.~8.1]{HK00}): for any sequence of
functions $(f_n)_n\subset W^{1,2}(X,d,m)$ with
\begin{align*}
\sup\limits_n\left( \norm{f_n}_{L^2(X,m)} + \ch(f_n)\right)~<~\infty
\end{align*}
we have that up to extraction of a subsequence $f_n\to f$ in
$L^2(X,m)$ for some $f\in L^2(X,m)$. This compactness theorem is
sufficient to prove that the spectrum of $(-\Delta)$ is discrete,
e.g. by following verbatim the proof in \cite{Ber86} of the
corresponding result for Riemannian manifolds.
For the eigenvalue estimate we follow the argument in
\cite{Dav90}. Let $\lambda>0$ be a non-zero eigenvalue of
$(-\Delta)$ and let $\psi\in D(\Delta)$ be a corresponding
eigenfunction. We apply the Bochner inequality of Theorem~\ref{thm:Bochner}
to $f=\psi$ and the test function $g\equiv
1$. Note that this pair is admissible since $X$ is compact. Thus we
obtain using the integration by parts formula
\eqref{eq:int-by-parts}:
\begin{align*}
0~&\geq~\int\ip{\nabla(\Delta\psi),\nabla\psi}\mathrm{d} m + K\int\wug{\psi}^2\mathrm{d} m + \frac{1}{N}\int(\Delta\psi)^2\mathrm{d} m\\
&=~(K-\lambda)\int\wug{\psi}^2\mathrm{d} m - \frac{\lambda}{N}\int \psi\Delta\psi\mathrm{d} m\\
&=~\left(K-\lambda+\frac{\lambda}{N}\right)\int\wug{\psi}^2\mathrm{d} m\;.
\end{align*}
Since $\ch(\psi)>0$ it follows that $\lambda\geq KN/(N-1)$ which
yields the claim.
\end{proof}
Note that this estimate of the spectral gap is sharp. This can be seen
by considering the model space
\begin{align*}
X=(-\frac{\pi}{2}\sqrt{\frac{N-1}{K}},\frac{\pi}{2}\sqrt{\frac{N-1}{K}})\;,\quad d(x,y)=\abs{x-y}\;,\quad m(\mathrm{d} x)=\cos\left(x\sqrt{\frac{K}{N-1}}\right)^{N-1}\mathrm{d} x\;.
\end{align*}
The corresponding operator is given by
\begin{align*}
Lf(r)=f''(r) - \sqrt{K(N-1)}\tan\left(r\sqrt{K/(N-1)}\right)f'(r)
\end{align*}
with Neumann boundary conditions. By Proposition~\ref{prop:rcdkn-mfds}
the metric measure space $(X,d,m)$ satisfies ${\rcd^*(K,N)}$. It is well
known that the first non-zero eigenvalue of the Neumann problem
associated to $L$ is given by $KN/(N-1)$.
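As an illustrative numerical cross-check (our addition; the discretization scheme, the parameter values and the function name below are ours and not taken from the text), one can approximate the Neumann spectrum of $L$ via the equivalent self-adjoint problem $-(m u')'=\lambda\, m u$ with weight $m(x)=\cos(x\sqrt{K/(N-1)})^{N-1}$; since the weight vanishes at the endpoints, the Neumann condition is built into a finite-volume scheme:

```python
import numpy as np

# Illustrative finite-volume check (not part of the paper): the operator
#   L f = f'' - sqrt(K(N-1)) tan(r sqrt(K/(N-1))) f'
# with Neumann boundary conditions corresponds to the generalized
# eigenvalue problem -(m u')' = lam * m u with m(x) = cos(a x)^(N-1),
# a = sqrt(K/(N-1)).  The weight vanishes at the endpoints, so the
# zero-flux (Neumann) condition is automatic.
def first_neumann_eigenvalue(K=1.0, N=3.0, n=1000):
    a = np.sqrt(K / (N - 1))
    b = np.pi / (2 * a)                        # half-length of the interval
    h = 2 * b / n
    xc = -b + (np.arange(n) + 0.5) * h         # cell centres
    xf = -b + np.arange(n + 1) * h             # cell faces
    mc = np.cos(a * xc) ** (N - 1)             # weight at cell centres
    mf = np.cos(np.clip(a * xf, -np.pi / 2, np.pi / 2)) ** (N - 1)
    # stiffness matrix of u -> -(m u')' integrated over each cell;
    # boundary faces carry zero flux and are simply omitted
    A = np.zeros((n, n))
    for i in range(n):
        if i > 0:
            A[i, i] += mf[i] / h
            A[i, i - 1] -= mf[i] / h
        if i < n - 1:
            A[i, i] += mf[i + 1] / h
            A[i, i + 1] -= mf[i + 1] / h
    # symmetrize the generalized problem A u = lam * diag(mc*h) u
    d = np.sqrt(mc * h)
    B = A / d[:, None] / d[None, :]
    ev = np.sort(np.linalg.eigvalsh(B))
    return ev[1]                               # ev[0] ~ 0 (constants)

lam = first_neumann_eigenvalue()
print(lam)   # close to K*N/(N-1) = 1.5
```

For $K=1$ and $N=3$ the computed first non-zero eigenvalue is close to $KN/(N-1)=3/2$, in line with the sharpness discussion above.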
\section{Dirichlet form point of view}
\label{sec:dirichlet}
Up to now we have formulated our results in the setting of metric
measure spaces. Here the Cheeger energy, if assumed to be a quadratic
form, gives rise to a canonical Dirichlet form. In this final section
we take a different point of view and reformulate our results starting
from a Dirichlet form. The relation between the two points of view and
the compatibility of metric measure structures and energy structures
has been discussed extensively in \cite{AGS12} as well as in \cite{KZ13}.
Let $X$ be a Polish space and let $m$ be a locally finite Borel measure on
$X$. Let $\mathcal{E}$ be a strongly local Dirichlet form on $L^2(X,m)$ with
domain $D(\mathcal{E})$. Denote the associated Markov semigroup in $L^2(X,m)$
by $(P_t)_{t>0}$ and its generator by $\Delta$. Given a function $f\in
D(\mathcal{E})$ we denote by $\Gamma(f)$ the associated energy measure
defined by the relation
\begin{align*}
\int \phi\mathrm{d}\Gamma(f)~=~\mathcal{E}(f,f\phi) - \frac12\mathcal{E}(f^2,\phi)\quad\forall \phi\in D(\mathcal{E})\cap L^\infty(X,m)\;.
\end{align*}
If $\Gamma(f)$ is absolutely continuous w.r.t. $m$ we will also denote
its density by $\Gamma(f)$. The natural notion of a
(pseudo-)distance on $X$ associated to $\mathcal{E}$ is the intrinsic distance $d_\mathcal{E}$
defined by
\begin{align*}
d_\mathcal{E}(x,y)~:=~\sup\left\{\abs{f(x)-f(y)}\ :\ f\in D(\mathcal{E})\cap C(X), \Gamma(f)\leq m\right\}\;.
\end{align*}
For the sequel, assume that $d_\mathcal{E}$ is a finite, complete distance on $X$ inducing
the given topology, and assume that $( X, d, m, \mathcal{E} )$ is an upper
regular energy measure space in the sense of \cite[Def.~3.6,
Def.~3.13]{AGS12}.
\begin{corollary}\label{cor:Dirichlet}
Under the previous assumptions, the following are equivalent:
\begin{itemize}
\item[(i)] Assumption~\ref{ass:Ch-reg} and $\bl(K,N)$ hold,
i.e. for any $f\in D(\mathcal{E})$ with
$\Gamma(f)\leq m$ and $t>0$, $f$ is 1-Lipschitz and
\begin{align*}
\Gamma( P_tf) + \frac{1-\mathrm{e}^{-2Kt}}{NK}\abs{\Delta P_t
f}^2~\leq~\mathrm{e}^{-2Kt}P_t\Gamma(f)\;.
\end{align*}
\item[(ii)] $(X,d_\mathcal{E},m)$ is an ${\rcd^*(K,N)}$ space.
\end{itemize}
\end{corollary}
\begin{proof}
Under the assumptions on $d_\mathcal{E}$ and $\mathcal{E}$, it is shown in \cite[Thm.~3.14]{AGS12}
that $\mathcal{E}$ coincides with the Cheeger energy on
$(X,d_\mathcal{E},m)$. Thus $(X,d_\mathcal{E},m)$ is infinitesimally Hilbertian and
for any $f\in D(\mathcal{E})$ we have $\Gamma(f)\ll m$ with density
$\wug{f}^2$. The equivalence of (i) and (ii) then follows from
Theorems~\ref{thm:BEW2CDE}, \ref{thm:grad-est}.
\end{proof}
\begin{remark}
According to \cite[Cor.~2.3]{AGS12} conditions (i) and (ii) of the
previous result are in turn equivalent to the Bakry--\'Emery
inequality $\Gamma_2(f)\geq K\Gamma(f)+\frac{1}{N}(\Delta f)^2$ in
the form of $\be(K,N)$, see Definition~\ref{def:BE}.
\end{remark}
\medskip
\noindent
{\bf Note added in proof.}~~\it{
Since the first version of this article was posted on arXiv, several remarkable follow-up papers have appeared.
Garofalo and Mondino \cite{GM13} have established the Li--Yau
estimates on metric measure spaces satisfying
$\rcd^*(K,N)$. Contraction properties of the heat flow reflecting
dimensional effects have been exhibited by Bolley, Gentil and
Guillin \cite{BGG13}, their approach however being very different from ours,
based on a new transportation distance instead of the
$L^2$-Wasserstein distance. The concept of $(K,N)$-convexity has been adopted by Naber
\cite{Nab13} in the study of upper and lower Ricci bounds on
metric measure spaces and the relation with spectral gaps on the
associated path space.
The authors would also
like to mention the closely related, independent work in progress of
Ambrosio, Mondino and Savar\'{e} \cite{AMS13}, where partly similar
results as in the present article are obtained via a study of the
porous medium equation in metric measure spaces.
}
\bibliographystyle{plain}
\section{INTRODUCTION}
Localization is one of the important research topics concerning autonomous vehicles~\cite{ETHlandmark, ABLE}, robotics~\cite{Tang2019} and assistive navigation~\cite{KeyPosition, VisualLocalizer}. Generally, GNSS (global navigation satellite system) is the straightforward way to localize autonomous vehicles in urban areas. However, localization tends to fail in fade zones, such as streets flanked by high-rises, or under severe conditions, such as bad space weather. Fortunately, as a non-trivial sensing source, visual information is capable of providing localization tasks with sufficient place fingerprints. The proliferation of computer vision has spurred researchers to propose a great number of vision-based localization solutions~\cite{VPR_SURVEY, ETHlandmark, ABLE, Chemnitz, Bonn2019, FukuiITSC2018, MOLP}.
Given a query image, visual localization predicts the camera location by searching for the best-matching database image, i.e. the one featuring the largest similarity to the query image.
The most challenging and attractive aspect of visual localization is that appearance variations between query and database images affect the similarity measurement and thus impede the robustness of the algorithm. These appearance variations include illumination variations, season variations, dynamic-object variations and viewpoint variations.
The variations of illumination, season and dynamic objects have been thoroughly researched by the research community. On the contrary, viewpoint variations are highly related to camera FOV (field of view) , and are tough to tackle merely using ordinary cameras. Therefore, the expansion of FOV is essential to overcome viewpoint variations between query images and database images.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{System.pdf}
\caption{The schematic diagram of the proposed Panoramic Annular Localizer.}
\label{fig:System}
\end{figure}
Based on our preliminary research~\cite{KeyPosition, VisualLocalizer, OpenMPR}, we propose the visual localization system PAL (panoramic annular localizer), which utilizes a panoramic annular lens as the imaging sensor to capture omnidirectional images of the surrounding environment, and deep image descriptors to overcome various appearance changes. As shown in Fig.~\ref{fig:System}, the proposed PAL is composed of three phases: \textit{panoramic image processing}, \textit{deep descriptor extraction} and \textit{localization decision}.
The contributions of this paper are summarized as follows.
\begin{itemize}
\item In this paper, the panoramic annular lens is integrated into an outdoor visual localization system for the first time, which helps address the issue of viewpoint variations. Moreover, the car-mounted panoramic annular datasets captured in real-world scenarios are released for visual localization tasks.
\item Active image descriptors extracted from panoramic images are leveraged to measure the similarity between images featuring various appearance changes. The proposed active descriptor outperforms passive descriptors derived from the feature maps of convolutional neural networks.
\item The active deep descriptors are combined with a sequential matching scheme. The performance of the proposed PAL system is validated on public and self-captured panoramic datasets for outdoor visual localization.
\end{itemize}
\section{Related work}
In this section, the prevailing panoramic cameras and visual localization solutions based on those cameras are summarized.
\subsection{Panoramic Cameras}
Various imaging systems are eligible to capture panoramic or omnidirectional images, including \textit{multi-camera systems}, \textit{catadioptric cameras} and \textit{panoramic annular cameras}.
Multi-camera systems capture high-quality panoramic images, but different exposures between the cameras may result in illumination inconsistency in the panoramic image. A catadioptric camera usually consists of an ordinary camera and a curved reflective mirror above it.
Therefore, its mechanical structure is relatively complicated and its volume is large compared with other types of panoramic cameras.
Compared with those cameras, the panoramic annular lens~\cite{pal_Niu, pal_huang, pal_Zhou, pal_Luo, pal_huangX} features a simpler structure and lower cost. Through a special optical design, the lateral light is collected into the camera as much as possible by two reflections within the panoramic annular lens. In view of its integrated package, the panoramic annular lens features flexible installation and is free from occlusion caused by the camera's own structure, especially compared with catadioptric cameras. Apart from that, annular images discard the upper and lower parts of spherical images and focus on the intermediate part, which contains the important visual cues of places.
\subsection{Image Features}
Extracting robust features from images is the fundamental factor that impacts the performance of visual localization. The research community has focused on this topic for a long time.
As early research on vehicle visual localization, SeqSLAM~\cite{OpenSeqSLAM2.0} utilized the SAD (sum of absolute differences) of normalized image patches to measure the similarity between query and database images, which is not robust against various appearance variations.
Aggregating local features (such as SURF~\cite{SURF} and ORB~\cite{ORB}) into compact vectors, BoW (bag of words) plays a vital role in place recognition~\cite{OpenFABMAP, ORB-SLAM2}, and has become a popular place recognition approach in SLAM (simultaneous localization and mapping) systems. Unfortunately, in view of the limited descriptive capability of hand-crafted local features, BoW performs badly under illumination variations.
Apart from the hand-crafted descriptors mentioned above, the feature maps of prevailing CNNs (convolutional neural networks) are also leveraged as powerful image descriptors~\cite{Garden,VisualLocalizer}.
These ``passive'' deep descriptors cope with variation issues using only the limited pretrained knowledge derived from vanilla classification tasks, which results in sub-optimal localization performance compared with active deep descriptors. There are several works~\cite{netVLAD, activeCNN, netVLADpano} that fall into the category of active image descriptors, which are trained specially to robustify descriptor performance under appearance variations.
\subsection{Panoramic Visual Localization}
In order to tackle viewpoint variations, the research community has proposed various visual localization approaches to achieve robust place recognition. Murillo and Kosecka proposed place recognition using GIST descriptors extracted from panoramic images~\cite{gist_panoramas}. As one of the earliest attempts at the task, the proposed approach achieved good performance on a large-scale dataset, but little attention was paid to appearance variations. Based on the NetVLAD module, the image retrieval approach for panoramic images proposed in~\cite{netVLADpano} performed well on a street-view dataset. However, the algorithm was neither designed for localization problems nor validated on other panoramic datasets. Oishi et al.~\cite{SeqSLAM++} proposed view-based robot localization and navigation, where panoramic images are one modality of the multi-modal data. Hand-crafted image features and a sliding-window scheme were utilized for matching the panoramic images, which naturally performs worse when matching images with apparent variations.
\section{Methodology}
In this section, we elaborate on the proposed PAL system, a visual localization framework designed for the challenging variation issues of visual localization. The panoramic annular lens captures omnidirectional images with a $360^\circ$ FOV in a single frame, which makes it an effective camera for tackling viewpoint variations in visual localization. Apart from that, efficient deep image features are utilized to extract the place fingerprint embedded in the images, which boosts the robustness against appearance variations, such as illumination, season and dynamic-object variations.
\subsection{Preprocessing of panoramic annular images}
As the name suggests, panoramic annular images [e.g., the left image of Fig.~\ref{fig:ActiveDescriptor} (a)] are annular images that cover a $360^\circ$ field of view. In order to apply different kinds of feature extraction procedures to the panoramic images, the annular images are first unwrapped into rectangular images, as shown in the right image of Fig.~\ref{fig:ActiveDescriptor} (a).
\begin{figure}
\centering
\subfigure[]{
\includegraphics[width=\columnwidth]{unwrapPAL.pdf}
}
\subfigure[]{
\includegraphics[width=\columnwidth]{ActiveDescriptor.pdf}
}
\caption{The unwrapping process and feature extraction of panoramic annular images. (a) Unwrapping the panoramic annular image into the rectangular image. (b) The extraction procedures of the active deep descriptor from the panoramic image.}
\label{fig:ActiveDescriptor}
\end{figure}
The camera model of the panoramic annular lens is completely different from those of ordinary cameras, to which the pinhole model applies. Due to the wide FOV, the projection surface of the panoramic annular lens should be a curved surface (e.g., a sphere) instead of a plane. Fortunately, the panoramic annular lens can be taken as a single-view wide-angle imaging system, and the related camera models have been studied by the research community~\cite{CalibrationPAMI, CalibrationECCV2004, CalibrationIROS, CalibrationICRA2007}. In this paper, the OCamCalib~\cite{CalibrationIROS} toolbox is utilized to calibrate the panoramic annular lens and obtain its intrinsic parameters.
The unwrapping of the annular image is implemented according to the following mapping, where the point $(i,j)$ of the rectangular image corresponds to the point $(x,y)$ of the annular image:
\begin{equation}
x = y_c + \rho\sin{\theta}~~~~~~~~~
y = x_c + \rho\cos{\theta}
\end{equation}
\begin{equation}
\rho = R_{min} + \frac{R_{max}-R_{min}}{height}i~~~~~~~~~
\theta = \frac{2\pi j}{width}
\end{equation}
The center of the annular image $(x_c, y_c)$ is estimated by the calibration toolbox. Subsequently, the circular borders of the annular image are manually determined [see the double circles in the left image of Fig.~\ref{fig:ActiveDescriptor} (a)], and $R_{max}$ and $R_{min}$ are thus obtained. The FOV ratio of the panoramic annular lens determines the aspect ratio of the unwrapped rectangular image. The horizontal FOV of the panoramic annular lens is $360^\circ$, while the vertical FOV is determined by the projection model and the circular borders of the annular image. In this paper, the panoramic annular lens features a vertical FOV of $75^\circ$, and the aspect ratio of the unwrapped image is set to 4.8:1.
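The unwrapping mapping above can be sketched in a few lines of NumPy. This is a minimal nearest-neighbor rendering that follows the equations verbatim; the function and variable names are ours and are not part of the OCamCalib toolbox.

```python
import numpy as np

def unwrap_annular(annular, xc, yc, r_min, r_max, height, width):
    """Map a panoramic annular image to a rectangular panorama.

    Pixel (i, j) of the rectangular image samples point (x, y) of the
    annular image, with rho = r_min + (r_max - r_min) * i / height and
    theta = 2*pi*j / width, exactly as in the mapping above.
    Nearest-neighbor sampling is used here for simplicity.
    """
    i = np.arange(height).reshape(-1, 1)          # row index of output
    j = np.arange(width).reshape(1, -1)           # column index of output
    rho = r_min + (r_max - r_min) * i / height
    theta = 2.0 * np.pi * j / width
    # Source coordinates in the annular image, following the paper's equations.
    x = (yc + rho * np.sin(theta)).astype(int)
    y = (xc + rho * np.cos(theta)).astype(int)
    x = np.clip(x, 0, annular.shape[1] - 1)
    y = np.clip(y, 0, annular.shape[0] - 1)
    return annular[y, x]

# Tiny synthetic example: a 200x200 annular image, radii in [30, 90],
# unwrapped at roughly the paper's 4.8:1 aspect ratio.
img = np.random.rand(200, 200)
rect = unwrap_annular(img, xc=100, yc=100, r_min=30, r_max=90,
                      height=12, width=58)
print(rect.shape)  # (12, 58)
```

In practice bilinear interpolation would replace the nearest-neighbor lookup, but the index arithmetic stays the same.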
\subsection{Active deep image descriptor}
In this paper, we leverage NetVLAD~\cite{netVLAD} to describe the panoramic images effectively. The NetVLAD model is built upon a backbone network, which is usually pre-trained for a classification task on large-scale image datasets (e.g., ImageNet~\cite{ImageNet} or Places~\cite{places}). As shown in Fig.~\ref{fig:ActiveDescriptor} (b), ResNet-18~\cite{resnet} without its fully connected layers is utilized as the base network of the NetVLAD module.
As an analogue of the pooling layers of CNNs, NetVLAD pools a discriminative descriptor carrying the place fingerprint from the preceding feature map.
The pooling ability of NetVLAD is obtained in the training phase, where images with diverse appearances (e.g., different illumination, viewpoints and dynamic objects) but captured at the same place are leveraged as training data.
Specifically, the triplet loss function~\cite{netVLAD} impels the descriptor of the query image to be closer to those of positive database images than to those of negative images, which robustifies the adaptability of the descriptor under variations.
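The exact loss used to train NetVLAD is a weakly supervised ranking loss over mined positives and negatives; the sketch below shows the standard triplet margin form of the same idea, with illustrative descriptors and an illustrative margin value.

```python
import numpy as np

def triplet_margin_loss(q, pos, negs, margin=0.1):
    """Standard triplet margin loss: pull the query descriptor q toward
    the positive descriptor and push it away from each negative by at
    least `margin` (hinge per negative)."""
    d_pos = np.sum((q - pos) ** 2)
    loss = 0.0
    for n in negs:
        d_neg = np.sum((q - n) ** 2)
        loss += max(0.0, d_pos - d_neg + margin)
    return loss

q = np.array([1.0, 0.0])
pos = np.array([0.9, 0.1])
negs = [np.array([0.0, 1.0])]
print(triplet_margin_loss(q, pos, negs))  # 0.0: negative already far enough
```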
Similar to the training procedure in~\cite{netVLAD}, the Pittsburgh-30k dataset~\cite{netVLAD}, composed of images with ordinary FOVs, is utilized in the training phase.
Once trained, the network is leveraged to extract active deep image descriptors from panoramic images.
As shown in Fig.~\ref{fig:ActiveDescriptor} (b), each unwrapped panoramic image is split into four parts along the horizontal direction. Subsequently, the four sub-images, constituting a batch, are fed into the proposed deep network to obtain four NetVLAD vectors.
The active deep descriptor of the panoramic image is derived by adding (rather than concatenating) the four NetVLAD vectors, which is reasonable according to the principles of NetVLAD; meanwhile, the final descriptor maintains a compact size.
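The split-and-sum aggregation can be sketched as follows. Here `extract_netvlad` is a stand-in for the trained network (any function mapping an image to a fixed-size vector), and re-normalizing the summed vector is our assumption, not stated in the text.

```python
import numpy as np

def panoramic_descriptor(unwrapped, extract_netvlad, n_splits=4):
    """Split the unwrapped panorama horizontally into n_splits sub-images,
    describe each with the (trained) extractor, and sum the resulting
    vectors into one order-invariant panoramic descriptor."""
    h, w = unwrapped.shape[:2]
    parts = [unwrapped[:, k * w // n_splits:(k + 1) * w // n_splits]
             for k in range(n_splits)]
    desc = sum(extract_netvlad(p) for p in parts)   # addition, not concatenation
    return desc / np.linalg.norm(desc)              # re-normalize the sum

# Placeholder extractor: returns an 8-D vector regardless of input.
rng = np.random.default_rng(0)
fake_extractor = lambda img: rng.standard_normal(8)
d = panoramic_descriptor(np.zeros((75, 360)), fake_extractor)
print(d.shape)  # (8,)
```

Because addition is commutative, the descriptor is independent of the order of the sub-images, which is the property exploited later on the reverse-direction sequences.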
\subsection{Sequential localization decision}
The extracted deep descriptors are leveraged to measure the similarity between images, and thus to characterize the correspondence between query images and database images.
Herein, we define $ D$ as the distance matrix, where the element $ D_{i,j} $ is the cosine distance between the \textit{i}-th query image and the \textit{j}-th database image.
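For L2-normalized descriptors, this distance matrix can be computed with a single matrix product; the sketch below is a straightforward NumPy rendering of the definition.

```python
import numpy as np

def cosine_distance_matrix(Q, DB):
    """D[i, j] = cosine distance between the i-th query descriptor and
    the j-th database descriptor (rows of Q and DB, respectively)."""
    Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    Dn = DB / np.linalg.norm(DB, axis=1, keepdims=True)
    return 1.0 - Qn @ Dn.T

Q = np.array([[1.0, 0.0], [0.0, 1.0]])
DB = np.array([[1.0, 0.0], [1.0, 1.0]])
D = cosine_distance_matrix(Q, DB)
print(np.round(D, 3))
```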
Inspired by the offline cone-based searching proposed in~\cite{OpenSeqSLAM2.0}, online cone-based searching is executed on the distance matrix $ D $. As shown in Fig.~\ref{fig:3}, each query-database pair $(i,j)$ within the distance matrix is associated with two symmetrical cone regions, which are limited by the sequence length $ n_{q} $, the maximal velocity $ v_{max} $ and the minimal velocity $ v_{min} $. The online searching algorithm only makes use of ``past'' query images instead of ``future'' query images.
\begin{figure}[thpb]
\centering
\includegraphics[width=0.9\columnwidth]{cone_search.pdf}
\caption{Using cone-based sequential searching to score the matching correspondence of images.}
\label{fig:3}
\end{figure}
Within the regions, the number of best-matching pairs $ n_{match} $ is counted. A query descriptor and its nearest neighbor among the database descriptors compose a best-matching pair. The score of the query-database pair $(i,j)$ is defined as $ s_{i,j} = n_{match} / n_{q} $. Naturally, all the matching scores form a score matrix $ S $, where each query image corresponds to the database image with the highest score. In order to obtain the final place recognition results, the matching score of the best query-database pair is evaluated through window uniqueness thresholding~\cite{OpenSeqSLAM2.0} to rule out mismatched pairs.
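A minimal sketch of this online scoring is given below, under our reading of the cone geometry: the backward cone at offset $dq$ spans database indices between $j - v_{max}\,dq$ and $j - v_{min}\,dq$. The exact region construction in~\cite{OpenSeqSLAM2.0} may differ in detail, and the function name is ours.

```python
import numpy as np

def cone_score(D, i, j, n_q=10, v_min=0.4, v_max=2.5):
    """Score the candidate match (query i, database j) by counting how
    many of the last n_q queries have their nearest database neighbor
    inside the backward cone spanned by velocities [v_min, v_max]."""
    best = np.argmin(D, axis=1)               # nearest DB index per query
    n_match = 0
    for dq in range(min(n_q, i + 1)):         # only "past" queries
        q = i - dq
        lo, hi = j - v_max * dq, j - v_min * dq
        if lo <= best[q] <= hi:
            n_match += 1
    return n_match / n_q

# A distance matrix with a clean diagonal trajectory matches perfectly.
D = 1.0 - np.eye(12)
print(cone_score(D, i=11, j=11, n_q=10))  # 1.0
```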
\section{Experiments}
In this section, both public and self-collected datasets are used to evaluate the proposed PAL system. Firstly, the performance of the active deep descriptor was validated on the MOLP dataset~\cite{MOLP}. Secondly, the visual localization performance was tested in the field.
\subsection{Comparison between passive and active deep descriptors}
In the public panoramic dataset MOLP, four binocular cameras mounted on a vehicle were utilized to capture images in four different directions.
In this paper, the summer night images of city roads, captured in the reverse traversing direction, are used to evaluate the performance of deep descriptors, both passive and active.
If the index difference between the place recognition result and the ground truth is not larger than the tolerance (set to $ 5 $ in this paper), the result is counted as a $ TP $ (true positive). Otherwise, the positive result is counted as an $ FP $ (false positive). Moreover, if the ground truth is not empty but the query does not match any database image, the negative result is counted as an $ FN $ (false negative). The performance of localization is evaluated and analyzed in terms of the $ F_1 $ score:
\begin{equation}
F_1 = 2 \times (P \times R)/(P + R),
\end{equation}
where $ R = TP/(TP+FN) $ and $ P = TP/(TP+FP) $.
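For concreteness, the metric can be computed directly from the three counts; the example values below are illustrative.

```python
def f1_score(tp, fp, fn):
    """F1 from true-positive, false-positive and false-negative counts,
    as defined above (returns 0.0 when a denominator would vanish)."""
    p = tp / (tp + fp) if tp + fp else 0.0   # precision
    r = tp / (tp + fn) if tp + fn else 0.0   # recall
    return 2 * p * r / (p + r) if p + r else 0.0

print(round(f1_score(tp=40, fp=10, fn=10), 2))  # 0.8
```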
According to our previous research~\cite{VisualLocalizer}, the deep descriptors derived from GoogLeNet~\cite{GoogLeNet} pretrained on ImageNet~\cite{ImageNet} are the optimal choice for the visual localization task among the prevailing networks. Therefore, the five optimal feature maps of convolutional or pooling layers in GoogLeNet (listed in Table~\ref{table_example}) are selected as one baseline of passive descriptors. Moreover, the conv3 and fc6 layers of AlexNet~\cite{AlexNet} pretrained on ImageNet are used as another baseline of passive descriptors. Similar to the active descriptor extraction, the split images are fed into the network to obtain the feature maps of the designated layer, and the feature maps derived from the sub-images are flattened and concatenated to form the descriptor. The reverse traversing direction of the database and query sets is taken as prior knowledge for concatenating the partial descriptors in a consistent order of sub-images.
Among active deep descriptors, NetVLAD descriptors based on different backbone networks (i.e., AlexNet, VGG-16~\cite{vgg16} and ResNet-18) are compared to choose the optimal structure. The fully connected layers of the backbone networks are removed and the last convolutional layer is connected to the successive NetVLAD module. In this paper, all the NetVLAD networks are trained on the public Pittsburgh-30k dataset with the default training parameters~\cite{netVLAD}. In order to compare the various deep descriptors fairly, the input images fed into the different networks are universally resized to $ 224\times224 $. Meanwhile, different split numbers of panoramic images are also compared in the experiment, including four-part, two-part and one-part splits. Brute-force searching is utilized to find the nearest neighbor of the query descriptor as the best-matching result, which is evaluated with the $F_1$ score.
\begin{table}[h]
\caption{The localization performance ($F_1$ score) of different split number on MOLP dataset (In.=Inception)}
\label{table_example}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{2}{| c |}{$F_1$} & One-part & Two-part & Four-part \\
\hline
\multirow{2}{*}{AlexNet} & conv3&0.06&0.43&0.38\\
& fc6&0.03&0.18&0.11\\
\hline
\multirow{5}{*}{GoogLeNet} & In.3a/3$\times$3&0.04&0.29&0.26\\
& In.3a/3$\times$3\_reduce&0.07&0.27&0.12\\
& In.3b/3$\times$3\_reduce&0.06&0.51&0.37\\
& In.3a/pool\_proj&0.07&0.34&0.25\\
& In.5b/1$\times$1&0.09&0.32&0.06\\
\hline
\multicolumn{2}{| c |}{NetVLAD with AlexNet}&0.19& 0.29&0.28\\
\hline
\multicolumn{2}{| c |}{NetVLAD with VGG-16}&0.33& 0.51&0.50\\
\hline
\multicolumn{2}{| c |}{\textbf{NetVLAD with ResNet-18}}&\textbf{0.41}& \textbf{0.51}&\textbf{0.54}\\
\hline
\end{tabular}
\end{center}
\end{table}
The experimental results are shown in Table~\ref{table_example}. The proposed active descriptor, composed of the NetVLAD network with ResNet-18, achieves the best performance among the listed descriptors. The feature maps of AlexNet and GoogLeNet behave better under the two-part split, and are influenced substantially by the way of splitting. Comparatively, the descriptors derived from NetVLAD networks perform more stably across different split ways. Moreover, the superiority of the NetVLAD descriptors also lies in the way the sub-image descriptors are combined. Due to the principle of NetVLAD, the direct superposition of the four descriptors does not require knowing the traversing direction of the datasets.
\subsection{Performance on the real-world scenarios}
The performance of PAL is validated on the Yuquan dataset, which was captured at the Yuquan Campus of Zhejiang University, China. The panoramic images were captured by the car-mounted panoramic annular lens on a three-kilometer route, as shown in Fig.~\ref{fig:hardware}. The database sequence was captured on a sunny afternoon, while the query sequences were captured both in the afternoon (subset-1) and at dusk (subset-2). Meanwhile, GNSS data were also collected.
\begin{figure}[thpb]
\centering
\includegraphics[width=\columnwidth]{Hardware.pdf}
\caption{(a) The panoramic annular lens and the peripheral device are mounted on the top of the car. (b) The test route (denoted by yellow lines) covers around three kilometers.}
\label{fig:hardware}
\end{figure}
It is worthwhile to note that the database sequence does not completely overlap with the two query sequences. Unseen query images matched with a database image are denoted as false results. The ratio of false results among all positive results is the $ FR $ (false rate). The $ PR $ (positive rate) of localization is defined as the ratio of matching results whose distance to the ground truth is within the tolerance ($50$~m). In the experiment, the sequential matching parameters are set as $v_{min}=0.4$, $v_{max}=2.5$, $n_q=10$. As shown in Table~\ref{table_2}, the proposed PAL surpasses OpenSeqSLAM2.0~\cite{OpenSeqSLAM2.0} on the real-world dataset. Some localization results of the PAL system are shown in Fig.~\ref{fig:afternoon}. It can be concluded that the proposed approach is eligible to overcome appearance variations (especially illumination variations). Moreover, the panoramic camera mounted on the roof of the car naturally reduces the impact of dynamic objects (like pedestrians and cars) on the performance of visual localization.
\begin{table}[h]
\caption{The localization performance on Yuquan dataset}
\label{table_2}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{Algorithms }& \multicolumn{2}{c|}{Subset-1}&\multicolumn{2}{c|}{Subset-2}\\
\cline{2-5}
& $ FR$ & $ PR $ &$ FR $ & $ PR $ \\
\hline
OpenSeqSLAM2.0 &17.15\%&21.67\%& 23.81\%&41.31\%\\
\hline
PAL & \textbf{11.13\%}&\textbf{32.89\%}&\textbf{20.92\%}&\textbf{45.24\%}\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[thpb]
\centering
\subfigure[]{
\includegraphics[width=\columnwidth]{afternoon2.pdf}
}
\subfigure[]{
\includegraphics[width=\columnwidth]{dusk1.pdf}
}
\caption{The visual localization results of (a) the afternoon test set and (b) the dusk test set.}
\label{fig:afternoon}
\end{figure}
Regarding computational complexity, the network inference of the deep feature extraction phase consumes around $40$~ms on an NVIDIA GPU (GTX-1060). Thereby, the active deep descriptor features not only robust description capability but also real-time performance. As for the sequential matching phase, each image takes around $13$~ms to reach a place recognition decision. In brief, the proposed PAL system can perform in real time in practical scenarios.
\section{CONCLUSIONS}
In order to tackle the variation issues of visual localization, PAL (panoramic annular localizer) is proposed in this paper. We incorporate the panoramic annular lens and the active deep descriptor into the visual localization system. Firstly, the unwrapping of annular images is executed to prepare for the descriptor extraction phase, where the split panoramic images are fed into the NetVLAD network based on ResNet-18. The descriptors obtained from the sub-images are added together regardless of concatenation order, and cone-based matching is leveraged to robustify the single-frame retrieval results. The experiments on the MOLP dataset illustrate that the proposed active descriptor outperforms off-the-shelf deep descriptors. In the field test, the performance of the proposed system in practical scenarios is validated.
The code and dataset related to the proposed PAL system are publicly available at https://github.com/chengricky/PAL. In the future, scene understanding and pose estimation are planned to be researched based on this work.
\section*{ACKNOWLEDGMENT}
This work was supported by the State Key Laboratory of Modern Optical Instrumentation.
\bibliographystyle{IEEEtran}
\section*{Acknowledgments}
I am grateful to Daniel Schubring for useful discussions. This work is supported in part by DOE grant DE-SC0011842.